Is the inverted test pyramid really an anti-pattern?
I know that the inverted test pyramid, i.e., having more end-to-end tests than unit tests, is an anti-pattern.
However, I've started to wonder what the advantages could be of having fewer unit tests than higher-level tests: black-box tests that talk to the component under test through some API/protocol (SOAP, REST, etc.).
The only advantage I've found is when I migrate the component from one language to another but preserve the same component API/protocol.
In such a case, having high test coverage at the API level gives me much confidence that the migration went fine. Unit tests do not give this confidence, because they are tightly bound to the programming language: when I migrate the code under test from one language to another, I need to migrate the unit tests too, which means I might introduce bugs into the unit tests as well. Plus it takes more effort.
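To make the migration scenario concrete, here is a minimal sketch of such a black-box contract test in Python. The /reverse endpoint, its JSON payload, and the in-process stub service are all invented for illustration; the point is that the test touches the component only through HTTP and JSON, so it would keep working unchanged if the service behind base_url were rewritten in another language.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class StubService(BaseHTTPRequestHandler):
    # Stands in for the real component; during a migration, the same
    # contract test would simply point at the new implementation.
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = json.dumps({"reversed": body["text"][::-1]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):  # keep test output quiet
        pass

def reverse_via_api(base_url, text):
    # The test's only knowledge of the component: its HTTP/JSON contract.
    req = Request(base_url + "/reverse",
                  data=json.dumps({"text": text}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["reversed"]

server = HTTPServer(("127.0.0.1", 0), StubService)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"
assert reverse_via_api(base, "abc") == "cba"  # language-agnostic check
server.shutdown()
print("contract test passed")
```

Because nothing in the test references the implementation language, the same suite doubles as the safety net for the rewrite.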
Any other advantages of having more integration tests than unit tests?
test-design test-strategy
asked Feb 4 at 14:30 by dzieciou; edited Feb 5 at 12:27
6 Answers
Yes and No
More often than not, an inverted pyramid (ice cream cone) is an anti-pattern, but there are circumstances where it is not. Your example of needing to rebuild an API in a different language is one such case.
Some other circumstances where you might want to invert the pyramid include:
- You have an integration with a third-party API. Your lowest-level tests are really there to ensure that the API hasn't changed on you. Most of your tests are going to be end-to-end, to make sure your application correctly formats data for the API and correctly handles responses from it.
- You have an integration with a specialized physical device. The same basic principles apply, except that a lot of the end-to-end testing is likely to be manual, because specialized devices (like turnstiles, RFID-managed lockers, etc.) are not good candidates for automation, and simulators will not necessarily give completely accurate responses.
- You are testing communication between different applications/systems/APIs. If your primary focus is the communication between the different systems, then the pyramid will be inverted.
- You are working with legacy code. If you are working with legacy code, there may not be a choice in how you automate. Any stable legacy system is likely to have UI and processing logic tightly intertwined, making unit testing a challenging exercise, if not an impossible one.
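The first bullet above can be sketched as a tiny contract check. Everything here (fetch_quote, the quote payload, its fields) is hypothetical; the idea is just that the low-level test pins only the shape of the third-party response that the application depends on.

```python
# Minimal "has the third-party API changed on us?" check, in the spirit
# of the answer above. fetch_quote stands in for a real HTTP call.
def fetch_quote():
    # In reality this would hit the third-party service; the payload
    # below is an invented example response.
    return {"symbol": "ACME", "price": 123.45, "currency": "USD"}

# The only thing this low-level test asserts: the fields and types
# our application actually relies on.
EXPECTED_FIELDS = {"symbol": str, "price": float, "currency": str}

def test_contract_unchanged():
    payload = fetch_quote()
    for field, ftype in EXPECTED_FIELDS.items():
        assert field in payload, f"missing field: {field}"
        assert isinstance(payload[field], ftype), f"wrong type for {field}"

test_contract_unchanged()
print("contract ok")
```

Everything beyond this shape check is then left to the end-to-end tests, matching the inverted distribution the answer describes.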
answered Feb 4 at 17:00 by Kate Paulk
Indeed, your observation is correct.
Every level of checking aims to give you confidence in what is being checked: code units, service contracts, systems, etc.
Two generic observations we can make about the Test Pyramid are:
Precision increases as you "go down" the Pyramid: if a unit test fails, you probably know exactly where in the code the problem is - but you have no idea how it affects your system.
Reliability increases as you "go up" the Pyramid: if an end-to-end test fails, you surely have a bug in the system - but you don't know where.
Let's say you have a linked-list structure that has a bug when dealing with a very large number of elements. If you add a unit check, you will find this problem right away. However, if the UI never allows your user to reach such a situation, you may never have an end-to-end check that exercises it.
On the other hand, if you add an end-to-end check, you are indirectly exercising a manifold of possible unit checks. If an end-to-end check finds a problem, you will have to chase down the multiple problems your code may have; to "mirror" it with unit checks, you would have to create many of them.
I would say that the importance of a solid unit-checking base increases as your time-to-market decreases. If you must ship twice a day, you cannot rely on adding new end-to-end checks every day - you have to cover yourself with unit-level checks and fix the remaining gaps ASAP.
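The linked-list example above can be sketched as follows. The LinkedList implementation and the element count are invented, but the point stands: a unit-level check can drive the structure to a very large size directly, something an end-to-end check through the UI might never do.

```python
# Unit-level check exercising a linked list with far more elements
# than any UI path would ever produce.
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class LinkedList:
    def __init__(self):
        self.head = None
        self.size = 0

    def push(self, value):
        self.head = Node(value, self.head)
        self.size += 1

    def pop(self):
        node = self.head
        self.head = node.next
        self.size -= 1
        return node.value

def test_survives_many_elements():
    lst = LinkedList()
    n = 100_000  # deliberately far beyond realistic UI input
    for i in range(n):
        lst.push(i)
    for i in reversed(range(n)):  # LIFO: last pushed comes out first
        assert lst.pop() == i
    assert lst.size == 0

test_survives_many_elements()
print("unit check passed")
```

This runs in a fraction of a second, which is exactly why size- and boundary-driven checks belong at the unit level rather than the end-to-end level.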
answered Feb 4 at 16:19 by João Farias
Be careful what you believe. Take all these answers with a grain of salt, including mine.
Looking at one project, the unit-test plugin in Visual Studio shows me 38 unit tests, 5 complete module tests (we anticipate a lot more of these in the future), and 27 cross-module tests (these don't fit in João Farias' diagram), where the test harness loads three modules and stitches them together along with a mock backend that persists stuff in RAM, so that internal APIs actually change data and the test harness itself actually calls the read APIs and verifies the results.
This has its upsides and downsides. The most obvious upside is that, because of where the cutpoints are, this ends up being slightly easier to refactor than typical unit tests, and the coverage is excellent. The downside is that a single bug can easily produce a sea of red, and the developer will likely be debugging the tests to find out why.
We actually designed around being able to swap the persistent store out. That never happened for real, but the ability to do so has made it a lot easier to write these module-stitching tests. They cover about two-thirds as much stuff as the integration tests at something like one tenth of the time and ridiculously less setup cost.
But your mileage will vary. As for us, we went with this weird test level because we believed it got more bang for the buck. Each project is different. On an older project, all the surviving tests are integration tests where it actually creates an instance, actually starts it up, and the test harness actually drives the software. This is expensive and brittle and slow, but it was cheaper than trying to retrofit everything and does cover some of the SQL code. Somebody else wrote unit tests for that project. They're gone now. Nobody misses them. They didn't cover anything important.
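A rough sketch of the module-stitching idea, under the assumption (mine, not necessarily the author's exact design) that the modules are wired against a storage interface, so a RAM-backed fake can stand in for the persistent store. All class names are invented.

```python
# RAM-backed fake that replaces the persistent store in tests.
class InMemoryStore:
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)

class WriterModule:
    """One module: writes through the storage interface."""
    def __init__(self, store):
        self.store = store

    def record_order(self, order_id, item):
        self.store.save(order_id, item)

class ReaderModule:
    """Another module: reads through the same interface."""
    def __init__(self, store):
        self.store = store

    def order_item(self, order_id):
        return self.store.load(order_id)

# Cross-module test: stitch both modules over the same fake backend,
# so writes via one module are verified via reads through the other.
store = InMemoryStore()
WriterModule(store).record_order("A1", "widget")
assert ReaderModule(store).order_item("A1") == "widget"
print("cross-module test passed")
```

The design choice the answer hints at is that making the store swappable was what made these tests cheap to write, even though the swap never happened in production.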
"Blindly following best practices is not a best practice." – vsz, Feb 5 at 7:34
@vsz: My saying: "Best practices are not Turing-complete." – Joshua, Feb 5 at 14:30
Debating whether something is or isn't an anti-pattern is like arguing over whether a politician is stupid, or whether one religion is better than another: there is rarely a single right answer. Instead, you should think about the trade-offs between different shapes of test pyramids and how they apply to your situation. I'll mention a few trade-offs.
User-relevance vs diagnostic value. End-to-end tests model actual user interactions, so they can feel more relevant to whether the entire system works than unit tests. However, if an end-to-end test fails, you have little information about what caused the problem. So end-to-end tests don't provide a lot of diagnostic value.
Since a unit test covers something very specific, you can use the Venn diagram of which tests passed and which failed to deduce the root cause. Unit tests have a lot of diagnostic value. On the other hand, if your component depends on other components, you need to mock or otherwise simulate the interaction between your component and its dependencies. That requires making assumptions about how those other components behave. If your assumptions are wrong, your tests become less relevant. So you can't rely solely on unit tests to tell you whether a system actually works.
Cost to run. End-to-end tests probably take a lot longer to run than the full set of unit tests. That's because end-to-end tests often involve long, complicated setups and tear downs and, if anything in your system is asynchronous, lots of waiting. Unit tests run quickly. Everything else being equal, a test that runs quickly is more likely to be used than a test that runs slowly.
Maintenance cost. End-to-end tests tend to be more fragile and require more upkeep than unit tests. When you consider how to invest your limited test budget, you need to consider whether an end-to-end test's coverage justifies its higher maintenance cost.
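One way to picture the "wrong assumption in a mock" point above is the toy example below; format_greeting and both lookup functions are invented. The unit test passes against the fake, yet the integrated behaviour differs because the fake's assumption about the dependency was wrong.

```python
# Component under test: formats a greeting using a name-lookup dependency.
def format_greeting(lookup_name, user_id):
    return f"Hello, {lookup_name(user_id)}!"

# Unit test: we *assume* the directory returns a bare first name...
def fake_lookup(user_id):
    return "Ada"

assert format_greeting(fake_lookup, 7) == "Hello, Ada!"  # passes

# ...but suppose the real directory returns "Last, First". The unit
# test above still passes, while the integrated system produces
# something nobody intended:
def real_lookup(user_id):
    return "Lovelace, Ada"

print(format_greeting(real_lookup, 7))  # prints "Hello, Lovelace, Ada!"
```

This is exactly the relevance gap the answer describes: the unit test's diagnostic value is high, but its verdict is only as good as the assumptions baked into the mock.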
I've learned again that choosing a solution in IT is about knowing trade-offs. There is no universal pattern for all situations. Thank you for reminding me about that. – dzieciou, Feb 15 at 15:02
I wouldn't say there's an advantage to having fewer unit tests, but there can be advantages to having more manual/end-to-end tests and to not relying too heavily on unit tests. Obviously, given finite time, accepting fewer unit tests in exchange for better manual/end-to-end coverage can be the better choice.
Unit tests are good for regression testing, but they can all pass with 100% coverage and the application can still fail.
The big thing that unit tests do not cover is testing the design. I've worked on apps where the software functioned exactly as designed, but not as intended. This occurs where there were, e.g., inaccuracies or mistaken assumptions in the design.
There are also situations where unit tests pass, but the system fails in the real world due to, e.g., race conditions.
It's also easy to think every case is covered by a unit test, yet fail to cover edge cases or failure scenarios. Unit tests can give false confidence in these situations.
Also, unit tests do not test for UX issues - it's no use if the app works but the interface causes issues. This can even occur in non-UI situations, e.g. if a poorly written API spec causes users of the API to misuse it.
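The false-confidence point can be illustrated with a deliberately tiny example (the average helper is invented): the single test below achieves 100% line coverage, yet an edge case still fails.

```python
# A helper with full line coverage that still breaks on an edge case.
def average(values):
    return sum(values) / len(values)

# This one test executes every line: "100% coverage".
assert average([2, 4]) == 3.0

# ...but the empty-list failure scenario was never covered:
try:
    average([])
except ZeroDivisionError:
    print("edge case not handled")
```

Coverage measures which lines ran, not which inputs were tried, which is why it can coexist with an application that still fails.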
Any other advantages of having more integration tests than unit tests?
A more diamond-like pyramid is something that has been on my mind for a long time now. The main reason is that unit testing at the method level often results in a lot of mocks and in testing of implementation details. A lot of unit tests seem to make refactoring harder instead of easier.
I would vote for a higher level of "developer tests": tests that test the behaviour of the code, possibly combining multiple classes/components - more of an integration-level test. Still, they should be FAST, like milliseconds!
Do your tests help you to keep the codebase adaptable and of high structural quality? If yes, you're in a great place. If no, hmm... time to rethink your test automation strategy.
The anti-pattern is in the large group of manual and end-to-end tests, not in the integration tests. They are slow, expensive and fragile. If you want to keep your product nimble and your feedback cycles quick, this won't help. This is probably the case for most iterative product development models, where a fast continuous delivery cycle is needed to stay ahead of the competition. There are probably cases where the ice-cream-cone pyramid is perfectly valid.
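A minimal sketch of such a "developer test": it combines two real classes (invented here for illustration) with no mocks, asserts on behaviour rather than implementation details, and still runs in milliseconds.

```python
import time

class PriceList:
    """Real collaborator, not a mock: a simple lookup of item prices."""
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, item):
        return self._prices[item]

class Basket:
    """Exercised together with PriceList, behaviour-first."""
    def __init__(self, price_list):
        self._price_list = price_list
        self._items = []

    def add(self, item):
        self._items.append(item)

    def total(self):
        return sum(self._price_list.price_of(i) for i in self._items)

# The developer test: two real classes combined, no mocks, and it
# asserts only on observable behaviour (the total), not internals.
start = time.perf_counter()
basket = Basket(PriceList({"apple": 3, "pear": 5}))
basket.add("apple")
basket.add("pear")
assert basket.total() == 8
print("elapsed ms:", round((time.perf_counter() - start) * 1000, 3))
```

Because the test never touches private state or interaction order, refactoring either class's internals leaves it green, which is the adaptability the answer is after.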
Some articles on the Testing Diamond:
- https://leeorengel.com/testing-part-1/
- https://labs.spotify.com/2018/01/11/testing-of-microservices/
Indeed your observation is correct.
Every level of checking aims to give you confidence on what is being checked: Code unit, service contracts, systems, etc....
Two generic observation that we can have with the Test Pyramid is:
Precision increases when you "go down" the Pyramid: If a unit test fail, you probably know exactly where in the code you had a problem - but you have no idea how does this affect your system.
Reliability increases when you "go up" the Pyramid: If an end-to-end test fail, you surely have a bug on the system - but you don't know where.
Let's say you have a Linked List structure has a bug when dealing very large number of elements. If you add a unit check, you will find this problem right away. However, if you the UI never allows your user to reach such situation, you may never have an end-to-end check that will exercise it.
On the other hand, if you add an end-to-end check, you are indirectly exercising a manifold of possible unit checks. If you find a problem with an end-to-end check, you will have to chase down the multiple of problems your code may have - to "mirror" it with unit checks, you would have to create many unit checks.
I would say that the importance of a solid unit checking base increases when you time-to-market decreases. If you must ship twice a day, you cannot relay on adding new end-to-end checks every day - you have to try to cover yourself with unit level checks and fix the gaps left ASAP.
add a comment |
Indeed your observation is correct.
Every level of checking aims to give you confidence on what is being checked: Code unit, service contracts, systems, etc....
Two generic observation that we can have with the Test Pyramid is:
Precision increases when you "go down" the Pyramid: If a unit test fail, you probably know exactly where in the code you had a problem - but you have no idea how does this affect your system.
Reliability increases when you "go up" the Pyramid: If an end-to-end test fail, you surely have a bug on the system - but you don't know where.
Let's say you have a Linked List structure has a bug when dealing very large number of elements. If you add a unit check, you will find this problem right away. However, if you the UI never allows your user to reach such situation, you may never have an end-to-end check that will exercise it.
On the other hand, if you add an end-to-end check, you are indirectly exercising a manifold of possible unit checks. If you find a problem with an end-to-end check, you will have to chase down the multiple of problems your code may have - to "mirror" it with unit checks, you would have to create many unit checks.
I would say that the importance of a solid unit checking base increases when you time-to-market decreases. If you must ship twice a day, you cannot relay on adding new end-to-end checks every day - you have to try to cover yourself with unit level checks and fix the gaps left ASAP.
add a comment |
Indeed your observation is correct.
Every level of checking aims to give you confidence on what is being checked: Code unit, service contracts, systems, etc....
Two generic observation that we can have with the Test Pyramid is:
Precision increases when you "go down" the Pyramid: If a unit test fail, you probably know exactly where in the code you had a problem - but you have no idea how does this affect your system.
Reliability increases when you "go up" the Pyramid: If an end-to-end test fail, you surely have a bug on the system - but you don't know where.
Let's say you have a Linked List structure has a bug when dealing very large number of elements. If you add a unit check, you will find this problem right away. However, if you the UI never allows your user to reach such situation, you may never have an end-to-end check that will exercise it.
On the other hand, if you add an end-to-end check, you are indirectly exercising a manifold of possible unit checks. If you find a problem with an end-to-end check, you will have to chase down the multiple of problems your code may have - to "mirror" it with unit checks, you would have to create many unit checks.
I would say that the importance of a solid unit checking base increases when you time-to-market decreases. If you must ship twice a day, you cannot relay on adding new end-to-end checks every day - you have to try to cover yourself with unit level checks and fix the gaps left ASAP.
Indeed your observation is correct.
Every level of checking aims to give you confidence on what is being checked: Code unit, service contracts, systems, etc....
Two generic observation that we can have with the Test Pyramid is:
Precision increases when you "go down" the Pyramid: If a unit test fail, you probably know exactly where in the code you had a problem - but you have no idea how does this affect your system.
Reliability increases when you "go up" the Pyramid: If an end-to-end test fail, you surely have a bug on the system - but you don't know where.
Let's say you have a Linked List structure has a bug when dealing very large number of elements. If you add a unit check, you will find this problem right away. However, if you the UI never allows your user to reach such situation, you may never have an end-to-end check that will exercise it.
On the other hand, if you add an end-to-end check, you are indirectly exercising a manifold of possible unit checks. If you find a problem with an end-to-end check, you will have to chase down the multiple of problems your code may have - to "mirror" it with unit checks, you would have to create many unit checks.
I would say that the importance of a solid unit checking base increases when you time-to-market decreases. If you must ship twice a day, you cannot relay on adding new end-to-end checks every day - you have to try to cover yourself with unit level checks and fix the gaps left ASAP.
answered Feb 4 at 16:19
João FariasJoão Farias
2,668415
2,668415
add a comment |
add a comment |
Be careful what you believe. Take all these answers with a grain of salt, including mine.
I look at one project and the unit test plugin on Visual Studio is showing me 38 unit tests, 5 complete module tests (we anticipate a lot more of these in the future), and 27 cross-module tests (these don't fit in João Farias diagram) where the test harness loads three modules and stitches them together along with a mock backend that persists stuff in RAM so that internal APIs actually change data and the test harness itself actually calls the read APIs and verifies the results.
This has its upsides and downsides. The most obvious upsides are due to where the cutpoints are, this ends up being slightly easier to refactor than typical unit tests and the coverage is excellent. The downsides are that a single bug can easily produce a sea of red, and the developer will likely be debugging the tests to find out why.
We actually designed around being able to swap the persistent store out. That never happened for real, but the ability to do so has made it a lot easier to write these module-stitching tests. They cover about two-thirds as much stuff as the integration tests at something like one tenth of the time and ridiculously less setup cost.
But your mileage will vary. As for us, we went with this weird test level because we believed it got more bang for the buck. Each project is different. On an older project, all the surviving tests are integration tests where it actually creates an instance, actually starts it up, and the test harness actually drives the software. This is expensive and brittle and slow, but it was cheaper than trying to retrofit everything and does cover some of the SQL code. Somebody else wrote unit tests for that project. They're gone now. Nobody misses them. They didn't cover anything important.
7
"Blindly following best practices is not a best practice."
– vsz
Feb 5 at 7:34
@vsz: My saying: "Best practices are not Turning complete."
– Joshua
Feb 5 at 14:30
add a comment |
Be careful what you believe. Take all these answers with a grain of salt, including mine.
I look at one project and the unit test plugin on Visual Studio is showing me 38 unit tests, 5 complete module tests (we anticipate a lot more of these in the future), and 27 cross-module tests (these don't fit in João Farias diagram) where the test harness loads three modules and stitches them together along with a mock backend that persists stuff in RAM so that internal APIs actually change data and the test harness itself actually calls the read APIs and verifies the results.
This has its upsides and downsides. The most obvious upsides are due to where the cutpoints are, this ends up being slightly easier to refactor than typical unit tests and the coverage is excellent. The downsides are that a single bug can easily produce a sea of red, and the developer will likely be debugging the tests to find out why.
We actually designed around being able to swap the persistent store out. That never happened for real, but the ability to do so has made it a lot easier to write these module-stitching tests. They cover about two-thirds as much stuff as the integration tests at something like one tenth of the time and ridiculously less setup cost.
But your mileage will vary. As for us, we went with this weird test level because we believed it got more bang for the buck. Each project is different. On an older project, all the surviving tests are integration tests where it actually creates an instance, actually starts it up, and the test harness actually drives the software. This is expensive and brittle and slow, but it was cheaper than trying to retrofit everything and does cover some of the SQL code. Somebody else wrote unit tests for that project. They're gone now. Nobody misses them. They didn't cover anything important.
7
"Blindly following best practices is not a best practice."
– vsz
Feb 5 at 7:34
@vsz: My saying: "Best practices are not Turning complete."
– Joshua
Feb 5 at 14:30
add a comment |
Be careful what you believe. Take all these answers with a grain of salt, including mine.
I look at one project and the unit test plugin on Visual Studio is showing me 38 unit tests, 5 complete module tests (we anticipate a lot more of these in the future), and 27 cross-module tests (these don't fit in João Farias diagram) where the test harness loads three modules and stitches them together along with a mock backend that persists stuff in RAM so that internal APIs actually change data and the test harness itself actually calls the read APIs and verifies the results.
This has its upsides and downsides. The most obvious upsides are due to where the cutpoints are, this ends up being slightly easier to refactor than typical unit tests and the coverage is excellent. The downsides are that a single bug can easily produce a sea of red, and the developer will likely be debugging the tests to find out why.
We actually designed around being able to swap the persistent store out. That never happened for real, but the ability to do so has made it a lot easier to write these module-stitching tests. They cover about two-thirds as much stuff as the integration tests at something like one tenth of the time and ridiculously less setup cost.
But your mileage will vary. As for us, we went with this weird test level because we believed it got more bang for the buck. Each project is different. On an older project, all the surviving tests are integration tests where it actually creates an instance, actually starts it up, and the test harness actually drives the software. This is expensive and brittle and slow, but it was cheaper than trying to retrofit everything and does cover some of the SQL code. Somebody else wrote unit tests for that project. They're gone now. Nobody misses them. They didn't cover anything important.
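A minimal sketch of what such a module-stitching test might look like, with an in-memory fake standing in for the swappable persistent store. All module and method names here are hypothetical, not from the actual project:

```python
class InMemoryStore:
    """Fake persistent store: keeps everything in a dict in RAM."""
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)

class OrdersModule:
    def __init__(self, store):
        self._store = store

    def place_order(self, order_id, item):
        self._store.put(("order", order_id), {"item": item, "status": "placed"})

class BillingModule:
    def __init__(self, store):
        self._store = store

    def invoice(self, order_id, amount):
        # internal API reads what another module actually wrote
        assert self._store.get(("order", order_id)) is not None
        self._store.put(("invoice", order_id), {"amount": amount})

class ReportingModule:
    def __init__(self, store):
        self._store = store

    def order_summary(self, order_id):
        order = self._store.get(("order", order_id))
        invoice = self._store.get(("invoice", order_id))
        return {"item": order["item"], "billed": invoice["amount"]}

def test_order_flow_across_modules():
    store = InMemoryStore()            # the swappable backend makes this cheap
    orders = OrdersModule(store)
    billing = BillingModule(store)
    reporting = ReportingModule(store)
    orders.place_order("o1", "widget")
    billing.invoice("o1", 9.99)
    # the harness calls the read API and verifies the stitched result
    assert reporting.order_summary("o1") == {"item": "widget", "billed": 9.99}
```

Because the fake keeps everything in RAM, this runs in microseconds while still exercising real data flow across three modules.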
Be careful what you believe. Take all these answers with a grain of salt, including mine.
I look at one project and the unit test plugin on Visual Studio is showing me 38 unit tests, 5 complete module tests (we anticipate a lot more of these in the future), and 27 cross-module tests (these don't fit in João Farias diagram) where the test harness loads three modules and stitches them together along with a mock backend that persists stuff in RAM so that internal APIs actually change data and the test harness itself actually calls the read APIs and verifies the results.
This has its upsides and downsides. The most obvious upsides are due to where the cutpoints are, this ends up being slightly easier to refactor than typical unit tests and the coverage is excellent. The downsides are that a single bug can easily produce a sea of red, and the developer will likely be debugging the tests to find out why.
We actually designed around being able to swap the persistent store out. That never happened for real, but the ability to do so has made it a lot easier to write these module-stitching tests. They cover about two-thirds as much stuff as the integration tests at something like one tenth of the time and ridiculously less setup cost.
But your mileage will vary. As for us, we went with this weird test level because we believed it got more bang for the buck. Each project is different. On an older project, all the surviving tests are integration tests where it actually creates an instance, actually starts it up, and the test harness actually drives the software. This is expensive and brittle and slow, but it was cheaper than trying to retrofit everything and does cover some of the SQL code. Somebody else wrote unit tests for that project. They're gone now. Nobody misses them. They didn't cover anything important.
answered Feb 5 at 0:34
Joshua
"Blindly following best practices is not a best practice."
– vsz
Feb 5 at 7:34
@vsz: My saying: "Best practices are not Turing complete."
– Joshua
Feb 5 at 14:30
Debating whether something is or isn't an anti-pattern is like arguing over whether a politician is stupid, or whether one religion is better than another: there is rarely a single right answer. Instead, you should think about the trade-offs between different shapes of test pyramids and how they apply to your situation. I'll mention a few trade-offs.
User-relevance vs diagnostic value. End-to-end tests model actual user interactions, so they can feel more relevant to whether the entire system works than unit tests. However, if an end-to-end test fails, you have little information about what caused the problem. So end-to-end tests don't provide a lot of diagnostic value.
Since a unit test covers something very specific, you can use the Venn diagram of which tests passed and which failed to deduce the root cause. Unit tests have a lot of diagnostic value. On the other hand, if your component depends on other components, you need to mock or otherwise simulate the interaction between your component and its dependencies. That requires making assumptions about how those other components behave. If your assumptions are wrong, your tests become less relevant. So you can't rely solely on unit tests to tell you whether a system actually works.
Cost to run. End-to-end tests probably take a lot longer to run than the full set of unit tests. That's because end-to-end tests often involve long, complicated setups and teardowns and, if anything in your system is asynchronous, lots of waiting. Unit tests run quickly. Everything else being equal, a test that runs quickly is more likely to be used than a test that runs slowly.
Maintenance cost. End-to-end tests tend to be more fragile and require more upkeep than unit tests. When you consider how to invest your limited test budget, you need to consider whether an end-to-end test's coverage justifies its higher maintenance cost.
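To illustrate the wrong-assumption risk, here is a hedged sketch (all names hypothetical): a unit test that passes against a mocked dependency while encoding an assumption the real dependency may not honour.

```python
from unittest.mock import Mock

class Checkout:
    def __init__(self, tax_service):
        self._tax = tax_service

    def total(self, net):
        # encodes an assumption: rate_for returns a fraction such as 0.25
        return net * (1 + self._tax.rate_for("DK"))

def test_total_with_mocked_tax_service():
    tax = Mock()
    tax.rate_for.return_value = 0.25   # our assumption about the dependency
    assert Checkout(tax).total(100) == 125.0   # passes

# If the real tax service actually returns percentages (25 rather than
# 0.25), this unit test stays green while the deployed system computes
# nonsense totals. Only a test against the real dependency catches the
# wrong assumption.
```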
I've learned again that choosing a solution in IT is about knowing trade-offs. There is no universal pattern for all situations. Thank you for reminding me about that.
– dzieciou
Feb 15 at 15:02
answered Feb 8 at 15:25
user246
I wouldn't say there's an advantage to having fewer unit tests, but there can be advantages to having more manual/end-to-end tests and to not relying too heavily on unit tests. Since time is finite, this means that writing fewer unit tests in exchange for better manual/end-to-end coverage can be the better trade-off.
Unit tests are good for regression testing, but they can all pass with 100% coverage and the application can still fail.
The big thing that unit tests do not cover is testing the design. I've worked on apps where the software functioned exactly as designed, but not as intended. This happens when there are, e.g., inaccuracies or mistaken assumptions in the design.
There are also situations where unit tests pass, but the system fails in the real world due to, e.g., race conditions.
It's also easy to think every case is covered by a unit test, yet fail to cover edge cases or failure scenarios. Unit tests can give false confidence in these situations.
Also, unit tests do not catch UX issues: it's no use if the app works but the interface causes problems. This can even occur in non-UI situations, e.g. if a poorly written API spec causes users of the API to misuse it.
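A tiny illustration of that false confidence, using a made-up `safe_ratio` helper: the suite is green with 100% line coverage, yet an obvious input still crashes.

```python
def safe_ratio(a, b):
    return a / b          # bug: no guard for b == 0

def test_safe_ratio():
    assert safe_ratio(10, 2) == 5.0   # executes every line: 100% coverage

# The suite is green with full line coverage, yet the first real-world
# call with b == 0 raises ZeroDivisionError. Coverage measures which
# lines ran, not which inputs were considered.
```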
answered Feb 8 at 12:54
Dan W
Any other advantages of having more integration tests than unit tests?
A more diamond-like pyramid is something that has been on my mind for a long time now. The main reason is that unit testing at the method level often results in a lot of mocks and a lot of testing of implementation details. Such unit tests seem to make refactoring harder instead of easier.
I would vote for a higher level of "developer tests": tests that test the behaviour of the code, possibly combining multiple classes/components, more of an integration-level test. Still, they should be FAST, as in milliseconds!
Do your tests help you to keep the codebase adaptable and of high structural quality? If yes, you're in a great place. If no, hmm... time to rethink your test automation strategy.
The anti-pattern is in the large group of manual and end-to-end tests, not the integration tests. They are slow, expensive and fragile. If you want to keep your product nimble and your feedback cycles quick, this won't help. This is probably the case for most iterative product development models, where a fast continuous delivery cycle is needed to stay ahead of the competition. Still, there are probably cases where the ice-cream-cone shape is perfectly valid.
Some articles on the Testing Diamond:
- https://leeorengel.com/testing-part-1/
- https://labs.spotify.com/2018/01/11/testing-of-microservices/
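As a sketch of such a behaviour-level "developer test" (class names are made up, not from any real project): two real classes exercised together through their public API, no mocks, still running in well under a millisecond.

```python
class DiscountPolicy:
    """Real collaborator, not a mock."""
    def discount_for(self, item_count):
        return 0.25 if item_count >= 3 else 0.0

class ShoppingCart:
    def __init__(self, policy):
        self._policy = policy
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        net = sum(self._prices)
        return net * (1 - self._policy.discount_for(len(self._prices)))

def test_bulk_discount_behaviour():
    # behaviour under test: three or more items earn the bulk discount
    cart = ShoppingCart(DiscountPolicy())
    for price in (10.0, 10.0, 10.0):
        cart.add(price)
    assert cart.total() == 22.5
```

Because the test talks only to the cart's public API, the internals of either class can be refactored freely without touching the test.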
answered Feb 16 at 14:51
Niels van Reijmersdal