Writing a unit test can increase the cost of a change (since you're writing both the code and the test), but the cost is relatively low thanks to good frameworks, and the benefits outweigh the costs:
- The unit test documents how to use the code that you're writing,
- The test provides a quicker feedback cycle while developing functionality than, say, running the application, and
- The test ensures that changes that break the functionality will be found quickly during development so that they can be addressed while everyone has the proper context.
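To make the first two points concrete, here's a minimal sketch in Python. The business rule and function name (`discounted_price`) are hypothetical; the point is that the tests are cheap to write with a framework like pytest and double as documentation of how the code is meant to be used:

```python
# Hypothetical business rule: members get 10% off.
def discounted_price(price, is_member):
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if is_member else price

# pytest-style tests: quick to run while developing, and they
# document the expected usage for the next reader.
def test_member_gets_ten_percent_off():
    assert discounted_price(100.0, True) == 90.0

def test_non_member_pays_full_price():
    assert discounted_price(100.0, False) == 100.0
```

Running these takes a fraction of a second, versus starting the application and clicking through to the relevant screen.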
On one project I worked on, the team was extremely disciplined about doing Test Driven Development. Along with unit tests, there were integration tests that tested the display aspects of a web application. For example, a requirement that a color be changed would start with a test that checked a CSS attribute, or a requirement that two columns in a grid be swapped would result in a test that made assertions about the rendered HTML.
The test coverage sounded like a good idea, but from time to time a low-cost (5-minute), low-risk change would take much longer (an hour) as tests would need to be updated and run, and unrelated tests would break. And in many cases the tests weren't comprehensive measures of the quality of the application: I remember one time when a colleague asserted that it wasn't necessary to run the application after a change, since we had good test coverage, only to have the client inquire about some buttons that had gone missing from the interface. Also, integration-level GUI tests can be fragile, especially if they are based on textual diffs: a change to one component can cause an unrelated test to fail. (Which is why isolated unit tests are so valuable.)
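A small sketch of why diff-based assertions break this way (the markup here is invented for illustration): an unrelated wrapper is added around a component, and a snapshot-style string comparison fails even though the component under test is unchanged.

```python
# Hypothetical rendered output before and after an unrelated change
# (a toolbar wrapper was added elsewhere on the page).
old_html = '<div><button id="save">Save</button></div>'
new_html = '<div class="toolbar"><div><button id="save">Save</button></div></div>'

# A textual-diff ("snapshot") assertion fails on the unrelated change:
snapshot_matches = (old_html == new_html)   # False

# A targeted assertion about the component survives it:
save_button_present = '<button id="save">Save</button>' in new_html  # True
```

(The substring check is itself a simplification; a real test would parse the markup. The point is that asserting on the piece you care about is far less fragile than diffing the whole page.)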
I suspect the reasons for the high cost/value ratio for these UI-oriented tests had a lot to do with the tools available. It's still a lot easier to visually verify display attributes than to automate testing for them. I'm confident that tools will improve. But it's still important to consider cost in addition to benefit when writing tests.
- Integration (especially GUI) tests tend to be high cost relative to value.
- When in doubt, try to write an automated test. If you find that maintaining the tests, or their execution time, adds a cost out of proportion to the value of a functionality change, consider another approach.
- GUI tests can be high cost relative to value, so focus on writing code where the view layer is as simple as possible.
- If you find yourself skipping GUI testing for these reasons, be especially careful about writing unit tests at the business level. Doing this may drive you to cleaner, more testable, interfaces.
- Focus automated integration test effort on key end-to-end business functionality rather than visual aspects of an application.
I think it's really easy to see how important testing is in software development once you look at it from a real-world perspective as my friend did here: http://app.arat.us/blog/?p=159
These are good points, and I too struggle with testing visual components. On the one hand, it seems like there should be some basic automated test which ensures that the feature visually does what it's supposed to -- e.g. show the "Edit" button if the user has permission, hide it otherwise. While this test is easy to do by hand, it feels like a very simple automated test would save a lot of QA inspection time. On the other hand, if the feature could only break when that area of the system is being touched, then perhaps a simple exploratory test is reasonable.
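One way to get most of that coverage without a GUI test is to pull the permission rule out of the view so it can be unit tested directly. This is a hypothetical sketch (the `can_edit` helper and role names are invented), illustrating the "unit tests at the business level" idea from the post:

```python
# Hypothetical permission rule, separated from the template so the
# logic can be unit tested without rendering any HTML.
def can_edit(user_roles):
    return "editor" in user_roles or "admin" in user_roles

# The view layer then only decides *whether* to render the button,
# leaving almost nothing for a GUI test to catch.
def edit_button_html(user_roles):
    return '<button id="edit">Edit</button>' if can_edit(user_roles) else ""
```

With the rule tested in isolation, a quick exploratory check that the button actually renders may be all the GUI verification that's left.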
I expect the answer is related to another question: "How often do failures occur? When changing related code? When changing unrelated code?" If unrelated code frequently breaks the feature, then there's probably a higher-level design problem, and lots of automated tests are covering up a design smell.
Dan's comments are right on the mark. With respect to automated tests for core functionality, the problem is figuring out how to test meaning without relying too much on unrelated form. For example, making assertions about an XML document using text diffs is fragile, and not really meaningful -- the client may not care about spaces between elements, for example, but making assertions about the content of elements is useful. With UI testing too many tools use assertions that are sensitive to aspects of the presentation that don't affect function.
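The XML point can be shown in a few lines with Python's standard-library parser (the documents here are made up): two documents that differ only in whitespace fail a text comparison, while assertions on element content pass for both.

```python
import xml.etree.ElementTree as ET

# Two hypothetical documents differing only in incidental whitespace:
a = "<order><item>widget</item><qty>2</qty></order>"
b = "<order>\n  <item>widget</item>\n  <qty>2</qty>\n</order>"

# A text diff flags a difference the client doesn't care about:
texts_equal = (a == b)   # False

# Asserting on the *content* of elements ignores the formatting:
ta, tb = ET.fromstring(a), ET.fromstring(b)
same_item = ta.findtext("item") == tb.findtext("item")  # True
same_qty = ta.findtext("qty") == tb.findtext("qty")     # True
```

The same principle applies to UI assertions: test the meaning (the element and its content), not the incidental form.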
The comment about evaluating the frequency of failures reminds me that the quality of a testing system is related to things other than number of tests or even code coverage. Tests should drive you to make your code better (we can argue what "better" means in another context :)) Fragile tests can expose a fragile architecture. Which is good if we then fix the underlying problem.