Friday, May 29, 2009

Really Dumb Tests

When I mention the idea of automated unit testing (or developer testing, as J.B. Rainsberger calls it in his book JUnit Recipes), people either love it or, more likely, are put off because the tests they want to write are too hard, and the tests they can write seem too simple to be of much value. With the exception of tests that "test the framework" you are using, I don't think one should worry prematurely about tests that are too simple to be useful; I've seen people (myself included) spin their wheels on a coding error that could have been caught by one of these "simple" tests.

A common example is the Java programmer who accidentally overrides hashCode() and equals() in such a way that two items which are equal do not have the same hash code. This causes mysterious behavior when you add items to a collection and try to find them later (and you don't get a "HashCodeNotImplementedCorrectly" exception). True, you can generate hashCode() and equals() with your IDE, but it's trivial to test for this, even without a framework that does the hard work for you.
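For example, a test of this contract takes only a few lines. The Account class here is a hypothetical stand-in for whatever domain object defines its own equality:

```java
// A minimal sketch of a hashCode/equals contract check. Account and its
// "id" field are made up for illustration.
import java.util.HashSet;
import java.util.Set;

class Account {
    private final String id;
    Account(String id) { this.id = id; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Account)) return false;
        return id.equals(((Account) o).id);
    }

    // A correct hashCode() must agree with equals(): equal objects
    // must return equal hash codes.
    @Override public int hashCode() { return id.hashCode(); }
}

public class HashCodeContractTest {
    // Returns true when two equal objects share a hash code and can be
    // found again in a HashSet; a broken hashCode() fails here.
    static boolean contractHolds() {
        Account a = new Account("acct-42");
        Account b = new Account("acct-42");
        if (!a.equals(b)) return false;
        if (a.hashCode() != b.hashCode()) return false;
        Set<Account> accounts = new HashSet<>();
        accounts.add(a);
        return accounts.contains(b);
    }

    public static void main(String[] args) {
        if (!contractHolds())
            throw new AssertionError("equals/hashCode contract broken");
        System.out.println("contract holds");
    }
}
```

If someone later edits equals() without updating hashCode(), this test fails immediately instead of leaving you to debug a HashSet that mysteriously "loses" items.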

So when I start working with a system that needs tests, I try not to worry too much about how "trivial" the tests seem initially. Once I was working on an Interactive Voice Response application that we would test by dialing in to a special phone number. We'd go through the prompts, and about halfway into our test we'd hear "syntax error." While I really wanted to write unit tests around the voice logic, we didn't have the right tools at the time, so I tried to see if I could make our manual testing more effective.

This IVR system was basically a web application with a phone instead of a web browser as a client, and VXML instead of HTML as the "page description" language. A voice node processed the touch-tone and voice input and sent a request to a J2EE server that sent back Voice XML which told the voice node what to do next. During our testing the app server was generating the VXML dynamically and we'd sometimes generate invalid XML. This is a really simple problem, and one that was both easy to detect, and costly when we didn't find it until integration testing.

I wrote a series of tests that made requests with various request parameter combinations, and checked whether the generated Voice XML validated against the DTD for VXML. Basically:

String xmlResponse = appServer.login("username", "password");
try {
    // validate the generated VXML against the DTD
    XMLUtils.validate(xmlResponse);
} catch (Exception e) {
    fail("Invalid VXML: " + e.getMessage());
}
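XMLUtils above was a project-specific helper. As an illustration, a self-contained version of the same idea can be written with the JDK's built-in XML parser. This sketch only checks that the response is well-formed XML; full DTD validation would additionally call setValidating(true) and install an ErrorHandler:

```java
// A self-contained sketch of the "validate the generated XML" test,
// using only the JDK. Checks well-formedness, not DTD validity.
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class VxmlWellFormedTest {
    // Returns true if the string parses as well-formed XML.
    static boolean isWellFormed(String xml) {
        try {
            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            builder.parse(new ByteArrayInputStream(
                xml.getBytes(StandardCharsets.UTF_8)));
            return true;
        } catch (Exception e) {
            return false; // malformed response caught at build time
        }
    }

    public static void main(String[] args) {
        if (!isWellFormed("<vxml version=\"2.1\"><form/></vxml>"))
            throw new AssertionError("valid document rejected");
        // Unclosed <form> element: the default parser logs a
        // "[Fatal Error]" to stderr and throws, which we catch.
        if (isWellFormed("<vxml><form></vxml>"))
            throw new AssertionError("malformed document accepted");
        System.out.println("well-formedness check works");
    }
}
```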

Initially, people on the team dismissed this test as useless: they believed they could write code that generated valid XML. Once the test started to pick up errors as it ran during the Integration Build, the team realized the value of the test in increasing the effectiveness of the manual testing process.

Even though people are careful when they write code, it's easy for a mistake to happen in a complex system. If you can write a test that catches an error before an integration test starts, you'll save everyone a lot of time.

I've since found a lot of value in writing tests that simply parse a configuration file and fail if there is an error. Without a test like this, the problems manifest in some seemingly unrelated way at runtime. With a test, you know the problem is in the parsing, and you also know what you just changed. Also, once a broken configuration is committed to your version control system, you slow down the whole team.

If your application relies on a configuration resource that changes, take the time to write some sanity tests to ensure that the app will load correctly. These are, in essence, Really Dumb smoke tests. Having written these dumb tests, you'll feel smart when you catch an error at build time instead of when someone else is trying to run the code.
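A sketch of such a sanity test, assuming the app reads a Java properties file with a required server.port key (the key and values here are made up for illustration):

```java
// A minimal configuration sanity test: parse the configuration and fail
// fast if a required setting is missing or malformed.
import java.io.StringReader;
import java.util.Properties;

public class ConfigSanityTest {
    // Parses the configuration and verifies the required "server.port"
    // key, so a broken file fails here rather than in some seemingly
    // unrelated way at runtime.
    static int loadPort(String config) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(config));
        String port = props.getProperty("server.port");
        if (port == null)
            throw new AssertionError("server.port missing from configuration");
        return Integer.parseInt(port); // fails fast on a non-numeric value
    }

    public static void main(String[] args) throws Exception {
        int port = loadPort("server.port=8080\nserver.host=localhost\n");
        System.out.println("configuration parses; port=" + port);
    }
}
```

In a real project the test would load the actual file from version control, so that committing a broken configuration breaks the build instead of slowing down the whole team.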

Tuesday, May 26, 2009

SCM as a Change Enabler for Agile Teams

Brad Appleton, Robert Cowham and I wrote an article for the May issue of CM Crossroads that suggests that testing more and branching less can help your team use Software Configuration Management to enable change.

Read Keep the Change.

Monday, May 25, 2009

IDEs in March

In March I wrote an article for CM Crossroads arguing that as long as a team and its individuals are productive, there isn't a lot of sense in imposing a standard IDE on a team. Organizations sometimes go overboard with standards. As long as a developer is productive and doesn't cause problems for others, why should anyone care what tools she uses? I end the article:
There is a difference between consistency in important things, which is valuable, and conformity, which is often mistaken for consistency. Focus on delivering consistent results, and respect that a team will know how to get there.

I've been thinking a fair amount about how to balance IDE and build configurations, since this seems to be a problem teams struggle with often, though it is getting better as IDEs can model their project settings on the information in Maven POM files and the like.

Read Beware the IDEs.

Monday, May 18, 2009

Accidental Simplicity

Agile software developers favor simple designs that solve immediate problems over feature-rich frameworks that provide functionality you may not use. The reason we agile people believe this is the right approach is that building in extensibility adds cost, and spending resources (time and money) on something that may not be used is wasteful.

The approach of focusing on simplicity and shorter time horizons works well on agile teams because agile engineering practices such as unit testing and refactoring make it easier to evolve code when it needs to change. Without this agile infrastructure, teams can fall into the trap of code not changing because change is risky and what was done first needs to be preserved. Working with the values of doing the Simplest Thing That Could Possibly Work, YAGNI (You Aren't Gonna Need It), and avoiding BDUF (Big Design Up Front) can help you build the right thing more quickly. The challenge is how to find a simple solution, as simplicity doesn't always happen by design. And it's important to remember that "simple" does not mean "no design," nor does a "simple solution" necessarily mean a solution that does less.

Here are some things I try to keep in mind when looking for a simple, agile, solution to a problem:
  • To discover a simple solution, it's worth thinking through at least three options. Even if your first one is the clear winner, taking a small amount of time to consider the problem may lead you to a better, more flexible solution.
  • Clear separation of design concerns leads to more testable, simpler code. If it's difficult to write a unit test for the code that adds some functionality, maybe there is a simpler solution.
  • Simple design can be flexible design. Often the solution that is simplest to implement and test is the one that lends itself to extension.
While simple and flexible are not always correlated, it's important not to toss aside what you know about good design when you are trying to do the "simplest thing..." Sometimes following good design and testability principles can lead you to a simple design, almost by accident.
