Sunday, November 28, 2010

What Agile QA Really Does: Testing Requirements

Teams transitioning to agile struggle with understanding the role of QA. Much of what you read about agile focuses on developer testing. Every project I've worked on that had good QA people had higher velocity and higher quality. And there are limits to what developer unit testing can cover.

In an "ideal" agile environment a feature goes from User Story, to Use Case, to implementation, with tests along the way. You should be fairly confident that by the time you mark a story done, it meets the technical requirements you were working from. If someone is testing your application after this point, and the developer tests are good, they should be testing something other than the code, which has already been tested.

It's important that your QA team not become the place where "testing is done"; that mindset will sidetrack an agile project very quickly. Instead, the QA team should be testing (ideally in collaboration with developers) the things the team decided developers could not test completely. In practice, problems also slip through developer tests, and the QA team is then effectively testing the developer tests, giving feedback on the kinds of things those tests need to catch better.

Aside from "catching the errors that slipped through," a QA team doing exploratory testing is also exploring novel paths through the application, and may discover problems along some of those paths. In this case the team and stakeholders need to have a conversation about whether the error is something that users will see and care about, or something that isn't worth fixing.

In effect, exploratory testing is testing the requirements for completeness.

If you have a QA team, and they are doing exploratory testing, they are really testing:
  • Requirements: Finding interactions between features and components that were not defined or understood when coding and developer testing began. 
  • Developer Tests: Identifying where developer tests were not as good as they could have been (and how they could be better). This is a good use of automated "integration" tests.
  • System tests that might be hard to run in a developer context. These might be the one set of tests that are the primary domain of QA.
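As a sketch of what turning a QA finding into an automated "integration" test might look like, consider two features that each work in isolation but interact badly. The order/discount/tax scenario and all the function names below are hypothetical, invented purely to illustrate the pattern; the point is that once QA uncovers an undefined interaction, the team can pin the agreed-on behavior down in a test.

```python
import unittest

# Hypothetical checkout logic. Each function was "done" and
# unit-tested on its own; the interaction between them was not.
def apply_discount(total, percent):
    return total * (1 - percent / 100)

def apply_tax(total, rate):
    return total * (1 + rate / 100)

def checkout_total(subtotal, discount_percent, tax_rate):
    # The interaction under test: the team decided discount is
    # applied before tax, so customers aren't taxed on the discount.
    return apply_tax(apply_discount(subtotal, discount_percent), tax_rate)

class DiscountTaxInteractionTest(unittest.TestCase):
    def test_discount_applied_before_tax(self):
        # $100 order, 10% discount, 5% tax: 100 * 0.90 * 1.05 = 94.50
        self.assertAlmostEqual(checkout_total(100.0, 10.0, 5.0), 94.50)

if __name__ == "__main__":
    unittest.main()
```

Once a test like this exists, the interaction QA found stops being tribal knowledge and becomes a regression check that runs with every build.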
And, as I've said in other places, a QA team can be extremely valuable testing user stories and requirements before they make it into a sprint backlog, by providing feedback about testability and precision. The QA team can also add value by testing how useful a product that meets the specification actually is.

Of course, this is an idealization. But if you are disciplined about doing developer testing, you can still get a lot of value from exploratory testing, whether the people doing it are dedicated QA people or people who are filling the role.

Monday, November 22, 2010

Risks of Manual Integration Testing in the Context of Rapid Change

You have probably come across a situation like this: it's close to a release deadline. The QA team is testing. Developers are testing and fixing problems, and everyone is focused on getting the best product they can out the door on time. During this time, you may notice that someone on the QA team, working late, has found an interesting problem. They clearly spent a lot of time investigating it, identifying the expected results and the details of why what's happening is wrong. If this is a data-intensive application, there may well be SQL queries included to let you pinpoint the issue quickly. In short, an ideal problem report.

Except for one thing. Your team found and fixed the problem hours before.  And the effort to find and document the problem could have been spent on something else.

It's hard to avoid this kind of overlap when you don't have a complete end-to-end automated testing process. And it's probably impossible (for now, anyway) to create an automated test process that completely replaces exploratory testing. Any manual testing process has to balance timeliness with the "flow" of the testers. If you update a deployment hourly, you'll reduce the risk of redundant bug reports, but the testers will experience too many interruptions. If instead you code, deploy, and stop coding until you see the next set of issues, you waste time and probably reduce quality. The answer is better communication between the testers and developers about the state of the application. Under ordinary circumstances, you might have this sort of exchange in your daily scrum.

If you're still early in adopting agile, you'll likely have a week or two at the end of a project where more effort goes into manual testing, and that manual testing will find errors. So the simple thing to do is to check with the team before spending too much time documenting an issue.

Some of the options are:
  • Searching the issue tracking system. This sounds like a good idea, but sometimes it's hard to find the right query. And it's possible that something got fixed without an issue being filed.
  • Asking. Calling out a question, sending an email, or posting a message in a team chat room gives you the benefit of being able to be a bit vague in your query.
I tend to prefer the "ask the room" approach. But this isn't effective when you're working in different time zones, or at different hours. On the other hand, since software development is a collaborative activity, you'll need to address the question of how you collaborate across time. (The simple answer is to avoid close coupling between activities that are likely to happen when the teams are on different schedules.)

Situations like the one I described can happen. But it's worth figuring out how often they happen, and whether there are simple ways to reduce their incidence and impact. Good communication channels are important, and sometimes the lower-tech ones work better.

Sunday, November 14, 2010

To Scrum, Prepare

Agile methods have some sort of daily all-team checkpoint meeting as part of  the process. The idea behind the Daily Scrum (Scrum) or Daily Standup (XP) is good:  replace status meetings (or someone walking around asking about status) with one short daily meeting where everyone has a chance to communicate about what they are doing and what they need help with. This ensures that there is at least one chance each day for everyone to understand the big picture of the project, and to discover unexpected dependencies.

But just having everyone in the room doesn't make for an effective, focused scrum. You need to be prepared. Once I was on a team where the scrums started going off track. They took longer. People's updates were often "I don't remember what I did yesterday," or they became long, unfocused rambles that didn't convey much information. I suggested that we all take a few minutes before Scrum to organize our thoughts. This got a lot of resistance: "It feels like a pre-meeting meeting, and with Scrum we're supposed to spend less time in meetings."

While Daily Scrums are meant to be lightweight, it's respectful of everyone else's time to think about what's worth sharing with the team. Most days you might just be working on one thing, in which case a quick glance at the Scrum board might be enough. But if you want to do what's best for your team, why not take two minutes before Scrum (either in the morning, or even the day before) to jot down what you want to share with the team, addressing the questions:

  • What did I do yesterday?
  • What do I plan to do today?
  • What were my roadblocks?

Starting each day with a clear picture in your head of the answers to those questions is probably not a bad thing from a professional development perspective anyway.

Sure, everyone will have off days where they don't get around to this, but if your Scrums are losing focus frequently, consider:

The Daily Scrum (or standup) is a useful tool for being agile and responsive, but just being in the room does not mean that you are having a Scrum.