When you think of Quality Assurance, do you picture an army of testers who come in at the tail end of a project and either approve the work or send development back to their desks with more assignments? Or does QA make you think of a teammate who works side-by-side with development and design, helping provide feedback at every stage of application development?
QA departments often accidentally become release gatekeepers instead of analysts – adversaries to development as opposed to allies.
This is partly because the meat and potatoes of QA's role in software development, and the part most easily explained, is acceptance testing, which happens toward the end of the development cycle. It's highly visible and easy to bring into focus. (At SpiderOak, acceptance testing is the week of testing right before a release is scheduled, when all updates have been integrated into a single package, just as they would be for end users.) Acceptance testing is easy to over-focus on because it is extremely analytical, very repeatable, and the simplest aspect of the job to describe. Though a good QA team will make an effort to create new acceptance tests for new features, legacy functionality typically has the most robust test cases, and QA is always going to be more familiar with long-standing functionality. Because of this, acceptance testing tends to skew toward regression testing, which adds to the feeling that QA is only there to review and approve work, not to work alongside developers throughout the entire process.
There's also industry precedent for QA playing a gatekeeper role rather than a teammate one. In classic waterfall testing, the QA team gets a release candidate after a period of development and tests the whole thing in one giant effort, instead of reviewing small pieces and approving them for inclusion in the final result (which is then itself tested fully to verify that all new and old features work together in harmony). Picture a relay race where each person's timer starts at the passing of the baton: the total time for the race is the sum of each runner's time. Design passes to Development, who passes to Quality Assurance. It's neat and organized. But it has flaws.
The waterfall model for QA testing is outdated and can hold back an otherwise Agile team. Waterfall QA places QA as gatekeepers at the very end of the release process, effectively siloing them from the design and development steps where they could identify vague requirements, disconnects between design and development, and other process issues long before those issues have grown from seeds into trees. QA can and should be included in the design and initial unit tests of a software product developed with an Agile method. Including QA staff early in the process lets developers and designers get feedback much earlier: in an ideal scenario, within a day or so of pushing their ideas and changes. This early feedback drastically reduces the overall cost of fixing bugs, reworking non-ideal features, or resolving usability concerns.
A release cycle under Waterfall practices might look something like this:
Design (10 days) > Development (10 days) > QA (10 days)
An Agile process tries to mitigate cascading delays that might result from any given stage of the race. An Agile release cycle may look more like this:
Design (10 days) <> QA (2 days)
> Dev (10 days) <> QA (2 days)
> QA (6 days)
In the second example, any bugs turned up during handoff and feedback between departments are discovered closer in time to the original work, which means less context switching when issues do come up and less chance of cascading delays. It also means portions of the work can be parallelized, turning a hypothetical 30-day project into a 28-day one. Those time savings add up.
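The arithmetic behind the two timelines can be spelled out explicitly. One way to reach the 28-day figure (the specific overlap assumption here is illustrative, not prescribed by any methodology) is to let the first 2-day QA pass run concurrently with the design stage while the second one follows development serially:

```python
# Waterfall: design, dev, and QA run strictly in series.
waterfall_total = 10 + 10 + 10

# Agile (one illustrative reading): a 2-day QA pass overlaps the
# 10-day design stage, dev runs 10 days, a second 2-day QA pass
# follows dev serially, then 6 days of acceptance testing remain.
agile_total = 10 + 10 + 2 + 6

print(waterfall_total, agile_total)  # prints: 30 28
```

The total QA effort is the same in both cases; the savings come purely from overlapping part of it with earlier stages.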
Still better is when QA can pair with developers before the code is even written. QA can be a valuable resource on specifications: asking questions, suggesting avenues for more robust checks, and even playing the role of an imaginary end-user who challenges assumptions that arise during the development process. QA as a support organization can help engineers cultivate a “QA mindset” (by way of example, think about how difficult it is to proofread your own writing, and how you can develop strategies over time to separate yourself enough from your work to see it from a third party perspective), which encourages everyone to share responsibility for the overall quality of the product.
In an Agile QA environment, QA is introduced to new features in the design stage, well before development occurs. Designers can use QA to challenge their own assumptions about user experience and the intuitiveness of various features. QA can, in turn, use the original designs to draft rough test cases that gain robustness alongside the feature. This reduces the pressure on QA to develop ad hoc test cases for the initial release, which often results in verifying behavior as delivered rather than as designed.
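As a concrete sketch of what such draft cases might look like (the feature and the requirements in the comments are invented for illustration, not taken from any real design document), test cases drafted from a design can start life as skipped stubs in Python's `unittest` style, each mapped to a stated requirement, and be filled in as the feature solidifies:

```python
import unittest

class SharedLinkDesignDraft(unittest.TestCase):
    """Rough cases drafted from a (hypothetical) design document,
    written before any code exists. Each stub maps to one stated
    requirement and is skipped until the feature is implemented."""

    def test_link_expires_after_configured_ttl(self):
        # Design doc requirement: "links expire after a user-set TTL"
        self.skipTest("feature not implemented yet")

    def test_revoked_link_is_inaccessible(self):
        # Design doc requirement: "owners can revoke a link at any time"
        self.skipTest("feature not implemented yet")
```

Writing the stubs this way means the test suite tracks the design from day one: when a stub can no longer be skipped, it verifies behavior as designed, not merely as delivered.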
When QA is pulled into the entire development cycle, not just the end, the time between a given unit of work and feedback on that work shrinks. This shift also increases communication between departments, which is another way to weed out complications early in the process. Agility matters for your entire development team, including QA. Ultimately, the product better matches the stated benchmarks and designs, and the testing can speak directly to that.