Our QA department has entertained pretty hearty debate over the years on the value of exploratory testing (not necessarily following test cases) vs rote testing (prescribed pass/fail testing executed in test cases), and we keep coming back to the opinion that both are a necessary part of the job.
Rote testing vs exploratory testing
We consider rote testing any testing wherein testers follow prescribed steps with well-defined pass/fail criteria. Rote testing is extremely valuable as a way to catch regressions and verify that new features work as designed. It is performed in near-identical ways each acceptance testing cycle across all supported platforms. It's the careful, measured, scientific approach. Think Dexter of Dexter's Lab. Rote testing's primary disadvantage is that it can result in automaticity and inadvertently squash testers' creativity. Following prescribed steps that must be completed in a specific timeframe can also discourage testers from diving deep into issues.
We define exploratory and play testing as testing wherein testers imagine various scenarios and put the app to the test. Exploratory testing may or may not make use of more traditional step-by-step test cases. It is a better way to get into the mind of a user (who may not behave as we expect), a malicious user (who may be trying to get the software to break), or even just the user's cat (who may sit on the keyboard and cause an unrecoverable failure). Exploratory testing varies wildly by tester and cycle, and is more of a wild card. It is more akin to a theatre or dance class - there is structure, but part of the trick is to "see what happens": think DeeDee of Dexter's Lab. It can be hard to quantify or compare exploratory testing results over time, but it is invaluable for shaking up assumptions.
In short, rote testing helps us check our assumptions and plans, while exploratory testing helps us see what might happen beyond those assumptions and plans. If you're going to do rote testing, test cases are critical. These regimented, detailed tests simplify onboarding new testers, provide a framework for possible future automation efforts, and, when a robust testing application is used, offer insight into the general health of your releases. Without reproducible test cases, it's next to impossible for testers to say exactly what they have done after the fact, and deeply difficult to define regressions consistently.
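To make that concrete, here is a minimal sketch (in Python, with an invented preferences-dialog example and made-up field names; this is not our actual tooling) of how a rote test case's prescribed steps and pass/fail criteria can be captured as structured data. Capturing the same shape every time is what makes onboarding, health reporting, and possible future automation tractable.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str    # what the tester does
    expected: str  # the pass/fail criterion for this step

@dataclass
class TestCase:
    title: str
    platforms: list[str]  # rote tests run near-identically on every supported platform
    steps: list[Step] = field(default_factory=list)

# Hypothetical example: verifying a preferences dialog opens and persists a change.
prefs_dialog = TestCase(
    title="Preferences dialog: open, change a setting, persist",
    platforms=["Windows", "macOS", "Linux"],
    steps=[
        Step("Open Preferences from the application menu",
             "The Preferences dialog appears with all documented options visible"),
        Step("Toggle 'Show line numbers' and click Save",
             "The dialog closes without errors"),
        Step("Reopen Preferences",
             "'Show line numbers' is still enabled"),
    ],
)

# A prescribed run is just walking the steps in order and recording pass/fail.
for step in prefs_dialog.steps:
    print(f"DO:     {step.action}")
    print(f"VERIFY: {step.expected}\n")
```

Even if steps like these are never automated, writing them down this precisely is what lets a new tester execute the same case the same way on every platform, every cycle.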
Get your test cases reviewed
While we're constantly adjusting and (we hope) improving our processes, our current preferred method of writing test cases requires us to get buy-in from both Design and Engineering. We use the designs and specifications provided by those departments to write test cases based on how things are expected to work. We then get together with the designer(s) responsible to make sure our understanding of the functionality matches theirs. If we get their thumbs up, great! If our understanding does not yet match theirs, chances are high that the documentation is still not clear enough, and we've essentially uncovered a "bug" even before any development has taken place. This design review also gives QA a chance to ask questions about corner-case behavior and error paths, as well as any user flow we are unclear on. Ideally, this gives Design some early feedback and helps them either increase their confidence in the materials they've provided or iterate on the design based on that feedback.
Once Design approves our test case(s), we update the test case status to "Engineering Review", and repeat the process with our developers. Here, we’ll uncover any changes to the design that were made based on technical necessity or scope reduction requirements on the engineering side. In a perfect world, Design will already know about these plans and we can note where behavior will change in our test cases. If Design has not yet heard about a change, we will bring them back into the conversation to make sure that all departments are on the same page. During engineering review, we may also uncover unexpected or changed functionality based on incomplete or competing understandings of the designs and specifications, which provides us an opportunity to evaluate our communication channels.
Once a test case has passed both Design and Engineering review, we change its status to "Approved". We currently consider test case creation to be part of a feature's delivery as opposed to a "nice to have" extra. Of course, an Approved test case is never truly finished; we may need to edit it later as functionality changes or expands in future releases.
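For readers who like to see the flow spelled out, the review lifecycle above is effectively a short, linear state machine. The sketch below (Python, purely illustrative; our statuses live in our test management tool, and "Draft" is shorthand for this example rather than an official status) shows the progression from initial write-up to Approved.

```python
from enum import Enum

class ReviewStatus(Enum):
    DRAFT = "Draft"                             # QA writes the case from designs and specs
    DESIGN_REVIEW = "Design Review"             # Design confirms the expected behavior
    ENGINEERING_REVIEW = "Engineering Review"   # Engineering flags technical or scope changes
    APPROVED = "Approved"                       # part of the feature's delivery, still editable later

# Each stage hands off to the next; Approved ends the review chain.
NEXT_STAGE = {
    ReviewStatus.DRAFT: ReviewStatus.DESIGN_REVIEW,
    ReviewStatus.DESIGN_REVIEW: ReviewStatus.ENGINEERING_REVIEW,
    ReviewStatus.ENGINEERING_REVIEW: ReviewStatus.APPROVED,
}

status = ReviewStatus.DRAFT
while status is not ReviewStatus.APPROVED:
    status = NEXT_STAGE[status]
    print(f"Test case moved to: {status.value}")
```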
Doesn’t that create a mountain of work?
It does create some up-front work early in the development cycle, but the potential benefits of strong test cases outweigh those costs; accurate, up-to-date test cases reduce churn, help us avoid developing unclear or misunderstood features, and greatly speed up onboarding of new testers.
Think of creating test cases after a feature has already been delivered as a form of tech debt that has to be addressed at the worst possible time: during the test cycle. Not only do you need to pull information from other departments, forcing them to context switch away from their current work, but you also slow down the speed at which QA can get results to developers during crunch time. Creating a test case alongside implementation of a new feature is less work because the feature is fresh in everyone's minds; there's no need for development or design to switch gears and reorient when reviewing. Additionally, bugs found during test creation surface much earlier in the development process than bugs found after release or during acceptance testing (just prior to a release). This gives everyone more time, under less pressure, to course correct.
In QA, our confidence in our test results is directly correlated with our confidence in our test cases, as well as the confidence other departments have in them. In a prior iteration of our process, QA's tests were siloed away in a separate test management system that other departments did not have access to. This placed the burden of trust squarely on our designers and developers; when QA said something "passed", they had to believe QA had actually tested and confirmed the functionality worked as they expected. We recently moved to a JIRA plugin for test management so that our test cases are visible to everyone in the company. We're hoping this will increase our accountability for accurate, up-to-date test cases, and so far that seems to be the case. The plugin also allows us to cross-reference test cases and tickets to better track when changes are needed, as constant iteration on existing features means test cases can quickly fall out of date.
Finally, new features require new documentation not just from QA but also from customer-facing departments, which need usage documentation and potentially marketing materials. If our test cases are accurate and approved by both Design and Engineering, those departments can have high confidence in them, and in many cases can borrow our test cases and related images to build their customer-facing documentation.
What about scope creep?
A problem that frequently comes up with rote testing is scope creep: if you add new test cases for every feature, you end up steadily increasing the time it takes to test a product for release. This is a real concern, and we have decided to put streamlining methods in place to prevent excessive bloat in our test cases. This topic warrants its own post, so stay tuned for more here!
All work and no play makes Jack a dull boy
So how do you write accurate, detailed, and reliable test cases while still encouraging the creativity required for exploratory testing? We attempt to do this in a few different ways. One, we dedicate time during our acceptance testing cycle to exploratory testing. Two, we attempt to write our test cases in a way that encourages play. For example, a test case for a preferences menu might list all of the preference options and encourage a tester to mix and match. (More on this in the upcoming post!) We are also experimenting with exploratory test cases, which are intended to serve as mnemonics or jumping-off points for creative testing. Like rote tests, these can be tracked within our test management system. The always-tricky balance in test case creation is finding a way to write tests with steps that are clear and easy to follow, but not so over-detailed that you get the tester's version of "highway hypnosis".
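As a small illustration of what a mix-and-match prompt can look like, here is a sketch (Python, with invented preference names; this is not our actual process or tooling) that turns a list of preference options into randomized exploratory prompts, giving testers a jumping-off point without prescribing every step.

```python
import itertools
import random

# Hypothetical preferences; a real list would come from the approved design spec.
preferences = {
    "theme": ["light", "dark", "high contrast"],
    "autosave": ["off", "every minute", "every 5 minutes"],
    "language": ["English", "French", "Japanese"],
}

# Every combination of settings, built once...
all_combos = [dict(zip(preferences, values))
              for values in itertools.product(*preferences.values())]

# ...then a few are sampled as exploratory prompts, different ones each session.
for combo in random.sample(all_combos, k=3):
    settings = ", ".join(f"{name}={value}" for name, value in combo.items())
    print(f"Try {settings}; then resize the window, go offline, and reopen the app.")
```

The point is not the script itself but the shape of the prompt: enough structure that the session can be tracked in the test management system, enough freedom that the tester still gets to "see what happens".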