Writing test cases is the most underestimated yet most effective requirements analysis tool. Nothing reveals hidden risks and problems in the requirements like the question “How are we going to test this?” Test cases are software development’s safety net. Not if they are poorly written, though; then they only feed the misconception that writing test cases is a waste of time and a bottleneck.
Here are my 13 tips on writing powerful test cases that build trust in testing.
#1 Test cases must be easy to understand
The first rule of thumb for writing a test case is that (almost) anyone should be able to understand its purpose, its importance, how to prepare for it, how to execute it and what the expected result is. When writing a test case, always imagine yourself on your first day at work. Could you execute this?
Always add all the necessary information about the test execution and the verification of the expected results. For example, if a log record must be checked, add instructions on how to access the log. If those instructions don’t exist yet, write them and link to them from the test case.
If the expected result must be checked in the database, attach the query to the test case, so the next person doesn’t have to write it again.
#2 Test cases must be easy to execute
Efficient – The test case must be written in a clear and straightforward manner so that manual execution is not slow, error-prone or tedious.
Effective – A test case must enable quick execution and traceability of the performed steps and the test data used, so that when a bug is found, its cause can be easily deduced from the test.
Reliable – The test case must always provide the same results when there are no regressions or modified behaviour requiring modification of the test itself.
Automatable – Also, always have in mind how this test case can be automated. Will it change the way it’s written?
#3 Test cases must be independent from each other
Independent test cases are a tricky practice, but it’s rule number one when it comes to automated tests.
For manual tests, though, here’s how to avoid the confusion: Use preconditions and shared steps. If the result of a test case can be used as input to another test case, mark the last step (or the entire test case) as a “shared step”. This way, instead of sequencing the test cases, you can indicate that the precondition to Test Case A is the X, which can be achieved if you execute Test Case B.
The first automation rule, mentioned before, is that each test must create its own test data and then destroy it afterwards, leaving the system in a neutral state unaffected by the test execution.
The second automation rule is that tests must be possible to execute in parallel. Imagine the 2104 tests you need to run on every push to the main branch, most of which are unit tests. Some need 10-20 seconds to run. That’s an hour. While typically that’s insanely good test performance, imagine that they depend on each other, and the 486th one fails… You’ve wasted 20 minutes not knowing what’s wrong, because all the rest failed too. Now imagine you can run them independently in parallel: instead of an hour, the run takes 20 minutes, and everything passes except three tests. That narrows the reasoning right down and makes troubleshooting effective, not to mention how it feels to see 3 tests failed by that exact change vs 1618 RED.
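The two automation rules above can be sketched with a pytest fixture. This is a minimal sketch under assumptions: the `OrderRepository` class here is a hypothetical in-memory stand-in for the real system under test (a real suite would hit an API or database).

```python
import pytest

class OrderRepository:
    """Hypothetical stand-in for the system under test."""
    def __init__(self):
        self._orders = {}

    def create(self, order_id, total):
        self._orders[order_id] = total

    def delete(self, order_id):
        self._orders.pop(order_id, None)

    def get(self, order_id):
        return self._orders.get(order_id)

@pytest.fixture
def repo():
    return OrderRepository()

@pytest.fixture
def order(repo):
    # Rule 1: the test creates its own data...
    repo.create("ORD-1", 42)
    yield "ORD-1"
    # ...and destroys it afterwards, leaving the system in a neutral state.
    repo.delete("ORD-1")

def test_order_total(repo, order):
    # Rule 2: because the test owns its data and shares nothing,
    # it can run in any order and in parallel with the others.
    assert repo.get(order) == 42
```

Because no test relies on leftovers from another, a parallel runner such as pytest-xdist (`pytest -n auto`) can freely split the suite across workers.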
#4 Test cases must be literate, thorough, but compact
The quality of the test cases is the second factor that demonstrates the tester’s skills after the bug description.
In whatever language you write, make sure the text is well written and appropriate for a test case. Keep it focused, straightforward and compact. Do not use filler words.
Think of the test case as a diagram or a flowchart written in text. It should still be thorough and clear; the reader shouldn’t have to guess anything.
#6 If a step doesn’t make a difference, don’t write it.
Review and optimise your test cases as often as necessary, removing steps that change the way the test case is executed or distract from the original intent without being relevant to the expected result.
#7 If a step is the same for every case, extract it.
Do not repeat yourself. Every copy/paste is a red flag that you’re accumulating waste. If the same information lives in more than one place, then when the time comes to update it, you’ll have to update it in multiple places. Nothing is more certain to cause errors.
This rule applies to everything in life and work.
Extract the repeating information to one single place and link or associate its consumers.
#8 Always keep in mind the expected result.
A test is a waste of time if you don’t have expectations for it. Even in exploratory testing you have expectations: you want to find issues, problems, failures, inconsistencies, non-conformities, and anything else that bothers you as a tester and that might bother the user.
The expected result is your testing North Star. Why are you testing this? What do you expect to find? You don’t expect that it will work. Nothing works until proven with testing.
If you enter the testing with “everything seems to work”, you’re sabotaging yourself.
So when you write the test case, always ask yourself what the expected result is of each step and of the test case as a whole. Each step and each piece of test data must exploit the various ways the system might fail to meet expectations.
The ratio of positive to negative test cases is one to many. Do not be satisfied with only the positive ones. Prove the system is robust.
#9 There is more than one expected result.
A single expected result is a rare occasion. In a typical web application, you might have to check the result in the UI, check whether the value is properly sent through the API, and whether it’s stored correctly in the database. You need to verify the expected result in 3 different places.
Include in the test case the additional info on how to access those places and how to verify the expected result.
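The multi-layer verification described above can be sketched as a single helper that checks the same expected value everywhere it can be observed. This is a hedged sketch: `FakeLayer` and `assert_everywhere` are hypothetical names standing in for real UI, API and database access (Selenium, HTTP client, SQL) in an actual suite.

```python
class FakeLayer:
    """Hypothetical stand-in for one observation point (UI, API or DB)."""
    def __init__(self, value):
        self._value = value

    def read(self, field):
        return self._value

def assert_everywhere(field, expected, *layers):
    # The expected result must hold at every place it can be observed.
    for layer in layers:
        assert layer.read(field) == expected, f"{field} mismatch"

# One logical check, three verification points.
ui = FakeLayer("Jane")
api = FakeLayer("Jane")
db = FakeLayer("Jane")
assert_everywhere("first_name", "Jane", ui, api, db)
```

In a manual test case, the equivalent is three expected-result rows, each with instructions on how to reach that layer.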
#10 Always write in Tables instead of Lists
What we call a test case can be a single value or a combination of values. It can be a single action or a combination of actions. But most of all, it’s a writing concept depicting a combination of input values, actions and expected results.
A table is the best format for a test case.
Even if you need to execute a sequence of actions, do not write it as a list of actions. When you start writing a test case, regardless if you use a test case management system or not, always start with a table.
#, Step, Input, Expected Result – that’s your test case table; it’s all you need. Add a short section about the purpose, the preconditions, configurations, etc.
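A sketch of what such a table might look like, using a hypothetical “New User” form as the example:

```
#  | Step                      | Input      | Expected Result
---+---------------------------+------------+-----------------------------------
1  | Open the "New User" form  | –          | Form is displayed, SAVE disabled
2  | Enter @firstName          | Jane       | SAVE button becomes enabled
3  | Click the SAVE button     | –          | "User created" message is shown;
   |                           |            | the record appears in the Users list
```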
#11 Use domain and UI language.
The wording in the test case must follow your conventions. If there aren’t any, the test cases are a good starting point to build your product vocabulary.
Very often, the engineers use different names for features and components than the product or the sales and marketing teams or the customers.
Testers are the ones who need to speak all the languages, and the test cases are the source of truth for names and behaviour.
#12 Follow test case naming convention:
Whatever content you create, it should be clear at a glance what information it provides and how to navigate through it. The best naming convention for a test case is: “Module – Functionality – What case do you verify with it.”
Do not write test case titles like “Verify the reporting works as expected”.
There are two major advantages of writing title schemes:
First, it’s unambiguously clear what the purpose of the test is and what the feature under the test is.
Second, it makes the test cases searchable, filterable and sortable, easy to group, organise and prioritise.
#13 Follow test data styling convention:
If you’ve read my previous threads on test case writing, you’ll notice a major theme – clarity. And it’s not only about test cases. What is known must be clearly expressed to the target audience. Not surprising at all, the style makes the difference.
Do not go to the other extreme, wasting too much time and effort on design-first text that is even more unclear.
Use font styles cleverly to distinguish test case text – instructions, steps, expected results – from test data, configurations, feature and component names, etc.
The easiest thing to do is use ALL CAPS for anything that is not plain text – any UI component or feature name. That’s also the fastest to write. Other examples of test data styling conventions:
- Save, SAVE, Save, SAVE button.
- File, EDIT, View, HISTORY menu
- “Save” button; “File” menu.
Styling conventions also apply to input test data, but the smartest practice is using test parameters and extracting the test data outside the test.
- @firstName, @firstName, @first_name, @FIRST_NAME
- <firstName>, [first_name]
- [firstName], “first_name”
Start writing and see for yourself what’s easiest and fastest to write. For me, the best format for test parameters is @firstName.
Last but not least, parameterised test cases are much faster to automate.
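The @firstName-style parameters map directly onto a parametrize table in an automated test, which is why parameterised test cases are so much faster to automate. A minimal sketch, assuming a hypothetical `normalize_name` function as the behaviour under test:

```python
import pytest

def normalize_name(first_name: str) -> str:
    """Hypothetical function under test: trims and title-cases a name."""
    return first_name.strip().title()

@pytest.mark.parametrize(
    "first_name, expected",  # columns mirror the Input / Expected Result table
    [
        ("jane", "Jane"),
        ("  JANE ", "Jane"),
        ("jane doe", "Jane Doe"),
    ],
)
def test_normalize_name(first_name, expected):
    assert normalize_name(first_name) == expected
```

Each row of the manual test case table becomes one tuple; adding a new case to the automated suite is one line, not a new test.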
These are my 13 rules on writing test cases that have proven to build great relationships between product, design, testing, engineering and support teams through the years, as they unify the product language and streamline the communication between the teams.
Test cases are a knowledge base. They are the living history of the software and are there to reply to exclamations like “This has always worked like that!”, “It’s by design!”, “How is this supposed to work?”, “Why did that bug escape?”.
Test cases are also a great analysis and risk management tool. While writing them, the team can immediately spot omissions, contradictions, ambiguities, scope creep and scope cuts in the requirements, and thus enter development with much more clarity about the customer expectations.
If “We should’ve refined the story better” is one of the regular items under the “Improve” section of your retrospectives, don’t miss my “Mastering User Story Analysis” workshop with the state-of-the-art User Story Analysis Miro board!