Thinking about Test Strategy – A mnemonic device
I’ve recently been on the move a little, and have had a lot of chances to work on test strategy. I generally have historical documents to work from, but decided I should try to come up with a mnemonic device to ensure that I have all of the critical conversations that I need.
One of the most influential things for this was the RUP development case that Michael Ruschena presented when I worked with him at ANZ. This documented the team approach we had agreed upon to deliver a solution to the customer’s needs when working on our first agile project. This was one of the first times I was really conscious of “software development as applied problem solving”, prior to encountering Gerry Weinberg’s work. Paul Szymkowiak also commented that my test strategy reproduces a lot of what would be in the RUP vision artefact. I realised that in many cases, that’s true. Before deciding on the testing mission, what I’m frequently trying to facilitate is consensus on the project or business goals.
So that’s how this helps me. I hope it might help you. To that end, I present the first public version of my test strategy mnemonic – “GRATEDD SCRIPTS”.
- Goals – What are the critical goals of the product? What are the things that absolutely must work?
- Risks – What things are we hoping to avoid? What bad things might happen? What are we doing to address these?
- Approach – How are we going to test this? How will we work together? Who will do what? Accountabilities may also be considered here.
- Tradeoffs – What tradeoffs are we prepared to make in order to deliver the desired business outcomes?
- Environments – What environments do we have or need? What testing will we do in those environments?
- Dependencies – Is anyone ‘outside’ the project depending on us? Are we depending on anyone? Are there external gates or constrained resources?
- Data – Where will data come from? Are there any special needs?
- Stakeholders – Who has a stake in the software/product/test effort? What are their goals? Who is accountable for what? Who needs to be involved in signoffs and reviews?
- Coverage models – What models will we use to know that we are testing the things we care about? What models will drive test design and test coverage discussions?
- Resources – Who is available to help with testing? What other resources might we need? What’s the budget?
- Information needs – What information needs to come out of the testing process? What decisions does it need to support? What are we trying to learn?
- Prioritisation – What is most important? How will we resolve competing interests, tasks or tests?
- Tooling – Will any special tools be required? Will support be needed to develop these?
- Schedule – What are the important dates and timings?

Some of the items overlap, but that’s OK for my brain. Different prompts cause me to consider things in a different light.

Ben Kelly proposed ‘Budget’ as a separate item. For myself, I cover this in Resources, but if you work in an environment where you are more hands-on with budgets than I typically am, you might find the mnemonic “B-GRADED SCRIPTTS” helps you with a more explicit ‘B for Budget’ reminder.

So next time you are thinking about what your test effort should look like, try working through the above points and see if there is anything you’re missing. Try to come up with your own mnemonic devices too, to help you remember the things that are important to you.