Passing and failing tests considered harmful

Alan Page of Microsoft suggests that a clean split into passed and failed tests exists only in a perfect world, and that the shades of grey in between help us provide more useful information. He then asks “What else do you report as test results (to supplement test case pass/fail counts)? What do those results mean?” Read more at:
http://blogs.msdn.com/alanpa/archive/2007/11/07/pass-fail-and-other.aspx

I think Alan intended to say *automated* tests either pass or fail, but I’ll wait for him to clarify. As a human, I’m easily capable of saying ‘I don’t know whether we should be happy with what I’m observing or not’. Or, ‘The test case passed, but I observed other problems that don’t appear to be related to this test case’. Or, ‘I got 9 of the 10 useful pieces of information from that test case, and the 10th piece of information isn’t really that important’. More importantly, the binary ‘pass/fail’ state can also distract from the idea that the tests exist simply to reveal information. That a ‘failed’ test violated my expectation may or may not be a problem.
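To make those shades of grey concrete, here’s a minimal sketch (in Python; the names and example data are purely illustrative, not any particular framework’s API) of a test result that records information rather than a bare boolean:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Verdict(Enum):
    """Possible outcomes beyond a bare pass/fail boolean."""
    PASSED = "passed"
    FAILED = "failed"
    INCONCLUSIVE = "inconclusive"  # 'I don't know whether we should be happy with this'
    BLOCKED = "blocked"            # couldn't gather the information at all


@dataclass
class TestObservation:
    """One test run recorded as information, not just a binary state."""
    name: str
    verdict: Verdict
    # Things noticed along the way that the verdict doesn't cover,
    # e.g. 'passed, but I saw an apparently unrelated problem'.
    side_observations: List[str] = field(default_factory=list)
    # Free-text note on how much of the hoped-for information we actually got.
    coverage_note: str = ""


result = TestObservation(
    name="checkout_with_expired_card",
    verdict=Verdict.PASSED,
    side_observations=["payment page took ~8s to render on first load"],
    coverage_note="9 of the 10 useful pieces of information; the 10th isn't important",
)
```

The side observations and coverage note are exactly the things a bare pass/fail count throws away.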

So my preference is to shy away from quantitative metrics, and look to have conversations with stakeholders instead. If I can’t have those conversations, then I’ll broadcast the qualitative information via test reports.

To state it another way, I don’t think that management actually care about what percentage of test cases passed, except that someone somewhere along the way made them think it was a useful proxy metric for something else. I think they want to know ‘Can we feel comfortable shipping?’ or the equivalent for your environment. I think reducing the answer to that question to ‘100 percent of the test cases passed’ really doesn’t help.

Having said that, if the metrics can be collected cheaply enough, and not providing them is going to upset the process police, I’ll happily report them along with some other metrics. But I’ll be sure to supplement them with a story about how our testing has gone: trends, events over time, threats to the product, and how confident I feel in the test effort.
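As a rough sketch of what that supplemented report might carry (again illustrative Python; the field names and example entries are invented, not a real reporting tool), something like this keeps the counts but makes the story a first-class part of the report:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TestReport:
    """Pairs the cheap-to-collect counts with the qualitative story."""
    counts: Dict[str, int]                                    # the numbers the process police want
    trends: List[str] = field(default_factory=list)           # how things are moving over time
    notable_events: List[str] = field(default_factory=list)   # things that shaped the testing
    threats: List[str] = field(default_factory=list)          # known risks to the product
    confidence_statement: str = ""                            # 'can we feel comfortable shipping?'


report = TestReport(
    counts={"passed": 120, "failed": 3, "inconclusive": 5},
    trends=["failures in the payments area have doubled over the last two builds"],
    notable_events=["test environment was down for a day, so the browser matrix is half covered"],
    threats=["no performance testing yet on the new search service"],
    confidence_statement="Comfortable with the core flows; not yet comfortable with payments.",
)
```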

3 comments on “Passing and failing tests considered harmful”

  1. Alan says:

    Yes – automated – i.e. if a test passes in the woods and nobody is there to see it…

  2. Alan says:

    And now that I’ve read the whole post…

    Yep – I agree. What I encourage teams to do is track results of automated tests and investigate non-passing tests, BUT – for management reports, report something (anything) more meaningful. One example is to just report which scenarios across the product are passing or failing.

  3. Al says:

    I’ve worked for companies in the past that believe all facets of the product should be tested, so a state like ‘The test case passed, but I observed other problems that don’t appear to be related to this test case’ shouldn’t happen.

    Of course, this is bollocks. I like the idea of providing qualitative results. The problem would be convincing managers who want numbers-and-graphs style reports. In my view they’re extremely limited and miss out on the testers’ opinion. A test lead from an earlier job provided numbers and graphs, but also provided feedback from the test team to management. Essentially it amounted to ‘How do you feel about the product?’ – i.e. after all the testing you’ve done, what’s your overall impression of the quality of the product.

    It’s just a matter of educating management in my current role. With a lot of them not having been exposed to testing before, it’s proving somewhat difficult. The dev team are very accommodating; upper management are good at times, depending on who has the collective neuron sack at the time.
