Testing does not prevent defects

There seems to be a lot of discussion about whether or not testers prevent defects.

The main source of confusion I see is conflating ‘testers’ with ‘testing’. Clearing this up seems pretty simple.

Testing does not prevent defects. Testers may. I do. But I don’t call that part of my work ‘testing’, even if testing and experiment design is a part of that work. Clarifying which aspects of my work are ‘testing’ and which are not is important for at least two reasons.

The first is that I can discuss those skills clearly and keep that knowledge in one place. This allows me to make shortcut references to those skills when I apply them in other roles I may be playing.

The second is that time I spend playing some other role that may be about defect prevention (or team/product alignment, shared focus, or anything else) usually involves (sometimes irreconcilable) tradeoffs in the quality of my testing work, or less available time for test effort. If I’m not making this clear to stakeholders who matter, or if I am unaware of the potential for conflict, I expose myself to a number of potential problems.

This blog post regarding test code being harder than application code was passed around the office, and I thought I would preserve my response here. You’ll need to read it first for this to make much sense.

I think there’s a reasonable point that testing is frequently trivialised, but I don’t think saying that the non-testing code is trivial is the right way to get that message across. Discussions about what developer testing looks like need to take into account a whole raft of context, which this article doesn’t seem to get into (e.g. defect prevention strategies, architecture, system criticality).

For example, this is completely untrue, unless he is assuming an ideological TDD view of the world:

“The tests are the program. Without the tests, the program does not work.”

Without *testing*, the program probably won’t work. Without tests, the program could easily work perfectly well. You’ll certainly have a program (of possibly unknown quality).

This is sometimes true:

“Sometimes the tests are the hard part, sometimes the code is the hard part.”

…but just as often, the code is the hard part and the tests are easy.

At my last job, I remember a simple code change that took a week of test planning, a custom execution framework, two weeks of execution and another week or so of results analysis (plus DBA support to write performant queries against the data). It would have been nice if his examples had included something along these lines. His search example could have provided an interesting example of this, as well as of the limits of automated testing when the results are subjective (i.e. ‘good’ search results).

I’d suggest the main reason tests contain as many errors as code is because both code and test cases are the result of similar communication and modelling processes. The general process of getting working software is the process of bringing the code and the test models into alignment.

Where tests have *more* bugs than code, I’d suggest it’s usually a result of the test design time being squeezed, and the fact that nobody really likes writing procedural test cases.


More haiku updates

I’ve added some new ones and need to take one out. At some point, there should probably be a bunch of Scaled Agile haiku.

See the agile haiku page.

Updated haiku

I’m surprised at how relevant it all still is, but I have added something that I think is missing that I’ve learned in my last couple of roles. See my ‘Essence of Agile’ Haiku for the update.

Some thoughts on iteration vs incrementation

Alister Scott’s post on Incrementation vs Iteration was doing the rounds at work with some comments, and I felt the need to comment.  I had a couple of attempts at responding to this.  It’s a big topic, but to some degree I think iterative vs incremental is a bit of a distraction as a general philosophical discussion (and I *love* philosophical discussion, so I am not meaning to be dismissive).

I think it’s more important to ask:

- What is our plan for validating the product definition?
- What is our plan for validating the architecture?
- How does the organisation want to ‘manage the work’?
- As a team, how will we know when our work is ‘good enough’?
- As a team, how do we plan to manage incomplete work (i.e. future enhancements and defects)?

The first two are probably the biggest factors of uncertainty that will drive the degree of iteration vs incrementation.

The third is a question relating to the organisation’s belief systems around the predictability of software projects, and how much power management wants in designing work methods to support their agendas.

The fourth question will shape how actively the team works against ‘doneness’ by finding bugs, soliciting feedback and exploring the solution space throughout the iteration.

The last question is about how the team wants to manage defects and backlog.  If you don’t want to carry bugs and want to minimise backlog management, my experience is that the sprint needs to plan for internal iteration (or you need to get rid of sprints/iterations and go with a pull/dock assembly approach).

The most important thing is understanding why we iterate.  Alister highlights a couple of examples when he talks about time to market vs user experience prioritisation, but it’s only sensible to talk about iteration in specific contexts of uncertainty and risk management.  Similarly, we discuss incrementation in the context of the cost/benefit of a particular release approach and/or schedule.

I feel the biggest lesson in the Android/iOS history is how expensive it is to fix architecture if you get it wrong.  Excuse me before I start on my old-guy ‘RUP got it right’ rant.

More Ruby goodness for testing

Did I mention how much I love Ruby?

require 'active_support/core_ext/array/grouping' # in_groups is ActiveSupport, not core Ruby

items = ("A".."Z").to_a.in_groups(5, false)

5.times do |i|
  puts items[i].join # join, since Array#to_s no longer concatenates in modern Ruby
  puts "----"
end

Source code is at http://apidock.com/rails/Array/in_groups
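If you don’t have Rails loaded, a rough plain-Ruby sketch of the same grouping uses `each_slice`. Note the difference: `each_slice` fixes the slice *length*, while `in_groups` fixes the group *count*, so the group sizes come out slightly differently:

```ruby
# Plain-Ruby sketch: split 26 letters into 5 groups without ActiveSupport.
items = ("A".."Z").to_a
size = (items.length / 5.0).ceil      # 6 letters per slice
groups = items.each_slice(size).to_a  # ABCDEF, GHIJKL, ..., YZ

groups.each do |group|
  puts group.join
  puts "----"
end
```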

Yet another reason why testers should use Ruby

Instant sequence of New South Wales licence plates:



# wrapped in a method (name illustrative) so the fragment runs as written
def plate_sequence(start_lpn, end_lpn)
  return start_lpn.upto(end_lpn).collect { |lpn| lpn }
end
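The heavy lifting is done by String#succ, which String#upto calls to step from one value to the next: it increments the rightmost alphanumeric character and carries leftward on rollover, so it can enumerate plate-style strings, not just numbers. A quick sketch:

```ruby
# String#succ increments the rightmost alphanumeric character,
# carrying leftward on rollover, which is what lets String#upto
# enumerate plate-style strings.
puts "ABC-12A".succ                           # ABC-12B
puts "AZ".succ                                # BA (the Z rolls over and carries)
puts "ABC-12A".upto("ABC-12C").to_a.inspect   # ["ABC-12A", "ABC-12B", "ABC-12C"]
```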

Cem Kaner interview on uTest blog

Cem Kaner’s interview on the uTest blog talks mostly about his new book, but was especially interesting when it got to the part about emerging trends.

[Scientific] communities are sometimes characterized as moving between two dysfunctional extremes, with stagnation at one end and fragmentation at the other. In the stagnant extreme, everyone pretends to believe in the same things (or the dissenters are unable to gain attention) and the field makes slow, incremental progress. Dissenters break out of this by forming schools that gain influence and add a lot of creative tension.

Testing seems particularly stuck in a period of stagnation at the moment.  Hopefully time will see some interesting new schools form.  I put myself in the ‘dissenter’ category right now with respect to what I would once have considered my testing ‘community’, the context-driven school.  I know I’m not the only one that feels this way, but I’m not in a place in my life right now to voice my feelings on the subject.

Kaner and Bach’s public split is certainly evidence of fragmentation.  It will be interesting to see what happens next.

Jonathan Kohl back down under with testing and user experience training

Jonathan Kohl is going to be back in Australia and New Zealand in February with his mobile testing course, as well as with a new course on mobile application user experience.  Courses are being run through SoftEd in Auckland, Wellington, Melbourne and Sydney, so check it out!

I’m hiring

I currently have a vacancy in my team for an assistant test lead to carry some of the low-level planning, bug and environment management load. The role is also hands-on.

See Transurban’s career site for details or contact me if you have questions.


About me

I'm Jared Quinert, a testing consultant located in Melbourne, Australia. With over fifteen years of experience, I specialise in agile testing, context-driven testing and intelligent toolsmithing with a focus on business outcomes over process. As one of the most experienced agile testers in Australia, I've been diving in hands-on since 2003 to discover how to build successful whole-team approaches to software development.

Contact Me