Why record-playback (almost) never works and why almost nobody is using it anyway

Alister Scott once again calls out a number of spot-on technical points regarding the use of automation tools. In this case, he discusses record/playback automation tools.

Technical reasons aside, we also need to look at the non-technical reasons.

I’ve only once encountered someone trying to rely on the record-playback feature of an automation tool (my boss, working as a consultant and earning a commission on the licence of the tool). Record-playback exists primarily as a marketing tool. When we say ‘record-playback fails’, I generally take that to mean the product was purchased on the dream of programmerless programming (I’m looking at you too, ‘Business Process Modelling’) and quickly fell into disuse once the maintenance cost exceeded the benefit of the automation.

The other common failing, of course, is that the most developed record-playback tools are (were?) expensive. I’m not sure what the per-licence cost of QTP/UFT is these days, but it used to be about 30% of a junior tester’s salary. Calculating the costs for even small teams, I could never defend the cost of the tool for regression automation over the value of an extra thinking person to do non-rote testing activities.

So if we can get past the bogus value proposition, especially relative to the abundance of licence-free options, there is a very limited space in which record-playback might add value:

– Generating code as a starting point for legacy projects. I’ve used record-playback to show testers how to start growing tools organically. That is, begin with a long, procedural recording of something you want to automate. Factor out common steps, then activities, then business outcomes. Factor out data. Factor out environment, and so forth, as you determine which parts of your system are stable in the face of change (a sketch of this factoring follows the list).
– If your test approach is highly data-driven, and the interface is pretty stable with common fields for test variations, you could quite feasibly get sufficient benefit from record-playback if your testers were mostly business SMEs and there was little technical expertise. For example, if testing a lending product you might have input files for different kinds of loans with corresponding amortisation and loan repayment schedules. When testing a toll road, we had a pretty simple interface with lots of input variations to test pricing. In this situation, the cost of the test execution piece is relatively small compared with the cost of identifying and maintaining test data (a data-driven sketch also follows the list).
– When we have some execution that we want to repeat a lot in a short space of time, with an expectation that it will be thrown away, quickly recording a test can be beneficial. In this case, we still have free macro-recording tools as alternatives to expensive ‘testing’ tools.
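
As a rough illustration of the first bullet, here is a minimal Python/Selenium sketch of growing a recorded script into something maintainable. Everything in it is hypothetical: the URL, the element IDs, and the loan-application workflow are invented for the example, not taken from any real system.

```python
# A sketch of factoring a raw recording into steps and business activities.
# All URLs, element IDs, and business terms below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

# Stage 1: what a raw recording typically looks like -- one long,
# procedural run of locate-and-act lines:
#
#   driver.get("https://example.test/login")
#   driver.find_element(By.ID, "user").send_keys("jsmith")
#   driver.find_element(By.ID, "pass").send_keys("secret")
#   driver.find_element(By.ID, "submit").click()
#   driver.find_element(By.ID, "menu-loans").click()
#   ...dozens more lines of the same...

# Stage 2: factor out common steps, with the data pulled into parameters.
def login(driver, username, password):
    driver.get("https://example.test/login")  # hypothetical URL
    driver.find_element(By.ID, "user").send_keys(username)
    driver.find_element(By.ID, "pass").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def open_new_loan_application(driver):
    driver.find_element(By.ID, "menu-loans").click()
    driver.find_element(By.ID, "new-application").click()

# Stage 3: factor steps into a business-level activity, so tests express
# outcomes while the fragile UI detail stays in one place below them.
def submit_loan_application(driver, applicant, amount):
    login(driver, applicant["username"], applicant["password"])
    open_new_loan_application(driver)
    driver.find_element(By.ID, "amount").send_keys(str(amount))
    driver.find_element(By.ID, "apply").click()

if __name__ == "__main__":
    driver = webdriver.Chrome()  # any WebDriver will do
    submit_loan_application(
        driver,
        applicant={"username": "jsmith", "password": "secret"},
        amount=250_000,
    )
    driver.quit()
```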
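
And for the second bullet, a minimal sketch of the data-driven shape, where a tiny execution piece is driven by a file of variations. The CSV layout, the column names, and the calculate_repayment stand-in for the system under test are all assumptions invented for illustration.

```python
# Data-driven checking of loan repayments: the variation lives in the
# data file, and the execution piece stays small and stable.
import csv

def load_scenarios(path):
    """Each CSV row is one loan variation: inputs plus expected outputs."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def calculate_repayment(principal, annual_rate, term_months):
    """Standard amortised-repayment formula, standing in for the real
    system under test in this sketch."""
    r = annual_rate / 12
    if r == 0:
        return principal / term_months
    return principal * r / (1 - (1 + r) ** -term_months)

def check_scenario(row):
    expected = float(row["expected_monthly_repayment"])
    actual = calculate_repayment(
        principal=float(row["principal"]),
        annual_rate=float(row["annual_rate"]),
        term_months=int(row["term_months"]),
    )
    assert abs(actual - expected) < 0.01, f"{row['scenario_id']}: got {actual:.2f}"

if __name__ == "__main__":
    # Hypothetical file with columns: scenario_id, principal,
    # annual_rate, term_months, expected_monthly_repayment
    for row in load_scenarios("loan_scenarios.csv"):
        check_scenario(row)
    print("all scenarios passed")
```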

If we think of record-playback as tool assistance, rather than a complete test approach, there are some in-principle opportunities. In practice, they don’t usually stack up economically and we have other strategic options to achieve similar ends.

2 comments on “Why record-playback (almost) never works and why almost nobody is using it anyway”

  1. Derek says:

    Hi Jared. Bang on the money. Time after time I’ve encountered ‘Testing’ teams whose testing revolves around record/playback tooling, often with disastrous results. By which I mean pretty much useless testing.

    I’ve always maintained that a tester should first and foremost be a developer, or at least have significant development experience, because the best testing suites are those which are crafted with as much care as the app they are testing. Testers who think testing is a matter of recording and playing back, or worst of all, base their testing on manually executing steps listed in spreadsheets, should be sacked.

  2. Jared says:

    Funnily enough, someone was trotting out the idea of mobile record-playback tools the other day. To be fair, the context was kind of limited (record-playback as implicit throwaway), so the suggestion wasn’t totally without merit in this case, but I hadn’t even realised it was still a thing.

    I understand the view that testers should have development experience, but I also see that it brings certain tradeoffs. Agree that where testers were/are happy to be robots and execute procedural tests, you almost certainly have the wrong kind of testers for modern software development. I have been meaning to write a post about the first point, and started one on the second about ten years ago that I have never quite been able to finish. I won’t try to do justice to such a complex discussion in the comments here, but thanks for the prompt to finish what I started 🙂
