Does Cucumber suck?

I’ve been having a lot of rants about Cucumber of late, as it’s the new shiny thing for agile teams.  Does anyone else have issues with it?  I’ve asked all of my programmer friends to convince me of its worth, and they’ve all failed so far.  I’ve not seen it adding any value above building a good API (and I see it bringing a lot of negatives relative to other possible approaches).

In my experience, I’m seeing:

- Customers/non-programmers never write the tests. They have very little interest in specifying everything in given-when-then; they just want to tell us what they want, and it doesn’t make much sense to specify everything in that format anyway.
- Or, customers/non-programmers do write the tests, but this focuses the test effort on writing those kinds of tests rather than on other testing that seems to add more value.
- The tests are written in English, but what a test actually does depends on how the developers convert the English phrases into code, so there’s no guarantee it tests what the customer intended anyway.
- Avoidance of conversation (i.e. tests as contracts).
- Cucumber and the related tools (through the toy examples they provide) encourage developers to put lots of implementation detail into the tests, and sometimes to do a lot more testing through the GUI/HTTP layer rather than pushing some of that testing down (there’s a sketch of this after the list).
- Refactoring sucks as we lose IDE support.
- Much heavier test artefacts (when a lot of the teams I work with are already struggling with the weight of their agile test automation).
- A continued focus of tests on what the system does, rather than on why it does it and the business outcome it serves.
- Anecdotally, I’m not seeing better outcomes than people were getting with other approaches.  I realise it may be leading to better designs, but then I’d expect to see improvements somewhere in the process.  That looks like it would require better modelling skills than I’m seeing in most of the Cucumber tests on my projects.
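
To make a couple of those points concrete (the English-to-glue mapping, and the implementation detail that creeps in), here’s a minimal sketch. It uses an invented booking example and Python’s behave rather than Ruby’s Cucumber, but the Gherkin-to-step-definition wiring is the same idea: the customer reads the English, and what actually gets tested is whatever the developer wires up underneath.

```python
# A minimal sketch (Python + behave, invented example) of how Gherkin text
# maps onto glue code. The scenario below is the imperative, GUI-flavoured
# style that the toy examples tend to encourage:
#
#   Scenario: Customer books a flight
#     Given I am on the "/flights" page
#     When I fill in "passengers" with "2"
#     And I click "Book"
#     Then I should see "Booking confirmed"
#
# The customer reads the English; what each line actually does is decided
# by whichever step definition happens to match it.

from behave import given, when, then


@given('I am on the "{path}" page')
def step_open_page(context, path):
    # Drives a hypothetical browser object set up in environment.py.
    context.browser.visit(context.base_url + path)


@when('I fill in "{field}" with "{value}"')
def step_fill_in(context, field, value):
    context.browser.fill(field, value)


@when('I click "{label}"')
def step_click(context, label):
    context.browser.click_button(label)


@then('I should see "{text}"')
def step_should_see(context, text):
    # Passes as long as the text appears somewhere on the page; nothing here
    # checks the outcome the customer cares about (seat held, payment taken).
    assert text in context.browser.page_text
```

Every step is about UI mechanics, and the final assertion only looks for a string on a page. That’s the gap between what the customer thinks the test says and what the glue code actually checks.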

Most of the examples I’ve seen where people claim success are fairly small applications.  I’m not seeing the approach scale that well.  Yes, most teams could write their Cucumber tests better, but even then, in my experience, other approaches would be more effective and more efficient.

Any thoughts?  If there’s interest, I’ll try to post some examples of what I think those tests might look like if we pushed them into other forms.
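
In the meantime, here’s a very rough sketch of the general direction, with an invented BookingService standing in for a real application API; the names are made up for illustration, and the point is the shape (state the business outcome directly against the code) rather than the detail.

```python
# A sketch of the same booking check written as a plain test against the
# application's own API. BookingService and its methods are invented here
# purely to make the example self-contained.

import unittest


class BookingService:
    """Stand-in for the real application code under test."""

    def __init__(self, seats_available=2):
        self._seats_available = seats_available

    def book(self, passengers):
        if passengers > self._seats_available:
            raise ValueError("not enough seats")
        self._seats_available -= passengers
        return {"status": "confirmed", "passengers": passengers}


class BookingTest(unittest.TestCase):
    def test_booking_available_seats_is_confirmed(self):
        service = BookingService(seats_available=2)
        booking = service.book(passengers=2)
        self.assertEqual(booking["status"], "confirmed")

    def test_overbooking_is_rejected(self):
        service = BookingService(seats_available=2)
        with self.assertRaises(ValueError):
            service.book(passengers=3)


if __name__ == "__main__":
    unittest.main()
```

There’s no extra grammar to maintain, refactoring tools still work, and the test names say which business outcomes matter.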

The Onion predicts the future once again!

The Onion shows off their systems-thinking skills once more.

Compare this article on predictive sentence completion with The Onion’s Mac Wheel features at around the one minute mark in the video at http://www.theonion.com/content/video/apple_introduces_revolutionary.

Other clairvoyant articles are here and here. Find your own here.

Requirements and specifications: What's the difference and what's it to you?

I’ve recently followed a number of threads in a few different forums where people have discussed requirements, what it means for requirements to be ‘good’, and what it might mean for requirements to be unambiguous. What usually follows is a long-winded back and forth, with no resolution.

At the heart of this deja vu is the fact that one person’s requirements are not necessarily another’s. I resolve this by drawing a clear distinction between specifications and requirements. The distinction may be obvious to some, but in practice it seems to be something we struggle with.

In “Software for dependable systems: Sufficient evidence?”, Daniel Jackson and his co-authors take care to highlight this issue:

“Software systems that are developed specially for a particular client are typically built to meet preagreed requirements, but these requirements are often a long and undifferentiated list of detailed functions.”

“The requirements of a software system should describe the intended effect of the system on its environment and not, more narrowly, the behavior of the system at its interface with the environment, which is the subject of the specification.”

They also take care to point out that the environment of the system includes the software product, plus the humans that use it, and other factors external to the software.

The specifications are not about things that anybody *needs*. Specifications represent the end result of negotiations, conversations, politics and expediency that some group of people thinks represents an understanding that is good enough for now. The specification is, at best, our best guess at what’s going to make the world a better place for the numerous people who have a stake in the thing that we’re building. Specifications are a waypoint on the path to something else.

Specifications are never equivalent to requirements in the case of things that will be used by humans.

Specifications apply to the pointy end of a screwdriver that needs to fit into the indented part of some screw.

As testers, testing to specifications is something we do because finding out about the requirements is too hard. We do it because that’s what the testers before us did. We do it because the process might be built around specification documents, and that’s what managers are tracking against. We might reasonably test to spec if our job is just to test a software component that’s on its way to being integrated with something else. However, testing to spec can’t tell us that the system is going to yield the desired benefits.

In the case of the screwdriver, requirements apply to the handle – how it fits your hand, and the hands of others. They apply to whether it gives you enough grip and whether it has a switch to make it only screw or unscrew. They apply to whether it fits enough hands in the world, and whether it can be built to a price that someone is willing to pay. Requirements apply to whether the pointy end of the screwdriver will make do for unscrewing (or screwing in) a screw that is not of the size covered by the specification for the pointy end of our screwdriver.

Requirements are about utility and real human problems, and are fuzzy, and messy, and never fully understood. When our stakeholders ask us questions about testing, though they often don’t phrase it this way, they are usually interested in information with respect to requirements, both explicit and implicit.

Test to see how the product measures up to requirements, and to learn about what the requirements are. That’s the value that you can bring to the project.

Planning to make use of learning – Incremental vs Iterative

During coffee, Agile coach and all-round excellent guy Shane Clauson, in sympathy with yet another of my what’s-wrong-with-agile rants, pointed me to this blog post from Jeff Patton:

Don’t know what I want, but I know how to get it

While my opinions diverge from some of what he says must be true, I think the important message that he (and others – check Alistair Cockburn’s writing on this) makes is that it pays to plan to iterate. That is, if you’re on an agile project and you don’t see anyone planning to rework things in response to feedback from using the product, you’re probably in for some disappointment.

I think we frequently fail to give our customers an appropriate expectation when it comes to (agile) software development. Having them read this isn’t a bad start, but you’ll want to figure out how to make this message your own.

What are your users doing (or interview techniques for project analysis)

For the first time, I’m helping run planning sessions for an agile project. Planning has been a bit of a bugbear for me on many of my recent projects, so I’m excited to have a chance to try some things out. So far, it seems to be going well. It’s a short project, so I shouldn’t have to wait too long for feedback on how things went, which is a bonus.

One of the things I’ve noticed is that in planning or requirements sessions, there’s a tendency to fall into an interview format. That is, we (the development team) ask questions, write things down, ask clarifying questions, and repeat until we think we have enough information to move forward. That can be fine sometimes, but one of the simplest things (though not necessarily the quickest) we can do to help our understanding is to ask our customers to show us how they work. That is, the interview style becomes ‘fly on the wall’ rather than ‘question time’.

As a tester on more traditional projects, I’ve also had occasions where, despite the documentation around the project, deep understanding of the domain was difficult to come by, and generally absent from the test team.

On a recent project I had some success combating this by just having the gumption to ask a passing front-line operator “Can I sit with you and watch you work, or do you have time to walk me through the systems you use?”

“Sure! Your desk or mine?” was the answer.

It didn’t hurt to ask (I could fit the time in around my other responsibilities), and he was happy to know that some of his concerns might be represented in future. But I know that I don’t always think to do this, or feel confident that my request will be welcome. I learned a lesson, though.

So on my current project, it’s time for us to ask the same question of our client. We’re talking through the problem, but the moment the development team is on its own, we keep hitting decisions that can’t be made because our thinking is functional thinking. That is, the client is suggesting features, but we’re not digging deeper to understand the precise intent of those feature requests, or the problem that those features will solve. This understanding is critical not just for our testing, but for the whole team to be successful.

Now, we don’t have to work this way. We could just start building the product and let unsatisfying features fall out of the process with our on-site customer, but I believe the amount of up-front thinking and analysis we do is an uncertainty reduction activity. We almost guarantee ourselves a slightly longer project, but I think we reduce the likelihood of having nothing useful to show to the customer at the end of our budgeted time. Critically, I think we have to inform our customers of the tradeoffs we are making around up-front analysis, and how that will impact them.

Excuse me now, I have to go ask someone a question!

Every tester needs a healthy dose of paranoia

I wonder if Google testers think like this?

http://www.radaronline.com/from-the-magazine/2007/09/google_fiction_evil_dangerous_surveillance_control_1.php

What's in a name?

There seems to be a flurry of post-agile activity on blogs right now. If you haven’t noticed, you can look at an example here. There is more elsewhere, and Jonathan Kohl tells me there is more coming. What this amounts to is a growing number of people who, for a variety of reasons, have a problem associating themselves with the word “Agile”.

I still talk about “Agile”. I don’t make assumptions about what others mean when they say “Agile”. I ask. In conversation, I clarify what I mean when I use the word. I suspect this is a habit many testers pick up over time, due to the endless definitions applied to a small set of words in our field.

But I do see a problem with the name. If we look at the top of the agile manifesto, we see “We are uncovering better ways of developing software by doing it and helping others do it”. That doesn’t seem any different to what people trying to re-badge Agile propose, but I do see that Agile might not be the right word for “uncovering better ways of developing software”.

Perhaps it would be better to figure out what it should have been called in the first place. But how often is the name of a movement, phase or era bestowed by the movement itself? My limited knowledge of such things tells me that it’s those who come later who get naming rights.

I can see why some still cling to the name. I can see why others seek a better name. The principles and values of “Agility” as defined in the manifesto are principles of a healthy working environment. The term Agile helped to distinguish a group of approaches from the dominant thinking of the day. Now that Agile approaches have entered the mainstream, the challenge is not necessarily to be more agile, even for Agile projects.

I’m not comfortable with the proposed alternative term ‘Pliant’. I think it creates a judgement in people’s minds, a tool for people to make divisions. Just as people may be derided for not being “Agile”, so might they be derided for not being ‘Pliant’. Perhaps the biggest problem with Agile (and Pliant) is that it’s not about a particular set of practices, or one aspect of an organisation, but a word that threatens to encompass the whole organisation. Perhaps we should just get back to words like software development, systems thinking, management, testing…

I’m more comfortable with “Post-agilism”. It’s less confronting. It’s a reference to a time, not a relative property of a development approach. But if we accept that the purpose of the Agile manifesto was to encourage us to figure out better ways of developing software, the post-agile and Pliant thinkers are still doing exactly that. There is a parallel in that some people see post-modernism as simply “late modernism”. Maybe we are in “late agilism”. But history may decide that software development is simply progressing, and the Agile manifesto was just a kick in the pants in a period of stagnation.

“Agile” is not a problem. In some ways, though, the name is. It’s a name that captures a reaction, not the more important goal of continually improving the craft through practice. It served its purpose. It opened up discussion and suggested ways in which things might be different. It was a banner to rally under for people presenting an alternate vision to the common practice of the day. It wasn’t an absolute, procedural description of how to achieve ‘Agility’. The manifesto is evidence of that. But many see the right-hand side of the Agile values gaining dominance over the left.

It’s time to move on, and not get stuck in the process-driven ruts Agile was trying to get us out of. Maybe we can do it without a label this time, and look back on the words of the Agile manifesto as a reminder to keep looking for better ways.

If we can’t, we could do much worse than to label our ideas “post-agilism” or “late agilism”. It’s bound to be renamed by someone else one day anyway.

Study session difficulties, or the Learning Organisation challenge

In the Yahoo group supporting Cem Kaner’s Black Box Software Testing course, Anil Soni has been describing experiences organising and leading internal training, using the BBST course materials. One point in particular caught my attention:

> The major challenge is to have all the testers together in the same time
> needed for the group interaction.

In a recent role, I had trouble getting all of my testers together for study sessions. Often, the testers felt uncomfortable stopping work to come to a study session. At times, work was a genuine priority. At other times, the things testers were working on could wait an hour or two, but it was difficult for the tester to make that decision.

Sometimes I would have to make the decision for them. That is, tell them “This is more important than the work you are doing right now”. I could do this because I had management support for creating a learning environment.

Organisationally though, we were sending mixed messages. The project manager expressed the opinion “We’re not running a school here”. The development manager countered with “We absolutely are”. These two statements captured the tension between getting today’s job done now and ensuring that tomorrow’s work can be done better or more efficiently. In one statement is the view that employees should always be working on something that directly contributes business value; in the other, the view that it is vital that time be set aside during work hours expressly for improving the skills of staff and for reflecting on past experience.

At this point, I don’t really want to enter into the philosophical debate around whether organisations should be learning organisations or not. However, if you’ve decided that it is important that your staff learn and improve, here are a few things that I would consider:

  • Make it clear how much time you expect employees to be spending on improving their skills.
  • Check that they are spending the necessary time. If not, find out why.
  • Keep groups small if possible. This allows scheduling to be more flexible.
  • Get a clear statement (or statements) from whoever is in authority that study time is important for the organisation.
  • Establish and communicate principles for when work should take priority.
  • If participants are having difficulty deciding what is more important, help them.
  • If you are going with a less structured informal learning approach, how will you know whether team members are using their study time for study? That is, what feedback mechanisms will you have to help understand and support your team’s learning needs?

On this last point: in one team, a thousand-dollar self-education budget was available to each employee per year. They were free to spend it on anything they wished. However, in a year and a half, almost none of the team had made use of that budget. That’s one example of a feedback mechanism, triggered by the question “Why haven’t you used your training budget?” Regular team meetings, where each team member presented some recent learning, were another.

The dominant work culture is about appearing to be busy, and doing ‘real work’. Despite support at the highest level, if other management levels don’t support the company’s learning objectives, employees will be quite nervous about spending time on tasks that their close managers may consider trivialities. This is a significant cultural change, and the forces of history will be working powerfully against you.

I’ve recently come across the Informal Learning blog, and hope to write more on this subject in the future.

Models of software development

After an email exchange with Matt Heusser, Matt has posted my comments on how our work tools sometimes influence our behaviour on projects. That’s because these tools are based on models of how someone believes software should be developed. Perhaps more importantly, the tools that I’ve seen are usually designed to ensure that the team conforms to a process, not to ensure that the right things have been done.

The tool that I feel most strongly influenced the agile process at a former employer is the workflow tool. Workflow tools are typically linear-flow tools. I believe that the use of these kinds of tools (in our case JIRA, but you can substitute most bug-tracking tools if you like) strongly reinforced a linear model of software development. I’m not aware of any tools that don’t model software development this way, but I’m quite happy to be shown to be incorrect.

If you don’t want your team throwing their work “over the wall”, check that your tools model development the way you think it should be done.

Investing for maintenance – Tradeoffs and calculations

In the context-driven software testing Yahoo group, there has been an interesting thread on magic numbers. Part of this discussion related to magic numbers for software maintenance investment. While I think you can find plenty of literature that advises a bias towards maintenance, my friend Michael couldn’t find any models that satisfied our burning questions, so he built his own.

Check it out, and thank him for doing my homework :)


About me

I'm Jared Quinert, a testing consultant located in Melbourne, Australia. With over fifteen years of experience, I specialise in agile testing, context-driven testing and intelligent toolsmithing with a focus on business outcomes over process. As one of the most experienced agile testers in Australia, I've been diving in hands-on since 2003 to discover how to build successful whole-team approaches to software development.

Contact Me