Free testing book

Via Ben Kelly, Rikard Edgren’s brief but dense ‘Little Black Book On Test Design’ is worth a read.

It’s cheap both in dollars (free) and time (less than 15 minutes if you’re quick).

Something to try if Squirrel SQL stops working on Windows

I’ve been using the free Squirrel SQL client under Windows for a month or so now. It’s a good tool, though somewhat annoying to get working. Today it stopped working: the loading splash screen would display, the progress bar would get about halfway, and then Squirrel would exit without any error message. I had no desire to recreate all of my connections or reinstall the various drivers again, so I really wanted to fix my existing installation.

After trawling through forums, I found a suggestion that the problem might be preferences-related. No precise solution was offered, but I began to experiment to see whether this was my problem.

First, I found the preferences folder, which lives in the user profile under Windows’ Documents and Settings folder (e.g. C:\Documents and Settings\<username>\). Inside this folder is Squirrel’s preferences folder, named ‘.squirrel-sql’. I renamed it and restarted Squirrel. The application started cleanly, so it seemed I was looking in the right place. To troubleshoot further, I wanted to restore the application’s state, so I renamed the new preferences folder that Squirrel had created and tried to rename the old preferences folder back.

No luck! Windows didn’t like me trying to rename the folder back to its original name. I ran Squirrel again, which caused Squirrel to create another preferences folder. I now had three folders – squirrel-sql.old, squirrel-sql.new and the current preferences folder ‘.squirrel-sql’. I opened the old preferences folder, copied the contents and pasted them into the ‘.squirrel-sql’ folder.
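In hindsight, scripting the backup would have spared me the folder juggling. If you’d rather do the rename from a script, here’s a rough Python sketch of the idea. The location of the preferences folder is an assumption based on my install, so check yours first.

    # Rough sketch: set the Squirrel preferences folder aside under a timestamped
    # name so a later restore can't collide with a folder Squirrel has recreated.
    # The location is an assumption based on my install (the user profile folder).
    import os
    import time

    prefs = os.path.join(os.path.expanduser("~"), ".squirrel-sql")
    backup = prefs + ".bak-" + time.strftime("%Y%m%d-%H%M%S")

    if os.path.isdir(prefs):
        os.rename(prefs, backup)  # fails if Squirrel is still running and holding files open
        print("Moved", prefs, "to", backup)
    else:
        print("No preferences folder found at", prefs)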

Looking inside the preferences folder, I could see two folders, ‘plugins’ and ‘logs’, as well as a number of XML files. Now that I had found the broad area I needed to investigate, I wanted to change only one element at a time. As my main objective in resurrecting Squirrel was to not lose my database connections and plugins, I ignored the XML files related to these and looked at the most interestingly named file, ‘prefs.xml’. I renamed this to ‘prefs.xml.bak’ and restarted Squirrel. Still no joy, so I closed Squirrel and restored the file’s original name.

I repeated this step for ‘sql_history.xml’, thinking that this file might be dynamic enough to cause problems. Again, Squirrel failed to start correctly.

Next was a file named SQLAliases23_treeStructure.xml. Suspiciously, it was zero bytes, which seemed odd for something that looked like it was supposed to contain some kind of data structure. I added a ‘.bak’ extension to it and restarted Squirrel again.

Success! I closed Squirrel and could see that it had recreated the SQLAliases treeStructure file, this time with data in it. I restarted Squirrel one more time to make sure there wasn’t some recurring problem with my database aliases, and it happily started again with my connections and query history intact.
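If you hit the same symptom, the fix boiled down to finding zero-byte XML files in the preferences folder and moving them out of the way so Squirrel can recreate them. Here’s a rough Python sketch of that check; the folder and file names are assumptions from my installation, and you should back everything up before running anything like this.

    # Rough sketch: rename any zero-byte XML files in the Squirrel preferences
    # folder so they get recreated on the next start. Paths are assumptions
    # based on my install; back the folder up before running this.
    import os

    prefs = os.path.join(os.path.expanduser("~"), ".squirrel-sql")

    for name in os.listdir(prefs):
        path = os.path.join(prefs, name)
        if name.lower().endswith(".xml") and os.path.isfile(path) and os.path.getsize(path) == 0:
            os.rename(path, path + ".bak")  # e.g. SQLAliases23_treeStructure.xml -> SQLAliases23_treeStructure.xml.bak
            print("Renamed zero-byte file:", name)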

The Onion predicts the future once again!

The Onion shows off their systems-thinking skills once more.

Compare this article on predictive sentence completion with The Onion’s Mac Wheel, which features at around the one-minute mark in the video at http://www.theonion.com/content/video/apple_introduces_revolutionary.

Other clairvoyant articles are here and here. Find your own here.

Claims testing in the wild

As yet another poor internet soul is scammed by a man pretending to be a woman in online chat rooms, I’m reminded of the good sense of my number one internet heuristic.

Jared’s first law of online safety is ‘Assume that everyone you are talking to online is a man’.

This has stood me in good stead over time, but how relevant is it to testing? Well, it’s strongly related to the testing technique of claims testing. Claims testing is what you’re doing when you test a product against specifications or requirements. It is perhaps the most common test approach in the large corporate environments I encounter, but it’s also a technique that can be applied in a shallow, context-free way.

James Bach has two points in his Heuristic Test Strategy Model that I find key in describing claims testing -

- Verify that each claim about the product is true.
- If you’re testing from an explicit specification, expect it and the product to be brought into alignment.

Perhaps closer to what we are actually doing is that we usually verify that each claim about the product *can* be true, for some particular situation or situations. We also try to understand for which cases and contexts the claim holds true. And in order to verify the claim, we set about collecting *sufficient* evidence of the truth of the claim, because exhaustive proof is either prohibitively expensive or impossible.

An example I like to use to teach how claims testing should work begins with me making the claim ‘My name is Jared’. I follow with the question ‘How would you set about finding out whether it really is true that my name is Jared?’

The example is not as straightforward as it may seem, because the level of evidence required depends entirely on the purpose for which we need to know.

If I’ve introduced myself online in a chatroom or in person, and you don’t care about any interaction beyond that room, my assertion that my name is Jared may be sufficient.

If you’re the government of Australia, then your standards are a little higher depending on the service you’re providing to me. Sufficient evidence may include my birth certificate or passport, an assortment of historical information, and other knowledge of the details that the government has stored about ‘Jared’.

In other instances, social proof may be OK. The fact that others refer to me as ‘Jared’ may be sufficient evidence, and we can build up a body of evidence based on a history of social interactions.

Or perhaps you’re a hitman paranoid about bumping off the wrong person, so you brute-force the solution. You stalk me for a month and rummage through my mailbox, finally whacking me on the head and rifling through the contents of my house and wallet.

Each one of these approaches may be required for sufficiency, and each is appropriate for different contexts and purposes. And we might take similar approaches when we’re testing software. We might simply talk to people who can provide evidence of a claim being met. We might perform a simple confirmatory test for a non-critical function. We might spend a longer time collecting a large body of evidence to support a claim being met. It’s important that we think about the importance of the claim and ensure that our approaches to gathering information can be defended under scrutiny.

The real-world analogy to the second point about claims testing (‘expect the spec and the product to be brought into alignment’) is perhaps more commonly found in legal circles, and occasionally in the forum posts or blog rants of less careful men who feel their masculinity has been threatened. It tends to involve situations like this. And as in a situation detailed here that more closely parallels software testing, remember that refuting a claim can sometimes have much bigger consequences.

Requirements and specifications: What's the difference and what's it to you?

I have recently followed a number of threads in a few different forums where people discussed requirements, what it means for requirements to be ‘good’, and what it might mean for requirements to be unambiguous. What usually follows is a long-winded back and forth, with no resolution.

At the heart of this deja vu is the fact that one person’s requirements are not necessarily another’s. I resolve this by drawing a clear distinction between specifications and requirements. The distinction may be obvious to some, but in practice it seems to be something we struggle with.

In “Software for dependable systems: Sufficient evidence?”, Daniel Jackson and his co-authors take care to highlight this issue:

“Software systems that are developed specially for a particular client are typically built to meet preagreed requirements, but these requirements are often a long and undifferentiated list of detailed functions.”

“The requirements of a software system should describe the intended effect of the system on its environment and not, more narrowly, the behavior of the system at its interface with the environment, which is the subject of the specification.”

They also take care to point out that the environment of the system includes the software product, plus the humans who use it, and other environmental factors external to the software.

The specifications are not about things that anybody *needs*. Specifications represent the end result of negotiations, conversations, politics and expediency that some group of people thinks represents an understanding that is good enough for now. The specification is, at best, our best guess at what’s going to make the world a better place for the numerous people who have a stake in the thing we’re building. Specifications are a waypoint on the path to something else.

Specifications are never equivalent to requirements in the case of things that will be used by humans.

Specifications apply to the pointy end of a screwdriver that needs to fit into the indented part of some screw.

As testers, testing to specifications is something we do because finding out about the requirements is too hard. We do it because that’s what the testers before us did. We do it because the process might be built around specification documents, and that’s what managers track against. We might reasonably test to spec if our job is just to test a software component that’s on its way to being integrated with something else. However, testing to spec can’t tell us whether the system is going to yield the desired benefits.

In the case of the screwdriver, requirements apply to the handle – how it fits your hand, and the hands of others. They apply to whether it gives you enough grip and whether it has a switch to make it only screw or unscrew. They apply to whether it fits enough hands in the world, and whether it can be built to a price that someone is willing to pay. Requirements apply to whether the pointy end of the screwdriver will make do for unscrewing (or screwing in) a screw that is not of the size covered by the specification for the pointy end of our screwdriver.

Requirements are about utility and real human problems, and are fuzzy, and messy, and never fully understood. When our stakeholders ask us questions about testing, though they often don’t phrase it this way, they are usually interested in information with respect to requirements, both explicit and implicit.

Test to see how the product measures up to requirements, and to learn about what the requirements are. That’s the value that you can bring to the project.

Answering a question…

A few weeks ago, Designer commented on Software testing, art and productivity. The question got lost amongst the comment spam, so I thought I’d give my answer a bit more prominence than usual. The question was:

…Many people who want to get a web-developed project don’t even understand the details of work. They just want to have a result and not to make a lot of efforts. How do you think – is there any solution? I think it is wide-spread problem.

If you’re talking about smaller web-based projects, I think you’re right. It can be really difficult to engage clients in the necessary up-front work to help reduce project uncertainty to a reasonable level. It’s a problem given that those with less money to spend have much more business risk in any project they undertake.

I think the second aspect of this is that customers (both internal and external) often come to us with a solution, not a problem. As we build the solution they asked for, the problem becomes clearer and dissatisfaction starts to creep in.

The focus of my work is increasingly on trying to help people build the right thing in the first place. I’m lucky: in my current role, my employer has the conviction that it’s important to make sure the project is heading in the right direction. This means making sure the project team has a shared understanding of the product vision, the stakeholders, those stakeholders’ goals and priorities, and (in the case of a consumer product) the market and opportunities. They also think it’s OK to bring development to a pause while we get our project bearings.

If you can’t choose the projects you undertake, then I don’t see any easy answers to these problems. And if you can’t convince your customer to be involved appropriately, and to place *some* value on just talking about the problem, it’s a hard road ahead for everyone. I guess the desire to do things ‘right’ is what drives many of us to start our own companies and projects. Without this option though, we can still focus on providing service – helping our customers better understand their problems and pointing out the benefits, costs and risks present in their solution(s). But choose your moments well, and don’t stop dreaming that things can be better than they are.

INVESTing in User Stories, revisited

Mike Cohn’s “User Stories Applied” discusses using the INVEST mnemonic as a guide to writing better user stories. I was recently asked to dig up a reference for it, and found this presentation here, with the section on the mnemonic on pages 47 and 48.

As I read it, I noticed that there’s been a change to one of the letters. Whereas the book uses ‘S’ to denote ‘Small’, now it’s become ‘Sized appropriately’.

I think this is a change for the better. I noticed that every time I talked someone through the acronym, I would end up in a long-winded conversation qualifying ‘Small’ as ‘just small enough but no smaller’. This came about as I tried to explain the tradeoffs between a story being small enough to estimate with some reasonable certainty and small enough to fit within an iteration, while still ensuring that stories are ‘V - Valuable to the customer’, that is, continuing to clearly express a problem or need of a person.

Overly small stories push us further from the original context of the problem, and thus force us to compensate with an increasing hierarchy of ‘super-stories’ to help us focus on the bigger picture. These become more noticeable when working in iterations shorter than might be common on a Scrum project, so three cheers for Mike for spending the effort to come up with ‘Sized appropriately’.

Making user stories work (by writing use cases instead)

I’ve had a few common rants on most of the agile projects I have worked on: developers bogged down in the detail of stories while the critical goals of the system wound up ignored, or the team realising at the last minute that all of the stories built would do nothing useful.

The ideas I came to as a result of the problems I observed were -

- Compensating by starting with extremely high-level stories that defined the critical user and system goals, then progressively breaking these down into tasks.
- Ensuring that the intent and goal are clear from the story title.
- Compensating by coaching the testers to ask the critical questions “Why is this feature being added? What problem does it solve and what value does it add?”. The testers would try to ensure that the context was made clear in the story. Links to the higher-level stories in Jira (the tool I most commonly encounter) also helped to provide context.
- Writing high-level acceptance criteria at the top level of stories to help define alternate paths for each goal. These often provided clear boundaries with which stories could be broken down into sub-tasks or sub-stories.
- Evaluating frequently against the high-level goals.

I’ve only had a few chances to really try this out, but from a recent project experience with a little more involvement at the early stages of the project, it seemed I was on the right track.
At some point, being aware of Alistair Cockburn’s work and being a regular reader of his blog, I realised Alistair had probably written about most of this, so I picked up one of his books on use cases. I expected that he’d figured most of this out already, and more.

Then before I got around to reading the book, it was confirmed for me with this blog post, which I highly recommend taking a look at. Even if you’re not going to apply use cases, it’s worth asking his questions of your project and seeing what you can learn from them.

If you are using use cases, are you avoiding the problems described there?

What are your users doing (or interview techniques for project analysis)

For the first time, I’m helping run planning sessions for an agile project. Planning has been a bit of a bugbear for me on many of my recent projects, so I’m excited to have a chance to try some things out. So far, it seems to be going well. It’s a short project, so I shouldn’t have to wait too long for feedback on how things went, which is a bonus.

One of the things I noticed is that when in planning or requirements sessions, there’s a tendency to fall into an interview format. That is, we (the development team) ask questions, write things down, ask clarifying questions, and repeat until we think we have enough information to move forward. I think that can be fine sometimes, but it feels like one of the simplest things (but not necessarily the quickest) we can do to help our understanding is to ask our customers to show us how they work. That is, the interview style becomes ‘fly on the wall’ rather than ‘question time’.

As a tester in more traditional projects, I’ve also had occasions where despite the documentation around the project, deep understanding of the domain was difficult to come by, and generally absent from the test team.

On a recent project I had some success combating this by just having the gumption to ask a passing front-line operator “Can I sit with you and watch you work, or do you have time to walk me through the systems you use?”

“Sure! Your desk or mine?” was the answer.

It didn’t hurt me to ask (as I could fit the time in around my other responsibilities), and he was happy to know that some of his concerns might be represented in future. But I know that I don’t always think to do this, or feel that my request will be considered. I learned a lesson though.

So on my current project, it’s time for us to ask the same question of our client. We’re talking through the problem, but the moment the development team is on its own, we keep hitting decisions that can’t be made because our thinking is functional thinking. That is, the client is suggesting features, but we’re not digging deeper to understand the precise intent of those feature requests, or the problem that those features will solve. This understanding is critical not just for our testing, but for the whole team to be successful.

Now, we don’t have to work this way. We could just start building the product and let unsatisfying features fall out of the process with our on-site customer, but I believe the amount of up-front thinking and analysis we do is an uncertainty reduction activity. We almost guarantee ourselves a slightly longer project, but I think we reduce the likelihood of having nothing useful to show to the customer at the end of our budgeted time. Critically, I think we have to inform our customers of the tradeoffs we are making around up-front analysis, and how that will impact them.

Excuse me now, I have to go ask someone a question!

Every tester needs a healthy dose of paranoia

I wonder if Google testers think like this?

http://www.radaronline.com/from-the-magazine/2007/09/google_fiction_evil_dangerous_surveillance_control_1.php


About me

I'm Jared Quinert, a testing consultant located in Melbourne, Australia. With over fifteen years of experience, I specialise in agile testing, context-driven testing and intelligent toolsmithing with a focus on business outcomes over process. As one of the most experienced agile testers in Australia, I've been diving in hands-on since 2003 to discover how to build successful whole-team approaches to software development.
