Reproducing Bugs Found by Automation

Things That Go “Bonk!” in the Night

It can be incredibly annoying to run a test, have it fail, and not know how to reproduce the problem.

It can be even more aggravating for people who didn’t write the test and have only a stack trace to work with.

Been there, done that.

One of the things that’s crucial for test automation is the ability to communicate what the test was doing when things failed, so that people know how to reproduce the problem themselves.

Below are some strategies that you can try.

If you’re starting from scratch with automation, consider using Behavior-Driven Development (BDD) tools, like Cucumber for Ruby or Java, Behave for Python, or NSpec for C#.

Out of the box, you get documentation of what the test was doing up until the failure. When you write your feature like this:

Given I have navigated to "www.gmail.com"
When I enter a username of "joeschmoe@gmail.com"
And I enter a password of "password12345"
Then I should be logged into the account

Then if the test fails, the steps will get printed out for you automatically.
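
With Behave, for example, each of those steps maps onto a plain Python function. Here’s a minimal sketch of what the step definitions might look like (the element IDs and the “logged in” check are made up for illustration):

# features/steps/login_steps.py -- a minimal Behave sketch.
# Element IDs and the final check are hypothetical.
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

@given('I have navigated to "{url}"')
def step_navigate(context, url):
    context.browser = webdriver.Chrome()
    context.browser.get("https://" + url)

@when('I enter a username of "{username}"')
def step_username(context, username):
    context.browser.find_element(By.ID, "username").send_keys(username)

# "And" steps inherit the keyword of the step above them, so this is a @when.
@when('I enter a password of "{password}"')
def step_password(context, password):
    context.browser.find_element(By.ID, "password").send_keys(password)

@then('I should be logged into the account')
def step_logged_in(context):
    assert "Inbox" in context.browser.title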

BDD tools will work for UI or webservice testing. It shouldn’t matter what you’re trying to do: if it can be done programmatically, this is a tool type that will work for you.

Use a log file to publish what the test was doing while you weren’t looking. Put as much information in there as can be read cleanly. If the test fails, include the log with the bug report.
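
Here’s a sketch of that idea with Python’s standard logging module (the file name and messages are made up):

import logging

# Log to a file so the run leaves a paper trail, pass or fail.
logging.basicConfig(
    filename="test_run.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("login_test")

log.info('Navigating to "www.gmail.com"')
log.info('Entering username "joeschmoe@gmail.com"')
log.info("Submitting login form")
# If an assertion fails later, attach test_run.log to the bug report.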

Take screenshots when problems happen. For example, if a UI test expects to be on a particular webpage after performing a set of operations, and it isn’t, why did it fail?

Maybe the server was down, or maybe a new required field was added, or maybe there’s some extra validation happening that wasn’t there when the test was first written.

You could write a lot of code to figure this out. Or you could just take a screenshot.
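
If you’re driving the browser with something like Selenium WebDriver, one way to sketch this is a little wrapper that grabs a screenshot whenever a step blows up (the helper and file names are hypothetical):

import datetime
from selenium import webdriver

# Hypothetical helper: run one step, and save a screenshot if it fails.
def run_step(browser, description, action):
    try:
        action()
    except Exception:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        browser.save_screenshot(f"failure-{stamp}.png")
        print(f"FAILED: {description} (screenshot saved)")
        raise

browser = webdriver.Chrome()
run_step(browser, "navigate to login page",
         lambda: browser.get("https://www.example.com/login"))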

Dump payloads for webservice tests that fail. This is like screenshots but for backend tests.

If your test was expecting a payload to contain a particular field/value pair and it didn’t, your test would likely yell and say so. But that doesn’t really help anyone find out why it failed.

Maybe some other field was set strangely in a way that would help explain what happened. Nobody will know if the whole payload isn’t printed out.
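
Here’s a sketch using Python’s requests library; the endpoint and the expected field are made up:

import requests

resp = requests.get("https://api.example.com/accounts/42")
payload = resp.json()

if payload.get("status") != "active":
    # Print the WHOLE payload, not just the one bad field, so whoever
    # reads the report can see the surrounding context.
    print(f"FAIL: GET {resp.url} returned {resp.status_code}")
    print(resp.text)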

Dump payloads for webservice tests that succeed too. What the heck! It’s helpful to print out what the payloads and endpoints were for each step in the test, so that later on, if someone needs to know what led up to a failure (or even how to run a test manually), the info’s available.
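
One way to sketch that is a single choke point that every webservice call goes through, so the endpoint, status, and payload are recorded whether the call passes or fails (the function name is made up):

import requests

# Hypothetical helper: route every call through here so the whole
# run can be reconstructed later, step by step.
def call_and_log(method, url, **kwargs):
    resp = requests.request(method, url, **kwargs)
    print(f"{method} {url} -> {resp.status_code}")
    print(resp.text)
    return resp

resp = call_and_log("GET", "https://api.example.com/accounts/42")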

Print steps as they happen if you’re working with legacy automation and rewriting using a BDD tool isn’t feasible. Each method that’s called can print out what it’s doing so that you can follow along. 
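
In Python, a sketch of that might be a decorator that announces each step method as it runs (the names are hypothetical):

import functools

# Wrap a step function so it prints itself before running.
def announce(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"STEP: {func.__name__} {args} {kwargs}")
        return func(*args, **kwargs)
    return wrapper

@announce
def enter_username(username):
    pass  # the existing legacy code stays here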

If you’re using a scripting language like Ruby, then you can clean up that kind of code by using metaprogramming to generate the boilerplate for you.
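
As a rough Python analogue of that Ruby trick, a metaclass can apply the announcement wrapper to every step method automatically, so nobody writes it by hand (class and method names are made up):

import functools

def _announce(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"STEP: {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

# Metaclass that wraps every public method so each step prints itself.
class Announced(type):
    def __new__(mcls, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and not attr.startswith("_"):
                namespace[attr] = _announce(value)
        return super().__new__(mcls, name, bases, namespace)

class LoginSteps(metaclass=Announced):
    def enter_username(self, username):
        pass  # legacy step code goes here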

Conclusion

Any of these tips will help you understand why a test failed and how to communicate that to the members of your team. Do you have any other tips to share?

Happy hunting!

Fritz


Howdy! My name is Michael Fritzius, and I’m a tester-who-codes-but-isn’t-a-developer based in St. Louis, MO. I started a consultancy about two months ago and have been having a blast helping a client get to the next level with test automation.

The approach I’m taking is to build a framework simple enough that it doesn’t take much Brain Juice to understand and use, and that cuts way back on how much time it takes to maintain. And then I’m helping manual testers pick it up and begin writing automation.

I’ve believed for a long time that manual testing is key to software quality. No automation can replace the creativity that a human mind possesses, yet many people think automation is meant to replace manual testing. Not true. Automation is meant to assist manual testing, to help free people up to use that creativity instead of getting tunnel vision from repeating the same tests over and over and over and over and over and over and over.

The results have been astounding. The other day, a manual tester on my team was able to automate 5 test cases, from scratch, in an hour, without writing a single line of code. No foolin’.

Would you like to know more? [ ] yes [ ] no. Head on over to archdevops.com. Maybe we’ll have something to talk about!
