Catching Changes Before They Become Problems

Have you ever had a battery of tests that was stable, that all passed, and gave you a beautiful green run every time?

Do you feel like it’s a pretty good filter for catching almost any bug that could be introduced?

How do you feel when you hear that a bug was found, while your green board smiles down on you? Not so good, maybe?

“How did the tests not catch that?” Hmm…

The Cause

It’s because all of our automation does exactly (and only) what we tell it to. Our tests don’t notice changes in the application unless we tell them to.

I might have mentioned this somewhere else, but: code is an incredibly organic thing. 

Does it seem weird to say that about code? Maybe. But it’s true. Just as we say inanimate objects have “personality”, code takes on the nature and flavor of those who write it.

And if the group of contributors is large enough, you’ll start to see things happen that don’t occur with a smaller team.

Things get… forgotten. Or changes get introduced without anyone realizing that automation hasn’t been wrapped around them yet.

Once your automation is written and is reliably giving you feedback, how do you know when you need to write more?

And before people come running out yelling, “Good agile teams should be communicating so this doesn’t happen”–even when team members are communicating well, there’s still the chance that something could get in without you knowing, and without automation covering it.

It’s like defensive driving. Usually you won’t need it, but it’s a good thing to do in case other drivers don’t adhere to “best practices”. Know what I mean?

As testers and automators, we strive to keep a balance: not testing or automating too much, and not doing too little of either.

When we think we’ve got a pretty good set of tests, we can also easily think that we don’t need to add anything. It’s pretty complete; maybe we’re not even looking for 100% coverage–we’re looking for 70% or 80%, and we have that.

And even so, sometimes things squeak past our defenses.

It’s rare (or it should be, anyway) that this happens, but if the wrong kind of bug got through, it could be a problem for a company.

Trying to solve for “how do we test for what was added?” is a pretty tough nut to crack. But a very solvable problem is, “how do we detect when something was added?”

Detecting New Functionality

Anytime something gets added, there’s going to be some kind of indication of that happening. If there’s a functional change, some code somewhere else will have to be changed too, to handle it.

In poker, this is called a “tell”–it’s a change in behavior based on some new condition. You can tell whether a player has a good or bad hand, based on certain things, like facial expressions or nervous tics, and adjust your plan accordingly.

In testing, you can do something similar. Here’s a story.

Let’s say you have a product with a nice UI. The UI has a navigation menu so that each page isn’t clogged with too much information for the user.

If a developer adds new functionality–say, a new page–where do you think the tell would be?

If you said “the menu”, you’re absolutely right. The menu would likely change to include something indicating the new page.

So if we set up a test to simply confirm the contents of the menu are the same as last time, then when that test fails, we’ll see that it’s because there’s a whole new page in the menu that wasn’t there before.

This is our tell, which lets us know that we may need more automation for that new page. After that, we update the tell test with the new menu contents.
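
To make that concrete, here’s a minimal sketch of what a menu tell might look like as a Python test, using Selenium and pytest. The URL, the “main-menu” locator, and the snapshot file are hypothetical placeholders for whatever your application and suite actually use:

```python
# A minimal sketch of a menu "tell" test, using Selenium and pytest.
# The URL, "main-menu" ID, and snapshot file are hypothetical stand-ins.
import json
from pathlib import Path

from selenium import webdriver
from selenium.webdriver.common.by import By

SNAPSHOT = Path("snapshots/menu_items.json")  # last known menu contents

def test_menu_has_not_changed():
    driver = webdriver.Chrome()
    try:
        driver.get("https://your-app.example.com")  # hypothetical URL
        items = [
            link.text
            for link in driver.find_elements(By.CSS_SELECTOR, "#main-menu a")
        ]
    finally:
        driver.quit()

    expected = json.loads(SNAPSHOT.read_text())
    # A failure here is the tell: something new (or missing) in the menu.
    assert items == expected, f"Menu changed: expected {expected}, got {items}"
```

When the test fails because a page was legitimately added, you write automation for the new page, then rewrite the snapshot file so the tell is armed for the next change.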

Here are some other types of tells we can set up, based on what we’re testing:


For Web UI:

  • Count how many user inputs are on a page. New places where a user can shove new data into the system indicate new functionality somewhere else (see the sketch after this list).
    • You might want to track just the number of text boxes, radio buttons, checkboxes, or…
    • …if you’re handy with JavaScript, you can put an outline around any elements you don’t have locators for yet, and take a screenshot.
  • Check the contents of menus.
    • Menus help the user navigate around your system.
    • If there’s additional functionality, it’s likely there will be a change in the menu.
    • If order’s not important, compare the items as a sorted list or a set, so a simple reordering doesn’t make your test fail.
  • Confirm that any elements that take you to another page have the expected “href” attribute.
    • A change here could take you to a new place, and a lot of other tests could fail as a result.
    • Knowing that failures are happening because you landed somewhere new will save time troubleshooting later.
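
Here’s a rough sketch of the input-count and href checks, again in Python with Selenium. The expected counts, link texts, and hrefs are hypothetical last-known values you’d record from a previous run:

```python
# Sketch: count user inputs and verify link targets on a page.
# Selectors, URLs, and expected values are hypothetical last-known data.
from selenium import webdriver
from selenium.webdriver.common.by import By

EXPECTED_INPUT_COUNTS = {"text": 4, "radio": 2, "checkbox": 3}
EXPECTED_HREFS = {"Reports": "/reports", "Settings": "/settings"}

def test_input_counts_and_hrefs():
    driver = webdriver.Chrome()
    try:
        driver.get("https://your-app.example.com/some-page")
        for input_type, expected in EXPECTED_INPUT_COUNTS.items():
            found = len(driver.find_elements(
                By.CSS_SELECTOR, f"input[type='{input_type}']"))
            # More inputs than last time is a tell: new data can get in.
            assert found == expected, f"{input_type}: {found} vs {expected}"

        for link_text, expected_href in EXPECTED_HREFS.items():
            href = driver.find_element(
                By.LINK_TEXT, link_text).get_attribute("href")
            # A changed href means other tests may now land somewhere new.
            assert href.endswith(expected_href), f"{link_text} -> {href}"
    finally:
        driver.quit()
```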

For Databases:

  • Get a list of tables and stored procedures (see the sketch after this list).
    • New ones of each of these can cause unknown changes.
    • Check to see that the columns for each table are still as expected.
  • Make sure that the amount and type of data each column holds haven’t changed.
    • Sometimes changing the data size can lead to data being “clipped” later, such as when a text field max length is 30 characters, but a column is now set to 25 because 30 seemed unrealistic.
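
Here’s a sketch of a schema tell, assuming a Postgres database queried through psycopg2; the connection string and snapshot file are placeholders. The information_schema views work similarly on most databases, so adapt as needed:

```python
# Sketch: compare the database schema to a saved snapshot.
# Assumes Postgres + psycopg2; connection string and snapshot path
# are hypothetical.
import json
from pathlib import Path

import psycopg2

SNAPSHOT = Path("snapshots/schema.json")  # last known schema

def current_schema(conn):
    with conn.cursor() as cur:
        # Tables, columns, types, and max lengths in one pass.
        cur.execute("""
            SELECT table_name, column_name, data_type, character_maximum_length
            FROM information_schema.columns
            WHERE table_schema = 'public'
            ORDER BY table_name, column_name
        """)
        columns = [list(row) for row in cur.fetchall()]
        # Stored procedures and functions.
        cur.execute("""
            SELECT routine_name FROM information_schema.routines
            WHERE routine_schema = 'public'
            ORDER BY routine_name
        """)
        routines = [row[0] for row in cur.fetchall()]
    return {"columns": columns, "routines": routines}

def test_schema_has_not_changed():
    conn = psycopg2.connect("dbname=app user=tester")  # hypothetical
    try:
        schema = current_schema(conn)
    finally:
        conn.close()
    expected = json.loads(SNAPSHOT.read_text())
    # New tables, columns, routines, or shrunken column sizes all show here.
    assert schema == expected
```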

For Webservices:

  • If your webservices are equipped with a listing of what GET/POST/PUT/DELETE/etc. endpoints they have, compare that list with the last known set (see the sketch after this list).
  • Parse payloads and confirm the fields in the response are expected.
    • Did you get more or fewer fields back?
    • Have some fields changed names?
    • Have some fields changed types? e.g., a field used to return a hash, but now returns an array?
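
Here’s one way this might look, assuming your service publishes an OpenAPI document (many frameworks generate one). The base URL, endpoints, and field names are hypothetical:

```python
# Sketch: compare advertised endpoints and a response's fields against
# last known values. Base URL, endpoints, and fields are hypothetical.
import requests

BASE = "https://api.example.com"
HTTP_METHODS = {"get", "post", "put", "patch", "delete"}
EXPECTED_ENDPOINTS = {("get", "/users"), ("post", "/users"),
                      ("get", "/users/{id}")}
EXPECTED_FIELDS = {"id", "name", "email"}

def test_endpoint_list_has_not_changed():
    spec = requests.get(f"{BASE}/openapi.json", timeout=10).json()
    endpoints = {
        (method, path)
        for path, ops in spec["paths"].items()
        for method in ops
        if method in HTTP_METHODS
    }
    # Anything new here is a tell: functionality that may need coverage.
    assert endpoints == EXPECTED_ENDPOINTS

def test_response_fields_are_expected():
    payload = requests.get(f"{BASE}/users/1", timeout=10).json()
    # Extra, missing, or renamed fields all surface as set differences.
    assert set(payload) == EXPECTED_FIELDS
```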

Remember: when a test like this fails, it does not always mean there’s a bug. It’s just a difference. These tests also make good indicators of why other downstream tests may be failing.

Where This Testing Fits In

This is a good layer to put between smoke tests and the rest of the tests. Your smoke tests may still pass, but not test for any changes that were introduced.

Other Benefits

If it’s difficult to extend your tests to handle changes, this will show you where you need to refactor.

If you have to make almost the same change in multiple places, it’s time to refactor and make those changes reside in a common place. The less time it takes you to maintain your test suite, the more effective it becomes.

What Other Tells Can You Think Of?

Do you have problems with tests that fail due to known changes elsewhere? Does it slow you down? What kind of changes cause that problem for you?
