How often do you hear about stable, successful UI automation projects?

Not very often?

Me neither.

Many of the UI automation projects I hear about have more bugs and gremlins than the software they are attempting to test!

Tests fail because they attempt to click a button or type into a text field that isn’t fully rendered. Or they might fail because of a graphic design change that made it into the test system overnight. Each of these problems requires time and code to fix. If you have enough time, and layer on enough code, you might get a stable test suite. At that point you have a new problem on your hands: the automation project can’t keep up with the speed of feature development.

Let’s take a look at how most people get stable UI automation, and another option that might just save you some time and money.

 

The Normal Stability Path

I have spent what feels like a lifetime refactoring code to get a stable UI test automation suite. Every day is a new game of whack-a-mole that ends with a little more code. I initially wrote a general solution: a utility that polled the web browser, waiting for AJAX calls to complete. This utility worked, but it was inconsistent. Sometimes we would get lucky and the page would be ready by the time the polling was complete. Other times, the AJAX calls would complete, but the page wouldn’t be completely rendered. When that happened, tests failed because they were trying to use a part of the page that wasn’t visible, or just didn’t exist yet.
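That polling utility looked roughly like this (a minimal sketch; the `wait_for_ajax` name is mine, and the readiness check assumes the page uses jQuery, which tracks in-flight requests in `jQuery.active`):

```python
import time

class AjaxTimeout(Exception):
    pass

def wait_for_ajax(driver, timeout=10.0, poll=0.5):
    """Poll the browser until jQuery reports no in-flight AJAX requests.

    `driver` is any object exposing a Selenium-style execute_script().
    Note the gap this leaves: AJAX finishing says nothing about whether
    the page has finished rendering the results.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Ask the page how many AJAX calls are still pending.
        if driver.execute_script("return jQuery.active == 0;"):
            return True
        time.sleep(poll)
    raise AjaxTimeout("AJAX requests still pending after %.1fs" % timeout)
```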

My next strategy was building stacked explicit waits. I’d pick a few elements on the page and write code that checked they existed and were visible before trying to click any buttons or enter any text. For example, I might check for a button, a text field, and a menu link that the script will use before proceeding. This strategy works much better, but it comes at the cost of more code.

The Document Object Model, or DOM, is fickle. Even with that arsenal of techniques trying to make sure the page is ready, tests will still occasionally fail because of timing. Just because one or two elements on a page exist doesn’t mean the entire page is ready to use.

 

Stability Without Code

Visual testing tools can get you better results with less code.

Rather than writing explicit waits to poll for buttons and text fields, you could perform image comparisons.

Let’s look at amazon.com as an example. I want to write a test that searches for Selenium books, but the front page has a lot of components and dynamic data. Using a visual testing tool, I could perform a comparison on every page component (but not their contents), or on just a handful of components. For my search test, I would compare the current versions of the search category selector, the search text field, and the search button against the images captured in the previous build. Over the next couple of seconds, the visual testing tool performs several image captures of the page. When the images match, we know the script is ready to set the search category selector to Books, enter the string ‘Selenium WebDriver’ in the text field, and click the search button. For a more thorough and robust approach, I might capture every page element and compare each one to what was there in the previous build. Doing this reduces the overall amount of assertion code in your tests and extends coverage to the UI elements themselves.
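The core idea can be sketched in a few lines. Real visual testing tools do perceptual, tolerance-aware comparison; the exact byte equality below is a simplified stand-in, and `capture` is a hypothetical function that screenshots one named page component:

```python
import time

class VisualTimeout(Exception):
    pass

def wait_for_visual_match(capture, baselines, timeout=10.0, poll=0.5):
    """Re-capture page components until each matches its baseline image.

    `capture(name)` returns the current screenshot bytes for a component;
    `baselines` maps component names to image bytes saved from the
    previous build. When everything matches, the page is ready to use.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Compare every tracked component against the previous build.
        if all(capture(name) == image for name, image in baselines.items()):
            return True
        time.sleep(poll)
    raise VisualTimeout("components never matched the previous build")
```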

For the most part, this strategy can be built in just a couple of lines of code. Those lines don’t need to be tweaked, refactored, or cajoled over time; the real magic happens under the covers, in the image comparison itself. Best of all, when the graphic design changes again, I get a notice, accept the change, and the tests have a new standard to test against: the newly promoted capture.
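Accepting a change amounts to promoting the new capture to be the baseline. As a sketch (the file layout and `promote_capture` name here are hypothetical, not any particular tool’s API):

```python
import shutil
from pathlib import Path

def promote_capture(candidate: Path, baseline: Path) -> None:
    """Accept a reviewed design change: the newly captured image replaces
    the baseline that future runs compare against."""
    baseline.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(candidate, baseline)
```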

 

Refactor for Simplicity

The WebDriver API is like a Swiss Army knife. You can use several different tools to complete any given task. Some approaches might be better than others, but you have to try them for yourself to find out.

You could commit yourself to ongoing code and refactoring by writing explicit waits, and still have occasional test failures. Or, you could outsource the task of discovering page readiness to a tool that specializes in visual testing.

The best reason I know of to refactor test tooling is to delete lines of code.

 

 

Post written by Justin Rohrman:
Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association for Software Testing Board of Directors as President, helping to facilitate and develop projects like BBST, WHOSE, and the annual conference CAST.