Hot off the Press: How to Get Started with the New Applitools / Robot Framework Library!

Getting Started — Published November 23, 2021

I was really excited to hear that there was a new Applitools EyesLibrary. As a Robot Framework coach and mentor, I was eager to see this new library come out. I have been working with several people on older versions of EyesLibrary for Robot Framework, but with the recent updates to Robot Framework and Applitools, those versions were in need of repair.

Here I am going to take you through my first exploration of the new EyesLibrary and provide a short tutorial you can follow along with, giving you a good introduction to the library.

Robot Framework and its Libraries

For those not familiar with Robot Framework, it is a natural-language, keyword-based testing framework. That means that instead of reading like a syntactic programming language, a test reads like a testing story. My test might read like “Navigate To The Home Page”, “Edit The User’s Preferences”, “Add Applitools To My Skill Set”, “Verify Applitools Is In My User Profile”, “Perform Visual Check Of User Profile Ignore Newly Added Skills”, and so on. Being a framework means it can be applied to many different areas, like visual testing. This is done through libraries, which provide sets of task-specific keywords.
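To make that concrete, here is a rough sketch of how such a story could be laid out as a Robot Framework test case. The keywords are the illustrative ones from the sentence above; in a real project each would need to be implemented as a user keyword in a resource file, so treat this purely as a taste of the syntax.

*** Test Cases ***
Add Applitools To The User Profile
    Navigate To The Home Page
    Edit The User's Preferences
    Add Applitools To My Skill Set
    Verify Applitools Is In My User Profile
    Perform Visual Check Of User Profile Ignore Newly Added Skills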

Here I am going to talk about Applitools EyesLibrary, which is a library for performing visual testing using Applitools with Robot Framework.

Dive into the Applitools Library for Robot Framework

Coming from the Robot Framework side, the first thing I wanted to do was look at the keyword documentation for the library. I was curious to see what keywords were there and what I could do with them. The first thing I noticed was the large number of keywords, far more than I expected. Luckily I could use tag filtering to narrow the keywords down by category. From the categories I could start to see there were keywords for visual checks, for targeting parts of the screen, some configuration keywords, and something called the Ultrafast Grid (which I won’t cover in this article).

You can get started with a free Applitools account today and follow along with this tutorial.


Documentation Is Key to Understanding

Taking a step back, I decided to review the Applitools documentation. There is an “Overview of visual UI testing” which outlines what visual testing is and what the steps in the process are. I felt I had those concepts pretty well understood. To answer the question of how I use the EyesLibrary to do visual testing, I found the Robot Eyes Library SDK/API documentation key.

The Robot Eyes Library documentation under the SDK section really outlines the “how” whereas the keyword documentation gives us the “what” and Overview/The Core Concepts gives us the “why”.

The “how”, what we need in our robot scripts, really just breaks down to this: one must open their eyes, perform a visual check on a region (perhaps with some special configuration), and then close one’s eyes.
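In skeleton form it looks like this (keyword names as I read them from the EyesLibrary documentation; in particular, I am assuming the closing keyword is Eyes Close Async):

*** Test Cases ***
Visual Check Skeleton
    Eyes Open
    Eyes Check Window    # or another Eyes Check variant, with optional check-settings keywords
    Eyes Close Async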

Setting up my Environment

Let’s start by setting up our environment. You will need Python (version 3.6 or greater; I recommend Python 3.8) installed on your system. The setup involves installing and initializing the library and then setting your API key. To simplify setup, I’ve created a script, setup_eyes.bat for Windows or setup_eyes.sh for Linux. Run the corresponding batch file or shell script at the command line or terminal prompt. It will ask you for your Applitools API key, so have that handy.

My First Test

For my first script, I wanted to do something simple: perform a visual check on the demo page without changing anything. Essentially I wanted a very simple passing test to validate that everything was set up properly and to start exploring. Here it is:
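(The listing below is my sketch of firstsight.robot, pieced together from the keywords discussed in this article. The library import arguments, the demo page URL, and where the step name goes are assumptions on my part, so verify them against the EyesLibrary keyword documentation for your installed version.)

*** Settings ***
Library    SeleniumLibrary
Library    EyesLibrary    runner=web    config=applitools.yaml    # import options assumed

*** Test Cases ***
First Sight Of The Demo Page
    # Assumed demo page; any stable page you control will also work
    Open Browser    https://applitools.com/helloworld    chrome
    Eyes Open
    # "First Sight" names the step; Fully extends the check to the whole page
    Eyes Check Window    First Sight
    ...    Fully
    Eyes Close Async
    [Teardown]    Close All Browsers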

Go ahead and run this script by typing robot firstsight.robot at the command line or terminal prompt. You should see a verbose result summary showing the matches. You should also see the batch appear in your Applitools Dashboard, where you can set the baseline image. Rerun the script and watch it pass each time.

Now execute the firsterror.robot script (robot firsterror.robot), which contains one additional keyword line, Click Link ?diff, just before the visual check (sketched below). This changes the demo page, causing several visual differences. You should notice an error in the robot output after execution, noting the difference and referring you to the Applitools Dashboard URL.
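(Again a sketch, with the same Settings section and assumptions as before; the only functional change is the Click Link line just before the check.)

*** Test Cases ***
First Error On The Demo Page
    Open Browser    https://applitools.com/helloworld    chrome
    Eyes Open
    Click Link    ?diff    # changes the demo page so the visual check finds differences
    Eyes Check Window    First Error
    ...    Fully
    Eyes Close Async
    [Teardown]    Close All Browsers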

Initial Observations

The first thing I noticed with these two scripts is the output, or result summary, from the visual checks. Passing or failing, we get information about the status of the visual checks. Also, the status of a visual check does not affect the status of the Robot Framework test; that is, the visual check sits outside the context of the robot checks, which has interesting possibilities for “context switching”.

The next observation, which is hidden in the scripts above, is the addition and use of the Fully keyword/setting. We see from the documentation that this sets the visual check to the whole page. Initially I did not use it, and the size of the visual area I was checking would change even though the content did not, so my tests were failing. Think about that: the content, what I was seeing, was the same, but the area I was checking was not, thus a “difference”. This led me to a deeper understanding of what factors into a match against the baseline.
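As a minimal contrast, sketched out of context:

    # Without Fully the captured area depends on the current viewport,
    # so the checked region can change even when the content does not
    Eyes Check Window    Viewport Only

    # With Fully the whole page is stitched and checked
    Eyes Check Window    Whole Page
    ...    Fully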

What Factors into a Match Against the Baseline

The first factor is viewport size, which is what I saw when I did or did not have Fully in my test. There are also the host environment and version information, which relate to what I call the environment under which I execute. Finally there are the test suite/test case related factors, testName and appName, which identify the test and label the application under test. We can set appName in a few spots.

Although these factors seem straightforward, I feel it is important to mention them so that one can see what factors into matching and how each is set, whether in the robot script or in the environment the script is run on.

Targeted Window, Region, Frame versus Generic Target

Looking beyond the basic Eyes Check on a specific window, region, or frame, we see that one could instead do the generic Eyes Check and then target an area. Taking a small step forward, let’s run both the Eyes Check Window keyword and the equivalent generic Eyes Check using Target Window, targeting the full window and giving the check a name.
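Sketched roughly, windowtarget.robot looks like this. The ${useCheckWindow} variable is simply one way to switch between the two forms from the command line, and the exact placement of the step name next to Target Window is an assumption worth checking against the keyword documentation.

*** Settings ***
Library    SeleniumLibrary
Library    EyesLibrary    runner=web    config=applitools.yaml

*** Variables ***
${useCheckWindow}    True

*** Test Cases ***
Window Versus Target
    Open Browser    https://applitools.com/helloworld    chrome
    Eyes Open
    Run Keyword If    ${useCheckWindow}
    ...    Eyes Check Window    Whole Window    Fully
    ...    ELSE
    ...    Eyes Check    Whole Window    Target Window    Fully
    Eyes Close Async
    [Teardown]    Close All Browsers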

To run with the specific Eyes Check Window keyword, type robot windowtarget.robot at the command line or prompt. Then re-run, this time typing robot -v useCheckWindow:False windowtarget.robot, to execute the generic Eyes Check instead.

More Observations

Up to now I haven’t discussed how a Robot Framework test suite and test case relate to Applitools objects via EyesLibrary. As you have seen in the Applitools Dashboard, each time we execute we get a new batch. And from the examples above, each batch has a test. As we have had only one robot test case per file (test suite), we see only one Applitools test per batch. These tests have a visual representation for each visual check in the test case; these are steps in Applitools vernacular. The name which we have used in keywords labels those steps. Within EyesLibrary (as well as elsewhere) there is also a reference to a tag, which appears to be the same as name; the two are used interchangeably.

While manipulating my robot test cases, one observation was that the visual order of the steps matches the order in which the visual checks take place within a test case. This raised a question in my mind: how do I see the history of a visual check? It appears one can see history by grouping results within the Applitools Dashboard. There are also branches, which allow you to version your baseline visual checks.

If these batches, tests, steps, branches, tags, and names are confusing to you, don’t worry; I experienced the same when first looking at the Applitools Dashboard. With some experimentation I was able to start mapping the relationship between robot test suites and test cases and Applitools.

The design of this EyesLibrary is slightly different than that of other libraries. Here keywords, especially the check-settings and target keywords, act like what would be arguments in other libraries. But given the large amount of configuration, it works. One should also note that the keywords are case sensitive within the library. For example, to check all the contents of the window, if we used FULLY (all caps) we would receive an incorrect keyword argument error. The proper usage, as we have seen, is Fully.

Advanced Use: Configuring Your Visual Check and EyesLibrary Keywords

Let’s explore this “building block” design of keywords with the script regioncheck.robot, sketched below. First, as we have done before, we check a region using a single specific keyword, the new keyword Eyes Check Region By Selector, and name the check. Next we build a visual check that covers the full window and, in addition, ignores a region, that being the random number within the sentence. Finally we start with the full window and then ignore multiple regions. We can run this script by typing robot regioncheck.robot at the command line or terminal prompt.
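Sketched under the same assumptions as the earlier scripts (the CSS selectors are placeholders for elements on the page being checked, and the Ignore Region By Selector keyword name and argument order should be verified against the keyword documentation):

*** Settings ***
Library    SeleniumLibrary
Library    EyesLibrary    runner=web    config=applitools.yaml

*** Test Cases ***
Region Checks On The Demo Page
    Open Browser    https://applitools.com/helloworld    chrome
    Eyes Open
    # 1. Check a single region with a specific keyword and name the check
    Eyes Check Region By Selector    css:.demo-content    Single Region
    # 2. Check the full window but ignore the region holding the random number
    Eyes Check Window    Ignore The Random Number
    ...    Fully
    ...    Ignore Region By Selector    css:.random-number
    # 3. Check the full window while ignoring multiple regions
    Eyes Check Window    Ignore Several Regions
    ...    Fully
    ...    Ignore Region By Selector    css:.random-number
    ...    Ignore Region By Selector    css:.page-logo
    Eyes Close Async
    [Teardown]    Close All Browsers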

This example starts to show how we can build complex visual checks using the keywords and the structure of the EyesLibrary.

Additional Features?

Asking what additional features could be added, I could see the EyesLibrary providing a keyword for getting the coordinates of a region that encompasses several different elements. That is, given various selectors or elements as arguments, it would return a coordinate set that encompasses them all.

Maybe there could be a configuration option to fail within the test, stopping the execution of the remaining tests, in addition to the error at the end. There could also be a way of saying “here is a visual check and we expect an error; if no error is found, then fail”, similar to the Robot Framework keywords Run Keyword And Expect Error and Run Keyword And Ignore Error.

Build versus Buy Decision

Before I conclude, I want to address a frequently asked question: “Can’t I just build my own visual checking tool?” The answer is yes, but the real question is at what cost? There are many factors to consider when deciding whether to build or buy a solution. Image processing is not a simple task, and it takes a lot to make it work successfully. One should ask how much effort will go into getting it right and dealing with false positives. Another factor is maintaining the solution: if your developer leaves, will anybody be able to maintain your visual checking? It is always an option to build, but there is also a cost to building it yourself. I encourage every organization to perform an in-depth build-versus-buy cost analysis.

What’s Next

So far I’ve explored web testing using Selenium. There are other areas Applitools works in too: mobile, responsive design, even accessibility. How could Robot Framework and EyesLibrary help in testing those areas?

At the 2015 Selenium Conference in Portland, Denali Lumma gave an excellent talk outlining what testing in the future should look like. Among her points was the idea of easily switching context, such as adding in visual checking, as a goal. I would like to see examples of this vision made a reality using Robot Framework and EyesLibrary.

I encourage you to explore the EyesLibrary even further. I look forward to seeing users combine Robot Framework and Applitools using the EyesLibrary.
