In this blog post, I cover customer insights into the successes they have achieved with Applitools. I share what we learned about how our users speed up application delivery by reducing test code, shortening code rework cycles, and reducing test time.
Customer Insight – Moving To Capture Visual Issues Earlier
We now know that our customers go through a maturity process when using Applitools. The typical progression looks like this:
- End-to-end test validation on one application
- [OPTIONAL] Increasing the end-to-end validation across other applications (where they exist)
- Moving validation to code check-in
- Build validation
- Validating component and component mock development
End-to-End Validation
In automating their application tests, our customers realize that they need a way to validate the layout and rendering of their applications automatically. They have learned what kinds of problems escape when no one performs even a manual check. But, manual testing is both expensive and error-prone.
Every one of our customers has experience with pixel diff for visual validation. They uniformly reject pixel diff for end-to-end testing. In their experience, pixel diff reports too many false positives to be useful in automation.
So, our customers begin by running a number of visual use cases through Applitools to understand its accuracy and consistency. They realize that Applitools will capture visual issues without reporting false positives. And, they begin adding Applitools to other production applications for end-to-end tests.
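For teams at this first stage, the change to an existing end-to-end test is usually small: the navigation and functional steps stay as they are, and a visual checkpoint is added at the pages that matter. Here is a minimal sketch using the Applitools Eyes SDK for Selenium; the app name, test name, and page URL are hypothetical, and the exact API may differ by SDK version:

```ts
// Sketch: adding a visual checkpoint to an existing Selenium end-to-end test.
// Assumes @applitools/eyes-selenium and APPLITOOLS_API_KEY set in the environment.
import { Builder } from 'selenium-webdriver';
import { Eyes, Target } from '@applitools/eyes-selenium';

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const eyes = new Eyes();
  try {
    // Start a visual test; the app and test names are hypothetical.
    await eyes.open(driver, 'Bill Pay', 'Payee list renders correctly');
    await driver.get('https://example.com/billpay'); // hypothetical page

    // ...existing functional steps and assertions remain unchanged...

    // One checkpoint covers the full page, including layout, color, and
    // overlap issues that element-by-element assertions would miss.
    await eyes.check('Payee list', Target.window().fully());

    // close() reports the result and fails the test on unexpected differences.
    await eyes.close();
  } finally {
    await eyes.abortIfNotClosed();
    await driver.quit();
  }
})();
```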
Check-In Validation
As users become comfortable running Applitools in their end-to-end tests, they begin to see inefficiencies in their workflow. End-to-end validation occurs well after developers finish and check in their code. To repair any uncovered errors, developers must switch context from their current task to rework the failing code. Rework hurts developer productivity and slows product release cycles.
Once users uncover this workflow inefficiency, they look to move visual validation to an earlier point in app development. Check-in makes a natural point for visual validation. At check-in, all functional and visual changes can be validated. Any uncovered errors can go immediately back to the appropriate developers for rework.
So, our customers add visual validation to their check-in process. Their developers become more efficient. And, developers become attuned to the interdependencies of their code with shared resources used across the team.
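One way teams wire visual validation into check-in (a sketch, assuming the Applitools Eyes SDK for Selenium and branch/commit environment variables that most CI systems provide under some name) is to group each check-in's checkpoints into a batch named after the change, so a failure points straight back to the developer who made it:

```ts
// Sketch: grouping a check-in's visual tests into one Applitools batch.
// GIT_BRANCH and GIT_COMMIT are assumed to be provided by the CI system.
import { Eyes, BatchInfo } from '@applitools/eyes-selenium';

export function makeEyesForCheckIn(): Eyes {
  const eyes = new Eyes();
  const branch = process.env.GIT_BRANCH ?? 'local';
  const commit = (process.env.GIT_COMMIT ?? 'uncommitted').slice(0, 8);

  // All visual tests for this check-in appear as one batch in the dashboard,
  // so an unexpected change is routed to the developer who introduced it.
  eyes.setBatch(new BatchInfo(`${branch} @ ${commit}`));
  return eyes;
}
```

A failing checkpoint then fails the check-in build, and the batch name makes it clear which change needs rework.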
Regular Build Validation
As our customers come to appreciate the dependability of Applitools Visual AI, they realize that Applitools can be part of their regular build validation process. These customers use Applitools as a “visual unit test” that should run green after every build. When an Applitools test fails, it has uncovered an unexpected change. In this mode, our users generally expect passing tests.
At this level of maturity, end-to-end validation tests provide a sanity check. Our customers who have reached this level tell us that they never discover visual issues late in their product release process anymore.
Component and Mock Validation
Our most mature customers have moved validation into visual component construction and validation.
To ensure visual consistency, many app developers have adopted some kind of visual component library. The library defines a number of visual objects or components. Developers can assign properties to an object, or define a style sheet from which the component inherits properties.
To validate the components and show them in use, developers create mock-ups. These mock-ups let developers manipulate the components independent of a back-end application. Tools like Storybook serve up these mock-ups. Developers can test components and see how they behave through CSS changes.
Applitools’ most mature customers use Visual AI in their mock-up testing to uncover unexpected behavior and isolate unexpected visual dependencies. They find visual issues much earlier in the development process – leading to better app behavior and reduced app maintenance costs.
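As a concrete sketch, a component mock in Storybook is just a story file. The Button component, its props, and the file name below are hypothetical, but the shape is typical:

```tsx
// Button.stories.tsx — a minimal component mock, independent of any back end.
import React from 'react';
import { Button } from './Button'; // hypothetical design-system component

export default { title: 'Design System/Button', component: Button };

// Each named export is one state the component can be rendered and checked in.
export const Primary = () => <Button variant="primary">Pay now</Button>;
export const Disabled = () => <Button variant="primary" disabled>Pay now</Button>;
```

Applitools provides an Eyes SDK for Storybook that renders every story and runs Visual AI on the result, so a style-sheet change that breaks one state of one component is caught before the component ever reaches an application.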
Customer Insight – Common Problems
Our customers fall into three groups, based on the problems they face:
- Behavior Signals Trustworthiness
- Visual Functionality
- Competitive Advantage
Behavior Signals Trustworthiness
When buyers spend money, they do so with organizations they trust. Similarly, when investors or savers deposit money, they expect their fiduciary or institution to behave properly.
Take a typical bill-paying application from a bank. The payees may be organized in alphabetical order or by the amount previously paid. A user enters the amount to pay for a current bill. The app automatically calculates and reports the bill pay date. How would an online bank customer react to any of these misbehaviors:
- Missing payees
- Lack of information on prior payments
- The inability to enter a payment amount
- A misaligned pay date
How do buyers or investors react to these misbehaviors? As they tell it, some ignore issues, some submit bug reports, some call customer support. And, some just disappear. Misbehavior erodes trust. In the face of egregious or consistent misbehavior, customers go elsewhere.
App developers understand that app misbehavior erodes trust. So, how can engineers uncover misbehavior? Functional testing can ensure that an app functions correctly. It can even ensure that an app has delivered all elements on a page by identifier. But, functional testing can overlook color problems that render text or controls invisible, or rendering issues that result in element overlap.
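A small sketch shows the gap. The functional check below (written with selenium-webdriver; the page and selector are hypothetical) passes as long as the element exists and is displayed, even if a CSS regression has made it unreadable or covered it with another element:

```ts
// Sketch: a functional check that passes while a visual defect goes unnoticed.
import { Builder, By, until } from 'selenium-webdriver';

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/billpay'); // hypothetical page
    const payButton = await driver.wait(until.elementLocated(By.id('pay-now')), 5000);

    // Passes even if the button is white text on a white background,
    // or if a banner overlaps it so a user could never click it.
    if (!(await payButton.isDisplayed())) {
      throw new Error('Pay button is not displayed');
    }
  } finally {
    await driver.quit();
  }
})();
```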
When they realize they uncover too many visual issues late in development, app developers look for a solution. They need accurate visual validation to add to their existing end-to-end testing.
Visual Functionality
Another group of users builds applications with visual functionality. With these applications, on-screen tools let users draw, type, connect and analyze content. These applications can use traditional test frameworks to apply test conditions. The hard part comes when developers want to automate application testing.
Sure, engineers can use identifiers for some of the on-screen content. However, identifiers cannot capture graphical elements. To test their app, some engineers use free pixel diff tools to validate screenshots or screen regions. How do these translate to cross-browser behavior? Or, how about responsive application designs on different viewport sizes?
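These home-grown checks usually look something like the sketch below, built on the open-source pixelmatch and pngjs libraries (file names and the threshold value are hypothetical). Every anti-aliasing and sub-pixel rendering difference between browsers, fonts, and viewport sizes shows up as changed pixels, which is why the thresholds end up being tuned per browser and per page:

```ts
// Sketch: a typical home-grown pixel-diff check for a screenshot.
import * as fs from 'fs';
import { PNG } from 'pngjs';
import pixelmatch from 'pixelmatch';

const baseline = PNG.sync.read(fs.readFileSync('baseline/chart.png')); // hypothetical paths
const current = PNG.sync.read(fs.readFileSync('screenshots/chart.png'));
const { width, height } = baseline;
const diff = new PNG({ width, height });

// Counts pixels that differ beyond the threshold; browsers that render the
// same page with slightly different anti-aliasing inflate this number.
const changedPixels = pixelmatch(baseline.data, current.data, diff.data, width, height, {
  threshold: 0.1,
});

fs.writeFileSync('diff/chart.png', PNG.sync.write(diff));
console.log(`${changedPixels} pixels differ from the baseline`);
```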
At some point, all these teams realize they have wasted engineering resources on home-grown visual validation systems. So, they look for a commercial visual validation solution.
Competitive Advantage
The final issue we hear from our users involves competitive advantage. They tell us they seek an engineering advantage to overcome technical and structural limitations. For example, as teams grow and change, the newest members face the challenge of learning the dependencies that cause errors. Also, development teams build up technical debt based on pragmatic release decisions.
Over time, existing code becomes a set of functionality with an unknown thread of dependencies. Developers are loath to touch existing code for fear of introducing unknown defects that will cause unexpected delays. As you might imagine, visual defects make up a large percentage of these risks.
Developers recognize the need to work through this thread of dependencies. They look for a solution to help identify unexpected behavior, and its root cause, well before code gets released to customers. They need a highly-accurate visual validation solution to uncover and address visual dependencies and defects.
Hear From Our Customers
A number of Applitools customers shared their insight in our online webinars in 2020. All these webinars have been recorded for your review.
Full End-To-End Testing
In a two-talk webinar, David Corbett of Pushpay described how they are running full end-to-end testing to achieve the quality they need. He described in detail how they use their various tools – especially Applitools. Later in that same webinar, Alexey Shpakov of Atlassian described the full model of testing for Jira. He described their use of visual validation. Both talks described how responsibility for quality validation has shifted to developers.
Alejandro Sanchez-Giraldo of Vodafone spoke about his company’s focus on test innovation over time – including the range of test strategies he has tried. As a veteran, he knows that some approaches fail while others succeed, and he recognizes that learning often makes the difference between a catastrophic failure and a setback. He describes the full range of Vodafone’s testing.
Testing Design Systems
Marie Drake of News UK explained how News UK had deployed a design system to make its entire news delivery system more productive. In her webinar, Marie explained how they depended on Applitools for testing, from the component level all the way to the finished project. She showed how the design system resulted in faster development. And, she showed how visual validation provided the quality needed at News UK to achieve their business goals.
Similarly, Tyler Krupicka of Intuit described their design system in detail. He showed how they developed their components and testing mocks. He described the design system as giving Intuit the ability to make rapid mock-up changes in their applications. Tyler explained how Intuit used their design system to make small visual tweaks that they could evaluate in A/B testing to determine which tweak resulted in more customers and greater satisfaction.
Testing PDF Reports
Priyanka Halder, head of quality at GoodRx, described her process for delegating quality across the entire engineering team as a way to accelerate the delivery of new features to market. She calls this “High-Performance Testing.” In her webinar, she explained that one of the keys to GoodRx is its library of drug description and interaction pages. GoodRx uses Applitools to validate this content, even as company logos, web banners, and page functionality get tweaked constantly.
Similarly, Fiserv uses Applitools to test a range of PDF content generated by Fiserv applications. In their webinar, David Harrison and Christopher Kane of Fiserv described how Applitools makes their whole workflow run more smoothly.
These are just some of the customer stories shared in their own words in 2020.
Looking Ahead to 2021
As I mentioned earlier, we plan to publish a series of case studies outlining customer successes with Applitools in 2021.
When you read the published stories, you might be surprised by the results. For example, one company’s test suite for its graphical application now runs in 5 minutes; the home-grown visual test suite it replaced took 4 hours to complete. Instead of running their tests infrequently, the team can now run the application test suite as part of every software build. That’s what a 48x improvement (240 minutes down to 5) can do.
Another company had tests that every developer used to run, analyze, and evaluate on their own. Visual tests were part of that suite and had to be validated manually. Today, the tests run automatically, and a single engineer reviews the results, approving them or delegating them for rework.
You might be surprised to find competitors in your industry using Applitools. And, you might find that they feel guarded about sharing that information among competitors. Some companies see Applitools as a secret weapon in making their teams more efficient.
We look forward to sharing more with you in the weeks and months ahead.
Happy Testing, and Happy 2021.
Featured photo by alex bracken on Unsplash