How to Build a Successful QA Team in a Rapid Growth Startup

Getting Started — Published May 6, 2022

Learn how to build an effective QA team that thrives in an environment where things change quickly.

Within a startup, projects can be the highest priority one minute and sidelined the next. As the QA Lead at a growing startup, my number one priority was recruiting testers to keep up with the pace of new projects and the demands on my existing testers. I’d been told a new project was top priority, so I’d just offered a role to a tester who I thought would be a great fit to hit the ground running on it.

Luckily, the person I hired was resilient and took the immediate pressure in stride, but putting so much pressure on them with such a high-priority project, and requiring them to learn the domain so quickly, was not how I would onboard a tester under normal circumstances.

And then, three months of hard work later, the project was scrapped. In a startup, this happens sometimes. It’s never fun, but that doesn’t mean you and your QA team can’t learn from each stage of the experience.

Dealing with Changing Priorities, as a Domain Expert


Testers often gain considerable domain knowledge by navigating the application from the user’s perspective multiple times a day. However, this knowledge is difficult to sustain when priorities change so frequently. During this project there were changes to designs, user journeys and business logic right up until the last minute.

How does a tester keep on top of all these changes while maintaining tests and finding bugs that are relevant to the latest requirements? In this environment it can be hard to stay motivated and perform your role at the level you aspire to. As a manager in this situation, I made sure the tester knew their work would not be judged against normal circumstances, and that I understood the changing requirements would lead to delays. Dealing with these challenges requires a person with a certain resilience.

It’s All a Learning Experience

Having tests or code you’ve written deprioritized can be disheartening, but treating it as a learning opportunity helps. The tester involved saw opportunities to reuse some of the code they had written in future tests, and felt they could use the test data spreadsheet as a template for other pages of the site. I was really proud of how they dealt with the changing priorities and looked for learnings to take into future testing scenarios.

It’s Okay to Write Poor Quality Automated Tests

In this fast-moving environment, where designs are likely to change and user journeys are not set in stone, it’s okay to write poor quality tests.

When writing code, you want it to be easy to understand, maintain and reuse. However, in this environment that isn’t always possible, so writing code that you know you will need to refactor later is absolutely fine. The automation tester would often open a code review with a long description explaining why they’d duplicated code or why they hadn’t moved a piece of logic into a separate function. I always approved their pull requests and suggested they leave a TODO comment to revisit once the designs and user journeys were more stable, always reiterating that I wasn’t judging them for this approach.
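
As a rough illustration of what those pull requests looked like in spirit, here’s a hypothetical Playwright test (the framework, routes and test-ids are invented for the example, as the article doesn’t name them): the duplication stays in, flagged with a TODO rather than refactored prematurely.

```typescript
import { test, expect } from '@playwright/test';

test('guest can reach the payment step', async ({ page }) => {
  await page.goto('/products/example-product');
  await page.getByTestId('add-to-basket').click();

  // TODO: duplicated from basket.spec.ts; extract a shared helper
  // once the basket journey stabilises.
  await page.goto('/basket');
  await page.getByTestId('checkout').click();

  await expect(page.getByTestId('payment-form')).toBeVisible();
});
```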

Get Comfortable with Tests Being Red

The tests were red, often. Seeing your tests fail less than 24 hours after you’ve created them can be quite demoralising, especially when it’s because a developer refactored part of the UI and forgot to keep the test-id your tests relied on. This can be very frustrating and makes maintenance difficult.

In a fast-moving environment, it’s okay for your tests to be red. The important thing is to keep adding tests: they will be your lifeline when you are getting ready for a release with only a short amount of time available, and they will be critical in the lead-up to going live.
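
If the suite needs to keep running usefully while individual tests are known to be red, one pragmatic option (in Playwright, an assumption on my part rather than the article’s stated approach) is to annotate the known failures so they stay in the suite without blocking it:

```typescript
import { test, expect } from '@playwright/test';

test('checkout button is reachable from the basket', async ({ page }) => {
  // Known red: the test-id was dropped in a recent UI refactor.
  // test.fail() marks the test as expected to fail, so it keeps running
  // in every build without blocking the suite; revisit before release.
  test.fail();

  await page.goto('/basket'); // hypothetical route
  await expect(page.getByTestId('checkout')).toBeVisible();
});
```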

Dealing with Design Changes, as an Automation Tester


Designs are often not complete before development starts, particularly when working to overly optimistic deadlines. In this project, even the user experience (UX) research wasn’t complete at that stage, meaning the foundations of the frontend development process weren’t finalised. As mentioned previously, the designs changed on a regular basis throughout the project. This impacts the automation tester significantly and can tempt them to wait until the frontend is stable; indeed, the usual recommendation is not to automate while the frontend is changing frequently.

So what do you focus on in this scenario, without becoming frustrated by all the wasted effort? Building the structure for the tests: visual, accessibility and performance. Since the automation tester knew they couldn’t rely on elements or any specific locators, they focused on whole-page visual snapshots, accessibility reports and page render performance metrics.

Visual Snapshots before Functional Checks

As the website was going to change on a daily basis, functional tests would have been so brittle that we sought alternatives.

Visual testing seemed like a great alternative, as we could easily replace the baseline snapshots once the designs had stabilised. With this approach we weren’t targeting specific parts of the page or individual components, which is how I would usually use visual snapshots in order to ignore dynamic content.

To keep the content from changing underneath the tests, we created test pages within the content management system (CMS) with the same layout as the real pages, the homepage for example, and ran our whole-page visual tests against those. This way the content on the page wouldn’t change, and we could make comparisons quickly across different resolutions. This saved us a lot of time and effort compared to functional tests.
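
A minimal sketch of what this can look like, assuming a Playwright setup (the framework choice, the /test-homepage route and the viewport sizes are all illustrative, not from the project):

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical CMS test page with fixed content mirroring the homepage layout.
const TEST_PAGE = '/test-homepage';

// Illustrative set of resolutions to compare across.
const viewports = [
  { name: 'mobile', width: 375, height: 812 },
  { name: 'tablet', width: 768, height: 1024 },
  { name: 'desktop', width: 1440, height: 900 },
];

for (const { name, width, height } of viewports) {
  test(`homepage layout matches snapshot on ${name}`, async ({ page }) => {
    await page.setViewportSize({ width, height });
    await page.goto(TEST_PAGE);
    // Whole-page snapshot: when the design stabilises, refresh the
    // baselines in one pass with `npx playwright test --update-snapshots`.
    await expect(page).toHaveScreenshot(`homepage-${name}.png`, {
      fullPage: true,
    });
  });
}
```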

Whole Page Accessibility Audits

With images being swapped out, and colour and font changes happening frequently, it was common for developers to overlook accessibility.

Whole-page accessibility audits allowed developers to get instant feedback on the quick fixes they needed to make, ensuring accessibility wasn’t impacted by their most recent changes.
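
One common way to wire this up is axe-core; here is a minimal sketch assuming Playwright with the @axe-core/playwright package (the tooling and the route are assumptions, as the article doesn’t name them):

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('homepage has no detectable accessibility violations', async ({ page }) => {
  await page.goto('/'); // illustrative route

  // Run an axe-core audit against the whole rendered page.
  const results = await new AxeBuilder({ page }).analyze();

  // Log each violation so developers get instant, actionable feedback.
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.description}`);
  }

  expect(results.violations).toEqual([]);
});
```

Failing the test on any violation is strict for a fast-moving codebase; starting by only logging the violations and tightening the assertion later is a reasonable middle ground.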

Page Render Performance as a Smoke Test

Marketing would frequently send over a new image, a developer would update it on the site, and the optimisation step was often skipped.

Using Google Lighthouse as a smoke test, it was easy to identify images that hadn’t been optimised: perhaps the image was the wrong size, or wasn’t suitable for mobile. This meant we could quickly go back to marketing and ask them to provide a new image. Catching these performance issues as early as possible means you don’t have hundreds of images to optimise at the last minute before release.
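
As a sketch of the idea, here is one way to script it, assuming the lighthouse and chrome-launcher npm packages (the image-related audit IDs come from Lighthouse’s performance category; the URL and the logging-only failure handling are illustrative):

```typescript
// lighthouse-smoke.ts: a rough image-optimisation smoke test, assuming the
// `lighthouse` and `chrome-launcher` npm packages are installed.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Lighthouse performance audits that flag unoptimised or wrongly-sized images.
const IMAGE_AUDITS = [
  'uses-optimized-images',  // efficiently encode images
  'uses-responsive-images', // properly size images
  'modern-image-formats',   // serve images in next-gen formats
];

async function imageSmokeTest(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    for (const auditId of IMAGE_AUDITS) {
      const audit = result?.lhr.audits[auditId];
      // A score below 1 means Lighthouse found images worth fixing.
      if (audit && audit.score !== null && audit.score < 1) {
        console.log(`FLAG ${auditId}: ${audit.title}`);
      }
    }
  } finally {
    await chrome.kill();
  }
}

imageSmokeTest('https://example.com').catch(console.error); // illustrative URL
```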

Dealing with Projects Being Scrapped, as a Person


Days after the website was released to the world, we got the news. Due to time pressures and designs changing right up until days before go-live, we hadn’t delivered the highest quality website: there were bugs, the user journey was a bit clunky and the search results weren’t very accurate. None of this was down to the team that worked on the website; there were some real superstars, and people worked weekends and late nights to deliver on time. However, business stakeholders had decided to engage an agency, with a view to outsourcing the website.

This came as a real shock to the team, and wasn’t quite the news anyone was expecting just days after working hard towards a common goal. All the tech debt and automated test coverage we had left for post-release was now on hold. So how would you react when the domain knowledge you’ve acquired, the code you’ve written and the technology you’ve learnt is suddenly no longer required? It can be very disheartening to hear your project has been scrapped, and the hard work you put in can seem like it was for nothing.

Lessons Learned, Delivering Rapidly and for Nothing

It’s not all doom and gloom. There are many lessons learned along the way that will help you develop into a resilient member of the team and learn how to work in a rapidly changing environment, which is quite useful if you are working for a startup.

One of the most important lessons I learned was to focus on what I could control, such as working with the automation tester to come up with solutions to the fast-moving changes. I couldn’t control the deadline or the scope changes two days before going live, but I could offer my advice, as someone who has experienced these situations before, on the risks late changes bring.

Another positive to come out of this was the holistic focus on visual, accessibility and performance testing. Usually I would focus on making my tests robust, target specific components, and use them at the end of the project for regression purposes. Now I have another use case for these testing types.

Testing in this setting is not ideal and requires some sacrifices in terms of quality. Leading QA on this project wasn’t an enjoyable experience; it required careful management and is one I will not forget anytime soon. But I learned far more on this project than I would have if it had gone smoothly.
