You’ve decided that test automation is working or is going to work for you. That’s good; you’ve done your homework. Furthermore, you’ve decided that having a common or “base” automation team is your preferred organizational approach; you’ve decided this team will build and support the common automation infrastructure that will be shared across the teams in your organization or your company. Awesome!
How are you going to fund that team?
Unfortunately, money does not grow on trees, and organizations have pre-defined budgets that are hard or impossible to amend. Additionally, Organization A is not generally interested in helping Organization B unless helping is in line with meeting Organization A’s goals. This means that if Organization A has governance over the supposedly common automation infrastructure, that organization is only inclined to help Organization B as long as it also helps Organization A or, at least, doesn’t impede Organization A.
This all means that, in most cases, counting on organizations to work together for a common infrastructure “good” is not a tenable business strategy. That being the case, there are a few approaches to funding a common, or “base” automation team that are described below.
The Automation Tax
An Automation Tax is essentially what it sounds like: each year, each team or organization allocates a flat dollar amount or a percentage of its budget to fund the base automation team. In return for this funding, the funding team or organization gets unrestricted use of everything the base team produces, including features and fixes developed for other teams.
The easy math on this approach is quite attractive to many base team organizations. It’s also easy to “spread the wealth” in that large teams or organizations pay more than small ones do if the funding is a percentage of their budget. The logic here is that larger teams generally require more features and support in shared automation code than do smaller teams. The wealth is spread because all teams, large and small, gain access to features and fixes requested by other teams, regardless of the size of the requestor.
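To make the “easy math” concrete, here is a minimal sketch in Python of a percentage-based tax allocation. The team names, budget figures, and 2% rate are all invented for illustration; the point is only that a flat rate automatically scales each team’s contribution to its size:

```python
# Hypothetical annual budgets per team (dollars); all figures are invented.
team_budgets = {
    "Team A": 4_000_000,
    "Team B": 1_500_000,
    "Team C": 500_000,
}

TAX_RATE = 0.02  # a flat 2% automation tax, chosen purely for illustration


def automation_tax(budgets, rate):
    """Each team pays the same percentage of its budget,
    so larger teams contribute more in absolute dollars."""
    return {team: budget * rate for team, budget in budgets.items()}


taxes = automation_tax(team_budgets, TAX_RATE)
base_team_funding = sum(taxes.values())
# Team A pays 80,000; Team B pays 30,000; Team C pays 10,000 --
# a 120,000 budget for the base team, with every team getting
# access to everything the base team builds.
```

The same shape works for a flat dollar amount per team; the percentage variant is simply what makes the “spread the wealth” property fall out automatically.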
This approach can also make maintenance of the base code easier, at least from a funding standpoint. If the base team is funded a general amount from each user team, then care and feeding of the framework and stack base code is included in that amount. There is no need to scrimp and beg for “additional” funding to perform an upgrade of one of your automation tools or to port to the new version of the language or operating system that you use. It’s all built-in!
Unfortunately, this approach is not all fun and games. The accounting and reporting of work performed needs to be very detailed and will often be challenged by the user base. Beyond questioning the veracity of those reports, user teams commonly raise challenges such as:
- Team A and Team B both want the same feature added to the automation base; which team gets charged for it? It can be the case that each wants the other to pay for it. They will definitely want to see that only one of them paid for it and the base team didn’t charge twice for the work.
- Team C’s base team tax equates to approximately two people’s worth of effort based on the loaded labor rate. Team C wants to know which two people will be working on Team C’s work items. This may not be the best way to staff a base team.
- I’m the largest funder of the base team; why weren’t all of my requests completed prior to working on items for any other team?
Over time, the tax may need to be reduced because the year-over-year effort may decrease, leaving the base team with idle team members and dissatisfied “customers.”
Direct Funding
For the Direct Funding approach, each year, each team or organization decides what features and support it will need for the upcoming year, adds that to its budget, and then funds the base automation team; the funders must also project when they will need each feature or bug fix. The base automation team is obliged to staff appropriately for both the workload and the expected delivery timing.
From the users’ standpoint, this is an awesome way of funding. A user organization tells the base team what to develop, the base team negotiates the cost and the delivery date, then delivery happens. The priority of the work is pretty clear since each funding organization is paying for specified work by a specified date. If many features or fixes are required in the same timeframe, the staffing and, therefore, the cost to the funding organizations will be higher; this cost, however, is stated upfront during the funding or budgeting period so there are fewer surprises. If a user organization doesn’t like the required funding, they negotiate over the date or the content.
The downsides of this approach typically fall on the base team. The base team has to account for variable staffing based on the expected workload and expectation dates. If many projects are all to be performed in parallel, it can be necessary for the base team to ramp up or ramp down in the middle of a fiscal year; this can be problematic in some companies. Additionally, managing short-notice staffing ramps can necessitate working with an outside partner to provide contractor-based staffing; the base team leadership must work closely with this partner to project the ramp cadence, meaning the base team must have a lot of trust in the partner.
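The staffing problem above can be sketched with a little arithmetic. The following Python sketch projects per-quarter headcount from a list of funded work items; the project names, due quarters, and person-month figures are all invented assumptions:

```python
from collections import defaultdict

# Hypothetical funded work items: (name, quarter due, person-months of effort).
funded_items = [
    ("Feature X for Team A", "Q1", 6),
    ("Fix batch for Team B", "Q1", 3),
    ("Feature Y for Team C", "Q3", 9),
]


def headcount_by_quarter(items, months_per_quarter=3):
    """Sum the effort landing in each quarter and convert person-months
    into full-time people needed during that quarter."""
    effort = defaultdict(int)
    for _name, quarter, person_months in items:
        effort[quarter] += person_months
    return {q: pm / months_per_quarter for q, pm in effort.items()}


staffing = headcount_by_quarter(funded_items)
# Q1 needs 3.0 people and Q3 needs 3.0 people, but Q2 needs none:
# the ramp-down after Q1 and ramp-up before Q3 is exactly the
# mid-year staffing problem described above.
```

Even this toy projection shows why short-notice ramps, and therefore trusted contracting partners, become part of the base team’s job under Direct Funding.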
Usage Billing
In the Usage Billing approach, teams or organizations are charged on a person-by-person, period-by-period basis, whether that period is a month, a quarter, or some other interval. The attraction is that it’s pay-as-you-go: if person X doesn’t use the automation tooling during a period, the organization isn’t charged for that person’s usage for that period.
On the surface, this sounds like an awesome arrangement, as did the Direct Funding approach. As we’ve learned, or are learning, however, nothing’s perfect.
This approach requires high-bandwidth communication between the base automation team and the user teams; user teams need to make projections about their usage for the upcoming year, and the base automation team needs to staff and prepare for that usage, but only up to the amount funded by projected usage. User organizations also need to project any “big” work items they need in the upcoming year; if the projects are sufficiently large, the “base” usage may not be sufficient. Like the Automation Tax approach, the per-user, per-period bill may need to decrease as the base matures; it may also need to increase if there are anticipated “big ticket” base development items on the roadmap.
Another consideration is the definition of usage. Is usage simply starting the tool’s UI? Or does a user need to actually run a test script? Depending on the organization or company, other definitions of usage could be applicable. A definition must be established that is realistic, tolerable to the user teams, and able to generate enough funding to cover the effort needed by the base team.
Once a usage definition is established, how should that usage be tracked? There needs to be a way of recording a “use,” which can be challenging in some situations. Is the tool expected to be usable when a user is offline, e.g., on an airplane or in a non-connected area? If so, how should that usage be “recorded” so that it can be reported once the user is again on a network? Must the tool work on a permanently disconnected network, such as a testing-only LAN where connections outside that LAN or segment are blocked? If so, how will usage be reported? How should usage by a shared or “service” account, as is often configured for CI/CD pipelines, be recorded? Is that a single user?
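One way to ground these questions is to sketch what recording and billing might look like. The Python sketch below is built entirely on assumptions: the event shape, the “ran a test script” usage definition, the flat per-user rate, and the decision to charge a distinct user at most once per period are all choices a real organization would have to negotiate:

```python
from collections import defaultdict

# Hypothetical usage events, perhaps replayed from an offline buffer
# once the user reconnects: (user, billing period, action).
events = [
    ("alice",  "2024-01", "ran_script"),
    ("alice",  "2024-01", "ran_script"),  # second run in the same period
    ("bob",    "2024-01", "opened_ui"),   # not billable under our definition
    ("ci-svc", "2024-01", "ran_script"),  # shared service account: one "user"?
    ("alice",  "2024-02", "ran_script"),
]

BILLABLE_ACTIONS = {"ran_script"}   # usage definition: actually running a test
RATE_PER_USER_PER_PERIOD = 50       # invented flat rate, in dollars


def bill_by_period(usage_events):
    """Charge once per distinct user per period, but only for
    actions that meet the agreed definition of 'usage'."""
    users_per_period = defaultdict(set)
    for user, period, action in usage_events:
        if action in BILLABLE_ACTIONS:
            users_per_period[period].add(user)
    return {period: len(users) * RATE_PER_USER_PER_PERIOD
            for period, users in users_per_period.items()}


bills = bill_by_period(events)
# In 2024-01, alice and the ci-svc account are billable but bob is not,
# because bob only opened the UI -- the usage definition decided his bill.
```

Notice how much of the billing outcome hinges on the definitions: bob’s charge, the service account’s charge, and the treatment of buffered offline events all change if the negotiated rules change.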
This is the approach most directly tied to actual use: the teams and organizations that use the tool the most pay the most for its upkeep and evolution. The theory is that the more users a team has, the more changes and support that team will request, so that team should contribute funding at a higher level.
There are challenges in addition to the ones previously described. Some teams will view the per-user cost as unfair relative to the value they believe they are receiving. Relatedly, fairness is hard to demonstrate; a lot of transparency and reporting must be in place. Finally, it can be hard for the base automation team to project staffing and handle staffing changes.
Hybrid Approaches
It may be a stretch to call this a separate approach, per se; it is really a combination of two of the approaches described above. The hybrids I’ve seen most often are “Automation Tax plus Direct Funding” and “Usage Billing plus Direct Funding”.
In both cases, the “Direct Funding” is used to develop features that teams specifically pay for. Many teams may want the same feature, but whichever team requests it first pays for it and ostensibly helps to influence its behavior. Regardless, all features become available to all user teams whether they paid for them or not.
The funds obtained from Usage Billing or the Automation Tax, which all user teams pay, are used to fund the maintenance and upkeep of the automation base, including bug fixes. Naturally, a hybrid inherits pros and cons from the two approaches it combines; the combination, however, may be more palatable for some teams than a single approach alone.
Other Considerations
There are many considerations to unpack here. A subtle consideration, but possibly the most important one, is “should I even have a shared base automation team?” By creating a base, shared, or core automation team you are committing to having automation as a core competency for your organization and company. I’m not stating that to deter anyone from this approach; in fact, most “sufficiently large” companies and organizations can benefit from considering test automation as a shared service at some level due to the economies of scale. This is a great approach so long as the cost (as defined by an organization or company) is smaller than the value provided by having a base automation team.
Another consideration is the delineation between what is built by the base automation team and what is built by the user teams. Often, any feature or fix that would benefit multiple teams falls to the base team; this also requires good communication with user teams about what they are building, whether it would be interesting to other teams, and how to build it in a more generic, i.e., applicable-to-multiple-teams, manner.
We can even blend any of the approaches above with an “internal open-source” concept. For internal open-source, those automation user teams that have the appropriate skill sets can add features to the common automation code. The base automation team would be the stewards of the shared code to help ensure code added by other teams is appropriate for sharing.
Certainly, there are additional pros and cons to any of these approaches based on team experiences or the contexts in which those teams are currently working. These explanations are meant to serve as a guide so that you can make your own decisions about creating and funding a base team in your specific contexts.