The development world is hungry for Big Data, but the space often feels like a race to nowhere: the number of options is huge, and the use case is not always clear. Log analysis is one Big Data tool that makes, or should make, the lives of development teams better. And log analysis can be used beyond developers and IT operations to benefit Quality Assurance (QA) teams as well.
We tend to focus on pretty graphs or robust query engines, but the key is to identify a tool that expedites insights into performance: what is good, what is bad, and where there is room for improvement.
Specifically, developer logging tools fall into four categories:
1) Error Monitoring;
2) System Logs;
3) Application Performance Monitoring (APM); and
4) Marketing Analytics.
For Quality Assurance, the tool that best satisfies these criteria is a system logging or application performance monitoring tool. APM alone cannot fully address the needs of a QA team, because its focus is post-release. A robust system logging tool, such as Splunk, Loggly or LogEntries, is preferable: this type of tool works with a wide range of existing log formats and exposes an API, which allows the logging tool to be utilized as a new QA team member.
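As a sketch of what treating a logging tool as a "QA team member" might look like, the snippet below packages a test-run summary as a structured JSON event and shows how it could be POSTed to a log service's HTTP collector. The endpoint URL, token and field names here are placeholders for illustration, not any vendor's actual API.

```python
import json
import urllib.request

# Hypothetical HTTP collector endpoint -- substitute your log
# service's real ingestion URL and token (e.g. Splunk HEC, Loggly).
COLLECTOR_URL = "https://logs.example.com/inputs/YOUR-TOKEN/tag/qa/"

def build_test_run_event(suite, passed, failed, duration_s):
    """Package a test-run summary as a structured JSON event."""
    return {
        "event_type": "qa.test_run",
        "suite": suite,
        "passed": passed,
        "failed": failed,
        "duration_s": duration_s,
    }

def send_event(event, url=COLLECTOR_URL):
    """POST the event to the collector (network call; not invoked here)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# Build and print the event; in a real pipeline you would call
# send_event(event) at the end of every suite run.
event = build_test_run_event("checkout-smoke", passed=42, failed=1,
                             duration_s=311.7)
print(json.dumps(event))
```

Emitting one structured event per run like this is what later makes the time-series and audit-trail analysis possible.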
How Does Log Analysis Benefit QA?
While log analysis definitely benefits QA, the benefit does not come from speeding up test runs, test case creation or test execution. Log analysis does, however, help QA teams determine what to do to improve those areas. Log analysis helps QA by:
1) Understanding how QA has improved over time, with time series on bugs caught, speed of testing and amount of test coverage. If these trends are up and to the right, the team is getting better.
2) Identifying issues, and their locations, faster in test runs. Because log analysis tools were originally built for system monitoring, they indicate the status of test runs, how performant they are, their infrastructure impact and the location of errors more quickly than combing through Selenium scripts. Modern log analysis platforms not only specify what went wrong, but also send an alert when the system deviates from its normal pattern. This is tremendously useful for QA, because the team can quickly spot scripts that should have run but did not, or test runs exhibiting bizarre behavior.
Visual testing tools are new players on the field. They connect bugs to the actual application screens where the bugs were found. Their contribution is a visual collaboration platform for QA, R&D and PM, and a visual bug log that is easy to understand and refer back to later in the process.
3) Using the log platform's API to create an audit trail for all tests, scripts and runs, and to generate better reporting, indicating team improvement, new tooling needs and the overall impact of QA/QE on the delivery process.
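The deviation-from-normal alerting described above can be sketched with a simple baseline rule: compare each test-run duration against the mean of recent runs and flag outliers beyond two standard deviations. This is only a minimal illustration of the idea, not the detection algorithm any particular platform uses.

```python
from statistics import mean, stdev

def flag_anomalies(durations, threshold=2.0):
    """Return indices of runs whose duration deviates from the
    mean by more than `threshold` standard deviations."""
    if len(durations) < 2:
        return []  # not enough data for a baseline
    mu = mean(durations)
    sigma = stdev(durations)
    if sigma == 0:
        return []  # all runs identical, nothing to flag
    return [i for i, d in enumerate(durations)
            if abs(d - mu) > threshold * sigma]

# Nightly suite durations in seconds; the 900 s run is the outlier
# a platform would alert on.
runs = [312, 305, 298, 320, 310, 900, 307]
print(flag_anomalies(runs))  # -> [5]
```

A production platform would use more robust baselines (rolling windows, seasonality), but the principle, alert on deviation rather than on a fixed limit, is the same.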
Leveraging log analysis in QA can be extremely beneficial, as long as the tools do not become a distraction. Log tools that emphasize query-based data pulls can become a time sink, or a trap.
The utility of log analysis is far more important than overall stats that convey no real actionable information. Use log analysis to improve QA/QE team efficiency, process quality and communication by collecting data on test runs, test infrastructure, and even API calls for all major test cases. By doing so, QA will optimize its processes holistically, spot issues faster than by debugging Selenium or unit tests, and serve as an example to the entire company of the importance of QA methodology.
Chris Riley is a technologist who helps organizations make the transition from traditional development practices to a modern set of culture, tooling and processes that increases the release frequency and quality of software. He is an O’Reilly author, speaker, and subject matter expert in DevOps strategy, machine learning and information management.