Opinion

Smart assistant or arrogant intern? The impact of Generative AI in software testing

Despite concerns, humans in testing are going nowhere 
By David Colwell

We must address the elephant in the room, and by that I mean goldfish.

Humans are losing their ability to concentrate. Our commitment to 10-second TikToks and the relentless information overload we are subjected to have finally caught up with us. When it comes to attention spans, the infamous goldfish now clocks in at nine seconds, compared to our measly 8.25.

Miller’s Law backs this up. In 1956, Harvard cognitive psychologist George Miller estimated that humans can hold seven (plus or minus two) objects in working memory. More recent research revisiting that finding suggests the number is actually closer to five (plus or minus one).

This does not bode well for navigating an increasingly complex business and IT landscape, where multiple apps integrate alongside thousands of business rules and over a million data objects, all demanding our complete attention. We are well outside the scope of what the human brain can retain, and that has created a perfect storm for defects finding their way into production software.

To make things worse, software bugs generally don’t make themselves readily apparent. They often hide at the intersections and logic gates where data interacts with other data. On top of that, the real world comes with real relationships - business partners, developers, and project managers asking for status updates and go-live dates. The amount of information that must be tracked in order to not only find but also fix these bugs is simply overwhelming for humans.

The bar is on the floor, ready for AI to raise

This, however, is where AI thrives. Not in decision-making, but in acting as a filter. Unlike our five-object working memory, Generative AI’s short-term memory - its context window - can hold the equivalent of around 50 pages of text. It can analyse large datasets to learn patterns, process data at runtime to give discrete answers, and it is remarkably good at drawing connections between disparate concepts and summarising them for human consumption.

With AI now filtering information down to a manageable amount, humans can apply one of our best traits: discernment. AI struggles here, but we can use our judgement to choose the best of the distinct options it provides. This has the potential to halve the time we spend on rudimentary tasks like writing tests, freeing us to focus on the results.

Smart assistant or arrogant intern?

Given this division of roles between AI and humans, AI is by no means ready to take the reins. We often refer to Generative AI as a smart assistant, but I see it more as an ‘arrogant intern’ with too much creative licence. Give it a job and it returns sooner than anticipated with something that could be fantastic but could equally be terrible - and without explaining how it was done. AI needs to be directed and corrected; it cannot be entirely trusted.

Generative AI can create a series of test cases from a business requirement, but that doesn’t mean we should accept every single one. There will be many instances where the AI creates redundant cases because it lacks a complete understanding of the organisation. It simply covers every perceivable base and, in doing so, leaves behind test cases that are irrelevant or repetitive.
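
To make that review loop concrete, here is a minimal sketch in Python. The generate_test_cases function is a hypothetical stand-in for whatever model call your tooling provides (no real vendor API is assumed), while the duplicate check uses only the standard library - the point being that a human reviews whatever the ‘intern’ hands back.

```python
from difflib import SequenceMatcher

def generate_test_cases(requirement: str) -> list[str]:
    # Hypothetical stand-in for a generative AI call; in practice this
    # would prompt a model with the business requirement.
    return [
        "Reject login after 3 failed password attempts",
        "Reject login after three failed password attempts",
        "Lock the account for 15 minutes after the final failure",
    ]

def flag_possible_duplicates(cases: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    # Pair up drafts that look like rewordings of each other,
    # so a human can decide which one to keep.
    pairs = []
    for i, first in enumerate(cases):
        for second in cases[i + 1:]:
            if SequenceMatcher(None, first.lower(), second.lower()).ratio() >= threshold:
                pairs.append((first, second))
    return pairs

drafts = generate_test_cases("Users are locked out after repeated failed logins")
for first, second in flag_possible_duplicates(drafts):
    print(f"Review: '{first}' may be a duplicate of '{second}'")
```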

This doesn’t mean AI’s capabilities aren’t useful. It remains a great way to get feedback on, or a review of, requirements. Many test cases may need no adjustment at all, but using AI in this manner allows us to identify and correct edge cases and defects more quickly.

Generative AI can also act as an excellent knowledge assistant. On a day-to-day basis, teams are required to process all kinds of information, from Jira tickets to Slack conversations and meetings. AI can scan documentation, resources, and other large bodies of text, then surface the relevant information on command, cutting out much of the tedious work of finding it manually. This is already one of the primary ways people use the now one-year-old ChatGPT. It is incredibly useful, but it is not yet capable of being left completely alone, without a human there to correct it.
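
As a toy illustration of how such an assistant narrows things down - the ticket numbers and snippets below are invented for the example - the retrieval step picks the few most relevant items, and only then would a generative model be asked to summarise them:

```python
def rank_snippets(question: str, snippets: list[str], top_n: int = 2) -> list[str]:
    # Score each snippet by simple word overlap with the question -
    # a crude stand-in for the semantic search a real assistant uses.
    terms = set(question.lower().split())
    return sorted(
        snippets,
        key=lambda s: len(terms & set(s.lower().split())),
        reverse=True,
    )[:top_n]

knowledge_base = [
    "JIRA-142: payment service times out when the retry queue is full",
    "Slack #release: checkout v2 go-live moved to Friday",
    "Runbook: restart the payment retry worker via the deploy pipeline",
]

context = rank_snippets("why does the payment service time out", knowledge_base)
print(context)
# A generative model would then be prompted with `context` to draft an
# answer - which a human still needs to verify before acting on it.
```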

The circle of AI

While AI can help us with our work, it must be checked. And the more AI applications there are, the greater the need for human validation. So rather than replacing testers’ jobs, AI is actually reinforcing the need for them. Instead of seeing it as a threat, look at it as an opportunity not to sweat the ‘small stuff’. I will certainly not complain about saving time, minimising tedious activity, or taking a higher-level look at strategy!

Rather than worrying about the improbable scenario of Generative AI taking over software testing and rendering our jobs obsolete, let’s focus on more tangible human concerns - such as outperforming the attention span of the goldfish. Now that’s a challenge surely worth our concern!

Written by David Colwell
January 29, 2024