Quo Vadis, Enterprise GenAI?

The director of AI Lab at Pegasystems explores the future of generative AI in business
Peter van der Putten

Generative AI has taken the world by storm, and business executives are recognising its potential. Imagine a CEO seeing her kids doing their homework and her mum designing birthday cards, everyone using ChatGPT. It’s natural she would also explore how generative AI can support strategic company goals, such as providing better customer experiences and driving more efficient business processes.

However, enterprise generative AI differs significantly from its consumer counterpart. Its rapid evolution challenges even the most tech-savvy, making it even tougher for business leaders to keep pace, focus on the right opportunities and understand where generative AI is heading.

Why is enterprise generative AI different?

Whilst the underlying models and services from OpenAI, Google and others are the same across consumer and enterprise applications, the needs and user experiences are different. According to a McKinsey study, use-case-specific applications in areas such as customer operations, marketing, sales, software engineering, and R&D represent about 75% of the value of generative AI. Add enterprise requirements around accuracy, security, intellectual property, and privacy, and it is clear that an open-ended chat window into which employees copy random data will not cut it. Instead, enterprises require highly specific workflows and capabilities where generative AI is fully integrated under the bonnet, from dynamic prompt generation and sensitive data filtering to consuming the output.

Sweet-spot applications across business functions

Let’s give some examples from the areas mentioned. In customer service, generative AI can guide agents in real time, suggest response emails or proofread chat responses, and summarise the interaction when transferring or wrapping up the call. Similar coaching can be extended from front- to back-office operations, or to specific areas such as sales. Marketers use generative AI to craft engaging recommendation content for personalisation. Finally, generative AI powers better and faster development of business applications. This is not limited to coding assistants for programmers but can drive the entire cycle from ideation to low-code app development.

Aiding knowledge workers

Another top financial benefit area that McKinsey identifies is knowledge management. Knowledge workers spend up to one day a week searching, analysing and synthesising information buried in documents to answer their queries, so anything that can help is welcome. Just using vanilla generative AI services trained on the public internet is not sufficient, because questions are specific to a domain or set of documents.

This is where, for instance, so-called retrieval-augmented generative assistants (RAGs) come into play. They combine the power of search with generative AI to make sense of the search results. For instance, they can answer the question ‘I lost my credit card, what to do?’ based on self-service documentation on the company’s website. Or answer HR-related questions based on HR policies, regulation-specific questions based on the latest government or regulatory documents, or specialised work instructions given internal protocols – basically, any domain for which documents are available.

Where is enterprise generative AI heading?

So apart from tackling these highest value use cases, what else can we expect from enterprise generative AI, particularly from a technology perspective? Certainly, the likes of OpenAI, Microsoft, Google, Meta, Amazon and open-source players will continue to launch new models, but if anything, this will lead to the commodification of these underlying services, with more choice across performance, quality, speed and cost.

The emergence of RAG assistants and buddies gives us a clue of a deeper, more fundamental trend: giving more autonomy to the AI. You can see a RAG as an example of a generative model combined with a simple tool (a search engine) with a knowledge source (the document corpus it has access to) and a predefined plan of what to do (take the question, do a search, use generative AI to formulate an answer on the search results and give references).
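The predefined plan above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the toy corpus, the keyword-overlap search, and the `generate` stub are all assumptions standing in for a real search engine and a real generative model API.

```python
# Sketch of the RAG plan: take the question, do a search, use generative
# AI to formulate an answer on the search results, and give references.
# Corpus, search, and model are toy stand-ins, not any vendor's API.

CORPUS = {
    "cards/lost-card": "If your credit card is lost or stolen, block it "
                       "immediately in the app and request a replacement.",
    "hr/leave-policy": "Employees accrue 25 days of annual leave per year.",
}

def search(question, corpus, top_k=1):
    """Toy search tool: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt):
    """Stand-in for a call to a generative model service."""
    return "Based on the retrieved documentation: " + prompt

def rag_answer(question, corpus):
    hits = search(question, corpus)
    context = "\n".join(text for _, text in hits)
    references = [doc_id for doc_id, _ in hits]
    prompt = f"Answer '{question}' using only this context:\n{context}"
    return generate(prompt), references

answer, refs = rag_answer("I lost my credit card, what to do?", CORPUS)
print(refs)  # the documents the answer is grounded in
```

In a production assistant, the keyword search would typically be replaced by vector or hybrid search, and `generate` by a call to an enterprise-approved model, but the plan itself stays fixed.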

In a truly ‘agentic’ or ‘autonomous agent’ model, this idea is pushed much further. Provide more tools, not just for searching and analysing data but also for taking actions. And use the creative power of generative AI to reason and create plans, decide when to use which tools, and reflect on plans, results and outcomes, all within enterprise constraints.
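The loop behind such an agent can be sketched as follows. Everything here is an illustrative assumption: the tool names, the allow-list standing in for enterprise constraints, and `plan_next_step`, which stubs out the generative model's reasoning.

```python
# Hedged sketch of an agentic loop: the model (stubbed as plan_next_step)
# decides which tool to use, acts, reflects on the growing history, and
# is kept within enterprise constraints by a tool allow-list (the leash).

ALLOWED_TOOLS = {"search_docs", "summarise"}  # enterprise constraint

def search_docs(query):
    return f"3 documents found for '{query}'"

def summarise(text):
    return f"summary({text})"

def send_refund(account):  # an action tool policy has NOT approved
    return f"refund sent to {account}"

TOOLS = {"search_docs": search_docs, "summarise": summarise,
         "send_refund": send_refund}

def plan_next_step(goal, history):
    """Stand-in for the model reasoning over the goal and past results."""
    if not history:
        return ("search_docs", goal)
    if len(history) == 1:
        return ("send_refund", "acct-123")  # model proposes an action
    if len(history) == 2:
        return ("summarise", history[0])
    return None  # reflection: goal satisfied, stop

def run_agent(goal):
    history = []
    while (step := plan_next_step(goal, history)) is not None:
        tool_name, arg = step
        if tool_name not in ALLOWED_TOOLS:  # the leash
            history.append(f"blocked: {tool_name}")
            continue
        history.append(TOOLS[tool_name](arg))
    return history

print(run_agent("lost credit card policy"))
```

Note how the unapproved `send_refund` action is proposed by the planner but blocked by the constraint layer rather than executed, which is precisely the leash the article alludes to.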

In other words: we are finally letting generative AI out of its cage, albeit on a leash.

Written by Peter van der Putten
May 9, 2024