AI agent hype cools as enterprises struggle to get agents into production
Anyone scanning the news might think it's pedal to the metal as far as AI agent implementations go, but there is a slump in rollouts as many organizations figure out what to do next, Redis CEO Rowan Trollope told The Register.
The company behind the Redis database, which built a following as a cache in cloud application architectures and became the most popular database on AWS, is trying to help users move their AI agent projects out of the lab and into production.
Earlier this month, Gartner forecast that investment from software vendors and cloud providers would propel a trillion-dollar increase in AI spending this year as investment hits $2.52 trillion. Enterprise users, however, are in the "trough of disillusionment" as reactions to enterprise project pitches go from "that was a great idea" to "where's my revenue?" the research firm said.
Trollope said the phenomenon was reflected in his experience helping customers put AI agent platforms to work in their businesses.
"I've seen fewer examples of real successful production agents than I would have imagined [in terms of] anything outside of engineering," he said. "It is still quite hard to do, and only the biggest companies in the world understand this is the future they're investing in. I don't think they're going to stop. They realize they need to have this next-generation platform."
Redis started life in 2009 as an attempt to build a performant key-value database. By late 2020, it was the most popular choice as a cache and message broker in cloud-native application stacks. Redis has since broadened its ambitions, adding features for machine learning and support for JSON documents in a bid to evolve beyond its caching roots. Now it is supporting AI implementations. Last year, it announced LangCache, a fully managed REST service designed to reduce expensive and latency-prone calls to LLMs by caching previous responses to semantically similar queries.
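The underlying idea, semantic caching, is simple enough to sketch. The snippet below illustrates the general pattern rather than the LangCache API itself; embed() and call_llm() are hypothetical stand-ins for a real embedding model and LLM endpoint.

```python
# Conceptual sketch of semantic caching: reuse an LLM response when a new
# query is "close enough" to one already answered. Not the LangCache API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def call_llm(prompt: str) -> str:
    # Placeholder for an expensive, latency-prone LLM call.
    return f"answer to: {prompt}"

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold  # minimum cosine similarity to count as a hit
        self.entries: list[tuple[np.ndarray, str]] = []

    def lookup(self, query: str) -> str | None:
        q = embed(query)
        for vec, response in self.entries:
            if float(np.dot(q, vec)) >= self.threshold:  # vectors are unit-length
                return response                          # reuse the cached response
        return None

    def store(self, query: str, response: str) -> None:
        self.entries.append((embed(query), response))

def answer(cache: SemanticCache, query: str) -> str:
    cached = cache.lookup(query)
    if cached is not None:
        return cached            # cache hit: no LLM call, lower cost and latency
    response = call_llm(query)
    cache.store(query, response)
    return response
```

The threshold is the key knob: set it too low and the cache serves wrong answers to questions that merely look alike; set it too high and almost every query falls through to the model anyway.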
While Gartner sees a lot of enterprise LLM spending going to large application vendors as users seek low-risk options by upgrading software they already use, Trollope said organizations need to think about the range of sources they have to draw from to get AI agents to make decisions.
While Salesforce might store the discount you gave a customer and Workday might hold information about employees, agents making decisions may also need information from email, instant messaging platforms, and other sources, he argued. Hence, organizations building out AI agent systems are using frameworks from Microsoft, Google, or LangChain, an independent engineering platform for building, testing, and deploying reliable AI agents.
"The information needed to make the most relevant decisions is often not immediately obvious to the agent," said Trollope. "For example, if I were to build an agent that is going to interface with my customers and allow it to do pricing, why and when is the agent allowed to make exceptions to the standard pricing policy? If all you want is the standard pricing policy, that's very easy, but you're not going to replace any human beings with that. What you need is to find out where the humans apply their judgment and what data they used to make that decision. That's where pulling that data together is difficult, because it's often unstructured. It's sitting in Slack threads, in email chains, in text messages. That's what we see as the number one problem."
The data requirements for AI agents to make meaningful decisions are part of the motivation for vector features in databases. A slew of vendors are backing the concept, from general-purpose players such as Redis and Oracle to vector database specialists. With a paucity of successful case studies, the jury might still be out on whether the returns will follow. But Redis, at least, sees big businesses continuing to invest despite the challenges. ®