We’re entering an era of autonomous agents capable of executing complex, multi-step workflows
Generative AI is evolving. Knowledge-based applications such as AI chatbots and copilots are giving way to autonomous agents that can reason and carry out complex, multi-step workflows, powered by what is known as agentic AI. Because these systems can understand context, set goals, and adapt their actions to changing conditions, they are poised to transform the way businesses operate.
With these capabilities, agentic AI could perform a whole range of tasks previously thought impossible for a machine to handle, such as identifying sales targets and making pitches, analyzing and optimizing supply chains, or acting as a personal assistant that manages employees’ time.
Amazon’s recent partnership with Adept, a specialist in agentic AI, signals growing recognition of these systems’ potential to automate diverse, high-complexity use cases across business functions. But to fully leverage this technology, organizations must first address several challenges in the underlying data, including latency issues, data silos, and inconsistent data.
The three foundations of agentic AI
For its complex functions to operate successfully, agentic AI needs three core components: a plan to work from, large language models (LLMs), and access to robust memory.
A plan allows the agent to execute complex, multi-step tasks. For instance, handling a customer complaint might involve a predefined plan to verify identity, gather details, provide solutions, and confirm resolution.
To follow this plan, an AI agent can use multiple LLMs to break down the problem and perform subtasks. In a customer service context, the agent could call on one LLM to summarize the current conversation with the customer, creating a working memory for the agent to refer to. A second LLM could then plan the next actions, a third could evaluate the quality of those actions, and a fourth could generate the final response the user sees, informing them of potential solutions to their problem.
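To make that division of labor concrete, here is a minimal sketch of such a pipeline in Python. The `call_llm` helper is a hypothetical stand-in for whatever model API an organization actually uses; the four roles mirror the example above, and nothing here reflects a specific vendor’s framework.

```python
# Hypothetical multi-LLM agent pipeline for customer service, following the
# four roles described above. call_llm() is a placeholder for a real model
# API (e.g. an HTTP call to a hosted LLM); everything else is plain Python.

def call_llm(instruction: str, context: str) -> str:
    """Placeholder for a real LLM call. Wire this to an actual model endpoint."""
    raise NotImplementedError("Replace with your provider's SDK or API call.")

def handle_turn(conversation: str) -> str:
    # 1. Summarizer LLM: compress the conversation into a working memory.
    summary = call_llm("Summarize this support conversation.", conversation)

    # 2. Planner LLM: decide the next support actions from the summary.
    plan = call_llm("Given this summary, list the next support actions.", summary)

    # 3. Evaluator LLM: critique the planned actions before acting on them.
    critique = call_llm("Review these actions for errors or gaps.", plan)

    # 4. Responder LLM: draft the customer-facing reply from all three outputs.
    return call_llm(
        "Write a reply to the customer implementing these actions.",
        f"Summary: {summary}\nPlan: {plan}\nReview: {critique}",
    )
```

In practice each role could be served by a different model, sized and priced to the subtask, which is one reason agent architectures split the work this way.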
And just like humans, agentic AI systems can’t make informed decisions without using memory. Imagine a healthcare assistant AI with access to a patient’s medical history, medical records, and past consultations. Remembering and drawing from this data allows the AI to provide personalized and accurate information, explaining to a patient why a treatment was adjusted or reminding them of test results and doctor’s notes.
Both short-term and long-term memory are needed: the former for tasks requiring immediate attention, the latter to build an understanding of context that the AI can rely on for future inferences. But here lies one of the major barriers to optimizing agentic AI today: often, businesses’ databases aren’t advanced enough to support these memory systems, limiting the AI’s potential to deliver accurate and personalized insights.
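As an illustration, an agent’s memory layer might pair a short-term scratchpad with a persistent long-term store. The sketch below is simplified: the class and method names are invented for this example, and the dict-backed store stands in for the real database the article argues such systems ultimately need.

```python
# Illustrative agent memory: a short-term scratchpad for the current task
# plus a long-term store for facts the agent should recall across sessions.
# The dict is a stand-in for a real database backend.

from collections import deque

class AgentMemory:
    def __init__(self, short_term_size: int = 10):
        # Short-term memory: the last N observations, for immediate context.
        self.short_term = deque(maxlen=short_term_size)
        # Long-term memory: keyed facts persisted across sessions.
        self.long_term: dict[str, str] = {}

    def observe(self, event: str) -> None:
        self.short_term.append(event)

    def remember(self, key: str, fact: str) -> None:
        self.long_term[key] = fact

    def context(self, keys: list[str]) -> str:
        # Combine recent events with requested long-term facts, ready to be
        # placed into an LLM prompt.
        recent = "\n".join(self.short_term)
        facts = "\n".join(self.long_term.get(k, "") for k in keys)
        return f"Recent events:\n{recent}\n\nKnown facts:\n{facts}"
```

In the healthcare example above, the long-term store would hold items like past consultations and test results, while the scratchpad tracks the current conversation; the database problems discussed next are what make the long-term side hard at scale.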
The data architecture needed to support AI agents
The predominant approach to meeting these memory requirements is to use special-purpose, standalone database management systems for the various data workflows. However, stitching together a complex web of standalone databases can hurt an AI’s performance in several ways.
Latency issues arise when the different databases have varying response times, causing delays that can disrupt AI operations. Data silos, where information is isolated in separate databases, prevent the AI from forming a unified view and hinder comprehensive analysis, so the agent misses connections and returns incomplete results. On a more fundamental level, inconsistent data, whether in quality, formatting, or accuracy, can cause errors and skew analysis, leading to faulty decision-making. Using multiple single-purpose database solutions also creates data sprawl, complexity, and risk, making it difficult to trace the source of AI hallucinations and debug incorrect outputs.
Many databases are also not well suited to the speed and scalability AI systems require. Their limitations become more pronounced in multi-agent environments, where rapid access to large volumes of data, for example to feed LLMs, is essential. In fact, only 25% of businesses have high-performance databases capable of managing unstructured data at high speed, and just 31% have consolidated their database architecture into a unified model. These databases will struggle to meet GenAI’s demands, let alone support unconstrained AI growth.
As GenAI evolves and agentic AI becomes more prevalent, unified data platforms will become central to any successful AI implementation. Modern data architectures reduce latency with edge technology, manage structured and unstructured data efficiently, streamline access, and scale on demand. These qualities are key to building cohesive, interoperable, and resilient memory infrastructures, and to letting businesses finally capitalize on the automation, precision, and adaptability that agentic AI has to offer.
Embracing the AI revolution
Agentic AI opens the door to a new era in which AI agents act as collaborators and innovators, fundamentally changing how humans interact with technology. Once businesses have overcome the challenges of disparate data sources and built optimized memory systems, they will unlock widespread use of tools that can think and learn like humans, delivering unprecedented levels of efficiency, insight, and automation.