Where should you deploy AI?
>_TLDR
The proliferation of specialized, isolated AI agents across enterprise applications and platforms is creating new, fragmented AI silos that risk repeating past data management mistakes. We need standards that let these agents talk to each other and form a single, connected layer of intelligence.
From Data Silos to AI Silos
Ever since companies began producing large amounts of data across a proliferation of databases, applications and SaaS solutions, data silos have plagued enterprises. They remain one of the most significant and difficult challenges companies face, and removing and consolidating them is a top priority that modern data platforms help tackle. The emergence of AI, and specifically of enterprise agents, adds a new challenge: AI silos. Modern data platforms, and close to every other application or piece of software, now come with their own AI. But there’s a problem: they don’t talk to each other. Your CRM agent doesn’t know what insights your data warehouse agent has. Even today’s cutting-edge AI agents essentially operate in isolation. This lack of interoperability will inadvertently create AI silos, a new risk stemming from the widespread, unmanaged deployment of AI without a clear concept or strategy. This article highlights the challenges and risks for companies that fail to implement industry best practices: the failure to integrate these agents is, effectively, a costly repetition of past data management mistakes.
What’s Your AI Strategy?
Recently, I was asked the following: every platform and every solution has its own AI, so how do I know where and when to use which? To answer this question, one first needs a clearly defined and scoped opportunity or use case that can measurably be solved better with AI. It is also worth considering the indirect value AI can deliver that does not tangibly affect the company’s bottom line.

Another important factor in deciding where to run AI is where the underlying data resides, and in what format or structure. There is no AI strategy without a data strategy, and it is advisable to run AI as close to your data source as possible. The core principles of data gravity still hold true in the age of AI: data movement should be reduced to a minimum or, better yet, eliminated entirely. Fundamentally, AI relies on data and on algorithms that require compute, and it is more secure and efficient to bring the compute to the data than the other way around. And while AI democratizes access to data, security and governance must be ensured: AI should be deployed only where access controls and governance are in place, to prevent the risks associated with uncontrolled data and models.

AI is a fascinating technology, but one must never neglect the human workforce and must make sure they are trained and equipped to leverage it appropriately. Applying AI narrowly to small, isolated use cases or individual business units may yield slightly underwhelming returns when viewed in isolation; the exponential gains appear when AI is scaled across the enterprise. This is why workforce readiness and change management are absolutely essential to facilitate wide adoption.
The Future of Enterprises Is Agentic
Leading companies are already leveraging agents to streamline operations and uncover insights. Think of agents as LLMs with superpowers: they are powered by LLMs and can use tools to carry out tasks. Companies that successfully deploy agents start small, focusing on simple, linear tasks where they can quickly prove value and drive adoption, and then move on to more complex workflows. While agents can unlock great potential, correctly defining the scope and success criteria can be the deciding factor between success and failure.

Unlike traditional automation, which follows fixed rules, AI agents powered by LLMs are probabilistic. They can adapt dynamically and make real-time decisions based on evolving inputs; they don’t just execute workflows, they can continuously refine and optimize them. This also means that LLMs are non-deterministic, a fact that needs to be taken into account, especially for high-stakes use cases. I will dive deeper into the inner workings of the architecture behind today’s state-of-the-art LLMs, and what that means for your business-critical use cases, in a separate article.

It is clear that the industry sees agents as the next big thing in AI. Just log into any of your cloud or SaaS solutions and I am sure you will see an agent, ready to assist you. While these tools are powerful and excel within their own domain, environment or application, enterprises are eagerly seeking the holy grail: a unified, connected intelligence layer.
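To make the “LLMs with superpowers” idea concrete, here is a minimal sketch of the agentic loop in Python. Everything in it is illustrative: `call_llm` is a stand-in for whichever model provider you use, and the `get_open_invoices` tool is a hypothetical enterprise API, not any vendor’s actual interface.

```python
import json

# Hypothetical tool the agent can call; in a real deployment this could be
# a CRM lookup, a warehouse query, or any other governed enterprise API.
def get_open_invoices(customer_id: str) -> str:
    """Return open invoices for a customer (stubbed for illustration)."""
    return json.dumps([{"invoice": "INV-1042", "customer": customer_id, "amount": 1250.0}])

TOOLS = {"get_open_invoices": get_open_invoices}

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a chat-completion call to any LLM provider.

    A real model decides dynamically what to do next; this stub simply
    requests the tool once and then answers, to keep the example runnable.
    """
    if messages[-1]["role"] == "user":
        return {"tool": "get_open_invoices", "args": {"customer_id": "C-7"}}
    return {"content": f"Open items found: {messages[-1]['content']}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    """The core agentic loop: reason, optionally call a tool, repeat."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "tool" not in reply:              # model returned a final answer
            return reply["content"]
        result = TOOLS[reply["tool"]](**reply["args"])  # execute the requested tool
        messages.append({"role": "tool", "content": result})
    return "Step limit reached without a final answer."

print(run_agent("Which invoices are still open for customer C-7?"))
```

The essential point is the loop: at every step the model itself decides whether to answer or to reach for a tool. That is precisely what makes agents adaptive, and also what makes them probabilistic rather than rule-bound.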
The Impact of AI Silos
The advance of AI has been rapid: we have gone from manually interacting with an LLM as a chatbot to LLM-powered agents that have access to tools and agentic capabilities. However, the current state of agents is still quite narrow. AI agents have not yet matured enough to interact reliably with other agents across different systems and platforms. The promise of AI agents is seamless automation, but without interoperability they create fragmented intelligence silos, duplicating effort and blocking efficiency.
Duplication of Effort and the Silo Problem
For years, enterprises have struggled with data silos, where valuable information is locked away in separate systems. The industry’s response has been to consolidate data into data warehouses and lakes. With agents, the problem is repeating itself. Each agent generates intelligence, but without a common standard those insights remain locked in isolated AI systems, forcing enterprises to manually bridge the gaps. Developments like the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol are a good step in the right direction. However, if we want a truly connected layer of intelligence that empowers the entire enterprise, we clearly need further development and innovation in this area. The solution to agent silos is not the same as the solution to data silos. While consolidating data into a single repository may help with analytics, agents are fundamentally different: they don’t just store knowledge, they act on it. What enterprises need is not just a central golden source of information but a way for AI agents to communicate in real time, regardless of where they run.
Creating an Intelligence Layer with Agent Protocols
One of the biggest bottlenecks for AI today is data friction: it has proven challenging to give AI the context it needs to understand a specific task or domain, and to do so at scale. Moreover, to carry out specific tasks, AI needs access to tools. LLMs have been trained on a huge corpus but do not inherently know how to use a specialized tool for a certain task. In traditional software, the solution for retrieving information from separate systems and interacting with them is APIs. However, integrating APIs one by one with an LLM does not scale, and the ongoing maintenance of such a solution would be highly inefficient. To address this challenge, Anthropic created MCP, an open standard for connecting AI applications to external systems. Using MCP, AI applications can connect to data sources (e.g. local files, databases), tools (e.g. search engines, calculators) and workflows (e.g. specialized prompts), enabling them to access key information and perform tasks.

Another milestone on the way to a true intelligence layer is enabling different agents to speak to each other. This is where A2A, developed by Google, comes into play: it facilitates direct agent-to-agent communication. These protocols represent where the next wave of AI innovation is heading. Using open industry standards and democratizing intelligence across the enterprise will help scale the application of AI and support companies on their journey to become more data-driven, built on a sound data foundation and an interconnected intelligence layer powered by agentic AI.
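To show what this looks like in practice, here is a minimal sketch of an MCP server using the official MCP Python SDK (`pip install mcp`) and its FastMCP helper. The server name, the `open_invoices` tool and the customer-profile resource are illustrative assumptions, not part of any real system.

```python
from mcp.server.fastmcp import FastMCP

# A minimal MCP server exposing one tool and one resource; any
# MCP-capable AI application can connect to it and call these.
mcp = FastMCP("finance-data")

@mcp.tool()
def open_invoices(customer_id: str) -> list[dict]:
    """Return open invoices for a customer (stubbed for illustration)."""
    return [{"invoice": "INV-1042", "customer": customer_id, "amount": 1250.0}]

@mcp.resource("customers://{customer_id}/profile")
def customer_profile(customer_id: str) -> str:
    """Expose customer master data as a readable resource."""
    return f"Customer {customer_id}: enterprise tier, EMEA region"

if __name__ == "__main__":
    # The stdio transport lets a local MCP client launch and talk to this
    # server; the SDK also supports remote transports.
    mcp.run(transport="stdio")
```

MCP standardizes how a single AI application reaches data and tools; A2A complements it by letting agents discover one another (via a published Agent Card) and delegate tasks to each other over HTTP. Together, these are the building blocks that begin to turn isolated agents into a connected intelligence layer.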