This week in Chicago, I had the privilege of speaking to a consumer packaged goods IT leadership team about the emergence of Agentic AI. This wasn’t your typical generative AI discussion—it was an in-depth exploration of the next evolution of intelligent systems that can act independently, reason through complexity, and collaborate seamlessly with both humans and other agents.
The goal of the session was to ground the team in the full spectrum of Agentic AI—its capabilities, challenges, and opportunities—then guide them in mapping their strategic initiatives by reimagining how to enable the business through AI-ready data, human-AI collaboration, scalable frameworks, and a forward-looking approach to the future of AI.
I want to share a few key takeaways from the talk for those who couldn’t attend or want a recap of what we covered.
🔍 Agentic AI: The Next Frontier
We kicked things off by reframing how we think about AI. We’re no longer just building models to assist—we’re developing agents that autonomously pursue goals, use tools, and adapt dynamically. This shift marks a move from predictive outputs to purposeful orchestration. Think less chatbot, more digital co-worker.

⚙️ Real-World Use Cases
From customer service to research and development, Agentic AI is already having an impact:
- Enterprise Ops: Agents reconciling purchase orders, resolving exceptions, and triggering workflows.
- Customer Experience: Multi-turn agents resolving issues, managing escalations, and triggering actions in real time.
- Marketing: Agents dynamically generating personalized content, optimizing campaign performance in-flight, and A/B testing messaging across segments.
- Sales: Agents auto-prioritizing leads, drafting outbound communications, updating CRMs, and surfacing real-time buyer intent insights.
- Supply Chain: Agents forecasting demand, identifying at-risk inventory, coordinating logistics, and autonomously communicating with vendors.
📊 AI-Ready Data, Knowledge Graphs, & Synthetic Data
A major theme we explored was how to prepare your organization’s data to work with agents. Structured, time-aware, and metadata-rich data is essential. Knowledge graphs were highlighted as a key enabler—giving agents context, relationships, and the ability to reason across domains.
We also explored the growing role of synthetic data in enabling Agentic AI. Synthetic data can be used to simulate edge cases, fill gaps in real-world datasets, and stress-test agent behavior before deployment. When combined with knowledge graphs and rich metadata, it allows agents to reason more effectively, adapt to dynamic scenarios, and perform reliably in environments where traditional training data falls short.
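To make the knowledge-graph idea concrete, here is a minimal sketch of a triple store an agent could query for context. The entities, relations, and SKU names are hypothetical examples, not a real schema:

```python
# Minimal sketch of a knowledge graph an agent could query for context.
# Entity and relation names here are illustrative assumptions.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(relation, object)]

    def add(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def related(self, subject, relation):
        """Return all objects linked to `subject` by `relation`."""
        return [o for r, o in self.edges[subject] if r == relation]

kg = KnowledgeGraph()
kg.add("SKU-1042", "supplied_by", "Acme Foods")
kg.add("SKU-1042", "stocked_in", "DC-Chicago")
kg.add("Acme Foods", "located_in", "Ohio")

# An agent can chain relations to reason across domains:
supplier = kg.related("SKU-1042", "supplied_by")[0]
print(kg.related(supplier, "located_in"))  # ['Ohio']
```

The chained lookup at the end is the point: the graph lets an agent connect a product to its supplier and the supplier to a region, a hop that flat tables make much harder.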
🛠️ Frameworks & Architectures
I shared examples from the current landscape of agent development frameworks:
We walked through the evolving agentic ecosystem, starting with LangGraph for asynchronous, stateful workflows and AutoGen for building collaborative, role-driven agents. Tools like CrewAI and MetaGPT enable department-style orchestration with specialized agent roles, while LangSmith and DSPy provide observability and optimization layers for debugging and refining agent behavior.
We also touched on Microsoft’s Semantic Kernel, which brings orchestration, memory, and planner modules to .NET and Python environments, making it easier to integrate agents into enterprise-grade applications. Other emerging frameworks like Flowise, Haystack, and Cognitive Architectures are expanding the toolkit for building highly contextual and controllable agent systems.
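To show what "department-style orchestration with specialized agent roles" looks like, here is a framework-agnostic sketch in the spirit of CrewAI- or AutoGen-style crews. The roles and the stub handlers are illustrative assumptions, not any framework's real API; in practice each handler would wrap an LLM call:

```python
# Framework-agnostic sketch of role-driven agent orchestration.
# Roles and handlers are hypothetical stand-ins for LLM-backed agents.
from typing import Callable

class Agent:
    def __init__(self, role: str, handle: Callable[[str], str]):
        self.role = role
        self.handle = handle  # in a real system: an LLM call with a persona

def run_pipeline(agents: list[Agent], task: str) -> str:
    """Pass a task through specialized agents in sequence."""
    result = task
    for agent in agents:
        result = agent.handle(result)
    return result

researcher = Agent("researcher", lambda t: t + " | findings gathered")
writer = Agent("writer", lambda t: t + " | draft written")
reviewer = Agent("reviewer", lambda t: t + " | approved")

print(run_pipeline([researcher, writer, reviewer], "brief"))
# brief | findings gathered | draft written | approved
```

Frameworks like CrewAI add the real machinery on top of this shape: delegation between roles, shared memory, and tool access per agent.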
🔁 RPA vs. Agentic AI
We took time to compare traditional automation approaches like RPA (Robotic Process Automation) with newer agentic frameworks such as AutoGen. The core difference lies in adaptability. RPA is deterministic and best suited for repetitive, rule-based tasks with minimal variance—think of it as "if-this-then-that" automation. It excels in scenarios like invoice processing, form filling, or system-to-system data transfers where the rules rarely change.
In contrast, Agentic AI thrives in environments that require reasoning, multi-step decision-making, and dynamic collaboration. These agents can reflect, adjust, and even coordinate with other agents or humans to solve problems. This makes Agentic AI ideal for tasks like cross-system exception handling, adaptive customer engagement, or scenario planning in supply chain operations.
Use RPA when you want consistency and speed in narrow tasks; use Agentic AI when you need flexibility, context-awareness, and scalability across complex workflows.
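The contrast can be sketched in a few lines of Python. The invoice fields, thresholds, and escalation rules below are hypothetical, but they show the difference in kind: fixed rules versus context-aware judgment with a path to human review:

```python
# Contrast sketch: a deterministic RPA-style rule vs. an adaptive,
# agent-style decision. All fields and thresholds are hypothetical.

def rpa_route_invoice(invoice: dict) -> str:
    """RPA-style: fixed if-this-then-that rules; brittle on unseen cases."""
    if invoice["amount"] < 1000:
        return "auto-approve"
    if invoice["amount"] < 10000:
        return "manager-approval"
    return "finance-review"

def agentic_route_invoice(invoice: dict, history: list) -> str:
    """Agent-style: weighs context, flags anomalies, escalates to a human."""
    if invoice.get("vendor") not in {h["vendor"] for h in history}:
        return "escalate: unfamiliar vendor, request human review"
    avg = sum(h["amount"] for h in history) / len(history)
    if invoice["amount"] > 3 * avg:
        return "escalate: amount anomalous vs. vendor history"
    return rpa_route_invoice(invoice)  # routine case: fall back to rules

history = [{"vendor": "Acme", "amount": 500},
           {"vendor": "Acme", "amount": 700}]
print(agentic_route_invoice({"vendor": "Acme", "amount": 5000}, history))
# escalate: amount anomalous vs. vendor history
```

Note that the agentic version still delegates the routine path to the deterministic rules—the two approaches are complements, not rivals.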
🔗 Model Context Protocol (MCP)
We took a closer look at one of the hottest agentic topics of the past few weeks: Model Context Protocol, a foundational component of scalable agent ecosystems. MCP standardizes how agents connect to tools, data sources, and contextual state—giving them consistent access to memory, toolkits, and persona data across sessions and teams, and enabling continuity, identity, and richer collaboration.
We also discussed a modern architecture approach built around an AI Agent Shell + MCP. The AI Agent Shell is a modular, framework-agnostic interface that governs agent behavior, memory, tools, and context management.
This shell acts as the orchestrator for agent execution and serves as the abstraction layer between LLMs and enterprise systems. By integrating with platforms like Snowflake Cortex for data access and in-database inference, SAP Joule for intelligent ERP insights, and ServiceNow Assist for workflow automation and ticket resolution, the Agent Shell becomes a true enterprise enabler.
Paired with Model Context Protocol (MCP), the system maintains persistent memory, tool access, and agent identity across sessions and services—enabling long-term reasoning, auditability, and adaptive collaboration across departments.
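As a conceptual sketch of what "persistent memory, tool access, and agent identity across sessions" means, here is a toy agent-shell context that serializes its state between runs. This models the idea only—it is not the actual Model Context Protocol specification, and the file-based store, field names, and methods are all assumptions for illustration:

```python
# Toy sketch of an agent shell carrying persistent context (memory,
# tools, identity) across sessions. Conceptual only; NOT the real MCP.
import json
import os
import tempfile

class AgentContext:
    def __init__(self, agent_id: str, path: str):
        self.path = path
        try:
            with open(path) as f:          # resume a prior session
                self.state = json.load(f)
        except FileNotFoundError:          # first run: fresh identity
            self.state = {"agent_id": agent_id, "memory": [], "tools": []}

    def remember(self, fact: str):
        self.state["memory"].append(fact)

    def grant_tool(self, tool: str):
        self.state["tools"].append(tool)

    def save(self):  # persist so the next session resumes with full context
        with open(self.path, "w") as f:
            json.dump(self.state, f)

path = os.path.join(tempfile.mkdtemp(), "ops-agent.json")

# Session 1: the agent learns something and gains a tool, then persists.
ctx = AgentContext("ops-agent", path)
ctx.remember("PO-7741 flagged for exception handling")
ctx.grant_tool("snowflake_query")
ctx.save()

# Session 2: a fresh process reloads the same identity, memory, and tools.
ctx2 = AgentContext("ops-agent", path)
print(ctx2.state["memory"])
```

The durable, auditable state file is what makes long-term reasoning and cross-session auditability possible; a production shell would back this with a governed store rather than local JSON.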
🤖 Responsible Agentic AI
I emphasized that with autonomy comes responsibility. It’s critical to implement explainability, audit trails, permission gating, and ethical decision frameworks into any agent system. Responsible AI isn’t optional—it’s the infrastructure for trust.
In a recent EY Pulse survey of 500 executives, only 34% reported having a comprehensive approach to governance in place. With the rise of agentic systems, risk mitigation becomes a key consideration, since autonomous systems will be taking actions and accessing varied sources of data. Ensuring the right data governance controls are in place, combined with explainability and reasoning, is essential.
I also discussed the importance of scaling Agentic AI responsibly by building robust validation mechanisms. I outlined several best practices—from implementing confidence thresholds to using reflection loops, where agents self-assess and critique their own outputs.
Agents can be configured to defer actions if their confidence falls below a predefined threshold or escalate to a human for review. We also discussed the role of validator agents, peer review systems, and human-in-the-loop QA workflows to ensure accuracy, safety, and alignment. These techniques are essential not only for mitigating risk but for building trust as agents take on more autonomous responsibilities within the enterprise.
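The confidence-threshold-plus-reflection pattern can be sketched directly. The scoring and critique functions below are hypothetical stand-ins for real model or validator-agent calls, and the threshold and loop count are arbitrary example values:

```python
# Sketch of a confidence-gated action with a reflection loop: the agent
# self-critiques low-confidence drafts and escalates if still unsure.
# score() and reflect() are stand-ins for real LLM/validator calls.

CONFIDENCE_THRESHOLD = 0.8   # example value; tune per use case
MAX_REFLECTIONS = 2

def score(draft: str) -> float:
    """Hypothetical confidence scorer; in practice a validator agent."""
    return min(1.0, 0.5 + 0.2 * draft.count("[revised]"))

def reflect(draft: str) -> str:
    """Hypothetical self-critique step that revises the draft."""
    return draft + " [revised]"

def act_or_escalate(draft: str) -> str:
    for _ in range(MAX_REFLECTIONS):
        if score(draft) >= CONFIDENCE_THRESHOLD:
            return f"execute: {draft}"
        draft = reflect(draft)           # self-assess and improve
    if score(draft) >= CONFIDENCE_THRESHOLD:
        return f"execute: {draft}"
    return f"escalate to human: {draft}"  # defer when confidence is low

print(act_or_escalate("reply to customer"))
```

The same skeleton accommodates validator agents (swap `score` for a second agent's judgment) and human-in-the-loop QA (route every `escalate` result to a review queue).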
💼 Preparing the Workforce
Agentic AI doesn’t eliminate jobs—it reshapes them. New roles like Agent Orchestrators, Systemic Prompt Architects, and Agent Quality Reviewers are emerging. Organizations need to begin investing in training and simulation to prepare employees to collaborate with AI-native teammates.
We’re not just deploying smarter tools. We’re co-designing the future of work alongside intelligent systems. – Tom
Now is the time to experiment, define roles, structure data, and explore where agents can drive exponential value in your organization.