The AI revolution has ignited a debate about what constitutes an "AI agent." These days, the term commonly implies an autonomous, self-learning system that pursues complex goals and adapts over time. That vision is impressive, but its purism can alienate traditional developers and slow innovation.
It's time to expand the definition and embrace a broader perspective: AI agents don't always need to self-learn or chase lofty goals. Functional agents (a new term) that connect large language models (LLMs) to APIs, physical devices, or event-driven systems can be just as impactful. By prioritizing function over form, we enable a broader pool of developers to build AI agents, empower AI and traditional developers to collaborate, and create practical solutions that drive real-world value. Let's make progress without always demanding perfection.
The agent purist’s dilemma
The traditional definition of an AI agent—rooted in significant AI research—demands autonomy, reasoning, learning, and goal-oriented behavior. These agents, like those powering autonomous vehicles or reinforcement learning models, are impressive but complex. They require deep expertise in machine learning (ML), which can feel like a barrier to traditional developers skilled in APIs, databases, or event-driven architectures.
This purist stance risks gatekeeping, sidelining practical agents that don't learn but still solve critical problems. Why should an agent that wraps an API call or responds to a sensor be considered inferior? Not every challenge needs a self-evolving neural network; sometimes a reliable, lightweight solution is enough.
Progress over perfection: function over form
An AI agent, at its core, extends an LLM’s capabilities to act in the world, whether by fetching data, controlling devices, or responding to events. Functional agents don’t always need to learn or pursue long-term goals—they just need to execute effectively. Recent tech blogs, news articles, and analyst reports highlight how such agents are transforming industries. Here are some examples that show the power of function over form:
API-wrapping agents
These agents translate LLM outputs into structured API calls, enabling seamless data retrieval. For instance, TechCrunch describes Amazon's AI shopping agent that handles e-commerce tasks, like querying third-party stores for purchases.
Another example would be a customer service agent pulling order details from a CRM to answer queries, relying on the LLM for natural language understanding and the developer’s API skills for execution. These agents are deterministic, reliable, and don’t require learning, making them accessible to traditional developers.
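The customer service pattern above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the `fake_llm_extract` function and the in-memory `ORDERS` store are hypothetical stand-ins for an actual LLM call and a CRM API.

```python
import json

# Hypothetical in-memory "CRM" standing in for a real API backend.
ORDERS = {"A-1001": {"status": "shipped", "eta": "2 days"}}

def fake_llm_extract(user_message: str) -> str:
    """Stand-in for an LLM call that extracts a structured intent as JSON.

    A real agent would prompt the model to return this schema.
    """
    return json.dumps({"intent": "order_status", "order_id": "A-1001"})

def handle_query(user_message: str) -> str:
    """Deterministic agent: the LLM parses intent, traditional code executes it."""
    intent = json.loads(fake_llm_extract(user_message))
    if intent["intent"] == "order_status":
        order = ORDERS.get(intent["order_id"])
        if order is None:
            return "Order not found."
        return f"Order {intent['order_id']} is {order['status']} (ETA: {order['eta']})."
    return "Sorry, I can't help with that."

print(handle_query("Where is my order A-1001?"))
```

The LLM handles natural language understanding; everything after the JSON parse is ordinary, testable application code.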
Physical device agents
Agents that connect LLMs to physical systems are gaining traction in the fields of internet of things (IoT) and robotics. A Wired article highlights how intelligent digital twins in manufacturing use data from sensors and control software to optimize factory operations. These agents don’t evolve—they execute predefined actions, leveraging traditional developers’ hardware integration expertise to bring AI into industrial settings.
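A device agent of this kind often reduces to a predefined control rule. The sketch below assumes a hypothetical temperature sensor and fan actuator; a real deployment would use an IoT SDK or PLC interface.

```python
def control_step(temperature_c: float, threshold_c: float = 75.0) -> str:
    """Predefined control rule: no learning, just a reliable, auditable action.

    Returns the actuator command for the current sensor reading.
    """
    if temperature_c > threshold_c:
        return "fan_on"
    return "fan_off"
```

Because the rule is deterministic, it can be unit-tested and certified like any other industrial control code.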
Event-driven agents
These agents react to real-time triggers, such as system alerts or user actions. Hacker Noon notes how hybrid storage architectures support event-driven AI agents in cloud infrastructure for maintaining context and autonomy. Similarly, a n8n blog describes email management agents that monitor inboxes, draft responses, or flag urgent messages. These agents operate on rules, not learning, aligning with traditional developers’ skills in event handling and workflow automation.
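An email triage agent like the one described can be a simple rule over an incoming event. The marker list below is a hypothetical example; in practice the "draft a reply" branch is where an LLM call would plug in.

```python
def triage_email(subject: str, body: str) -> str:
    """Rule-based trigger handling: flag urgent mail, otherwise queue a draft.

    Operates on rules, not learning; an LLM could generate the draft text.
    """
    urgent_markers = ("urgent", "asap", "outage")
    text = f"{subject} {body}".lower()
    if any(marker in text for marker in urgent_markers):
        return "flag_urgent"
    return "draft_reply"
```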
Data aggregation agents
As discussed in a Medium article, these agents collect data from multiple sources—like Google Analytics, social media APIs, or email platforms—and use LLMs to generate summarized reports. For example, a marketing agent might compile campaign metrics into a concise dashboard, relying on traditional developers’ data pipeline expertise. These agents prioritize reliable data processing over autonomy.
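The aggregation step itself is classic data-pipeline work. This sketch assumes each source has already been fetched into a dictionary of named metrics; the LLM's role (summarizing the merged report) is left out.

```python
def aggregate_metrics(sources: dict[str, dict[str, int]]) -> dict[str, int]:
    """Sum identically named metrics across all sources into one report."""
    report: dict[str, int] = {}
    for metrics in sources.values():
        for name, value in metrics.items():
            report[name] = report.get(name, 0) + value
    return report

# Example: merging hypothetical analytics and social media pulls.
report = aggregate_metrics({
    "analytics": {"clicks": 10},
    "social": {"clicks": 5, "likes": 3},
})
```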
Chatbot orchestrators
VentureBeat highlights agents that coordinate multiple chatbots or LLMs for customer service, routing queries to specialized bots (e.g., billing vs. technical support). These orchestrators follow predefined logic, a task well-suited to traditional developers’ system architecture skills, enabling seamless interactions without requiring self-learning and improvement capabilities.
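Routing logic like this is plain system architecture. The keyword table below is a hypothetical illustration; production routers might instead ask an LLM to classify the query, but the dispatch itself stays deterministic.

```python
# Hypothetical routing table mapping specialized bots to trigger keywords.
ROUTES = {
    "billing": ("invoice", "refund", "charged"),
    "technical": ("error", "crash", "login"),
}

def route_query(query: str) -> str:
    """Dispatch a query to a specialized bot via predefined logic, no learning."""
    text = query.lower()
    for bot, keywords in ROUTES.items():
        if any(keyword in text for keyword in keywords):
            return bot
    return "general"
```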
Specialized functional agents
Additional examples include research agents, like Perplexity, which retrieve and summarize information without self-learning to continually improve research results over time, and security monitoring agents that flag anomalies in system logs.
In the healthcare field, an Aisera blog notes agents achieving 98% accuracy in chest X-ray analysis for tuberculosis, saving $150 billion annually in the US. In finance, agents check 5,000 transaction details in milliseconds, reducing fraud by 70%. These agents focus on specific, high-impact tasks, proving that simplicity can be powerful.
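The security monitoring agent mentioned above can be as simple as a rule-based scan over log lines. The anomaly markers here are hypothetical; real systems would draw them from a threat-detection ruleset.

```python
def flag_anomalies(log_lines: list[str]) -> list[str]:
    """Return log lines matching known anomaly markers: rules, not learning."""
    markers = ("failed login", "timeout", "denied")
    return [
        line for line in log_lines
        if any(marker in line.lower() for marker in markers)
    ]
```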
This small set of examples shows that functional agents are deployable and cost-effective, often outperforming complex systems in practical settings. Traditional developers should look for use cases within their organization that allow them to create functional agents using familiar tools like APIs, webhooks, or scripting.
Why this matters
AI agents are a hot topic, with a recent IBM and Morning Consult survey revealing that 99% of enterprise developers are exploring them. For these initiatives to succeed, collaboration between AI and traditional developers is crucial. AI developers can concentrate on improving LLMs, while traditional developers, with their expertise in system integration, can build the connections to real-world systems.
This approach also accelerates innovation. Functional agents can be built and deployed quickly, delivering immediate value. In manufacturing, predictive maintenance agents reduce downtime by 30% (Aisera Blog), while customer service agents boost retention, with 72% of customers valuing fast service (Deloitte).
By starting with simple, functional agents, developers can iterate and later incorporate lessons learned, avoiding the overengineering of complex agents for simple tasks. After all, building a self-learning system just to make API calls wastes resources; it's using a sledgehammer to crack a walnut.
Wrapping up
The AI ecosystem thrives on diversity—of ideas, approaches, and developers. Functional agents, performing a range of tasks from API wrappers to event-driven scripts, are not lesser; they're essential.
A great place to get started is with low-risk, high-impact use cases, like API wrapping or data aggregation, to build confidence in AI. AI developers should value these practical solutions, while traditional developers should embrace AI as an extension of their toolkit. Let’s stop debating what an agent should be and focus on what works.
How to get started
We have resources for you, no matter your preferred language or learning style:
- Agentic AI overview: For a comprehensive understanding of agentic AI, explore this article.
- Building enterprise-ready AI agents: Learn how to streamline development with Red Hat AI in this Red Hat blog article.
Developer-specific resources:
- Python developers: Discover how to create agentic solutions using Llama Stack with Python in this Red Hat Developer blog article.
- Java developers: Dive into a three-part series on agentic AI with Quarkus: Part 1, Part 2, and Part 3.
- Node.js developers: Get a practical guide on using Llama Stack with Node.js to build agentic solutions in this Red Hat Developer blog article.
Open source agents:
- Agentic AI examples with Red Hat AI: Explore various agentic AI frameworks and LLMs running on Red Hat AI platforms.
- Red Hat AI agentic demo: Experience a full agentic AI workflow with real-time interactions across multiple systems including CRM, PDF generation, and Slack integrations.
About the author
With over thirty years in the software industry at companies like Sybase, Siebel Systems, Oracle, IBM, and Red Hat (since 2012), I am currently an AI Technical Architect and AI Futurist. Previously at Red Hat, I led a team that enhanced worldwide sales through strategic sales plays and tactics for the entire portfolio, and prior to that, managed technical competitive marketing for the Application Services (middleware) business unit.
Today, my mission is to demystify AI architecture, helping professionals and organizations understand how AI can deliver business value, drive innovation, and be effectively integrated into software solutions. I leverage my extensive experience to educate and guide on the strategic implementation of AI. My work focuses on explaining the components of AI architecture, their practical application, and how they can translate into tangible business benefits, such as gaining competitive advantage, differentiation, and delighting customers with simple yet innovative solutions.
I am passionate about empowering businesses to not only harness AI to anticipate future technological landscapes but also to shape them. I also strive to promote the responsible use of AI, enabling everyone to achieve more than they could without it.