GLOSSARY

AI Agent

An AI agent uses an LLM to plan, call tools, observe results, and iterate toward a goal across many steps — not a single prompt-response exchange.

Quick answer
An AI agent is a software system that takes a goal from a user, plans a sequence of steps toward that goal, calls tools or models as it goes, and decides when the goal has been met. It differs from a chatbot by taking autonomous multi-step action rather than single-turn response, and requires budget, tool, and audit controls in production.

WHAT IT IS

Agents differ from chatbots in two ways: they have tool access (search, code execution, APIs, file systems, databases) and they operate in a loop — reasoning about state, taking an action, checking the outcome, and deciding the next step. The pattern is formalized in frameworks like ReAct (Reason + Act) and in agentic runtimes such as the OpenAI Assistants API, Anthropic's tool-use pattern, LangGraph, and CrewAI.
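The reason-act-observe loop can be sketched in a few lines. This is a minimal illustration, not any framework's API: `plan_next_step` is a hypothetical stub standing in for an LLM call, and the single `lookup` tool is invented for the example.

```python
# Minimal ReAct-style agent loop: reason about state, act, observe, repeat.

def plan_next_step(goal, history):
    """Stub 'LLM planner': finish once an observation answered the goal."""
    if history and history[-1][1] is not None:
        return ("finish", history[-1][1])
    return ("lookup", goal)

# Bounded tool set: the agent can only call what is registered here.
TOOLS = {
    "lookup": lambda query: {"capital of France": "Paris"}.get(query),
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):               # hard step budget
        action, arg = plan_next_step(goal, history)
        if action == "finish":
            return arg                        # agent decides the goal is met
        observation = TOOLS[action](arg)      # act, then observe the outcome
        history.append((action, observation))
    return None                               # budget exhausted without success
```

The loop structure, not the stub planner, is the point: every iteration chooses an action from a fixed tool set, records the observation, and checks a stop condition.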

HOW IT WORKS

A well-built agent defines a clear objective, a bounded tool set, an explicit stop condition, guardrails for irreversible actions, and logging/evaluation so regressions are visible. Retrieval-Augmented Generation (RAG) is often a component, not the whole system.
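The checklist above can be made concrete as configuration plus a guarded tool call. All names here are illustrative assumptions, not a real library: the sketch shows a tool allow-list, a budget ceiling, and a per-step audit log.

```python
# Sketch of the guardrails named above: bounded tools, budget, logging.
import logging

class AgentConfig:
    def __init__(self, objective, allowed_tools, max_steps, budget_usd):
        self.objective = objective
        self.allowed_tools = set(allowed_tools)  # bounded tool surface
        self.max_steps = max_steps               # explicit stop condition
        self.budget_usd = budget_usd             # spend ceiling

def guarded_call(config, tool_name, cost_usd, spent_usd):
    """Reject tools outside the allow-list and calls that exceed the budget.

    Returns the updated running spend on success.
    """
    if tool_name not in config.allowed_tools:
        raise PermissionError(f"tool not allowed: {tool_name}")
    if spent_usd + cost_usd > config.budget_usd:
        raise RuntimeError("budget ceiling exceeded")
    logging.info("tool=%s cost=%.4f", tool_name, cost_usd)  # per-step audit trail
    return spent_usd + cost_usd
```

Keeping these checks outside the model's control is the design choice that matters: the LLM can request any action, but only allow-listed, in-budget calls execute, and every one leaves a log line for evaluation.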

WHEN TO USE

Use agents where work involves multi-step tool use, long context, or decisions that benefit from iteration. Avoid them for simple Q&A that a single retrieval call would answer more cheaply and quickly.

RELATED QUESTIONS

What is an AI agent?
An AI agent is a software system that takes a goal from a user, plans a sequence of steps toward that goal, calls tools or models as it goes, and decides when the goal has been met. It differs from a plain chatbot in that it takes autonomous multi-step action, not just single-turn response.
How is an agent different from a chatbot?
A chatbot answers a turn at a time. An agent plans across turns, invokes tools (APIs, calculators, browsers, databases), holds state, and can loop until a stopping condition is met. Agents introduce reliability and safety concerns — tool access, budget, and rollback — that chatbots do not.
When should enterprises deploy agents?
Agents fit tasks with clear goals and well-defined tool surfaces — customer-service triage, internal data lookup, sales prep, coding assistants. They do not fit open-ended creative work or high-stakes decisions where human judgment is cheaper than guardrails.
What are the biggest risks with agents?
Prompt injection, runaway tool use (cost blowouts, infinite loops), data exfiltration via uncontrolled tool calls, and eroded auditability if there is no per-step log. Production deployments need budget ceilings, tool allow-lists, structured tracing, and human approval gates for high-impact actions.
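A human approval gate for high-impact actions can be as simple as a callback between the agent's decision and execution. This is a hedged sketch under assumed names: `HIGH_IMPACT` and the `approve` callback (which might be backed by a ticket queue or UI prompt) are hypothetical.

```python
# Hypothetical approval gate: high-impact actions pause for a human decision.
HIGH_IMPACT = {"send_email", "issue_refund", "write_crm"}

def execute(action, payload, approve):
    """Run `action` only if it is low-impact or a human approves it.

    `approve(action, payload)` is a callback returning True or False.
    """
    if action in HIGH_IMPACT and not approve(action, payload):
        return {"status": "blocked", "action": action}
    return {"status": "executed", "action": action}
```

The gate enforces the distinction made above: the agent proposes, but anything that touches customer-facing systems or money waits for a person.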
How does NUUN AI build production agents?
We follow NIST AI RMF and ISO/IEC 42001, scope agents to a bounded tool surface, enforce budget and rate limits, log every step with structured tracing, and require human approval on any action that writes to customer-facing systems or moves money.
