Machine Learning Glossary: Agentic

This page contains glossary terms related to agents. For the complete set of glossary terms, see the full glossary.

A

act

#agent

A stage in the agentic loop in which the agent executes the action chosen during the reason stage. For example, the act stage could send an API request.

action

#agent

In reinforcement learning, the mechanism by which the agent transitions between states of the environment. The agent chooses the action by using a policy.

action space

#agent

The set of resources an agent can use to perform a task. An action space might include the tools and APIs that the agent can invoke and the permissions the agent holds. In general, the action space should be just big enough for the agent to perform the task: if it is too small, the agent might lack the resources to complete the task; if it is too large, the agent tends to become more error-prone.
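
One way to picture this is as an explicit allow-list of tools. The following sketch is illustrative (the tool names and the scoping rule are placeholders, not a specific framework's API):

```python
# A minimal sketch of an action space as an explicit tool allow-list.
# The tools here are plain functions standing in for real APIs.

def search_web(query: str) -> str:
    return f"results for {query!r}"

def read_file(path: str) -> str:
    return f"contents of {path}"

# The action space: only the tools the task actually requires.
ACTION_SPACE = {
    "search_web": search_web,
    "read_file": read_file,
}

def invoke(tool_name: str, *args):
    """Execute a tool only if it is inside the agent's action space."""
    if tool_name not in ACTION_SPACE:
        raise PermissionError(f"{tool_name} is outside the action space")
    return ACTION_SPACE[tool_name](*args)
```

Keeping the allow-list minimal follows the same principle as least-privilege access control: a tool that isn't registered simply can't be invoked, whether by mistake or by a confused plan.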

agent

#generativeAI
#agent

Software that can reason about user inputs in order to plan and execute actions on behalf of the user.

In reinforcement learning, an agent is the entity that uses a policy to maximize the expected return gained from transitioning between states of the environment.

agentic

#generativeAI
#agent

The adjective form of agent. Agentic refers to the qualities that agents possess (such as autonomy).

agentic loop

#agent

A cycle that an agent iterates through until a termination condition is met. The cycle typically consists of the following four stages:

  1. Observe
  2. Reason
  3. Act
  4. Feedback
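
The four stages above can be sketched as a simple loop. In this illustrative sketch, the stage functions are placeholders standing in for real model and tool calls:

```python
# A minimal sketch of the agentic loop: observe -> reason -> act -> feedback,
# repeated until a termination condition is met.

def run_agentic_loop(goal, observe, reason, act, feedback, max_steps=10):
    """Iterate the four stages until done or the step budget runs out."""
    history = []
    for _ in range(max_steps):            # max_steps is one termination condition
        observation = observe(history)    # 1. Observe: examine progress so far
        chosen_action = reason(goal, observation)  # 2. Reason: decide what to do
        result = act(chosen_action)       # 3. Act: execute the chosen action
        done = feedback(result)           # 4. Feedback: evaluate the action
        history.append((chosen_action, result))
        if done:                          # termination condition met
            break
    return history
```

A run stops either when the feedback stage reports success or when the step budget, another termination condition, is exhausted.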

agentic workflow

#generativeAI
#agent

A dynamic process in which an agent autonomously plans and executes actions to achieve a goal. The process may involve reasoning, invoking external tools, and self-correcting its plan.

agent orchestration

#agent

The centralized management and routing of tasks across multiple sub-agents or LLM calls. Agent orchestration breaks down complex tasks into smaller sub-tasks and assigns them to the most capable sub-agents.
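
As a sketch of the idea, an orchestrator can decompose a task and dispatch each sub-task to a registered sub-agent. The decomposition and sub-agents below are hard-coded placeholders for LLM calls:

```python
# A minimal sketch of agent orchestration: a central orchestrator splits a
# task into sub-tasks and routes each one to the most capable sub-agent.

def research_agent(task):
    return f"notes on {task}"

def writing_agent(task):
    return f"draft about {task}"

SUB_AGENTS = {"research": research_agent, "write": writing_agent}

def orchestrate(task):
    """Decompose a task and assign each sub-task to a sub-agent."""
    sub_tasks = [("research", task), ("write", task)]  # fixed decomposition
    return [SUB_AGENTS[kind](payload) for kind, payload in sub_tasks]
```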

autonomous agent

#agent

An agent that works towards a complex goal by planning, acting, and adapting without continuous human intervention.

E

evaluator agent

#agent

An agent that assesses another agent's results before those results are finalized. You can imagine one agent as manufacturing a product and a separate agent—the evaluator agent—testing that product before it is released.

Critic is a synonym for evaluator agent.

F

feedback

#agent

A stage in an agentic loop in which the agent evaluates the action taken during the act stage. For example, if the agent sent an API request during the act stage, the feedback stage might determine whether the API response was successful.

G

Gemini models

#generativeAI
#agent

Google's state-of-the-art Transformer-based multimodal models. Gemini models are specifically designed to integrate with agents.

Users can interact with Gemini models in a variety of ways, including through an interactive dialog interface and through SDKs.

generative agents (simulacra)

#agent

Agents endowed with unique personas, memories, and routines that simulate realistic human behavior.

See Generative Agents: Interactive Simulacra of Human Behavior for details.

M

manager agent

#agent

An agent that controls one or more sub-agents.

multi-agent collaboration

#agent

A framework where multiple specialized AI agents interact, debate, or pass tasks to one another to solve a complex problem.

O

observe

#agent

A stage in the agentic loop in which the agent examines or evaluates some aspect of its progress. For example, suppose that the act stage generates some code. The observe stage might then run tests on that generated code.

P

plan-and-solve

#agent

An agentic strategy where the model first drafts an explicit, multi-step plan before attempting to execute any actions.
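
A sketch of the two phases, with the planner and executor as placeholders for model calls:

```python
# A minimal sketch of plan-and-solve: draft an explicit multi-step plan
# first, and only then execute the steps.

def make_plan(goal):
    """Planning phase: produce an explicit list of steps before acting."""
    return [f"gather facts about {goal}", f"summarize {goal}"]

def execute_step(step):
    return f"done: {step}"

def plan_and_solve(goal):
    plan = make_plan(goal)                   # 1. draft the full plan up front
    return [execute_step(s) for s in plan]   # 2. then execute each step
```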

plugin

#agent

A standardized, modular tool that can be easily attached to an agent to extend its capabilities. For example, a GitHub plugin enables agents to perform actions like reading GitHub issues and creating pull requests.

procedural memory

#agent

In agents, the knowledge of how to do something. For example, an agent might develop a procedural memory of how to search the web and then display the top three sites.

R

reason

#agent

A stage in the agentic loop in which the agent determines what to do. For example, the agent might determine that a particular API request should be sent.

reflection

#generativeAI
#agent

A strategy for improving the quality of an agentic workflow by examining (reflecting on) a step's output before passing that output to the next step.

The examiner is often the same LLM that generated the response (though it could be a different LLM). How could the same LLM that generated a response be a fair judge of its own response? The "trick" is to put the LLM in a critical (reflective) mindset. This process is analogous to a writer who uses a creative mindset to write a first draft and then switches to a critical mindset to edit it.

For example, imagine an agentic workflow whose first step is to create text for coffee mugs. The prompt for this step might be:

You are a creative. Generate humorous, original text of less than 50 characters suitable for a coffee mug.

Now imagine the following reflective prompt:

You are a coffee drinker. Would you find the preceding response humorous?

The workflow might then only pass text that receives a high reflection score to the next stage.
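
The two-prompt workflow above can be sketched as follows. The generator, the reflective scorer, and the score threshold are all illustrative placeholders; in practice both functions would be calls to the same (or a different) LLM:

```python
# A minimal sketch of reflection: generate a response, then re-examine it
# with a critical pass and only forward it if the reflection score is high.

def generate(prompt):
    """Creative pass: draft a response (placeholder for an LLM call)."""
    return "Decaf: the crime scene of coffee"

def reflect(response):
    """Critical pass: score the draft (a toy length-based heuristic here)."""
    return 0.9 if len(response) < 50 else 0.2

def reflective_step(prompt, threshold=0.5):
    draft = generate(prompt)
    score = reflect(draft)
    return draft if score >= threshold else None  # gate on the reflection score
```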

router agent

#agent

An agent that classifies a user query and then invokes the most appropriate agent to handle it.
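
A sketch of the pattern, where a toy keyword classifier stands in for an LLM-based classifier:

```python
# A minimal sketch of a router agent: classify the query, then invoke the
# most appropriate agent to handle it.

def billing_agent(query):
    return "billing: " + query

def tech_support_agent(query):
    return "support: " + query

def classify(query):
    """Placeholder classifier; a real router would use an LLM here."""
    return "billing" if "invoice" in query.lower() else "support"

ROUTES = {"billing": billing_agent, "support": tech_support_agent}

def route(query):
    return ROUTES[classify(query)](query)
```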

S

self-correction

#agent

An agent's ability to detect an error in its own output and then try a different approach.

state

#agent

In reinforcement learning, the parameter values that describe the current configuration of the environment, which the agent uses to choose an action.

state machine agent

#agent

An agent whose workflows are constrained by rigid rules. State machine agents generally make fewer mistakes than autonomous agents but lack the freedom to adapt to situations outside their constraints.
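
The constraint can be pictured as an explicit transition table: any action not in the table is rejected. The states and transitions below are illustrative:

```python
# A minimal sketch of a state machine agent: the workflow is constrained by
# an explicit transition table, so the agent can never wander off-script.

TRANSITIONS = {
    ("start", "greet"): "greeted",
    ("greeted", "collect_info"): "informed",
    ("informed", "resolve"): "done",
}

class StateMachineAgent:
    def __init__(self):
        self.state = "start"

    def step(self, action):
        key = (self.state, action)
        if key not in TRANSITIONS:    # rigid rule: reject anything off-plan
            raise ValueError(f"{action} not allowed in state {self.state}")
        self.state = TRANSITIONS[key]
        return self.state
```

The rigidity cuts both ways, as the definition notes: the table prevents invalid sequences of actions, but the agent cannot handle a situation the table's author didn't anticipate.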

sub-agent

#agent

A specialized, narrowly focused model invoked by a manager agent to handle a specific subset of a larger problem. Sub-agents typically have a narrower action space than agents.

T

termination condition

#agent

In agentic AI, the predefined criteria that tell the agent to stop iterating. For example, here are a few possible termination conditions:

  • The agent successfully completed the goal.
  • The agent can't use any more resources.
  • A human-in-the-loop has detected a problem.

In reinforcement learning, the conditions that determine when an episode ends, such as when the agent reaches a certain state or exceeds a threshold number of state transitions. For example, in tic-tac-toe (also known as noughts and crosses), an episode terminates either when a player marks three consecutive spaces or when all spaces are marked.