Artificial Intelligence - UNIT 1, Topic 7: Structure of Agents
Part A: Understanding Agent Structure in Artificial Intelligence
1. Introduction
In Artificial Intelligence, an agent is an entity that perceives its environment and takes actions to achieve its goals. But how does it decide what to do?
The structure of an agent defines how it is internally designed to make decisions, process inputs, and select actions. It is like the brain and body of the agent.
2. What is Agent Structure?
Agent structure refers to the internal architecture or design of an agent: how it processes percepts (inputs), stores knowledge, makes decisions, and performs actions. It determines how the agent reacts to the environment.
3. Components of an Agent Structure
Component | Description
Sensors | Collect information (percepts) from the environment
Actuators | Perform actions in the environment
Agent Program | Software or logic that decides what action to take
Architecture | The platform (hardware/software) on which the agent operates
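The Python sketch below shows one way these components can fit together in code. The names (simple_agent_program, run_agent) are illustrative assumptions, not from any standard library; the loop stands in for the architecture that connects sensors, the agent program, and actuators.

# Minimal sketch of the agent loop: sensors supply percepts, the agent
# program selects an action, and the actuators would execute it.
def simple_agent_program(percept):
    """Maps the current percept to an action (placeholder logic)."""
    return "noop" if percept is None else f"respond_to_{percept}"

def run_agent(agent_program, percepts):
    """The 'architecture': repeatedly read a percept and execute an action."""
    actions = []
    for percept in percepts:              # sensors supply percepts
        action = agent_program(percept)   # agent program decides
        actions.append(action)            # actuators would execute the action
    return actions

print(run_agent(simple_agent_program, ["heat", None, "light"]))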
4. Types of Agent Structures
There are different types of agent designs based on how simple or intelligent the agent is. Each structure suits different kinds of environments and tasks.
1. Simple Reflex Agents
- React only to the current input using predefined rules.
- Do not use history or memory.
- Use "if condition, then action" logic.
Example: A thermostat turns on the heating if the temperature is below 20°C.
IF temperature < 20°C THEN turn on heater
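A minimal Python sketch of this rule as a simple reflex agent. The function name and the "turn off" branch are assumptions added for illustration; the agent looks only at the current percept and keeps no state.

# Hypothetical simple reflex agent for the thermostat example:
# one condition-action rule, no memory of past percepts.
def thermostat_agent(temperature_c):
    if temperature_c < 20:       # condition
        return "turn_on_heater"  # action
    return "turn_off_heater"

print(thermostat_agent(18))  # turn_on_heater
print(thermostat_agent(23))  # turn_off_heater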
2. Model-Based Reflex Agents
- Have a model (memory) of the world to track what is happening.
- Can handle partially observable environments.
- Use internal state to remember the past.
Example: A smart vacuum remembers which rooms it has cleaned.
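A possible Python sketch of the vacuum example, where the set of already-cleaned rooms acts as the agent's internal model. The class name and action strings are made up for illustration.

# Model-based reflex agent: the vacuum only perceives the room it is in,
# but its internal state lets it avoid cleaning the same room twice.
class VacuumAgent:
    def __init__(self):
        self.cleaned = set()   # internal state / model of the world

    def act(self, current_room):
        if current_room in self.cleaned:
            return "move_to_next_room"
        self.cleaned.add(current_room)   # update the model
        return "clean"

vacuum = VacuumAgent()
for room in ["kitchen", "hall", "kitchen"]:
    print(room, "->", vacuum.act(room))
# kitchen -> clean, hall -> clean, kitchen -> move_to_next_room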
3. Goal-Based Agents
- Decide actions by comparing the outcomes of possible actions against a specific goal.
- Involve planning and search algorithms.
- Offer better decision-making than reflex agents.
Example: A robot in a maze tries to reach the exit using path planning.
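A Python sketch of goal-based behaviour using breadth-first search as the planning step. The maze encoding (0 = free cell, 1 = wall) and the function name plan_path are assumptions for illustration, not a standard API.

# Goal-based agent: search for a sequence of moves that reaches the goal,
# then the agent would execute that plan step by step.
from collections import deque

def plan_path(maze, start, goal):
    """Return a list of positions from start to goal, or None if unreachable."""
    rows, cols = len(maze), len(maze[0])
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and maze[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None

maze = [[0, 0, 1],
        [1, 0, 0],
        [1, 1, 0]]
print(plan_path(maze, (0, 0), (2, 2)))
# [(0, 0), (0, 1), (1, 1), (1, 2), (2, 2)]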
4. Utility-Based Agents
- Choose actions that maximize their utility (happiness/satisfaction with the outcome).
- Utility = how beneficial or useful the outcome is.
- Can handle multiple goals and choose the best.
Example: A delivery robot chooses the shortest and safest route to save time and battery.
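A Python sketch of utility-based choice for the delivery example. The routes, risk values, and weights are invented purely to show how a utility function can trade off travel time against safety.

# Utility-based agent: score every candidate route and pick the best one.
routes = [
    {"name": "main_road", "time_min": 12, "risk": 0.10},
    {"name": "shortcut",  "time_min": 8,  "risk": 0.40},
    {"name": "back_road", "time_min": 15, "risk": 0.05},
]

def utility(route, time_weight=1.0, risk_weight=30.0):
    # Higher utility = faster and safer; the weights encode the trade-off.
    return -(time_weight * route["time_min"] + risk_weight * route["risk"])

best = max(routes, key=utility)
print(best["name"])  # main_road under these example weights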
5. Learning Agents
- Can learn from experience and improve their performance over time.
- Adapt to new environments.
- Have four components:
  - Learning element – improves the agent using feedback
  - Critic – gives feedback on how well the agent is doing
  - Performance element – chooses the actions to take
  - Problem generator – suggests new experiences to try
Example: A self-driving car gets better at driving after each trip by learning from traffic data.
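The Python sketch below shows the four components working together on a toy task: learning which of two routes earns more reward. The task, class, and all numbers are invented for illustration only.

# Learning agent sketch: performance element, critic, learning element,
# and problem generator cooperating on a simple two-action problem.
import random

class LearningAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # learned value estimates
        self.counts = {a: 0 for a in actions}

    def performance_element(self):
        """Choose the action with the best learned value."""
        return max(self.values, key=self.values.get)

    def problem_generator(self, explore_prob=0.2):
        """Occasionally suggest a random action to gain new experience."""
        if random.random() < explore_prob:
            return random.choice(list(self.values))
        return None

    def learning_element(self, action, reward):
        """Update the value estimate using the critic's feedback."""
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n

def critic(action):
    """Feedback from the environment: route_a is better on average."""
    return random.gauss(1.0, 0.1) if action == "route_a" else random.gauss(0.5, 0.1)

agent = LearningAgent(["route_a", "route_b"])
for _ in range(200):
    action = agent.problem_generator() or agent.performance_element()
    agent.learning_element(action, critic(action))
print(agent.performance_element())  # almost always "route_a" after learning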
5. Comparison of Agent Structures
Agent Type | Uses Memory | Has Goals | Learns | Example
Simple Reflex Agent | ❌ | ❌ | ❌ | Light turns on when someone enters
Model-Based Agent | ✅ | ❌ | ❌ | Smart vacuum
Goal-Based Agent | ✅ | ✅ | ❌ | Maze-solving robot
Utility-Based Agent | ✅ | ✅ | ❌ | Route-optimizing delivery robot
Learning Agent | ✅ | ✅ | ✅ | Self-driving car
6. Summary
- The structure of agents defines how an AI system works internally to make decisions.
- Different agents are built based on task complexity, environment, and goals.
- Advanced agents not only think logically but also learn and improve.
- Understanding agent structure helps in building intelligent and adaptive AI systems.