Artificial Intelligence - UNIT 3, Topic 2: Knowledge Representation Issues
UNIT III - Topic 2: Knowledge Representation Issues
Part A: Introduction
✅ What is Knowledge Representation?
In AI, Knowledge Representation (KR) refers to the method of encoding information about the world into a format that a computer system can use to solve problems and make decisions.
But representing knowledge is not always easy; several issues or challenges arise during this process. These are called Knowledge Representation Issues.
Part B: Why are These Issues Important?
- A poorly designed KR system leads to incorrect or slow decision-making.
- Understanding these issues helps AI systems become more reliable, intelligent, and flexible.
Part C: Major Issues in Knowledge Representation
1. Representational Adequacy
Can the representation capture all kinds of knowledge needed for solving the problem?
- Some systems cannot represent time, uncertainty, or default values.
- Example: A simple rule like “Birds can fly” may not work for penguins or ostriches.
2. Inferential Adequacy
Can the system derive new knowledge from existing knowledge?
- An intelligent system must not just store facts but also reason from them.
- Example: If “All humans are mortal” and “Socrates is a human”, the system should infer “Socrates is mortal” (see the sketch below).
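A minimal sketch of this kind of inference in Python is shown below. The fact tuples, the rule format, and the `infer` helper are illustrative assumptions, not part of any standard KR library.

```python
# Minimal sketch: deriving a new fact from a stored fact and a rule.

facts = {("human", "Socrates")}        # stored knowledge: Socrates is a human
rule = ("human", "mortal")             # if X is human, then X is mortal

def infer(facts, rule):
    """Apply the rule to every matching fact and return the newly derived facts."""
    premise, conclusion = rule
    return {(conclusion, x) for (pred, x) in facts if pred == premise}

print(infer(facts, rule))              # {('mortal', 'Socrates')} -- never stored, but inferred
```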
3. Inferential Efficiency
Can the system perform reasoning quickly and effectively?
- It should be able to answer questions in real time, especially in games, robotics, or medical AI.
4. Acquisition and Learning
How easy is it to add or learn new knowledge?
- Manually feeding knowledge into the system is slow and error-prone.
- AI should be able to learn automatically from new inputs or environments.
5. Handling Incomplete and Uncertain Knowledge
Can the system make decisions even when information is missing or unclear?
- Real-world data is often incomplete (e.g., unknown weather) or uncertain (e.g., the possibility of rain).
- Solution: Use probabilities or fuzzy logic (see the sketch below).
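The sketch below shows one simple way to represent such uncertainty as a fuzzy membership value. The 40%/90% thresholds and the linear ramp are made-up illustrative numbers, not a standard model.

```python
# Minimal sketch: a fuzzy "chance of rain" value instead of a hard yes/no fact.

def chance_of_rain(humidity_percent):
    """Map humidity to a degree of belief in rain, between 0.0 and 1.0."""
    if humidity_percent <= 40:
        return 0.0
    if humidity_percent >= 90:
        return 1.0
    return (humidity_percent - 40) / 50    # linear ramp between 40% and 90%

print(chance_of_rain(70))   # 0.6 -> the system can still act on partial information
```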
6. Expressiveness
Can the representation handle different types of knowledge (e.g., facts, relationships, rules)?
- Some tasks need temporal knowledge (time-based), spatial knowledge (location-based), or procedural knowledge (how to do things).
7. Ambiguity and Vagueness
Natural language and real-world data often contain ambiguous or vague meanings.
- Example: “He saw the man with the telescope.” → Who has the telescope?
- The system must deal with multiple interpretations.
8. Scalability
Can the knowledge base grow and update without breaking?
- As more knowledge is added, the system must remain efficient and accurate.
9. Consistency
Does new knowledge conflict with existing knowledge?
- Example: If “All birds can fly” is stored and we later add “A penguin is a bird and cannot fly”, this causes a conflict (one common fix is sketched below).
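One common remedy is default reasoning: keep “birds can fly” as a default and record exceptions that override it. The sketch below is an illustrative toy with made-up dictionaries, not a full non-monotonic logic.

```python
# Minimal sketch: defaults with exceptions, so "a penguin cannot fly" does not
# contradict "birds can fly" -- it simply overrides the default for penguins.

defaults = {"bird": {"can_fly": True}}                      # general rule
exceptions = {"penguin": {"can_fly": False},
              "ostrich": {"can_fly": False}}                # known exceptions

def can_fly(animal, kind="bird"):
    """Check exceptions first, then fall back to the default for the kind."""
    if animal in exceptions:
        return exceptions[animal]["can_fly"]
    return defaults[kind]["can_fly"]

print(can_fly("sparrow"))   # True  (default applies)
print(can_fly("penguin"))   # False (exception overrides the default)
```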
10. Context Dependence
The meaning of information can change depending on context.
- Example: The word “bank” could mean a river bank or a money bank depending on the sentence (a toy disambiguation sketch follows).
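A very rough illustration of context-dependent interpretation is sketched below; the keyword sets and the overlap rule are made-up assumptions, not a real word-sense disambiguation method.

```python
# Minimal sketch: pick the sense of "bank" whose context words overlap most
# with the sentence.

senses = {
    "river bank": {"river", "water", "fishing", "shore"},
    "money bank": {"money", "account", "loan", "deposit"},
}

def disambiguate(sentence):
    """Return the sense with the largest keyword overlap with the sentence."""
    words = set(sentence.lower().split())
    return max(senses, key=lambda sense: len(senses[sense] & words))

print(disambiguate("He sat on the bank fishing by the river"))    # river bank
print(disambiguate("She opened a bank account for her deposit"))  # money bank
```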
Part D: How to Handle These Issues

Issue | Solution / Technique
Incompleteness & uncertainty | Use probability, fuzzy logic, or Bayesian nets
Conflicts in rules | Use non-monotonic logic or default reasoning
Speed & efficiency | Use inference engines, heuristics, and pruning
Context & ambiguity | Use semantic networks and context-based parsing
✅ Real-World Example: Medical Diagnosis System

Challenge | Solution
Patients show incomplete symptoms | Use probabilistic reasoning to estimate the most likely disease
Conflicting symptoms | Prioritize with confidence scores or fuzzy values
Many new diseases appear | The system must allow easy knowledge updates
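As a rough illustration of the first row, the sketch below scores candidate diseases from whatever symptoms happen to be observed. The priors and symptom probabilities are made-up numbers for illustration only, not medical data.

```python
# Minimal sketch: unnormalised naive-Bayes-style scoring over observed symptoms.
# All numbers below are made-up illustrative values.

priors = {"flu": 0.3, "cold": 0.7}                 # P(disease)
likelihood = {                                     # P(symptom | disease)
    "flu":  {"fever": 0.9, "cough": 0.8},
    "cold": {"fever": 0.2, "cough": 0.7},
}

def score(disease, observed_symptoms):
    """Multiply the prior by the likelihood of each observed symptom."""
    p = priors[disease]
    for symptom in observed_symptoms:
        p *= likelihood[disease][symptom]
    return p

observed = ["fever"]                               # "cough" is simply unknown
print(max(priors, key=lambda d: score(d, observed)))   # flu (0.27 vs 0.14)
```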
📝 Summary

- Knowledge Representation Issues are the challenges faced while designing systems that can store and reason with knowledge.
- Major issues include incompleteness, ambiguity, learning difficulty, reasoning speed, scalability, and uncertainty.
- By addressing these issues, AI becomes more accurate, intelligent, and useful in real-world situations.
"It’s not just what the AI knows – it’s how well
it understands, uses, and learns from it that defines its intelligence."