AH Artificial Intelligence

John Lawrence

Mind Map on AH Artificial Intelligence, created by John Lawrence on 05/26/2013.

AH Artificial Intelligence
1 Problem Representation
1.1 Abstraction

Annotations:

  • Problem abstraction is the process of defining the problem clearly and unambiguously. This includes defining the initial or start state, defining the goal or end state, and defining the constraints that apply.
1.2 Symbolic Representation

Annotations:

  • A way of representing the state space (all states reachable by applying legal moves to the initial state) using symbols to stand for actual entities. Production rules (based on the legal moves) are used to generate new states. They can be written in English, or in symbolic form as below: old state -- (legal move) --> new state
1.3 Trees and Graphs

Annotations:

  • A search tree is a graphical representation of all states in the state space, starting with the initial state at the top. Usually a repeated state (a loop) is not re-expanded. A search graph is similar, but repeated states are combined: a state is never shown more than once on a graph, so it is generally more compact. A graph may have bi-directional arrows and cross branches. In an AND/OR graph, AND branches require both legs to be completed and are joined by an arc. Used in route planning: a route is usually split into two halves at an intermediate point, and then split up further.
2 Search Strategies
2.1 Exhaustive Search

Annotations:

  • Breadth first - each layer is explored before moving on to the next layer; can be implemented with a queue. Depth first - each branch is explored to its end, then backtracking to the next branch; can be implemented with a stack. Breadth first will always find the shallowest (optimal) solution; depth first will find solutions on the left of the tree more quickly. Breadth first requires more memory, as the whole tree must be stored to generate the solution path; depth first stores only the current branch. Loops will not prevent a breadth-first search from finding a solution, but they can trap a depth-first search. Combinatorial explosion is an issue with exhaustive searches.
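The queue-versus-stack duality described above can be captured in one sketch. This is an illustration, not part of the course material; the names `search` and `successors` are invented for the example:

```python
from collections import deque

def search(start, goal, successors, breadth_first=True):
    """Exhaustive search: a queue gives breadth first, a stack gives depth first."""
    frontier = deque([[start]])   # agenda of paths, not just states
    visited = {start}             # repeated states (loops) are not re-expanded
    while frontier:
        # popleft = queue (breadth first); pop = stack (depth first)
        path = frontier.popleft() if breadth_first else frontier.pop()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

On a small state space where A leads to B and C, and both lead to D, breadth first returns the path through B (explored first, layer by layer) while depth first returns the path through C (the last branch pushed is explored first).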
2.2 Heuristics

Annotations:

  • Heuristic is a "rule of thumb" that can be applied to help us evaluate the helpfulness of a node in our pursuit of a goal.  Using heuristics can help "prune" the tree. May not always be accurate or helpful. An evaluation function is usually used to give a cost value of a node.
2.3 Hill Climbing

Annotations:

  • From the current state, evaluate the cost of each of the successor nodes.  Choose to expand the successor node that has the best score as long as it is better than the current node's score. Issues: May not find the optimal solution (a poorer route initially might actually get better later on) and it may "plateau" and halt without a solution being found.
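The "expand the best successor only if it beats the current node" rule above can be sketched as follows (an illustrative sketch; `cost` here is an evaluation function where lower is better, and the names are invented for the example):

```python
def hill_climb(state, successors, cost):
    """Greedy hill climbing: move to the best successor only if it improves."""
    while True:
        neighbours = successors(state)
        if not neighbours:
            return state
        best = min(neighbours, key=cost)
        if cost(best) >= cost(state):
            # Plateau or local optimum: halt, possibly without the optimal solution
            return state
        state = best
```

For example, minimising (x - 3)^2 over the integers with successors x - 1 and x + 1 climbs from 0 to the optimum 3, but on a cost surface with a local dip elsewhere it would halt there instead.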
2.4 Best First Search

Annotations:

  • In this algorithm, the best node currently held in memory (the agenda) will be expanded. This might cause the search to jump between different branches of the tree.
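The agenda described above is naturally a priority queue ordered by the evaluation function. A minimal sketch (names invented; `h` is the heuristic evaluation, lower is better):

```python
import heapq

def best_first(start, goal, successors, h):
    """Best-first search: always expand the lowest-scoring node on the agenda,
    which may jump between different branches of the tree."""
    agenda = [(h(start), start, [start])]
    visited = set()
    while agenda:
        _, state, path = heapq.heappop(agenda)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in successors(state):
            heapq.heappush(agenda, (h(nxt), nxt, path + [nxt]))
    return None
```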
2.5 A*

Annotations:

  • A* takes into account the evaluation of the node (heuristic function) and the evaluation of the path cost (cost function). These are added to get a total cost. From this it uses the best first algorithm to find its solution.
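The total cost f(n) = g(n) + h(n), with g the path cost so far and h the heuristic estimate, drives the same best-first expansion. A sketch under the assumption that `successors` yields (neighbour, edge cost) pairs (names invented for the example):

```python
import heapq

def a_star(start, goal, successors, h):
    """A*: order the agenda by f = g (cost so far) + h (heuristic estimate)."""
    agenda = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while agenda:
        f, g, state, path = heapq.heappop(agenda)
        if state == goal:
            return path, g
        for nxt, step in successors(state):
            g2 = g + step
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(agenda, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

With an admissible h (never overestimating), A* finds the cheapest path rather than merely the most promising-looking one.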
2.6 MiniMax

Annotations:

  • Procedure for a two-player game. Node evaluation functions are either maximised (for the current player's move) or minimised (for the opponent's move). Not ideal: in reality mistakes are made, and the heuristic function might not be accurate.
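The alternation of maximising and minimising levels can be sketched recursively (an illustration; the game tree, depth limit and evaluation function are supplied by the caller):

```python
def minimax(node, depth, maximising, children, evaluate):
    """Max player picks the highest score; the opponent picks the lowest."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)   # leaf or depth limit: apply the heuristic
    scores = [minimax(k, depth - 1, not maximising, children, evaluate)
              for k in kids]
    return max(scores) if maximising else min(scores)
```

On a two-ply tree where the opponent replies to each of our moves, the value of the root is the best of the worst-case replies.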
3 Knowledge Representation
3.1 Semantic Nets

Annotations:

  • Used to describe relationships between objects. Each argument (object) is represented once in a circle. Each relationship is shown via a link (arrowed) between the objects. Easily translated into Prolog facts. Inheritance of objects can be shown clearly via is_a and has relationships.
3.2 Frames

Annotations:

  • Slots of information built up on each object. A frame may include a Subclass and an Instance slot. A slot may have a default value, which can be overridden if needed by a frame; this is indicated by a * next to the slot. A frame with an Instance slot represents an actual individual object.
3.3 Converting Between Frames and Semantic Nets

Annotations:

  • Semantic net --> frame: objects linked by "Subclass" and "Instance" relationships become the frame names. The other relationships form the slot names in each frame, and the objects provide the values for those slots. Frame --> semantic net: the frame names and the slot values form the object circles; the relationships are taken from the slot names.
3.4 Prolog

Annotations:

  • Declarative language with in-built depth first search strategy.
3.4.1 Recursion

Annotations:

  • Recursive rules call themselves. A recursive rule usually needs a non-recursive base case (often a simple fact subgoal) to come first; otherwise the rule will loop forever. Recursion cuts down on code, making it more concise.
3.4.2 Inheritance - from Semantic Nets

Annotations:

  • Semantic nets show inheritance, but not explicitly. Translating a semantic net into Prolog only creates facts; rules need to be added to express the inheritance between subclasses and instances. These are found as follows: Subclass relationships: choose a superclass object in the semantic net (an object with a subclass relationship pointing to it) and find a relationship off that superclass object. Then: this_relationship(X, Y) :- subclass(X, Z), this_relationship(Z, Y). Instance relationships: choose an object with an instance relationship (or is_a predicate) pointing to it and find a relationship off that object. Then: this_relationship(X, Y) :- instance(X, Z), this_relationship(Z, Y).
3.4.3 Lists

Annotations:

  • A list is written [item1, item2, item3, ...]. With [H|T], the head H is item1 and the tail T is [item2, item3, ...]. Recursion works by extracting the head and applying the rule recursively to the tail; the non-recursive version matches the item against the head. Membership: member(X, [X|Tail]). member(X, [Head|Tail]) :- member(X, Tail). Concatenation: concat([], L, L). concat([Head|Tail], L2, [Head|Tail2]) :- concat(Tail, L2, Tail2).
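The head/tail recursion scheme used by the membership and concatenation rules above can be mirrored in Python as a cross-check (an illustration only; the course material itself uses Prolog):

```python
def member(x, lst):
    """Prolog-style membership: match the head, else recurse on the tail."""
    if not lst:
        return False            # empty list: no match
    head, tail = lst[0], lst[1:]
    return head == x or member(x, tail)

def concat(l1, l2):
    """Prolog-style concatenation: empty base case, else keep the head."""
    if not l1:
        return l2               # concat([], L, L)
    head, tail = l1[0], l1[1:]
    return [head] + concat(tail, l2)
```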
4 Rule Based Systems
4.1 Expert Systems

Annotations:

  • Consists of a user interface, inference engine, knowledge base and working memory. Remove the knowledge base and you are left with the expert system shell. Created via: 1) Knowledge acquisition, via a domain expert, knowledge engineer and domain user. Can be difficult: vague answers, intuition, lack of support for change by the expert. 2) Knowledge representation (Prolog, IF-THEN rules etc.). 3) System validation (matching responses against the expert). HOW and WHY justification: "How did you get that solution?" looks back; "Why did you ask that question?" looks forward.
4.2 Forward and Backward Rules

Annotations:

  • Forward: IF conditions THEN conclusion. Backward: conclusion IF conditions. An AND/OR tree can also be used to show these (different syntax, same semantics). Backward chaining starts with a conclusion and attempts to satisfy its conditions; goal/hypothesis driven, and might follow unhelpful paths. Forward chaining starts with what is known and attempts to fire rules to establish more known facts, continuing until the desired fact is found; good for gathering information/monitoring, but might require a lot of user input or generate unhelpful facts. Often many rules could be fired - the conflict set. Conflict resolution strategies include: most specific (most conditions), first come first served (rule order), most recent (facts added most recently), priorities, and avoiding repetition.
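Forward chaining as described above can be sketched in a few lines, assuming rules are (conditions, conclusion) pairs. This naive version fires every applicable rule on each pass until nothing new is learned, so it sidesteps conflict resolution; all names are invented for the example:

```python
def forward_chain(facts, rules):
    """Fire rules whose conditions are all known until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)   # rule fires: its conclusion becomes known
                changed = True
    return facts
```

Starting from observed facts, chained rules establish intermediate conclusions ("mammal") and then further ones ("carnivore"), exactly the data-driven behaviour described above.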
4.3 Certainty Factors

Annotations:

  • Provides a degree of confidence in an answer or a conclusion. CFs can be ad hoc in their assignment. CFs need to be combined to give a final CF: CF(conclusion) = MIN(CF(cond1), CF(cond2), CF(cond3), ...) x CF(rule)
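The combination formula above translates directly (an illustrative helper; the name is invented):

```python
def conclusion_cf(condition_cfs, rule_cf):
    """CF(conclusion) = MIN(condition CFs) x CF(rule): the weakest condition
    limits the confidence, scaled by the rule's own certainty."""
    return min(condition_cfs) * rule_cf
```

For instance, conditions with CFs 0.8, 0.6 and 0.9 under a rule with CF 0.5 give a conclusion CF of 0.6 x 0.5 = 0.3.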
5 Vision

Annotations:

  • Five stages: digitisation; signal processing (enhancing); edge and region detection, which form the primal sketch; object recognition (AI); image understanding (AI). The Waltz algorithm works on a 2D primal sketch of trihedral shapes (three faces meeting at a common point). Convex edges (+) stick out; concave edges (-) stick in; > marks an obscuring edge. Start by marking all outer edges with >. Apply rules (18 possible) to work out + or - for the rest. Rule of thumb: bits that stick out are +, bits that are sunk in are -. Applications: quality control, robot/weapons guidance, X-ray screening, weather satellite understanding.
6 Language

Annotations:

  • Four main stages: speech recognition, natural language understanding, natural language generation, speech synthesis. Recognition: finding phonemes (sounds) to split into words; issues include homophones such as cheque/check/Czech (lexical ambiguity) and background noise. Understanding: syntactic analysis makes sure the structure is correct; semantic analysis extracts meaning; pragmatic analysis uses context to help with meaning. Ambiguity: multiple interpretations ("I saw the man with glasses"), imprecision ("the mouse was not working"), changing meanings ("cool", "guys" etc.). Parse tree: sentence = noun phrase + verb phrase. Noun phrase: 1) proper noun; 2) pronoun; 3) determiner, noun; 4) determiner, adjective, noun. Verb phrase: 1) verb; 2) verb, noun phrase. Sentences are tackled via depth-first search, with backtracking.
7 Robotics

Annotations:

  • Blocks world: predicates ontable(x), on(x, y) and clear(x) describe states; move(...) performs transitions. Pre-conditions, add facts and remove facts can be included for transition operations. Planning involves finding the moves that take the initial state to the goal state (using search techniques). The usual issues (combinatorial explosion, heuristic functions) sometimes mean working backwards from the goal state to the initial state is better (means-ends analysis). Global (whole strategy) and local (specific, small-scale strategy) planning are both used.
8 Machine Learning

Annotations:

  • Seven types of learning: rote learning (memorise everything!); learning from advice (difficult to transfer to machines, as advice is ill-defined); learning from experience (Samuel's Checkers program, genetic algorithms); learning from examples, i.e. inductive learning (e.g. neural nets provided with test cases); explanation-based learning (uses an example plus domain knowledge, e.g. the Soar system); learning by discovery (picking up facts by trying things - very human-like, but some projects exist, e.g. CYC); learning by analogy (reacting by considering similar situations, e.g. Evans' IQ test).
