[Artificial Intelligence] Solving problems by Searching (2)
- Solving problems by Search (1)
- Solving problems by Search (2)
Search Strategies
Uninformed Search (Blind Search)
- No information about the number of steps or the path cost from the current state to the goal
- Can only distinguish a goal state from a non-goal state
- No preference among Sibiu, Timisoara, Zerind
Informed Search (Heuristic Search)
- Since Bucharest is southeast of Arad and Sibiu lies in that direction, Sibiu is likely to be the best choice.
- Uninformed search strategies are distinguished by the order in which nodes are expanded.
Breadth First Search (BFS)
- Root node is expanded first
- All nodes generated by the root node are expanded next, then their successors, and so on
- All nodes at depth d in search tree are expanded before the nodes at depth $d+1$
- Considers all the paths of length 1 then all those of length 2, and so on.
- If there are several solutions, it finds the shallowest goal state first
- Complete
- Optimal provided the path cost is a nondecreasing function of the depth of the node
- Why is it not always the strategy of choice?
- Assume branching factor is $b$
- Suppose that solution has a path length of $d$
- Maximum number of nodes expanded before finding a solution
- $1+b+b^2+b^3+ \dots + b^d$
- Exponential complexity (Time and Space) $O(b^d)$
- Depth $d = 12$
- Assuming branching factor $b = 10$, 1,000 nodes/second, 100 bytes/node
- Time: 35 years, Memory: 111 terabytes
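A minimal breadth-first search sketch in Python, assuming states are hashable and `graph` is a dict mapping each state to a list of successor states (these names and the graph representation are illustrative, not from the slides):

```python
from collections import deque

def breadth_first_search(graph, start, is_goal):
    """Expand nodes level by level; returns the shallowest path to a goal, or None."""
    frontier = deque([[start]])           # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if is_goal(node):
            return path                   # shallowest goal is reached first
        for succ in graph.get(node, []):  # generate successors of the expanded node
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])
    return None                           # no solution
```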
Uniform Cost Search
- Expand the lowest cost node measured by path cost $g(n)$
- Breadth-first search is uniform cost search with $g(n) = d(n)$, the depth of node $n$
- Finds the cheapest solution provided that the cost of a path never decreases as we go along the path (i.e., every operator has a nonnegative cost)
- $g(\text{Successor}(n)) \geq g(n)$ for every node $n$
- Route Finding Problem
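A uniform-cost search sketch along the same lines, using a priority queue ordered by path cost $g(n)$; here `graph[n]` is assumed to yield `(successor, step_cost)` pairs with nonnegative step costs (illustrative representation, not from the slides):

```python
import heapq

def uniform_cost_search(graph, start, is_goal):
    """Expand the frontier node with the lowest path cost g(n); returns (cost, path) or None."""
    frontier = [(0, start, [start])]       # (g, state, path), ordered by g
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return g, path                 # cheapest solution, given nonnegative step costs
        if g > best_g.get(node, float("inf")):
            continue                       # stale queue entry; a cheaper path was found already
        for succ, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, succ, path + [succ]))
    return None
```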
Depth-First Search (DFS)
- Expand one of the nodes at the deepest level of the tree.
- When the search hits a dead end (a non-goal node with no successors), the search goes back and expands nodes at shallower levels.
- Modest memory requirements.
- $bm$ nodes
- $b$: branching factor, $m$ :maximum depth
- Time complexity $O(b^m)$
- For problems that have many solutions, depth first may be faster than breadth-first
- Can get stuck going down a path of infinite depth
- Not complete
- Not optimal
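A recursive depth-first search sketch (same illustrative graph representation as above); it keeps only the current path in memory, which is where the modest $bm$ space requirement comes from:

```python
def depth_first_search(graph, node, is_goal, path=None):
    """Expand the deepest node first; may fail to terminate on infinite-depth spaces."""
    path = (path or []) + [node]
    if is_goal(node):
        return path
    for succ in graph.get(node, []):
        if succ not in path:               # avoid revisiting states on the current path
            result = depth_first_search(graph, succ, is_goal, path)
            if result is not None:
                return result
    return None                            # dead end: backtrack to a shallower level
```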
Depth-First Search with a depth limit
Depth-limited search
- Impose a cutoff on the maximum depth of a path
- Map of Romania, 20 cities
- Solution must be of length at most 19
- Complete
- Not optimal
- If we choose a depth limit that is too small, is the search still complete? Optimal?
- Time complexity $O(b^l)$
- Space complexity $O(bl)$
- $l$: depth limit
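A depth-limited search sketch: plain depth-first search that refuses to expand below the limit, and signals "cutoff" when the limit was the reason for failure (the return convention is an illustrative choice, not prescribed by the slides):

```python
def depth_limited_search(graph, node, is_goal, limit, path=None):
    """Depth-first search that never expands nodes below depth `limit`."""
    path = (path or []) + [node]
    if is_goal(node):
        return path
    if limit == 0:
        return "cutoff"                    # limit reached on this branch
    cutoff_occurred = False
    for succ in graph.get(node, []):
        result = depth_limited_search(graph, succ, is_goal, limit - 1, path)
        if result == "cutoff":
            cutoff_occurred = True
        elif result is not None:
            return result
    return "cutoff" if cutoff_occurred else None   # None means failure even without the limit
```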
Iterative deepening search
- Good limit?
- Any city can be reached from any other city in at most 9 steps.
- A depth limit of 9 leads to a more efficient depth-limited search
- We will not know a good depth limit.
- Try all possible depth limits
- Depth 0, depth 1, depth 2,….
- Combines the benefits of depth-first and breadth-first search
- Optimal and complete
- Modest memory requirements of depth-first search
- Many states are expanded multiple times
- Time complexity?
- It does not matter much that the upper levels are expanded multiple times
- Cf. Number of expansions in depth-limited search to depth d with branching factor b
- $1 + b + b^2 + \dots + b^{d-2} + b^{d-1} + b^d$
- For b=10, d=5
- Cf. Depth-limited search
- 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
- In iterative deepening search, nodes on the bottom level are expanded once
- Nodes on the next-to-bottom level are expanded twice, and so on, up to the root of the search tree, which is expanded $d+1$ times
- Total number of expansions
- $(d+1)1 + (d)b + (d-1)b^2 + \dots + 3b^{d-2} + 2b^{d-1} + 1b^d$
- For b=10, d=5
- Total number of expanded nodes in iterative deepening search
- 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
- Only about 11% more nodes are expanded than in depth-limited search
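The two counts above are easy to verify; a quick check for $b=10$, $d=5$:

```python
b, d = 10, 5
dls_nodes = sum(b**i for i in range(d + 1))                # 1 + b + ... + b^d
ids_nodes = sum((d + 1 - i) * b**i for i in range(d + 1))  # (d+1)*1 + d*b + ... + 1*b^d
print(dls_nodes, ids_nodes, ids_nodes / dls_nodes - 1)     # 111111 123456 ~0.11 (about 11% more)
```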
- Space complexity : $O(bd)$
- Time complexity : $O(b^d)$
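A sketch of iterative deepening built on the `depth_limited_search` sketch above: rerun depth-limited search with limits 0, 1, 2, ... until the result is no longer a cutoff:

```python
from itertools import count

def iterative_deepening_search(graph, start, is_goal):
    """Run depth-limited search with increasing limits until a solution (or proven failure)."""
    for limit in count(0):                 # depth 0, depth 1, depth 2, ...
        result = depth_limited_search(graph, start, is_goal, limit)
        if result != "cutoff":
            return result                  # a path on success, None on proven failure
```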
Comparing search strategies
- $b$:branching factor, $d$:depth of solution,
- $m$:maximum depth of search tree, $l$:depth limit
2.5 Search in Neural Networks
- $x$: parameter we are adjusting
- $P(x)$: performance index
- Develop algorithm to optimize a performance index $P(x)$
- Find value of $x$ that minimizes $P(x)$
- Iterative algorithm
- Initial guess, $x_0$
- Update guess: $x_{n+1} = x_n + \gamma(n)\,\delta(n)$
- $\gamma(n)$: positive scalar, learning rate
- $\delta(n)$: search direction. Choice of $\delta(n)$ ?
Gradient Descent (경사하강법)
Fig. Steepest descent algorithm for a two-variable function
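A minimal gradient-descent sketch in the notation above, where the search direction is the negative gradient, $\delta(n) = -\nabla P(x_n)$; the quadratic performance index in the usage example is only an illustration:

```python
import numpy as np

def gradient_descent(grad_P, x0, learning_rate=0.1, n_steps=100):
    """Iterate x_{n+1} = x_n + gamma * delta_n with delta_n = -grad P(x_n)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - learning_rate * grad_P(x)  # step in the direction of steepest descent
    return x

# Usage: minimize P(x) = x1^2 + 2*x2^2, whose gradient is (2*x1, 4*x2).
x_min = gradient_descent(lambda x: np.array([2 * x[0], 4 * x[1]]), x0=[1.0, 1.0])
print(x_min)                               # approaches the minimizer (0, 0)
```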