cs-470:review-for-midterm [2015/01/06 14:51] (current) ryancha created

==Agent Models==

* PEAS
* Types of Agents
* Types of Environments
* Decision Theoretic Model

==Search==

I may use n-queens, checkers, chess, 8-puzzle (sliding block), Rubik's Cube, backgammon, or tic-tac-toe as the context in any question on the exam. If you are not familiar with these, you may want to learn the basic idea of how they work.

All algorithms '''as described in the book'''.

Space complexity and time complexity. You should be able to derive and discuss these; I prefer that you not just memorize them.

Optimality and completeness. You should be able to prove and discuss these.

You should know how all of the following algorithms work. This includes being able to simulate their execution on a small problem, and to discuss their optimality, completeness, space complexity, and time complexity.

===Uninformed===

Depth-first, Breadth-first, Uniform cost, Iterative Deepening, Bi-directional
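As a concrete reference point, here is a minimal sketch of iterative deepening (repeated depth-limited depth-first search with an increasing limit); the toy graph and goal node are invented for illustration:

```python
# Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
# The graph and goal below are made up for illustration only.
GRAPH = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [], 'E': ['G'], 'F': [], 'G': [],
}

def depth_limited(node, goal, limit, path):
    """DFS that refuses to descend more than `limit` edges."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for child in GRAPH[node]:
        result = depth_limited(child, goal, limit - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening(start, goal, max_depth=10):
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, limit, [start])
        if result is not None:
            return result
    return None

print(iterative_deepening('A', 'G'))  # ['A', 'B', 'E', 'G']
```

Note how the memory used is only the current path, which is why iterative deepening keeps DFS-like space complexity while still finding a shallowest goal, at the cost of re-expanding shallow nodes on every iteration.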

===Informed search===

Greedy best-first, A*, IDA*, Recursive Best-First Search, SMA*

Heuristics, Admissibility, Consistency, Making Heuristics

Tree vs. Graph search, Closed list.
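A minimal sketch of A* graph search with a closed list may help tie these topics together. The grid world and Manhattan-distance heuristic below are illustrative assumptions, not taken from the course materials:

```python
import heapq

# A* graph search with a closed list. The 5x5 grid world and the
# Manhattan-distance heuristic are assumptions chosen for illustration.
def astar(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in closed:          # graph search: never re-expand a node
            continue
        closed.add(node)
        for nxt, step_cost in neighbors(node):
            if nxt not in closed:
                g2 = g + step_cost
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')

# 4-connected grid with unit step costs.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)  # admissible and consistent
path, cost = astar((0, 0), (4, 4), grid_neighbors, manhattan)
print(cost)  # 8
```

With an admissible and consistent heuristic, the first time a node comes off the frontier its g value is already optimal, which is exactly what makes the closed list safe in graph search.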

===Beyond Classic Search===

On-line search

Search in a continuous space

Genetic Algorithms

Simulated Annealing

Particle Swarm Optimization
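Of these, simulated annealing is the easiest to sketch in a few lines. The one-dimensional objective, cooling schedule, and step size below are all invented for illustration:

```python
import math
import random

# Simulated annealing on a toy 1-D objective. The objective function,
# geometric cooling schedule, and step size are illustrative assumptions.
def anneal(f, x0, temp=10.0, cooling=0.99, steps=2000):
    x, fx = x0, f(x0)
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        fc = f(candidate)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / temperature), which shrinks as we cool.
        if fc <= fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
        temp *= cooling
    return x, fx

random.seed(0)
x, fx = anneal(lambda x: (x - 3) ** 2, x0=0.0)
print(round(x, 2))  # ends up near the minimum at x = 3
```

The key exam-relevant idea is the acceptance rule: early on (high temperature) the search behaves like a random walk and can escape local optima; late (low temperature) it degenerates into hill climbing.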

==Games==

Ply, Minimax, Terminal test, Evaluation functions, Cut-off, Quiescence search, Horizon problem, and Complexity

Min/Max search

$\alpha \beta$ pruning

$\alpha \beta$ pruning w/ random (chance) nodes and no limits on node values, that is, $-\infty$ to $\infty$

$\alpha \beta$ pruning w/ random (chance) nodes and bounded node values
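The deterministic case can be sketched compactly; here minimax with $\alpha \beta$ pruning runs over a small example tree given as nested lists (the tree values are invented for illustration):

```python
import math

# Minimax with alpha-beta pruning over an explicit game tree.
# Leaves are numbers (values from MAX's perspective); internal nodes are lists.
def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    if isinstance(node, (int, float)):       # terminal test: leaf value
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cut-off: MIN avoids this node
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # alpha cut-off: MAX avoids this node
                break
        return value

# MAX root over three MIN nodes; the middle MIN node is pruned after
# its first leaf, since 2 is already worse for MAX than the 3 on the left.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))  # 3
```

For the chance-node variants in the bullets above, the analogous trick is pruning at expectation nodes, which only works once the leaf values are bounded, since an unbounded leaf can always swing the expected value.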

==Probability==

* Axioms of Prob.
* Definition of Conditional Prob.
* Notation, including: P(a) means the probability P(A=true); P(A) means a vector of probabilities corresponding to all values (all two, in the binary case) of the random variable A.
* Marginalizing out a variable by summing, i.e. $P(a)=\sum_b P(a,b)$
* Using the joint to compute arbitrary probabilities and arbitrary conditional probabilities
* Bayes' Law
* Chain rule, using it in both directions, that is to (1) split a joint probability '''into''' conditionals and marginals, and (2) form a joint probability '''from''' conditionals and marginals.
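All of these bullets can be exercised on one small joint distribution; the numbers below are an invented example, chosen only so that the entries sum to 1:

```python
# A full joint distribution P(A, B) over two binary variables.
# The probabilities are an invented example for illustration.
joint = {
    (True,  True):  0.30,
    (True,  False): 0.10,
    (False, True):  0.20,
    (False, False): 0.40,
}

# Marginalize out a variable by summing: P(a) = sum_b P(a, b)
P_a = sum(p for (a, b), p in joint.items() if a)   # 0.40
P_b = sum(p for (a, b), p in joint.items() if b)   # 0.50

# Conditional probability from the joint: P(a | b) = P(a, b) / P(b)
P_a_given_b = joint[(True, True)] / P_b            # 0.60

# Bayes' Law: P(b | a) = P(a | b) P(b) / P(a)
P_b_given_a = P_a_given_b * P_b / P_a              # 0.75

# Chain rule, split direction:  P(a, b) = P(a | b) P(b)
# Chain rule, form direction:   rebuild the joint entry from the pieces.
assert abs(P_a_given_b * P_b - joint[(True, True)]) < 1e-9
```

Any query about A and B (marginal, conditional, or joint) reduces to sums and ratios of these four table entries, which is the point of the "using the joint" bullet above.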