## Philosophy Dictionary of Arguments



Norvig I 64
Search Algorithms/Russell/Norvig: uninformed search algorithms [are] algorithms that are given no information about the problem other than its definition. Although some of these algorithms can solve any solvable problem, none of them can do so efficiently. Informed search algorithms, on the other hand, can do quite well given some guidance on where to look for solutions.

A. Uninformed search.

Norvig I 81
Breadth-first search is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes are expanded at a given depth in the search tree before any nodes at the next level are expanded.

Norvig I 83
The memory requirements are a bigger problem for breadth-first search than is the execution time. (…) exponential-complexity search problems cannot be solved by uninformed methods for any but the smallest instances.

Norvig I 85
Depth-first search always expands the deepest node in the current frontier of the search tree. Both versions are nonoptimal.

Norvig I 87
Backtracking search: uses less memory. (…) only one successor is generated at a time rather than all successors; each partially expanded node remembers which successor to generate next. In this way, only O(m) memory is needed rather than O(bm). Backtracking search facilitates yet another memory-saving (and time-saving) trick: the idea of generating a successor by modifying the current state description directly rather than copying it first. This reduces the memory requirements to just one state description and O(m) actions. For this to work, we must be able to undo each modification when we go back to generate the next successor.

B. Informed (heuristic) search.

Norvig I 92
Best-first search is an instance of the general tree-search or graph-search algorithm in which a node is selected for expansion based on an evaluation function, f(n).
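The best-first scheme just described can be sketched as a priority-queue loop. This is a minimal illustration, not the book's code; the toy graph and the table-based evaluation function are hypothetical:

```python
import heapq

def best_first_search(start, goal, neighbors, f):
    """Repeatedly expand the frontier node with the lowest f(n)."""
    frontier = [(f(start), start)]        # priority queue ordered by f
    parents = {start: None}               # reached states, for path recovery
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                  # reconstruct path via parent links
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for succ in neighbors(node):
            if succ not in parents:
                parents[succ] = node
                heapq.heappush(frontier, (f(succ), succ))
    return None

# Hypothetical toy graph and evaluation function.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
f = {"A": 3, "B": 2, "C": 9, "D": 0}.get
print(best_first_search("A", "D", graph.get, f))  # ['A', 'B', 'D']
```

Substituting the heuristic h for f turns this same loop into greedy best-first search.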
The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first. Greedy best-first search tries to expand the node that is closest to the goal, on the grounds that this is likely to lead to a solution quickly. Thus, it evaluates nodes by using just the heuristic function; that is, f(n) = h(n).

Norvig I 108
A general tree search algorithm considers all possible paths to find a solution, whereas a graph search algorithm avoids consideration of redundant paths.

Norvig I 120
Online search: here, the agent is faced with a state space that is initially unknown and must be explored.

Norvig I 121
Local search algorithms: if the path to the goal does not matter, we might consider a different class of algorithms, ones that do not worry about paths at all. Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. These algorithms have two key advantages: (1) they use very little memory - usually a constant amount; and (2) they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable. ((s) For problems cf. >Local minima (local maxima); for a solution: >Simulated annealing.)

Norvig I 125
Local beam search (beam search is a path-based algorithm): the local beam search algorithm keeps track of k states rather than
Norvig I 126
just one. It begins with k randomly generated states. At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise, it selects the k best successors from the complete list and repeats. >Genetic algorithms.

And-Or search problem: see >Terminology/Norvig.

Norvig I 147
Online search: >Online search/Norvig.
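The local beam search loop of Norvig I 125f can be sketched as follows. This is a minimal illustration under assumed interfaces; the integer toy problem and its scoring function are hypothetical:

```python
import random

def local_beam_search(k, random_state, neighbors, is_goal, score, max_steps=100):
    """Keep k states; each step, generate all successors of all k states,
    halt if any one is a goal, otherwise keep the k best and repeat."""
    states = [random_state() for _ in range(k)]   # k randomly generated states
    for _ in range(max_steps):
        successors = [s2 for s in states for s2 in neighbors(s)]
        for s2 in successors:
            if is_goal(s2):
                return s2
        if not successors:
            return None
        # select the k best successors from the complete list
        states = sorted(successors, key=score, reverse=True)[:k]
    return None

# Hypothetical toy problem: climb integers toward 10, scored by closeness.
random.seed(0)
result = local_beam_search(
    k=3,
    random_state=lambda: random.randint(-20, 0),
    neighbors=lambda x: [x - 1, x + 1],
    is_goal=lambda x: x == 10,
    score=lambda x: -abs(10 - x),
)
print(result)  # 10
```

Because the k best successors are chosen from the pooled list, good states quickly dominate the beam, which is what distinguishes this from k independent restarts.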
Norvig I 154
Literature for local search: the Newton-Raphson method (Newton, 1671^{(1)}; Raphson, 1690^{(2)}) can be seen as a very efficient local search method for continuous spaces in which gradient information is available. Brent (1973)^{(3)} is a classic reference for optimization algorithms that do not require such information. Beam search, which we have presented as a local search algorithm, originated as a bounded-width variant of dynamic programming for speech recognition in the HARPY system (Lowerre, 1976)^{(4)}. A related algorithm is analyzed in depth by Pearl (1984)^{(5)}. The topic of local search was reinvigorated in the early 1990s by surprisingly good results for large constraint-satisfaction problems such as n-queens (Minton et al., 1992)^{(6)} and logical reasoning (Selman et al., 1992)^{(7)} and by the incorporation of randomness, multiple simultaneous searches, and other improvements.
Tabu search: a variant of hill climbing called tabu search has gained popularity (Glover and Laguna, 1997)^{(8)}. This algorithm maintains a tabu list of k previously visited states that cannot be revisited; as well as improving efficiency when searching graphs, this list can allow the algorithm to escape from some local minima.
Stage algorithm: another useful improvement on hill climbing is the stage algorithm (Boyan and Moore, 1998)^{(9)}. The idea is to use the local maxima found by random-restart hill climbing to get an idea of the overall shape of the landscape. >Constraint Satisfaction Problems/Norvig.

Norvig I 227
Constraint satisfaction problems (CSPs) represent a state with a set of variable/value pairs and represent the conditions for a solution by a set of constraints on the variables. Many important real-world problems can be described as CSPs. A number of inference techniques use the constraints to infer which variable/value pairs are consistent and which are not. These include node, arc, path, and k-consistency.
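Arc consistency, the most widely used of these inference techniques, can be sketched as an AC-3-style pruning loop. This is a minimal sketch, not the book's pseudocode; the toy "X < Y" CSP below is a hypothetical example:

```python
from collections import deque

def ac3(domains, constraints):
    """Make every arc (X, Y) consistent: prune values of X that have no
    supporting value in Y. constraints[(X, Y)](x, y) reports whether the
    pair of values is compatible. Returns False if a domain is wiped out."""
    queue = deque(constraints)                 # start with all arcs
    while queue:
        X, Y = queue.popleft()
        test = constraints[(X, Y)]
        pruned = {x for x in domains[X]
                  if not any(test(x, y) for y in domains[Y])}
        if pruned:
            domains[X] -= pruned
            if not domains[X]:
                return False                   # inconsistency detected
            # X's domain shrank, so arcs pointing at X must be rechecked
            queue.extend((Z, W) for (Z, W) in constraints if W == X)
    return True

# Hypothetical toy CSP: X < Y, both with domain {1, 2, 3}.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda a, b: a < b,
               ("Y", "X"): lambda a, b: a > b}
print(ac3(domains, constraints), domains)  # True {'X': {1, 2}, 'Y': {2, 3}}
```

Pruning 3 from X and 1 from Y here illustrates the general point: consistency inference shrinks domains before (or during) search, without itself assigning any variable.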
Backtracking search, a form of depth-first search, is commonly used for solving CSPs. Inference can be interwoven with search. The minimum-remaining-values and degree heuristics are domain-independent methods for deciding which variable to choose next in a backtracking search. The least-constraining-value heuristic helps in deciding which value to try first for a given variable. Backtracking occurs when no legal assignment can be found for a variable. Conflict-directed backjumping backtracks directly to the source of the problem. Local search using the min-conflicts heuristic has also been applied to constraint satisfaction problems with great success.

For forward chaining, backward chaining: see >Software agents/Norvig.

1. Newton, I. (1664–1671). Methodus fluxionum et serierum infinitarum. Unpublished notes.
2. Raphson, J. (1690). Analysis aequationum universalis. Apud Abelem Swalle, London.
3. Brent, R. P. (1973). Algorithms for Minimization Without Derivatives. Prentice-Hall.
4. Lowerre, B. T. (1976). The HARPY Speech Recognition System. Ph.D. thesis, Computer Science Department, Carnegie-Mellon University.
5. Pearl, J. (1984). Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley.
6. Minton, S., Johnston, M. D., Philips, A. B., and Laird, P. (1992). Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems. AIJ, 58(1–3), 161–205.
7. Selman, B., Levesque, H. J., and Mitchell, D. (1992). A new method for solving hard satisfiability problems. In AAAI-92, pp. 440–446.
8. Glover, F. and Laguna, M. (Eds.). (1997). Tabu Search. Kluwer.
9. Boyan, J. A. and Moore, A. W. (1998). Learning evaluation functions for global optimization and Boolean satisfiability. In AAAI-98.

_____________
Explanation of symbols: Roman numerals indicate the source, arabic numerals indicate the page number. The corresponding books are indicated on the right hand side. ((s)…): Comment by the sender of the contribution.
The note [Author1]Vs[Author2] or [Author]Vs[term] is an addition from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.
Norvig I
Peter Norvig / Stuart J. Russell, Artificial Intelligence: A Modern Approach, Upper Saddle River, NJ 2010


Ed. Martin Schulz, access date 2020-08-14