Psychology Dictionary of Arguments


AI Research on Software Agents - Dictionary of Arguments

Norvig I 64
Software Agents/artificial intelligence/Russell/Norvig: [Reflex] agents base their actions on a direct mapping from states to actions. Such agents cannot operate well in environments for which this mapping would be too large to store and would take too long to learn.
Goal-based agents, on the other hand, consider future actions and the desirability of their outcomes.
Problem-solving agents use atomic representations, (…) that is, states of the world are considered as wholes, with no internal structure visible to the problem-solving algorithms.
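The contrast drawn above can be sketched in a few lines of Python (an illustrative sketch, not the book's code; the percepts, actions, and helper functions are invented for the example):

```python
# A simple reflex agent: a direct percept-to-action table. The table must
# enumerate every case, which is infeasible for large state spaces.
REFLEX_TABLE = {
    "dirty": "suck",
    "clean-left": "move-right",
    "clean-right": "move-left",
}

def reflex_agent(percept):
    return REFLEX_TABLE[percept]

# A goal-based agent: instead of a stored mapping, it considers the
# outcomes of candidate future actions and their desirability.
def goal_based_agent(state, actions, result, is_goal):
    for action in actions(state):
        if is_goal(result(state, action)):
            return action
    return None
```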
Norvig I 4
Computer agents are expected to do more: operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals.
A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
Norvig I 235
Knowledge-based agents/logical agents: The central component of a knowledge-based agent is its knowledge base, or KB. A knowledge base is a set of sentences. (Here “sentence” is used as a technical term. It is related but not identical to the sentences of English and other natural languages.) Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
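The knowledge-base interface can be sketched as follows (an illustrative sketch; the class is a minimal stand-in, sentences are kept abstract as strings, and its ask is only a placeholder for real inference):

```python
class KnowledgeBase:
    """Minimal stand-in for a knowledge base: a set of sentences."""

    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        """Add an assertion about the world to the KB."""
        self.sentences.add(sentence)

    def ask(self, query):
        """Placeholder inference: mere membership. A real agent would
        test whether the KB entails the query."""
        return query in self.sentences

kb = KnowledgeBase()
kb.tell("Pit in [3,1]")
```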
Norvig I 240
Semantics: The semantics defines the truth of each sentence with respect to each possible world.
Model: instead of “possible world” we need to be more precise and use the term model. Whereas possible worlds might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentence.
Norvig I 241
Knowledge Base: The KB can be thought of as a set of sentences or as a single sentence that asserts all the individual sentences. The KB is false in models that contradict what the agent knows (…).
Norvig I 242
Completeness: an inference algorithm is complete if it can derive any sentence that is entailed. Fortunately, there are complete inference procedures for logics that are sufficiently expressive to handle many knowledge bases.
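For propositional logic, entailment under this semantics can be checked by enumerating all models, a procedure that is both sound and complete (an illustrative Python sketch; encoding sentences as Python functions on models is an assumption, not the book's representation):

```python
from itertools import product

# A sentence is represented as a function from a model (a dict assigning
# True/False to every symbol) to a truth value.
def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha is true in every model in which KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False        # found a model of KB in which alpha fails
    return True

# Example: a KB containing P and P -> Q entails Q.
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
q = lambda m: m["Q"]
```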
Real world: if [the knowledge base] KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world. So, while an inference process operates on “syntax”—internal physical configurations such as bits in registers or patterns of electrical blips in brains—the process corresponds
Norvig I 243
to the real-world relationship whereby some aspect of the real world is the case by virtue of other aspects of the real world being the case.
Grounding: grounding [is] the connection between logical reasoning processes and the real environment in which the agent exists. In particular, how do we know that KB is true in the real world? A simple answer is that the agent’s sensors create the connection. Cf. >Semantics, >Syntax; for the philosophical discussion see also >Facts/Wittgenstein, >States of Affairs/Wittgenstein, >Foundation/Philosophical theories.
Norvig I 257
Forward chaining: The forward-chaining algorithm (…) determines if a single proposition symbol q - the query - is entailed by a knowledge base of definite clauses. It begins from known facts (positive literals) in the knowledge base. If all the premises of an implication are known, then its conclusion is added to the set of known facts.
Norvig I 258
It is easy to see that forward chaining is sound: every inference is essentially an application of Modus Ponens. Forward chaining is also complete: every entailed atomic sentence will be derived. The easiest way to see this is to consider the final state of the inferred table (after the algorithm reaches a fixed point where no new inferences are possible). Cf. >Fixed points.
Forward chaining is an example of the general concept of data-driven reasoning – that is, reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percepts, often without a specific query in mind.
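The forward-chaining procedure described above can be sketched as follows (an illustrative Python sketch; representing a definite clause as a (premises, conclusion) pair is an assumption, not the book's data structure):

```python
from collections import deque

def forward_chaining(clauses, facts, query):
    """Determine whether a single proposition symbol `query` is entailed
    by a KB of definite clauses, each given as (premises, conclusion)."""
    count = {i: len(prem) for i, (prem, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)               # known facts (positive literals)
    while agenda:
        p = agenda.popleft()
        if p == query:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:       # all premises known (Modus Ponens)
                    agenda.append(conclusion)
    return False                        # fixed point reached, query not derived

# A -> B, B & C -> D; with facts A and C, D is entailed.
rules = [({"A"}, "B"), ({"B", "C"}, "D")]
```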
Backward chaining: works backward from the query. If the query q is known to be true, then no work is needed. Otherwise, the algorithm finds those implications in the knowledge base whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. Backward chaining is a form of goal-directed reasoning. It is useful for answering specific questions such as “What shall I do now?” and “Where are my keys?” Often, the cost of backward chaining is much less than linear in the size of the knowledge base, because the process touches only relevant facts.
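Backward chaining can be sketched over the same illustrative (premises, conclusion) clause format (an assumption for the example; the cycle check via a visited set is likewise an implementation detail, not from the source):

```python
def backward_chaining(clauses, facts, query, visited=None):
    """Work backward from the query: prove it via some implication
    whose conclusion is the query, proving each premise recursively."""
    if query in facts:                  # known to be true: no work needed
        return True
    visited = visited or set()
    if query in visited:                # avoid looping on cyclic rules
        return False
    visited = visited | {query}
    for premises, conclusion in clauses:
        if conclusion == query and all(
            backward_chaining(clauses, facts, p, visited) for p in premises
        ):
            return True
    return False

rules = [({"A"}, "B"), ({"B", "C"}, "D")]
```

Note that the search touches only clauses whose conclusions are relevant to the goal, which is why its cost is often much less than the size of the knowledge base.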
Norvig I 275
History: John McCarthy’s paper “Programs with Common Sense” (McCarthy, 1958(1), 1968(2)) promulgated the notion of agents that use logical reasoning to mediate between percepts and actions. Allen Newell’s (1982)(3) article “The Knowledge Level” makes the case that rational agents can be described and analyzed at an abstract level defined by the knowledge they possess rather than the programs they run. The declarative and procedural approaches to AI are analyzed in depth by Boden (1977)(4). The debate was revived by, among others, Brooks (1991)(5) and Nilsson (1991)(6), and continues to this day (Shaparau et al., 2008)(7). Meanwhile, the declarative approach has spread into other areas of computer science such as networking (Loo et al., 2006)(8).
Norvig I 278
Current state: The current state of theoretical understanding is summarized by Achlioptas (2009)(9). The satisfiability threshold conjecture states that, for each k, there is a sharp satisfiability threshold r_k such that, as the number of variables n → ∞, instances below the threshold are satisfiable with probability 1, while those above the threshold are unsatisfiable with probability 1. The conjecture was not quite proved by Friedgut (1999)(10): a sharp threshold exists, but its location might depend on n even as n → ∞. Despite significant progress in asymptotic analysis of the threshold location for large k (Achlioptas and Peres, 2004(11); Achlioptas et al., 2007(12)), all that can be proved for k = 3 is that it lies in the range [3.52, 4.51].
Current theory suggests that a peak in the run time of a SAT solver is not necessarily related to the satisfiability threshold, but instead to a phase transition in the solution distribution and structure of
SAT instances. Empirical results due to Coarfa et al. (2003)(13) support this view. In fact, algorithms such as survey propagation (Parisi and Zecchina, 2002(14); Maneva et al., 2007(15)) take advantage of special properties of random SAT instances near the satisfiability threshold and greatly outperform general SAT solvers on such instances.
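The satisfiability threshold can be probed empirically at very small scale (an illustrative experiment, not from the cited papers; satisfiability is checked by brute-force enumeration, so it is feasible only for tiny n):

```python
import random
from itertools import product

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables; a literal is (var, polarity)."""
    return [
        [(v, rng.random() < 0.5) for v in rng.sample(range(n), 3)]
        for _ in range(m)
    ]

def satisfiable(clauses, n):
    """Brute-force check: try all 2^n assignments."""
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)

def sat_fraction(n, ratio, trials=20):
    """Fraction of satisfiable instances at clause/variable ratio m/n."""
    return sum(satisfiable(random_3sat(n, int(ratio * n), rng), n)
               for _ in range(trials)) / trials
```

Instances well below the ~4.27 threshold (e.g. ratio 2) are almost all satisfiable, while those well above it (e.g. ratio 7) are almost all unsatisfiable.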
Neural networks: The idea of building agents with propositional logic can be traced back to the seminal paper of McCulloch and Pitts (1943)(16), which initiated the field of neural networks. >Frame problem, >Environment/AI research, >Universe/AI research, >Decisions/AI research, >Uncertainty/AI research.
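A McCulloch-Pitts unit fires when the weighted sum of its binary inputs reaches a threshold; with suitable weights and thresholds such units compute propositional connectives (an illustrative sketch; the function names are invented for the example):

```python
def mp_unit(inputs, weights, threshold):
    """McCulloch-Pitts threshold unit: output 1 iff the weighted sum
    of binary inputs meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

def AND(x, y):   # both inputs must fire
    return mp_unit([x, y], [1, 1], threshold=2)

def OR(x, y):    # either input suffices
    return mp_unit([x, y], [1, 1], threshold=1)

def NOT(x):      # inhibitory (negative) weight
    return mp_unit([x], [-1], threshold=0)
```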

1. McCarthy, J. (1958). Programs with common sense. In Proc. Symposium on Mechanisation of Thought Processes, Vol. 1, pp. 77–84.
2. McCarthy, J. (1968). Programs with common sense. In Minsky, M. L. (Ed.), Semantic Information Processing, pp. 403–418. MIT Press.
3. Newell, A. (1982). The knowledge level. AIJ, 18(1), 82–127.
4. Boden, M. A. (1977). Artificial Intelligence and Natural Man. Basic Books.
5. Brooks, R. A. (1991). Intelligence without representation. AIJ, 47(1–3), 139–159.
6. Nilsson, N. J. (1991). Logic and artificial intelligence. AIJ, 47(1–3), 31–56.
7. Shaparau, D., Pistore, M., and Traverso, P. (2008). Fusing procedural and declarative planning goals for nondeterministic domains. In AAAI-08.
8. Loo, B. T., Condie, T., Garofalakis, M., Gay, D. E., Hellerstein, J. M., Maniatis, P., Ramakrishnan, R., Roscoe, T., and Stoica, I. (2006). Declarative networking: Language, execution and optimization. In
9. Achlioptas, D. (2009). Random satisfiability. In Biere, A., Heule, M., van Maaren, H., and Walsh, T. (Eds.), Handbook of Satisfiability. IOS Press.
10. Friedgut, E. (1999). Necessary and sufficient conditions for sharp thresholds of graph properties, and the k-SAT problem. J. American Mathematical Society, 12, 1017–1054.
11. Achlioptas, D. and Peres, Y. (2004). The threshold for random k-SAT is 2^k log 2 − O(k). J. American Mathematical Society, 17(4), 947–973.
12. Achlioptas, D., Naor, A., and Peres, Y. (2007). On the maximum satisfiability of random formulas. JACM, 54(2).
13. Coarfa, C., Demopoulos, D., Aguirre, A., Subramanian, D., and Vardi, M. (2003). Random 3-SAT: The plot thickens. Constraints, 8(3), 243–261.
14. Mézard, M., Parisi, G., and Zecchina, R. (2002). Analytic and algorithmic solution of random satisfiability problems. Science, 297, 812–815.
15. Maneva, E., Mossel, E., and Wainwright, M. J. (2007). A new look at survey propagation and its generalizations. JACM, 54(4).
16. McCulloch, W. S. and Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–137.

Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are listed on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2], and [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:", and "thesis:", are additions from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
AI Research
Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010.
