# Dictionary of Arguments

Philosophical and Scientific Issues in Dispute

Disputed term/author/ism: Dempster-Shafer Theory
Author: Norvig
Reference: Norvig I 547
Dempster-Shafer Theory/AI Research/Norvig/Russell: uses interval-valued degrees of belief to represent an agent’s knowledge of the probability of a proposition.
Norvig I 549
The Dempster–Shafer theory is designed to deal with the distinction between uncertainty and ignorance. Rather than computing the probability of a proposition, it computes the probability that the evidence supports the proposition. This measure of belief is called a belief function, written Bel(X).

The mathematical underpinnings of Dempster–Shafer theory have a similar flavor to those of probability theory; the main difference is that, instead of assigning probabilities to possible worlds, the theory assigns masses to sets of possible worlds, that is, to events. The masses must still sum to 1 over all possible events. Bel(A) is defined as the sum of masses for all events that are subsets of (i.e., that entail) A, including A itself. With this definition, Bel(A) and Bel(¬A) sum to at most 1, and the gap, i.e. the interval between Bel(A) and 1 − Bel(¬A), is often interpreted as bounding the probability of A.
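The mass-to-belief computation described above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the text; the coin outcomes and the particular mass assignments are hypothetical.

```python
def bel(masses, event):
    """Bel(A): the sum of masses of all events that are subsets of A."""
    return sum(m for e, m in masses.items() if e <= event)

# Hypothetical frame of discernment: one coin flip, outcomes H and T.
heads = frozenset({"H"})
tails = frozenset({"T"})
either = frozenset({"H", "T"})

# Total ignorance: all mass on the full set of possible worlds.
ignorant = {either: 1.0}
assert bel(ignorant, heads) == 0       # no evidence entails heads
assert 1 - bel(ignorant, tails) == 1   # interval for P(heads) is [0, 1]

# Illustrative mass assignment: evidence worth 0.9 that the coin is
# fair, with 0.1 residual ignorance (numbers not from the text).
fairish = {heads: 0.45, tails: 0.45, either: 0.10}
lo = bel(fairish, heads)               # 0.45
hi = 1 - bel(fairish, tails)           # ~0.55, so the interval is [0.45, 0.55]
```

Note how the gap between Bel(A) and 1 − Bel(¬A) shrinks as mass moves from the full set to specific outcomes: under total ignorance the interval is [0, 1], while the partial evidence narrows it to [0.45, 0.55].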

VsDempster-Shafer theory: Problems: As with default reasoning, there is a problem in connecting beliefs to actions. Whenever there is a gap in the beliefs, a decision problem can be defined such that a Dempster–Shafer system is unable to make a decision. In fact, the notion of utility in the Dempster–Shafer model is not yet well understood, because the meanings of the masses and beliefs themselves have yet to be understood.

Pearl (1988)(1) has argued that Bel(A) should be interpreted not as a degree of belief in A but as the probability assigned to all the possible worlds (now interpreted as logical theories) in which A is provable. While there are cases in which this quantity might be of interest, it is not the same as the probability that A is true.

A Bayesian analysis of the coin-flipping example would suggest that no new formalism is necessary to handle such cases. The model would have two variables: the Bias of the coin (a number between 0 and 1, where 0 is a coin that always shows tails and 1 a coin that always shows heads) and the outcome of the next Flip.

Cf. >Fuzzy Logic, >Vagueness/Philosophical theories, >Sorites/Philosophical theories.
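The two-variable Bayesian model mentioned above can be illustrated with a discretized prior over Bias. The uniform prior and the 101-point grid are assumptions made for this sketch; they model complete ignorance about the coin, the situation Dempster–Shafer handles with an empty belief interval.

```python
# Discretized Bayesian coin model: a prior over Bias (the probability
# of heads) and the outcome of the next Flip.
biases = [i / 100 for i in range(101)]   # candidate values of Bias
prior = [1 / len(biases)] * len(biases)  # uniform prior: ignorance

# P(Flip = heads) = sum over b of P(Bias = b) * b: a single point
# probability (~0.5), not an interval, even under total ignorance.
p_heads = sum(p * b for p, b in zip(prior, biases))

# Conditioning on one observed head concentrates the posterior on
# higher biases; the predictive probability rises (~0.67 on this grid,
# close to the 2/3 of Laplace's rule of succession).
posterior = [p * b for p, b in zip(prior, biases)]
z = sum(posterior)
posterior = [p / z for p in posterior]
p_heads_next = sum(p * b for p, b in zip(posterior, biases))
```

The point of the sketch is the Bayesian rejoinder in the text: ignorance about the coin is carried by the prior over Bias, and every query still yields a point probability, so no interval-valued formalism is required.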

1. Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010