Economics Dictionary of Arguments


 
Artificial intelligence: the ability of artificial systems to recognize patterns and redundancies, to complete incomplete sequences, to reformulate and solve problems, and to estimate probabilities. This is not an automation of human behavior, since such an automation could be a merely mechanical imitation. Rather, humans use artificial systems to make decisions only when these systems have already made autonomous decisions.

_____________
Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.

 

Stuart J. Russell on Artificial Intelligence - Dictionary of Arguments

Brockman I 22
Artificial Intelligence/Stuart Russell: The goal of AI research has been to understand the principles underlying intelligent behavior and to build those principles into machines that can then exhibit such behavior.
Brockman I 23
In the 1960s and 1970s, the prevailing theoretical notion of intelligence was the capacity for logical reasoning (…).
More recently, a consensus has emerged around the idea of a rational agent that perceives and acts in order to maximize its expected utility.
AI has incorporated probability theory to handle uncertainty, utility theory to define objectives, and statistical learning to allow machines to adapt to new circumstances. These developments have created strong connections to other disciplines that build on similar concepts, including control theory, economics, operations research, and statistics.
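The rational-agent idea can be illustrated with a minimal sketch. The actions, probabilities, and utilities below are illustrative assumptions, not from the text; the point is only the decision rule: pick the action with the highest probability-weighted utility.

```python
# Minimal sketch of a rational agent choosing by expected utility.
# Action names and numbers are illustrative assumptions.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Return the action name with maximal expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "safe_route":  [(1.0, 10.0)],                # certain, modest payoff
    "risky_route": [(0.5, 30.0), (0.5, -20.0)],  # gamble: E[U] = 5
}

print(choose(actions))  # -> safe_route (10 > 5)
```

Uncertainty enters through the probabilities, objectives through the utilities — the two ingredients Russell says AI borrowed from probability theory and utility theory.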
Purpose: For example, a self-driving car should accept a destination as input instead of having one fixed destination. However, some aspects of the car’s “driving purpose” are fixed, such as that it shouldn’t hit pedestrians. Putting a purpose into a machine (…) seems an admirable approach to ensuring that the machine’s “conduct will be carried out on principles acceptable to us!”
Brockman I 24
Problem: neither AI nor other disciplines (economics, statistics, control theory, operations research) built around the optimization of objectives have much to say about how to identify the purposes “we really desire.” >Artificial Intelligence/Omohundro, >Superintelligence/Stuart Russell.
Brockman I 29
Solution/Stuart Russell: The optimal solution to this problem is not, as one might hope, to behave well, but instead to take control of the human and force him or her to provide a stream of maximal rewards. This is known as the wireheading problem, based on observations that humans themselves are susceptible to the same problem if given a means to electronically stimulate their own pleasure centers.
Problem: This idealization ignores the possibility that our minds are composed of subsystems with incompatible preferences; if true, that would limit a machine’s ability to optimally satisfy our preferences, but it doesn’t seem to prevent us from designing machines that avoid catastrophic outcomes.
Solution/Stuart Russell: A more precise definition is given by the framework of cooperative inverse-reinforcement learning, or CIRL. A CIRL problem involves two agents, one human and the other a robot. Because there are two agents, the problem is what economists call a game. It is a game of partial information, because while the human knows the reward function, the robot doesn’t—even though the robot’s job is to maximize it.
Brockman I 30
Off-switch Problem: Within the CIRL framework, one can formulate and solve the off-switch problem - that is, the problem of how to prevent a robot from disabling its off switch. A robot that’s uncertain about human preferences actually benefits from being switched off,
Brockman I 31
because it understands that the human will press the off switch to prevent the robot from doing something counter to those preferences. Thus the robot is incentivized to preserve the off switch, and this incentive derives directly from its uncertainty about human preferences.(1)
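This incentive can be sketched numerically as a toy version of the off-switch game. The belief distribution and payoffs below are assumptions for illustration: a robot is unsure whether its planned action helps (U = +5) or harms (U = −8) the human, and compares acting immediately with deferring to a human who presses the off switch exactly when the action would harm.

```python
# Toy off-switch game. The robot's planned action has utility U to the
# human, unknown to the robot. Probabilities and payoffs are
# illustrative assumptions.
belief = [(0.6, 5.0), (0.4, -8.0)]  # (probability, utility U)

# Act immediately: the robot gets the expected utility of the action.
act_now = sum(p * u for p, u in belief)          # 0.6*5 - 0.4*8 = -0.2

# Defer: the human switches the robot off exactly when U < 0,
# so the robot collects U only in the favourable cases.
defer = sum(p * max(u, 0.0) for p, u in belief)  # 0.6*5 = 3.0

print(act_now, defer)
# Deferring is worth more, so the uncertain robot prefers to keep
# the off switch functional rather than disable it.
assert defer >= max(act_now, 0.0)
```

The inequality holds for any belief distribution (E[max(U, 0)] ≥ max(E[U], 0)), which is the sense in which the incentive to preserve the off switch "derives directly from its uncertainty about human preferences."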
Behavioral learning/preferences/Problems: There are obvious difficulties, however, with an approach that expects a robot to learn underlying preferences from human behavior. Humans are irrational, inconsistent, weak-willed, and computationally limited, so their actions don't always reflect their true preferences.


1. Cf. Hadfield-Menell et al., “The Off-Switch Game,” https://arxiv.org/pdf/1611.08219.pdf.


Russell, Stuart J. “The Purpose Put into the Machine”, in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press.

- - -

Norvig I 27
Artificial general intelligence/Norvig/Russell: Artificial General Intelligence, or AGI (Goertzel and Pennachin, 2007)(1), (…) held its first conference and organized the Journal of Artificial General Intelligence in 2008.
AGI looks for a universal algorithm for learning and acting in any environment, and has its roots in the work of Ray Solomonoff (1964)(2), one of the attendees of the original 1956 Dartmouth conference. Guaranteeing that what we create is really Friendly AI is also a concern (Yudkowsky, 2008(3); Omohundro, 2008(4)). >Human Level AI/Minsky; >Artificial general intelligence.



1. Goertzel, B. and Pennachin, C. (2007). Artificial General Intelligence. Springer.
2. Solomonoff, R. J. (1964). A formal theory of inductive inference. Information and Control, 7, 1–22, 224–254.
3. Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In Bostrom, N. and Cirkovic, M. (Eds.), Global Catastrophic Risks. Oxford University Press.
4. Omohundro, S. (2008). The basic AI drives. In AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial Intelligence.


_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated below. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments.
The note [Author1]Vs[Author2] or [Author]Vs[term] is an addition from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.


Brockman I
John Brockman
Possible Minds: Twenty-Five Ways of Looking at AI New York 2019

Norvig I
Peter Norvig
Stuart J. Russell
Artificial Intelligence: A Modern Approach Upper Saddle River, NJ 2010

