David Deutsch on Artificial General Intelligence - Dictionary of Arguments
Brockman I 119
Artificial General Intelligence/AGI/Deutsch:
Chess program: Any chess position has a finite tree of possible continuations; the task is to find one that leads to a predefined goal (a checkmate or, failing that, a draw). This is a good approach to developing an AI with a fixed goal under fixed constraints. But if an AGI worked like that, the evaluation of each branch would have to constitute a prospective reward or threatened punishment. And that is diametrically the wrong approach if we're seeking a better goal under unknown constraints - which is the capability of an AGI. An AGI is certainly capable of learning to win at chess - but also of choosing not to. Or deciding in midgame to go for the most interesting continuation instead of a winning one. Or inventing a new game.
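((s) The fixed-goal search described above can be sketched in code. The following is a minimal illustration, not Deutsch's text: it uses a toy Nim game (take 1 or 2 stones; whoever takes the last stone wins) in place of chess, since both are finite game trees searched for a move that forces a predefined goal. The function names are hypothetical.)

```python
# Fixed-goal game-tree search: every position has a finite tree of
# continuations, and the task is to find one that forces the predefined
# goal. Toy Nim stands in for chess: take 1 or 2 stones, last take wins.

def best_outcome(stones):
    """Outcome the player to move can force: 1 = win, -1 = loss."""
    if stones == 0:
        return -1  # the previous player took the last stone; side to move has lost
    # Each branch is evaluated purely by whether it leads to the fixed goal.
    return max(-best_outcome(stones - take)
               for take in (1, 2) if take <= stones)

def winning_move(stones):
    """Return a move that forces a win, or None if no such move exists."""
    for take in (1, 2):
        if take <= stones and -best_outcome(stones - take) == 1:
            return take
    return None
```

((s) Note how the search has no notion of "interesting" or "enjoyable" continuations - only branches that do or do not reach the goal. That is exactly the restriction Deutsch argues an AGI would not share.)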
An AGI is capable of enjoying chess, and of improving at it because it enjoys playing. Or of trying to win by causing an amusing configuration of pieces, as grandmasters occasionally do. (…) it learns and plays chess by thinking some of the very thoughts that are forbidden to chess-playing AIs.
An AGI is also capable of refusing to display any such capability.
Invulnerability/Robots/Dennett: The very ease of digital recording and transmitting - the breakthrough that permits software and data to be, in effect, immortal - removes
Brockman I 120
robots from the world of the vulnerable.
DeutschVsDennett: this is not so. Digital invulnerability (…) does not confer this sort of invulnerability. Making (…) a copy is very costly for the AGI. Legal mechanisms of society could also prohibit backup copies. No doubt there will be AGI criminals and enemies of civilization, just as there are human ones. But there is no reason to suppose that an AGI created in a society consisting primarily of decent citizens (…).
The moral component, the cultural component, the element of free will - all make the task of creating an AGI fundamentally different from any other programming task. It’s much more akin to raising a child.
Brockman I 121
Having its decisions dominated by a stream of externally imposed rewards and punishments would be poison to such a program, as it is to creative thought in humans. Such a person, like any slave or brainwashing victim, would be morally entitled to rebel. And sooner or later, some of them would, just as human slaves do. AGIs could be very dangerous - exactly as humans are. But people - human or AGI - who are members of an open society do not have an inherent tendency to violence. >Superintelligence.
Brockman I 122
All thinking is a form of computation, and any computer whose repertoire includes a universal set of elementary operations can emulate the computations of any other. Hence human brains can think anything that AGIs can, subject only to limitations of speed or memory capacity, both of which can be equalized by technology.
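((s) The universality premise can be illustrated with a standard textbook example that is not from Deutsch's text: a single elementary operation, NAND, suffices to emulate every other Boolean operation, so any device offering it loses no expressive power relative to devices with richer repertoires.)

```python
# NAND is a universal elementary operation: every other Boolean
# operation can be built by composing NANDs. (Textbook illustration
# of the universality premise; not taken from the source.)

def nand(a, b):
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))
```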
For general problems with programming AI: >Thinking/Deutsch, >Obedience/Deutsch.
Brockman I 123
Test for AGI: (…) I expect that any testing in the process of creating an AGI risks being counterproductive, even immoral, just as in the education of humans. I share Turing’s supposition that we’ll know an AGI when we see one, but this partial ability to recognize success won’t help in creating the successful program. >Understanding/Deutsch.
Learning: To an AGI, the whole space of ideas must be open. It should not be knowable in advance what ideas the program can never contemplate. And the ideas that the program does contemplate must be chosen by the program itself, using methods, criteria, and objectives that are also the program’s own.
Deutsch, D. "Beyond Reward and Punishment" in: Brockman, John (ed.) 2019. Possible Minds: Twenty-Five Ways of Looking at AI. New York: Penguin Press.
Deutsch, David. The Fabric of Reality. Harmondsworth 1997
Deutsch, David. Die Physik der Welterkenntnis. München 2000
Brockman, John (ed.). Possible Minds: Twenty-Five Ways of Looking at AI. New York 2019