Artificial intelligence: the ability of artificial systems to recognize patterns and redundancies, to complete incomplete sequences, to reformulate and solve problems, and to estimate probabilities. This is not an automation of human behavior; rather, humans use artificial systems to make decisions only once those systems have already made autonomous decisions. See also artificial consciousness, intelligence, consciousness.
Artificial Intelligence/Chalmers: Suppose we had an artificial system that rationally reflects on what it perceives. Would this system have a concept of consciousness? It would certainly have a concept of the self; it could distinguish itself from the rest of the world, and it would have more direct access to its own cognitive contents than to those of others. So it would have a certain kind of self-awareness. Such a system would not say of itself that it has no idea what it is like to see a red triangle. Nor would it need access to its components at a deeper level (Hofstadter 1979 1, Winograd 1972 2).
N.B.: such a system would have a similar attitude to its inner life as we do to ours.
Behavioral explanation/Chalmers: to explain the behavior of such systems, we never need to attribute consciousness to them. Perhaps such systems have consciousness, perhaps not; either way, the explanation of their behavior is independent of this.
Artificial Intelligence/VsArtificial Intelligence/Chalmers: DreyfusVsArtificial Intelligence: (Dreyfus 1972 7): Machines cannot achieve the flexible and creative behavior of humans.
LucasVsArtificial Intelligence/PenroseVsArtificial Intelligence/Chalmers: (Lucas 1961 3, Penrose 1989 4): Computers can never reach the mathematical understanding of humans because they are limited by Gödel's theorem in a way in which humans are not. Chalmers: these are external objections. The internal objections are more interesting:
VsArtificial intelligence: internal argument: machines cannot develop a mind. SearleVsArtificial Intelligence: > Chinese Room Argument (Searle 1980 5). According to this, a computer is at best a simulation of consciousness, a zombie.
Artificial Intelligence/ChalmersVsSearle/ChalmersVsPenrose/ChalmersVsDreyfus: it is not obvious that certain physical structures in the computer lead to consciousness, but the same applies to the structures in the brain.
Definition Strong Artificial Intelligence/Searle/Chalmers: Thesis: there is a non-empty class of computations such that the implementation of any computation from this class is sufficient for a mind, and in particular for conscious experiences. This holds only with natural necessity, since it is logically possible that any computation occurs without consciousness; but the same applies to brains.
Implementation/Chalmers: this term is needed as a bridge between abstract computations and concrete physical systems in the world. We also sometimes say that our brain implements computations.
Implementation/Searle: (Searle 1990b 6): Thesis: implementation is an observer-relative notion. If one wishes, one can regard any system as implementing any computation, for example a wall.
ChalmersVsSearle: one has to specify the implementation conditions precisely; then this problem is avoided.
For example, a combinatorial state automaton has quite different implementation conditions than a finite state automaton: the causal interaction between its elements is more fine-grained. In addition, combinatorial state automata can reflect various other automata, such as Turing machines and cellular automata, as opposed to finite or infinite state automata.
ChalmersVsSearle: every system implements one computation or another; but not every type of computation (e.g., that of a combinatorial state automaton) is implemented by every system. Observer relativity remains, but it does not threaten the possibility of artificial intelligence.
This does not say much about the nature of the causal relations.
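The contrast between the two kinds of automata can be made concrete with a minimal sketch (an illustration only, not Chalmers's formalism; all names, states, and transition rules here are invented for the example). A finite state automaton (FSA) has a single monolithic state, so implementing it only requires a system that cycles through undifferentiated states. A combinatorial state automaton (CSA) has a state that is a vector of substates, each with its own update rule, so an implementing system must contain causally distinct components mirroring each substate — a strictly finer-grained condition.

```python
# Illustrative sketch of FSA vs. CSA (invented example, not Chalmers's formalism).

# FSA: one transition table over monolithic, unstructured states.
fsa_delta = {"A": "B", "B": "C", "C": "A"}

def fsa_step(state):
    """Advance the FSA by one step: the whole state maps to a whole state."""
    return fsa_delta[state]

# CSA: the state is a tuple of substates; each component has its own
# update rule, which may depend on the entire current state vector.
def csa_step(state):
    """Advance the CSA componentwise: each substate updates by its own rule."""
    x, y = state
    return (y, (x + y) % 2)

def unroll(step, state, n):
    """Collect the trajectory of an automaton over n steps."""
    trace = [state]
    for _ in range(n):
        state = step(state)
        trace.append(state)
    return trace

print(unroll(fsa_step, "A", 3))      # ['A', 'B', 'C', 'A']
print(unroll(csa_step, (1, 0), 3))   # [(1, 0), (0, 1), (1, 1), (1, 0)]
```

Both automata cycle through states, but a physical system implements the CSA only if it has two distinguishable components whose causal roles mirror the two substates — which is why, on Chalmers's reply to Searle, not every system (e.g., a wall) implements every computation.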
1. D. R. Hofstadter, Gödel, Escher, Bach, New York 1979.
2. T. Winograd, Understanding Natural Language, New York 1972.
3. J. R. Lucas, "Minds, Machines and Gödel", Philosophy 36, 1961, pp. 112-127.
4. R. Penrose, The Emperor's New Mind, Oxford 1989.
5. J. R. Searle, "Minds, Brains and Programs", Behavioral and Brain Sciences 3, 1980, pp. 417-424.
6. J. R. Searle, "Is the Brain a Digital Computer?", Proceedings and Addresses of the American Philosophical Association 64, 1990, pp. 21-37.
7. H. Dreyfus, What Computers Can't Do, New York 1972.
D. Chalmers, The Conscious Mind, Oxford/New York 1996
D. Chalmers, Constructing the World, Oxford 2014