Philosophy Dictionary of Arguments


I 130
Neural networks/learning/Deacon: the basic structure consists of three layers: input units, output units and hidden units (the middle layer), together with their connections. The states of the nodes of the middle layer (0 or 1) are initially influenced by the input nodes. Crucially, the strength of the connections emerges only through repeated use. The connections are trained by comparing the success of the output signal (correct or incorrect association) with the input.
I 131
This training corresponds to adaptation to a stock of external behavioural patterns and is an analogy to learning. Such systems are far more capable of recognizing patterns than conventionally programmed computers. When neural networks are trained to categorize stimuli, they can readily extend this to new stimuli. In the face of incidental interference, they are superior to conventional computers...
I 132
... in that they keep responding without reinforcing problematic connections, i.e. they do not react in an all-or-nothing way. This is similar to how nervous systems react to damage.
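The training scheme just described can be sketched in a few lines. The layer sizes, learning rate and delta-rule-style weight update below are illustrative assumptions, not Deacon's specification; the point is only that connection strengths are adjusted by comparing the output with the desired association:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """Three layers (input -> hidden -> output); connection strengths
    are adjusted by comparing the output with the desired association.
    Illustrative sketch only, not Deacon's or Elman's actual model."""

    def __init__(self, n_in, n_hid):
        # one extra weight per unit serves as a bias (input clamped to 1.0)
        self.w_ih = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                     for _ in range(n_hid)]
        self.w_ho = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

    def forward(self, x):
        xb = list(x) + [1.0]
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb)))
                  for row in self.w_ih]
        hb = self.h + [1.0]
        return sigmoid(sum(w * v for w, v in zip(self.w_ho, hb)))

    def train_step(self, x, target, lr=0.5):
        y = self.forward(x)
        xb = list(x) + [1.0]
        hb = self.h + [1.0]
        grad_y = (target - y) * y * (1 - y)      # correct vs. incorrect
        for j in range(len(hb)):
            if j < len(self.h):                  # propagate error inward
                grad_h = grad_y * self.w_ho[j] * self.h[j] * (1 - self.h[j])
                for i in range(len(xb)):
                    self.w_ih[j][i] += lr * grad_h * xb[i]
            self.w_ho[j] += lr * grad_y * hb[j]  # strengthen useful links

def loss(net, data):
    return sum((t - net.forward(x)) ** 2 for x, t in data)

net = TinyNet(2, 4)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # toy associations
first = loss(net, data)
for _ in range(2000):
    for x, t in data:
        net.train_step(x, t)
last = loss(net, data)
```

After repeated use the summed error over the four associations is smaller than before training, which is all "learning" means in this sketch.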
Information processing within neural networks has been compared with holograms that have information available from several perspectives at the same time.
Short-term memory: can be simulated with recurrent networks (see J. Elman (1991): Incremental learning, or the importance of starting small. In: 13th Annual Conference of the Cognitive Science Society, NJ: L. Erlbaum, pp. 443-448). Earlier states of the hidden layer are fed back in and processed as new input.
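The recurrence can be sketched as follows; the sizes and random weights are arbitrary illustrative choices, not Elman's. The hidden state computed at one step is fed back as extra input at the next step, so the same input can produce different internal states depending on what came before, which is the network's short-term memory:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

n_in, n_hid = 2, 3
# weights from the current input and from the previous hidden state
# ("context units" holding the earlier state of the hidden layer)
w_in = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
w_ctx = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_hid)]

def step(x, h_prev):
    """One recurrent step: the new hidden state depends on the input
    AND on the previous hidden state fed back in."""
    return [sigmoid(sum(w * v for w, v in zip(w_in[j], x)) +
                    sum(w * v for w, v in zip(w_ctx[j], h_prev)))
            for j in range(n_hid)]

h = [0.0] * n_hid
states = []
for x in [[1, 0], [0, 1], [1, 0]]:
    h = step(x, h)
    states.append(h)
# the input [1, 0] occurs twice, but the internal states differ,
# because the second occurrence carries a memory of the sequence so far
```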
Language acquisition/Elman: with this, language learning could be simulated: the problem of learning syntax was recast as the problem of mapping previous input sequences to future input sequences. Incomplete sequences were completed by the system with the most likely continuations. Initially this involved only occurrences of 0 and 1, i.e. meanings were neglected.
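The prediction task itself (not Elman's network) can be illustrated with simple bigram counts over a hypothetical corpus of 0/1 sequences: an incomplete sequence is continued with the statistically most likely next symbol at each position:

```python
from collections import Counter

# Hypothetical corpus of 0/1 sequences; meanings play no role, only
# the statistics of which symbol tends to follow which.
corpus = ["0101010101", "0101010100", "0101010101"]

bigrams = Counter()
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        bigrams[(a, b)] += 1

def complete(prefix, length):
    """Extend a partial sequence with the most likely next symbols."""
    seq = prefix
    while len(seq) < length:
        last = seq[-1]
        # pick the most frequent successor of the last symbol
        seq += max("01", key=lambda s: bigrams[(last, s)])
    return seq

print(complete("01", 6))   # prints 010101
```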
Problem: neural networks sometimes converge on suboptimal solutions because they only take local patterns into account.
Solution: to prevent the networks from being trapped in such "learning potholes", "noise" (random disturbances) can be injected to force the system to search for possible solutions in another region.
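The effect of such noise can be sketched on a toy one-dimensional error surface (an illustrative stand-in, not the networks' actual error landscape): plain gradient descent gets trapped in the local "pothole", while injected random disturbances let the search reach the better region:

```python
import random

random.seed(2)

def f(x):
    """Toy error surface: a local minimum near x = +1 and a
    better (global) minimum near x = -1."""
    return (x * x - 1) ** 2 + 0.3 * x

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, steps=2000, lr=0.01, noise=0.0):
    """Gradient descent, optionally perturbed by random disturbances;
    returns the best point visited."""
    best = x
    for _ in range(steps):
        x -= lr * grad(x)
        if noise:
            x += random.gauss(0.0, noise)   # injected "noise"
        if f(x) < f(best):
            best = x
    return best

plain = descend(1.5)              # trapped in the local "pothole" near +1
noisy = descend(1.5, noise=0.5)   # disturbances push the search elsewhere
```

The noiseless run ends with a clearly higher error than the noisy run, which at some point is kicked across the barrier into the basin of the better solution.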
I 133
Language acquisition/Elman/Deacon: Elman kept the different stages of learning more complex structures apart so that they could not interfere with each other.
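The staging idea ("starting small") can be sketched as a training schedule; the corpus and the length-based complexity measure are illustrative assumptions, not Elman's actual setup. Simple items are presented first and harder ones only in later, separate stages:

```python
# Illustrative "starting small" schedule: split training items into
# stages of increasing complexity (here crudely measured by length),
# so that each stage is kept apart from the next.
corpus = ["ab", "abab", "aabb", "abababab", "aaabbb", "ab" * 8]

def stages(items, n_stages=3):
    """Partition items into n_stages groups of increasing complexity."""
    ranked = sorted(items, key=len)
    size = -(-len(ranked) // n_stages)      # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

schedule = stages(corpus)
seen = []
for stage in schedule:
    seen.extend(stage)   # each stage adds new, harder items to train on
```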
I 134
Deacon: the production of grammatically correct forms was learned inductively, without any grammar being given, let alone a universal grammar being presupposed.
I 135
N.B.: it was shown that the structure of the learning process bears on what can and cannot be learned. More importantly, it suggests that the structure of language and the way in which it has to be learned are related.

Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s)…): Comment by the sender of the contribution.
The note [Author1]Vs[Author2] or [Author]Vs[term] is an addition from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to this edition.

Dea I
T. W. Deacon
The Symbolic Species: The Co-Evolution of Language and the Brain, New York 1998

Dea II
Terrence W. Deacon
Incomplete Nature: How Mind Emerged from Matter, New York 2013


Ed. Martin Schulz, access date 2019-09-18