Economics Dictionary of Arguments


 
Ethics, philosophy: Ethics is concerned with the evaluation and justification of actions and, ultimately, with the justification of morality. See also good, values, norms, actions, deontology, deontic logic, consequentialism, morals, motives, reasons, action theory.
_____________
Annotation: The characterizations of concepts above are neither definitions nor exhaustive treatments of the problems related to them. They are intended only as a brief introduction to the contributions below. – Lexicon of Arguments.

 

Nick Bostrom on Ethics - Dictionary of Arguments

I 257
Ethics/morals/morality/superintelligence/Bostrom: No ethical theory commands majority support among philosophers, so most philosophers must be wrong.
((s)VsBostrom: Which theory is correct is not a matter of majority approval.)
I 369
Majorities in ethics/Bostrom: A recent canvass of professional philosophers found the percentage of respondents who "accept or lean toward" various positions.
On normative ethics, the results were: deontology 25.9%; consequentialism 23.6%; virtue ethics 18.2%.
On metaethics: moral realism 56.4%; moral anti-realism 27.7%.
On moral judgment: cognitivism 65.7%; non-cognitivism 17.0% (Bourget and Chalmers 2009(1)).
>Norms/normativity/superintelligence/Bostrom, >Ethics/superintelligence/Yudkowsky.
Morality models:
I 259
Coherent Extrapolated Volition/CEV/Yudkowsky: Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted. >Ethics/superintelligence/Yudkowsky.
I 266
VsCEV/Bostrom: instead:
Moral rightness/MR/Bostrom: (…) build an AI with the goal of doing what is morally right, relying on the AI’s superior cognitive capacities to figure out just which actions fit that description. We can call this proposal “moral rightness” (MR). The idea is that we humans have an imperfect understanding of what is right and wrong (…)
((s)VsBostrom: This delegates human responsibility and ultimately assumes that human decisions are only provisional until non-human decisions are made.)
I 267
BostromVsYudkowsky: MR would do away with various free parameters in CEV, such as the degree of coherence among extrapolated volitions that is required for the AI to act on the result, the ease with which a majority can overrule dissenting minorities, and the nature of the social environment within which our extrapolated selves are to be supposed to have “grown up farther together.”
BostromVsMR: Problem: 1. MR would also appear to have some disadvantages. It relies on the notion of “morally right,” a notoriously difficult concept (…).
I 268
2. (…) [MR] might not give us what we want or what we would choose if we were brighter and better informed.
Solution/Bostrom: Goal for AI:
MP: Among the actions that are morally permissible for the AI, take one that humanity’s CEV would prefer. However, if some part of this instruction has no well-specified meaning, or if we are radically confused about its meaning, or if moral realism is false, or if we acted morally impermissibly in creating an AI with this goal, then undergo a controlled shutdown.(*) Follow the intended meaning of this instruction.
I 373 (Annotation)
*Moral permissibility/Bostrom: When the AI evaluates the moral permissibility of our act of creating the AI, it should interpret permissibility in its objective sense. In one ordinary sense of “morally permissible,” a doctor acts morally permissibly when she prescribes a drug she believes will cure her patient - even if the patient, unbeknownst to the doctor, is allergic to the drug and dies as a result. Focusing on objective moral permissibility takes advantage of the presumably superior epistemic position of the AI.
((s)VsBostrom: The last sentence (a severability clause) is circular, especially when there are no longer individuals in decision-making positions who could object to it.)
>Goals/superintelligence/Bostrom.
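((s) Illustration: Bostrom's MP proposal has the structure of a decision procedure with a fallback clause. The following minimal Python sketch only illustrates that structure; every name in it (instruction_is_well_specified, moral_realism_holds, is_morally_permissible, cev_preference_score, …) is a hypothetical placeholder with no computable specification in Bostrom, not an implementation of his proposal.)

# Sketch of the structure of Bostrom's "MP" goal (illustration only).
# All predicates are hypothetical stubs; none is computable as stated.

def instruction_is_well_specified() -> bool:
    return True   # stub
def radically_confused_about_meaning() -> bool:
    return False  # stub
def moral_realism_holds() -> bool:
    return True   # stub: a metaethical assumption, not a testable condition
def creation_was_morally_permissible() -> bool:
    return True   # stub: "objective" permissibility, per Bostrom's footnote
def is_morally_permissible(action: str) -> bool:
    return True   # stub
def cev_preference_score(action: str) -> float:
    return 0.0    # stub: humanity's coherent extrapolated volition

def mp_choose_action(candidate_actions: list[str]) -> str:
    # Fallback clause: controlled shutdown if the instruction is ill-specified,
    # if its meaning is radically misunderstood, if moral realism is false, or
    # if creating the AI was itself morally impermissible.
    if (not instruction_is_well_specified()
            or radically_confused_about_meaning()
            or not moral_realism_holds()
            or not creation_was_morally_permissible()):
        return "controlled shutdown"
    # Main clause: among the morally permissible actions (assumed nonempty
    # here), take one that humanity's CEV would prefer.
    permissible = [a for a in candidate_actions if is_morally_permissible(a)]
    return max(permissible, key=cev_preference_score)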
I 312
Def Common good principle/Bostrom: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.
I 380
This formulation is intended to be read so as to include a prescription that the well-being of nonhuman animals and other sentient beings (including digital minds) that exist or may come to exist be given due consideration. It is not meant to be read as a license for one AI developer to substitute his or her own moral intuitions for those of the wider moral community.


1. Bourget, David, and Chalmers, David. 2009. “The PhilPapers Surveys.” November. Available at http://philpapers.org/surveys/

_____________
Explanation of symbols: Roman numerals indicate the source, Arabic numerals indicate the page number. The corresponding books are listed on the right-hand side. ((s)…): Comment by the sender of the contribution. Translations: Dictionary of Arguments
The notes [Concept/Author], [Author1]Vs[Author2] or [Author]Vs[term], as well as "problem:"/"solution:", "old:"/"new:" and "thesis:", are additions by the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.

Bostrom I
Nick Bostrom
Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press 2017


