Eliezer Yudkowsky on AI Takeover - Dictionary of Arguments
Bostrom I 119
AI takeover/Yudkowsky/Bostrom: Yudkowsky sketches how a superintelligence with only Internet access might acquire physical infrastructure: (1) Crack the protein folding problem to the extent of being able to generate DNA strings whose folded peptide sequences fill specific functional roles in a complex chemical interaction.
(2) Email sets of DNA strings to one or more online laboratories that offer DNA synthesis, peptide sequencing, and FedEx delivery.
(3) Find at least one human connected to the Internet who can be paid, blackmailed, or fooled by the right background story, into receiving FedExed vials and mixing them in a specified environment.
(4) The synthesized proteins form a very primitive “wet” nanosystem, which, ribosome-like, is capable of accepting external instructions; (…)
(5) Use the extremely primitive nanosystem to build more sophisticated systems, which construct still more sophisticated systems, bootstrapping to molecular nanotechnology—or beyond.(1)
>Ethics/superintelligence/Bostrom, >Norms/Bostrom, >Risks/Bostrom, >Technology/Bostrom, >Goals/Bostrom, >Ethics/Yudkowsky.
1. Yudkowsky, Eliezer. 2008a. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–45. New York: Oxford University Press.
Explanation of symbols: Roman numerals indicate the source; Arabic numerals indicate the page number. The corresponding book is listed below.
Bostrom I: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press 2017.