‘It is acceptable to fail at AI and at Friendly AI. It is acceptable to succeed at AI and at Friendly AI. What is not acceptable is succeeding at AI and failing at Friendly AI.’
Eliezer Yudkowsky argues that our understanding of artificial intelligence is limited by our own human intelligence. Put differently, we only know how to measure minds like our own.
Yudkowsky warns that an artificial intelligence might someday improve its own capabilities very rapidly, and he argues for a concerted effort to develop ‘Friendly AI’ before that happens.
Yudkowsky, E. (2008), ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’, in Nick Bostrom and Milan M. Ćirković (eds.), Global Catastrophic Risks, 308–345. New York: Oxford University Press.