In 2008 European physicists at CERN were on the verge of activating the Large Hadron Collider to great acclaim. The LHC held the promise of testing precise predictions of the most important current theories in physics, including finding the elusive Higgs boson (which was indeed successfully confirmed in 2012). However, some opponents of the LHC's activation raised an almost laughable objection, encapsulated in a lawsuit against CERN by the German chemist Otto Rössler, that switching the LHC on might create a miniature black hole and destroy the Earth. In response, most physicists dismissed the chances of such a catastrophe as extremely unlikely, but none of them declared that it was utterly impossible. This raises an important practical and philosophical problem: how large must the probability that an activity will destroy humanity be before it outweighs any potential benefits of that activity? How do we even begin to weigh a plausible risk of destroying all humanity against other benefits?
Recently, prominent figures such as Sam Harris and Elon Musk have expressed similar concerns about the existential risks to humanity posed by the creation of artificial intelligence. This follows earlier work by Nick Bostrom (see for instance his 'Ethical Issues in Advanced Artificial Intelligence', 2004) and Eliezer Yudkowsky (for example, 'Artificial Intelligence as a Positive and Negative Factor in Global Risk', 2008). Let's call this position 'Anti-Natalism' about artificial (general) intelligence, since it holds that the overall risks of creating AI outweigh its expected benefits, and so concludes that AI shouldn't be brought to birth.