The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.
If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
The information in the "Synopsis" section may refer to different editions of this title.
I highly recommend this book (Bill Gates)
Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era (Stuart Russell, Professor of Computer Science, University of California, Berkeley)
Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book (Martin Rees, Past President, Royal Society)
This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? (Max Tegmark, Professor of Physics, MIT)
Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever (Olle Häggström, Professor of Mathematical Statistics)
Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking (The Economist)
There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake (Financial Times)
Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes (Elon Musk, Founder of SpaceX and Tesla)
A damn hard read (Sunday Telegraph)
Every intelligent person should read it. (Nils Nilsson, Artificial Intelligence Pioneer, Stanford University)
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Strategic Artificial Intelligence Research Centre and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
The information in the "About this book" section may refer to different editions of this title.