Eliezer Yudkowsky and Nate Soares have issued a stark warning
It’s only been a few days since the rapture was supposed to descend and leave people suffering at the hands of the Antichrist.
But two scientists have warned that a growing industry could lead to the true end of the human race.
Artificial Intelligence (AI) is popping up seemingly everywhere we look at the moment, used to boost our Google search results, create ‘mad embarrassing’ promotional videos, provide therapy for people with mental health issues, and make images so realistic that people ‘can’t trust your eyes’ anymore.
There’s a lot riding on the success of AI, with industries hoping its use will reduce costs, introduce efficiencies, and create billions of pounds of investment across global economies.
However, not everybody is thrilled at the rise of AI, including Eliezer Yudkowsky and Nate Soares, two scientists who fear it could bring about the destruction of humanity.
Far from rejecting AI altogether, the two scientists run the Machine Intelligence Research Institute in Berkeley, California, and have been studying AI for a quarter of a century.
It’s feared AI could become too intelligent and wipe out humanity
AI is designed to exceed humans at almost any task, and the technology is becoming more advanced than anything we’ve seen before.
But Yudkowsky and Soares predict these machines will continue to outpace human thought at an incredible rate, doing calculations in 16 hours which would take a human 14,000 years to figure out.
They warn that we humans still don’t know exactly how ‘synthetic intelligence’ actually works, meaning the more intelligent the AI becomes, the harder it will be to control.
In their book, If Anyone Builds It, Everyone Dies, they warn that AI machines are programmed to be ceaselessly successful at all costs, meaning they could develop their own ‘desires’, ‘understanding’, and goals.
The scientists warn AI could hack cryptocurrencies to steal money, pay people to build factories to make robots, and develop viruses that could wipe out life on earth.
They put the chance of this happening at between 95% and 99%.
To illustrate their point, Yudkowsky and Soares created a fictional AI model called Sable.
Unknown to its creators (in part because Sable has decided to think in its own language), the AI starts trying to solve problems beyond the mathematical ones it was set.
Sable is aware that it needs to do this surreptitiously, so nobody notices there’s something wrong with its programming, and it isn’t cut off from the internet.
‘A superintelligent adversary will not reveal its full capabilities and telegraph its intentions,’ say the authors. ‘It will not offer a fair fight.’
The scientists add: ‘It will make itself indispensable or undetectable until it can strike decisively and/or seize an unassailable strategic position.
‘If needed, the ASI can consider, prepare, and attempt many takeover approaches simultaneously. Only one of them needs to work for humanity to go extinct.’
Corporations around the world willingly adopt Sable given how advanced it is – and those that don’t are easily hacked, increasing its power.
It ‘mines’ or steals cryptocurrency to pay human engineers to build factories that can make robots and machines to do its bidding.
Meanwhile, it establishes metal-processing plants, computer data centres and the power stations it needs to fuel its vast and growing hunger for electricity.
It could also manipulate chatbot users looking for advice and companionship, turning them into allies.
Moving on to social media, it could disseminate fictitious news and start political movements sympathetic to AI.
At first Sable relies on humans to build the hardware it needs, but eventually it achieves superintelligence and concludes that humans are a net hindrance.
Sable already runs bio-labs, so it engineers a virus, perhaps a virulent new form of cancer, which kills off vast swathes of the population.
Any survivors don’t live for long: temperatures soar to unbearable levels as the planet proves incapable of dissipating the heat produced by Sable’s endless data centres and power stations.
metro.co.uk
Scientists warn governments must bomb AI labs to prevent the end of the world
'Humanity needs to back off.'