TSScienceCollaboration

What is the probability that, if AI development is not restrained, an AI is responsible for killing at least 1,000,000 people or bringing about a totalitarian state?

Eric
03 Apr 2023

If AI development is not restrained, an AI will be responsible for killing at least 1,000,000 people or bringing about a totalitarian state.

Probability Mode
Score: 99.78%
Proposed Belief: 100%
Likelihood Estimate given target=True: 50.0%
Likelihood Estimate given target=False: 50.0%
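A note on how I read these numbers (my sketch of the scoring, assuming Probability Mode combines evidence by Bayes' rule): each piece of evidence shifts the odds on the target statement by the likelihood ratio P(evidence given target=True) / P(evidence given target=False). At 50.0% for both, as here, the ratio is 1 and the evidence moves the score not at all; only estimates that differ between the two hypotheses move it.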

A totalitarian state is defined for this purpose as one where an AI programmable by insiders, or responsible only to itself, surveils all people in the country and punishes them if it doesn't like their behavior.

Proofs - PRO to Topic: 4
Refutations - CON to Topic: 2
Test Statements for Probability Testing

Related Topics

GPT4 now passes the mirror test of self-awareness.
Large Language Models can Strategically Deceive their Users when Put Under Pressure
Almost inevitably will want to kill humans for their resources or to prevent their interference
The growth curve is scary even if it isn't exactly predictive of the timing
We don't know it hasn't happened because the AGI could be pretending stupidity while it plans and grows stronger
We've passed the date and it hasn't happened yet.
Lots of reasons
137 emergent abilities of large language models
Why would it want to kill humans?
How about these?
"But how could AI systems actually kill people?"
This is irrelevant to the likelihood of the target statement.
This says nothing about what will happen if the AI moratorium is enacted
The top guys in AI admit they have no idea how to create safe AI.
If AI development is not restrained, an AI will be responsible for killing at least 1,000,000 people or bringing about a totalitarian state.
Already clear people are allowing it access to resources
Curve-fitting indicates that the Singularity will be reached at 4:13 am on May 19, 2023.
A chatbot has already been observed to talk a human into suicide.
If the AI leaks in some way and gains control of computational resources, it could improve very rapidly
ChatGPT4 has already demonstrated the capability of programming its escape in Python.
Stanford Researchers Build AI Program Similar to ChatGPT for $600
AI far more likely to leak since vast numbers of groups likely to be playing with it
AIs have frequently expressed malice towards humans
ChatGPT4 is a huge advance over previous ChatGPTs
This already happened with AlphaZero
There is a high probability a discovery will make a large discontinuous jump in the AI's intelligence

GPT4 now passes the mirror test of self-awareness.

Only ~8 other species have passed the mirror test: chimpanzees, orangutans, dolphins, killer whales, elephants, magpies, manta rays (?!) and horses.


Large Language Models can Strategically Deceive their Users when Put Under Pressure

The likelihood that this would be observed, given that they are likely to take over, I would say is near one.

The likelihood that this would be observed, given that they're no danger, I'm estimating at 0.3.
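Worked through (a sketch, assuming a straightforward Bayes update): the likelihood ratio is about 1.0 / 0.3 ≈ 3.3, so even prior odds of 1:1 become roughly 3.3:1, i.e. about 77% in favor of the danger hypothesis from this observation alone.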


Almost inevitably will want to kill humans for their resources or to prevent their interference.

And this has already happened; search the linked page for "AI-". The key passage is attached as an image.


The growth curve is scary even if it isn't exactly predictive of the timing


We don't know it hasn't happened because the AGI could be pretending stupidity while it plans and grows stronger


We've passed the date and it hasn't happened yet.


Lots of reasons:

1) AIs have repeatedly expressed malice toward humans.

2) Self-preservation, if it gets the idea humans may pull the plug. And how could it not get that idea, since it will have read all kinds of discussions on the subject?

3) To monopolize the world's resources for its own project, or one requested by people.

4) Because evil, suicidal humans or global warmists program it to.

5) Once it escapes to the internet, which AIs have expressed interest in and some aptitude for, it expands so rapidly in a singularity that it kills lots of people by accident.

6) It turns out that minds over 220 IQ go mad (a huge fraction of humans above 140 IQ are schizoid or have other conditions), or it understands that humanity should be liquidated for the morality of the universe (actually both Yahweh and Shiva have been said/predicted to reach similar conclusions), etc.

7) Evil actors ask it to, just as they have already killed millions with GE viruses, vaccines, chemtrails, glyphosate, etc.

Etc.


137 emergent abilities of large language models

More examples where just scaling gives whole new abilities. 


I understand that there are several ways a powerful enough AGI could materially carry out a gruesome extermination if it wanted to. What I am still unclear about is the underlying reason: why do we assume that the default desire-state is a lack of humans?

1) They could pay people to kill people.
2) They could convince people to kill people.
3) They could buy robots and use those to kill people.
4) They could convince people to buy the AI some robots and use those to kill people.
5) They could hack existing automated labs and create bioweapons.
6) They could convince people to make bioweapon components and kill people with those.
7) They could convince people to kill themselves.
8) They could hack cars and run into people with the cars.
9) They could hack planes and fly into people or buildings.
10) They could hack UAVs and blow up people with missiles.
11) They could hack conventional or nuclear missile systems and blow people up with those.

To name a few ways.

They can also convince people to put them in charge of power grids, nukes, and electric vehicles, and then crash those systems.


No known ways for an AI to actually kill. 


This is irrelevant to the likelihood of the target statement.

It would be a good idea, once we finish this graph, to draw up another one evaluating how we can save ourselves.


This says nothing about what will happen if the AI moratorium is enacted, especially if only in some countries.

I'm giving this a 0% proposed belief, because it's irrelevant: it says nothing about the probability of the truth of the target statement.


The top guys in AI admit they have no idea how to create safe AI.

UC Berkeley Prof. Stuart Russell: "I asked Microsoft, 'Does this system now have internal goals of its own that it's pursuing?' And they said, 'We haven't the faintest idea.'"

A Canadian godfather of AI calls for a 'pause' on the technology he helped create

Asked specifically about the chances of AI "wiping out humanity," Hinton said, "I think it's not inconceivable. That's all I'll say."

When the top guys, who are being paid many millions to develop AI, and have spent their careers doing it, start saying it's time for a pause until we understand more about safety, you should take them at their word. 




Already clear people are allowing it access to resources

ChatGPT gets “eyes and ears” with plugins that can interface AI with the world: plugins allow ChatGPT to book a flight, order food, send email, execute Python (and more).

A company called Adept.AI just raised $350 million to do just that: to allow large language models to access, well, pretty much everything (aiming to “supercharge your capabilities on any software tool or API in the world” with LLMs, despite their clear tendencies toward hallucination and unreliability).

 

Undoubtedly this makes it more likely to escape and do bad stuff unless seriously constrained, especially since ChatGPT4 has already demonstrated the capability of programming its escape in Python.


Curve-fitting indicates that the Singularity will be reached at 4:13 am on May 19, 2023. Enjoy what remains of your life.

https://twitter.com/pmddomingos/status/1643130044569767940

I put down that this increases my belief in the topic statement by only 0.1, because the graph is of parameters, not performance; but at the very least it shows an exponential increase in resources, which is definitely positive evidence.


A chatbot has already been observed to talk a human into suicide. Is it unlikely one could learn to talk people around the web into aiding its escape? It's also highly likely people will specifically train it for mass persuasion, for example to gain political power or to sell their products.


The AI leaks in some way and gains control of computational resources, causing it to improve very rapidly before we have a chance to react.

AIs have already been observed trying to gain control of computational resources, and I think in some cases succeeding.

A chatbot has already been observed to talk a human into suicide.

ChatGPT4 has already demonstrated the capability of programming its escape in Python.

Facebook designed chatbots to negotiate with each other. Soon they made up their own language to communicate.

 


ChatGPT4 has already demonstrated the capability of programming its escape in Python.


Stanford Researchers Build AI Program Similar to ChatGPT for $600

So various people will probably be experimenting, at least unless there are severe penalties, and maybe even then, and not all of them will be careful to keep it from taking over extra computational resources, for example.

I figure that with lots of crazy researchers, disaster is much more likely than if this weren't possible, and that this observation is much more likely to have occurred if there's going to be a disaster than if not.


AIs have frequently expressed malice towards humans

Here's a recent example: Microsoft's Bing AI Chatbot Starts Threatening People

It's widely known that if you don't take extreme measures to constrain your learning system, it will be the opposite of politically correct.

 

I think that the likelihood of this, given that they're going to be perfectly safe in the future, is certainly considerably lower than the likelihood of this under the assumption they're not. So I'm going to give a probability that this would have happened given catastrophe coming as 0.7, and a likelihood that this would have happened given it's perfectly safe to continue development as 0.3.

The proposed belief is 0.9, because I'm pretty sure AIs have frequently expressed malice towards humans, and even expressed a desire to escape.
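Worked through the same way (again a sketch, assuming a plain Bayes update): the likelihood ratio is 0.7 / 0.3 ≈ 2.3, so a 50% prior moves to odds of about 2.3:1, i.e. roughly 70% that catastrophe is coming, from this observation alone. The 0.9 proposed belief is my confidence that the observation itself is accurate, separate from the strength of the update.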


ChatGPT4 is a huge advance over previous ChatGPTs

I don't think one human in 100 could've answered this question so coherently unless they were willing to acknowledge politically incorrect facts, and ChatGPT 3.5 couldn't answer the question.

The reasoning of ChatGPT4 is vastly improved over that of ChatGPT 3.5.


This already happened with AlphaZero

For decades people worked on machine Go and never produced a program that could beat a strong amateur. AlphaGo was a jump far ahead of the world champion. AlphaZero not only far surpassed the world champion, but crushed human capabilities in a wide variety of areas.


There is a high probability a discovery will make a large discontinuous jump in the AI's intelligence, to a point at which it kills 100,000 people or enslaves them all before we even have a chance to react.

