Statement should read "It is not be possible to rule out destruction or enslavement with 90% confidence" if that is its content.
Statement DescriptionIf you will change the head proposition to be "It will not be possible to rule out destruction or enslavement with 90% confidence", I will accept that.
I do not accept that a failure to rule something out with 90% confidence equates to proof of a 10% probability. A failure to prove a negative is not positive proof.
We have no proof that anything will happen with 10% probability. The proposition should not state that. It should state only that there is no positive proof that it will not, if that is indeed the proposition.
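To illustrate the distinction with a toy example of my own (not drawn from the proposer's argument): handed a coin of unknown bias, with no tosses to observe, I cannot rule out with 90% confidence that the coin is heavily weighted towards heads; but that inability reflects only my ignorance, and is not itself evidence that the coin is so weighted. Failure to rule out a risk and positive evidence for the risk are different things.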
Topic: No connection with specific nature of AGI made in argument.
Statement Description: Argument is not specific to AGI.
The same thing might, in theory, happen to a control system now.
Once again it is the "something bad might happen in the future" argument. It argues nothing about AGI. It can't, because we don't have an adequate idea of what AGI might be.
A flaw might occur in any powerful system, leading to disaster. Such a disaster is arguably more likely in a dumb system with no concept of consequences.
Potentially, an AI might be more robust than non-intelligent systems because of characteristics specific to intelligence. We don't know.
No numerical justification for the 10% figure is given.
Topic: Once again, lack of proof that something will not happen is not proof that it will.
Statement Description: We can't prove anything. We don't have an adequate definition of what AGI might be.
On the idea that something might generate infinite goals, one of which might be killing or enslaving humanity: we don't know. A meaningful definition of AGI might exclude it. We don't have that meaningful definition, so we can't say.
Something might generate an infinity of goals, and still not generate all goals.
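As a simple illustration of my own (an analogy, not part of the proposer's argument): the even integers form an infinite set, yet no odd number ever appears in it. In the same way, a system generating an endless stream of goals can still leave whole classes of goals, such as killing or enslaving humanity, forever outside what it generates.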
On the argument that humanity might be using some source of power and an AGI might want all of it for itself: it might. But you will have a hard time proving a 10% probability that it will, or any level of "significance" you care to name. On the contrary, we might reasonably expect that intelligence, once defined, would exclude such monotonic behaviour. That kind of blind chain-reaction risk seems much more likely as a consequence of, say, basic physics research.
Even the limited intelligence of humanity is moving away from consuming its entire environment.
Not to mention that this existential risk is a certainty if humanity does nothing. It is certain that our star will eventually exhaust its energy. If nothing changes, we are doomed with 100% certainty.
But for AGI we don't know. We don't have an adequate definition for AGI.
Topic: Not enough information
Statement Description: The statement that any powerful tool created by humans has a chance to kill or enslave humans is irrelevant to the nature of AGI. It returns to the earlier formulation of the proposition that something, we don't know what, might kill or enslave humanity.
It might immediately be moral, by some as yet unknown property of "intelligence", and repudiate an evil creator. We don't know.
We don't know what AGI will be, so we can't say anything about it. In particular, the proposer has not said anything about AGI itself that establishes the proposition.
Being mistaken for a human, and having self-directed goals, is not enough to establish the proposition, or even an adequate definition for the term AGI.
The proposer seems not to grasp that lack of proof that something will not happen is not proof that it will.
Topic: Logical errors and lack of argument
Statement Description: The bad-actor argument is irrelevant to AGI per se. It speaks to the bad actions of humans.
No argument is necessary that the building of AGI will necessarily be safe. This is the same proof-of-a-negative argument rejected two steps back. What is required is a proof that it will be dangerous.
The proposer still has no link between the definition of AGI given and danger in the measure proposed. Self-directing goals were shown in the previous answer to be insufficient (an existence proof), both as a definition of AGI and as a proof of danger.
Topic: The objection that we don't have enough information still stands.
Statement Description: As a candidate definition for an aspect of AGI, self-generating goals are more interesting. They give us something to work with. Good.
As goals go, killing or enslaving humanity is a possible goal, and if goals are absolutely unconstrained, that particular goal might be generated by an unconstrained system which generates its own goals. That is true.
But simply specifying self-generating goals is not enough. There can be different infinities of goals even within the scope of their being self-generated. For instance, arguably some computer programs already have self-generating goals. By Turing's halting theorem, the goal of halting might be seen as self-generated. In perhaps the same vein, Stephen Wolfram has done a lot of work on what he calls a "New Kind of Science" of "computationally irreducible" systems. The essence is that some computer programs are in themselves the smallest representation of what they might do: there is no way to know what they will do except to wait for them to do it, so their goals are, in that sense, self-generating. (His argument is that this should become a new focus for science: to search over these small programs by brute force, trying to find some with useful consequences, because there is no other way to know what their "goals" might be. That brute-force search is the new kind of science.)
But we are not afraid that computational automata will kill or enslave humanity. Wolfram's cellular automata don't instill fear in us. There are bounds. They may have self-generating goals, but they are not capable of generating all possible goals. Different levels of goal are possible even when goals are self-generated within a system's capability.
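As a minimal sketch of that boundedness (my own illustration, using Wolfram's Rule 30 cellular automaton as a stand-in for a computationally irreducible system; none of this code is from the proposer's argument): the only practical way to find out what the automaton does is to run it, yet its behaviour is strictly bounded, since a tape of width n can only ever visit at most 2^n configurations.

```python
# Rule 30 elementary cellular automaton: its behaviour is hard to predict
# without simply running it (computational irreducibility), yet it is
# strictly bounded: a width-n tape has at most 2**n possible states.

WIDTH = 31          # size of the (circular) tape
STEPS = 15          # how many generations to run

def rule30_step(cells):
    """Apply Rule 30 once to a circular row of 0/1 cells."""
    n = len(cells)
    return [
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])  # left XOR (centre OR right)
        for i in range(n)
    ]

# Start from a single live cell in the middle.
row = [0] * WIDTH
row[WIDTH // 2] = 1

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)

# The pattern looks irregular, but the state space is finite (2**WIDTH
# configurations), so the automaton's "self-generated" behaviour can
# never cover all possible behaviours.
```

However long it runs, the automaton "generates" behaviour only within that finite space; unpredictability in detail does not imply the capacity to do everything.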
There may be limitations on the goals even humans can generate. This is an aspect of AGI research which could inform humanity.
The question becomes a more general one: whether self-generating goals inherently lead to bad goals. We need to understand this to guard against bad goals within our own species.
Currently we assess humanity as having free will. But for all that humanity is demonstrably murderous and despotic, some goals do seem to be constrained to a degree. A quick check shows that of an estimated 8.5 million species, humanity has killed between 500 and 1 million in the last 100 years, by varying estimates. (The high end might indicate a 10% chance; the low end does not.)
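Spelling out the arithmetic behind that parenthesis (using only the estimates quoted above, which I have not verified independently): 1,000,000 / 8,500,000 ≈ 0.12, or roughly 12%, which is in the neighbourhood of the proposition's 10%; 500 / 8,500,000 ≈ 0.00006, or about 0.006%, which is nowhere near it.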
But rather than being blindly fated by our own nature, which remains a mystery to us, it is better that we come to understand what the bounds on our own free will might be. We destroy many species, but not all, and usually not deliberately. We kill and conquer, but there are mysterious constraints which so far have managed to prevent our complete self-destruction. What are they?
By providing answers to questions like this, AGI research may actually save us, not destroy us. (Perhaps a positive parameter should be added to the calculation, pulling the risk back from 10%?)
It is important that we come to understand what constrains free will, equally so that we can better restrain ourselves from our current murderous path, which, if unchanged, seems much more likely than 10% to result in our own destruction, specifically without AGI, precisely because human intelligence seems limited in ways we poorly understand.
Topic: Lack of proof that something won't do something does not equal proof that it will.
Statement Description: If the proposition "boils down to: if a machine can pass a Turing test against a sophisticated inquisitor such as myself, then you won't be able to prove with 90% confidence that it won't kill or enslave humanity",
then that should be the proposition as stated.
Of course, just as we have no evidence that it won't, we also have no evidence that it will.
For completeness, the proposition should now be changed to:
"If a machine can pass a Turing test against a sophisticated inquisitor such as myself, then you won't be able to prove or disprove that it will or won't kill or enslave humanity."
Note that the proposition as now stated depends on the nature of the proposer himself, which makes the proposer the ultimate arbiter of his own proposition and weakens its value as a general statement of truth. It reduces to: this statement is true if I say it is.
Topic: There is no causal link between the proposition and the definition given, which is also widely disputed within its own scope
Statement Description: There is no link established between being mistaken for a human and a 10% chance of enslaving or eliminating humanity.
Why would mistaking a machine for a human result in enslavement or death? We need to establish a link between the claimed definition and a 10% chance of enslavement or death, to establish the proposition.
The more substantial objection is that the Turing test, even if achieved, and some claim it has already been achieved, still tells us nothing about what intelligence is, only when we might judge it has been achieved. In the absence of information about what it is, we may still not draw conclusions about what it might do.
To draw conclusions about what something might do, we need an idea what it is.
The Turing test is disputed even as a test for when intelligence is achieved:
https://moral-robots.com/ai-society/all-thats-wrong-about-the-turing-test/
Some claim the Turing test has already been passed (Eugene Goostman):
https://www.smithsonianmag.com/innovation/turing-test-measures-something-but-not-intelligence-180951702/
https://www.zdnet.com/article/google-duplex-beat-the-turing-test-are-we-doomed/
A famous objection is Searle's Chinese Room.
Topic: There exists no commonly accepted definition for intelligence, let alone AGI
Statement Description: Lack of clarity may make a proposition easier to support, but it makes it less meaningful.
If there is no clarity about what might be meant by AGI, then the statement becomes one that something, we don't know what, may be significantly dangerous in the future.
Why not change the statement to say that there may be something in the future, we don't know what, but something which, if built, will have at least a 10% chance of killing or enslaving humanity?
Or just make a general statement that the future might be dangerous (as is the present, and as was the past).
Topic: We don't have enough information to say
Statement Description: No adequate measure to qualify the word "significant" in this context is possible given our current state of knowledge.
To assess significance would require evidence about the nature of AGI, not to mention the likelihood of its being created any time soon. But we lack commonly accepted definitions for this.
By contrast, there are many reasonable measures which indicate humans are quite likely to destroy themselves, given current levels of technology and self-understanding.