Hunter Jay
@HunterJayPerson
Sydney, New South Wales

Engineering apple picking robots with Ripe Robotics. Very concerned about *potentially* unfriendly superintelligence, without strong views on timelines.

189 Tweets · 62 Following · 17 Followers

Tweets

Hunter Jay
@HunterJayPerson
Feb 2

Looking good, medium Yud! @ESYudkowsky

Hunter Jay
@HunterJayPerson
Jan 30

This didn't age well.

Hunter Jay
@HunterJayPerson
Jan 30

I'm going to stop here because the later points are not actually about superintelligence any more, but I can continue if anybody is actually reading this rant. If this isn't satire, I'm really reeling!

Hunter Jay
@HunterJayPerson
Jan 30

And even in isolation, it's common for today's A.I. to vastly outperform humans at narrow tasks without using previous human work or collaboration -- see: AlphaZero.

Hunter Jay
@HunterJayPerson
Jan 30

It might not be in isolation. We don't know how to keep it in isolation safely while still getting it to do anything yet. Its starting point may very well be 'all of humanity's knowledge', not 'nothing'.

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From Robinson Crusoe. Humans work better in groups, the A.I. would too. Also you shouldn't expect an isolated thing to overtake all of humanity."

Hunter Jay
@HunterJayPerson
Jan 30

But there are reasons for thinking it might be faster -- adding more hardware could speed up the computation, and direct access to all existing information over the internet massively reduces the amount of experimentation the system needs to do.

Hunter Jay
@HunterJayPerson
Jan 30

Firstly, if a superintelligence takes 18 years to go from near-human to super-human level, that's still a massive problem.

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From Childhood. We're born helpless and it takes us ages to learn. Why would a superintelligence do it any quicker?"

Hunter Jay
@HunterJayPerson
Jan 30

But it doesn't even have to modify its own code while it's running anyway -- it could simply build a successor.

Hunter Jay
@HunterJayPerson
Jan 30

Why not? The A.I. was built, originally, by people who were modifying its code directly. Why couldn't it simply continue that work? Computer programs are certainly capable of changing themselves -- we do it all the time.

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From Brain Surgery. We can't go in and improve parts of our brain, similarly the A.I. can't either."

Hunter Jay
@HunterJayPerson
Jan 30

They are built, presumably, to perform some task, so the forcing function is towards efficiency and action rather than lack of action anyway.

Hunter Jay
@HunterJayPerson
Jan 30

Would you bet your life on the idea that superintelligence might just be lazy? Even if, somehow, we do create "lazy" superintelligence, would you bet your life that every single one we ever create will be?

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From My Roommate Peter. Peter is lazy, the idea that every intelligent system is going to be insanely ambitious is therefore wrong."

Hunter Jay
@HunterJayPerson
Jan 30

This is true. It's also irrelevant, unless you're certain that today's A.I. algorithms are the best we will ever come up with, despite the fact that they have been reliably improving over the past 70 years.

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From Just Look Around You. Today's A.I. systems are not as complex or clever as the media and corporations say."

Hunter Jay
@HunterJayPerson
Jan 30

And it's not like we have no idea why we have complex motivations anyway -- we know we behave the way we do because our ancestors who didn't, died! You could do something similar with A.I., but there's no reason to think it *must* happen by default.

Hunter Jay
@HunterJayPerson
Jan 30

You don't know that! You only have one example set of complex minds; you have no idea if all other complex minds are going to be similarly motivated! Would you bet our lives on that?

Hunter Jay
@HunterJayPerson
Jan 30

M: "The Argument From Mental Complexity. The Orthogonality Thesis is wrong, because complex minds have complex motivations."
