Paul Crowley
@ciphergoth

There's no fire alarm for artificial general intelligence, so if you're not alarmed by this, nothing will alarm you until it is much, much too late. intelligence.org/2019/08/06/aug… pic.twitter.com/GHfSvvkWx7

Alexander Kruel
@XiXiDu
Aug 8

My thoughts here twitter.com/XiXiDu/status/…

Paul Crowley
@ciphergoth
Aug 8

AI systems have identified subgoals since the beginning; can you flesh out how your proposed fire alarm differs from what we have today? What would be a specific example that would alarm you?

Roko Mijic
@RokoMijicUK
Aug 8

Having talked to some AI people here on Twitter, I think I can see what the problem is.
AI people are dismissive of safety concerns for two reasons: (1) they correctly see lots of BS hype about it in the popular media. Most of them haven't actually engaged with serious (1/n)

Alexander Kruel
@XiXiDu
Aug 8

I predict that fewer than 5% of the relevant experts will change their minds after reading the AI risk literature.

Jessica Taylor
@jessi_cata
Aug 8

This is like one of those "estimate how many jelly beans are in this jar, say the biggest number without going over" questions. If someone mistakenly guesses higher than the actual number, it would be wrong to conclude that there are therefore few jelly beans in an absolute sense.

entirelyuseless
@entirelyuseles
Aug 8

Why on earth should that alarm you? He was specifically asking for certainty about the least impressive thing. That by definition is asking for a high degree of precision about something inherently vague. That is why there was silence. This is a completely adequate explanation.

Paul Crowley
@ciphergoth
Aug 8

Yes, one should come at such questions with tremendous uncertainty; that's a large part of the point of the essay.