Alexander Kruel
@XiXiDu

I predict that less than 5% of the relevant experts will change their minds after reading the AI risk literature.

Paul Crowley
@ciphergoth
Aug 7

There's no fire alarm for artificial general intelligence, so if you're not alarmed by this, nothing will alarm you until it is much, much too late. intelligence.org/2019/08/06/aug… pic.twitter.com/GHfSvvkWx7

Roko Mijic
@RokoMijicUK
Aug 8

Having talked to some AI people here on Twitter, I think I can see what the problem is. AI people are dismissive of safety concerns for two reasons: (1) they correctly see lots of BS hype about it in the popular media. Most of them haven't actually engaged with serious (1/n)

Paul Crowley
@ciphergoth
Aug 8

This seems right; people largely don't change their minds. But I'd predict that someone in the field who didn't have a set opinion about it would be much more likely to take the field seriously after watching, e.g., Nate Soares's Google talk.

Alexander Kruel
@XiXiDu
Aug 8

You got a link to the talk? I will try to get some people to watch it and ask for feedback.

Roko Mijic
@RokoMijicUK
Aug 8

Maybe, but there might be a bit of a hysteresis effect here. Exposure to carefully reasoned thinking first is probably more effective than stupid AGI-on-your-cornflakes hype and then the careful stuff playing catch-up.