Roko Mijic
@RokoMijicUK
Having talked to some AI people here on Twitter, I think I can see what the problem is.
AI people are dismissive of safety concerns for a few reasons: (1) they correctly see lots of BS hype about it in the popular media. Most of them haven't actually engaged with serious (1/n)
Paul Crowley
@ciphergoth
Aug 7
There's no fire alarm for artificial general intelligence, so if you're not alarmed by this, nothing will alarm you until it is much, much too late. intelligence.org/2019/08/06/aug… pic.twitter.com/GHfSvvkWx7
Roko Mijic
@RokoMijicUK
Aug 8
writing about AI risk; they've just been casually exposed to it in the popular media, with all the distortions that brings. (2) They're worried that if it is seen to be real, their industry will get a bunch of government regulations which will in practice turn out to be (2/n)
Roko Mijic
@RokoMijicUK
Aug 8
useless paperwork and busybodying; this is a very legitimate concern. Imagine if every time you started a new piece of work you had to fill out a risk assessment form to be filed away by an AI risk compliance officer at your company. (3) People don't necessarily have (3/n)
Alexander Kruel
@XiXiDu
Aug 8
I predict that fewer than 5% of the relevant experts will change their mind after reading the AI risk literature.
Roko Mijic
@RokoMijicUK
Aug 8
Maybe, but there might be a bit of a hysteresis effect here. Exposure to carefully reasoned thinking first is probably more effective than stupid AGI-on-your-cornflakes hype and then the careful stuff playing catch-up.