Arvind Narayanan
Much of what’s being sold as "AI" today is snake oil. It does not and cannot work. In a talk at MIT yesterday, I described why this is happening, how we can recognize flawed AI claims, and how we can push back. Here are my annotated slides:
Arvind Narayanan Nov 19
Replying to @random_walker
Key point #1: AI is an umbrella term for a set of loosely related technologies. *Some* of those technologies have made genuine, remarkable, and widely-publicized progress recently. But companies exploit public confusion by slapping the “AI” label on whatever they’re selling.
Arvind Narayanan Nov 19
Replying to @random_walker
Key point #2: Many dubious applications of AI involve predicting social outcomes: who will succeed at a job, which kids will drop out, etc. We can’t predict the future — that should be common sense. But we seem to have decided to suspend common sense when “AI” is involved.
Arvind Narayanan Nov 19
Replying to @random_walker
There’s evidence from many domains, including prediction of criminal risk, that machine learning using hundreds of features is only slightly more accurate than random, and no more accurate than simple linear regression with three or four features—basically a manual scoring rule.
Arvind Narayanan Nov 19
Replying to @random_walker
Key point #3: transparent, manual scoring rules for risk prediction can be a good thing! Traffic violators get points on their licenses and those who accumulate too many points are deemed too risky to drive. In contrast, using “AI” to suspend people’s licenses would be dystopian.
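The license-points idea above can be sketched as a few lines of code; the violation categories, point values, and suspension threshold below are hypothetical illustrations, not taken from any real licensing scheme:

```python
# A minimal sketch of a transparent, manual scoring rule of the kind the
# thread contrasts with opaque "AI" risk prediction. All point values and
# the threshold are hypothetical.

POINTS = {
    "speeding": 3,
    "running_red_light": 4,
    "reckless_driving": 6,
}

SUSPENSION_THRESHOLD = 12  # hypothetical cutoff


def risk_score(violations):
    """Sum the points for a driver's recorded violations."""
    return sum(POINTS[v] for v in violations)


def is_suspended(violations):
    """A driver whose points reach the threshold is deemed too risky to drive."""
    return risk_score(violations) >= SUSPENSION_THRESHOLD


# Every decision is auditable: anyone can recompute the score by hand.
driver = ["speeding", "speeding", "reckless_driving"]
print(risk_score(driver))    # 12
print(is_suspended(driver))  # True
```

The design point is exactly the transparency the tweet praises: the rule has three or four inputs, the weights are public, and any affected person can verify the arithmetic, none of which holds for a black-box model with hundreds of features.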
Arvind Narayanan Nov 19
The best part of the event was the panel discussion with , , , and the audience. Thanks to for the excellent summary!
Hillel Nov 19
Question: for this slide, are these the average R² for the models, or the best R²s, or something else? Just having a bit of trouble understanding it!
Arvind Narayanan Nov 20
They are the best R²s!
Ricardo Baeza-Yates Nov 22
Replying to @random_walker
Although I agree with you in general, it is bad to generalize. One social counter-example: predicting the risk of dyslexia in children using a game. The sooner a child sees a specialist, the better the intervention that can save the child from failing school.
Arvind Narayanan Nov 22
Replying to @PolarBearby
Thank you for the example. I would appreciate references to studies if you know of any.
Andy Charlwood Jan 18
Replying to @random_walker
Hi Arvind, I thought these slides were great and am referencing them in a paper I am writing on how AI is changing people management. Can you give me more details of the talk (when, where, etc.) so I can reference it properly?
Arvind Narayanan Jan 18
Replying to @ProfAndyC