Gary Marcus
CEO/Founder; cognitive scientist and best-selling author. Next book (Fall 2019) = Rebooting AI.
11,244 Tweets · 3,558 Following · 29,213 Followers
Tweets
Gary Marcus 19h
Thanks for the mention in This Week’s Awesome Stories From Around the Web
Gary Marcus Aug 17
Some concerns with new physical reasoning benchmark from Facebook, by Ernie Davis.
Gary Marcus retweeted
Justin Johnson Aug 15
Today we are releasing PHYRE, a benchmark to test PHYsical REasoning in AI systems by solving physics puzzles. Paper, code, and web demo: With , Laurens van der Maaten, Laura Gustafson and Ross Girshick
Gary Marcus retweeted
Ben Dickson Aug 17
Gary Marcus retweeted
Melanie Mitchell Aug 16
Here's an interesting-looking paper trying to formalize / quantify "memorization" in DNNs. Relevant to your claim about "turbocharged memorization" in deep reinforcement learning.
Gary Marcus retweeted
Robert Long Aug 16
At , I'm compiling a list of published arguments that current methods in AI will not lead to human-level intelligence. Featuring , , , and others. Who else should be in there?
Gary Marcus retweeted
Steven Pinker Aug 16
DeepMind's Losses and the Future of Artificial Intelligence on AI hype, the limits of "deep learning," and the imperative to understand human cognition. via
Gary Marcus retweeted
Jag Bhalla @NanoSalad maker Aug 16
to understand really = to have a causal model ≠ to know the past stats; the former can handle novel cases, the latter can't. re
Gary Marcus Aug 16
They were very specific about the problems they were trying to solve (faster/less noise/more signal density), and saw quickly how transistors might solve them. They were not certain of a specific solution in advance of investigation. Highly recommended book: The Idea Factory.
Gary Marcus retweeted
Bill Lampos Aug 16
Very interesting excerpt from ’ article about DeepMind and the future of AI
Gary Marcus Aug 16
Bell Labs certainly knew its goals in improving telephone switching networks, though they didn’t anticipate the scope of what they would develop.
Gary Marcus Aug 16
I am not per se either; I want to resurrect symbol-manipulation in the context of hybrid systems that also incorporate deep learning.
Gary Marcus Aug 16
Actually it probably is a coincidence, but an interesting excerpt.
Gary Marcus Aug 16
Going all in on transistors wasn’t a bad idea. Ditto ICs and computers.
Gary Marcus retweeted
David J. Gunkel Aug 16
The last sentence of 's insightful article in about is interesting and informative. The entire analysis (which is definitely worth a read) proceeds from an assumption that is the proper objective of research. Is it? Should it be?
Gary Marcus Aug 15
Agreed. asked me yesterday what would excite me qua NLU benchmark. Hype-detection by finding headlines that don’t match their content, per my example below ’s followup, would indeed be impressive. (As would robust fake-news detection, of course.)
Gary Marcus Aug 15
exercise for those wanting to sort hype from reality in media reports: how many paragraphs do you need to read in before you see that the headline is misleading?
Gary Marcus Aug 15
no my directly. but chapters 2 and 3 explain why being clear about symbol-manipulation is important, and what is and is not important about this distinction. I would rather not continue this discussion in a sound-bite way, when the distinctions are subtle.
Gary Marcus retweeted
Virginia Dignum Aug 14
Excellent reflection by on the future of , based on the analysis of strategy: environmental and economic costs, brittleness, and most importantly trust.
Gary Marcus Aug 15
why don’t you ask Geoff what he means, or refer to my lengthy not-for-Twitter discussion in The Algebraic Mind