@ilyasut
Rich Sutton: The Bitter Lesson that compute always wins: incompleteideas.net/IncIdeas/Bitte…
h/t @jachiam0

Tejas Kulkarni
@tejasdkulkarni
Mar 14
"The two methods that seem to scale arbitrarily in this way are search and learning" - search and learn using what primitives? they won't fall out of the sky

Tejas Kulkarni
@tejasdkulkarni
Mar 14
even if they do fall, they will first land inside your head, and therefore the right inductive biases that scale are key.

Natesh Ganesh
@GaneshNatesh
Mar 14
1/2 "The ultimate reason for this is Moore's law" - this is key! Moore's law is reaching its inevitable end, and if the need for AI compute doubles every 3.5 months, it's not clear how long this idea of 'compute always wins' is sustainable, for economic & environmental reasons.
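The doubling period quoted in this tweet implies an enormous annual growth rate; a quick sketch of the arithmetic (the 3.5-month figure is the tweet's, everything else is just the math):

```python
# If AI compute demand doubles every 3.5 months (the figure quoted in the
# tweet), the implied growth factor over one year is 2^(12 / 3.5).
doubling_period_months = 3.5
growth_per_year = 2 ** (12 / doubling_period_months)

print(f"growth per year: ~{growth_per_year:.1f}x")  # roughly 10.8x per year
```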

Natesh Ganesh
@GaneshNatesh
Mar 14
2/2 Computing is 5% of US's energy budget. We simply cannot have that go up by a factor of even 5.
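The back-of-the-envelope arithmetic behind this point (the 5% share is the tweet's figure; the rest is simple multiplication):

```python
# Sketch of the tweet's arithmetic: a 5x increase in computing's ~5% share
# of the US energy budget would bring it to a quarter of the total.
compute_share = 0.05  # tweet's figure: computing's share of US energy use
scale_factor = 5      # the hypothetical 5x increase
new_share = compute_share * scale_factor

print(f"new share: {new_share:.0%}")  # 25% of the energy budget
```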

Aurélien Geron
@aureliengeron
Mar 15
Yes! Compute always wins. How do we apply this bitter lesson to problems where little data is available? Currently, we try to build a model that encodes our prior knowledge. Perhaps we should instead build a system that searches for relevant knowledge and builds the model?

Nicholas Guttenberg
@ngutten
Mar 15
I don't even think this is just Moore's law. In science there's often a point where you can only make progress by explicitly separating out the stuff that you won't be able to say anything simple about from the things you can, but that stuff still exists and must be dealt with.

Nicholas Guttenberg
@ngutten
Mar 15
And in some cases, you can strongly prove that no simple outcome can be reached that contains everything - for example, the generalized N-body problem being analytically intractable even if it's statistically approachable.

Marc
@zarzuelazen
Mar 14
OK, so we want to avoid relying on domain-specific knowledge and instead look for meta-methods that can learn for themselves? All well and good, but these meta-methods (e.g., search) themselves rely on theoretical insight, without which compute power can't be harnessed.

David Speck
@ai_dev_david
Mar 15
I agree. That post talks about the unimportance of domain knowledge. But in my opinion, AI research is about finding general improvements to algorithms. More compute + better algorithms is behind the success of AI imho. Not compute alone.