Twitter
Dileep George
Are you skeptical about successor representations? Want to know how our new model can learn cognitive maps, context-specific representations, do transitive inference, and flexible hierarchical planning? ...
Cognitive maps enable us to learn the layout of environments, encode and retrieve episodic memories, and navigate vicariously for mental evaluation of options. A unifying model of cognitive maps will...
bioRxiv @biorxivpreprint
Dileep George Dec 7
Replying to @yael_niv
As pointed out in her recent article, learning context-specific representations from aliased observations is a challenge. Our agent can learn the layout of a room from severely aliased random-walk sequences with only 4 unique observations in the room!
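To make the aliasing concrete, here is a toy sketch of the kind of setup described (my own illustration, not the paper's code): a grid room where each cell emits one of only 4 observation symbols, so the observation sequence alone cannot distinguish most locations.

```python
import random

# Hypothetical sketch of the severely aliased setup: a 5x6 room where each
# cell emits one of only 4 symbols (a 2x2 checker pattern), so many distinct
# locations produce identical observations.
H, W = 5, 6

def observe(r, c):
    return (r % 2) * 2 + (c % 2)   # only 4 distinct observation symbols

def random_walk(steps, seed=0):
    rng = random.Random(seed)
    r, c = rng.randrange(H), rng.randrange(W)
    obs, acts = [observe(r, c)], []
    moves = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up/down/left/right
    for _ in range(steps):
        a = rng.randrange(4)
        dr, dc = moves[a]
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W:   # walls: stay put on invalid moves
            r, c = nr, nc
        acts.append(a)
        obs.append(observe(r, c))
    return obs, acts

obs, acts = random_walk(1000)
assert len(set(obs)) == 4   # 30 cells in the room, but only 4 unique observations
```

The learner only ever sees `(action, observation)` pairs from such walks; recovering the 30-cell layout from 4 symbols is the point of the claim above.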
Dileep George Dec 7
Replying to @dileeplearning
And it works even when the room is empty, with no unique observations in the center of the room. The observations are now severely aliased and correlated, but it still recovers the map of the room.
Dileep George Dec 7
Replying to @dileeplearning
A hard problem of transitive inference: two aliased rooms with an overlapping portion, and the agent walks in only one room at a time. No problem...our model learns a coherent global map from these disjoint episodes!
Dileep George Dec 7
Replying to @dileeplearning
Observations in rooms are aliased, but there is more. Did you notice that a portion in the first room looks exactly like the overlapping patch, making this a harder problem? Recovering all the relative positions required a lot of transitive stitching, which the model did!
Dileep George Dec 7
Replying to @dileeplearning
Note that the model doesn't make any assumptions about 2D/3D space, Euclidean geometry, or anything like that. It is purely relational: the maps are learned entirely from sequential random-walk observations.
Dileep George Dec 7
Replying to @dileeplearning
Remember the 'splitter cells', where place cells encode the paths taken rather than locations? These emerge when rats follow stereotyped paths rather than random walks, and the same happens in our model. Lap cells likewise emerge when rats run laps along the same loop.
Dileep George Dec 7
Replying to @dileeplearning
By transferring learned structural knowledge, the agent can take shortcuts in a new room, including navigating around obstacles, without having seen the whole room.
Dileep George Dec 7
Replying to @dileeplearning
How does it work? By learning variable order sequences! The core idea is very simple: split aliased states and adapt them to different contexts. This representation has many good properties compared to suffix trees or RNNs.
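A minimal toy illustration of the splitting idea (my own sketch, not the paper's algorithm): an aliased symbol is unpredictable under a first-order model, but splitting it into context-specific copies makes each copy fully predictive.

```python
from collections import defaultdict

# In this toy sequence, 'b' is aliased: it is followed by 'c' or 'e'
# depending on hidden context, so a first-order model cannot predict.
seq = ["a", "b", "c", "d", "b", "e"] * 50

def next_counts(tokens):
    counts = defaultdict(lambda: defaultdict(int))
    for x, y in zip(tokens, tokens[1:]):
        counts[x][y] += 1
    return counts

raw = next_counts(seq)
# Raw model: P(next | 'b') is split 50/50 between 'c' and 'e'.
assert set(raw["b"]) == {"c", "e"}

# Split the aliased symbol into clones keyed by preceding context: 'b' seen
# after 'a' becomes 'b/a', 'b' seen after 'd' becomes 'b/d'.
cloned = ["b/" + p if t == "b" else t for p, t in zip([""] + seq, seq)]
split = next_counts(cloned)
# Each clone is now fully predictive of the next observation.
assert set(split["b/a"]) == {"c"} and set(split["b/d"]) == {"e"}
```

Here the context is just the previous symbol; the appeal of learning the splits, rather than conditioning on fixed-length history as a suffix tree does, is that each clone can come to represent however much context the data demands.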
Dileep George Dec 7
Replying to @dileeplearning
The model can be formulated as a highly structured over-complete HMM, and trained with EM. Training results in a directed graph that is very sparse and approximates the latent generative process behind the observed sequences.
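A hedged sketch of what that structure looks like: if every hidden state is a clone of exactly one observation, emissions are a fixed 0/1 block pattern and only the transition matrix is learned. The sizes and numbers below are illustrative, not from the paper.

```python
import numpy as np

# Sketch of a cloned, over-complete HMM: n_clones hidden states per
# observation symbol, deterministic block-structured emissions, and a learned
# row-stochastic transition matrix T (here just random for illustration).
n_obs, n_clones = 4, 3
n_states = n_obs * n_clones
rng = np.random.default_rng(0)

T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)     # row-stochastic transitions

def clone_block(o):
    return slice(o * n_clones, (o + 1) * n_clones)

def forward(obs_seq):
    """Filtering restricted to clone blocks: because emissions are
    deterministic, the forward message at each step is supported only on the
    clones of the observed symbol -- this is the structural sparsity that
    makes EM on this model cheap."""
    alpha = np.zeros(n_states)
    alpha[clone_block(obs_seq[0])] = 1.0 / n_clones
    loglik = 0.0
    for o in obs_seq[1:]:
        msg = alpha @ T                  # propagate through transitions
        msg_block = msg[clone_block(o)]  # mask to clones of current symbol
        z = msg_block.sum()
        loglik += np.log(z)
        alpha = np.zeros(n_states)
        alpha[clone_block(o)] = msg_block / z
    return loglik

ll = forward([0, 1, 2, 3, 0, 1])
assert np.isfinite(ll) and ll < 0
```

EM would alternate this kind of forward-backward pass with re-estimating T; after training, thresholding T gives the sparse directed graph mentioned above.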
Dileep George Dec 7
Replying to @dileeplearning
When the model is exposed to multiple mazes, it splits them apart properly, and the responses remap when the agent switches from one maze to the next. Rate remapping can be explained by uncertainty in the observations.
Dileep George Dec 7
Replying to @dileeplearning
When the world has latent hierarchy, our model can recover it. The recovered hierarchy is then used for efficient planning.
Dileep George Dec 7
Replying to @dileeplearning
There is more to be done. We think replay during learning and inference can be explained. The model also has strong connections to our earlier work on schema nets.
Dileep George Dec 7
Replying to @behrenstimb @neuro_kim @NealWMorton
I want to point out a few review papers that got us started and had great influence. The Viewpoints interview is excellent. What is a cognitive map -- we learned it from , et al. and . Buzsáki & Tingley: the importance of sequence learning.