@ArtirKel
An analogy I use when learning a new domain: learning from just one source may lead to overfitting, so it's good to see the material presented in different ways, linked to different things each time.
michael_nielsen
@michael_nielsen
Jan 16
Striking analogy between the mnemonic medium and supervised learning. Makes me think about analogies to things done in supervised learning: adding noise to the data (questions) to improve generalization; changing the loss function (overall scoring), etc. Fun to think about! twitter.com/mlpowered/stat…
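(Aside, not part of the thread: the "adding noise to the data" trick Nielsen alludes to can be sketched in a few lines. The `augment` helper below is a hypothetical illustration, not anything from the thread — it jitters each example so a learner sees the same underlying item "presented in different ways", the regularization move being analogized to varied review questions.)

```python
# Illustrative sketch of noise augmentation as a regularizer.
# `augment` is a hypothetical helper, not an API from the thread.
import random

def augment(examples, noise=0.1, copies=3, seed=0):
    """Return the original examples plus `copies` jittered versions of each.

    Every feature of every copy is perturbed with uniform noise in
    [-noise, +noise], so the underlying item recurs in varied forms.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    out = list(examples)
    for _ in range(copies):
        for x in examples:
            out.append([v + rng.uniform(-noise, noise) for v in x])
    return out

data = [[1.0, 2.0], [3.0, 4.0]]
augmented = augment(data)
# 2 originals + 3 noisy copies of each original = 8 examples
```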
𝔊𝔴𝔢𝔯𝔫
@gwern
Jan 16
(I mean, that's the main theory of how the testing effect works, you know: laying down multiple memory traces with each fetch-and-encode review increases the probability of later successful recall.)
José Luis Ricón (Artir)
@ArtirKel
Jan 16
But the testing effect works even with just one set of materials, right?
José Luis Ricón (Artir)
@ArtirKel
Jan 16
It also helps to get a sense of how robust the literature is and how much debate there is.