Stanford NLP Group
People’s papers #2—ELECTRA: @clark_kev and colleagues (incl. at @GoogleAI) show how to build a much more compute/energy-efficient discriminative pre-trainer for text encoding than BERT etc., using replaced token detection instead.
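The tweet names replaced token detection: instead of predicting masked-out tokens (as in BERT), some input tokens are swapped for plausible alternatives and a discriminator learns to label every position as original or replaced. A minimal sketch of the corruption step, assuming a toy token list and using random sampling as a stand-in for ELECTRA's small generator network (function and variable names here are illustrative, not from the released code):

```python
import random

def corrupt(tokens, vocab, replace_prob=0.15, rng=None):
    """Replace ~replace_prob of tokens with sampled ones.
    ELECTRA uses a small generator network to propose replacements;
    uniform random sampling is a simplified stand-in here."""
    rng = rng or random.Random(0)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < replace_prob:
            repl = rng.choice(vocab)
            corrupted.append(repl)
            # If the sampled token happens to equal the original,
            # it counts as "original" (label 0), as in the paper.
            labels.append(0 if repl == tok else 1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```

The discriminator is then trained with a per-position binary classification loss over `labels`, so the learning signal covers all input tokens rather than only the ~15% that BERT masks, which is the source of the efficiency gain the tweet describes.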
Adrian Brunetto · Jan 7
Replying to @stanfordnlp @clark_kev @GoogleAI
When are they going to release the code?
Kevin Clark · Jan 10
Replying to @Mlbot4 @stanfordnlp @GoogleAI
Code will be released early February!
Debanjan Mahata · Jan 7
Replying to @stanfordnlp @clark_kev @GoogleAI
Is it possible to get the code implementation?
Debanjan Mahata · Jan 7
Replying to @stanfordnlp @clark_kev @GoogleAI
The paper only gives an anonymized link.