Twitter | Search
lerner zhang
116 Tweets
465 Following
11 Followers

Tweets
lerner zhang · 14h
Replying to @quocleix @xpearhead @lmthang
I wonder if the average perplexity is calculated in this way?
lerner zhang Retweeted
Dirk Hovy · Jan 30
I have now seen several classification tasks where a logistic regression model with bag-of-character n-grams tied or even beat neural (MTL, LSTM, CNN+attention) models. Any similar observations?
lerner zhang Retweeted
𝗨𝗻𝗙𝗼𝘂𝗻𝗱.𝗮𝗶 · Jan 31
Replying to @dirk_hovy
Not chargram, but see this leaderboard
lerner zhang · Feb 2
The End of an Era | Q&A 2019 Finale
lerner zhang · Jan 29
Replying to @diskandartweet @quocleix and 2 others
Actually, I don't know of anyone who has done the weighting so far.
lerner zhang · Jan 29
Replying to @tkasasagi
I wonder if it would be of any use if you are not considering a job switch...
lerner zhang · Jan 29
Replying to @diskandartweet @quocleix and 2 others
just my two cents
lerner zhang · Jan 29
Replying to @diskandartweet @quocleix and 2 others
Understood. I thought we could get a perplexity for each generated word, but not all words are equal. Maybe the perplexities could be weighted by the tf-idf score or the like.
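The tf-idf weighting idea above could be sketched roughly as follows. This is a minimal illustration, not any published method: the corpus document frequencies and per-token log-probabilities are invented, and the idf formula uses simple add-one smoothing.

```python
import math
from collections import Counter

def tfidf_weights(tokens, doc_freq, n_docs):
    """Tf-idf weight for each token of one generated sentence."""
    tf = Counter(tokens)
    return [tf[t] * math.log(n_docs / (1 + doc_freq.get(t, 0))) for t in tokens]

def weighted_perplexity(token_logprobs, weights):
    """Exponentiated weighted average of per-token negative log-likelihoods."""
    total_w = sum(weights)
    avg_nll = -sum(w * lp for w, lp in zip(weights, token_logprobs)) / total_w
    return math.exp(avg_nll)

# Toy example: three tokens with hypothetical model log-probabilities.
tokens = ["the", "chatbot", "replied"]
doc_freq = {"the": 90, "chatbot": 5, "replied": 20}  # invented corpus stats
logprobs = [-0.1, -2.3, -1.0]                        # invented model outputs
w = tfidf_weights(tokens, doc_freq, n_docs=100)
print(round(weighted_perplexity(logprobs, w), 3))
```

Because rare, contentful tokens like "chatbot" get larger weights, a model that is confused on them is penalized more than one that is merely uncertain about frequent function words like "the".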
lerner zhang · Jan 29
Replying to @miku85335434 @Sylvi2711
Exactly
lerner zhang · Jan 29
Replying to @lmthang @quocleix @xpearhead
Thanks
lerner zhang · Jan 29
Replying to @quocleix @xpearhead @lmthang
By perplexity do you mean average perplexity? I wonder if a weighted average perplexity would be better?
lerner zhang Retweeted
Quoc Le · Jan 28
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything Paper: Blog:
lerner zhang · Jan 26
Replying to @laike9m @Manjusaka_Lee
I only use quora and stackexchange; Zhihu is full of discrimination and bias.
lerner zhang · Jan 24
How do you determine how much data to use/annotate (like outsourcing)?
lerner zhang · Jul 14
Replying to @laike9m
Can I treat you to a meal or a coffee?
lerner zhang Retweeted
Physics-astronomy.org · Jun 16
One word for this video...
lerner zhang · Jun 17
The best explanation of CRF I have ever seen!
lerner zhang · Jun 15
Replying to @ID_R_McGregor
Is this true?
lerner zhang · Jun 15
Replying to @laike9m
That's crazy.
lerner zhang · Jun 3
I wish I could work at such companies in the future.