Quoc Le
@quocleix
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways:
1. "Perplexity is all a chatbot needs" ;)
2. We're getting closer to a high-quality chatbot that can chat about anything
Paper: arxiv.org/abs/2001.09977
Blog: ai.googleblog.com/2020/01/toward… pic.twitter.com/5SOBa58qx3
Quoc Le
@quocleix
Jan 28
You can find some sample conversations with the bot here:
github.com/google-researc…
Quoc Le
@quocleix
Jan 29
My favorite conversation is below. The Hayvard pun was funny but I totally missed the steer joke at the end until it was pointed out today by @Blonkhart pic.twitter.com/AmTobwf9A0
Quoc Le
@quocleix
Jan 30
I had another conversation with Meena just now. It's not as funny, and I don't understand the first answer, but the replies to the next two questions are quite funny. pic.twitter.com/lpOZpsvDck
lerner zhang
@lerner_adams
Jan 29
By perplexity do you mean average perplexity? I wonder if a weighted average perplexity would be better?
Thang Luong
@lmthang
Jan 29
Perplexity for a language model, by definition, is computed by first averaging all neg log predictions and then exponentiating. Does that help explain? towardsdatascience.com/perplexity-int…
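The computation described above can be sketched in a few lines of Python. This is a minimal illustration, not anyone's actual evaluation code; the per-token probabilities below are made-up numbers standing in for a model's predictions.

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence: average the negative log-probabilities
    the model assigned to each token, then exponentiate that average."""
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical probabilities a model assigned to four tokens in a sentence:
probs = [0.2, 0.5, 0.1, 0.4]
print(perplexity(probs))
```

Note that the averaging happens in log space before exponentiating, which is why "average perplexity" over tokens is already built into the standard definition: it is the geometric mean of the inverse token probabilities.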
Alex Teichman
@alex_teichman
Jan 28
For deep NLP it has been
A hell of a ride wherein
To minimize perplexity
While lacking convexity
It’s neural architecture search for the win.
Kiran Mudiam
@KiranMudiam
Jan 28
Great work! Can’t wait to chat with Meena soon, perhaps via Google Home one day!
Bots.MY
@botsdotmy
Jan 29
chatbot = future of marketing, no doubt about that
bots.my/giveaway
|
Titus von der Malsburg
@tmalsburg
Jan 29
Chatbots = future of winning elections, I'm afraid
|
Michael Retchin
@MichaelRetchin
Jan 28
Really impressive work. Congrats!