Daniel Adiwardana
@xpearhead
Jan 28
When I was about 9 years old my father taught me how to program, and, to my delight, we built a chatbot. Initially, I couldn't stop working on it, but no matter how many rules I wrote and how much knowledge I tried to add to its database, it still wasn't what I expected. (2/4)
Daniel Adiwardana
@xpearhead
Jan 28
Enabling people to converse with chatbots about anything has been a lifelong passion of mine, and I'm sure of many others as well. So I'm very thankful to finally be able to share our results with you all. Hopefully this will help inform efforts in the area. (1/4) twitter.com/lmthang/status…
Daniel Adiwardana
@xpearhead
Jan 28
One day, I came across the paper A Neural Conversational Model (arxiv.org/abs/1506.05869) by @OriolVinyalsML and @quocleix. The paper showed sample conversations with an end-to-end learned neural network. (3/4)
Daniel Adiwardana
@xpearhead
Jan 28
"It was trained on movie subtitles?!" I told myself and others in awe. Maybe the potential for generalization was really there. I was truly blessed to be able to later work with @quocleix and many others on giving continuity to this idea, and turning it into #MeenaResearch. (4/4)
Daniel Adiwardana
@xpearhead
Jan 28
Bonus: Meena often seems to put together ideas in ways for which we can't find matches in the data. For example, it says "Horses go to Hayvard" in a conversation we show in the blog post ai.googleblog.com/2020/01/toward….