Twitter
Daniel Adiwardana
Researching conversational AI on the Google Brain team.
399 Tweets
220 Following
480 Followers
Tweets
Daniel Adiwardana retweeted
Connor Shorten · Jan 29
This video explains 's amazing new Meena chatbot! An Evolved Transformer with 2.6B parameters trained on 341 GB / 40B words of conversation data to achieve remarkable chatbot performance! "Horses go to Hayvard!"
Daniel Adiwardana · Jan 29
Replying to @thtrieu_
Thanks Trieu!
Daniel Adiwardana retweeted
Trieu H. Trinh · Jan 29
Had the chance to sit next to Daniel in the early days of the project and tried out the interactive Meena. It has always been *this* surprising and funny :) BIG congrats to the team on this publication. The possibilities to build up from here are endless.
Daniel Adiwardana retweeted
Yifeng Lu · Jan 28
Meena: new SOTA chatbot from us. One big step towards human-like conversational AI. Looking forward to many applications related to that, e.g. 24/7 AI-based foreign-language tutoring.
Daniel Adiwardana retweeted
Quoc Le · Jan 28
New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything. Paper: Blog:
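The "perplexity is all a chatbot needs" line refers to the paper's finding that a model's perplexity on held-out conversation data correlates strongly with human-judged conversation quality. For readers unfamiliar with the metric itself, here is a minimal sketch (not the Meena evaluation code) of how perplexity is computed from per-token log-probabilities:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean per-token log-likelihood).

    Intuitively, it is the effective number of equally likely
    choices the model is "deciding between" at each token:
    lower perplexity means a more confident, better-fit model.
    """
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# A model that assigns probability 0.25 to each of four tokens
# is effectively choosing among 4 options, so perplexity ≈ 4.
lps = [math.log(0.25)] * 4
print(perplexity(lps))
```

The function names and example values here are illustrative only; the paper computes perplexity over subword tokens of a large held-out conversation set.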
Daniel Adiwardana · Jan 29
Replying to @JeffDean @GoogleAI @lmthang
No worries!
Daniel Adiwardana · Jan 28
Replying to @Brahmonaut @JeffDean and 2 others
I just happened to dream about that name.
Daniel Adiwardana · Jan 28
Replying to @whiskeyandwry @quocleix @lmthang
It's everything minus a few potentially sensitive conversations. You can see more details in Appendix A of
Daniel Adiwardana retweeted
Jeff Dean · Jan 28
Open-domain conversation is an extremely difficult task for ML systems. Meena is a research effort at in this area. It's challenging, but we are making progress towards more fluent and sensible conversations. Nice work, Daniel, & everyone involved!
Daniel Adiwardana · Jan 28
Replying to @JeffDean @GoogleAI @lmthang
Thanks Jeff!! PS: I'm Daniel
Daniel Adiwardana · Jan 28
Replying to @xpearhead
Bonus: Meena often seems to put together ideas in ways for which we can't find matches in the data. For example, saying "Horses go to Hayvard" in a conversation we show in the blog post.
Daniel Adiwardana · Jan 28
Replying to @quocleix
"It was trained on movie subtitles?!" I told myself and others in awe. Maybe the potential for generalization was really there. I was truly blessed to be able to later work with and many others on giving continuity to this idea, and turning it into . (4/4)
Daniel Adiwardana · Jan 28
Replying to @OriolVinyalsML @quocleix
One day, I came across the paper A Neural Conversational Model () by and . The paper showed sample conversations with an end-to-end learned neural network. (3/4)
Daniel Adiwardana · Jan 28
Replying to @xpearhead
When I was about 9 years old my father taught me how to program, and, to my delight, we built a chatbot. Initially, I couldn't stop working on it, but no matter how many rules I wrote and how much knowledge I tried to add to its database, it still wasn't what I expected. (2/4)
Daniel Adiwardana · Jan 28
Enabling people to converse with chatbots about anything has been a lifelong passion of mine, and, I'm sure, of others as well. So I'm very thankful to be able to finally share our results with you all. Hopefully, this will help inform efforts in the area. (1/4)
Daniel Adiwardana retweeted
Tom Brown · Jul 30
(1/4) Learning ML engineering is a long slog even for legendary hackers like . IMO, the two hardest parts of ML eng are: 1) Feedback loops are measured in minutes or days in ML (compared to seconds in normal eng) 2) Errors are often silent in ML
Daniel Adiwardana retweeted
Quoc Le · Jun 20
XLNet: a new pretraining method for NLP that significantly improves upon BERT on 20 tasks (e.g., SQuAD, GLUE, RACE). arXiv: github (code + pretrained models): with Zhilin Yang, , Yiming Yang, Jaime Carbonell,
Daniel Adiwardana retweeted
Andrej Karpathy · Jun 21
An interesting trend from this year's CVPR is the numerous new papers on self-supervised learning. Andrew Zisserman gave a nice tutorial: although there is a lot more geometry-related work as well (e.g. self-supervised depth & friends).
Daniel Adiwardana retweeted
Christine McLeavey Payne · May 22
Honored to talk w . His courses were my intro to the field & I wouldn't be here w/o his clear & inspiring teaching! I think of these, the courses, and the scholars/fellows, as the 3 essential steps I've taken in this wild journey
Daniel Adiwardana retweeted
Google AI · May 15
Translatotron is our experimental model for direct end-to-end speech-to-speech translation, which demonstrates the potential for improved translation efficiency, fewer errors, and better handling of proper nouns. Learn all about it below!