Twitter | Search
Jack Hessel 7h
I'm sure it's been tried, but is there a reference for reducing machine translation to language modeling, i.e., if you model sequences like: "<s> how are you <SEP> cómo estás </s>"?
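The framing in the tweet above can be sketched in a few lines: concatenate source and target into one token stream and train an ordinary next-token language model on it. The markers (`<s>`, `<SEP>`, `</s>`) follow the tweet; the count-based bigram "LM" below is only a toy stand-in for a real autoregressive model, chosen so the sketch runs without dependencies.

```python
from collections import Counter, defaultdict

def to_sequence(src, tgt):
    # One training example: source tokens, separator, target tokens, end marker.
    return ["<s>", *src.split(), "<SEP>", *tgt.split(), "</s>"]

class BigramLM:
    """Toy count-based bigram LM; any autoregressive LM could fill this role."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, pairs):
        for src, tgt in pairs:
            seq = to_sequence(src, tgt)
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def translate(self, src, max_len=20):
        # "Translation" is just continuing the sequence after <SEP>.
        # Caveat: a bigram conditions on one previous token only, so it barely
        # uses the source; a real LM with a long context window actually would.
        out, tok = [], "<SEP>"
        for _ in range(max_len):
            followers = self.counts[tok]
            if not followers:
                break
            # max returns the first maximal item, so ties break deterministically.
            tok = max(followers.items(), key=lambda kv: kv[1])[0]
            if tok == "</s>":
                break
            out.append(tok)
        return " ".join(out)

lm = BigramLM()
lm.train([("how are you", "cómo estás"), ("you are here", "estás aquí")])
print(lm.translate("how are you"))
```

With real models the same recipe works at scale: the "translation" is whatever the LM generates after the separator, and the only MT-specific machinery is the sequence format.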
Prasanth Kolachina 16h
One of these days, I hope reversibility becomes a trend again in systems .. it's super nice as an abstraction yet unintuitive at the same time for me ..
Jim Macwan Mar 19
5 Deep Learning-Based Text Analysis Tools NLP Enthusiasts Can Use To Parse Text
mohadiui Mar 19
Thinking paper bidding process should not be removed
William Wang Mar 22
When we say multilingual, does it mean...
Emily M. Bender 3h
Dear authors: "aim at V-ing" isn't natural sounding English (though it's quickly becoming a feature of academic international English). "aim to V" is much, much better.
mohadiui Mar 16
Does anyone know whether the reviewing process has started? As a reviewer, I haven't received any notification.
Emily M. Bender 1h
Attn , , and folks --- you might want to follow today.
Mace ‘pls cut to the nuance’ Ojala 🦠 Mar 23
How the hell is Android keyboard autocomplete so useless? They have all of my data, all of your data, and I prefer to write pretty boring English. This ongoing failure itself is an interesting topic. A PhD topic?
I. Volkov Mar 19
Formalizing natural language is a common task of natural sciences. It was successfully done by Newton and Maxwell in physics, Carl Linnaeus in biology, Mendeleev in chemistry ... Just single out the most used elements and establish relations between them.
Yejin Choi Mar 21
Statistical significance testing is overrated. Reviewer #3 needs to stop asking for it when the result may or may not mean what they think it means.
Emily M. Bender Mar 21
Just saw someone refer to BERT/ELMo as "modern" embedding techniques (as opposed to say word2vec). Take a deep breath, slow down. Make time for "slow research" in please.
jerry Mar 20
I understand the difficulties like for example ξ (ksi) or ψ (psi) or even τ (we say it like tough) because your mouth never learned to do these sounds. Same for me when I try to pronounce some Dutch utterances :-) that’s why is cool
Jim Macwan 4h
Experience building chatbots with Rasa — Tuning the NLU pipeline
Liling Tan Mar 18
Replying to @alvations
When the dataset is small, companies hiring interns expect the accuracy to be 100% extraction from whichever automatically read PDF files they are given. Unfortunately, whatever neural nets in haven't been able to provide guarantees yet, just promises >>
cs.CL Papers 11h
Question Answering by Reasoning Across Documents with Graph Convolutional Networks. (arXiv:1808.09920v2 [] UPDATED)
Liling Tan 4h
Replying to @marian_nmt @ccb
And then came re-evaluating and meta-evaluation papers =) A must-read for every or people today. Esp. if you think BLEU is more than a game, like most SOTA-chasing =)
Manaal Faruqui Mar 18
Pro tip: If you have to say multiple times during your talk "I hope you can see this number", you really have way too many numbers and no, we cannot see them.
ictclas.nlpir Mar 19
NLPIR Big Data Semantic Intelligent Analysis Platform mainly includes more than ten functional modules, such as precise collection, document transformation, new word discovery, batch word segmentation, language statistics, text clustering,
ICLR&D 16h
We'll be launching on Wednesday 27 March with four open projects that span , , , and