Dominique Luna
just waiting to be projected into 2d space ⏲️
272 Tweets · 49 Following · 14 Followers

Tweets
Dominique Luna Aug 5
Replying to @balajis
Not to say you shouldn't be allowed to profit off junk food, but incentives should be realigned so that you spend x% on R&D for better food after a certain point, or something like that.
Dominique Luna Aug 5
Replying to @balajis
Applies to other domains as well. An obvious one is junk food, where you're profiting off destroying society. Obesity and other illnesses caused by poor diet not only plague the healthcare system but also impede individuals from maximizing their potential.
Dominique Luna Aug 5
Crazy to think how much better travel is going to get in this decade.
Dominique Luna retweeted
Zack Kanter Jul 31
Dominique Luna Jul 29
Replying to @sama
Have you thought about scaling the program? I think if you have mentorship for research and free or low-cost computing resources, you can equip more people with the skills to pursue AGI and emphasize the "open" part of OpenAI. From what I've seen, previous candidates had PhDs.
Dominique Luna Jul 24
I'm skeptical of colonization; why would they bother? Maybe this is the interstellar equivalent of us encountering a moose on the side of the road while driving.
Dominique Luna Jul 24
tfw ancient aliens is shown to be actual history
Dominique Luna retweeted
𝔊𝔴𝔢𝔯𝔫 May 31
GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves 𝘴𝘵𝘪𝘭𝘭 are not bending! 😮
Dominique Luna retweeted
Mason 🏃‍♂️✂️ Oct 17
I used to feel ashamed of being a "slow" reader — now I'm grateful for how naturally my mind interrogates or befriends an interesting text. It's a shame most of us are taught to read under the perverse assumption that the purpose is to "consume" "material."
Dominique Luna Jul 20
"conditioning" appears to be the term being used. the model figures out what to output based on the input example, output is appended to the input if there were actual gradient updates I don't think the API would be feasible given the enormity of the model
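A minimal sketch of what that conditioning setup looks like in code, using GPT-2 from Hugging Face transformers as a stand-in for GPT-3 (the model choice, prompt, and all names here are illustrative assumptions, not anything from the thread):

```python
# In-context "conditioning": the few-shot examples live in the prompt,
# not in the weights. No gradient updates happen at inference time.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The "training set" is just text prepended to the query.
prompt = (
    "English: cheese -> French: fromage\n"
    "English: house -> French: maison\n"
    "English: water -> French:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=5, pad_token_id=tokenizer.eos_token_id
)

# The completion is simply appended to the input context; the weights
# never change between prompts.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:]))
```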
Dominique Luna Jul 20
At least in the context of meta-learning, few-shot learning also requires gradient updates. Once the model is trained, you sample a task (get data) and update the parameters n times. From other posts it looks like that doesn't take place here.
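For contrast, a minimal sketch of the gradient-based few-shot adaptation described above, written as a MAML-style inner loop in PyTorch (the toy model, learning rate, and step count are illustrative assumptions):

```python
# Gradient-based few-shot adaptation: after meta-training, every new task
# triggers n explicit parameter updates on its support examples.
import torch
import torch.nn as nn

meta_model = nn.Linear(4, 1)  # stand-in for a meta-trained model
inner_lr, n_inner_steps = 0.01, 5

def adapt(model, x_support, y_support):
    """Clone the model and take n gradient steps on one task's support set."""
    adapted = nn.Linear(4, 1)
    adapted.load_state_dict(model.state_dict())
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(n_inner_steps):
        loss = nn.functional.mse_loss(adapted(x_support), y_support)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted  # task-specific weights: the step GPT-3 reportedly skips

# "Sample a task": a handful of labeled examples (the shots).
x, y = torch.randn(3, 4), torch.randn(3, 1)
task_model = adapt(meta_model, x, y)
```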
Dominique Luna Jul 17
Replying to @gdb @ilyasut
Are there parallels between GPT-3 and meta-learning? It seems (from the examples I've seen) that it's learned to learn to generate sequences based on a context.
Dominique Luna Jul 14
Replying to @rivatez
Dominique Luna Jul 13
Replying to @sharifshameem
Are the 2 samples the fine-tuning portion?
Dominique Luna retweeted
Tim Dillon May 23
None of us know what it’s like to be this virus. Now we do.
Dominique Luna retweeted
Rex Chapman🏇🏼 May 6
People are adding butter to their coffee and here’s why...
Dominique Luna May 1
Replying to @elonmusk
Cya in Canggu next year bro
Dominique Luna retweeted
SportsNation Apr 21
Brady to Gronk right now ...
Dominique Luna retweeted
Rob Henderson Mar 31
Dominique Luna Mar 28
Replying to @washingtonpost
Kids have the potential to learn significantly more this way; it's only problematic if they've been thoroughly conditioned to follow the leader.