MIRI
The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence has a positive impact.
993 Tweets · 88 Following · 32,647 Followers
MIRI Dec 31
It's the final day of our 2019 fundraiser. Help MIRI grow!
MIRI Dec 30
Replying to @gyrodiot
Thank you, Jérémy! Historically, we have received a significant portion of our donations at the end of fundraisers, though it's not clear the pattern will continue this year. If we don't reach our target, it may affect the scale of our growth, yes.
MIRI Dec 30
Our fundraiser ends tomorrow! We need your help to continue growing in 2020.
MIRI retweeted
Open Philanthropy Dec 18
In this blog post, Open Philanthropy staff give their annual suggestions for individual donors:
MIRI Dec 22
A review of AI alignment research from the past year:
MIRI Dec 22
New this month: our annual fundraiser is live, with details on new hires and other recent developments at MIRI.
MIRI retweeted
OpenAI Dec 13
We're releasing "Dota 2 with Large Scale Deep Reinforcement Learning", a scientific paper analyzing our findings from our 3-year Dota project. One highlight: we trained a new agent, Rerun, which has a 98% win rate vs. the version that beat .
MIRI Dec 5
Our 2019 fundraiser is live! Over the last two years, our research team has ~doubled in size. Our goal is to raise $1M by Dec. 31, letting us go into 2020 with 1.25-1.5 years' worth of reserves. Learn more about what we've been up to, and how you can help:
MIRI Dec 2
Tomorrow is Giving Tuesday! Help us secure up to $100,000 in matching funds from Facebook by donating to our fundraiser page at exactly 5:00:00 PT. More tips here:
MIRI Nov 28
This Tuesday, Dec. 3, is Giving Tuesday! Last year, our lightning-fast supporters helped us get $40,000 in matching from Facebook/PayPal! Help us match up to $100,000 this year with the strategy tips here:
MIRI retweeted
Rohin Shah Nov 27
[Alignment Newsletter #75]: Solving Atari and Go with learned game models, and thoughts from a MIRI employee -
MIRI Nov 25
MIRI retweeted
Center for Security and Emerging Technology Oct 31
2. "AI Research Needs Responsible Publication Norms" — in :
MIRI Nov 24
"Suppose you are designing a new invention, a predict-o-matic. It is a wondrous machine which will predict everything for us..."
MIRI Nov 13
One-month-late response: apply to attend a free AI Risk for Computer Scientists workshop (). You'll be able to meet people there who may be able to help you find a way to contribute to research.
MIRI retweeted
Centre for the Study of Existential Risk Oct 31
AlphaStar update from in (of course): now Grandmaster level in StarCraft II
MIRI retweeted
Rohin Shah Oct 18
Alignment Newsletter #69: Stuart Russell's new book on why we need to replace the standard model of AI -
MIRI retweeted
OpenAI Sep 19
We've fine-tuned GPT-2 using human feedback for tasks such as summarizing articles, matching the preferences of human labelers (if not always our own). We're hoping this brings safety methods closer to machines learning values by talking with humans.
MIRI retweeted
OpenAI Sep 17
We've observed AIs discovering complex tool use while competing in a simple game of hide-and-seek. They develop a series of six distinct strategies and counterstrategies, ultimately using tools in the environment to break our simulated physics:
MIRI retweeted
Future of Life Sep 17
In this newest episode of the AI Alignment Podcast, Stuart Armstrong discusses his recently published research agenda on how to identify and synthesize human values into a utility function.