MIRI
@MIRIBerkeley
Berkeley, CA

The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence has a positive impact.

993 Tweets · 88 Following · 32,647 Followers
Tweets
MIRI @MIRIBerkeley · Dec 31
It's the final day of our 2019 fundraiser. Help MIRI grow!
intelligence.org/2019/12/02/mir…
MIRI @MIRIBerkeley · Dec 30
Thank you, Jérémy! Historically, we have received a significant portion of our donations at the end of fundraisers, though it's not clear that pattern will continue this year. If we don't reach our target, it may affect the scale of our growth, yes.
MIRI @MIRIBerkeley · Dec 30
Our fundraiser ends tomorrow! We need your help to continue growing in 2020.
intelligence.org/2019/12/02/mir…
MIRI retweeted
Open Philanthropy @open_phil · Dec 18
In this blog post, Open Philanthropy staff give their annual suggestions for individual donors: openphilanthropy.org/blog/suggestio…
MIRI @MIRIBerkeley · Dec 22
A review of AI alignment research from the past year: alignmentforum.org/posts/SmDziGM9…
MIRI @MIRIBerkeley · Dec 22
New this month: our annual fundraiser is live, with details on new hires and other recent developments at MIRI. intelligence.org/2019/12/05/dec…
MIRI retweeted
OpenAI @OpenAI · Dec 13
We're releasing "Dota 2 with Large Scale Deep Reinforcement Learning", a scientific paper analyzing our findings from our 3-year Dota project: openai.com/projects/five/
One highlight: we trained a new agent, Rerun, which has a 98% win rate vs. the version that beat @OGEsports. pic.twitter.com/1kWvXwBHHp
MIRI @MIRIBerkeley · Dec 5
Our 2019 fundraiser is live!
Over the last two years, our research team has roughly doubled in size. Our goal is to raise $1M by Dec. 31, letting us go into 2020 with 1.25–1.5 years' worth of reserves. Learn more about what we've been up to, and how you can help: intelligence.org/2019/12/02/mir…
MIRI @MIRIBerkeley · Dec 2
Tomorrow is Giving Tuesday! Help us secure up to $100,000 in matching funds from Facebook by donating to our fundraiser page at exactly 5:00:00 PT. More tips here: intelligence.org/2019/11/28/giv…
MIRI @MIRIBerkeley · Nov 28
This Tuesday, Dec. 3, is Giving Tuesday! Last year, our lightning-fast supporters helped us get $40,000 in matching from Facebook/PayPal. Help us match up to $100,000 this year with the strategy tips here: intelligence.org/2019/11/28/giv…
MIRI retweeted
Rohin Shah @rohinmshah · Nov 27
Alignment Newsletter #75: Solving Atari and Go with learned game models, and thoughts from a MIRI employee - mailchi.mp/3e34fa1f299a/a…
MIRI @MIRIBerkeley · Nov 25
[tweet text not captured]
MIRI retweeted
Center for Security and Emerging Technology @CSETGeorgetown · Oct 31
[tweet text not captured]
MIRI @MIRIBerkeley · Nov 24
"Suppose you are designing a new invention, a predict-o-matic. It is a wondrous machine which will predict everything for us..." lesswrong.com/posts/SwcyMEgL…
MIRI @MIRIBerkeley · Nov 13
One-month-late response: apply to attend a free AI Risk for Computer Scientists workshop (intelligence.org/ai-risk-for-co…). You'll be able to meet people there who may be able to help you find a role contributing to research.
MIRI retweeted
Centre for the Study of Existential Risk @CSERCambridge · Oct 31
AlphaStar update from @DeepMindAI in (of course) @nature - now Grandmaster level in StarCraft II
nature.com/articles/s4158…
MIRI retweeted
Rohin Shah @rohinmshah · Oct 18
Alignment Newsletter #69: Stuart Russell's new book on why we need to replace the standard model of AI - mailchi.mp/59ddebcb3b9a/a…
MIRI retweeted
OpenAI @OpenAI · Sep 19
We've fine-tuned GPT-2 using human feedback for tasks such as summarizing articles, matching the preferences of human labelers (if not always our own). We're hoping this brings safety methods closer to machines learning values by talking with humans. openai.com/blog/fine-tuni…
MIRI retweeted
OpenAI @OpenAI · Sep 17
We've observed AIs discovering complex tool use while competing in a simple game of hide-and-seek. They develop a series of six distinct strategies and counterstrategies, ultimately using tools in the environment to break our simulated physics: openai.com/blog/emergent-…
MIRI retweeted
Future of Life @FLIxrisk · Sep 17
In the newest episode of the AI Alignment Podcast, Stuart Armstrong discusses his recently published research agenda on how to identify and synthesize human values into a utility function. futureoflife.org/2019/09/17/syn…