April 2024 Newsletter
Machine Intelligence Research Institute Blog
by Harlan Stewart
6d ago
The MIRI Newsletter is back in action after a hiatus since July 2022. To recap some of the biggest MIRI developments since then: MIRI released its 2024 Mission and Strategy Update, announcing a major shift in focus: While we’re continuing to support various technical research programs at MIRI, our new top priority is broad public communication and policy change. In short, we’ve become increasingly pessimistic that humanity will be able to solve the alignment problem in time, while we’ve become more hopeful (relatively speaking) about the prospect of intergovernmental agreements to hit the brakes…
MIRI 2024 Mission and Strategy Update
Machine Intelligence Research Institute Blog
by Malo Bourgon
3M ago
As we announced back in October, I have taken on the senior leadership role at MIRI as its CEO. It’s a big pair of shoes to fill, and an awesome responsibility that I’m honored to take on. There have been several changes at MIRI since our 2020 strategic update, so let’s get into it. The short version: We think it’s very unlikely that the AI alignment field will be able to make progress quickly enough to prevent human extinction and the loss of the future’s potential value, which we expect will result from loss of control to smarter-than-human AI systems. However, developments this past year l…
Written statement of MIRI CEO Malo Bourgon to the AI Insight Forum
Machine Intelligence Research Institute Blog
by Malo Bourgon
4M ago
Today, December 6th, 2023, I participated in the U.S. Senate’s eighth bipartisan AI Insight Forum, which focused on the topic of “Risk, Alignment, & Guarding Against Doomsday Scenarios.” I’d like to thank Leader Schumer and Senators Rounds, Heinrich, and Young for the invitation to participate in the Forum. One of the central points I made in the Forum discussion was that upcoming general AI systems are different. We can’t just use the same playbook we’ve used for the last fifty years. Participants were asked to submit written statements of up to 5 pages prior to the event. In my statement…
Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense
Machine Intelligence Research Institute Blog
by Nate Soares
5M ago
Status: Vague, sorry. The point seems almost tautological to me, and yet also seems like the correct answer to the people going around saying “LLMs turned out to be not very want-y, when are the people who expected ‘agents’ going to update?”, so, here we are. Okay, so you know how AI today isn’t great at certain… let’s say “long-horizon” tasks? Like novel large-scale engineering projects, or writing a long book series with lots of foreshadowing? (Modulo the fact that it can play chess pretty well, which is longer-horizon than some things; this distinction is quantitative rather than qualitative…
Thoughts on the AI Safety Summit company policy requests and responses
Machine Intelligence Research Institute Blog
by Nate Soares
6M ago
Over the next two days, the UK government is hosting an AI Safety Summit focused on “the safe and responsible development of frontier AI”. They requested that seven companies (Amazon, Anthropic, DeepMind, Inflection, Meta, Microsoft, and OpenAI) “outline their AI Safety Policies across nine areas of AI Safety”. Below, I’ll give my thoughts on the nine areas the UK government described; I’ll note key priorities that I don’t think are addressed by company-side policy at all; and I’ll say a few words (with input from Matthew Gray, whose discussions here I’ve found valuable) about the individual c…
AI as a science, and three obstacles to alignment strategies
Machine Intelligence Research Institute Blog
by Nate Soares
6M ago
AI used to be a science. In the old days (back when AI didn’t work very well), people were attempting to develop a working theory of cognition. Those scientists didn’t succeed, and those days are behind us. For most people working in AI today, gone is the ambition to understand minds. People working on mechanistic interpretability (and others attempting to build an empirical understanding of modern AIs) are laying an important foundation stone that could play a role in a future science of artificial minds, but on the whole, modern AI engineering…
Announcing MIRI’s new CEO and leadership team
Machine Intelligence Research Institute Blog
by Gretta Duleba
6M ago
In 2023, MIRI shifted focus in the direction of broad public communication—see, for example, our recent TED talk, our piece in TIME magazine “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down”, and our appearances on various podcasts. While we’re continuing to support various technical research programs at MIRI, this is no longer our top priority, at least for the foreseeable future. Coinciding with this shift in focus, there have also been many organizational changes at MIRI over the last several months, and we are somewhat overdue to announce them in public. The big chang…
The basic reasons I expect AGI ruin
Machine Intelligence Research Institute Blog
by Rob Bensinger
1y ago
I’ve been citing AGI Ruin: A List of Lethalities to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over “the basics”. Here are 10 things I’d focus on if I were giving “the basics” on why I’m so worried: 1. General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly). When I say “general intelligence”, I’m usually thinking about “whatever it is that let…
Misgeneralization as a misnomer
Machine Intelligence Research Institute Blog
by Nate Soares
1y ago
Here are two different ways an AI can turn out unfriendly: You somehow build an AI that cares about “making people happy”. In training, it tells people jokes and buys people flowers and offers people an ear when they need one. In deployment (and once it’s more capable), it forcibly puts each human in a separate individual heavily-defended cell, and pumps them full of opiates. You build an AI that’s good at making people happy. In training, it tells people jokes and buys people flowers and offers people an ear when they need one. In deployment (and once it’s more capable), it turns out that what…
Pausing AI Developments Isn’t Enough. We Need to Shut It All Down
Machine Intelligence Research Institute Blog
by Eliezer Yudkowsky
1y ago
(Published in TIME on March 29.) An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin. I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it. The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after…