Annie Jacobsen on Nuclear War - A Second-by-Second Timeline
by Future of Life Institute
1w ago
Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

Timestamps:
00:00 A scenario of nuclear war
06:56 Who would launch an attack?
13:50 Detecting nuclear attacks
19:37 The first critical seconds
29:42 Decisions under time pressure
34:27 Lessons from insiders
44:18 Submarines
51:06 How did we end up like this?
59:40 Interceptor missiles
1:11:25 Nuclear weapons and cyberattacks …
Katja Grace on the Largest Survey of AI Researchers
by Future of Life Institute
1M ago
Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

Timestamps:
0:20 AI Impacts surveys
18:11 What AI will look like in 20 years
22:43 Experts’ extinction risk predictions
29:35 Opinions on slowing down …
Sneha Revanur on the Social Effects of AI
by Future of Life Institute
2M ago
Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org

Timestamps:
00:00 Encode Justice
06:11 AI ethics and AI safety
15:49 Humans in the loop
23:59 AI in social media
30:42 Deteriorating social skills?
36:00 AIs identifying as AIs
43:36 AI influence in elections
50:32 AIs interacting with human systems
Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
by Future of Life Institute
2M ago
Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/

Timestamps:
00:00 Is AI like a Shoggoth?
09:50 Scaling laws
16:41 Are humans more general than AIs?
21:54 Are AI models explainable?
27:49 Using AI to explain AI
32:36 Evidence for AI being uncontrollable
40:29 AI verifiability
46:08 Will AI be aligned by default? …
Special: Flo Crivello on AI as a New Form of Life
by Future of Life Institute
3M ago
On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether attempts to regulate AI risk regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.

Timestamps:
00:00 Technological progress
07:59 Regulatory capture and AI
11:53 AI as a new form of life
15:44 Can AI development be paused?
20:12 Biden's executive order on AI
22:54 How would a GPU kill switch work?
27:00 Regulating models or applications?
32:13 AGI in 2-8 years
42:00 China and US collaboration on AI
Carl Robichaud on Preventing Nuclear War
by Future of Life Institute
3M ago
Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/

Timestamps:
00:00 A new nuclear arms race
08:07 How much do world leaders matter?
18:04 How much does ideology matter?
22:14 Do nuclear weapons cause stable peace?
31:29 North Korea
34:01 Have we overestimated nuclear risk?
43:24 Time pressure in nuclear decisions
52:00 Why so many nuclear warheads?
1:02:17 Has containment been successful? …
Frank Sauer on Autonomous Weapon Systems
by Future of Life Institute
4M ago
Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/

Timestamps:
00:00 Autonomy in weapon systems
12:19 Balance of offense and defense
20:05 Killer drone systems
28:53 Is autonomy like nuclear weapons?
37:20 Low-tech defenses against drones
48:29 Autonomy and power balance
1:00:24 Tricking autonomous systems
1:07:53 Unpredictability of autonomous systems …
Mark Brakel on the UK AI Summit and the Future of AI Policy
by Future of Life Institute
5M ago
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.

Timestamps:
00:00 AI Safety Summit in the UK
12:18 Are officials up to date on AI?
23:22 Objections to AI policy
31:27 The EU AI Act
43:37 The right level of regulation
57:11 Risks and regulatory tools
1:04:44 Open-source AI
1:14:56 Subsidising AI safety research
1:26:29 Global institutions for safe AI
1:34:34 Autonomy in weapon systems
Dan Hendrycks on Catastrophic AI Risks
by Future of Life Institute
5M ago
Dan Hendrycks joins the podcast again to discuss X.ai, how AI risk thinking has evolved, malicious use of AI, AI race dynamics between companies and between militaries, making AI organizations safer, and how representation engineering could help us understand AI traits like deception. You can learn more about Dan's work at https://www.safe.ai

Timestamps:
00:00 X.ai - Elon Musk's new AI venture
02:41 How AI risk thinking has evolved
12:58 AI bioengineering
19:16 AI agents
24:55 Preventing autocracy
34:11 AI race - corporations and militaries
48:04 Bulletproofing AI organizations
1:07:51 Open-source …
Samuel Hammond on AGI and Institutional Disruption
by Future of Life Institute
6M ago
Samuel Hammond joins the podcast to discuss how AGI will transform economies, governments, institutions, and other power structures. You can read Samuel's blog at https://www.secondbest.ca

Timestamps:
00:00 Is AGI close?
06:56 Compute versus data
09:59 Information theory
20:36 Universality of learning
24:53 Hard steps in evolution
30:30 Governments and advanced AI
40:33 How will AI transform the economy?
55:26 How will AI change transaction costs?
1:00:31 Isolated thinking about AI
1:09:43 AI and Leviathan
1:13:01 Informational resolution
1:18:36 Open-source AI
1:21:24 AI will decrease state …