Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic p ...
For Humanity, An AI Safety Podcast is the AI Safety Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as soon as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, a ...
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI." For better-formatted show notes, additional resources, and more, go to https://into-ai-safety.github.io For even more content and community engagement, head over to my Pat ...

Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast (1:31:02)
Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers a growing for-profit AI risk business landscape and Apart’s recent report on Dark Patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified som…
Plus, Measuring AI Honesty. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. In this newsletter, we discuss two recent papers: a policy paper on national security strategy, and a technical paper on measuring honesty in AI systems. Listen to the AI Safety …
Superintelligence is destabilizing since it threatens other states’ survival—it could be weaponized, or states may lose control of it. Attempts to build superintelligence may face threats by rival states—creating a deterrence regime called Mutual Assured AI Malfunction (MAIM). In this paper, Dan Hendrycks, Eric Schmidt, and Alexandr Wang detail a s…

AI Risk Rising | Episode #60 | For Humanity: An AI Risk Podcast (1:43:01)
Host John Sherman interviews Pause AI Global Founder Joep Meindertsma following the AI summits in Paris. The discussion begins with the dire moment we are in, the stakes, and the failure of our institutions to respond, before turning into a far-ranging discussion of AI risk reduction communications strategies. (FULL INTERVIEW STARTS AT) FOR HUMANITY MO…

AISN #48: Utility Engineering and EnigmaEval (8:56)
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. In this newsletter, we explore two recent papers from CAIS. We’d also like to highlight that CAIS is hiring for editorial and writin…

Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast (1:42:14)
Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his views on his former colleague Geoffrey Hinton's warnings about existential risk from advanced AI come up more than once. FOR HUMANITY MONTHLY…
Plus, State-Sponsored AI Cyberattacks. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Reasoning Models DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After its release, the DeepSeek app quickly rose to the top of Apple's most-downloaded chart and NVIDIA saw a 17% stock decline. In this st…

Protecting Our Kids From AI Risk | Episode #58 (1:42:46)
Host John Sherman interviews Tara Steele, Director of The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it. FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS: $1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT $10 MONTH htt…
Plus, Humanity's Last Exam, and the AI Safety, Ethics, and Society Course. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Transition The transition from the Biden to Trump administrations saw a flurry of executive activity on AI policy, with Biden signing several last-minute executive orders and Trump revoking Biden's…

2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57 (1:40:10)
What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest in AI advances and risks and the year to come. FOR HUMANITY MONTHLY DONA…

AISN #45: Center for AI Safety 2024 Year in Review (11:31)
As 2024 draws to a close, we want to thank you for your continued support for AI safety and review what we’ve been able to accomplish. In this special-edition newsletter, we highlight some of our most important projects from the year. The mission of the Center for AI Safety is to reduce societal-scale risks from AI. We focus on three pillars of wor…

AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56 (1:14:21)
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS: $1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9S... $10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y... $25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg... $100 MONTH https://buy.stripe.com/aEU007bVp7fAfc... In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI …

AI Risk Special | "Near Midnight in Suicide City" | Episode #55 (1:31:34)
In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI among other events. Big, huge, massive thanks to Beau Kershaw, Director of Photogr…

Connor Leahy Interview | Helping People Understand AI Risk | Episode #54 (2:24:58)
Nov 19, 2024 | For Humanity: An AI Safety Podcast. In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture. (FULL INTERVIEW STARTS AT 00:06:46) DONATION SUBSCRIPTION LINKS: $10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y... $25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg... $100 MONTH https://buy.stripe.com/aEU007bVp7fAfc... EMAIL …

Human Augmentation Incoming | The Coming Age Of Humachines | Episode #53 (1:42:01)
In Episode #53, John Sherman interviews Michael DB Harvey, author of The Age of Humachines. The discussion covers the coming spectre of humans putting digital implants inside themselves to try to compete with AI. DONATION SUBSCRIPTION LINKS: $10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y... $25 MONTH https://buy.stripe.com/3cs9AHf7B9nIgg... $100 MONTH h…

AISN #44: The Trump Circle on AI Safety (11:22)
Plus, Chinese researchers used Llama to create a military tool for the PLA, a Google AI system discovered a zero-day cybersecurity vulnerability, and Complex Systems. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Trump Circle on AI Safety The incoming Trump administration is likely to significantly alter the US gover…

AI Risk Update | One Year of For Humanity | Episode #52 (1:18:11)
In Episode #52, host John Sherman looks back on the first year of For Humanity. Select shows are featured, as well as a very special celebration of life at the end.

AISN #43: White House Issues First National Security Memo on AI (14:55)
Plus, AI and Job Displacement, and AI Takes Over the Nobels. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. White House Issues First National Security Memo on AI On October 24, 2024, the White House issued the first National Security Memorandum (NSM) on Artificial Intelligence, accompanied by a Framework to Advance AI Gov…

AI Risk Funding | Big Tech vs. Small Safety | Episode #51 (1:06:03)
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared. Learn More About Founders Pledge: https://www.founderspledge.com/ No ce…

AI Risk Funding | Big Tech vs. Small Safety | Episode #51 TRAILER (6:03)
In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared. Learn More About Founders Pledge: https://www.founderspledge.co…

Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 (1:18:58)
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards. THURSDAY NIGHTS: LIVE FOR HUMANITY COMMUNITY MEETINGS, 8:30PM EST. Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191 LEARN MORE: www.metaculus.com Please Donate Here T…

Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 TRAILER (5:03)
In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards. LEARN MORE AND JOIN STOP AI: www.stopai.info Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu... EMAIL JOHN: forhumanitypodcast@gmail.com This podca…

Episode #49: “Go To Jail To Stop AI” For Humanity: An AI Risk Podcast (1:17:08)
In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI. LEARN MORE AND JOIN STOP AI: www.stopai.info Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcas…

Go To Jail To Stop AI | Stopping AI | Episode #49 TRAILER (4:53)
In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI. LEARN MORE AND JOIN STOP AI: www.stopai.info Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu... EMAIL…

What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48 (1:09:29)
In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explo…
Plus, OpenAI's o1, and AI Governance Summary. Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Newsom Vetoes SB 1047 On Sunday, Governor Newsom vetoed California's Senate Bill 1047 …

AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer (7:40)
In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and Jo…

Episode #47: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast (1:19:39)
In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved (if it can be), and how would the…

Episode #47 Trailer: “Can AI Be Controlled?” For Humanity: An AI Risk Podcast (4:35)
In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit working on technical AI risk challenges. The discussion includes Buck’s thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved (if it can be), and how w…

Episode #46: “Is AI Humanity’s Worthy Successor?” For Humanity: An AI Risk Podcast (1:17:26)
In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s worthy s…

Episode #46 Trailer: “Is AI Humanity’s Worthy Successor?” For Humanity: An AI Risk Podcast (5:53)
In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity’s …

Episode #45: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast (1:24:24)
In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. (FULL INTERVIEW STARTS AT 00:05:28) Mike’s book: Tech Generation: Rais…

AISN #41: The Next Generation of Compute Scale (11:59)
Plus, Ranking Models by Susceptibility to Jailbreaking, and Machine Ethics. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Next Generation of Compute Scale AI development is on the cusp of a dramatic expansion in compute scale. Recent developments across multiple fronts—from chip manufacturing to power infrastructure—…

Episode #45 TRAILER: “AI Risk And Child Psychology” For Humanity: An AI Risk Podcast (6:42)
In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. Mike’s book: Tech Generation: Raising Balanced Kids in a Hyp…

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast (1:31:05)
In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom, Liron has a n…

Episode #43: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast (1:16:06)
In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future. LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing Please Donate Here To Help Promote For Humanity: https://www.pay…

Episode #44 Trailer: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast (7:58)
In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom, Liron…

Episode #43 TRAILER: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast (7:34)
In Episode #43 TRAILER, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for a good AI future. Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is no…

AISN #40: California AI Legislation (14:00)
Plus, NVIDIA Delays Chip Production, and Do AI Safety Benchmarks Actually Measure Safety? Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. SB 1047, the Most-Discussed California AI Legislation California's Senate Bill 1047 has sparked discussion over AI regulation. While state bills often fly under the radar, SB 1047 has g…

Episode #42: “Actors vs. AI” For Humanity: An AI Risk Podcast (1:23:19)
In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk. Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast is not …

Episode #42 TRAILER: “Actors vs. AI” For Humanity: An AI Risk Podcast (3:11)
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk. Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhumanitypodcast@gmail.com This podcast…

Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast (48:39)
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t”–and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New Y…

Episode #41 TRAILER “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast (9:18)
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times. Brooks wrote something–and in full candor it pissed John off quite a bit. During the full episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times. Please …

Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast (1:30:53)
In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James sh…

Episode #40 TRAILER “Surviving Doom” For Humanity: An AI Risk Podcast (6:17)
In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same.…

Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast (1:23:01)
In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024. Please Donate Here To Help Promote For Humanity: https://www.…

Episode #39 Trailer “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast (4:04)
In Episode #39 Trailer, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation addresses the shifting political landscape around AI-risk legislation in America in July 2024. Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhumanitypodcast EMAIL JOHN: forhum…

AISN #39: Implications of a Trump Administration for AI Policy (12:00)
Plus, Safety Engineering Overview. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. Implications of a Trump administration for AI policy Trump named Ohio Senator J.D. Vance—an AI regulation skeptic—as his pick for vice president. This choice sheds light on the AI policy landscape under a future Trump administration. In this…

Episode #38 “France vs. AGI” For Humanity: An AI Risk Podcast (1:20:19)
In Episode #38, host John Sherman talks with Maxime Fournes, Founder of Pause AI France. With the third AI “Safety” Summit coming up in Paris in February 2025, we examine France’s role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun’s influence in French society and gov…