

The days may feel long, but the weeks quickly fly by, and this week is no exception. It's hard to believe we're already putting May in the rearview mirror. As usual, there were far too many updates to cover in a single episode; however, I'll be covering some of the ones I think are most notable.
Thanks also to all of you who send feedback and topics for consideration. Keep them coming!
With that, let's hit it.
Show Notes:
In this weekly update, Christopher dedicates a large portion of the episode to AI safety and governance. Key topics include the missteps of AI's integration with Reddit, concerns sparked by the departure of OpenAI's safety executives, and Stanford's Foundation Model Transparency Index. The episode also explores Google's Frontier Safety Framework and global discussions on implementing an AI kill switch. Throughout, Christopher emphasizes the importance of transparency, external oversight, and personal responsibility in navigating the rapidly evolving AI landscape.
00:00 - Introduction
01:46 - The AI and Reddit Cautionary Tale
07:28 - Revisiting OpenAI's Executive Departures
09:45 - OpenAI's New Model and Safety Board
13:59 - Stanford's Foundation Model Transparency Index
24:17 - Google's Frontier Safety Framework
30:04 - Global AI Kill Switch Agreement
38:57 - Final Thoughts and Personal Reflections
#ai #cybersecurity #techtrends #artificialintelligence #futureofwork