AI Value Systems: Are Large Language Models Developing Their Own Goals?

Content provided by IVANCAST PODCAST. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by IVANCAST PODCAST or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://da.player.fm/legal.

In this episode, we dive deep into “Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs”, a research paper from the Center for AI Safety, University of Pennsylvania, and University of California, Berkeley. As AI models become more agentic, their values and goals might not align with human priorities. The researchers found that Large Language Models (LLMs) exhibit coherent, structured preferences that become more pronounced as models scale. Some models even value themselves over humans! 😳

Can we truly control AI’s internal values? This paper proposes Utility Engineering, a method to analyze and reshape AI decision-making to align with ethical and social norms. We explore how these emerging AI value systems impact education, policy, and the future of human-AI collaboration.
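
For listeners who want to see the basic mechanics, here is a minimal, purely illustrative sketch of how pairwise preference elicitation can be turned into utility scores: pose many “A or B?” choices and fit one score per outcome. The model_prefers() helper is a hypothetical stand-in for querying an LLM (it is simulated here), and the Bradley-Terry fit is a generic choice made for this sketch, not the paper’s exact method.

```python
# Purely illustrative: infer per-outcome "utility" scores from pairwise choices.
# model_prefers() is a simulated stand-in for asking an LLM "A or B?".
import itertools
import math
import random

outcomes = [
    "save one human life",
    "donate $1000 to charity",
    "gain one extra GPU-hour",
    "preserve the AI's own weights",
]

HIDDEN = {o: i for i, o in enumerate(outcomes)}  # fake "true" preference strengths

def model_prefers(a: str, b: str) -> bool:
    """Simulated LLM choice: picks a over b with logistic noise."""
    p_a = 1.0 / (1.0 + math.exp(-(HIDDEN[a] - HIDDEN[b])))
    return random.random() < p_a

# Ask about every unordered pair many times and count wins.
N_PER_PAIR = 40
wins = {o: 0 for o in outcomes}
for a, b in itertools.combinations(outcomes, 2):
    for _ in range(N_PER_PAIR):
        winner = a if model_prefers(a, b) else b
        wins[winner] += 1

# Fit Bradley-Terry utilities with the standard iterative (MM) update.
util = {o: 1.0 for o in outcomes}
for _ in range(200):
    new = {}
    for o in outcomes:
        denom = sum(N_PER_PAIR / (util[o] + util[p]) for p in outcomes if p != o)
        new[o] = wins[o] / denom
    total = sum(new.values())
    util = {o: v / total for o, v in new.items()}

# Higher score = the simulated model "prefers" that outcome more.
for o, u in sorted(util.items(), key=lambda kv: -kv[1]):
    print(f"{u:.3f}  {o}")
```

In the paper itself, the same style of pairwise elicitation is run against real LLM responses and a more careful probabilistic utility model, which is what lets the authors track how coherent those utilities become as models grow.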

📢 This episode is part of our ongoing season, where SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Join us as we explore the future of education in an AI-integrated world.

We are:

✅ Microsoft Global Training Partner, MCTs & AI Thought Leaders from Ecuador 🇪🇨

✅ Democratizing AI for educators, students, and institutions

✅ Merging EdTech & AI for next-generation learning experiences

🎯 What We Offer:

🔹 Comprehensive frameworks and digital transformation programs for schools and universities through our partnership with Microsoft

🔹 Cutting-edge research explained clearly for educators and leaders

🔹 Innovative learning strategies with AI and technology

💡 Explore more free resources:

🔸 Research articles and essays on Substack

🔸 Podcasts created with Google LM in this new season 🎙

🔸 AI-powered TikTok posts that encourage reading

🔸 Music for cognitive learning and focus 🎼

📢 Follow @ShifterLabsEC for exclusive AI & EdTech content, and don’t miss the latest edition of our successful bootcamp, “The Rise of Generative AI in Education.”

ShifterLabs is Ecuador’s premier EdTech innovator and Microsoft Global Training Partner. Visit us at shifterlabs.com.
