Artwork

Indhold leveret af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.
Player FM - Podcast-app
Gå offline med appen Player FM !

Using Role-Playing Scenarios to Identify Bias in LLMs

45:07
 
Del
 

Manage episode 440233040 series 2487640
Indhold leveret af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.

Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.

  continue reading

428 episoder

Artwork
iconDel
 
Manage episode 440233040 series 2487640
Indhold leveret af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af Carnegie Mellon University Software Engineering Institute and Members of Technical Staff at the Software Engineering Institute eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.

Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.

  continue reading

428 episoder

Alle episoder

×
 
Loading …

Velkommen til Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Hurtig referencevejledning