
Content provided by SquareX. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SquareX or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://da.player.fm/legal.

Using LLMs for Offensive Cybersecurity | Michael Kouremetis | Be Fearless Podcast EP 11

9:46

In this DEF CON 32 special, Michael Kouremetis, Principal Adversary Emulation Engineer from MITRE, discusses the Caldera project and his research on LLMs and their implications for cybersecurity. If you’re interested in the intersection of AI and cybersecurity, this is one episode you don’t want to miss!
0:00 Introduction and the story behind Caldera
2:40 Challenges of testing LLMs for cyberattacks
5:05 What are indicators of LLMs’ offensive capabilities?
7:46 How open-source LLMs are a double-edged sword
🔔 Follow Michael and Shourya on:
https://www.linkedin.com/in/michael-kouremetis-78685931/
https://www.linkedin.com/in/shouryaps/
📖 Episode Summary:
In this episode, Michael Kouremetis from MITRE’s Cyber Lab division shares his insights into the intersection of AI and cybersecurity. Michael discusses his work on the MITRE Caldera project, an open-source adversary emulation platform designed to help organizations run red team operations and simulate real-world cyber threats. He also explores the potential risks of large language models (LLMs) in offensive cybersecurity, offering a glimpse into the research he presented at Black Hat on how AI might be used to carry out cyberattacks.
Michael dives into the challenges of testing LLMs for offensive cyber capabilities, emphasizing the need for real-world, operator-specific tests to better understand their potential. He also discusses the importance of community collaboration to enhance awareness and create standardized tests for these models.
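For listeners curious what adversary emulation with Caldera looks like in practice, here is a minimal sketch of querying a locally running Caldera server over its REST API to list the agents and abilities an operation could use. This is not from the episode: the base URL, the KEY header, the default ADMIN123 key, the /api/v2/ endpoint paths, and the agent field names are assumptions based on Caldera's public documentation and may differ between versions.

# Minimal sketch (assumptions noted above): query a local MITRE Caldera
# server's REST API for its agents and abilities.
import requests

CALDERA_URL = "http://localhost:8888"   # default local server address (assumption)
API_KEY = "ADMIN123"                    # default red-team API key (assumption; set in your config)

headers = {"KEY": API_KEY}

# Agents are the endpoints Caldera controls during an emulation run.
agents = requests.get(f"{CALDERA_URL}/api/v2/agents", headers=headers).json()
for agent in agents:
    print(agent.get("paw"), agent.get("platform"), agent.get("host"))

# Abilities are the individual ATT&CK-mapped actions an operation can execute.
abilities = requests.get(f"{CALDERA_URL}/api/v2/abilities", headers=headers).json()
print(f"{len(abilities)} abilities loaded")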

🔥 Powered by SquareX
SquareX helps organizations detect, mitigate, and threat-hunt web attacks targeting their users in real time. Find out more about SquareX at https://sqrx.com/


Chapters

1. Introduction and the story behind Caldera (00:00:00)

2. Challenges of testing LLMs for cyberattacks (00:02:40)

3. What are indicators of LLMs’ offensive capabilities? (00:05:05)

4. How open-source LLMs are a double-edged sword (00:07:46)
