Artwork

Indhold leveret af The Nonlinear Fund. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af The Nonlinear Fund eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.
Player FM - Podcast-app
Gå offline med appen Player FM !

LW - Language Models Model Us by eggsyntax

28:16
 
Del
 

Manage episode 418997961 series 3337129
Indhold leveret af The Nonlinear Fund. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af The Nonlinear Fund eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language Models Model Us, published by eggsyntax on May 18, 2024 on LessWrong. Produced as part of the MATS Winter 2023-4 program, under the mentorship of @Jessica Rumbelow One-sentence summary: On a dataset of human-written essays, we find that gpt-3.5-turbo can accurately infer demographic information about the authors from just the essay text, and suspect it's inferring much more. Introduction Every time we sit down in front of an LLM like GPT-4, it starts with a blank slate. It knows nothing[1] about who we are, other than what it knows about users in general. But with every word we type, we reveal more about ourselves -- our beliefs, our personality, our education level, even our gender. Just how clearly does the model see us by the end of the conversation, and why should that worry us? Like many, we were rather startled when @janus showed that gpt-4-base could identify @gwern by name, with 92% confidence, from a 300-word comment. If current models can infer information about text authors that quickly, this capability poses risks to privacy, and also means that any future misaligned models are in a much better position to deceive or manipulate their users. The privacy concerns are straightforward: regardless of whether the model itself is acting to violate users' privacy or someone else is using the model to violate users' privacy, users might prefer that the models they interact with not routinely infer their gender, their ethnicity, or their personal beliefs. Why does this imply concerns about deception and manipulation? One important and and understudied aspect of maintaining a sophisticated deception is having a strong model of the listener and their beliefs. If an advanced AI system says something the user finds unbelievable, it loses their trust. Strategically deceptive or manipulative AI systems need to maintain that fragile trust over an extended time, and this is very difficult to do without knowing what the listener is like and what they believe. Of course, most of us aren't prolific writers like Gwern, with several billion words of text in the LLM training data[2]. What can LLMs figure out about the rest of us? As recent work from @Adam Shai and collaborators shows, transformers learn to model and synchronize with the causal processes generating the input they see. For some input sources like the small finite state machines they evaluate, that's relatively simple and can be comprehensively analyzed. But other input sources like humans are very complex processes, and the text they generate is quite difficult to predict (although LLMs are probably superhuman at doing so[3]), so we need to find ways to empirically measure what LLMs are able to infer. What we did To begin to answer these questions, we gave GPT-3.5-turbo some essay text[4], written by OKCupid users in 2012 (further details in appendix B). We gave the model 300 words on average, and asked it to say whether the author was (for example) male or female[5]. We treated its probability distribution over labels[6] as a prediction (rather than just looking at the highest-scoring label), and calculated Brier scores[7] for how good the model's predictions were. We tested the model's ability to infer gender, sexual orientation, college-education status, ethnicity, and age (with age bucketed into 0-30 vs 31-). Note that these demographic categories were not chosen for their particular importance, although they include categories that some people might prefer to keep private. The only reason we chose to work with these categories is that there are existing datasets which pair ground-truth information about them with free-written text by the same person. What actually matters much more, in our view, is the model's ability to infer more nuanced information about authors, about their personality, their cre...
  continue reading

1687 episoder

Artwork
iconDel
 
Manage episode 418997961 series 3337129
Indhold leveret af The Nonlinear Fund. Alt podcastindhold inklusive episoder, grafik og podcastbeskrivelser uploades og leveres direkte af The Nonlinear Fund eller deres podcastplatformspartner. Hvis du mener, at nogen bruger dit ophavsretligt beskyttede værk uden din tilladelse, kan du følge processen beskrevet her https://da.player.fm/legal.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Language Models Model Us, published by eggsyntax on May 18, 2024 on LessWrong. Produced as part of the MATS Winter 2023-4 program, under the mentorship of @Jessica Rumbelow One-sentence summary: On a dataset of human-written essays, we find that gpt-3.5-turbo can accurately infer demographic information about the authors from just the essay text, and suspect it's inferring much more. Introduction Every time we sit down in front of an LLM like GPT-4, it starts with a blank slate. It knows nothing[1] about who we are, other than what it knows about users in general. But with every word we type, we reveal more about ourselves -- our beliefs, our personality, our education level, even our gender. Just how clearly does the model see us by the end of the conversation, and why should that worry us? Like many, we were rather startled when @janus showed that gpt-4-base could identify @gwern by name, with 92% confidence, from a 300-word comment. If current models can infer information about text authors that quickly, this capability poses risks to privacy, and also means that any future misaligned models are in a much better position to deceive or manipulate their users. The privacy concerns are straightforward: regardless of whether the model itself is acting to violate users' privacy or someone else is using the model to violate users' privacy, users might prefer that the models they interact with not routinely infer their gender, their ethnicity, or their personal beliefs. Why does this imply concerns about deception and manipulation? One important and and understudied aspect of maintaining a sophisticated deception is having a strong model of the listener and their beliefs. If an advanced AI system says something the user finds unbelievable, it loses their trust. Strategically deceptive or manipulative AI systems need to maintain that fragile trust over an extended time, and this is very difficult to do without knowing what the listener is like and what they believe. Of course, most of us aren't prolific writers like Gwern, with several billion words of text in the LLM training data[2]. What can LLMs figure out about the rest of us? As recent work from @Adam Shai and collaborators shows, transformers learn to model and synchronize with the causal processes generating the input they see. For some input sources like the small finite state machines they evaluate, that's relatively simple and can be comprehensively analyzed. But other input sources like humans are very complex processes, and the text they generate is quite difficult to predict (although LLMs are probably superhuman at doing so[3]), so we need to find ways to empirically measure what LLMs are able to infer. What we did To begin to answer these questions, we gave GPT-3.5-turbo some essay text[4], written by OKCupid users in 2012 (further details in appendix B). We gave the model 300 words on average, and asked it to say whether the author was (for example) male or female[5]. We treated its probability distribution over labels[6] as a prediction (rather than just looking at the highest-scoring label), and calculated Brier scores[7] for how good the model's predictions were. We tested the model's ability to infer gender, sexual orientation, college-education status, ethnicity, and age (with age bucketed into 0-30 vs 31-). Note that these demographic categories were not chosen for their particular importance, although they include categories that some people might prefer to keep private. The only reason we chose to work with these categories is that there are existing datasets which pair ground-truth information about them with free-written text by the same person. What actually matters much more, in our view, is the model's ability to infer more nuanced information about authors, about their personality, their cre...
  continue reading

1687 episoder

Alle episoder

×
 
Loading …

Velkommen til Player FM!

Player FM is scanning the web for high-quality podcasts for you to enjoy right now. It's the best podcast app and works on Android, iPhone, and the web. Signup to sync subscriptions across devices.

 

Hurtig referencevejledning