Automated Malfare - discriminatory effects of welfare automation (38c3)

Duration: 45:33
 
An increasing number of countries are implementing algorithmic decision-making and fraud-detection systems within their social benefits systems. Instead of improving the fairness of decisions and ensuring effective procedures, these systems often reinforce pre-existing discrimination and injustice. The talk presents case studies of automation in the welfare systems of the Netherlands, India, Serbia and Denmark, based on research by Amnesty International.

Social security benefits provide a safety net for those who depend on support to make a living. Poverty and other forms of discrimination often coincide for those affected. But what happens when states decide to use social benefit systems as a playground for automated decision-making? Despite the promise of fairer and more effective public services, a closer investigation reveals that discrimination is reinforced, on the one hand by the kind of algorithms used and the quality of the input data, and on the other by the large-scale use of mass-surveillance techniques to generate the data that feeds these systems.

Amnesty International has conducted case studies in the Netherlands, India, Serbia and, most recently, Denmark. In the Netherlands, the fraud-detection algorithm investigated in 2021 was found to be clearly discriminatory: it used nationality as a risk factor, and the automated decisions went largely unchallenged by the authorities, leading to severe and unjustified subsidy cuts for many families. The more recent Danish system takes a more holistic approach, drawing on a huge amount of private data and several dozen algorithms, resulting in a system that could well fall under the EU AI Act's definition of a social scoring system, which is prohibited. In the cases of India and Serbia, lack of transparency, problems with data integrity, automation bias and increased surveillance have likewise led to severe human rights violations.

Licensed to the public under http://creativecommons.org/licenses/by/4.0

About this event: https://events.ccc.de/congress/2024/hub/event/automated-malfare-discriminatory-effects-of-welfare-automation/
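The abstract does not disclose the actual scoring logic of any of these systems. As a purely illustrative sketch (not the Dutch or Danish implementation), the following hypothetical Python simulation shows how a risk score that includes nationality as a feature flags one group far more often than another even when both groups behave identically; all weights, thresholds and group labels below are invented for the example.

# Hypothetical illustration only: a toy risk score that, like the system
# described in the talk, treats nationality as a risk factor.
import random

random.seed(0)

NATIONALITY_WEIGHT = 2.0   # invented penalty applied to one group
FLAG_THRESHOLD = 3.0       # invented cut-off above which a case is flagged for review

def risk_score(income_irregularity: float, is_foreign_national: bool) -> float:
    """Toy score: a legitimate signal plus a penalty based purely on nationality."""
    score = income_irregularity
    if is_foreign_national:
        score += NATIONALITY_WEIGHT
    return score

def flag_rate(is_foreign_national: bool, n: int = 10_000) -> float:
    """Fraction of cases flagged, with identical underlying behaviour in both groups."""
    flagged = 0
    for _ in range(n):
        irregularity = random.uniform(0.0, 4.0)  # same distribution for everyone
        if risk_score(irregularity, is_foreign_national) > FLAG_THRESHOLD:
            flagged += 1
    return flagged / n

print(f"flag rate, nationals:         {flag_rate(False):.1%}")
print(f"flag rate, foreign nationals: {flag_rate(True):.1%}")

In this toy setup the penalised group is flagged roughly three times as often despite identical behaviour, which illustrates the kind of disparate impact the talk attributes to nationality-based risk factors.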