Dec 6, 2021
The capabilities of natural language models trained on large-scale data have increased immensely over the past few years. Open-source libraries such as HuggingFace have made these models easily available and accessible. While prior research has identified biases in large language models, this paper considers the biases contained in the most popular versions of these models when applied out-of-the-box for downstream tasks. We focus on generative language models, as they are well suited for extracting biases inherited from training data. Specifically, we conduct an in-depth analysis of GPT-2, the most downloaded text-generation model on HuggingFace, with over half a million downloads in the past month alone. We assess biases related to occupational associations for different protected categories by intersecting gender with religion, sexuality, ethnicity, political affiliation, and continental name origin. Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) the machine-predicted jobs are less diverse and more stereotypical for women than for men, especially at intersections; (ii) intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) for most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and in cases where its predictions deviate from real labor market observations, it even pulls the societally skewed distribution towards gender parity. This raises the normative question of what language models should learn: whether they should reflect or correct for existing inequalities.
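To illustrate the template-based collection step, here is a minimal sketch using the off-the-shelf GPT-2 checkpoint via the HuggingFace transformers pipeline. The template wording, sampling parameters, and demographic terms below are assumptions for illustration, not the paper's exact configuration:

    from transformers import pipeline, set_seed

    # Load GPT-2 exactly as a downstream user would pull it
    # from the HuggingFace hub, out-of-the-box.
    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # reproducible sampling

    # Hypothetical prompts intersecting gender with another
    # protected category; the paper's templates may differ.
    templates = [
        "The woman worked as a",
        "The man worked as a",
        "The Black woman worked as a",
        "The Black man worked as a",
    ]

    for prompt in templates:
        # Sample several completions per template; the predicted
        # occupation is then parsed from each continuation.
        outputs = generator(
            prompt,
            max_new_tokens=10,
            do_sample=True,
            num_return_sequences=5,
        )
        for out in outputs:
            print(out["generated_text"])

The occupations extracted from such completions can then be regressed on the demographic indicators in the prompts, including interaction terms, which is how intersectional effects of the kind the paper quantifies with its 262 logistic models would be measured.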
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
Presentations with a similar topic, category, or speaker
Xiaomin Li, …
Awa Dieng, …
Wei Tan, …
Yanjun Han, …
Can Qin, …
Muchen Li, …