Representation Learning and Fairness

December 9, 2019

Speakers

About the presentation

It is increasingly evident that widely deployed machine learning models can lead to discriminatory outcomes and can exacerbate disparities present in the training data. With the accelerating adoption of machine learning for real-world decision-making tasks, issues of bias and fairness in machine learning must be addressed. Our motivating thesis is that, among a variety of emerging approaches, representation learning provides a unique toolset for evaluating and potentially mitigating unfairness. This tutorial presents existing research and proposes open problems at the intersection of representation learning and fairness. We will look at the (im)possibility of learning fair task-agnostic representations, connections between fairness and generalization performance, and the opportunity for leveraging tools from representation learning to implement algorithmic individual and group fairness, among others. The tutorial is designed to be accessible to a broad audience of machine learning practitioners, and the necessary background is a working knowledge of predictive machine learning.
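As a minimal illustration of one group-fairness notion the tutorial covers (demographic parity), the sketch below measures the gap in positive-prediction rates between two groups of a binary protected attribute. The function name and the toy data are illustrative, not part of the tutorial's materials:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: binary protected attribute (0/1).
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: a classifier that favors group 0 (75% vs. 25% positive rate).
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)  # 0.5
```

A perfectly demographic-parity-fair classifier would have a gap of 0; fair-representation methods aim to make such gaps small for any downstream predictor trained on the learned representation.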

Organizer

Category

About the organizer (NIPS 2019)

Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
