Dec 14, 2019
Much work has documented instances of unfairness in deployed machine learning models, and significant effort has been dedicated to creating algorithms that take issues of fairness into account. Our work highlights an important but understudied source of unfairness: market forces that drive differing amounts of firm investment in data across populations. We develop a high-level framework, based on insights from learning theory and industrial organization, to study this phenomenon. In a simple model of this type of data-driven market, we first show that a monopoly will invest unequally across groups. There are two natural avenues for preventing this disparate impact: promoting competition and regulating firm behavior. We show first that competition, under many natural models, does not eliminate incentives to invest unequally, and can even exacerbate them. We then consider two avenues for regulating the monopoly - requiring the monopoly to ensure that each group's error rate is low, or forcing the groups' error rates to be similar to each other - and quantify the price of fairness (and who pays it). These models imply that mitigating fairness concerns may require policy-driven solutions, not only technological ones.
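The monopoly result can be illustrated with a toy sketch. This is my own illustrative model, not the paper's exact formulation: I assume a standard learning-theory error rate of the form err(n) = C / sqrt(n), per-group revenue proportional to group size times accuracy, a linear per-sample data cost k, and arbitrary example values for C, k, and the group sizes. Solving the first-order condition shows the profit-maximizing firm buys more data, and so achieves lower error, for the larger group.

```python
# Hypothetical toy model of a data-driven monopoly (illustrative only):
# the firm chooses how many labeled samples n_g to collect for each group g.
# Assumed error rate: err(n) = C / sqrt(n); per-group profit:
#     pi_g(n) = size_g * (1 - C / sqrt(n)) - k * n
# Setting d(pi_g)/dn = 0:  size_g * C / (2 * n**1.5) = k
# gives the optimal investment n_g* = (size_g * C / (2 * k)) ** (2 / 3),
# which grows with group size -- the monopoly invests unequally.

C, k = 1.0, 0.001                              # assumed constants
sizes = {"majority": 0.8, "minority": 0.2}     # assumed population shares

def optimal_samples(size):
    """Profit-maximizing sample count for a group of the given size."""
    return (size * C / (2 * k)) ** (2 / 3)

for group, size in sizes.items():
    n = optimal_samples(size)
    err = C / n ** 0.5
    print(f"{group}: n* = {n:.1f}, error = {err:.3f}")
```

With these numbers the majority group receives roughly 2.5x the data investment of the minority group and correspondingly lower error, matching the disparate-impact effect the abstract describes.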
Neural Information Processing Systems (NeurIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.