Fairness

Jun 13, 2019

About

Proportionally Fair Clustering

We extend the fair machine learning literature by considering the problem of proportional centroid clustering in a metric context. For clustering n points with k centers, we define fairness as proportionality, meaning that any n/k points are entitled to form their own cluster if there is another center that is closer in distance for all of these n/k points. We seek clustering solutions to which there are no such justified complaints from any subset of agents, without assuming any a priori notion of protected subsets. We present and analyze algorithms to efficiently compute, optimize, and audit proportional solutions. We conclude with an empirical examination of the tradeoff between proportional solutions and the k-means objective.

Stable and Fair Classification

Fair classification has been a topic of intense study in machine learning, and several algorithms have been proposed towards this important task. However, in a recent study, Friedler et al. pointed out that several fair classification algorithms are not stable with respect to variations in the training set -- a crucial consideration in several applications. Motivated by their work, we study the problem of designing classification algorithms that are both fair and stable. We propose an extended framework that builds on fair classification algorithms formulated as optimization problems, by introducing a stability-focused regularization term. Theoretically, we prove a stability guarantee that was lacking in prior fair classification algorithms, and also provide an accuracy guarantee for our extended framework. Our accuracy guarantee can be used to inform the selection of the regularization parameter. To the best of our knowledge, this is the first work that combines stability and fairness in automated decision-making tasks. We assess the benefits of our approach empirically by extending several fair classification algorithms shown to achieve the best balance between fairness and accuracy on the Adult dataset. Our empirical results show that our extended framework indeed improves stability at only a slight sacrifice in accuracy.

Flexibly Fair Representation Learning by Disentanglement

We consider the problem of learning representations that achieve group and subgroup fairness with respect to multiple sensitive attributes. Taking inspiration from the disentangled representation learning literature, we propose an algorithm for learning compact representations of datasets that are useful for reconstruction and prediction, but are also "flexibly fair", meaning they can be easily modified at test time to achieve subgroup demographic parity with respect to multiple sensitive attributes and their conjunctions. We show empirically that the resulting encoder -- which does not require the sensitive attributes for inference -- allows a single representation to be adapted to a variety of fair classification tasks with new target labels and subgroup definitions.
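
The proportionality notion in the Proportionally Fair Clustering abstract above lends itself to a direct audit: a set of open centers is violated if some alternative center would be strictly closer for at least n/k of the points (rounded up). Below is a minimal sketch of such an audit, assuming candidate centers are restricted to the data points themselves; the function name and this restriction are illustrative choices, not necessarily the paper's exact procedure.

import numpy as np
from math import ceil

def proportionality_violations(points, centers, k):
    """Return candidate centers that at least ceil(n/k) points strictly prefer
    to every open center, i.e. potential 'justified complaints'.

    points:  (n, d) array of data points
    centers: (k, d) array of open centers
    """
    n = len(points)
    threshold = ceil(n / k)
    # Distance from each point to its nearest open center.
    d_nearest = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    violations = []
    for y in points:  # candidate centers restricted to data points (illustrative)
        d_to_y = np.linalg.norm(points - y, axis=1)
        if np.sum(d_to_y < d_nearest) >= threshold:
            violations.append(y)
    return violations

An empty result means no group of ceil(n/k) points shares a strictly closer candidate center, so the clustering is proportional with respect to this candidate set.
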
Fair Regression: Quantitative Definitions and Reduction-Based Algorithms

In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems fair regression. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness--accuracy frontiers on several standard datasets.

Fairness without Harm: Decoupled Classifiers with Preference Guarantees

Should we train different classifiers for groups defined by sensitive attributes, such as gender and ethnicity? In a domain such as medicine, it may be ethical to allow classifiers to vary by group membership -- so long as treatment disparity is aligned with the principles of beneficence ("do the best") and non-maleficence ("do no harm"). We argue that classifiers should satisfy preference guarantees for individuals who are subjected to disparate treatment: (i) the majority of individuals in each group should prefer their classifier to a pooled classifier that makes no use of sensitive attributes (rationality, responsive to non-maleficence), and (ii) the majority of individuals in each group should prefer their classifier to the classifier assigned to any other group (envy-freeness, responsive to beneficence). Standard decoupled training, which fits a separate classifier for each group, may fail (i) or (ii) due to data disparities or heterogeneity in the data-generating distributions between groups. We introduce a recursive decoupling procedure that adaptively chooses group attributes for decoupling, and present formal conditions for achieving these preference guarantees. We illustrate the benefits of our approach through experiments on real-world datasets, showing that it can safely improve performance for groups defined by multiple sensitive attributes without violating preference guarantees on test data.

Differentially Private Fair Learning

Motivated by settings in which predictive models may be required to be non-discriminatory with respect to certain attributes (such as race), but even collecting the sensitive attribute may be forbidden or restricted, we initiate the study of fair learning under the constraint of differential privacy. Our first algorithm is a private implementation of the equalized odds post-processing approach of Hardt et al. (2016). This algorithm is appealingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as a form of “disparate treatment”. Our second algorithm is a differentially private version of the oracle-efficient in-processing approach of Agarwal et al. (2018). This algorithm is more complex but need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time.
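
The two fairness notions in the Fair Regression abstract above correspond to quantities that can be measured directly on held-out predictions. The sketch below is a hedged illustration, not the paper's reduction: it checks statistical parity via the Kolmogorov-Smirnov distance between the two group-conditional prediction distributions, and bounded group loss via per-group squared error; the tolerances eps and zeta are illustrative parameters.

import numpy as np
from scipy.stats import ks_2samp

def check_fair_regression(y_pred, y_true, group, eps=0.05, zeta=0.1):
    """Empirical checks for the two notions, assuming a binary protected attribute.

    Statistical parity: predictions should be (nearly) independent of the
    protected attribute -- measured here by the KS distance between the
    prediction distributions of the two groups.
    Bounded group loss: the prediction error restricted to each group should
    stay below a pre-determined level zeta -- measured here by MSE.
    """
    g0, g1 = (group == 0), (group == 1)
    ks_distance = ks_2samp(y_pred[g0], y_pred[g1]).statistic
    group_losses = {int(g): float(np.mean((y_pred[group == g] - y_true[group == g]) ** 2))
                    for g in np.unique(group)}
    return {
        "statistical_parity_ok": ks_distance <= eps,
        "ks_distance": float(ks_distance),
        "bounded_group_loss_ok": all(l <= zeta for l in group_losses.values()),
        "group_losses": group_losses,
    }
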
Obtaining Fairness using Optimal Transport Theory

In the fair classification setup, we recast the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter. We propose a Random Repair which yields a tradeoff between minimal information loss and a certain amount of fairness.

Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions

When the average performance of a prediction model varies significantly with respect to a sensitive attribute (e.g., race or gender), the performance disparity can be expressed in terms of the probability distributions of input and output variables for each sensitive group. In this paper, we exploit this fact to explain and repair the performance disparity of a fixed classification model over a population of interest. Given a black-box classifier that performs unevenly across sensitive groups, we aim to eliminate the performance gap by perturbing the distribution of input features for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for popular fairness criteria (e.g., predictive parity, equal FPR, equal opportunity). We then design a descent algorithm to efficiently learn a counterfactual distribution given the black-box classifier and samples drawn from the underlying population. We use the estimated counterfactual distribution to build a data preprocessor that reduces disparate impact without training a new model. We illustrate these use cases through experiments on real-world datasets, showing that we can repair different kinds of disparate impact without a large drop in accuracy.

On the Long-term Impact of Algorithmic Decision Policies: Effort Unfairness and Feature Segregation through Social Learning

Most existing notions of algorithmic fairness are static: they ensure some form of allocative equality at the time of decision making, but do not account for the adverse impact of today's algorithmic decisions on the long-term welfare and prosperity of certain segments of the population. We take a broader perspective on algorithmic fairness. We propose an effort-based measure of fairness and present a data-driven framework for characterizing the long-term impact of algorithmic policies on reshaping the underlying population. Motivated by the psychological literature on social learning and the economic literature on equality of opportunity, we propose a micro-scale model of how individuals respond to decision-making algorithms. We employ existing measures of segregation from sociology and economics to quantify the resulting macro-scale, population-level change. Importantly, we observe that different models may shift the group-conditional distribution of qualifications in different directions. Our findings raise a number of important questions regarding the formalization of fairness for decision-making models.
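
In one dimension, the barycenter repair described in the Obtaining Fairness using Optimal Transport Theory abstract has a simple closed form: a value is mapped to the barycenter quantile function evaluated at its within-group rank, and the Wasserstein-2 barycenter quantile function on the real line is the group-weighted average of the group quantile functions. The sketch below follows that recipe; the function names and the "repair each point with probability lam" reading of the Random Repair are illustrative assumptions, not the authors' exact procedure.

import numpy as np

def total_repair_1d(x, group):
    """Map a 1-D feature onto the Wasserstein barycenter of the
    group-conditional distributions, using empirical quantiles."""
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    # Empirical CDF value (within-group rank) of each point.
    t = np.empty(len(x), dtype=float)
    for g in groups:
        idx = np.where(group == g)[0]
        t[idx] = (np.argsort(np.argsort(x[idx])) + 1) / len(idx)
    # Barycenter quantile function = weighted average of group quantile functions.
    repaired = np.zeros(len(x), dtype=float)
    for g, w in zip(groups, weights):
        repaired += w * np.quantile(x[group == g], t)
    return repaired

def random_repair_1d(x, group, lam, seed=0):
    """Repair each point with probability lam, otherwise leave it unchanged --
    one simple reading of the Random Repair tradeoff."""
    rng = np.random.default_rng(seed)
    repaired = total_repair_1d(x, group)
    keep_original = rng.random(len(x)) >= lam
    return np.where(keep_original, x, repaired)
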
Making Decisions that Reduce Discriminatory Impacts

As machine learning algorithms move into real-world settings, it is crucial to ensure they are aligned with societal values. There has been much work on one aspect of this, namely unfairness in the prediction problem: how can we reduce discrimination in the predictions themselves? While an important question, solutions to this problem only apply in a restricted setting, because there we have full control over the predictions. Often we care about the non-discrimination of quantities we do not have full control over. Thus, we describe another key aspect of this challenge, the impact problem: how can we reduce discrimination arising from the real-world impact of decisions? To address this, we describe causal methods that model the relevant parts of the real-world system in which the decisions are made. Unlike previous approaches, these models not only allow us to map the causal pathway of a single decision, but also to model the effect of interference -- how the impact on an individual depends on decisions made about other people. Often, the goal of decision policies is to maximize a beneficial impact overall. To reduce discrimination in how these benefits are distributed, we devise a constraint inspired by recent work in counterfactual fairness, and give an efficient procedure to solve the constrained optimization problem. We demonstrate our approach with an example: how to increase the number of students taking college entrance exams in New York City public schools.
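
The constrained optimization in the Making Decisions that Reduce Discriminatory Impacts abstract can be illustrated with a toy stand-in: choose fractional decisions that maximize total estimated benefit subject to a budget and a cap on the gap in average benefit delivered to two groups. This is a hedged sketch of "maximize beneficial impact under a non-discrimination constraint" as a linear program, not the authors' counterfactual-fairness formulation; the names, the budget, and the gap bound eps are illustrative, and both groups are assumed non-empty.

import numpy as np
from scipy.optimize import linprog

def fair_benefit_policy(benefit, group, budget, eps=0.05):
    """Choose decisions d in [0, 1] maximizing total estimated benefit,
    subject to sum(d) <= budget and
    |avg benefit delivered to group 0 - avg benefit delivered to group 1| <= eps."""
    n = len(benefit)
    g0, g1 = (group == 0), (group == 1)
    # Per-person contribution to each group's average delivered benefit.
    a0 = np.where(g0, benefit / g0.sum(), 0.0)
    a1 = np.where(g1, benefit / g1.sum(), 0.0)
    gap = a0 - a1
    A_ub = np.vstack([np.ones(n),   # budget: sum of decisions <= budget
                      gap,          # group-0 average minus group-1 average <= eps
                      -gap])        # group-1 average minus group-0 average <= eps
    b_ub = np.array([budget, eps, eps])
    result = linprog(c=-benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    return result.x

# Example usage on synthetic estimated benefits.
rng = np.random.default_rng(0)
benefit = rng.uniform(0, 1, size=20)
group = rng.integers(0, 2, size=20)
decisions = fair_benefit_policy(benefit, group, budget=8)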

About ICML 2019

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
