Research papers and online resources on machine learning fairness.
If I miss your paper, please let me know!
Contact: [email protected]
- Survey
- Book, Blog, Case Study, and Introduction
- Group Fairness in Classification
- Individual Fairness
- Minimax Fairness
- Counterfactual Fairness
- Graph Mining
- Online Learning & Bandits
- Clustering
- Regression
- Outlier Detection
- Ranking
- Generation
- Fairness and Robustness
- Transfer & Federated Learning
- Long-term Impact
- Trustworthiness
- Auditing
- Empirical Study
- Software Engineering
- Library & Toolkit
- Dataset
For fairness & bias in computer vision and natural language processing, please refer to the dedicated resource lists for those areas.

## Survey
- Fairness in rankings and recommendations: an overview, The VLDB Journal'22
- Fairness in Ranking, Part I: Score-based Ranking, ACM Computing Surveys'22
- Fairness in Ranking, Part II: Learning-to-Rank and Recommender Systems, ACM Computing Surveys'22
- A survey on datasets for fairness-aware machine learning, WIREs Data Mining and Knowledge Discovery'22
- A Survey on Bias and Fairness in Machine Learning, ACM Computing Surveys'21
- An Overview of Fairness in Clustering, IEEE Access'21
- Trustworthy AI: A Computational Perspective, arXiv'21
- Algorithm Fairness in AI for Medicine and Healthcare, arXiv'21
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges, arXiv'21
- Fairness in learning-based sequential decision algorithms: A survey, arXiv'20
- Language (Technology) is Power: A Critical Survey of “Bias” in NLP, ACL'20
- Fairness in Machine Learning: A Survey, arXiv'20
- The Frontiers of Fairness in Machine Learning, arXiv'18
- The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, arXiv'18
## Book, Blog, Case Study, and Introduction
- Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models
- Assessing and mitigating unfairness in credit models with the Fairlearn toolkit
- To regulate AI, try playing in a sandbox, Emerging Tech Brew
- NSF grant decisions reflect systemic racism, study argues
- Fairness and Machine Learning: Limitations and Opportunities
- Apple Card algorithm sparks gender bias allegations against Goldman Sachs
- Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?, CHI'19
- Unequal Representation and Gender Stereotypes in Image Search Results for Occupations, CHI'15
- Big Data’s Disparate Impact, California Law Review
- An Analysis of the New York City Police Department’s “Stop-and-Frisk” Policy in the Context of Claims of Racial Bias, JASA'07
- What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
- Amazon scraps secret AI recruiting tool that showed bias against women
- Consumer-Lending Discrimination in the FinTech Era
- Apple Card Investigated After Gender Discrimination Complaints
- When a Computer Program Keeps You in Jail
- European Union regulations on algorithmic decision-making and a “right to explanation”
- An Algorithm That Grants Freedom, or Takes It Away
## Group Fairness in Classification
- Achieving Fairness at No Utility Cost via Data Reweighing, ICML'22
- Fairness with Adaptive Weights, ICML'22
- Bias in Machine Learning Software: Why? How? What to Do?, FSE'21
- Identifying and Correcting Label Bias in Machine Learning, AISTATS'20
- Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes, CIKM'20
- Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions, ICML'19
- Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification, WWW'18
- Optimized Pre-Processing for Discrimination Prevention, NeurIPS'17
- Certifying and Removing Disparate Impact, KDD'15
- Learning Fair Representations, ICML'13
- Data preprocessing techniques for classification without discrimination, Knowledge and Information Systems'12
- On Learning Fairness and Accuracy on Multiple Subgroups, NeurIPS'22
- Fair Representation Learning through Implicit Path Alignment, ICML'22
- Fair Generalized Linear Models with a Convex Penalty, ICML'22
- Fair Normalizing Flows, ICLR'22
- A Stochastic Optimization Framework for Fair Risk Minimization, arXiv'22
- Fairness via Representation Neutralization, NeurIPS'21
- Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints, NeurIPS'21
- A Fair Classifier Using Kernel Density Estimation, NeurIPS'20
- Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning, NeurIPS'20
- Rényi Fair Inference, ICLR'20
- Conditional Learning of Fair Representations, ICLR'20
- A General Approach to Fairness with Optimal Transport, AAAI'20
- Fairness Constraints: A Flexible Approach for Fair Classification, JMLR'19
- Fair Regression: Quantitative Definitions and Reduction-based Algorithms, ICML'19
- Wasserstein Fair Classification, UAI'19
- Empirical Risk Minimization Under Fairness Constraints, NeurIPS'18
- A Reductions Approach to Fair Classification, ICML'18
- Mitigating Unwanted Biases with Adversarial Learning, AIES'18
- Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees, arXiv'18
- Fairness Constraints: Mechanisms for Fair Classification, AISTATS'17
- FairCal: Fairness Calibration for Face Verification, ICLR'22
- Fairness-aware Model-agnostic Positive and Unlabeled Learning, FAccT'22
- FACT: A Diagnostic for Group Fairness Trade-offs, ICML'20
- Equality of Opportunity in Supervised Learning, NeurIPS'16
- Fair Classification and Social Welfare, FAccT'20
- Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing, ICML'20
- Inherent Tradeoffs in Learning Fair Representations, NeurIPS'19
- The Cost of Fairness in Binary Classification, FAT'18
- Inherent Trade-Offs in the Fair Determination of Risk Scores, ITCS'17
- Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments, Big Data'17
- On the (im)possibility of fairness, arXiv'16
- Understanding Instance-Level Impact of Fairness Constraints, ICML'22
- Generalized Demographic Parity for Group Fairness, ICLR'22
- Assessing Fairness in the Presence of Missing Data, NeurIPS'21
- Characterizing Fairness Over the Set of Good Models Under Selective Labels, ICML'21
- Fair Selective Classification via Sufficiency, ICML'21
- Testing Group Fairness via Optimal Transport Projections, ICML'21
- Fairness with Overlapping Groups, NeurIPS'20
- Feature Noise Induces Loss Discrepancy Across Groups, ICML'20
- Why Is My Classifier Discriminatory?, NeurIPS'18
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, FAT'18
- On Fairness and Calibration, NeurIPS'17
- Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, WWW'17
## Individual Fairness
- Learning Antidote Data to Individual Unfairness, arXiv'22
- Metric-Fair Active Learning, ICML'22
- Metric-Fair Classifier Derandomization, ICML'22
- Post-processing for Individual Fairness, NeurIPS'21
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness, ICLR'21
- Individually Fair Gradient Boosting, ICLR'21
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint, AISTATS'21
- What’s Fair about Individual Fairness?, AIES'21
- Learning Certified Individually Fair Representations, NeurIPS'20
- Metric-Free Individual Fairness in Online Learning, NeurIPS'20
- Two Simple Ways to Learn Individual Fairness Metrics from Data, ICML'20
- Training Individually Fair ML Models with Sensitive Subspace Robustness, ICLR'20
- Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness, IJCAI'20
- Metric Learning for Individual Fairness, FORC'20
- Individual Fairness in Pipelines, FORC'20
- Average Individual Fairness: Algorithms, Generalization and Experiments, NeurIPS'19
- iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making, ICDM'19
- Operationalizing Individual Fairness with Pairwise Fair Representations, VLDB'19
## Minimax Fairness
- Active Sampling for Min-Max Fairness, ICML'22
- Adaptive Sampling for Minimax Fair Classification, NeurIPS'21
- Blind Pareto Fairness and Subgroup Robustness, ICML'21
- Fairness without Demographics through Adversarially Reweighted Learning, NeurIPS'20
- Minimax Pareto Fairness: A Multi Objective Perspective, ICML'20
- Fairness Without Demographics in Repeated Loss Minimization, ICML'18
## Counterfactual Fairness
- Causal Conceptions of Fairness and their Consequences, ICML'22
- Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness, ICML'22
- PC-Fairness: A Unified Framework for Measuring Causality-based Fairness, NeurIPS'19
- Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data, FAccT'19
- Counterfactual Fairness, NeurIPS'17
- When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness, NeurIPS'17
- Avoiding Discrimination through Causal Reasoning, NeurIPS'17
- A Causal Framework for Discovering and Removing Direct and Indirect Discrimination, IJCAI'17
## Graph Mining
- RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network, WWW'22
- Correcting Exposure Bias for Link Recommendation, ICML'21
- On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections, ICLR'21
- Individual Fairness for Graph Neural Networks: A Ranking based Approach, KDD'21
- Fairness constraints can help exact inference in structured prediction, NeurIPS'20
- InFoRM: Individual Fairness on Graph Mining, KDD'20
- Fairness-Aware Explainable Recommendation over Knowledge Graphs, SIGIR'20
- Compositional Fairness Constraints for Graph Embeddings, ICML'19
## Online Learning & Bandits
- The price of unfairness in linear bandits with biased feedback, NeurIPS'22
- Fair Sequential Selection Using Supervised Learning Models, NeurIPS'21
- Online Market Equilibrium with Application to Fair Division, NeurIPS'21
- A Unified Approach to Fair Online Learning via Blackwell Approachability, NeurIPS'21
- Fair Algorithms for Multi-Agent Multi-Armed Bandits, NeurIPS'21
- Fair Exploration via Axiomatic Bargaining, NeurIPS'21
- Group-Fair Online Allocation in Continuous Time, NeurIPS'20
## Clustering
- Robust Fair Clustering: A Novel Fairness Attack and Defense Framework, arXiv'22
- Fair and Fast k-Center Clustering for Data Summarization, ICML'22
- Fair Clustering Under a Bounded Cost, NeurIPS'21
- Better Algorithms for Individually Fair k-Clustering, NeurIPS'21
- Approximate Group Fairness for Clustering, ICML'21
- Variational Fair Clustering, AAAI'21
- Socially Fair k-Means Clustering, FAccT'21
- Deep Fair Clustering for Visual Learning, CVPR'20
- Fair Hierarchical Clustering, NeurIPS'20
- Fair Algorithms for Clustering, NeurIPS'19
- Coresets for Clustering with Fairness Constraints, NeurIPS'19
- Scalable Fair Clustering, ICML'19
- Fair k-Center Clustering for Data Summarization, ICML'19
- Guarantees for Spectral Clustering with Fairness Constraints, ICML'19
- Fair Clustering Through Fairlets, NeurIPS'17
## Regression
- Selective Regression Under Fairness Criteria, ICML'22
- Pairwise Fairness for Ordinal Regression, NeurIPS'22
- Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem, NeurIPS'21
- Fair Regression with Wasserstein Barycenters, NeurIPS'20
- Fair Regression via Plug-In Estimator and Recalibration, NeurIPS'20
## Outlier Detection
- Deep Clustering based Fair Outlier Detection, KDD'21
- FairOD: Fairness-aware Outlier Detection, AIES'21
- FairLOF: Fairness in Outlier Detection, Data Science and Engineering'21
## Ranking
- Fair Rank Aggregation, NeurIPS'22
- Fair Ranking with Noisy Protected Attributes, NeurIPS'22
- Individually Fair Rankings, ICLR'21
- Two-sided fairness in rankings via Lorenz dominance, NeurIPS'21
- Fairness in Ranking under Uncertainty, NeurIPS'21
- Fair algorithms for selecting citizens’ assemblies, Nature'21
- On the Problem of Underranking in Group-Fair Ranking, ICML'21
- Fairness and Bias in Online Selection, ICML'21
- Policy Learning for Fairness in Ranking, NeurIPS'19
- The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric, NeurIPS'19
- Balanced Ranking with Diversity Constraints, IJCAI'19
## Generation
- DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks, NeurIPS'21
- Fairness for Image Generation with Uncertain Sensitive Attributes, ICML'21
- FairGAN: Fairness-aware Generative Adversarial Networks, BigData'18
## Fairness and Robustness
- Robust Fair Clustering: A Novel Fairness Attack and Defense Framework, ICLR'23
- Fair Classification with Adversarial Perturbations, NeurIPS'21
- Sample Selection for Fair and Robust Training, NeurIPS'21
- Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees, ICML'21
- To be Robust or to be Fair: Towards Fairness in Adversarial Training, ICML'21
- Exacerbating Algorithmic Bias through Fairness Attacks, AAAI'21
- Fair Classification with Group-Dependent Label Noise, FAccT'21
- Robust Optimization for Fairness with Noisy Protected Groups, NeurIPS'20
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training, ICML'20
- Poisoning Attacks on Algorithmic Fairness, ECML'20
- Noise-tolerant Fair Classification, NeurIPS'19
- Stable and Fair Classification, ICML'19
## Transfer & Federated Learning
- Fairness Guarantees under Demographic Shift, ICLR'22
- Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning, NeurIPS'21
- Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning, NeurIPS'21
- FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout, NeurIPS'21
- Does enforcing fairness mitigate biases caused by subpopulation shift?, NeurIPS'21
- Ditto: Fair and Robust Federated Learning Through Personalization, ICML'21
- Fair Transfer Learning with Missing Protected Attributes, AIES'19
## Long-term Impact
- Achieving Long-Term Fairness in Sequential Decision Making, AAAI'22
- Unintended Selection: Persistent Qualification Rate Disparities and Interventions, NeurIPS'21
- How Do Fair Decisions Fare in Long-term Qualification?, NeurIPS'20
- The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally, FAccT'20
- Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness, NeurIPS'19
- Delayed Impact of Fair Machine Learning, ICML'18
- A Short-term Intervention for Long-term Fairness in the Labor Market, WWW'18
## Trustworthiness
- Washing The Unwashable: On The (Im)possibility of Fairwashing Detection, NeurIPS'22
- The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations, FAccT'22
- Differentially Private Empirical Risk Minimization under the Fairness Lens, NeurIPS'21
- Characterizing the risk of fairwashing, NeurIPS'21
- Fair Performance Metric Elicitation, NeurIPS'20
- Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference, NeurIPS'20
- You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods, AAAI'20
- Fairwashing: the risk of rationalization, ICML'19
## Auditing
- Active Fairness Auditing, ICML'22
- Statistical inference for individual fairness, ICLR'21
- Verifying Individual Fairness in Machine Learning Models, UAI'20
## Empirical Study
- Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training, NeurIPS'21
- An empirical characterization of fair machine learning for clinical risk prediction, Journal of Biomedical Informatics
## Library & Toolkit
- FairPy: A Python Library for Machine Learning Fairness, Brandeis University
- AI Fairness 360, IBM Research
- Fairlearn: A toolkit for assessing and improving fairness in AI, Microsoft Research
- fairpy: An open-source library of fair division algorithms in Python, Ariel University
- FairML: Auditing Black-Box Predictive Models, MIT
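Toolkits such as Fairlearn and AI Fairness 360 ship group-fairness metrics out of the box, but the core computation is small enough to sketch by hand. Below is a minimal, dependency-free illustration of the demographic parity difference (the gap in positive-prediction rates across sensitive groups); the function name and toy data are illustrative, not taken from any of the libraries above.

```python
# Demographic parity difference: the gap between the highest and lowest
# positive-prediction rate P(yhat = 1 | a) across sensitive groups a.
def demographic_parity_difference(y_pred, sensitive):
    rates = {}
    for a in set(sensitive):
        group = [yp for yp, s in zip(y_pred, sensitive) if s == a]
        rates[a] = sum(group) / len(group)  # positive-prediction rate per group
    return max(rates.values()) - min(rates.values())

# Toy predictions for two groups (0 and 1).
y_pred    = [1, 1, 0, 1, 0, 0, 0, 1]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
# Group 0 rate = 3/4, group 1 rate = 1/4, so the difference is 0.5.
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

Fairlearn exposes an equivalent metric as `fairlearn.metrics.demographic_parity_difference(y_true, y_pred, sensitive_features=...)`; AI Fairness 360 computes the same quantity through its dataset metric classes.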
## Dataset
- Folktables, UC Berkeley
- fairness_dataset, Leibniz University
- Communities and Crime Data Set
- Statlog (German Credit Data) Data Set
- Bank Marketing Data Set
- Adult Data Set
- COMPAS Recidivism Risk Score Data and Analysis
- Arrhythmia Data Set
- LSAC National Longitudinal Bar Passage Study
- Medical Expenditure Panel Survey Data
- Drug consumption Data Set
- Student Performance Data Set
- default of credit card clients Data Set
- Adult Reconstruction dataset
- American Community Survey Public Use Microdata Sample
- Census-Income (KDD) Data Set
- The Dutch Virtual Census of 2001 - IPUMS Subset
- Diabetes 130-US hospitals for years 1999-2008 Data Set
- Parkinsons Telemonitoring Data Set
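Most of the tabular datasets above share the same shape: features, a binary label, and one or more sensitive attributes. A common first check before any fairness analysis is the per-group base rate, P(y = 1 | group). The sketch below uses a tiny hand-made sample in the style of the Adult dataset; the column names and values are illustrative only.

```python
# Per-group base rates: P(y = 1 | group), a quick check for label
# imbalance across a sensitive attribute. Rows mimic the Adult
# dataset's shape; the values are made up for illustration.
rows = [
    {"sex": "Female", "income_gt_50k": 0},
    {"sex": "Female", "income_gt_50k": 1},
    {"sex": "Female", "income_gt_50k": 0},
    {"sex": "Male",   "income_gt_50k": 1},
    {"sex": "Male",   "income_gt_50k": 1},
    {"sex": "Male",   "income_gt_50k": 0},
]

def base_rates(rows, group_key, label_key):
    totals, positives = {}, {}
    for r in rows:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + r[label_key]
    return {g: positives[g] / totals[g] for g in totals}

print(base_rates(rows, "sex", "income_gt_50k"))
# Female: 1/3, Male: 2/3 on this toy sample
```

On the real datasets the same computation flags, for example, the well-known label imbalance across `sex` in Adult before any model is trained.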