Awesome Machine Learning Fairness

Research papers and online resources on machine learning fairness.
If your paper is missing, please let me know!
Contact: [email protected]

Table of Contents

  1. Survey
  2. Book, Blog, Case Study, and Introduction
  3. Group Fairness in Classification
  4. Individual Fairness
  5. Minimax Fairness
  6. Counterfactual Fairness
  7. Graph Mining
  8. Online Learning & Bandits
  9. Clustering
  10. Regression
  11. Outlier Detection
  12. Ranking
  13. Generation
  14. Fairness and Robustness
  15. Transfer & Federated Learning
  16. Long-term Impact
  17. Trustworthiness
  18. Auditing
  19. Empirical Study
  20. Software Engineering
  21. Library & Toolkit
  22. Dataset

For fairness & bias in computer vision & natural language processing, please refer to:

  1. Computer Vision
  2. Natural Language Processing

Survey

  1. Fairness in rankings and recommendations: an overview, The VLDB Journal'22
  2. Fairness in Ranking, Part I: Score-based Ranking, ACM Computing Surveys'22
  3. Fairness in Ranking, Part II: Learning-to-Rank and Recommender Systems, ACM Computing Surveys'22
  4. A survey on datasets for fairness-aware machine learning, WIREs Data Mining and Knowledge Discovery'22
  5. A Survey on Bias and Fairness in Machine Learning, ACM Computing Surveys'21
  6. An Overview of Fairness in Clustering, IEEE Access'21
  7. Trustworthy AI: A Computational Perspective, arXiv'21
  8. Algorithm Fairness in AI for Medicine and Healthcare, arXiv'21
  9. Socially Responsible AI Algorithms: Issues, Purposes, and Challenges, arXiv'21
  10. Fairness in learning-based sequential decision algorithms: A survey, arXiv'20
  11. Language (Technology) is Power: A Critical Survey of “Bias” in NLP, ACL'20
  12. Fairness in Machine Learning: A Survey, arXiv'20
  13. The Frontiers of Fairness in Machine Learning, arXiv'18
  14. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning, arXiv'18

Book, Blog, Case Study, and Introduction

  1. Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models
  2. Assessing and mitigating unfairness in credit models with the Fairlearn toolkit
  3. To regulate AI, try playing in a sandbox, Emerging Tech Brew
  4. NSF grant decisions reflect systemic racism, study argues
  5. Fairness and Machine Learning: Limitations and Opportunities
  6. Apple Card algorithm sparks gender bias allegations against Goldman Sachs
  7. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?, CHI'19
  8. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations, CHI'15
  9. Big Data’s Disparate Impact, California Law Review
  10. An Analysis of the New York City Police Department’s “Stop-and-Frisk” Policy in the Context of Claims of Racial Bias, JASA'07
  11. What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
  12. Amazon scraps secret AI recruiting tool that showed bias against women
  13. Consumer-Lending Discrimination in the FinTech Era
  14. Apple Card Investigated After Gender Discrimination Complaints
  15. When a Computer Program Keeps You in Jail
  16. European Union regulations on algorithmic decision-making and a “right to explanation”
  17. An Algorithm That Grants Freedom, or Takes It Away

Group Fairness in Classification

Pre-processing

  1. Achieving Fairness at No Utility Cost via Data Reweighing, ICML'22
  2. Fairness with Adaptive Weights, ICML'22
  3. Bias in Machine Learning Software: Why? How? What to Do?, FSE'21
  4. Identifying and Correcting Label Bias in Machine Learning, AISTATS'20
  5. Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes, CIKM'20
  6. Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions, ICML'19
  7. Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification, WWW'18
  8. Optimized Pre-Processing for Discrimination Prevention, NeurIPS'17
  9. Certifying and Removing Disparate Impact, KDD'15
  10. Learning Fair Representations, ICML'13
  11. Data preprocessing techniques for classification without discrimination, Knowledge and Information Systems'12
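
As a concrete illustration of the pre-processing family above, here is a minimal sketch of instance reweighing in the spirit of entry 11 (Kamiran & Calders): each (group, label) cell is weighted by the ratio of its expected to observed frequency, so that the sensitive group and the label become statistically independent under the weighted distribution. The column names are hypothetical.

```python
import pandas as pd

def reweighing_weights(df, group_col, label_col):
    """Weight each row by P(group) * P(label) / P(group, label).

    Under these weights the sensitive group and the label are
    statistically independent, which is the core idea of the
    reweighing pre-processing technique.
    """
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Hypothetical usage on a frame with columns "sex" and "hired":
# df["w"] = reweighing_weights(df, "sex", "hired")
# The weights can then be passed to any classifier that accepts
# sample_weight, e.g. LogisticRegression().fit(X, y, sample_weight=df["w"]).
```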

In-processing

  1. On Learning Fairness and Accuracy on Multiple Subgroups, NeurIPS'22
  2. Fair Representation Learning through Implicit Path Alignment, ICML'22
  3. Fair Generalized Linear Models with a Convex Penalty, ICML'22
  4. Fair Normalizing Flows, ICLR'22
  5. A Stochastic Optimization Framework for Fair Risk Minimization, arXiv'22
  6. Fairness via Representation Neutralization, NeurIPS'21
  7. Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints, NeurIPS'21
  8. A Fair Classifier Using Kernel Density Estimation, NeurIPS'20
  9. Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning, NeurIPS'20
  10. Rényi Fair Inference, ICLR'20
  11. Conditional Learning of Fair Representations, ICLR'20
  12. A General Approach to Fairness with Optimal Transport, AAAI'20
  13. Fairness Constraints: A Flexible Approach for Fair Classification, JMLR'19
  14. Fair Regression: Quantitative Definitions and Reduction-based Algorithms, ICML'19
  15. Wasserstein Fair Classification, UAI'19
  16. Empirical Risk Minimization Under Fairness Constraints, NeurIPS'18
  17. A Reductions Approach to Fair Classification, ICML'18
  18. Mitigating Unwanted Biases with Adversarial Learning, AIES'18
  19. Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees, arXiv'18
  20. Fairness Constraints: Mechanisms for Fair Classification, AISTATS'17
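
Several of the in-processing papers above are implemented in open-source toolkits; for instance, the reductions approach of entry 17 (A Reductions Approach to Fair Classification) is available in Fairlearn. A minimal sketch, assuming Fairlearn and scikit-learn are installed and that X, y, s are arrays of features, binary labels, and a sensitive attribute:

```python
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Reductions approach (Agarwal et al., ICML'18): wrap any cost-sensitive
# base learner and solve a sequence of reweighted problems so that the
# resulting randomized classifier satisfies the chosen fairness constraint.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(solver="liblinear"),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=s)
y_pred = mitigator.predict(X)
```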

Post-processing

  1. FairCal: Fairness Calibration for Face Verification, ICLR'22
  2. Fairness-aware Model-agnostic Positive and Unlabeled Learning, FAccT'22
  3. FACT: A Diagnostic for Group Fairness Trade-offs, ICML'20
  4. Equality of Opportunity in Supervised Learning, NeurIPS'16
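
A simple post-processing example in the spirit of entry 4 (Equality of Opportunity in Supervised Learning): given scores from a fixed classifier, pick a per-group threshold so that the groups' true positive rates approximately match. This is a minimal sketch with hypothetical inputs, not the paper's full construction (which also uses randomized thresholds).

```python
import numpy as np

def tpr(scores, labels, threshold):
    """True positive rate at a given score threshold."""
    pred = scores >= threshold
    return pred[labels == 1].mean()

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """Pick, for each group, the threshold whose TPR is closest to target_tpr."""
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        candidates = np.unique(scores[mask])
        tprs = np.array([tpr(scores[mask], labels[mask], t) for t in candidates])
        thresholds[g] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds

# Hypothetical usage: scores, labels, groups are 1-D arrays of equal length.
# thr = equal_opportunity_thresholds(scores, labels, groups)
# y_hat = scores >= np.vectorize(thr.get)(groups)
```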

Tradeoff

  1. Fair Classification and Social Welfare, FAccT'20
  2. Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing, ICML'20
  3. Inherent Tradeoffs in Learning Fair Representations, NeurIPS'19
  4. The Cost of Fairness in Binary Classification, FAT'18
  5. Inherent Trade-Offs in the Fair Determination of Risk Scores, ITCS'17
  6. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments, Big Data'17
  7. On the (im)possibility of fairness, arXiv'16

Others

  1. Understanding Instance-Level Impact of Fairness Constraints, ICML'22
  2. Generalized Demographic Parity for Group Fairness, ICLR'22
  3. Assessing Fairness in the Presence of Missing Data, NeurIPS'21
  4. Characterizing Fairness Over the Set of Good Models Under Selective Labels, ICML'21
  5. Fair Selective Classification via Sufficiency, ICML'21
  6. Testing Group Fairness via Optimal Transport Projections, ICML'21
  7. Fairness with Overlapping Groups, NeurIPS'20
  8. Feature Noise Induces Loss Discrepancy Across Groups, ICML'20
  9. Why Is My Classifier Discriminatory?, NeurIPS'18
  10. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, FAT'18
  11. On Fairness and Calibration, NeurIPS'17
  12. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, WWW'17
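
Most of the group fairness papers in this section evaluate some variant of the demographic parity and equalized odds gaps. A minimal sketch of both metrics for a binary classifier and a binary group attribute, with hypothetical arrays:

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """|P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)|."""
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

def equalized_odds_gap(y_true, y_pred, groups):
    """Max over y in {0, 1} of the between-group gap in P(y_hat = 1 | Y = y)."""
    gaps = []
    for y in (0, 1):
        r0 = y_pred[(groups == 0) & (y_true == y)].mean()
        r1 = y_pred[(groups == 1) & (y_true == y)].mean()
        gaps.append(abs(r0 - r1))
    return max(gaps)
```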

Individual Fairness

  1. Learning Antidote Data to Individual Unfairness, arXiv'22
  2. Metric-Fair Active Learning, ICML'22
  3. Metric-Fair Classifier Derandomization, ICML'22
  4. Post-processing for Individual Fairness, NeurIPS'21
  5. SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness, ICLR'21
  6. Individually Fair Gradient Boosting, ICLR'21
  7. Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint, AISTATS'21
  8. What’s Fair about Individual Fairness?, AIES'21
  9. Learning Certified Individually Fair Representations, NeurIPS'20
  10. Metric-Free Individual Fairness in Online Learning, NeurIPS'20
  11. Two Simple Ways to Learn Individual Fairness Metrics from Data, ICML'20
  12. Training Individually Fair ML Models with Sensitive Subspace Robustness, ICLR'20
  13. Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness, IJCAI'20
  14. Metric Learning for Individual Fairness, FORC'20
  15. Individual Fairness in Pipelines, FORC'20
  16. Average Individual Fairness: Algorithms, Generalization and Experiments, NeurIPS'19
  17. iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making, ICDM'19
  18. Operationalizing Individual Fairness with Pairwise Fair Representations, VLDB'19
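
The individual fairness literature above builds on the "similar individuals should be treated similarly" principle, formalized as a Lipschitz condition on the predictor. A minimal, assumption-laden sketch of checking this empirically: sample pairs and flag those whose prediction distance exceeds L times their distance under a chosen similarity metric (choosing that metric is itself a research question in several of the papers above).

```python
import numpy as np

def lipschitz_violations(predict_proba, X, metric, L=1.0, n_pairs=1000, seed=0):
    """Fraction of sampled pairs violating |f(x1) - f(x2)| <= L * d(x1, x2).

    `predict_proba` maps a feature matrix to scores in [0, 1]; `metric` is a
    task-specific similarity metric d(x1, x2) supplied by the practitioner.
    """
    rng = np.random.default_rng(seed)
    idx1 = rng.integers(0, len(X), size=n_pairs)
    idx2 = rng.integers(0, len(X), size=n_pairs)
    f = predict_proba(X)
    violations = 0
    for i, j in zip(idx1, idx2):
        if abs(f[i] - f[j]) > L * metric(X[i], X[j]):
            violations += 1
    return violations / n_pairs
```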

Minimax Fairness

  1. Active Sampling for Min-Max Fairness, ICML'22
  2. Adaptive Sampling for Minimax Fair Classification, NeurIPS'21
  3. Blind Pareto Fairness and Subgroup Robustness, ICML'21
  4. Fairness without Demographics through Adversarially Reweighted Learning, NeurIPS'20
  5. Minimax Pareto Fairness: A Multi Objective Perspective, ICML'20
  6. Fairness Without Demographics in Repeated Loss Minimization, ICML'18
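
The minimax line of work above replaces the average loss with the worst group's loss. A small sketch of the objective being compared, with hypothetical arrays of per-example losses and group labels:

```python
import numpy as np

def group_losses(losses, groups):
    """Average per-example loss within each sensitive group."""
    return {g: losses[groups == g].mean() for g in np.unique(groups)}

def minimax_objective(losses, groups):
    """Worst-group loss: the quantity minimax-fair methods aim to minimize."""
    return max(group_losses(losses, groups).values())

# Standard ERM minimizes losses.mean(); minimax fairness instead minimizes
# minimax_objective(losses, groups).
```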

Counterfactual Fairness

  1. Causal Conceptions of Fairness and their Consequences, ICML'22
  2. Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness, ICML'22
  3. PC-Fairness: A Unified Framework for Measuring Causality-based Fairness, NeurIPS'19
  4. Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data, FAccT'19
  5. Counterfactual Fairness, NeurIPS'17
  6. When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness, NeurIPS'17
  7. Avoiding Discrimination through Causal Reasoning, NeurIPS'17
  8. A Causal Framework for Discovering and Removing Direct and Indirect Discrimination, IJCAI'17
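
For reference, the definition from entry 5 (Counterfactual Fairness, Kusner et al.): a predictor Ŷ is counterfactually fair with respect to a sensitive attribute A if, for every individual, intervening on A in the underlying causal model would not change the prediction's distribution:

$$P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)$$

for all y, x, and attribute values a, a', where U denotes the latent background variables of the causal model.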

Graph Mining

  1. RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network, WWW'22
  2. Correcting Exposure Bias for Link Recommendation, ICML'21
  3. On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections, ICLR'21
  4. Individual Fairness for Graph Neural Networks: A Ranking based Approach, KDD'21
  5. Fairness constraints can help exact inference in structured prediction, NeurIPS'20
  6. InFoRM: Individual Fairness on Graph Mining, KDD'20
  7. Fairness-Aware Explainable Recommendation over Knowledge Graphs, SIGIR'20
  8. Compositional Fairness Constraints for Graph Embeddings, ICML'19

Online Learning & Bandits

  1. The price of unfairness in linear bandits with biased feedback, NeurIPS'22
  2. Fair Sequential Selection Using Supervised Learning Models, NeurIPS'21
  3. Online Market Equilibrium with Application to Fair Division, NeurIPS'21
  4. A Unified Approach to Fair Online Learning via Blackwell Approachability, NeurIPS'21
  5. Fair Algorithms for Multi-Agent Multi-Armed Bandits, NeurIPS'21
  6. Fair Exploration via Axiomatic Bargaining, NeurIPS'21
  7. Group-Fair Online Allocation in Continuous Time, NeurIPS'20

Clustering

  1. Robust Fair Clustering: A Novel Fairness Attack and Defense Framework, ICLR'23
  2. Fair and Fast k-Center Clustering for Data Summarization, ICML'22
  3. Fair Clustering Under a Bounded Cost, NeurIPS'21
  4. Better Algorithms for Individually Fair k-Clustering, NeurIPS'21
  5. Approximate Group Fairness for Clustering, ICML'21
  6. Variational Fair Clustering, AAAI'21
  7. Socially Fair k-Means Clustering, FAccT'21
  8. Deep Fair Clustering for Visual Learning, CVPR'20
  9. Fair Hierarchical Clustering, NeurIPS'20
  10. Fair Algorithms for Clustering, NeurIPS'19
  11. Coresets for Clustering with Fairness Constraints, NeurIPS'19
  12. Scalable Fair Clustering, ICML'19
  13. Fair k-Center Clustering for Data Summarization, ICML'19
  14. Guarantees for Spectral Clustering with Fairness Constraints, ICML'19
  15. Fair Clustering Through Fairlets, NeurIPS'17
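
A small illustration of the notion of balance used in much of the fair clustering work above (e.g., entry 15, Fair Clustering Through Fairlets): for two demographic groups, a clustering's balance is the worst cluster's minority-to-majority ratio, so it is highest when every cluster mirrors the dataset-wide group proportions. A minimal sketch with hypothetical 0/1 group labels:

```python
import numpy as np

def balance(cluster_ids, group_ids):
    """Balance of a two-group clustering: min over clusters of
    min(#group0 / #group1, #group1 / #group0).
    Returns 0 if any cluster contains only one group."""
    scores = []
    for c in np.unique(cluster_ids):
        in_c = group_ids[cluster_ids == c]
        n0 = np.sum(in_c == 0)
        n1 = np.sum(in_c == 1)
        if n0 == 0 or n1 == 0:
            return 0.0
        scores.append(min(n0 / n1, n1 / n0))
    return float(min(scores))
```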

Regression

  1. Selective Regression Under Fairness Criteria, ICML'22
  2. Pairwise Fairness for Ordinal Regression, NeurIPS'22
  3. Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem, NeurIPS'21
  4. Fair Regression with Wasserstein Barycenters, NeurIPS'20
  5. Fair Regression via Plug-In Estimator and Recalibration, NeurIPS'20

Outlier Detection

  1. Deep Clustering based Fair Outlier Detection, KDD'21
  2. FairOD: Fairness-aware Outlier Detection, AIES'21
  3. FairLOF: Fairness in Outlier Detection, Data Science and Engineering'21

Ranking

  1. Fair Rank Aggregation, NeurIPS'22
  2. Fair Ranking with Noisy Protected Attributes, NeurIPS'22
  3. Individually Fair Rankings, ICLR'21
  4. Two-sided fairness in rankings via Lorenz dominance, NeurIPS'21
  5. Fairness in Ranking under Uncertainty, NeurIPS'21
  6. Fair algorithms for selecting citizens’ assemblies, Nature'21
  7. On the Problem of Underranking in Group-Fair Ranking, ICML'21
  8. Fairness and Bias in Online Selection, ICML'21
  9. Policy Learning for Fairness in Ranking, NeurIPS'19
  10. The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric, NeurIPS'19
  11. Balanced Ranking with Diversity Constraints, IJCAI'19

Generation

  1. DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks, NeurIPS'21
  2. Fairness for Image Generation with Uncertain Sensitive Attributes, ICML'21
  3. FairGAN: Fairness-aware Generative Adversarial Networks, BigData'18

Fairness and Robustness

  1. Robust Fair Clustering: A Novel Fairness Attack and Defense Framework, ICLR'23
  2. Fair Classification with Adversarial Perturbations, NeurIPS'21
  3. Sample Selection for Fair and Robust Training, NeurIPS'21
  4. Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees, ICML'21
  5. To be Robust or to be Fair: Towards Fairness in Adversarial Training, ICML'21
  6. Exacerbating Algorithmic Bias through Fairness Attacks, AAAI'21
  7. Fair Classification with Group-Dependent Label Noise, FAccT'21
  8. Robust Optimization for Fairness with Noisy Protected Groups, NeurIPS'20
  9. FR-Train: A Mutual Information-Based Approach to Fair and Robust Training, ICML'20
  10. Poisoning Attacks on Algorithmic Fairness, ECML'20
  11. Noise-tolerant Fair Classification, NeurIPS'19
  12. Stable and Fair Classification, ICML'19

Transfer & Federated Learning

  1. Fairness Guarantees under Demographic Shift, ICLR'22
  2. Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning, NeurIPS'21
  3. Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning, NeurIPS'21
  4. FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout, NeurIPS'21
  5. Does enforcing fairness mitigate biases caused by subpopulation shift?, NeurIPS'21
  6. Ditto: Fair and Robust Federated Learning Through Personalization, ICML'21
  7. Fair Transfer Learning with Missing Protected Attributes, AIES'19

Long-term Impact

  1. Achieving Long-Term Fairness in Sequential Decision Making, AAAI'22
  2. Unintended Selection: Persistent Qualification Rate Disparities and Interventions, NeurIPS'21
  3. How Do Fair Decisions Fare in Long-term Qualification?, NeurIPS'20
  4. The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally, FAccT'20
  5. Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness, NeurIPS'19
  6. Delayed Impact of Fair Machine Learning, ICML'18
  7. A Short-term Intervention for Long-term Fairness in the Labor Market, WWW'18

Trustworthiness

  1. Washing The Unwashable : On The (Im)possibility of Fairwashing Detection, NeurIPS'22
  2. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations, FAccT'22
  3. Differentially Private Empirical Risk Minimization under the Fairness Lens, NeurIPS'21
  4. Characterizing the risk of fairwashing, NeurIPS'21
  5. Fair Performance Metric Elicitation, NeurIPS'20
  6. Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference, NeurIPS'20
  7. You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods, AAAI'20
  8. Fairwashing: the risk of rationalization, ICML'19

Auditing

  1. Active Fairness Auditing, ICML'22
  2. Statistical inference for individual fairness, ICLR'21
  3. Verifying Individual Fairness in Machine Learning Models, UAI'20

Empirical Study

  1. Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training, NeurIPS'21
  2. An empirical characterization of fair machine learning for clinical risk prediction, Journal of Biomedical Informatics'21

Software Engineering

  1. Fairway: A Way to Build Fair ML Software, FSE'20

Library & Toolkit

  1. FairPy: A Python Library for Machine Learning Fairness, Brandeis University
  2. AI Fairness 360, IBM Research
  3. Fairlearn: A toolkit for assessing and improving fairness in AI, Microsoft Research
  4. fairpy: An open-source library of fair division algorithms in Python, Ariel University
  5. FairML: Auditing Black-Box Predictive Models, MIT
  6. Folktables, UC Berkeley
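
As a quick taste of the toolkits above, a minimal sketch with AI Fairness 360 (assuming the package is installed and the raw UCI Adult CSV files have already been downloaded to the location aif360 expects):

```python
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the UCI Adult dataset with 'sex' among the protected attributes.
data = AdultDataset()

metric = BinaryLabelDatasetMetric(
    data,
    privileged_groups=[{'sex': 1}],
    unprivileged_groups=[{'sex': 0}],
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())
```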

Dataset

  1. fairness_dataset, Leibniz University

Tabular Data

  1. Communities and Crime Data Set
  2. Statlog (German Credit Data) Data Set
  3. Bank Marketing Data Set
  4. Adult Data Set
  5. COMPAS Recidivism Risk Score Data and Analysis
  6. Arrhythmia Data Set
  7. LSAC National Longitudinal Bar Passage Study
  8. Medical Expenditure Panel Survey Data
  9. Drug consumption Data Set
  10. Student Performance Data Set
  11. default of credit card clients Data Set
  12. Adult Reconstruction dataset
  13. American Community Survey Public Use Microdata Sample
  14. Census-Income (KDD) Data Set
  15. The Dutch Virtual Census of 2001 - IPUMS Subset
  16. Diabetes 130-US hospitals for years 1999-2008 Data Set
  17. Parkinsons Telemonitoring Data Set
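
Several of the census-style datasets above (entries 13-15) can also be pulled programmatically; for example, the Folktables package listed in the toolkit section exposes prediction tasks built on American Community Survey data. A minimal sketch, assuming folktables is installed and downloading is permitted:

```python
from folktables import ACSDataSource, ACSIncome

# Download one year of ACS person records for California and materialize the
# ACSIncome prediction task (features, income label, and a group column
# usable as a sensitive attribute).
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)
features, labels, groups = ACSIncome.df_to_numpy(acs_data)
```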

Graph Data

  1. MovieLens 100K Dataset

Text Data

  1. Jigsaw Unintended Bias in Toxicity Classification

Image Data

  1. CelebFaces Attributes Dataset (CelebA)
  2. FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age