Practical Aspects of Fairness in AI
Introduction
As Artificial Intelligence (AI) becomes more embedded in society, ensuring fairness in its decision-making processes is crucial. AI systems are increasingly used to make sensitive predictions, such as determining creditworthiness, hiring suitability, and the likelihood of criminal re-offense. While these models promise efficiency and uniformity in decision-making, they can unintentionally perpetuate and even amplify existing biases.
In my Master’s project at Imperial College London, I aimed to address these concerns by extending IBM’s open-source AI Fairness 360 (AIF360) toolkit. The goal was to integrate three state-of-the-art fairness remediation algorithms: Subgroup Fairness, Instantaneous Fairness, and Distributional Repair. These methods were adapted to be versatile, easy to implement, and scalable across a variety of datasets and fairness use cases.
Understanding Fairness in AI
Fairness in AI refers to designing systems that ensure equitable treatment regardless of an individual’s sensitive attributes, such as race, gender, or age. There are several ways to define fairness, each with its advantages and limitations.
Some common fairness notions include:
- Group Fairness: Ensures that sensitive attribute groups (e.g., male/female, or different racial groups) receive similar treatment. For instance, in a hiring algorithm, this might involve ensuring that men and women have equal probabilities of being hired (a small numeric sketch of this notion follows the list).
- Individual Fairness: Ensures that individuals with similar qualifications or characteristics are treated similarly, regardless of their group memberships. This aims to avoid situations where two people with similar skill sets face different outcomes based on their sensitive attributes.
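To make the group-fairness notion concrete, here is a minimal numeric sketch of my own (not taken from the project's code), assuming binary predictions and a single binary sensitive attribute:

```python
# Minimal sketch: the group-fairness gap as the difference in favorable-outcome
# rates between two sensitive-attribute groups (hypothetical data).
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])      # model decisions (1 = favorable)
sensitive = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = privileged group, 0 = unprivileged

rate_priv = y_pred[sensitive == 1].mean()
rate_unpriv = y_pred[sensitive == 0].mean()

# Statistical parity difference: 0 means both groups receive the favorable
# outcome at the same rate; negative values disadvantage the unprivileged group.
print("statistical parity difference:", rate_unpriv - rate_priv)
```

AIF360 exposes this same quantity through its metric classes, which is what makes it a natural common ground for fairness tooling.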
Project Goals
The main objectives of this project were to:
- Adapt research-specific algorithms into versatile tools that are compatible with AIF360.
- Ensure that the tools could be easily integrated into existing AI fairness pipelines.
- Provide comprehensive and accessible documentation to encourage broader use.
- Evaluate the algorithms thoroughly on various datasets to measure their effectiveness.
Implemented Algorithms
1. Subgroup and Instantaneous Fairness
The Subgroup Fairness and Instantaneous Fairness algorithms focus on minimizing disparities between different groups over time and across subgroups. They extend a conventional group-fairness formulation to settings where fairness can vary across time periods or across subgroups defined by combinations of multiple sensitive attributes.
Subgroup Fairness ensures equitable treatment by minimizing the maximum average loss across all subgroups over an entire time period. For example, if an AI model is used in hiring, Subgroup Fairness aims to balance the hiring success rates across all subgroups (e.g., gender and race combinations) over an extended period.
Instantaneous Fairness, on the other hand, aims to maintain fairness at each individual time point by equalizing the loss for all subgroups at each time step. This algorithm is particularly relevant for applications requiring real-time fairness, such as news feed algorithms, where it is crucial that all subgroups have an equal chance of seeing relevant news stories.
Both algorithms employ min-max optimization, balancing the losses across subgroups so that no subgroup is disproportionately disadvantaged. By addressing both long-term and real-time fairness, these algorithms provide a robust framework for fairness in AI systems.
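As a toy illustration of the min-max objective (my own sketch, assuming a simple linear-regression setting rather than the project's actual models), the snippet below minimizes the worst per-subgroup loss instead of the overall average loss:

```python
# Min-max sketch: fit linear weights that minimize the *worst* per-subgroup
# mean squared error, rather than the pooled average error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # features
groups = rng.integers(0, 2, size=200)    # hypothetical subgroup labels (0/1)
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=200)

def worst_group_loss(w):
    losses = []
    for g in (0, 1):
        mask = groups == g
        residual = y[mask] - X[mask] @ w
        losses.append(np.mean(residual ** 2))
    return max(losses)                   # the min-max objective

result = minimize(worst_group_loss, x0=np.zeros(3), method="Nelder-Mead")
print("weights minimizing the worst subgroup loss:", result.x)
```

Minimizing the pooled error can leave one subgroup with a much larger loss than the other; the min-max objective explicitly rules that out.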
2. Distributional Repair Algorithm
The Distributional Repair algorithm takes a different approach by focusing on conditional independence. The idea is to reduce the correlation between sensitive and non-sensitive features in a dataset, thus increasing fairness before the model is even trained.
This algorithm employs Optimal Transport (OT) theory to modify the distribution of dataset features. It essentially “repairs” the dataset by reducing dependencies between sensitive attributes (e.g., race or gender) and the non-sensitive input features. This method enhances fairness at the preprocessing stage, making it a powerful technique to integrate with models that act as “black boxes.”
Optimal Transport identifies an optimal way to shift data points to achieve a target distribution, reducing potential bias while preserving the original structure of the data.
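To give a feel for how optimal transport can weaken the link between a feature and the sensitive attribute, here is a small one-dimensional sketch of my own (not the toolkit's implementation): in one dimension, the optimal transport map between two empirical distributions is simply the quantile-to-quantile mapping, which lets us push each group's feature values toward a shared target distribution.

```python
# 1-D optimal-transport "repair" sketch: map each group's feature values onto
# a common target distribution via quantile matching (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
feature_priv = rng.normal(loc=1.0, scale=1.0, size=500)    # feature | privileged
feature_unpriv = rng.normal(loc=0.0, scale=1.5, size=500)  # feature | unprivileged

def quantile_map(source, target):
    """Send each source value to the target value at the same empirical quantile
    (the monotone 1-D optimal transport map between the two samples)."""
    quantiles = (np.argsort(np.argsort(source)) + 0.5) / len(source)
    return np.quantile(target, quantiles)

# Repair toward the pooled distribution so the feature no longer reveals the group.
target = np.concatenate([feature_priv, feature_unpriv])
repaired_priv = quantile_map(feature_priv, target)
repaired_unpriv = quantile_map(feature_unpriv, target)

print("group means before repair:", feature_priv.mean(), feature_unpriv.mean())
print("group means after repair: ", repaired_priv.mean(), repaired_unpriv.mean())
```

After the mapping, the feature's distribution is approximately the same in both groups, so a downstream model can no longer infer the sensitive attribute from this feature alone.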
Methodology and Implementation
Algorithm Development
The project involved generalizing and abstracting research-specific code to work seamlessly within the AIF360 toolkit. This required redesigning the algorithms to follow standard practices for open-source tools, including:
- Using object-oriented principles for readability and maintainability.
- Standardizing inputs and outputs to align with AIF360’s API (see the skeleton sketched after this list).
- Writing comprehensive documentation to guide users through setup and application.
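As a rough illustration of those conventions, here is a hypothetical skeleton of my own (class and attribute names are placeholders, not the project's actual code) following the pattern AIF360 pre-processing algorithms use: subclass the toolkit's Transformer base class and expose fit/transform methods that consume and return AIF360 dataset objects.

```python
# Hypothetical AIF360-style pre-processing transformer skeleton.
from aif360.algorithms import Transformer


class ExampleRepair(Transformer):
    """Toy pre-processing transformer that follows AIF360's fit/transform API."""

    def __init__(self, unprivileged_groups, privileged_groups):
        super().__init__(unprivileged_groups=unprivileged_groups,
                         privileged_groups=privileged_groups)
        self.unprivileged_groups = unprivileged_groups
        self.privileged_groups = privileged_groups

    def fit(self, dataset):
        # Estimate whatever statistics the repair needs from the training data.
        self._feature_means = dataset.features.mean(axis=0)
        return self

    def transform(self, dataset):
        # Return a new dataset object; AIF360 transformers do not mutate inputs.
        repaired = dataset.copy(deepcopy=True)
        # ... apply the repair to repaired.features here ...
        return repaired
```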
Evaluation Metrics
To evaluate the effectiveness of these algorithms, the project focused on the following metrics (illustrated in a short sketch after this list):
- Independence: Measures the difference in the probability of favorable outcomes between privileged and unprivileged groups.
- Separation: Checks whether the repaired model’s error rates (e.g., false positive and false negative rates) differ between sensitive groups, i.e., whether predictions depend on the sensitive attribute once the true outcome is accounted for.
- Sufficiency: Checks whether a given prediction is equally reliable across sensitive groups, i.e., whether the true outcome depends on the sensitive attribute once the prediction is accounted for.
- Kullback-Leibler Divergence (KLD): Used to quantify the difference between distributions before and after applying the Distributional Repair algorithm.
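For concreteness, the sketch below (my own simplification, assuming binary labels, binary predictions, and a single binary sensitive attribute; it is not the project's evaluation code) shows how such quantities can be estimated:

```python
# Sketch of the four measures for labels y, predictions y_hat and a binary
# sensitive attribute s (1 = privileged group), using hypothetical data.
import numpy as np
from scipy.special import rel_entr

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
y_hat = rng.integers(0, 2, 1000)
s = rng.integers(0, 2, 1000)

def rate(values, mask):
    return values[mask].mean()

# Independence: difference in favorable-outcome rates between groups.
independence = rate(y_hat, s == 0) - rate(y_hat, s == 1)

# Separation: difference in false positive rates (conditioning on the true label).
separation = rate(y_hat, (s == 0) & (y == 0)) - rate(y_hat, (s == 1) & (y == 0))

# Sufficiency: difference in precision (conditioning on the prediction).
sufficiency = rate(y, (s == 0) & (y_hat == 1)) - rate(y, (s == 1) & (y_hat == 1))

# KL divergence between a feature's distribution before and after repair,
# estimated on a shared histogram support.
before = rng.normal(0.0, 1.0, 1000)
after = rng.normal(0.2, 1.0, 1000)
bins = np.histogram_bin_edges(np.concatenate([before, after]), bins=20)
p, _ = np.histogram(before, bins=bins, density=True)
q, _ = np.histogram(after, bins=bins, density=True)
p, q = p + 1e-12, q + 1e-12   # avoid zeros inside the logarithm
kld = rel_entr(p / p.sum(), q / q.sum()).sum()

print(independence, separation, sufficiency, kld)
```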
Evaluation and Results
The implemented algorithms were evaluated on multiple datasets, including the COMPAS dataset (criminal re-offense predictions) and the Adult dataset (income predictions); a brief sketch of loading these benchmarks with AIF360 follows the list of findings below. Key findings include:
- Subgroup and Instantaneous Fairness: These algorithms were found to effectively reduce disparities between subgroups, particularly when fairness varied over time. They were scalable and showed consistent results even with large datasets.
- Distributional Repair: The algorithm was shown to significantly improve conditional independence, especially on datasets with correlated sensitive and non-sensitive attributes. Performance was dependent on factors such as the size of the training dataset and the resolution of probability distribution supports.
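As a brief illustration (assuming AIF360's bundled loaders, which require the raw data files to be installed as described in the toolkit's documentation), both benchmarks can be loaded and split directly as AIF360 dataset objects:

```python
# Loading the two benchmark datasets with AIF360's built-in loaders.
from aif360.datasets import AdultDataset, CompasDataset

compas = CompasDataset()   # two-year recidivism data; 'sex' and 'race' are protected
adult = AdultDataset()     # census income data; 'race' and 'sex' are protected

# AIF360 datasets expose a split() helper that returns dataset objects,
# so fairness algorithms can be applied to train/test partitions directly.
compas_train, compas_test = compas.split([0.7], shuffle=True)
adult_train, adult_test = adult.split([0.7], shuffle=True)

print(compas_train.features.shape, adult_train.features.shape)
```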
Key Findings
- Scalability: The generalized algorithms maintained fairness even as dataset sizes increased.
- Flexibility: Users could choose between multiple solvers and configurations, making the tools adaptable to different scenarios.
- Improved Fairness Metrics: All three algorithms demonstrated improvements in fairness metrics, with minimal impact on model accuracy.
Future Scope
While the algorithms performed well, there are areas for further improvement:
- Scalability to Multiple Subgroups: Currently, the fairness algorithms are optimized for two subgroups at a time. Future work could aim to handle more complex scenarios with multiple overlapping subgroups.
- Automating Distributional Support Tuning: For the Distributional Repair algorithm, future research could focus on dynamically tuning the number of distributional supports to optimize performance across diverse datasets.
Conclusion
Through this project, I successfully integrated state-of-the-art fairness algorithms into the AIF360 toolkit. By adapting these research algorithms into accessible, generalized tools, the project contributes to the ongoing effort to make AI systems more fair and transparent.