Commit b70f7dc: Update README.md
shaycrk authored Mar 16, 2021 (1 parent: 3cb307c)
1 changed file: README.md (3 additions, 1 deletion)
# Fairness-Accuracy Trade-Offs in ML for Public Policy
This repository contains code relating to our ongoing work exploring the trade-offs between fairness and accuracy in machine learning models developed to support decision-making in public policy contexts.

For each context, modeling was performed with our open-source machine learning pipeline toolkit, [triage](https://github.com/dssg/triage). Although the data for several of these projects is confidential and not publicly available, this repository includes our `triage` configuration files (specifying features and model/hyperparameter grids) for all projects, as well as the code used for bias mitigation and analysis of trade-offs. The main functionality for bias mitigation is provided in `RecallAdjuster.py` (at the moment, this assumes model results are in the form of `triage` output), and analyses are generally in a set of `jupyter` notebooks in each project directory. The bias mitigation here extends methods we described recently at [FAT* 2020](https://arxiv.org/abs/2001.09233). Additionally, we recently developed a [tutorial](https://dssg.github.io/fairness_tutorial/) on improving machine learning fairness; a simplified application can be found in [this interactive colab notebook](https://colab.research.google.com/github/dssg/fairness_tutorial/blob/master/notebooks/bias_reduction.ipynb), which is a good starting point.

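The core idea behind the recall-adjustment approach can be illustrated with a small, self-contained sketch. Note that this is not the repository's actual `RecallAdjuster` API (which operates on `triage` output in a database); the function names and sample data below are hypothetical, chosen only to show the post-hoc strategy of picking group-specific score thresholds so that each group reaches roughly the same recall:

```python
# Illustrative sketch only (hypothetical names, not the repo's RecallAdjuster API):
# equalize recall across groups via group-specific score thresholds.

def recall_at_threshold(scores, labels, threshold):
    """Recall among true positives when flagging all scores >= threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(1 for s in positives if s >= threshold) / len(positives)

def equalize_recall(scores_a, labels_a, scores_b, labels_b, target_recall):
    """Find a per-group threshold so each group reaches ~target_recall."""
    def threshold_for(scores, labels):
        # Scan candidate thresholds from highest to lowest score until
        # the group's recall first meets the target.
        for t in sorted(set(scores), reverse=True):
            if recall_at_threshold(scores, labels, t) >= target_recall:
                return t
        return min(scores)

    return threshold_for(scores_a, labels_a), threshold_for(scores_b, labels_b)
```

In practice, the repository's adjustment works over `triage` model results and considers resource constraints (a fixed intervention list size) rather than a free-floating recall target, but the group-specific-threshold intuition is the same.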
Each project is described briefly below:

### Inmate Mental Health
The Inmate Mental Health project focuses on breaking the cycle of incarceration in Johnson County, KS, through proactive outreach by their Mental Health Center's Mobile Crisis Response Team to individuals with a history of incarceration, mental health need, and risk of returning to jail. Early results from this work were presented at [ACM COMPASS 2018](https://dl.acm.org/citation.cfm?id=3209869), and code for this analysis can be found in [code/joco](code/joco).