Usually, to assess bias in machine learning, a single feature, such as gender or race, is selected as the focus of the study. Well-known fairness metrics are then applied to measure discrimination against certain groups, such as females or black people. More recently, however, attention has turned to fairness at the subgroup level. This refers to the groupings formed when particular features are intersected, where, for example, black females could be one of the subgroups generated by intersecting gender and race. The aim is to expose biases that remain hidden when features are considered separately.
IMI MIRA Laura Hattam explores this further by reviewing previous studies that detail new intersectional fairness metrics, as well as repair methods that attempt to mitigate subgroup bias. She also analyses an example dataset from varying perspectives by choosing different features (gender and race), including an intersectional approach (gender + race). This demonstrates how fairness results can depend heavily upon the perspective you take (the features you pick).
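To give a rough sense of the idea (this is not the analysis from the article), the sketch below computes a simple measure, the rate of positive model decisions per group, on a small, deliberately contrived and entirely hypothetical dataset. Viewed through gender alone or race alone the decisions look balanced, but the intersectional gender + race view reveals a stark subgroup disparity. The column names and data are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical toy data: a model's approve/reject decisions plus two protected features.
df = pd.DataFrame({
    "gender":   ["female", "female", "female", "female", "male", "male", "male", "male"],
    "race":     ["black",  "black",  "white",  "white",  "black", "black", "white", "white"],
    "approved": [0, 0, 1, 1, 1, 1, 0, 0],
})

def positive_rate(data, by):
    """Share of positive decisions within each group defined by `by`."""
    return data.groupby(by)["approved"].mean()

# Single-feature views: each feature on its own shows a 0.5 positive rate for every group.
print(positive_rate(df, "gender"))
print(positive_rate(df, "race"))

# Intersectional view: the gender x race subgroups expose a disparity that was
# invisible above, e.g. black females receive no positive decisions at all.
print(positive_rate(df, ["gender", "race"]))
```

The same pattern is what intersectional fairness metrics are designed to detect: aggregate statistics over a single feature can average away harms concentrated in a specific subgroup.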
Read the full article here.
This work is part of an Innovate UK project with Dr Julian Padget and the company Etiq.