Issue #1 | Summer 2022
Fairness Metrics
Algorithms can discriminate against people based on attributes such as age, gender or skin color, for example when the data used to train the models contains a bias and thus reproduces social prejudices. To avoid such discriminatory effects, the fairness of models is measured and tested. The various statistical approaches used to do so are called fairness metrics. These metrics can be used, for example, to measure the likelihood of favorable decisions by the algorithm for groups with different demographic characteristics, such as age or income, or to test whether the accuracy of the model is the same for different subgroups (whether credit scores show a significant variation between males and females, for example).

The machine-learning library TensorFlow provides an extensive list of the different methods available for measuring fairness. There are, however, limits to the measurability of fairness, given that many of the influencing variables are difficult to quantify. That is why it is important to clearly define fairness and to designate groups worthy of protection based on protected attributes such as ethnicity, origin and skin color.
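The two checks mentioned above can be illustrated with a minimal sketch. It assumes NumPy arrays of binary ground-truth labels (`y_true`), binary model decisions (`y_pred`, where 1 is the favorable outcome) and a protected attribute (`group`); these names and the toy values are illustrative, not part of any particular library.

```python
import numpy as np

def selection_rate(y_pred, group, value):
    """Share of favorable (positive) decisions within one group."""
    mask = group == value
    return y_pred[mask].mean()

def group_accuracy(y_true, y_pred, group, value):
    """Prediction accuracy within one group."""
    mask = group == value
    return (y_true[mask] == y_pred[mask]).mean()

# Toy data: 1 = favorable decision (e.g. credit approved), two groups "f" and "m".
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 0])
group  = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])

# Demographic parity: do both groups receive favorable decisions at similar rates?
parity_gap = selection_rate(y_pred, group, "f") - selection_rate(y_pred, group, "m")

# Accuracy parity: is the model equally accurate for both groups?
accuracy_gap = group_accuracy(y_true, y_pred, group, "f") - group_accuracy(y_true, y_pred, group, "m")

print(f"Selection-rate gap (f - m): {parity_gap:+.2f}")
print(f"Accuracy gap (f - m):       {accuracy_gap:+.2f}")
```

A gap close to zero on both measures suggests the model treats the two groups similarly by these particular criteria; which metric is appropriate still depends on how fairness is defined for the application at hand.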