Fairness trade-offs in hiring: what people prefer and what engineers can build


Abstract

Human-centered AI must confront tensions between mutually incompatible fairness definitions when specifying the fairness requirements of algorithmic decision-making (ADM) systems. To investigate how people perceive these trade-offs and how their perceptions can guide engineering requirements, we distill the underlying principles of common fairness metrics into statements that people may or may not agree with. Using an illustrative dataset, we show how favored metrics can conflict in practice, underscoring the need to make trade-offs explicit and to resolve them deliberately. We design and evaluate a survey that elicits stakeholder preferences in a hiring scenario by mapping 12 statements to demographic parity, equal opportunity (TPR), predictive equality (FPR), predictive parity (PPV), fairness through unawareness, and individual fairness. Responses (N=51) indicate broad support for excluding sensitive attributes and for error-rate parity criteria (TPR/FPR), with contrasting views on demographic parity under unequal base rates. We contribute a requirements-elicitation approach that defines the 'fairness requirements' of an ADM system by mapping stakeholder preferences to concrete metrics, yielding a pragmatic set of recommended requirements, using our hiring scenario as a guiding example.
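The conflict the abstract describes can be made concrete with a small sketch. The code below is not from the paper; it is a minimal, hypothetical illustration of the four group metrics named above (selection rate for demographic parity, TPR for equal opportunity, FPR for predictive equality, PPV for predictive parity) on invented toy data. It shows that even a classifier that is perfect for both groups violates demographic parity whenever the groups' base rates of qualified applicants differ.

```python
def group_rates(y_true, y_pred):
    """Return (selection rate, TPR, FPR, PPV) for one group.
    Demographic parity compares selection rates across groups;
    equal opportunity compares TPRs; predictive equality compares
    FPRs; predictive parity compares PPVs."""
    sel = sum(y_pred) / len(y_pred)
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]   # truly qualified
    neg = [p for t, p in zip(y_true, y_pred) if t == 0]   # truly unqualified
    hired = [t for t, p in zip(y_true, y_pred) if p == 1]  # selected applicants
    tpr = sum(pos) / len(pos) if pos else 0.0
    fpr = sum(neg) / len(neg) if neg else 0.0
    ppv = sum(hired) / len(hired) if hired else 0.0
    return sel, tpr, fpr, ppv

# Toy data (hypothetical): group A has a higher base rate of qualified
# applicants (3/5) than group B (1/5); predictions are perfect for both.
a_true, a_pred = [1, 1, 1, 0, 0], [1, 1, 1, 0, 0]
b_true, b_pred = [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]

sel_a, tpr_a, fpr_a, ppv_a = group_rates(a_true, a_pred)
sel_b, tpr_b, fpr_b, ppv_b = group_rates(b_true, b_pred)

# Error-rate and calibration criteria are all satisfied (TPR, FPR, PPV
# equal across groups), yet selection rates differ: 0.6 vs 0.2, so
# demographic parity fails under unequal base rates.
print(sel_a, sel_b, tpr_a == tpr_b, fpr_a == fpr_b, ppv_a == ppv_b)
```

Under these assumptions, satisfying demographic parity would require either hiring unqualified applicants from group B (raising its FPR) or rejecting qualified applicants from group A (lowering its TPR), which is the kind of trade-off the survey asks stakeholders to weigh in on.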

Published in: Proceedings of the 2026 Conference on Human Centred Artificial Intelligence - Education and Practice, ACM Digital Library, pp. 27-33
Year: 2026
ISBN: 979-8-4007-2153-3
DOI: 10.1145/3777490.3777496
Language: English
