Beyond bias: algorithmic machines, discrimination law and the analogy trap

tags
Algorithmic Discrimination

Notes

‘traps’ that result from attempts to transpose discrimination laws to algorithmic machines and their biases.

NOTER_PAGE: (1 0.3502747252747253 . 0.5137777777777778)

‘Why hire a lawyer? I'll make myself one!’ And Trurl went home, threw six heaping teaspoons of transistors into a big pot, added again as many condensers and resistors, poured electrolyte over it, stirred well and covered tightly with a lid, then went to bed, and in three days the mixture had organized itself into a first-rate lawyer.—Stanisław Lem, The Cyberiad, 1965

NOTER_PAGE: (1 0.5728021978021979 . 0.11644444444444445)

Introduction

NOTER_PAGE: (1 0.6875 . 0.12177777777777779)

‘challenge of regulatory connection’. It pictures the law and law-making as slow, lagging behind, and barely able to catch up with fast-evolving technologies,

NOTER_PAGE: (1 0.7891483516483517 . 0.27111111111111114)

instrumentalist reading of the law as a technical tool engineered by ‘modest but expertly devoted technicians’ seeking to solve practical problems. This is particularly manifest in the ‘human-centric AI’ leitmotiv

NOTER_PAGE: (2 0.35302197802197804 . 0.24266666666666667)

implicit analogy between the ‘digital’ and the ‘human’ realms which provides justificatory force for treating them alike.

NOTER_PAGE: (2 0.41964285714285715 . 0.35200000000000004)
NOTER_PAGE: (3 0.1401098901098901 . 0.8382222222222223)

On the other hand, the (unspoken) value compact encapsulated in the human figure is transposed by analogy to the regulation of the digital realm.

NOTER_PAGE: (3 0.20947802197802198 . 0.12177777777777779)
NOTER_PAGE: (5 0.3695054945054945 . 0.5084444444444445)
NOTER_PAGE: (6 0.0995879120879121 . 0.12177777777777779)

two main routes can be taken to ‘bridge’ the normative interstices that emerge when new technologies are deployed.

NOTER_PAGE: (6 0.15521978021978022 . 0.5191111111111112)

first route: purposive interpretation of existing rules

NOTER_PAGE: (6 0.1936813186813187 . 0.14133333333333334)

second route involves adopting new regulations

NOTER_PAGE: (6 0.23832417582417584 . 0.17333333333333334)

The tech industry deploys powerful frames to influence how regulatory problems are constructed, perceived and addressed by regulators.

NOTER_PAGE: (6 0.3228021978021978 . 0.24266666666666667)

framing technology as the relevant regulatory site amounts to a first analogy trap

NOTER_PAGE: (7 0.09203296703296704 . 0.11733333333333335)

A. Technical frames: the problem with centering algorithmic systems

NOTER_PAGE: (7 0.1730769230769231 . 0.11822222222222223)
NOTER_PAGE: (7 0.220467032967033 . 0.16)

Critical accounts of anti-discrimination law deplore its overemphasis on perpetrators, for example single decision-makers.

NOTER_PAGE: (7 0.3042582417582418 . 0.32711111111111113)

the ‘black box boundary’ is drawn around the technical element alone, excluding its context of intervention and its interaction with organisational processes, cognitive schemes and ideological frameworks.

NOTER_PAGE: (7 0.45260989010989017 . 0.3786666666666667)

Discrimination that can neither be attributed to bad tech nor to a given perpetrator risks falling into a liability gap.

NOTER_PAGE: (7 0.5418956043956045 . 0.6426666666666667)

‘sources of discrimination that cannot be traced to discrete bad mechanisms are bracketed, dismissed as someone else’s problem or, worse, couched as untouchable facts of history’.

NOTER_PAGE: (8 0.09684065934065934 . 0.336)

"what was once viewed as a misfortune is understood as an injustice"

discrimination is not a product of biased algorithms alone, but is rather co-produced at the intersection of epistemic, social and technical practices,

NOTER_PAGE: (8 0.3935439560439561 . 0.1857777777777778)

algorithms become viewed as one of the ingredients of discrimination alongside – but not separate from – human decisions, organisational processes and value frameworks.

NOTER_PAGE: (8 0.6607142857142858 . 0.11911111111111111)

developers argue that the system makes ‘realistic predictions for job seekers belonging to disadvantaged groups’ which reflect ‘the harsh reality’

NOTER_PAGE: (9 0.26717032967032966 . 0.256)

Analogies between human perpetrators of discrimination and technical systems cannot adequately account for such entangled agency.

NOTER_PAGE: (10 0.1201923076923077 . 0.5146666666666667)

the profiling system had been branded as a way to ensure the objectivity of the decision-making process. Yet the developers’ defence against accusations of bias was to present the system as a simple ‘measure of “technical support” for the AMS workers’, a ‘second opinion’ and a ‘mere add-on’

NOTER_PAGE: (10 0.18475274725274726 . 0.7866666666666667)

liability mechanisms in discrimination law must be revisited to reflect the mechanics of co-production of algorithmic discrimination within socio-technical assemblages.

NOTER_PAGE: (10 0.37980769230769235 . 0.21511111111111111)

strategic approximation of responsibility for discrimination

NOTER_PAGE: (10 0.4608516483516484 . 0.12088888888888889)

biased algorithmic recommendations should arguably be conceptualised as ‘instructions to discriminate’.

NOTER_PAGE: (10 0.5899725274725275 . 0.19733333333333333)
NOTER_PAGE: (11 0.3125 . 0.12000000000000001)

‘bias’ has come to be perceived as the relevant site for regulatory intervention.

NOTER_PAGE: (11 0.3427197802197802 . 0.5671111111111111)

algorithmic discrimination is framed as ‘mere accidents that are “caused,” if at all, by biases that “sneak in” to the system’.

NOTER_PAGE: (12 0.16346153846153846 . 0.36533333333333334)

‘glitch’ narrative depicts bias as an exogenous and occasional ‘error’ that accidentally enters algorithmic systems.

NOTER_PAGE: (12 0.3399725274725275 . 0.11822222222222223)

Designating bias as a regulatory object encourages technical fixes such as bias mitigation and debiasing strategies.

NOTER_PAGE: (12 0.5254120879120879 . 0.27555555555555555)
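
My own toy sketch of what such a ‘technical fix’ looks like in practice (not taken from the article): measure a demographic parity gap, then apply Kamiran & Calders-style reweighing. The dataset, group labels and numbers are entirely hypothetical.

```python
# Minimal sketch of a "technical fix": measure demographic parity and
# reweigh examples so each (group, outcome) cell counts as if independent.
# Toy records: (protected_group, positive_outcome); all values hypothetical.
from collections import Counter

records = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def selection_rate(group):
    outcomes = [sel for g, sel in records if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: gap in positive-outcome rates between groups.
dp_gap = selection_rate("A") - selection_rate("B")
print(f"demographic parity difference: {dp_gap:.2f}")

# Reweighing: weight each (group, outcome) cell by expected / observed frequency,
# so the training data behave as if group and outcome were independent.
n = len(records)
group_counts = Counter(g for g, _ in records)
label_counts = Counter(sel for _, sel in records)
cell_counts = Counter(records)

weights = {
    (g, sel): (group_counts[g] * label_counts[sel]) / (n * cell_counts[(g, sel)])
    for (g, sel) in cell_counts
}
print(weights)  # these weights would then be passed to a model's training loop
```

This is exactly the kind of input/output-level intervention the article criticises for ‘shifting political problems into the domain of design’: the gap is closed on paper without touching the surrounding socio-technical system.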

‘locat[e] the problems and solutions in algorithmic inputs and outputs, shifting political problems into the domain of design’

NOTER_PAGE: (13 0.1414835164835165 . 0.6017777777777779)

‘concentrat[ing] power in the hands of service providers, giving them (and not lawmakers) the discretion to decide what counts as discrimination, when it occurs and how to address it’.

NOTER_PAGE: (13 0.18406593406593408 . 0.12088888888888889)

Bias itself only leads to discrimination because it is inscribed in, and interacts with, existing vectors of inequality. Hence, addressing algorithmic discrimination requires addressing the whole socio-technical system,

NOTER_PAGE: (13 0.3317307692307693 . 0.34044444444444444)

emphasis on the need for clean and representative data tends to obfuscate more systemic causes of algorithmic discrimination.

NOTER_PAGE: (13 0.5027472527472527 . 0.6062222222222222)

‘if the data represent something wrong or biased, what should they represent instead?’

NOTER_PAGE: (13 0.6531593406593407 . 0.7857777777777778)

‘the alternative to biased data from the current society is not neutral data but data based on a political decision on what society should be like’

NOTER_PAGE: (14 0.10164835164835166 . 0.4186666666666667)
NOTER_PAGE: (14 0.31112637362637363 . 0.12000000000000001)

bequeaths responsibility for biased data to society at large.

NOTER_PAGE: (14 0.6517857142857143 . 0.19644444444444445)
NOTER_PAGE: (15 0.09615384615384616 . 0.11911111111111111)

Framing the problem of algorithmic discrimination as technical feeds into what Kling calls ‘reinforcement politics’ by empowering tech experts, providers and users of AI to define and solve it.

NOTER_PAGE: (15 0.14972527472527475 . 0.12000000000000001)
NOTER_PAGE: (15 0.4711538461538462 . 0.5226666666666667)

a socio-technical reading of algorithmic discrimination gives visibility to the ways in which bias is enacted through specific usages of technologies,

NOTER_PAGE: (15 0.5576923076923077 . 0.216)

The ex post individual redress system in anti-discrimination law, which places the burden of proof and redress on the shoulders of individual victims, should be complemented by public supervision, collective action and a low threshold for triggering rebuttable presumptions of algorithmic discrimination.

NOTER_PAGE: (16 0.16277472527472528 . 0.2995555555555556)
NOTER_PAGE: (16 0.41483516483516486 . 0.11822222222222223)
NOTER_PAGE: (16 0.4869505494505495 . 0.5884444444444444)

the clash between algorithmic subjects, often described as clusters and profiles predicated on big data analytics, and the functional categorisations operated by non-discrimination law to grant protection, for example on the basis of gender or ethnicity.

NOTER_PAGE: (16 0.5446428571428572 . 0.34844444444444445)
NOTER_PAGE: (16 0.7740384615384616 . 0.11733333333333335)

Non-discrimination law functionally defines two main units of protection: the individual victim of discrimination and the protected group.

NOTER_PAGE: (16 0.8241758241758242 . 0.12444444444444445)

the notion of protected ‘ground’,

NOTER_PAGE: (16 0.867445054945055 . 0.26933333333333337)

These protected categories serve as ‘proxies’ for vectors of disadvantage and inequality that society perceives as morally unfair

NOTER_PAGE: (17 0.2884615384615385 . 0.3608888888888889)

non-discrimination law enacts specific regulatory subjects through categorisation operations that are premised on the relative stability, salience and identifiability of given social groups.

NOTER_PAGE: (17 0.4587912087912088 . 0.26222222222222225)

both fundamental units of subjectivity – the autonomous individual and the socially identifiable group – fade away in the face of algorithmic rationality.

NOTER_PAGE: (18 0.10645604395604397 . 0.15733333333333335)
NOTER_PAGE: (18 0.16071428571428573 . 0.4657777777777778)
NOTER_PAGE: (18 0.45741758241758246 . 0.6311111111111112)

Stability of legally protected grounds vs volatility of algorithmic clusters

NOTER_PAGE: (18 0.6092032967032968 . 0.11822222222222223)

Data subjects are clustered in provisional and unstable aggregates that are recomposed as data fluctuates and technological deployment shifts.

NOTER_PAGE: (18 0.6401098901098902 . 0.11822222222222223)
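
A toy illustration of this volatility (mine, not the article’s): clusters produced by an off-the-shelf k-means run are recomposed once new records arrive, so the ‘group’ a data subject belongs to is an artefact of the moment of computation. Data and parameters are arbitrary.

```python
# Toy illustration of cluster volatility: the "group" a data subject falls into
# changes once new data arrive and the clustering is recomputed.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
people = rng.normal(size=(200, 2))   # 200 data subjects, 2 behavioural features

labels_before = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(people)

# New records arrive (the data "fluctuate"); the same people are re-clustered.
newcomers = rng.normal(loc=1.5, size=(60, 2))
labels_after = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    np.vstack([people, newcomers])
)[:200]

# Cluster IDs are arbitrary, so compare the composition of groupings instead:
# how many pairs that shared a cluster before still share one afterwards.
same_before = labels_before[:, None] == labels_before[None, :]
same_after = labels_after[:, None] == labels_after[None, :]
stability = (same_before & same_after).sum() / same_before.sum()
print(f"share of co-clustered pairs preserved: {stability:.2f}")
```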

Intelligibility of protected categories vs. correlational algorithmic clustering

NOTER_PAGE: (20 0.24862637362637366 . 0.12266666666666667)

The normative implications of algorithmic clusters can only be comprehended at the complex intersection of machine processes, social practices, human cognition and value frameworks.

NOTER_PAGE: (20 0.5178571428571429 . 0.4168888888888889)

Algorithmic clustering creates ‘non-publics’ or ‘phantom publics’ that undermine key premises for the application of non-discrimination law, namely visibility, mutual recognition and collective action.

NOTER_PAGE: (21 0.2046703296703297 . 0.14044444444444446)

unintelligibility of algorithmic groupings also undermines the conditions for mutual recognition and collective action upon which the actionability of non-discrimination norms is premised.

NOTER_PAGE: (21 0.45329670329670335 . 0.11733333333333335)

The lack of mutual recognition and basis for collective action undermines or delays accountability.

NOTER_PAGE: (22 0.12156593406593408 . 0.5244444444444445)

Social salience of protected grounds vs contingency of algorithmic subjectivity

NOTER_PAGE: (22 0.282967032967033 . 0.11377777777777778)
NOTER_PAGE: (22 0.32554945054945056 . 0.17333333333333334)

The raison d’être of algorithmic groupings is their exploitability and actionability for decision-making, which themselves depend upon economic rationality and profit-making logics.

NOTER_PAGE: (22 0.47321428571428575 . 0.6773333333333333)

collective algorithmic subjects do not exist outside of specific assemblages and cannot be comprehended when abstracted from the purpose of these assemblages.

NOTER_PAGE: (22 0.5130494505494506 . 0.5466666666666667)

B. The individual vs. the data user: the fading of autonomy and dignity

NOTER_PAGE: (24 0.17857142857142858 . 0.12000000000000001)

Individual self-definition vs algorithmic stereotyping

NOTER_PAGE: (24 0.4745879120879121 . 0.12088888888888889)

Algorithmic subject-making strips individuals of the power of definition over their identity.

NOTER_PAGE: (24 0.49244505494505497 . 0.12444444444444445)
NOTER_PAGE: (24 0.565934065934066 . 0.1928888888888889)

protect individuals’ dignity understood as the right to identity-building and singularity.

NOTER_PAGE: (25 0.1201923076923077 . 0.41600000000000004)

Algorithmic decision-making systems produce pattern-based individualised or personalised – as opposed to individual or personal – decisions.

NOTER_PAGE: (25 0.4800824175824176 . 0.3466666666666667)

Individual autonomy vs algorithmic opacity

NOTER_PAGE: (26 0.09752747252747254 . 0.1128888888888889)

III. Displacing modes of reasoning: from comparison to ground truth and from rights to risks

NOTER_PAGE: (27 0.18475274725274726 . 0.12355555555555556)

A. Comparative heuristics and the ‘ground truth’ question

NOTER_PAGE: (27 0.5453296703296704 . 0.12088888888888889)

Causal Inference

Non-discrimination law hinges on the comparability of applicants’ situations with those of others who do not belong to protected categories.

NOTER_PAGE: (27 0.576923076923077 . 0.12622222222222224)

but for a protected ground, two people or groups would have been treated the same.

NOTER_PAGE: (27 0.6167582417582418 . 0.2008888888888889)
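
The ‘but for’ heuristic can be written down as a counterfactual flip: hold everything else constant, change only the protected ground, and see whether the outcome changes. A minimal sketch under my own assumptions (the scoring function and attribute names are hypothetical, not the article’s):

```python
# "But for" test as a counterfactual flip: identical applicant, only the
# protected attribute changed. Function and attribute names are hypothetical.
def score(applicant: dict) -> float:
    # Stand-in for an opaque model; here it (improperly) keys on gender.
    base = 0.1 * applicant["years_experience"]
    return base + (0.2 if applicant["gender"] == "m" else 0.0)

def but_for_gap(applicant: dict, ground: str, counterfactual_value) -> float:
    """Difference in outcome attributable to the protected ground alone."""
    twin = dict(applicant, **{ground: counterfactual_value})
    return score(applicant) - score(twin)

applicant = {"years_experience": 5, "gender": "m"}
print(but_for_gap(applicant, "gender", "f"))  # 0.2: treatment differs only by ground
```

The article’s point is precisely that profiling makes such a stable ‘twin’ hard to construct: when decisions turn on opaque, shifting correlations rather than legible grounds, there is no obvious attribute to flip and no comparator to flip it against.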

The illegibility of algorithmic subjectivity saps individuals’ ability to compare themselves to others in order to activate non-discrimination law and, potentially, judges’ ability to assess algorithmic discrimination.

NOTER_PAGE: (28 0.24862637362637366 . 0.8391111111111111)

the way in which meaning is produced is entirely delegated to the technical tool according to invisible logics. This inferential mode of knowledge production bypasses nomenclatures, categorisations and hypotheses.

NOTER_PAGE: (28 0.31043956043956045 . 0.1448888888888889)

‘smart digital technologies make pattern- based, personalized decisions rather than principled, generalizable ones, and they don’t give reasons for – or even draw attention to – the choices they make’.

NOTER_PAGE: (28 0.4402472527472528 . 0.5617777777777778)

Profiling and inferential predictions also take away the possibility to establish a stable counterfactual ‘other’.

NOTER_PAGE: (28 0.6991758241758242 . 0.112)

‘there can only be bias if algorithmic rankings can be contrasted with a good or fair representation’.

NOTER_PAGE: (28 0.7376373626373627 . 0.6871111111111111)

B. The proportionality test: from a rights-based to a risk-based assessment

NOTER_PAGE: (29 0.4436813186813187 . 0.11822222222222223)

conflation of two very different understandings of the notion of proportionality.

NOTER_PAGE: (29 0.4951923076923077 . 0.5102222222222222)

Indirect discrimination, by contrast to direct discrimination, can be justified if the incriminated ‘provision, criterion or practice is objectively justified by a legitimate aim’

NOTER_PAGE: (29 0.5673076923076924 . 0.12000000000000001)

prima facie discrimination can be considered proportionate

NOTER_PAGE: (30 0.0995879120879121 . 0.3031111111111111)

discrimination that might not give rise to exclusion from a tangible good or service in a given moment, but the accumulation of which over time might severely diminish a subject’s autonomy, dignity and quality of life.

NOTER_PAGE: (30 0.27472527472527475 . 0.12266666666666667)

Even though the AI Act proclaims complementarity with fundamental rights legislation, the semantics of risk, health and safety, and fundamental rights make for an uneasy blend.

NOTER_PAGE: (31 0.47733516483516486 . 0.5262222222222223)

Determining the threshold for ‘acceptable’ ‘residual risks’ is a highly normative task that seems to lie almost entirely in the hands of providers and users.

NOTER_PAGE: (31 0.6092032967032968 . 0.41422222222222227)

The risk balancing operated by the EU AI Act is in fact alien to the very philosophy and rationale underpinning fundamental rights law.

NOTER_PAGE: (32 0.10027472527472528 . 0.18755555555555556)

strict judicial review that is at odds with the portrayal of fundamental rights as mere ‘interests’ in the balance of risks.

NOTER_PAGE: (32 0.271978021978022 . 0.6888888888888889)

Conclusion

NOTER_PAGE: (33 0.1881868131868132 . 0.12000000000000001)