The error in predictive justice systems. Challenges for justice, freedom, and human-centrism under EU law
Abstract
The increasing integration of artificial intelligence into the administration
of justice – although promoted as a means to enhance efficiency, in line with Article
6 of the European Convention on Human Rights – raises significant legal and ethical
concerns. Drawing on Robert Alexy’s theory of the claim to correctness and John
Rawls’s concept of justice, the article questions the ethical legitimacy of such systems
– characterized by an inherent margin of error – in relation to the judicial function, as
well as their compatibility with the fundamental principles affirmed in the AI Act.
Indeed, even when AI is used to support rather than replace human decision-making,
the influence of algorithmic recommendations may lead to cognitive biases and
epistemic deference, thereby conflicting with human freedom, autonomy, and the
principle of human-centrism. The article concludes by calling for the adoption of
robust safeguards to ensure that the use of AI in the justice system does not
compromise the fundamental values of fairness, freedom, and human dignity.