Algorithmic Justice and AI-Mediated Harm: Rethinking Crime, Control, and Responsibility in the Age of Intelligent Systems

Abstract

This article examines the growing role of artificial intelligence (AI) in shaping crime, harm, and criminal justice decision-making. While existing criminological research has explored cybercrime and algorithmic governance, these approaches remain fragmented and insufficient to capture the broader implications of AI. This article introduces the concept of algorithmic justice and argues that AI transforms three core dimensions of criminology: the nature of offending, the production of harm, and the operation of justice. Drawing on zemiology, procedural justice theory, and critical criminology, the article analyses AI as a tool of crime, a system of control, and a generator of social harm. Empirical illustrations, including predictive policing systems, algorithmic risk assessment tools, and AI-enabled coercive control, demonstrate how these transformations manifest in practice. The article concludes that criminology must move beyond human-centred frameworks and develop new theoretical models capable of addressing hybrid human–machine systems of governance and harm.

Criminology has historically evolved in response to social, economic, and technological change. From industrialisation to globalisation, shifts in social organisation have required corresponding developments in criminological theory. In the twenty-first century, artificial intelligence (AI) represents a transformation of comparable magnitude. However, unlike earlier technologies, AI introduces systems capable of autonomous or semi-autonomous decision-making, thereby challenging foundational assumptions about agency, responsibility, and control.

Existing criminological approaches, including cyber criminology and digital criminology, have begun to address technologically mediated offending (Wall, 2007; Yar, 2013). Yet these frameworks remain largely focused on human actors using digital tools. They do not adequately conceptualise AI systems as participants in the production of harm, nor do they sufficiently interrogate the implications of algorithmic decision-making for justice and legitimacy.

This article addresses this gap by proposing the concept of algorithmic justice and advancing a new analytical framework for understanding AI within criminology. It argues that AI transforms three core dimensions of the discipline: the nature of crime, the production of harm, and the operation of justice. In doing so, it positions AI not merely as a tool, but as a structural force reshaping systems of control and social ordering. Existing studies address digital offending and algorithmic governance in isolation; no integrated criminological framework yet conceptualises AI as simultaneously a tool of crime, a system of control, and a generator of social harm.

Predictive Policing and Feedback Loops

Empirical concerns regarding algorithmic policing are illustrated by predictive policing systems such as those trialled in the United States and the United Kingdom. Research by Lum and Isaac (2016) demonstrated that predictive models trained on historical arrest data disproportionately directed police resources toward minority communities, not necessarily because of higher offending rates, but due to prior patterns of enforcement. This creates a self-reinforcing feedback loop: increased police presence leads to more recorded incidents, which in turn justifies further targeting. The system therefore risks shifting policing from responding to crime to reproducing patterns of surveillance.
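The runaway dynamic described above can be made concrete with a deliberately simplified simulation. In the sketch below, two districts have identical true offending rates, but one begins with more recorded incidents because of heavier past enforcement; each period, the patrol is sent to the district with the larger recorded count, and offences are far more likely to be recorded where a patrol is present. The district names, rates, and allocation rule are illustrative assumptions, not features of any deployed system.

import random

random.seed(0)

TRUE_RATE = 0.5     # identical true offending probability per period in each district
REPORT_RATE = 0.1   # chance an offence is recorded when no patrol is present

# District A starts with more recorded incidents due to heavier past enforcement.
recorded = {"A": 5, "B": 3}

for period in range(50):
    # Hotspot targeting: send the patrol to the district with more recorded incidents.
    target = max(recorded, key=recorded.get)
    for district in recorded:
        offence_occurred = random.random() < TRUE_RATE
        # Offences are almost always recorded where the patrol is, rarely elsewhere.
        observed = random.random() < (1.0 if district == target else REPORT_RATE)
        if offence_occurred and observed:
            recorded[district] += 1

print(recorded)  # A's recorded count outgrows B's despite identical true rates

Because recorded incidents, not true offending, drive the allocation, the initial disparity is locked in and widens over time: the data the system learns from are a product of where it chose to look.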

This example highlights how AI does not merely reflect reality but actively constructs it through data-driven governance. In the United Kingdom, the increasing deployment of AI-enabled technologies such as Live Facial Recognition by the Metropolitan Police has generated significant debate regarding legality and proportionality. Reports from oversight bodies, including the Information Commissioner's Office, have raised concerns about data protection, accuracy, and the potential for disproportionate targeting of minority populations. Similarly, the widespread use of Automated Number Plate Recognition (ANPR) systems reflects a broader shift towards continuous surveillance, embedding algorithmic monitoring within everyday policing practices.

Zemiological Perspectives

Zemiology, or the study of social harm, provides a critical lens through which to examine the broader implications of AI (Hillyard and Tombs, 2004). Many harms associated with AI do not fit within existing legal definitions of crime but nonetheless have significant social consequences. These include algorithmic discrimination in employment and housing, the amplification of misinformation, and the erosion of privacy.

From a zemiological perspective, the focus shifts from individual culpability to structural conditions and systemic outcomes. AI systems, often developed by powerful corporate actors, can produce widespread harm without clear intent or accountability. This challenges traditional distinctions between crime and harm and highlights the limitations of legal frameworks in addressing complex, technologically mediated issues. Furthermore, the integration of AI into everyday life creates new forms of vulnerability. Individuals may be subject to decisions that affect their opportunities and life chances without understanding how those decisions are made. This raises concerns about power, inequality, and the distribution of risk in digital societies.

The Transformation of Domestic Abuse

The integration of artificial intelligence into everyday life has profound implications for the nature and dynamics of domestic abuse, particularly in relation to coercive control. Traditionally conceptualised as a pattern of behaviours aimed at domination and entrapment, coercive control extends beyond physical violence to encompass psychological manipulation, surveillance, and the restriction of autonomy (Stark, 2007). While digital technologies have already transformed the landscape of abuse (Dragiewicz et al., 2018), AI introduces a qualitatively new dimension by enabling automation, scalability, and hyper-personalisation in the exercise of control.

At its core, coercive control operates through the regulation of a victim’s everyday life, often by creating conditions of dependency, fear, and isolation. AI-enhanced systems amplify these dynamics by embedding control within technological infrastructures that are continuous, adaptive, and difficult to detect. Smart home devices, for example, can be manipulated to monitor movement, control environmental conditions, and disrupt daily routines. Offenders may remotely adjust heating, lighting, or security systems, creating a pervasive sense of instability and surveillance (Harris and Woodlock, 2019). These practices extend control beyond physical presence, transforming abuse into a form of ambient domination embedded within the domestic environment.

From a Foucauldian perspective, these developments can be understood as an intensification of disciplinary power, in which control is exercised through subtle, continuous mechanisms rather than overt force (Foucault, 1977). AI-enabled domestic abuse reflects a shift towards micro-regulation of behaviour, where victims internalise surveillance and adjust their actions accordingly. The home, traditionally viewed as a private space, becomes a site of technologically mediated governance, blurring the boundary between personal and institutional forms of control.

AI further transforms coercive control through its capacity for psychological manipulation. Technologies such as deepfake generation and voice cloning enable offenders to create highly convincing false content, including non-consensual intimate images or fabricated communications. These tools can be used to blackmail, discredit, or socially isolate victims, extending abuse into digital and social domains. Unlike earlier forms of image-based abuse, AI-generated content can be produced rapidly and at scale, increasing both the reach and intensity of harm.

This form of abuse aligns with emerging discussions of technology-facilitated coercion, in which control is exercised through digital means rather than physical force (Dragiewicz et al., 2018). However, AI introduces a new level of sophistication by enabling adaptive and responsive manipulation, where systems can tailor outputs based on available data about the victim. This raises significant concerns regarding autonomy and consent, as individuals may be subjected to forms of influence that are difficult to recognise or resist.

The use of surveillance technologies also plays a central role in AI-mediated coercive control. Stalkerware, GPS tracking, and data monitoring tools allow offenders to track victims’ movements, communications, and behaviours in real time. When combined with AI-driven analytics, these systems can generate predictive insights into routines and patterns, further enhancing control. This represents a shift from episodic monitoring to continuous behavioural profiling, aligning with broader trends in algorithmic surveillance (Lyon, 2018).

From a criminological perspective, these developments challenge existing frameworks for understanding domestic abuse. Legal definitions often rely on identifiable acts or incidents, whereas AI-mediated coercive control operates through diffuse, ongoing processes that may not be easily captured within traditional evidentiary standards. The UK offence of coercive and controlling behaviour under the Serious Crime Act 2015 represents a significant step in recognising non-physical abuse, yet it remains limited in addressing technologically embedded forms of harm that are difficult to prove or attribute.

Moreover, the integration of AI complicates questions of responsibility and accountability. As with other forms of AI-mediated harm, the role of technology introduces a degree of separation between the offender and the outcome. While the perpetrator initiates the use of technology, the scale and form of harm may be shaped by the capabilities of the system itself. This raises important questions regarding intent, foreseeability, and culpability, particularly in cases where harmful outputs are generated or amplified by automated processes.

The gendered nature of domestic abuse must also be considered within this context. Feminist criminology has long highlighted how patterns of coercive control are embedded within broader structures of gender inequality (Walklate, 2012). AI-mediated abuse risks reinforcing these dynamics by providing new tools for domination and control, often disproportionately affecting women. At the same time, the digitalisation of abuse may obscure its gendered dimensions, framing it as a technological issue rather than a manifestation of structural inequality.

From a zemiological perspective, AI-mediated coercive control can be understood as a form of social harm that extends beyond legal definitions of crime (Hillyard and Tombs, 2004). The psychological, emotional, and social impacts of such abuse are significant, yet they may not always result in formal criminal charges or recognition. This highlights the limitations of legal frameworks in capturing the full scope of harm associated with emerging technologies.

To address these challenges, there is a need for a more integrated approach that combines legal reform, technological regulation, and victim support. This includes recognising AI-mediated abuse within existing legal categories, developing tools for detecting and evidencing digital coercion, and ensuring that support services are equipped to respond to technologically facilitated harm. It also requires a broader shift in criminological thinking, moving beyond incident-based models of crime towards an understanding of abuse as a continuous and systemically embedded process.

Ultimately, AI does not create coercive control, but it fundamentally transforms its operation. By embedding control within technological systems, it enables new forms of domination that are persistent, adaptive, and difficult to resist. As such, AI-mediated coercive control represents a critical area for criminological inquiry, demanding both theoretical innovation and practical intervention.

AI as a Tool of Crime

Artificial intelligence has significantly enhanced the capacity of offenders to commit crime. While technological innovation has long been associated with criminal adaptation (Clarke, 1997), AI introduces new dimensions of scalability, automation, and realism. Deepfake technologies, for instance, enable the creation of highly convincing synthetic media that can be used for fraud, extortion, and reputational harm. Similarly, AI-driven phishing attacks utilise large datasets to produce personalised communications, increasing their success rates (Levi, 2017).

These developments challenge rational choice perspectives that emphasise effort and risk (Cornish and Clarke, 1986). AI reduces both the effort required to commit offences and the visibility of offenders, complicating detection and attribution. Offending becomes less episodic and more continuous, with automated systems capable of operating at scale across multiple jurisdictions.

Moreover, AI blurs the boundary between offender and tool. When systems autonomously generate content or execute actions, the locus of control becomes diffused. This raises questions regarding intentionality and culpability, which are central to criminological and legal frameworks.

AI as a System of Control

The incorporation of artificial intelligence into policing and criminal justice systems marks a significant transformation in the nature of social control. Traditionally, policing has been understood as a combination of legal authority, discretionary judgement, and institutional practice. However, the integration of AI introduces a shift towards data-driven, predictive, and algorithmically mediated governance, fundamentally altering how decisions are made and justified.

At the centre of this transformation is the emergence of what may be termed algorithmic justice, a system in which decision-making processes are increasingly shaped, guided, or constrained by computational models. These systems are often deployed under the assumption of neutrality and objectivity, with proponents arguing that data-driven approaches reduce human bias and improve efficiency (Ferguson, 2017). However, this claim has been widely contested within critical scholarship, which highlights the ways in which algorithmic systems can reproduce and amplify existing social inequalities (O’Neil, 2016; Eubanks, 2018).

Predictive policing provides a clear illustration of this shift. By analysing historical crime data, algorithms generate forecasts of where crime is likely to occur, thereby directing police resources towards specific locations (Lum and Isaac, 2016). While this approach appears rational, it relies on datasets that are themselves shaped by prior policing practices. As a result, predictive systems often reinforce patterns of over-policing in already marginalised communities, creating a feedback loop in which surveillance generates data that justifies further surveillance. This process reflects a transition from reactive enforcement to anticipatory governance, where intervention is based on predicted risk rather than observed behaviour.

From a Foucauldian perspective, this transformation can be understood as an extension of disciplinary and biopolitical power (Foucault, 1977; Foucault, 1978). AI systems enable the continuous monitoring, classification, and regulation of populations, shifting the focus from individual acts to patterns of behaviour and risk. Individuals are increasingly governed not as legal subjects but as data profiles, evaluated according to probabilistic assessments rather than concrete actions. This aligns with what has been described as algorithmic governmentality, in which power operates through the management of data and the structuring of possibilities (Rouvroy and Berns, 2013).

The use of facial recognition technologies further exemplifies this shift. In jurisdictions such as the United Kingdom, deployments by police forces have raised concerns regarding accuracy, bias, and proportionality. Studies have demonstrated that such systems may exhibit higher error rates for certain demographic groups, particularly racial minorities, thereby introducing new forms of inequality into policing practices (Garvie, Bedoya and Frankle, 2016). The deployment of these technologies reflects a broader move towards population-level surveillance, in which individuals are subject to continuous monitoring regardless of suspicion.

Algorithmic risk assessment tools, such as COMPAS, extend this logic into judicial decision-making. By assigning risk scores to individuals based on statistical models, these systems influence decisions regarding bail, sentencing, and parole. While presented as objective, such tools embed normative assumptions about risk and responsibility within their design, often without transparency or accountability (Angwin et al., 2016). This raises significant concerns regarding due process, as individuals may be subject to decisions they cannot fully understand or challenge.

The rise of algorithmic justice also has profound implications for police discretion. Discretion has traditionally been a defining feature of policing, allowing officers to interpret situations and exercise judgement. However, the increasing reliance on algorithmic outputs risks constraining this discretion, as officers may feel compelled to follow data-driven recommendations. This creates a form of “soft determinism”, in which algorithmic authority subtly shapes human decision-making.

From the perspective of procedural justice, these developments pose significant challenges. Legitimacy is closely linked to perceptions of fairness, transparency, and accountability (Tyler, 2006). Algorithmic systems, particularly those that operate as “black boxes,” undermine these principles by obscuring the basis of decisions and limiting opportunities for contestation (Pasquale, 2015). As a result, individuals may perceive algorithmic decisions as arbitrary or unjust, even when they are statistically grounded.

Moreover, the integration of AI into policing and justice systems reflects broader shifts in governance associated with neoliberal rationalities, where efficiency, risk management, and cost-effectiveness become central priorities (O’Malley, 2010). AI systems align with these priorities by offering scalable and standardised solutions, but they also risk reducing complex social issues to quantifiable variables. This may lead to an over-reliance on technical solutions at the expense of social and structural considerations.

From a zemiological perspective, the harms associated with algorithmic justice extend beyond individual cases to broader patterns of inequality and exclusion (Hillyard and Tombs, 2004). The cumulative effects of biased data, opaque decision-making, and expanded surveillance can produce significant social harm, particularly for already disadvantaged groups. These harms may not always be recognised within legal frameworks, highlighting the need for a more expansive understanding of justice.

Ultimately, algorithmic justice represents not merely a technological development but a transformation in the logic of governance. It shifts the basis of decision-making from human judgement to statistical inference, from accountability to opacity, and from reactive enforcement to predictive control. As such, it demands critical scrutiny within criminology, particularly in relation to questions of power, legitimacy, and social harm.

COMPAS and Algorithmic Sentencing

One of the most widely cited examples of algorithmic decision-making in criminal justice is the COMPAS risk assessment tool used in the United States. Investigations by ProPublica found that the system exhibited racial bias, with Black defendants more likely to be incorrectly classified as high risk (Angwin et al., 2016).

Although the developers disputed these findings, the case raised fundamental questions regarding:

Transparency (the algorithm was proprietary and not publicly scrutinised)

Fairness (differential error rates across groups)

Accountability (unclear responsibility for outcomes)

From a procedural justice perspective, such systems risk undermining legitimacy by removing the individual’s ability to understand or challenge decisions affecting their liberty.
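The notion of differential error rates can be made concrete with a minimal sketch of how such disparities are measured. The records below are invented purely for illustration; they are not the ProPublica data, and the group labels are placeholders.

# Hypothetical (group, predicted_high_risk, reoffended) records, invented for
# illustration only; not the ProPublica dataset.
records = [
    ("group_1", True,  False), ("group_1", True,  True),
    ("group_1", False, False), ("group_1", True,  False),
    ("group_2", True,  True),  ("group_2", False, False),
    ("group_2", False, False), ("group_2", True,  False),
]

def false_positive_rate(rows):
    # Share of people who did not reoffend but were classified high risk.
    did_not_reoffend = [r for r in rows if not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for group in ("group_1", "group_2"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))  # 0.67 vs 0.33

A tool can be well calibrated overall, in the sense that a given risk score means the same thing across groups, while still producing unequal false positive rates where underlying base rates differ; much of the dispute between ProPublica and the tool’s developers turned on which of these mathematically incompatible fairness criteria should take precedence.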

Rethinking Responsibility

A central challenge posed by artificial intelligence lies in its destabilisation of traditional notions of responsibility, a concept historically anchored in human agency, intentionality, and moral accountability. Classical criminology, whether grounded in rational choice theory or broader sociological traditions, presumes that harm can ultimately be traced to identifiable actors whose conduct can be evaluated within legal and ethical frameworks (Cornish and Clarke, 1986). However, the integration of AI into both criminal activity and systems of governance disrupts this model by introducing hybrid socio-technical assemblages in which decision-making is distributed, mediated, and partially autonomous.

In such contexts, responsibility becomes neither singular nor easily locatable. Instead, it is dispersed across a network of actors, including developers, institutions, end-users, and algorithmic systems themselves. This transformation reflects a broader shift from individualised agency to relational and infrastructural forms of action, where outcomes emerge from interactions within complex systems rather than discrete decisions. As a result, traditional legal concepts such as mens rea and causation become increasingly strained, as they rely on linear models of intent and consequence that are ill-suited to distributed environments (Binns, 2018; Floridi et al., 2018).
