What does it mean to repair (with) algorithmic systems?
Tuukka Lehtiniemi & Heta Tarkkala
The journal Science & Technology Studies recently published a special issue, Repairing (with) Algorithmic Systems, with five empirical articles focusing on breakdown, repair, and algorithmic systems in the public sector. In this blog post, the guest editors sum up the issue’s contributions.
Public services increasingly rely on algorithmic systems, and these systems are often broken in one way or another. This is not a surprise: objects of the digital environment, from chatbots to automated decision systems, are routinely developed through an iterative cycle of failure and fixing. They are rarely finished or final, and often they do not work as expected. Brokenness, in this sense, is a quality of the digital world.
Despite often being broken in some sense, algorithmic systems continue to connote efficiency, scale, streamlining, and automation. Their mere existence can make other things, including the public services themselves, appear dysfunctional, lacking, or broken. Algorithmic systems are regularly framed as repairs: things that speed up processes, help with diminishing resources, or support lacking human capabilities.
The special issue explores this dual nature of algorithmic systems – both as things that require repair and as repairs themselves – by mobilizing breakdown and repair as attention-focusing tools. These tools help make visible things that might otherwise remain hidden. Armed with these tools, the special issue asks: What do breakdowns make visible about algorithmic systems? What, conversely, gets pushed into the background with acts of repair? Where lies the power to define what is broken? Who bears the burden of repair, and who benefits?
Below, we highlight how the issue sheds light on these questions, and on different forms of breakdown and repair.
The five articles in the issue
Neves and colleagues explore the dynamic between available fixes and the construction of brokenness by examining promissory AI discourses in long-term care in later life. They develop an analysis of techno-solutionism and show how the AI industry constructs solvable problems by presenting both caregivers and those in need of care as lacking in ways that only technology can fix. Meanwhile, well-known systemic issues of personnel and resources remain untouched.
Carillon addresses the breakdown of a recommender system in public service media. A classic black box, the system experiences episodic breakdowns that make its inner workings exceptionally visible. While repair closed the black box again, the root causes of the breakdowns were left unaddressed. Repair thus takes the form of ‘patching’, where targeted adjustments fix immediate issues, including with the system’s legitimacy, but do not address the structural causes of breakdown.
While breakdowns can be episodic, brokenness is an enduring feature of technological systems. Alastalo and Lehto examine robotic process automation of data validation, which was supposed to make human work easier. Yet technical and social friction complicated the robot’s smooth functioning. Instead of the robot assisting humans in data validation, humans constantly needed to step in to assist the robot. This speaks to the structural need for human attention in automation.
While AI-related transformations are generally presented as smooth technical operations, they require context-specific and invisible human background work. This might involve spending time cleaning data, correcting errors, or manually updating records so that automated systems can function at all. Lambotte and Martin-Scholz closely examine such work in cancer registries, where experts perform labour-intensive manual maintenance to ensure data quality. Yet this maintenance work insufficiently addresses how data governance itself is transformed by a national AI strategy, in ways that maintenance alone cannot fix. This would call for repair that is more transformative in nature.
Ruckenstein focuses on a healthcare data management platform, which was a repair attempt aimed at reorganising healthcare workflows and generating new data resources. But the attempt to repair healthcare derailed physicians’ existing workflows and led to repetitive downstream repair tasks. For physicians, this ‘darker’ form of repair is pointless and frustrating, as it does not repair anything, but merely distracts from providing care.
Thematic openings for repair research
The articles participate in several fronts of repair debates, which we round up in our guest editorial. Here, we highlight three potential thematic openings for repair research.
1. Repair and optimism
At its heart, repair is an optimistic concept: something empowering or even noble is involved in attempts to return things to order, or to fix, renew, restore, and care for things. When something is framed as being in need of repair, the potential for progress might be difficult to contest, and those involved in repair can feel they are making things better.
However, as the articles show, repair can also negatively shape expert work and daily lives in organizations. In practice, those doing the repair work may seldom have the opportunity to consider, let alone affect, how and why it is done. This can hide important questions of power: what gets hidden or contained when social and material order is restored? Hopes placed in repairs with algorithmic systems may distract from repairing by other means, which is sometimes sorely needed.
2. Modalities of repair
The articles in the issue draw attention to the need to distinguish between different qualities or modalities of repair. For repair research, this suggests a need to carefully consider what breaks down, and to pay attention to the features of the repair that enters as a response: whether it is about maintenance or progressive transformation, about patching or structurally fixing, or of a darker quality altogether.
The typical ‘permanent beta’ status and constant repairing of algorithmic systems pose a risk of flattening different forms of brokenness. Attention to modalities of repair can help distinguish what it is that breaks down: is it a technical aspect of the system or something else altogether? It also helps recognize when acts that resemble repair do not take the repairer or the system anywhere.
3. The human side of algorithmic systems
Analytical attention to breakdown and repair shifts the focus from the promises of machines and technologies to human work of repair that technologies constantly require from us. Algorithmic systems are not only about technology, code, and data, but about human effort, choices, and action.
Attention to breakdown and repair can make visible how the problems that technologies address are constructed, potentially in ways that cast humans as lacking or deficient in specific ways. This attention can also pinpoint acts of care, maintenance, and assistance that keep algorithmic systems and processes up and running. And careful attention to breakdown-repair processes can reveal how breakdowns that appear only technical may produce repair efforts that target human and organisational relations. This helps recognize how repairing with algorithmic systems can derail focus from what eventually matters to humans, ethically and professionally.
Repair is a socio-technical phenomenon involving interventions in the social and technical aspects of keeping systems in working order. But we would argue it can also be something more: in all of the above senses, repair can turn attention to how humans are affected by, implicated in, required by, or constitutive parts of algorithmic systems.
**
Taken together, the articles not only employ breakdown and repair as attention-focusing devices, but also show the analytical strengths of these concepts. The analysis of breakdown and repair is a productive effort that prompts us to think through the use and effects of algorithmic systems in the public sector, and makes us ask what we might desire from them. Or what we might not desire: after all, if something consistently and predictably breaks down, there is something to be learned about social and societal desirability.
As guest editors, we sincerely thank the authors and all anonymous reviewers for their efforts and contributions to repair scholarship.
Tuukka Lehtiniemi is a University Researcher at the Datafied Life Collaboratory, University of Helsinki. He does research on human involvement in datafication and automation.
Heta Tarkkala is an Academy Research Fellow at the Faculty of Social Sciences, University of Helsinki. Her research focuses especially on questions related to multiple uses of health data.
References
Lehtiniemi, T., & Tarkkala, H. (2025). Repairing (with) Algorithmic Systems. Science & Technology Studies, 38(4), 2-12. https://doi.org/10.23987/sts.177039
Neves, B. B., Mead, G., Sanders, A., Broom, A., Ahmadpour, N., & Gulson, K. (2025). Breaking or Repairing Long-Term Care for Older People?: AI Delegation and the Carefication of Later Life. Science & Technology Studies, 38(4), 13-34. https://doi.org/10.23987/sts.152562
Carillon, K. (2025). Closing the Algorithmic Black Box: Breakdowns and Patching Strategies in a Public Service Media. Science & Technology Studies, 38(4), 35-56. https://doi.org/10.23987/sts.149013
Alastalo, M., & Lehto, I. (2025). Frictions in Automating Routine Data Work: A Human-Assisted Robot in Datafied Healthcare in Finland. Science & Technology Studies, 38(4), 57-74. https://doi.org/10.23987/sts.149483
Lambotte, F., & Martin-Scholz, A. (2025). Maintaining and Repairing the Cancer Registries’ Regime of Knowing in the Turbulent Context of the French National AI Strategy. Science & Technology Studies, 38(4), 75-96. https://doi.org/10.23987/sts.149451
Ruckenstein, M. (2025). The Darker Qualities of Repair. Science & Technology Studies, 38(4), 97-113. https://doi.org/10.23987/sts.149448