23.4. From Investment to Care: Logics Shaping AI and Innovation in Health
Image: Fanny Maurel & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
This seminar brings together scholars examining the development, credibility, and social imaginaries of medical AI and health technologies. Join us on April 23 at the University of Helsinki’s Main Building (room F3010, Fabianinkatu 33) from 14:00 to 17:00 for an afternoon of presentations and discussion.
Chair: Matti Ylönen (University of Helsinki)
Discussant: Kaire Holts (Tallinn University of Technology)
Programme
14:00 Matti Ylönen (University of Helsinki)
Opening words
14:15 Eleni Tsingou (Copenhagen Business School)
Investment narratives in women’s health: Insights on ‘tech for good’ from a new market
15:00 Wanheng Hu (Stanford University)
Restrictedness of Testing: Technical, Regulatory, and Clinical Logics of Credible Medical AI Systems
15:45 Minna Ruckenstein (University of Helsinki)
The logic of care in AI chatbot development
16:30 Discussion
Abstracts
Investment narratives in women’s health: Insights on ‘tech for good’ from a new market
Eleni Tsingou
Copenhagen Business School
Femtech, the corporate term for a marketplace in women’s health populated by start-up companies and venture capital investors, is receiving attention based on both a logic of profit linked to the size of the potential market and a logic of care addressing women’s unmet medical needs. The presentation looks at how corporate actors in femtech, founders and investors, narrate the market during industry events. It highlights the need for data (medical data, personal data, and business data) as a key theme and identifies four types of narratives that shed light on attempts to develop this market by morally justifying it, engaging participants, personalizing it, and financing it.
Restrictedness of Testing: Technical, Regulatory, and Clinical Logics of Credible Medical AI Systems
Wanheng Hu
Stanford University
Over the past decade, artificial intelligence (AI) systems have been increasingly introduced into clinical work in China amid the rapid commercialization of medical AI products. Trained on large amounts of expert-labeled data, they generate expert-like decisions as complements or alternatives to human judgment. Yet their reasoning remains opaque, making credibility a central concern among various stakeholders. This paper examines how developers, regulators, and clinicians assess the credibility of medical AI systems through testing in the context of radiological diagnostics in China. Drawing on ethnography at two medical AI startups, interviews with radiologists, and analysis of regulatory documents, I identify three distinct logics of testing: technical, regulatory, and clinical. I define restrictedness of testing as the degree of testers’ control over input data, desired outputs, algorithmic models, and relevant socio-technical arrangements, and show how differences in restrictedness, along with divergent practical aims and accountability forms, shape these logics. I argue that the credibility of medical AI systems is not reducible to intrinsic technical features, nor does it transfer seamlessly across contexts without additional translation work. This analysis contributes to the sociology of testing, technology assessment, and critical algorithm studies by showing how restrictedness in testing shapes the politics of evidence, expertise, and accountability in medical AI.
The logic of care in AI chatbot development
Minna Ruckenstein
University of Helsinki
This talk mobilises Annemarie Mol’s (2008) conceptual pair, the logic of choice and the logic of care, to discuss the development of an AI-based chatbot. By invoking ‘logic,’ Mol points to what counts as appropriate action within particular sites and situations, foregrounding how design decisions shape practices and relationalities. The logic of choice underpins dominant approaches to consumer-facing health technology, supporting the choosing subject and framing technologies as tools that promise to expand individual possibilities to stay healthy. In contrast, the logic of care is oriented toward desirable outcomes and relational collectives.
By attending to the process of chatbot design, the talk shows how a care orientation resists anthropomorphisation by avoiding first-person claims and other cues of human-like agency. Rather than eliciting personal or emotionally sensitive disclosures, the system seeks to normalise emotional responses. The chatbot is deliberately dehumanised as a protective gesture, grounded in the recognition that AI-simulated empathy may operate as a form of deception. The aim is to resist such deception through careful prompt engineering that avoids human-like cues and limits emotional engagement.
This case illustrates how the logic of care is an ongoing process and aim, embedded in design decisions that define what AI systems do in a specific context and how boundaries of humanness or health are drawn. Moving towards care is difficult because prevailing digital infrastructures and related imaginaries constrain its realisation. Yet this difficulty underscores the importance of focusing on the logic of care, and mobilising it as a critical lens. A care-oriented approach can evaluate whether communicative practices are arranged to support collective wellbeing, while remaining attentive to the digital infrastructures already in place.
Organizers
The seminar is co-hosted by the research projects REPAIR (Valuable breakages: repair and renewal of algorithmic systems) and SEE-TECH (Seeing Like a Tech Firm: Advocacy in the Era of Platform Capitalism). The SEE-TECH project is funded by the Research Council of Finland, and REPAIR is funded by the Strategic Research Council.