UKCEN 22nd Annual Conference
Hosted by: Cardiff University, Friday 5th June 2026
Theme: “Ethical Challenges in the Advent of Digital Health”
Bookings are NOW open.
Following the success of last year’s UKCEN conference, UKCEN has decided not to increase the conference fee. This year there is an even greater discount for the concessionary rate. By keeping prices as low as possible, UKCEN hopes to give back to the clinical ethics community and support the accessibility of our events. Please note that tickets sold out for the 2025 conference, so please book early to avoid missing out on the 2026 event.
Conference registration: 8.30am. Conference: 9am - 5pm
The 2026 UKCEN conference will cover a broad range of topics relating to the advent of digital healthcare. While the programme is subject to change, please see the current line-up of speakers and talks. We hope to see you in Cardiff!
Jon is Professor of Empirical Bioethics, Head of Population Health Sciences, and Deputy Head of Bristol Medical School. His work focuses on combining ethical theory with empirical research to develop practice-grounded approaches to clinical and research ethics. He has published widely, and provided international training, in empirical bioethics and research ethics, covering a range of topics including fertility and reproduction, best-interests decisions, and the ethics of soft robotics, with recent publications on autism-related workplace adjustments, clinical AI guidance, swarm robotics governance, and robot adaptability and trust. Outside of the University, Jon chairs the NHS BNSSG ICB Risk and Ethics Advisory Forum, is deputy chair of the South West Secure Data Environment’s Data Access Committee, serves on funding panels for the Wellcome Trust and Belgium’s FWO Med8, and sits on NICE’s Highly Specialised Technology evaluation committee.
Talk title: Healthcare ethics and digital health futures: Promises, Pitfalls, and Practice
Digital health technologies are increasingly reshaping how healthcare is delivered, understood, and experienced by patients, practitioners, and researchers. However, the benefits are neither automatic nor equitably distributed, and the ethical implications require careful and sustained scrutiny. This talk examines both the promises and the pitfalls of digital health through three case studies, each chosen to illuminate distinct challenges emerging at the intersection of digital technologies and healthcare practice.
The first case study explores the use of AI for mental health support, highlighting issues of safety, efficacy, and the risk of replacing relational care with computational proxies. The second considers AI-enabled diagnostic tools, focusing on questions of bias, interpretability, and shifting professional responsibility. The third examines governance and access within secure data environments, drawing attention to tensions between privacy, research value, and public trust.
Building on these cases, I propose an approach to practice that acknowledges digital transformation as inevitable, yet frames digital health as a dynamic moral landscape rather than a set trajectory. I argue that ethical, trustworthy digital futures require participation and responsibility from all involved, and I position ethics as a central, ongoing practice in dialogue with developers, clinicians, institutions, and patients.
Jamie is a postdoctoral researcher in ethics and infectious diseases, based at the Ethox Centre as part of the Oxford-Johns Hopkins Global Infectious Disease Ethics Collaborative (GLIDE), working with Professor Michael Parker. As part of his fellowship, Jamie will also have a residency period at the Berman Institute of Bioethics, Johns Hopkins University. Jamie carried out his PhD at the University of Edinburgh, in the Centre for Technomoral Futures and the Centre for Biomedicine, Self & Society. His research project utilised empirical bioethics methodology, combining philosophical analysis and qualitative interviews, to make design recommendations for the ethical use of AI in healthcare resource allocation. Alongside his PhD, he was a researcher on the UK Pandemic Ethics Accelerator, working on its public values, transparency and governance work stream. Prior to his PhD, Jamie was a research associate in the Division of Medical Ethics at NYU Langone, a position he took up after earning an MA in Bioethics at NYU as a Fulbright Postgraduate Scholar. He gained his undergraduate degree at Pembroke College, Cambridge, receiving a BA in Philosophy and an MSci in History and Philosophy of Science, before completing the Teach First Leadership Development Programme.
Talk title: The Transplant Benefit Score, patient disclosure, and human control in algorithmic systems
The Transplant Benefit Score (TBS) was introduced in the UK in March 2018 as a method of allocating DBD (donation after brain death) livers for transplantation. The TBS is both far more algorithmically complex than the previous system of allocation, and offers less clinician autonomy in allocation decisions, with livers being matched to particular patients from a national database.
In 2023, the Financial Times led an investigation into the TBS. Madhumita Murgia’s story followed Sarah, a young woman who had been waiting a long time on the transplant waitlist, and her attempts to understand the system that would determine if and when she received a transplant organ. Murgia writes, ‘Sarah and other transplant patients her family came across doing research had never been explicitly informed about the scoring algorithm. Most had no idea such a thing existed, or how it worked.’ Murgia also notes that after Sarah inquired into the TBS, medical professionals told her ‘there were no humans involved in overseeing or overriding the score, and there was no appeals process, even at a physician’s discretion.’
This presentation draws upon the presenter’s qualitative empirical research project, which interviewed 20 patients and 9 transplant staff about their experiences with the TBS. In doing so, it establishes a more complex picture of disclosure practices and human control of the TBS than the Financial Times piece suggests. The interview data are primarily valuable, however, in justifying normative recommendations on patient disclosure and human control within the context of algorithmic organ allocation.
Rachel is a research fellow in ethics with an interdisciplinary background spanning moral philosophy, anthropology, medical humanities, and medical sciences, which is reflected in her intersectional approach to complex problems. Recent work spans genomic medicine, health data governance, and deliberative ethics. Her research focuses on the intersection of regulation and ethics in healthcare, research, and hybrid care-research spaces using emerging technologies; her interests include relationships and trustworthiness in public health and research, public involvement, and responses to injustice in healthcare and research. In addition to research, Rachel has operational expertise in public dialogues, participatory methods, and governance design, working with research organisations, social enterprises, and government in the UK and internationally.
Talk title: Ethical Imagination and Creativity in the Work of Clinical Ethics Committees
Clinical ethics work combines disciplined reasoning, practical experience, and creative imagination. While such work is often described as analytical—a balancing of duties, rights, values, and outcomes—it also requires imagination and creativity. The concepts of ethical imagination and creativity are hardly new. Still, in an era of increasingly technical and regulatory approaches, it is essential to highlight their integral role in clinical ethics work. Ethical imagination can enable us to inhabit ethical principles and to test them in specific, contextualised ways outside of our own experience. It is relational and, as such, facilitates dialogue across viewpoints and disciplines as well as navigation of the uncertain or unknown. I argue that these kinds of imagining and creativity in the decision-making process become even more important for clinical ethics in the era of digital health and AI. I will focus on three main areas: first, imagination as the capacity to empathise with difference and envision possibilities within ethical and regulatory complexity and uncertainty; second, creativity as the exploration of potentially novel pathways or resolutions that remain faithful to foundational principles; and third, applications of ethical imagination and creativity in clinical ethics, through methods and practices that cultivate interpretive richness, enable innovation, and enhance work grounded in reasoning and disciplined reflection.
Philip Scott is Professor of Digital Health & Care and Programme Director of the MSc in Digital Transformation for the Health & Care Professions at the University of Wales Trinity Saint David. He was previously Reader in Health Informatics at the University of Portsmouth. Philip worked in the NHS in various IT development and management roles for fifteen years before moving into academia in 2009. He is a Fellow of the Higher Education Academy, a Fellow of Health Level Seven (HL7) and board member of HL7 UK.
Philip is Vice-President of the International Medical Informatics Association (IMIA), Deputy Editor of BMJ Health & Care Informatics and co-chair of the British Computer Society (BCS) working group on computable biomedical knowledge.
Philip has previously been Chair of the BCS Faculty of Health & Care, chair of the evaluation working group of the European Federation of Medical Informatics (EFMI), member of the steering groups for the NHS Digital Academy and the Global Digital Exemplar evaluation programme, non-executive Board member of the Professional Record Standards Body and member of the informatics committee of the Royal College of Radiologists.
Talk title: How do the ethical failings of social media threaten planetary health?
Social media have been implicated in suicides, political disinformation, healthcare misinformation, conspiracy theories, bullying, revisionist history and hate groups of various types. Although early social media started as a well-meaning opportunity to connect, share news and ideas and build communities, it soon became an amplifying platform for all kinds of weird and angry people and professional influencers. The principal social media companies are primarily advertising channels, with some claiming free speech rights to justify minimal restrictions on potentially harmful content with little apparent regard for mental or physical health. Combined with bot farms, recommender algorithms, human cognitive biases and the “post-truth” political environment, social media is now intensifying the damaging effects of misleading or malicious content on vulnerable individuals and populations.
Planetary health primarily focuses on climate change, human health, and the social determinants of health, so there is an obvious ethical conflict with many of the negative aspects of social media.
Can an ethical society balance free speech with evidence-based science and public health? This talk will explore the digital divide, regulation, the commercial determinants of health and the responsibilities of planetary health leadership as significant factors to address if we are to champion the pre-eminence of facts and ethical behaviour.
Alun is the Chief Executive of Adferiad, one of the UK’s largest charities working principally with people recovering from serious mental illness, substance use, or mental ill health, providing a wide range of services including housing, gambling treatment, employment support, inpatient care, veterans services, and support to families and carers. Alun is a Registered General Nurse who during his time with Adferiad has achieved a first class LLB (Hons), a Master’s in the Ethics of Social Welfare, and most recently a PhD in law at the Hillary Rodham Clinton School of Law at Swansea University. His thesis was titled ‘Using the Measure for Measure – How the Mental Health (Wales) Measure 2010 and the NHS Redress (Wales) Measure 2008 Can Improve the Rights of the Mentally Ill’.
Alun gave evidence to the Welsh Affairs Select Committee in Westminster and the Welsh Government on the development of the Mental Health (Wales) Measure 2010 and was a member of the Independent Review of the Mental Health Act 1983 Advisory Panel to the UK Government. He is a founding Trustee of Mental Health UK, a Trustee of Developing a Caring Wales, and a Trustee of the Association of Mental Health Providers.
Talk title: Artificial Intelligence in Mental Healthcare: Promise, Risk, and the Challenge of Trust
AI is being used both formally and informally to support people seeking help with their mental health and wellbeing. AI has the potential both to improve outcomes for patients and to reduce costs within healthcare systems, but many obstacles remain to overcome before trust, reliability, and efficacy can be assured.
In this paper I examine some of the formal developments in the use of AI in relapse prevention and the delivery of psychological support, along with the informal individual use of Large Language Models such as ChatGPT by people seeking their own answers online.
A recent survey by Mental Health UK notes that over 1 in 3 adults reported using an AI chatbot to support their mental health or wellbeing. This is an area where the legitimacy of responses, the ethical use of data, the ethical nature of the AI provider, and the current clinical governance approach pose great risks. In examining informal use of such tools, I review recent legal challenges following personal tragedies where the responses provided by AI tools are considered to be a contributory factor in suicide.
The formal development of AI within mental health services lends itself to both a clinical governance and ethical approach where patient safety, engagement, and outcomes can be monitored and researched. The development of AI in mental healthcare offers a remarkable opportunity to improve access and availability of psychological support within an appropriate clinical governance framework, and I explore in the paper some of the current approaches to addressing these ethical considerations.
Finally, and probably most important for both the current use and the future development of AI in mental healthcare, is the engagement of, and development of trust in, these tools among the patient community who are the intended beneficiaries and, of course, their clinicians. Recent research and commentary highlight the challenges of gaining trust in both groups, and I will discuss how trust can be improved by involving the beneficiary group in both development and research.
Dr Matt Morgan is a Consultant in Intensive Care Medicine in the NHS and an Honorary Professor for the Public Understanding of Medicine at Surrey, Cardiff and Curtin University. Alongside clinical work, he writes and speaks widely about medicine, ethics, technology and the stories we tell about illness and survival. He is the author of Critical, One Medicine and A Second Act, books that explore life, death and moral uncertainty through patient stories. He is a regular columnist for The BMJ and has written for publications including The Guardian and Esquire, and has appeared on BBC, CNN and national radio discussing healthcare, AI and public understanding of medicine.
Talk title: Benchmarking Ethical Judgement in Healthcare Using a Medical Moral Machine
Clinical ethics committees are faced with difficult decisions, yet their conclusions are often shaped by context, framing and deeply human intuitions that are rarely visible to patients or the public. At the same time, healthcare systems increasingly face stark trade-offs: expensive versus cheap treatments, saving many versus saving few, prolonging life versus preserving quality, and prioritising the young versus the old. These are not abstract dilemmas; they are everyday realities in modern medicine.
This talk introduces the concept of a Moral Machine for Medicine, developed in collaboration with Edmond Awad, one of the founders of the original Moral Machine designed for self-driving car decisions. Rather than asking what an algorithm should do, this work uses structured, binary clinical scenarios to benchmark moral judgements across clinicians, ethicists and the public. The aim is not to generate a single “correct” answer, but to make variation, disagreement and consensus visible.
Using scenarios grounded in real UK healthcare ethics cases, the Moral Machine allows comparison between individual intuition, group norms and institutional decision-making. During the session, attendees will take part in a short live demonstration, voting on scenarios and seeing how ethical judgements distribute in real time. Further scenarios will then be made available for exploration beyond the session.
By benchmarking moral intuitions rather than replacing ethical reasoning, this approach offers a novel tool for ethics education, committee reflection and public engagement. It provides a way to examine how ethical decisions in healthcare are actually made and how far they diverge from how we believe they ought to be made.
Oliver Bruff is a Researcher in the Ada Lovelace Institute's social and economic policy research domain. Oliver has worked on Ada's research on the use of AI transcription tools in social care contexts and public attitudes towards AI.
Talk title: Transcribing Trust: the impact of AI transcription tools in social care
Workers in health and social care are increasingly using AI transcription tools to automate official documentation processes such as care assessments and patient records, as a result of the UK government’s push for AI adoption.
This research explores the impacts of the adoption of AI transcription tools by social workers in England and Scotland to understand the implications of automating frontline work with AI for workers. Using 39 qualitative interviews with digital delivery leads and social workers at local authorities, we find that these tools hold promise for creating efficiencies in care work, but that they also introduce harmful inaccuracies into the care records that inform decisions made about vulnerable adults and children.
While social workers benefited from the time they saved with AI transcription tools, we find that the unpredictability of AI harms challenges the principles of autonomy and professional judgment in social work, and that current approaches to risk mitigation place additional responsibility on individual social workers. Social workers with high workloads and limited training on AI’s risk profile are less equipped to interpret and action AI hallucinations and bias, which we suggest has profound implications for accountability structures at work and procedural justice in care. We also find that differing approaches to the governance of AI transcription tools in the health and justice sectors further complicate the use of these tools in social care contexts.
We conclude with recommendations for organisations aiming to automate workflows with AI transcription tools and policymakers seeking to support the safe adoption of these tools by frontline workers. By centring the experiences of social workers, our paper advances the evidence base around the impact of AI transcription tools in frontline public services, providing insight and learnings for the safe and effective introduction of AI technologies in clinical contexts.
Provisional title: Ethically capturing community health data for AI
Pricing:
£160 – full rate
£95 – concessions (incl. professions allied to medicine, IME members and UKCEN individual members)
£60 – students
Venue:
Large Chemistry Lecture Theatre, Cardiff University Main Building
Park Place, Cardiff, CF10 3AT
Pre-conference workshop:
The pre-conference workshop will be held on Thursday 4th June. Bookings can be made at https://ime.datawareonline.co.uk/Event-Booking/EventId/1108