Privacy and Fairness in AI for Health
Over the past few years, the use of AI in healthcare services has continued to increase, and many healthcare providers now deploy AI-driven applications across their business lines, services, and products. Although AI can bring a range of benefits to healthcare providers and patients, it also raises novel risks for patients and even for providers themselves. The UK government is working hard to regulate AI in health and to support the use of responsible and trustworthy AI. By doing so, we can unleash the full potential of AI while safeguarding our fundamental values and keeping us safe and secure.
Objectives & Aims for the Event
The underlying goal of our deep dive workshop is to bring together privacy and fairness experts working in academia, industry, and government to:
- Mitigate the risks of unfair decisions and information leakage when machine learning is deployed by healthcare providers; and
- Design auditing frameworks.
In service of these goals, our workshop will:
1. Surface cutting-edge research and challenges regarding fairness and privacy to regulators;
2. Articulate the challenges faced by regulators to the research community, in order to inspire new research directions and information sharing; and
3. Promote networking between researchers, and between researchers and policymakers, in the hope of new mutually beneficial collaborations.
Aims (1) and (2) will be achieved through short presentations, Q&A, and an interdisciplinary panel discussion; aim (3) through networking opportunities throughout and after the event.
Program
The workshop is split into two sessions: the morning session focuses on privacy, and the afternoon session on fairness. Each session will end with a panel discussion. A light lunch and refreshments will be available to facilitate networking.
Morning Session (Privacy)
- 10:00 - 10:15 | Opening Remarks - Adrian Weller and Ali Shahin Shamsabadi
- 10:15 - 10:45 | Differential Privacy - Borja Balle (DeepMind)
- 10:45 - 11:15 | Privacy from a causal perspective - Shruti Tople (Microsoft)
- 11:15 - 11:45 | Federated Learning and Trusted Execution Environments - Hamed Haddadi (Imperial College London)
- 11:45 - 12:15 | Mini panel discussion chaired by Ali Shahin Shamsabadi
- 12:15 - 13:00 | Lunch and Networking
Afternoon Session (Fairness)
- 13:00 - 13:30 | Fair AI for health - Brent Mittelstadt (Oxford)
- 13:30 - 14:00 | Health inequities - Smera Jayadeva (Turing)
- 14:00 - 14:30 | Intersection of fairness and privacy - Aurélien Bellet (Inria)
- 14:30 - 15:00 | Mini panel discussion chaired by Carolyn Ashurst
- 15:00 - 15:30 | Coffee Break
Panel Discussion
- 15:30 - 16:30 | Where next for privacy and fairness in health? Overcoming challenges to using technical approaches in practice, chaired by Adrian Weller
- Alisha Davies (Health Theme Lead for the AI for Science and Government (ASG) programme at the Turing)
- Mark Durkee (Head of Data and Technology at CDEI)
- Clíodhna Ní Ghuidhir (Principal Scientific Advisor for AI at NICE)
- Xiao Liu (Senior Clinical Researcher in AI and Digital Healthcare at University Hospitals Birmingham NHS Foundation Trust)
- Blaise Thomson (Founder and CEO of Bitfount)
Invited Speakers
Borja Balle
Borja Balle is a Staff Research Scientist at DeepMind. His current research focuses on privacy-preserving training and privacy auditing for large-scale machine learning systems. He obtained his PhD from Universitat Politècnica de Catalunya in 2013, and then held positions as a post-doctoral fellow at McGill University (2013-2015), a lecturer at Lancaster University (2015-2017), and a machine learning scientist at Amazon Research Cambridge (2017-2019).
Shruti Tople
Shruti Tople is a senior researcher at Microsoft whose research spans topics in the security and privacy of computer systems. Her recent work provides both theoretical and empirical results to safeguard private data used in machine learning applications by designing new attacks and defenses. Her work has been published in prestigious security/privacy and ML conferences such as CCS, NDSS, USENIX Security, PETS, and ICML. She is an active member of the research community and has served on the program committees of these and other conferences, such as AISTATS, UAI, and NeurIPS. She received her Ph.D. from the School of Computing at the National University of Singapore in 2018, where she was awarded the Dean’s Graduate Research Excellence Award for her outstanding thesis work.
Hamed Haddadi
Hamed is a Reader in Human-Centred Systems at the Department of Computing at Imperial College London. He also serves as a Security Science Fellow of the Institute for Security Science and Technology. In his industrial role, he is the Chief Scientist at Brave Software, where he works on developing privacy-preserving analytics protocols. He is interested in User-Centred Systems, IoT, Applied Machine Learning, and Data Security & Privacy, and enjoys designing and building systems that enable better use of our digital footprint while respecting users' privacy. He studied for his BEng/MSc/PhD at University College London and the University of Cambridge. He was a postdoctoral researcher at the Max Planck Institute for Software Systems in Germany, and a postdoctoral research fellow at the Department of Pharmacology, University of Cambridge and The Royal Veterinary College, University of London, followed by a few years as a Lecturer and subsequently Senior Lecturer in Digital Media at Queen Mary University of London. He has spent time working and collaborating with Brave Software, Intel Research, Microsoft Research, AT&T Research, Telefonica, and Sony Europe. When not in the lab, he prefers to be on a ski slope or in a kayak.
Aurélien Bellet
Aurélien Bellet is a tenured researcher at Inria (France). His current research focuses on the design of privacy-preserving machine learning algorithms in centralized, federated and decentralized settings. Aurélien has served as area chair for ICML (since 2019), NeurIPS (since 2020) and AISTATS (since 2022). He co-organized several international workshops on machine learning and privacy at NIPS/NeurIPS (2016, 2018, 2020), CCS 2021 and FOCS 2022. He also co-organizes FLOW, an online seminar on federated learning with 1000+ registered attendees.
Smera Jayadeva
Smera Jayadeva is a Research Assistant in Data Justice and Global Ethical Futures under the Public Policy Programme. Prior to joining The Alan Turing Institute, Smera worked in a collaborative placement with the Austrian Institute for International Affairs, where she comparatively evaluated European and Indian approaches to medical AI. She has experience conducting research at the Synergia Foundation on themes ranging from geopolitics and policymaking to disruptive technology. Additionally, Smera has worked as an independent researcher in policy evaluation and governance in public and non-profit organisations. Smera holds an International Master in Security, Intelligence and Strategic Studies, awarded with distinction jointly by the University of Glasgow, Dublin City University, and Charles University. Her graduate dissertation, titled “Systems in the subcontinent: Data, power, and the ethics of medical machine learning in India”, evaluated the scope, challenges, and mediatory role of AI in Indian healthcare systems. She also holds a BA with a triple major in History, Economics, and Political Science from Christ University (Bengaluru).
Brent Mittelstadt
Professor Brent Mittelstadt is the Oxford Internet Institute's Director of Research, an Associate Professor, and a Senior Research Fellow. He also coordinates the Governance of Emerging Technologies (GET) research programme, which works across ethics, law, and emerging information technologies. He is a leading data ethicist and philosopher specializing in AI ethics, professional ethics, and technology law and policy. In his current role he leads the Trustworthiness Auditing for AI project, a three-year multi-disciplinary project with the University of Reading cutting across ethics, law, computer science, and psychology to determine how to use AI accountability tools most effectively to create and maintain trustworthy AI systems. He also co-leads A Right to Reasonable Inferences in Advertising and Financial Services, a project examining the opportunities and challenges facing sectoral implementation of a right to reasonable inferences in advertising and financial services.
Alisha Davies
Alisha Davies is the Health Theme Lead for the AI for Science and Government (ASG) programme at the Turing. She has a PhD in epidemiology from the London School of Hygiene and Tropical Medicine, and holds an Honorary Professorship in the Faculty of Health and Life Sciences at Swansea University. Alisha is also an NHS Consultant in Public Health and Head of Research and Evaluation at Public Health Wales, where she leads a multi-disciplinary division working alongside academic partners to address knowledge gaps in public health policy and practice. Examples include using linked routine health data to explore patterns of mental health in children and young people, national surveys to address digital exclusion in health, and social science methods to sustain community resilience. Alisha is also a member of the NIHR Public Health Research Prioritisation Committee, Deputy Director of the Centre for Population Health Research in Wales, and leads the Health Foundation funded Wales Networked Data Lab.
Mark Durkee
Mark Durkee is Head of Data & Technology at the Centre for Data Ethics and Innovation, leading a portfolio of work that includes the Centre's work on privacy enhancing technologies and public sector algorithmic transparency. He previously led CDEI's Review into Bias in Algorithmic Decision-Making. Prior to joining CDEI in 2019, he worked in a variety of technology strategy, architecture, and cyber security roles elsewhere in the UK government, worked as a software engineer, and completed a PhD in theoretical physics.
Clíodhna Ní Ghuidhir
Clíodhna is leading delivery of the multi-agency advisory service project, which brings regulatory and health technology assessment partners together to provide joined-up guidance and advice to developers and adopters of AI and digital health technologies, and has developed the AI & Digital Regulations Service. She has also served on funding award panels for AI technologies, and chaired the award panels for round 3 of the NHS AI Award last year (phases 3-4). Previously, Clíodhna worked on a broad range of national innovation programmes, including real-world evaluation, and led the development of the NHS Innovation Service, a single front door to the NHS for innovators. Her health-innovation policy work is informed by her frontline experience: she has worked in mental health and acute services within the UK and in supply chain operations in South Africa.
Xiao Liu
Xiao Liu is a medical doctor and senior clinical researcher at the AI and Digital Health Research Group, University Hospitals Birmingham NHS Foundation Trust. Her work sits at the intersection of medical artificial intelligence and health policy, with the goal of ensuring AI health technologies are safe, effective, and equitable. She co-led the SPIRIT-AI and CONSORT-AI initiatives, international standards for reporting AI clinical trials, which have been widely adopted by policy institutions including the MHRA, NICE, WHO, BSI, and the NHS AI Lab. She is currently co-leading STANDING Together, a project on tackling bias in medical datasets to ensure AI benefits all.
Blaise Thomson
Blaise Thomson is the founder and CEO of Bitfount, a federated machine learning and analytics platform. He was previously the founder and CEO of VocalIQ, which was acquired by Apple in 2015; he subsequently led Apple's Cambridge, UK engineering office and served as Chief Architect for Siri Understanding. Blaise holds a PhD in Computer Science from the University of Cambridge, where he was also a Research Fellow, and is an Honorary Fellow at the Cambridge Judge Business School.
Workshop Organizers
Ali Shahin Shamsabadi
Research Associate, Safe and Ethical AI
The Alan Turing Institute
Carolyn Ashurst
Senior Research Associate, Safe and Ethical AI
The Alan Turing Institute
Sina Sajadmanesh
PhD Visitor, Safe and Ethical AI
The Alan Turing Institute
Ruth Drysdale
Programme Manager, Safe and Ethical AI
The Alan Turing Institute
Adrian Weller
Programme Director, Safe and Ethical AI
The Alan Turing Institute