MPhil in Ethics of AI, Data & Algorithms

University of Cambridge


Centre for the Future of Intelligence · University of Cambridge

AI is changing society.
Learn to shape what comes next.

The MPhil in Ethics of AI, Data and Algorithms is a full-time, research-intensive programme equipping the next generation of researchers, policymakers and industry leaders to understand and address the ethical, social and practical dimensions of artificial intelligence.


We Are

A community of researchers across disciplines

Based at the Centre for the Future of Intelligence (CFI), a research centre at the University of Cambridge, we bring together philosophers, social scientists, computer scientists, legal scholars, designers, cultural theorists and policy researchers with a shared mission: ensuring AI goes well. That breadth of disciplines and methods, all focused on AI, is what makes CFI unusual.

Learn more about CFI →

You Are

Curious, rigorous, and ready to engage

You want to do serious research on AI and its implications, whether your next step is a PhD, a role in policy or government, or a position in industry. You enjoy interdisciplinary challenge and want to develop real research skills. We welcome applicants from philosophy, social science, computer science, law, policy, design, humanities and beyond.


Broad in scope. Rigorous in method. Grounded in real-world impact.

The programme covers AI ethics, governance, safety, evaluation, the economics and geopolitics of AI, human-AI relationships, cultural and critical perspectives, and the future of work, while allowing students to pursue specialised interests through independent research and engagement with the range of expertise at CFI.

Interdisciplinary Cohort

Join students from philosophy, law, computer science, history, political science, economics and beyond. Different perspectives are compared, challenged and integrated throughout the year.

Flexible Research Focus

Assessments aren't tied to specific modules, so you're free to research whatever interests you within the programme's scope, guided by expert supervision.

Core, Recommended & Elective Modules

A structured core and recommended modules provide shared foundations. Electives change each year to reflect the research landscape, letting you go deeper into the areas that matter most to you.

World-Class Setting

Attend seminars, reading groups, conferences and events at CFI, while drawing on Cambridge's wider ecosystem in science, philosophy, law and policy.

  • How should governments regulate frontier AI systems they can't fully understand?
  • What do we owe digital minds, if they exist?
  • How do you evaluate an AI system for risks no one has seen yet?
  • Whose values get encoded in AI systems, and whose get left out?
  • Could AI trigger an intelligence explosion — and how should we prepare?

Nine months of taught modules, independent research, and supervised writing

The programme runs full-time across the three Cambridge terms. Taught modules build shared foundations and specialist knowledge. A mix of essays, presentations, group work and other formats develops your ability to research and communicate independently.

Michaelmas Term

Core modules & electives. Research Essay 1 (4,000 words).

Lent Term

Elective modules & seminars. Research Essay 2 (8,000 words). Works-in-progress presentations.

Easter Term

Dissertation (up to 12,000 words). Presentation. Supervision and revision.

Taught Modules

One core module provides shared foundations: an introduction to key concepts, theories and debates in AI ethics and society. A recommended technical module builds intuition for how AI and ML systems work. Students attend at least four additional elective modules from a list that changes each year.

Supervised Research

Students work individually with domain experts to produce three pieces of written work of increasing length and depth. You receive dedicated one-to-one supervision for each essay, building from a shorter research essay to a full dissertation. Those intending doctoral work will develop a well-planned PhD proposal.

Build Your Own Stream

Everyone takes the same core module. Beyond that, your choice of electives, essay topics and dissertation focus lets you shape a personalised stream that reflects your interests and career goals. There are no fixed tracks — you design your own path and can highlight it on your CV. Streams don’t map one-to-one onto modules; they emerge from the combination of modules you attend, the topics you write about, and the expertise you develop.

AI Safety & Governance

Evaluation, risk assessment, international governance, forecasting, the policy challenges of frontier models and risks in potential fast take-off scenarios.

Ethics, Justice & Society

Fairness, accountability, critical theory, cultural perspectives, human–AI interaction and the philosophical foundations of AI ethics.

Policy & Regulation

Legal frameworks, international security, economics of AI, labour markets, the design of regulatory institutions and the geopolitics of AI competition.

These are examples — students are free to define their own combination.


Not a passive lecture programme

We combine research essays with teaching formats that develop the skills you can't pick up alone at home with a chatbot: arguing on your feet, working in teams, thinking under pressure.


Research Seminar Series

Throughout the year, researchers and practitioners present on live topics in AI and society. Invited speakers include people from organisations such as DeepMind, METR, RAND and leading universities. PhD students join too, and sessions are followed by informal discussion at the pub.

Research Essays

Three essays of increasing length (4,000 to 12,000 words), each supervised one-to-one. We ask for original analytical or empirical contributions, not literature reviews.

Structured Debates

Argue different sides of live controversies in AI policy and ethics. We also use "anti-debate" formats where the goal is arriving at truth together rather than winning.

Group Projects & Presentations

Work in small teams on research questions and present your findings. The kind of collaboration that policy and industry roles actually require.

Simulations & Role-Play

Work through real-world scenarios: international AI governance negotiations, organisational crises, decision-making under uncertainty. Then reflect on what happened and why.

Works-in-Progress Presentations

Present your developing research to peers and faculty. Get peer feedback and sharpen your arguments before they reach the page.

Collaborative In-Class Work

Group problem-solving, in-class exercises and collaborative analysis. Learn to think and work with others under real-time pressure.

AI Literacy & Responsible Use

Since this is an MPhil on AI and society, we treat the programme's own use of AI tools as part of the intellectual project. Early in the year, a dedicated session covers how to use LLMs well and where they go wrong.

  • Using LLMs for literature discovery, brainstorming, stress-testing arguments, and strengthening your own reasoning
  • Understanding LLM limitations: hallucination, sycophancy, reasoning failures, distributional biases
  • What intellectual integrity looks like when capable AI tools are available to everyone


What you might study

Elective topics vary each year, reflecting the current research interests of staff and developments in the field. The following are examples of modules that have been or may be offered.

Introduction to Ethics of AI

Key concepts, theories and debates: AI capabilities and risks, bias, fairness, moral reasoning, machine decision-making, value alignment, and anticipating future challenges.

Core

Technical Foundations

How AI and ML systems are built, evaluated and deployed: from regression and classification to reinforcement learning and language modelling.

Recommended

Law & Policy of General-Purpose AI

Emerging legal frameworks for GPAI — the EU AI Act, systemic risk regulation, governance under uncertainty, and the role of capability evaluation in law.

Elective

Evaluation of AI Systems

Why robust evaluation matters, alternative approaches, and the challenges of assessing increasingly capable systems for safety and societal impact.

Elective

AI & Social Science

Empirical approaches to AI's societal effects: public attitudes, misinformation, epistemic ecosystems, human–AI interaction and the social psychology of AI.

Elective

AI, Narratives & Culture

How stories, media and cultural imaginaries shape the development and reception of AI. Feminist, STS and critical theory perspectives on technology and power.

Elective

Consciousness in AI

Can machines have minds — or only the appearance of minds? Philosophical and neuroscientific perspectives on AI consciousness, moral status and digital welfare.

Elective

Ethical Design

Where AI ethics meets the user. How interface design, defaults, nudges and interaction patterns shape behaviour, trust and autonomy — and how to design AI-powered products that respect the people who use them.

Elective

Fairness, Prediction & Accountability

Definitions of fairness and justice, the ethics of data-driven prediction, classification as policy, and practical auditing methods. Cases from criminal justice, healthcare and finance.

Elective

AI, Race & Empire

How AI intersects with colonialism, global power and epistemic inequality. Decolonial and indigenous approaches to more just technological futures.

Elective

AI & International Security

How AI transforms national security, military strategy and geopolitics. Autonomous weapons, surveillance, cyber capabilities and arms control challenges.

Elective

AI, Economics & the Future of Work

How AI reshapes labour markets, productivity, wealth distribution and economic policy. Automation, job displacement, new forms of work, and debates around redistribution and growth.

Elective

Forecasting & Societal Decision-Making

Tools for improving societal rationality and anticipating AI trajectories. Superforecasting, calibrated reasoning, scenario planning, and frameworks for high-stakes decisions under deep uncertainty.

Elective

Module offerings and formats are indicative and subject to change. Not all modules listed will be available in a given year.


Learn from leading researchers and practitioners

The programme is directed by researchers at the Centre for the Future of Intelligence and draws on a network of contributors from Cambridge, other universities and frontier AI organisations.

Module Convenors & Supervisors

Modules are taught by researchers from CFI and the broader Cambridge community, spanning philosophy, social science, computer science, law, policy, HCI and design, and cultural and media studies. Specific teaching staff for the coming year will be confirmed closer to the start of the course. Dissertation supervisors are drawn from researchers at CFI.

Guest Speakers & External Contributors

The programme regularly features guest lectures from researchers and practitioners at other universities, policy organisations, frontier AI labs and industry, covering AI safety, governance, philosophy, economics, law and international security.


Knowledge and skills for the AI era

Graduates leave with the conceptual tools, practical skills and professional networks to pursue research, policy, governance or careers at the intersection of AI and society.

Critical thinking and clear communication: evaluating evidence and arguments carefully, and expressing ideas effectively through essays, presentations and debates.

AI literacy: how frontier systems work, how to use them as research tools, and where they fail.

Broad foundations across philosophy, history, social science, computer science, law, economics, public policy and critical theory as they relate to AI.

Forecasting and decision-making under uncertainty. Tools for thinking about where AI is heading and what that means.

Research skills in AI governance, risk assessment, safety, regulation and policy.

Thinking on your feet, developed through live debates, presentations and in-class exercises.

Training in independent research, culminating in a supervised dissertation on a topic of your choice.

A launchpad for doctoral research, policy roles in government and international organisations, or positions at AI companies where analytical depth matters.


What our students say

I particularly loved the flexibility of this course. The assessments aren't tied to specific modules, so you're free to research whatever interests you. That freedom made the course especially rewarding. With the guidance of my supervisors, I had the space to develop my own ideas — and realised I wanted to pursue a PhD.

Mathilda Mulert · 2024–25

The network I got exposed to, and the signal of the master's programme, meant I could secure a full-time role at the AI Safety Institute. CFI enabled me to draw connections between topics that domain experts often missed — enabling impactful research usually only possible later in one's career.

Jai Patel · 2023–24

One of the best aspects is the diverse cohort. Coming from different cultural backgrounds, academic disciplines and professional experiences, I learned so much about AI ethics from a variety of viewpoints. Everyone encouraged me to carve my own academic path and explore intersections between AI, ethics, law and philosophy.

Zoya Yousef · 2023–24


Join the MPhil in Ethics of AI, Data and Algorithms

We're looking for people passionate about the implications of AI, committed to interdisciplinary perspectives, and from a range of academic backgrounds and experiences.

What you'll need

  • Two academic references
  • Transcript
  • CV / résumé
  • Evidence of competence in English
  • Two writing samples (2,500–5,000 words each)
  • Statement of purpose (~600 words)
  • Research proposal (max 500 words)

Key dates

  • September — Applications open
  • October — Gates Scholarship deadline (US applicants)
  • December — Final application and university-wide funding deadline

Precise dates, application details and further information are available on the postgraduate admissions portal.

Scholarships and financial aid may be available. See funding options.

For queries: education@lcfi.cam.ac.uk

Apply to the MPhil.

Applications for 2027–28 expected to open in September 2026.

Apply Now