Centre for the Future of Intelligence · University of Cambridge
The MPhil in Ethics of AI, Data and Algorithms is a full-time, research-intensive programme equipping the next generation of researchers, policymakers and industry leaders to understand and address the ethical, social and practical dimensions of artificial intelligence.
We Are
Based at the Centre for the Future of Intelligence (CFI), a research centre at the University of Cambridge, we bring together philosophers, social scientists, computer scientists, legal scholars, designers, cultural theorists and policy researchers with a shared mission: ensuring AI goes well. That breadth of disciplines and methods, all focused on AI, is what makes CFI unusual.
You Are
You want to do serious research on AI and its implications, whether your next step is a PhD, a role in policy or government, or a position in industry. You enjoy interdisciplinary challenges and want to develop rigorous research skills. We welcome applicants from philosophy, social science, computer science, law, policy, design, humanities and beyond.
Programme Overview
The programme covers AI ethics, governance, safety, evaluation, the economics and geopolitics of AI, human-AI relationships, cultural and critical perspectives, and the future of work, while allowing students to pursue specialised interests through independent research and engagement with the range of expertise at CFI.
Join students from philosophy, law, computer science, history, political science, economics and beyond. Different perspectives are compared, challenged and integrated throughout the year.
Assessments aren't tied to specific modules, so you're free to research whatever interests you within the programme's scope, guided by expert supervision.
A structured core and recommended modules provide shared foundations. Electives change each year to reflect the research landscape, letting you go deeper into the areas that matter most to you.
Attend seminars, reading groups, conferences and events at CFI, while drawing on Cambridge's wider ecosystem in science, philosophy, law and policy.
Course Structure
The programme runs full-time across the three Cambridge terms. Taught modules build shared foundations and specialist knowledge. A mix of essays, presentations, group work and other formats develops your ability to research and communicate independently.
Michaelmas term: Core modules & electives. Research Essay 1 (4,000 words).
Lent term: Elective modules & seminars. Research Essay 2 (8,000 words). Works-in-progress presentations.
Easter term: Dissertation (up to 12,000 words). Presentation. Supervision and revision.
One core module provides shared foundations: an introduction to key concepts, theories and debates in AI ethics and society. A recommended technical module builds intuition for how AI and ML systems work. Students attend at least four additional elective modules from a list that changes each year.
Students work individually with domain experts to produce three pieces of written work of increasing length and depth. You receive dedicated one-to-one supervision for each essay, building from a shorter research essay to a full dissertation. Those intending to pursue doctoral work will also develop a PhD proposal.
Everyone takes the same core module. Beyond that, your choice of electives, essay topics and dissertation focus lets you shape a personalised stream that reflects your interests and career goals. There are no fixed tracks — you design your own path and can highlight it on your CV. Streams don’t map one-to-one onto modules; they emerge from the combination of modules you attend, the topics you write about, and the expertise you develop.
One stream might focus on evaluation, risk assessment, international governance, forecasting, the policy challenges of frontier models and risks in potential fast take-off scenarios.
Another might centre on fairness, accountability, critical theory, cultural perspectives, human–AI interaction and the philosophical foundations of AI ethics.
A third might combine legal frameworks, international security, the economics of AI, labour markets, the design of regulatory institutions and the geopolitics of AI competition.
These are examples; students are free to define their own combination.
How You'll Learn
We combine research essays with teaching formats that develop the skills you can't pick up alone at home with a chatbot: arguing on your feet, working in teams, thinking under pressure.
Research essays: Three essays of increasing length (4,000 to 12,000 words), each supervised one-to-one. We ask for original analytical or empirical contributions, not literature reviews.
Debates: Argue different sides of live controversies in AI policy and ethics. We also use "anti-debate" formats where the goal is arriving at truth together rather than winning.
Group projects: Work in small teams on research questions and present your findings. This is the kind of collaboration that policy and industry roles actually require.
Simulations: Work through real-world scenarios such as international AI governance negotiations, organisational crises and decision-making under uncertainty, then reflect on what happened and why.
Works-in-progress presentations: Present your developing research to peers and faculty. Get peer feedback and sharpen your arguments before they reach the page.
Interactive seminars: Group problem-solving, in-class exercises and collaborative analysis. Learn to think and work with others under real-time pressure.
Since this is an MPhil on AI and society, we treat the programme's own use of AI tools as part of the intellectual project. Early in the year, a dedicated session covers how to use LLMs well and where they go wrong:
Using LLMs for literature discovery, brainstorming, stress-testing arguments, and strengthening your own reasoning
Understanding LLM limitations: hallucination, sycophancy, reasoning failures, distributional biases
What intellectual integrity looks like when capable AI tools are available to everyone
Indicative Modules
Elective topics vary each year, reflecting the current research interests of staff and developments in the field. The following are examples of modules that have been or may be offered.
Core · Key concepts, theories and debates: AI capabilities and risks, bias, fairness, moral reasoning, machine decision-making, value alignment, and anticipating future challenges.
Recommended · How AI and ML systems are built, evaluated and deployed: from regression and classification to reinforcement learning and language modelling.
Elective · Emerging legal frameworks for general-purpose AI (GPAI): the EU AI Act, systemic risk regulation, governance under uncertainty, and the role of capability evaluation in law.
Elective · Why robust evaluation matters, alternative approaches, and the challenges of assessing increasingly capable systems for safety and societal impact.
Elective · Empirical approaches to AI's societal effects: public attitudes, misinformation, epistemic ecosystems, human–AI interaction and the social psychology of AI.
Elective · How stories, media and cultural imaginaries shape the development and reception of AI. Feminist, STS and critical theory perspectives on technology and power.
Elective · Can machines have minds, or only the appearance of minds? Philosophical and neuroscientific perspectives on AI consciousness, moral status and digital welfare.
Elective · Where AI ethics meets the user. How interface design, defaults, nudges and interaction patterns shape behaviour, trust and autonomy, and how to design AI-powered products that respect the people who use them.
Elective · Definitions of fairness and justice, the ethics of data-driven prediction, classification as policy, and practical auditing methods. Cases from criminal justice, healthcare and finance.
Elective · How AI intersects with colonialism, global power and epistemic inequality. Decolonial and indigenous approaches to more just technological futures.
Elective · How AI transforms national security, military strategy and geopolitics. Autonomous weapons, surveillance, cyber capabilities and arms control challenges.
Elective · How AI reshapes labour markets, productivity, wealth distribution and economic policy. Automation, job displacement, new forms of work, and debates around redistribution and growth.
Elective · Tools for improving societal rationality and anticipating AI trajectories. Superforecasting, calibrated reasoning, scenario planning, and frameworks for high-stakes decisions under deep uncertainty.
Module offerings and formats are indicative and subject to change. Not all modules listed will be available in a given year.
People
The programme is directed by researchers at the Centre for the Future of Intelligence and draws on a network of contributors from Cambridge, other universities and frontier AI organisations.
Modules are taught by researchers from CFI and the broader Cambridge community, spanning philosophy, social science, computer science, law, policy, HCI and design, and cultural and media studies. Specific teaching staff for the coming year will be confirmed closer to the start of the course. Dissertation supervisors are drawn from researchers at CFI.
The programme regularly features guest lectures from researchers and practitioners at other universities, policy organisations, frontier AI labs and industry, covering AI safety, governance, philosophy, economics, law and international security.
What You'll Gain
Graduates leave with the conceptual tools, practical skills and professional networks to pursue research, policy, governance or careers at the intersection of AI and society.
Critical thinking and clear communication: evaluating evidence and arguments carefully, and expressing ideas effectively through essays, presentations and debates.
AI literacy: how frontier systems work, how to use them as research tools, and where they fail.
Broad foundations across philosophy, history, social science, computer science, law, economics, public policy and critical theory as they relate to AI.
Forecasting and decision-making under uncertainty. Tools for thinking about where AI is heading and what that means.
Research skills in AI governance, risk assessment, safety, regulation and policy.
Thinking on your feet, developed through live debates, presentations and in-class exercises.
Training in independent research, culminating in a supervised dissertation on a topic of your choice.
A launchpad for doctoral research, policy roles in government and international organisations, or positions at AI companies where analytical depth matters.
Student Voices
I particularly loved the flexibility of this course. The assessments aren't tied to specific modules, so you're free to research whatever interests you. That freedom made the course especially rewarding. With the guidance of my supervisors, I had the space to develop my own ideas — and realised I wanted to pursue a PhD.
The network I got exposed to, and the signal of the master's programme, meant I could secure a full-time role at the AI Safety Institute. At CFI I could draw connections between topics that domain experts often missed, enabling impactful research usually only possible later in one's career.
One of the best aspects is the diverse cohort. Coming from different cultural backgrounds, academic disciplines and professional experiences, I learned so much about AI ethics from a variety of viewpoints. Everyone encouraged me to carve my own academic path and explore intersections between AI, ethics, law and philosophy.
How to Apply
We're looking for people passionate about the implications of AI, committed to interdisciplinary perspectives, and from a range of academic backgrounds and experiences.
Precise dates, application details and further information are available on the postgraduate admissions portal.
Scholarships and financial aid may be available. See funding options.
For queries: education@lcfi.cam.ac.uk