Responsible AI
Builds equity, inclusivity, interpretability, privacy, and security into AI from the ground up
Programme Information
As AI systems are used more widely across society, it is critical to ensure that they are responsible. This matters for all users of AI, but especially when the technology is applied to sensitive, assistive or predictive tasks such as medical or legal decision-making.
Responsible AI has many aspects. AI systems should have human oversight. They should be resilient, secure, safe, reliable and reproducible. The data and models used by these systems should be transparent, and the decisions they make should be explainable and interpretable. Humans need to be aware that they are interacting with an AI system, and they should be informed of its capabilities and limitations. AI systems should be accessible to everyone. They should be free of bias and involve people throughout their life cycle. Systems should also be sustainable and environmentally friendly, and they should benefit other living beings and future generations.
Progress in responsible AI requires interdisciplinary research spanning AI, ethics, law and other fields.
Featured
Machine Learning Group
Helping to ensure that the use of AI has beneficial outcomes for society, with work spanning explainability, fairness, robustness, scalability, privacy, safety, ethics and finance.
Group Lead: Adrian Weller
Trustworthy Machine Learning
Building tools for routing decision-makers to appropriate forms of decision support, and for cataloguing how AI systems are used in decision-making contexts around the world.
Group Lead: Umang Bhatt