Building equity, inclusivity, interpretability, privacy, and security into AI from the ground up
As AI systems are used more widely across society, it is critical to ensure that they are responsible. This matters for all users of AI, but it is especially critical when the technology is applied to sensitive, assistive, or predictive tasks such as medical or legal decision-making.
Responsibility in AI spans many dimensions. AI systems should operate under human oversight. They should be resilient, secure, safe, reliable, and reproducible. The data and models the systems use should be transparent, and the decisions they make should be explainable and interpretable. People need to know when they are interacting with an AI system, and they should be informed of its capabilities and limitations. AI systems should be accessible to everyone, free of bias, and should involve people throughout their life cycle. They should also be sustainable and environmentally friendly, benefiting other living beings and future generations.
Progress in responsible AI requires interdisciplinary research spanning AI, ethics, law, and other fields.