How should government maintain trust and accountability in using AI and machine learning? What is the public appetite for government making use of these in decision making?
Background
Our aim is to support government and other public sector organisations in finding and exploiting emerging technologies and other innovative solutions to operational service and policy delivery challenges.
Next steps
Should you have questions relating to this ARI, please contact co_aris@cabinetoffice.gov.uk. If your query relates to a specific question, please state its title in your email.
Source
This question was published as part of the set of ARIs in this document:
Related UKRI funded projects
- Seclea Platform - Responsible AI Tools for Everyone
Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. It presents risks of negative effects on both society and individual citizens. Recent debacles have shown t...
Funded by: Innovate UK
Why might this be relevant?
Addresses the need for transparency, explainability, and accountability in AI to maintain public trust.
- Seclea – Building Trust in AI
Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. At the same time, it presents risks of negative impacts on both society and individual citizens. Recent deba...
Funded by: Innovate UK
Why might this be relevant?
Focuses on building trust in AI through transparency and accountability, aligning with the question's concerns.
- Turing AI Fellowship: Citizen-Centric AI Systems
AI holds great promise in addressing several grand societal challenges, including the development of a smarter, cleaner electricity grid, the seamless provision of convenient on-demand mobility services, and the ability ...
Funded by: EPSRC
Why might this be relevant?
While it does not fully address the question, it offers insights into citizen-centric AI systems and trust-building mechanisms.