How should government maintain trust and accountability in using AI and machine learning? What is the public appetite for government making use of these in decision making?

Background

Our aim is to support government and other public sector organisations in finding and exploiting emerging technologies and other innovative solutions to operational service and policy delivery challenges.

Next steps

Should you have questions relating to this ARI, please contact co_aris@cabinetoffice.gov.uk. If your query relates to a specific question, please state its title in your email.

Source

This question was published as part of the set of ARIs in this document:

CO ARIs 2019 20190429

Related UKRI funded projects


  • Seclea Platform - Responsible AI Tools for Everyone

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. It presents risks of negative effects on both society and individual citizens. Recent debacles have shown t...

    Funded by: Innovate UK

    Why might this be relevant?

    Addresses the need for transparency, explainability, and accountability in AI to maintain public trust.

  • Seclea – Building Trust in AI

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. At the same time, it presents risks of negative impacts on both society and individual citizens. Recent deba...

    Funded by: Innovate UK

    Why might this be relevant?

    Focuses on building trust in AI through transparency and accountability, aligning with the question's concerns.

  • Turing AI Fellowship: Citizen-Centric AI Systems

    AI holds great promise in addressing several grand societal challenges, including the development of a smarter, cleaner electricity grid, the seamless provision of convenient on-demand mobility services, and the ability ...

    Funded by: EPSRC

    Why might this be relevant?

    While not fully addressing the question, it provides insights on citizen-centric AI systems and trust-building mechanisms.

  • Manchester Metropolitan University and Greater Manchester Combined Authority KTP 24_25 R1

    To develop, test, and embed an Artificial Intelligence Assessment Framework Tool that will allow users from across the public sector to understand the risks, limitations and opportunities of AI tools and technologies....

    Funded by: Innovate UK

  • Democratise access to AI governance through bringing responsible AI platform providers together and enabling access to SMEs

    Enzai has built a responsible AI platform which allows users to understand and manage the risks that come with AI, through policy and governance controls. The company is seeking to form a consortium in order to democrati...

    Funded by: Innovate UK

  • Turing AI Fellowship: Trustworthy Machine Learning

    Machine learning (ML) systems are increasingly being deployed across society, in ways that affect many lives. We must ensure that there are good reasons for us to trust their use. That is, as Baroness Onora O'Neill has s...

    Funded by: EPSRC

  • FAITH: Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains

    The increasing requirement for trustworthy AI systems across diverse application domains has become a pressing need, not least due to the critical role that AI plays in the ongoing digital transformation addressing urgent...

    Funded by: Horizon Europe Guarantee

  • TrustMe: Secure and Trustworthy AI platform

    According to a recent survey by global analytics firm FICO and Corinium, 65% of companies cannot explain how Artificial Intelligence (AI) model decisions/predictions are made, and poor data has caused 11.8 million/year, fin...

    Funded by: Innovate UK

  • People Powered Algorithms for Desirable Social Outcomes

    Algorithms increasingly govern interactions between state and citizen, and as the 'digital by default' model of government-citizen interaction spreads, this will increase. This increase, combined with the value of data sci...

    Funded by: EPSRC

    Why might this be relevant?

    The project specifically focuses on algorithmic interactions between government and citizens, addressing trust and accountability in AI and machine learning.

  • Using Machine Learning to make the best use of Innovate UK’s operational data.

    In April 2016 the European Parliament released the General Data Protection Regulation (GDPR), in which any individual subject to automated profiling has the right to “meaningful information about the logic involved”...

    Funded by: Innovate UK