How should government maintain trust and accountability in using AI and machine learning? What is the public appetite for government making use of these in decision making?

Background

Our aim is to support government and other public sector organisations in finding and exploiting emerging technologies and other innovative solutions to operational service and policy delivery challenges.

Next steps

Should you have questions relating to this ARI please contact co_aris@cabinetoffice.gov.uk. If your query relates to a specific question please state its title in your email.

Source

This question was published as part of the set of ARIs in this document:

CO ARIs 2019 20190429

Related UKRI funded projects


  • Seclea Platform - Responsible AI Tools for Everyone

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. It presents risks of negative effects on both society and individual citizens. Recent debacles have shown t...

    Funded by: Innovate UK

    Lead research organisation: SECLEA LTD

    Why might this be relevant?

    The project aims to build public trust and confidence in autonomous decision-making processes using AI.

  • Seclea – Building Trust in AI

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. At the same time, it presents risks of negative impacts on both society and individual citizens. Recent deba...

    Funded by: Innovate UK

    Lead research organisation: SECLEA

    Why might this be relevant?

    The project aims to build public trust and confidence in autonomous decision-making processes using AI.

  • Democratise access to AI governance through bringing responsible AI platform providers together and enabling access to SMEs

    Enzai has built a responsible AI platform which allows users to understand and manage the risks that come with AI, through policy and governance controls. The company is seeking to form a consortium in order to democrati...

    Funded by: Innovate UK

    Lead research organisation: ENZAI TECHNOLOGIES LIMITED

    Why might this be relevant?

    The project aims to democratise access to an AI governance and risk management platform, which is relevant to maintaining trust and accountability in the use of AI and machine learning.

  • FAITH: Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains

    The increasing requirement for trustworthy AI systems across diverse application domains has become a pressing need not least due to the critical role that AI plays in the ongoing digital transformation addressing urgent...

    Funded by: Horizon Europe Guarantee

    Lead research organisation: UNIVERSITY OF SOUTHAMPTON

  • TrustMe: Secure and Trustworthy AI platform

    According to a recent survey by global analytics firm FICO and Corinium, 65% of companies cannot explain how Artificial Intelligence (AI) model decisions/predictions are made, and poor data has caused 11.8 million/year, fin...

    Funded by: Innovate UK

    Lead research organisation: UNIVERSITY OF WOLVERHAMPTON

  • People Powered Algorithms for Desirable Social Outcomes

    Algorithms increasingly govern interactions between state and citizen and as the 'digital by default' model of government-citizen interaction spreads this will increase. This increase, combined with the value of data sci...

    Funded by: EPSRC

    Lead research organisation: CRANFIELD UNIVERSITY

    Why might this be relevant?

    The project specifically focuses on algorithmic interactions between government and citizens, addressing trust and accountability in AI and machine learning.

  • Using Machine Learning to make the best use of Innovate UK’s operational data.

    In April of 2016 the European Parliament released the General Data Protection Regulation (GDPR), in which any individual subject to automated profiling has the right to “meaningful information about the logic involved.”...

    Funded by: Innovate UK

    Lead research organisation: EVOLUTION ARTIFICIAL INTELLIGENCE LTD

  • FAIR: Framework for responsible adoption of Artificial Intelligence in the financial seRvices industry

    AI technologies have the potential to unlock significant growth for the UK financial services sector through novel personalised products and services, improved cost-efficiency, increased consumer confidence, and more eff...

    Funded by: EPSRC

    Lead research organisation: The Alan Turing Institute

  • Enhancing AI Assurance through Comprehensive Compliance, Risk Management, and Explainability Solutions

    AI TrustGuard (AITG) is a comprehensive AI-driven platform designed to address the growing need for AI compliance, risk management, and explainability across various industries. As AI systems continue to permeate div...

    Funded by: Innovate UK

    Lead research organisation: BASILICON GLOBAL LIMITED

  • FRAIM: Framing Responsible AI Implementation and Management

    Context: Increasing applications of AI technologies have necessitated rapid evolution in organisational policy and practice. However, these rapid changes have often been isolated in individual organisations and sectors, ...

    Funded by: AHRC

    Lead research organisation: University of Sheffield

    Why might this be relevant?

    The project aims to establish shared values and knowledge for responsible and ethical AI implementation and management in organisations, aligning with the question's focus on government trust and accountability in AI.

Similar ARIs from other organisations