To what extent do AI companies face sufficient incentives to invest in risk measurement, prevention and mitigation?

Background

In the National AI Strategy, the government committed to enriching our understanding of AI as it impacts the economy and society more broadly. Additionally, we recently launched a steering board, chaired by the heads of both the government analysis and scientific functions, to ensure cohesive cross-government approaches to understanding AI impacts. An overview of the high-level questions we are asking in this regard is outlined in the section below. (https://www.gov.uk/government/publications/national-aistrategy)

Priority work we are currently developing to meet these commitments includes:

An analysis of the AI White Paper consultation to feed into the formal consultation response. This will allow us to take on board feedback from the public and key players in sectors across the economy, and to better tailor policy interventions to support strategic AI aims.

Establishing the AI Safety Institute to advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new frontier AI systems. The Institute will conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI, improving our understanding of the capabilities and risks of AI systems.

A monitoring and evaluation framework for AI regulatory interventions, developed in tandem with the AI regulatory white paper. This will develop our understanding of the key metrics to monitor with regard to AI governance and ecosystem impacts.

Research into the AI sector and its supply side, updating the AI Sector Study to establish a consistent and comparable set of economic indicators for AI producers and suppliers. This study helps us understand where the AI sector needs support in order to grow the UK's sovereign AI capability, in alignment with strategic priorities.

The development of a cross-economy national AI risk register, produced in tandem with a responsibility register that secured cross-Whitehall agreement on which departments hold which AI-related risks. The risk register will provide a single source of truth on AI risks which regulators, government departments, and external groups can use to prioritise further action.

Further research into compute and the best ways to leverage it to support the AI sector. This will be key to informing our response to the Future of Compute review, and to maximising the more than £1 billion of investment in state-of-the-art compute.

Next steps

If you would like to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Full details are available at: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024

Related UKRI-funded projects


  • Democratise access to AI governance through bringing responsible AI platform providers together and enabling access to SMEs

    Enzai has built a responsible AI platform which allows users to understand and manage the risks that come with AI, through policy and governance controls. The company is seeking to form a consortium in order to democrati...

    Funded by: Innovate UK

    Lead research organisation: ENZAI TECHNOLOGIES LIMITED

    Why might this be relevant?

    Partially relevant as it focuses on AI governance and risk management, but does not directly address incentives for investment.

  • Accelerating Trustworthy AI: developing a first-to-market AI System Risk Management Platform for Insurance Product creation

Algorithm adoption is rapidly expanding across business and society. The number of AI companies in the UK has grown by 600% over the last ten years to more than 1,300 companies, and AI adoption within businesses is predicted ...

    Funded by: Innovate UK

    Lead research organisation: HOLISTIC AI LIMITED

    Why might this be relevant?

    Partially relevant as it focuses on developing a Risk Management Platform for AI in the insurance industry, but does not directly address incentives for AI companies to invest in risk measurement, prevention, and mitigation.

  • LEAP - Legal Ecosystem for AI Proliferation

    LEAP is the Legal Ecosystem for AI Proliferation. Innovation in AI and the potential impact it is having on current and emerging industries cannot be underestimated. The speed of this innovation means the impacts are bar...

    Funded by: Innovate UK

    Lead research organisation: CHARLTON STONEHILL LTD

    Why might this be relevant?

    Partially relevant as it addresses risk analysis and legal frameworks for AI, but does not directly discuss incentives for investment.

  • AI Risk Management Platform for Insurance Industry

    Holistic AI (HAI) will lead a consortium in the development of the first-to-market AI Risk Management Platform for insurance. The Platform has been co-conceived with our insurer and AI technology partners to enable insur...

    Funded by: Innovate UK

    Lead research organisation: HOLISTIC AI LIMITED

    Why might this be relevant?

    Partially relevant as it focuses on AI risk management in the insurance industry, but does not directly tackle incentives for investment.

  • Multi-scale Operation-assurance evaluation Tool for AI (MOT4AI) Systems

    According to a UK government report (2019), approximately 50% of SMEs in the UK were using AI technology. The government has called for greater transparency in AI systems to ensure they are used ethically and fairly. We ...

    Funded by: Innovate UK

    Lead research organisation: DIGITAL READINESS & INTELLIGENCE LTD

  • Ethical AI Marketplace-as-a-Service to help businesses impacted by COVID build back better

    ProtectBox have developed an "artificial intelligence (AI) Engine" (a highly adaptable AI Marketplace-as-a-Service) with many user-friendly features that lets B2B/C/G/I users (in minutes, for free) assess, matc...

    Funded by: Innovate UK

    Lead research organisation: PROTECTBOX LTD

  • Enhancing AI Assurance through Comprehensive Compliance, Risk Management, and Explainability Solutions

AI TrustGuard (AITG) is a comprehensive AI-driven platform designed to address the growing need for AI compliance, risk management, and explainability across various industries. As AI systems continue to permeate div...

    Funded by: Innovate UK

    Lead research organisation: BASILICON GLOBAL LIMITED

AI Risk Management for SME ecosystem

    Until recently, the use of AI in business has been generally unregulated. However, this is rapidly changing. AI's increasing prevalence and wider consumer and legislator recognition of its role in society is driving forw...

    Funded by: Innovate UK

    Lead research organisation: HOLISTIC AI LIMITED

  • AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation

    Artificial Intelligence (AI) can have dramatic effects on industrial sectors and societies (e.g., Generative AI, facial recognition, autonomous vehicles). AI UK will pioneer a reflective, inclusive approach to responsibl...

    Funded by: EPSRC

    Lead research organisation: University of Southampton

  • Predictive modelling for HEI-Commercialisation Dashboard

    The use of technology in everyday life is increasing year on year. One of the methods that is becoming increasingly common to see is the use of Artificial Intelligence (AI) to support or deliver services. Frequently it i...

    Funded by: Innovate UK

    Lead research organisation: BUSINESSABLE LTD

Similar ARIs from other organisations