How can we best stress test the UK’s playbooks for different risks becoming crises in an ongoing way?

Background

In the National AI Strategy, the government made commitments to enrich our understanding of AI as it impacts the economy and society more broadly. We also recently launched a steering board, chaired by the heads of the government's analysis and scientific functions, to ensure cohesive cross-government approaches to understanding AI impacts. An overview of the high-level questions we are asking in this regard is outlined in the section below. (https://www.gov.uk/government/publications/national-ai-strategy)

Priority work we are currently developing to meet these commitments includes:

An analysis of the AI White Paper consultation to feed into the formal consultation response. This will allow us to take on board feedback from the public and from key players in sectors across the economy, and to better tailor policy interventions to support strategic AI aims.

Establishing the AI Safety Institute to advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new frontier AI systems. The Institute will conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI, improving our understanding of the capabilities and risks of AI systems.

A monitoring and evaluation framework for AI regulatory interventions, developed in tandem with the AI regulatory white paper. This will build our understanding of the key metrics to monitor with regard to AI governance and ecosystem impacts.

Research into the AI sector and its suppliers. Updating the AI Sector Study will establish a consistent and comparable set of economic indicators for the AI sector's producers and suppliers. This study helps us understand where the AI sector needs support in order to grow the UK's sovereign AI capability, in alignment with strategic priorities.

The development of a cross-economy national AI risk register, produced in tandem with a responsibility register that secured cross-Whitehall agreement on which departments hold which AI-related risks. The risk register will provide a single source of truth on AI risks that regulators, government departments, and external groups can use to prioritise further action.

Further research into compute and the best ways to leverage it to support the AI sector. This will be key to informing our response to the Future of Compute Review, and to maximising the £1 billion-plus investment in state-of-the-art compute.

Next steps

If you are keen to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Full details are available at: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024

Related UKRI funded projects

  • IDEAS Factory - Global View

    The aim of this project is to scope a form of dashboard that gives policy makers an integrated view of the state of the UK, both at the current time, and into the past. If we are equipped with a better view of the UK, w...

    Funded by: EPSRC

    Lead research organisation: University of Oxford

    Why might this be relevant?

    Partially relevant: it focuses on providing policy makers with a dashboard for understanding the state of the UK, but does not specifically address stress testing playbooks for different risks becoming crises.

  • AGILE - AGnostic risk management for high Impact Low probability Events

    AGILE will design, develop, and apply a holistic methodological framework and practical tools for understanding, assessing, managing, and communicating HILP events from a systemic risk and resilience perspective. The pr...

    Funded by: Horizon Europe Guarantee

    Lead research organisation: University College London

    Why might this be relevant?

    Fully relevant: it focuses on designing a methodology for understanding, assessing, managing, and communicating high impact low probability events from a systemic risk and resilience perspective, which aligns closely with stress testing playbooks for different risks becoming crises.

  • Democratise access to AI governance through bringing responsible AI platform providers together and enabling access to SMEs

    Enzai has built a responsible AI platform which allows users to understand and manage the risks that come with AI, through policy and governance controls. The company is seeking to form a consortium in order to democrati...

    Funded by: Innovate UK

    Lead research organisation: ENZAI TECHNOLOGIES LIMITED

    Why might this be relevant?

    Partially relevant: it focuses on democratising access to AI governance through a responsible AI platform, which could support stress testing playbooks, but does not directly address the specific question.

  • Systemic environmental risk analysis for threats to UK recovery from COVID-19

    UK Recovery from COVID-19 needs to balance the needs of the economy, societal cohesion and health. The Environment is not currently explicit in some framings of national recovery, yet there are a number of major risks in...

    Funded by: COVID

    Lead research organisation: University of Reading

  • From models to insight: Effective use of models to inform decisions

    Decision-makers are often keen to "follow the science" in highly-charged contexts such as climate policy, pandemic response, economic policy and humanitarian crisis response. In situations like these, where dec...

    Funded by: FLF

    Lead research organisation: University College London

  • Multi-scale Operation-assurance evaluation Tool for AI (MOT4AI) Systems

    According to a UK government report (2019), approximately 50% of SMEs in the UK were using AI technology. The government has called for greater transparency in AI systems to ensure they are used ethically and fairly. We ...

    Funded by: Innovate UK

    Lead research organisation: DIGITAL READINESS & INTELLIGENCE LTD

  • Centre for the Evaluation of Complexity Across the Nexus (CECAN)

    Responding to the increasing recognition of complexity of policy and policy implementation, CECAN produced innovation in policy evaluation for policies relating to energy, food, water and the environment (the 'Nexus'). C...

    Funded by: ESRC

    Lead research organisation: University of Surrey

Similar ARIs from other organisations