How will AI impact societal cohesion, including through trust in institutions and the government and through factionalism?

Background

"In the National AI Strategy, the government made commitments to enrich our understanding of AI as it impacts the economy and society more broadly. Additionally, we recently launched a steering board chaired by the heads of both the government analysis and scientific functions, to ensure cohesive cross government approaches to understanding AI impacts. An overview of the high-level questions we are asking in this regard are outlined in the section below. (https://www.gov.uk/government/publications/national-aistrategy)

Priority work we are currently developing to meet these commitments includes:

An analysis of the AI White Paper consultation to feed into the formal consultation response. This will allow us to take on board feedback from the public and key players in sectors across the economy, and to better tailor policy interventions to support strategic AI aims.

Establishing the AI Safety Institute to advance the world’s knowledge of AI safety by carefully examining, evaluating, and testing new frontier AI systems. The Institute will conduct fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI, improving our understanding of the capabilities and risks of AI systems.

A monitoring and evaluation framework for AI regulatory interventions, developed in tandem with the AI regulatory white paper. This will develop our understanding of the key metrics to monitor with regard to AI governance and ecosystem impacts.

Research into the AI sector and supply. Updating the AI Sector Study to establish a consistent and comparable set of economic indicators for the AI sector in terms of producers and suppliers. This study helps us understand where the AI sector needs support in order to grow the UK's sovereign AI capability, in alignment with strategic priorities.

The development of a cross-economy national AI risk register, produced in tandem with a responsibility register that secured cross-Whitehall agreement on which departments hold which AI risks. The risk register will provide a single source of truth on AI risks which regulators, government departments, and external groups can use to prioritise further action.

Further research into compute and the best ways to leverage compute to support the AI sector. This will be key to informing our response to the Future of Compute Review and maximising the £1 billion+ investment in state-of-the-art compute."

Next steps

If you are keen to register your interest in working and connecting with the DSIT Science, Innovation and Research Directorate, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs

Link to ARI document: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024

Related UKRI funded projects


  • Responsible AI for Long-term Trustworthy Autonomous Systems (RAILS): Integrating Responsible AI and Socio-legal Governance

    Society is seeing enormous growth in the development and implementation of autonomous systems, which can offer significant benefits to citizens, communities, and businesses. The potential for improvements in societal wel...

    Funded by: SPF

    Lead research organisation: University of Oxford

    Why might this be relevant?

    The RAILS project focuses on the societal impact of autonomous systems and the effects of change, which directly relates to the question about AI impact on societal cohesion and trust in institutions.

  • Enabling a Responsible AI Ecosystem

    Problem Space: There is now a broad base of research in AI ethics, policy and law that can inform and guide efforts to construct a Responsible AI (R-AI) ecosystem, but three gaps must be bridged before this is achieved: ...

    Funded by: AHRC

    Lead research organisation: University of Edinburgh

    Why might this be relevant?

    The project addresses the gaps in constructing a Responsible AI ecosystem, which is related to the question about AI impact on societal cohesion and trust in institutions.

  • What does Artificial Intelligence Mean for the Future of Democratic Society? Examining the societal impact of AI and whether human rights can respond

    This research examines the impacts that States' use of artificial intelligence (AI) in decision making processes has on how individuals and societies evolve and develop and what this means for democratic society. Underst...

    Funded by: FLF

    Lead research organisation: Queen Mary University of London

    Why might this be relevant?

    The project examines the societal impact of AI on democratic society and human rights, which partially addresses the question about AI impact on societal cohesion and trust in institutions.

  • Shaping 21st Century AI: Controversies and Closure in Media, Policy, and Research

    Talk about "artificial intelligence" (AI) is abundant. Politicians, experts and start-up founders tell us that AI will change how we live, communicate, work and travel tomorrow. Autonomous vehicles, the detecti...

    Funded by: ESRC

    Lead research organisation: University of Warwick

  • Seclea – Building Trust in AI

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. At the same time, it presents risks of negative impacts on both society and individual citizens. Recent deba...

    Funded by: Innovate UK

    Lead research organisation: SECLEA

  • Seclea Platform - Responsible AI Tools for Everyone

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services. It presents risks of negative effects on both society and individual citizens. Recent debacles have shown t...

    Funded by: Innovate UK

    Lead research organisation: SECLEA LTD

  • Everyone-Virtuoso-Everyday: Exploring Strong Human-Centred Perspectives to Diversify and Disrupt AI Discovery and Innovation

    This Fellowship is about re-orientating interactive AI systems, away from systems that might lead to people feeling powerless, redundant and undervalued, turning towards approaches that let people experience joy, creativ...

    Funded by: EPSRC

    Lead research organisation: Swansea University

  • AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation

    Artificial Intelligence (AI) can have dramatic effects on industrial sectors and societies (e.g., Generative AI, facial recognition, autonomous vehicles). AI UK will pioneer a reflective, inclusive approach to responsibl...

    Funded by: EPSRC

    Lead research organisation: University of Southampton

  • People Powered Algorithms for Desirable Social Outcomes

    Algorithms increasingly govern interactions between state and citizen, and as the 'digital by default' model of government-citizen interaction spreads, this will increase. This increase, combined with the value of data sci...

    Funded by: EPSRC

    Lead research organisation: Cranfield University

  • EPSRC NetworkPlus on Social Justice through the Digital Economy

    Technological advances in Artificial Intelligence and Big Data have already given rise to extensive socio-economic transformation and new and emerging technologies, such as distributed ledgers and the Internet of Things...

    Funded by: EPSRC

    Lead research organisation: Newcastle University