• What are the best ways to ensure that AI and other digital technology are used safely, ethically, and in ways that protect the data and interests of children, our workforce, and bodies? What forms of collaborative working, regulation and enforcement may be appropriate?

Background

Technology, including digital technology and artificial intelligence, has great potential to transform lives, but we need to develop our expertise to deploy these technologies well, and to help learners, families and carers use them safely and judiciously, taking account of the need for a healthy balance of “screen time” and time away from screens.

Full details can be found at: https://www.gov.uk/government/publications/department-for-education-areas-of-research-interest

Next steps

Get in touch with research.engagement@education.gov.uk

Related UKRI funded projects

  • Towards Embedding Responsible AI in the School System: Co-Creation with Young People

    Recent advances in Generative Artificial Intelligence (GenAI) have the potential to transform education, from reactive tweaks in assessment practices to fundamental philosophical debates about what we should value in the...

    Funded by: AHRC

    Lead research organisation: University of Edinburgh

    Why might this be relevant?

    The project focuses on embedding responsible AI in the school system, engaging young people in the process, and producing recommendations for educational policy.

  • Enabling a Responsible AI Ecosystem

    Problem Space: There is now a broad base of research in AI ethics, policy and law that can inform and guide efforts to construct a Responsible AI (R-AI) ecosystem, but three gaps must be bridged before this is achieved: ...

    Funded by: AHRC

    Lead research organisation: University of Edinburgh

    Why might this be relevant?

    The project aims to develop a UK-wide infrastructure for responsible AI, addressing gaps in AI ethics, policy, and law.

  • Seclea Platform - Responsible AI Tools for Everyone

    Artificial Intelligence has the potential to improve our lives with rapid, personalised and assistive services, but it also presents risks of negative effects on both society and individual citizens. Recent debacles have shown t...

    Funded by: Innovate UK

    Lead research organisation: SECLEA LTD

    Why might this be relevant?

    The project focuses on making AI transparent, explainable, auditable, and accountable, aligning with the goal of ensuring safe and ethical use of AI.

  • Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)

    In recent years, considerable effort has gone into defining "responsible" AI research and innovation. Though progress is tangible, many sectors still lack the tools and capabilities for operationalising and imp...

    Funded by: AHRC

    Lead research organisation: The Alan Turing Institute

  • AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation

    Artificial Intelligence (AI) can have dramatic effects on industrial sectors and societies (e.g., Generative AI, facial recognition, autonomous vehicles). AI UK will pioneer a reflective, inclusive approach to responsibl...

    Funded by: EPSRC

    Lead research organisation: University of Southampton

  • Automated Ethical AI Assurance Service

    Currently there is an explosion of guidance and regulation for the rapidly growing number of companies developing products & services using AI technologies in health. NHS staff and patients are yet to be convinced of...

    Funded by: Innovate UK

    Lead research organisation: HILLTOP DIGITAL LAB LTD

  • FRAIM: Framing Responsible AI Implementation and Management

    Context Increasing applications of AI technologies have necessitated rapid evolution in organisational policy and practice. However, these rapid changes have often been isolated in individual organisations and sectors, ...

    Funded by: AHRC

    Lead research organisation: University of Sheffield

  • FAITH: Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains

    The increasing requirement for trustworthy AI systems across diverse application domains has become a pressing need, not least due to the critical role that AI plays in the ongoing digital transformation addressing urgent...

    Funded by: Horizon Europe Guarantee

    Lead research organisation: University of Southampton

  • Responsible AI for Long-term Trustworthy Autonomous Systems (RAILS): Integrating Responsible AI and Socio-legal Governance

    Society is seeing enormous growth in the development and implementation of autonomous systems, which can offer significant benefits to citizens, communities, and businesses. The potential for improvements in societal wel...

    Funded by: SPF

    Lead research organisation: University of Oxford

  • Shaping 21st Century AI: Controversies and Closure in Media, Policy, and Research

    Talk about "artificial intelligence" (AI) is abundant. Politicians, experts and start-up founders tell us that AI will change how we live, communicate, work and travel tomorrow. Autonomous vehicles, the detecti...

    Funded by: ESRC

    Lead research organisation: University of Warwick