How will the use of generative AI to create ‘deepfakes’ that manipulate people’s likeness (face, body, voice) evolve? What is the psychological impact of being deepfaked, and what harmful uses (e.g. intimate image abuse, fraud, reputational damage) will develop and increase?

Background

Although there is already material evidence on the types of serious harm individuals encounter online, a number of emerging harms remain where the evidence base has yet to mature (e.g. epilepsy trolling, online animal abuse). SOH would like to close this significant gap in understanding the impact of encountering different types of serious harm online, and to identify the best approaches to measuring the impact of the Online Safety legislation.

SOH highlights the importance of Media Literacy in the digital age and calls for further studies to uncover barriers to engagement and to assess the effectiveness of DSIT programmes. This issue closely relates to Counter-Disinformation interventions, which require evidence on their effect on bystanders, on topic-specific disinformation, and on the tools that can be used to combat it.

Research on Safety Technology would greatly develop SOH's understanding of the relationship between DSIT's online safety objectives and today's technology market. A primary focus is improving Age Assurance (AA) measures, including ensuring transparency and assessing opportunities for the sector.

Next steps

If you are keen to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Full details are available at: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024

Source

This question was published as part of the set of ARIs in this document:

DSIT Areas of Research Interest 2024, GOV.UK

Related UKRI funded projects


  • Integrating user experience data into image algorithms to mitigate online harm

    The culmination of decades of academic research and commercial application, this proposal offers a step change on how algorithms account for the end user experience. Images hold within them varying degrees of emotional '...

    Funded by: Innovate UK

    Why might this be relevant?

Partially addresses the use of generative AI for online harm mitigation, but does not focus specifically on deepfakes.

  • Mapping and mitigating the threats to ordinary people from deep fakes

    Novel deep fake technology poses a serious and imminent threat to society and work is urgently needed to better protect ordinary people. Deep fakes (also termed synthetic media) refer to audio, image, text, or video that...

    Funded by: UKRI FLF

    Why might this be relevant?

    Addresses the threats of deepfakes, including psychological impact and harmful uses, with expertise in the field.

  • Tackling Child Exploitation in Live Streaming Applications

    Securium is a Cyber Intelligence company developing innovative products to protect businesses and individuals by detecting and preventing online harm such as grooming and exploitation, radicalisation, hate speech, and re...

    Funded by: Innovate UK

    Why might this be relevant?

    Does not directly address the use of generative AI for deepfakes or the psychological impact mentioned in the question.

  • PrivacyEye: Controlling Harmful Multimedia Sharing Among Children

    The increasing use of electronic devices and online applications among children in the UK has raised significant concerns about their online safety. Nearly 90% of children aged 0-18 go online daily, with those aged 5-15 ...

    Funded by: Innovate UK

    Lead research organisation: DE MONTFORT UNIVERSITY

  • Postdigital Intimacies and the Networked Public-Private

    Digital culture has become inextricable from all forms of intimate social and personal life, to the point of being imperceptible. This creates a number of global challenges, not least in how we make sense of ourselves, h...

    Funded by: AHRC

    Why might this be relevant?

Partially relevant, as it addresses the impact of digital culture on personal life and relationships but does not focus specifically on generative AI deepfakes.

  • Equally Safe Online

    We address the timely topic of online gender-based violence (GBV): Almost 1 in every 2 women and non-binary people (46%) reported experiencing online abuse since the beginning of COVID-19 (Glitch report, 2020). Our aim i...

    Funded by: EPSRC

    Why might this be relevant?

Fully relevant, as it directly addresses online gender-based violence, prevention, intervention, and support through advanced machine learning algorithms.