How will AI affect existing kinds of harmful online content (e.g. online abuse, scams) and what new kinds of online harmful content might it give rise to?

Background

Although there is already material evidence on the types of serious harms individuals encounter online, a number of emerging harms remain where the evidence base has yet to mature (e.g. epilepsy trolling, online animal abuse). SOH would like to close this significant gap by improving understanding of the impact of encountering different types of serious harms online, and of the best approaches to measuring the impact of the Online Safety legislation.

SOH highlights the importance of Media Literacy in the digital age and asks for further studies to uncover barriers to engagement as well as the effectiveness of DSIT programmes. This issue closely relates to Counter-Disinformation interventions, which require evidence on their effect on bystanders, on topic-specific disinformation, and on the tools that can be used to combat this issue.

Research on Safety Technology would greatly develop SOH's understanding of the relationship between DSIT's online safety objectives and today's technology market. A primary focus is improving Age Assurance (AA) measures, including ensuring transparency and assessing opportunities for the sector.

Next steps

If you are keen to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group and/or submitting evidence, please complete the DSIT-ARI Evidence survey - https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Full details are available at: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024

Source

This question was published as part of the set of ARIs in this document:

DSIT Areas of Research Interest 2024 (GOV.UK)

Related UKRI funded projects


  • Safe Internet surfing with an intelligent child-centred shield against harmful content

    The Internet provides high exposure to malicious content with direct impact on children's safety. Illicit, violent and pornographic material to name a few. The Internet is also an enabler for cyber victimisation such as ...

    Funded by: Innovate UK

    Why might this be relevant?

    The project focuses on developing a child-centred shield against harmful online content, addressing the impact of encountering different types of serious harms online.

  • AI Safety Platform: Generative AI and Cybersecurity Training SaaS for Schools and Families

    Problem statement: The rapid rise of generative AI has introduced unprecedented cybersecurity risks, particularly for students and families. Deepfakes, AI-driven scams, misinformation, identity theft, and c...

    Funded by: Innovate UK

    Lead research organisation: UNIVERSITY OF EAST LONDON

    Why might this be relevant?

    The project addresses AI-driven cybersecurity risks, deepfakes, scams, and cyberbullying, which are relevant to the question on the impact of AI on harmful online content.

  • PrivacyEye: Controlling Harmful Multimedia Sharing Among Children

    The increasing use of electronic devices and online applications among children in the UK has raised significant concerns about their online safety. Nearly 90% of children aged 0-18 go online daily, with those aged 5-15 ...

    Funded by: Innovate UK

    Lead research organisation: DE MONTFORT UNIVERSITY

    Why might this be relevant?

    The project focuses on controlling harmful multimedia sharing among children, addressing concerns about online safety, cyberbullying, and exposure to harmful content.

  • Equally Safe Online

    We address the timely topic of online gender-based violence (GBV): Almost 1 in every 2 women and non-binary people (46%) reported experiencing online abuse since the beginning of COVID-19 (Glitch report, 2020). Our aim i...

    Funded by: EPSRC

    Why might this be relevant?

    The project specifically addresses online gender-based violence and aims to create safer online spaces through advanced Machine Learning algorithms.

  • Tackling Child Exploitation in Live Streaming Applications

    Securium is a Cyber Intelligence company developing innovative products to protect businesses and individuals by detecting and preventing online harm such as grooming and exploitation, radicalisation, hate speech, and re...

    Funded by: Innovate UK

  • An innovative, AI-driven application that helps users assess/action information pollution for social media content.

    Sway is a UK-based social media safety technology SME with a core project team of Mike Bennett (CEO and serial entrepreneur), Daniela Fernandez (CXO and entrepreneur) and Alan Simpson (CTO and digital transformation stra...

    Funded by: Innovate UK

  • Systems for Internet Safety: Counteracting Online Predators (SIS:COP)

    "In 2016, NSPCC reported a 50% increase in cases of online grooming. The proposed 12-month project - SIS:COP intends to demonstrate the potential for a system embodying advanced approaches for detecting and preventi...

    Funded by: Innovate UK

  • Detox: Human-led AI to automate and radically improve online content moderation

    In this project, Rewire Online Limited will develop and commercially trial a new Artificial Intelligence (AI)-powered product, _Detox_, which massively improves how platforms moderate online content. Detox will automatic...

    Funded by: Innovate UK

  • ISIS: Protecting children in online social networks

    The aim of the Isis project is to develop an ethics-centred monitoring framework and tools for supporting law enforcement agencies in policing online social networks for the purpose of protecting children. The project wi...

    Funded by: EPSRC

    Why might this be relevant?

    The project focuses on monitoring online social networks for child exploitation, which is a form of harmful online content.

  • Integrating user experience data into image algorithms to mitigate online harm

    The culmination of decades of academic research and commercial application, this proposal offers a step change on how algorithms account for the end user experience. Images hold within them varying degrees of emotional '...

    Funded by: Innovate UK