AI will democratise access to capabilities that were previously expensive or hard to obtain, and create new capabilities that did not previously exist. As barriers (e.g. technical skills, access to specialist equipment) are reduced, AI use will increase. What is the prevalence of AI-generated content online?
Background
Although there is already material evidence on the types of serious harm individuals encounter online, a number of emerging harms remain for which the evidence base has yet to mature (e.g. epilepsy trolling, online animal abuse). SOH would like to close this significant gap by improving understanding of the impact of encountering different types of serious harm online and by identifying the best approaches to measuring the impact of the Online Safety legislation.
SOH highlights the importance of Media Literacy in the digital age and calls for further studies to uncover barriers to engagement and to assess the effectiveness of DSIT programmes. This issue closely relates to Counter-Disinformation interventions, which require evidence on their effect on bystanders, on topic-specific disinformation, and on the tools that can be used to combat the issue.
Research on Safety Technology would greatly develop SOH's understanding of how DSIT's online safety objectives relate to today's technology market. A primary focus is improving Age Assurance (AA) measures, including ensuring transparency and assessing opportunities for the sector.
Next steps
If you are keen to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Please view full details: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024
Source
This question was published as part of the set of ARIs in the document linked above.
Related UKRI funded projects
- AI Safety Platform: Generative AI and Cybersecurity Training SaaS for Schools and Families
Problem statement: The rapid rise of generative AI has introduced unprecedented cybersecurity risks, particularly for students and families. Deepfakes, AI-driven scams, misinformation, identity theft, and c...
Funded by: Innovate UK
Lead research organisation: UNIVERSITY OF EAST LONDON
Why might this be relevant?
The project specifically addresses the prevalence of AI-generated content online and provides a solution to empower users with AI-driven cybersecurity skills.
- Safe Internet surfing with an intelligent child-centred shield against harmful content
The Internet provides high exposure to malicious content with direct impact on children's safety. Illicit, violent and pornographic material to name a few. The Internet is also an enabler for cyber victimisation such as ...
Funded by: Innovate UK
Why might this be relevant?
The project focuses on child safety online, which is related to online harms, but does not directly address the prevalence of AI-generated content online.
- AGENCY: Assuring Citizen Agency in a World with Complex Online Harms
The online world is a curious but uncertain world. It enriches many facets of life but at the same time exposes citizens to a variety of threats that may cause harm to them, their loved ones and to wider society. Many of...
Funded by: SPF
Why might this be relevant?
Partially relevant as it addresses online harms and citizen agency, but does not specifically focus on AI-generated content.