One expected impact of AI will be on trust in information. How might AI reduce public trust in information available online? Do UK citizens trust AI-generated online content?
Background
Although there is already material evidence on the types of serious harms individuals encounter online, a number of emerging harms remain where the evidence base has yet to mature (e.g. epilepsy trolling, online animal abuse). SOH would like to close this significant gap in understanding the impact of encountering different types of serious harms online and to identify the best approaches to measuring the impact of the Online Safety legislation.
SOH highlights the importance of Media Literacy in the digital age and asks for further studies to uncover barriers to engagement and to assess the effectiveness of DSIT programmes. This issue closely relates to Counter-Disinformation interventions, which require evidence on their effect on bystanders, on topic-specific disinformation, and on the tools that can be used to combat this issue.
Research on Safety Technology would greatly develop SOH’s understanding of the relationship between DSIT’s online safety objectives and today’s technology market. A primary focus is improving Age Assurance (AA) measures, including ensuring transparency and assessing opportunities for the sector.
Next steps
If you would like to register your interest in working and connecting with the DSIT Digital Technology and Telecoms Group, and/or in submitting evidence, please complete the DSIT-ARI Evidence survey: https://dsit.qualtrics.com/jfe/form/SV_cDfmK2OukVAnirs.
Full details are available at: https://www.gov.uk/government/publications/department-for-science-innovation-and-technology-areas-of-research-interest/dsit-areas-of-research-interest-2024
Source
This question was published as part of the set of ARIs in the DSIT Areas of Research Interest 2024 document (linked above).
Related UKRI-funded projects
- An innovative, AI-driven application that helps users assess/action information pollution for social media content.
Sway is a UK-based social media safety technology SME with a core project team of Mike Bennett (CEO and serial entrepreneur), Daniela Fernandez (CXO and entrepreneur) and Alan Simpson (CTO and digital transformation stra...
Funded by: Innovate UK
Why might this be relevant?
The project addresses how AI can reduce public trust in information online and specifically focuses on developing a tool to evaluate and reduce information pollution on social media.
- UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
Contrary to public opinion, young people care about their personal data and want a more transparent digital world, a digital world they can trust. For example, little is known about how Amazon is able to tailor advertise...
Funded by: EPSRC
Why might this be relevant?
This project focuses on understanding how AI impacts trust in online information and aims to provide citizens with skills to judge and trust online content.
- ReEnTrust: Rebuilding and Enhancing Trust in Algorithms
As interaction on online Web-based platforms is becoming an essential part of people's everyday lives and data-driven AI algorithms are starting to exert a massive influence on society, we are experiencing significant te...
Funded by: EPSRC
Why might this be relevant?
This project explores rebuilding trust in algorithms and online platforms, addressing the issue of trust breakdown due to algorithmic processes.