A historical case study of AI in social media by Aristotle University

The IANUS partner Aristotle University of Thessaloniki is currently developing a historical case study on the use of Artificial Intelligence (AI) in social media. The motivation for the analysis is that trust toward AI is an emergent topic, gradually taking a core position within the broader thematic area of trust toward science, technology, and innovation (STI). The European Commission is also active in producing ethical guidelines for Trustworthy AI. The rationale behind the Guidelines is the development of a unique “ecosystem of trust” that provides a human-centric approach to AI applications, which should in turn increase the general public’s trust in these applications.

Currently, social media use, and particularly the news feed, occupies a dominant position in our everyday digital life. In terms of algorithmic operation, the social media news feed is highly subject to algorithmic curation; users express concern and skepticism toward the algorithmically driven (and occasionally biased) personalization systems that dictate the content they see. In terms of content, AI-driven and algorithmic systems may even be the cause of modified content, while poorly implemented algorithms cannot discern real news from fake news and end up spreading misinformation.

A preliminary analysis has already yielded insightful findings. In terms of algorithmic operation, a few studies adopt a user-centric perspective and examine algorithms as “experience technologies” (Cotter and Reisdorf, 2020). Negative attitudes and lack of trust are mainly due to (a) the negative emotional experiences that algorithms evoke and (b) algorithms’ opacity. The negative emotional experiences include fear about the use of personal data; feelings of surveillance; feeling manipulated and losing one’s uniqueness as a user; a diminished sense of decision-making ability; and the perception that algorithms are incapable of capturing the human experience. Opacity, in turn, arises because algorithms are continuously updated and collect additional data, while explicit feedback mechanisms are largely absent.

In terms of algorithm-driven content, the analysis of Eurobarometer surveys has highlighted time-dependent factors that affect citizens’ trust in social media. It is admittedly difficult to give a definitive answer as to whether EU citizens trust social media content. However, some statistics are worth mentioning: in 2016, 53% of respondents considered social media content trustworthy, while in 2018 only 26% did. Trust is observed to be positively affected by the news agency and source of information; the diversity of information, views, and opinions; socio-demographics (e.g., age, educational level); the frequency of social media use; and the involvement of responsible scientists. In contrast, trust is observed to be negatively affected by political, governmental, and commercial pressures on content; exposure to fake news; and inadequate knowledge of science and technological developments.

In the coming months, the IANUS partner Aristotle University of Thessaloniki will provide a more detailed report analyzing the historical case study on the use of Artificial Intelligence (AI) in social media.
