Unveiling AI-evoked (dis)trust – a historical case study

The issue of trust in social media algorithms has gained prominence in recent years, alongside the dominant presence of social media in our everyday lives. While earlier studies suggested that users were largely unaware of how these algorithms function (Eslami et al., 2015), participants in more recent studies are keenly conscious of the algorithmic curation of social media content.

Most users have a negative attitude toward news feed personalization and exhibit a lack of trust in it. This is due to several reasons, ranging from concerns about personal data to feelings of ‘frustration’ caused by the algorithm limiting users’ decision-making ability and agency. A lack of trust toward scientists themselves (i.e., technology developers), however, has not been observed. Hesitancy and distrust are mostly directed at the capabilities of AI and algorithms, which constantly collect new data and are continuously trained. Lack of knowledge and uncertainty about how personal data will be used are also frequent sources of distrust.

Eurobarometer surveys also suggest that many EU citizens do not trust the operation and content of social media. Indicatively, 26% of respondents in Greece do not find news on social media trustworthy, while only 37% of respondents at the EU level consider such news to be independent of political, governmental, or commercial pressures.

Additionally, users do not seem to trust the algorithmic operation of social media. Firstly, AI algorithms can evoke various negative emotions in users. Swart (2021) examined the perspective of Facebook users in the Netherlands and concluded that users’ emotional experiences largely determine their attitude and trust towards algorithms and social media personalization systems. Distrust mostly emerges out of concerns about personal data, as well as from users feeling that they are missing important news. Notably, in an earlier study by Eslami et al. (2015), users attributed missing stories to their acquaintances’ actions to exclude them, or to their own ‘false actions’ (e.g., scrolling too quickly), rather than to the Facebook News Feed algorithm. By contrast, participants in recent studies are highly aware of algorithmic news curation (Swart, 2021; Alvarado & Waern, 2018) and of the reasons why they miss some news.

Furthermore, many users tend to become skeptical when irrelevant results appear in their news feeds instead of the desired ones. More precisely, most social media users’ distrust is activated when the algorithm presents ‘false’ content (Swart, 2021; Eslami et al., 2019) or even annoying and out-of-date results (Bucher, 2017).

Social media algorithmic operation is also associated with additional negative feelings, including feelings of surveillance. As reported by Bucher (2017), Facebook users often get the feeling of being classified and watched in an unpleasant way. In fact, users’ trust is further hindered when, apart from being watched, they feel manipulated, losing their decision-making ability and their uniqueness as users (Reviglio and Agosti, 2020). As argued by Swart (2021), automatic algorithmic decisions are far from neutral and create a sense of limited agency. For example, users can easily feel “uneasy about the ways in which they perceive the Facebook algorithm to make decisions on their behalf, controlling what they see and do not get to see, or even think that there are intentional manipulations being experimented by the Facebook administration” (Bucher, 2017, p. 39). Finally, unpleasant feelings can be caused by the algorithms’ “strong memory, the cruel connections they can make, and their overall inhuman functioning” (Bucher, 2017, p. 38). Algorithms systematically collect users’ data and occasionally present reviews or flashbacks of users’ personal moments and posts on social media; being incapable of capturing human experience, however, this algorithmic curation of reviews can remind users of unpleasant moments, discouraging them from placing trust in the algorithm.

Another major hindering factor is the opacity of the algorithms themselves, which negatively affects users’ trust and raises various concerns about social media use. Pasquale (2015) argues that the selection mechanisms of algorithms mostly remain opaque due to the commercial nature of most social media platforms, a phenomenon also described as the “algorithmic black-box problem in social media” (Reviglio and Agosti, 2020, p. 2). Users’ negative perception of this opacity is amplified by the algorithms being systematically updated and collecting additional data, which evokes both trust and comprehension issues (Swart, 2021). As argued by Eslami et al. (2015), algorithmic interfaces rarely include feedback mechanisms that could enhance users’ comprehension. Comprehension issues caused by opacity lead many users to believe that algorithms lack agency, again triggering distrust and negative emotions due to unexplainable and inconsistent (‘false’) results (Eslami et al., 2019). Users also assume that opaque algorithms may be biased and lead to deceptive results (ibid.).

IANUS partners presented these data during the second of a series of workshops on trust and distrust in science. The IANUS project treated the material as a historical case study to unveil the different factors that affect (dis)trust in AI. The audience of stakeholders, scholars, and professionals attending the event also noted that ‘trust’ could be understood as a default state, while ‘distrust’ is the outcome of a learning process or an experience that moves the original trustor away from fully trusting; the result can be total distrust or case-specific distrust. Consequently, it was suggested that it might be better to speak of ‘confidence’ instead of trust, which opens interesting windows onto new research on ‘algorithmic confidence’ and trust in AI.

Overall, the workshop not only highlighted important areas of study for addressing (dis)trust in AI but also emphasized that scientists, too, have values and intentions. Therefore, these, too, need to be made more transparent, together with the values and intentions of the media reporting on science.
