Researchers highlight need for public education on impact of algorithms.
In a new series of experiments, artificial intelligence (A.I.) algorithms were able to influence people’s preferences for fictitious political candidates or potential romantic partners, with the effect depending on whether the recommendations were explicit or covert. Ujué Agudo and Helena Matute of Universidad de Deusto in Bilbao, Spain, present these findings in the open-access journal PLOS ONE on April 21, 2021.
From Facebook to Google search results, many people encounter A.I. algorithms every day. Private companies are conducting extensive research on the data of their users, generating insights into human behavior that are not publicly available. Academic social science research lags behind private research, and public knowledge on how A.I. algorithms might shape people’s decisions is lacking.
To shed new light on this question, Agudo and Matute conducted a series of experiments that tested the influence of A.I. algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of fictitious political candidates or potential online dating partners, and asked the participants to indicate whom they would vote for or message. The algorithms promoted some candidates over others, either explicitly (e.g., “90% compatibility”) or covertly, such as by showing their photos more often than others.
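The paper itself does not include code, but the two manipulation styles described above can be illustrated with a minimal sketch. Everything below is hypothetical: the candidate names, the compatibility scores, and the exposure weights are invented for illustration, assuming that explicit promotion means attaching an overt high compatibility label to a target and covert promotion means over-representing the target’s photo in the sequence shown to a participant.

```python
import random

# Hypothetical sketch of the two manipulation styles described in the study.
# Candidate names, scores, and exposure weights are invented for illustration.

CANDIDATES = ["A", "B", "C", "D"]
TARGET = "B"  # the candidate the algorithm promotes

def explicit_promotion(candidates, target):
    """Explicit manipulation: attach an overt 'compatibility' label,
    giving the promoted candidate a conspicuously high score."""
    return {
        c: ("90% compatibility" if c == target
            else f"{random.randint(40, 60)}% compatibility")
        for c in candidates
    }

def covert_promotion(candidates, target, total_displays=20):
    """Covert manipulation: build the sequence of photos shown to a
    participant so the promoted candidate appears more often than the rest."""
    weights = [3 if c == target else 1 for c in candidates]
    return random.choices(candidates, weights=weights, k=total_displays)

if __name__ == "__main__":
    print("Explicit labels:", explicit_promotion(CANDIDATES, TARGET))
    shown = covert_promotion(CANDIDATES, TARGET)
    print("Covert exposure counts:", {c: shown.count(c) for c in CANDIDATES})
```

In this sketch the participant would see either the labeled scores (explicit condition) or only the stream of photos with no labels at all (covert condition), which mirrors the distinction the authors draw between overt recommendations and biased exposure.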
Overall, the experiments showed that the algorithms significantly influenced participants’ decisions about whom to vote for or message. For political decisions, explicit manipulation was effective, while covert manipulation was not. The opposite pattern emerged for dating decisions: covert manipulation swayed choices, while explicit manipulation did not.
The researchers speculate that these results might reflect people’s preference for explicit advice from humans on subjective matters such as dating, while being more receptive to explicit algorithmic advice on rational decisions such as voting.
In light of their findings, the authors express support for initiatives that seek to boost the trustworthiness of A.I., such as the European Commission’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI (XAI) program. Still, they caution that more publicly available research is needed to understand human vulnerability to algorithms.
Meanwhile, the researchers call for efforts to educate the public on the risks of blind trust in recommendations from algorithms. They also highlight the need for discussions around ownership of the data that drives these algorithms.
The authors add: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”
Reference: “The influence of algorithms on political and dating decisions” by Ujué Agudo and Helena Matute, 21 April 2021, PLOS ONE. DOI: 10.1371/journal.pone.0249454
Funding: Support for this research was provided by Grant PSI2016-78818-R from Agencia Estatal de Investigación of the Spanish Government, and Grant IT955-16 from the Basque Government, both awarded to HM. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.