The European Union’s new law on artificial intelligence could leave the door open for AI to access our subconscious minds.
The neurorights initiative led by the Neurorights Foundation advocates the recognition of a new set of protections against the challenges posed by advances in neurotechnology and artificial intelligence. Some of these protections are being debated in connection with the Artificial Intelligence Act currently being negotiated within the EU’s governing bodies. Among other matters, this law is intended to regulate the ability of AI to influence our subconscious (much as in the Cambridge Analytica case, but at far deeper levels).
Ignasi Beltran de Heredia, dean of the Faculty of Law and Political Science at the Universitat Oberta de Catalunya (UOC) and author of the book “Inteligencia artificial y neuroderechos” (Aranzadi, 2023), has just published an open-access article examining the challenges posed by advances in AI and questioning the EU’s latest bill from the perspective of neuroscience.
The risks of giving AI access to our subconscious
According to estimates, only around 5% of human brain activity is conscious. The remaining 95% takes place subconsciously: not only do we have no real control over it, we are not even aware that it is happening. As Beltran de Heredia notes in his article, we remain unaware of this extraordinary torrent of neural activity because of the complexity of the interaction between our conscious mind and our subconscious behavior, and because of our complete lack of control over the forces that guide our lives.
However, this does not mean that people cannot be influenced subconsciously. “There are two ways for artificial intelligence to do this,” he explained. “The first is by collecting data about people’s lives and building a decision architecture that steers them towards a particular decision. The other – which is currently less developed – involves using applications or devices to directly create impulses that our subconscious mind cannot resist, generating impulsive responses at a subliminal level.”
“As we gradually develop better and more powerful machines and become more closely connected to them, both options will become increasingly widespread. Algorithms will have more information about our lives, and creating tools to generate these impulsive responses will become easier […] The risk of these technologies is that, just like the Pied Piper of Hamelin, they will make us dance without our knowing why.”
In Beltran de Heredia’s opinion, the field in which we are most likely to see the first attempts to influence human behavior through AI is work, and more specifically occupational health. He argues that a number of intrusive technologies are already in use. These include devices that monitor bus drivers to detect microsleep and electroencephalography (EEG) sensors used by employers to monitor employees’ brainwaves for stress and attention levels while at work. “It’s hard to predict the future but, if we don’t restrict such intrusive technologies while they’re still at the earliest stages of development, the most likely scenario is that they’ll keep improving and spreading their tendrils in the name of productivity.”
The (blurry) limits proposed by the EU
The new artificial intelligence regulation currently being discussed by the EU seeks to anticipate the possible future risks of this and other uses of AI. Article 5.1 of the original bill contained an express prohibition on the placing on the market, putting into service, or use of an AI system capable of influencing a person beyond the conscious level in order to distort that person’s behavior. However, the amendments gradually introduced since then have slowly diluted the absolute nature of this prohibition.
The current bill, which will serve as the reference for the final wording of the law, bans such techniques only if three conditions are met: they are intended to be manipulative or deceptive; they materially impair a person’s ability to make an informed decision, causing them to take a decision they would not otherwise have taken; and they cause significant harm. In addition, the prohibition will not apply to AI systems used for approved therapeutic purposes.
“Under the proposal, the AI ban will apply when there is serious harm and the person ends up doing something they wouldn’t otherwise have done. But that’s an unrealistic standard. If I can’t access my subconscious, I can’t possibly prove what I would’ve done without the stimulus, and I can’t prove the harm either […] If subliminal advertising is already banned outright, without qualification, why are we leaving room for subliminal conditioning by artificial intelligence?”
According to Beltran de Heredia, if we leave the door open to our subconscious mind, even for good reasons, we won’t be able to control who has access to it, how it is accessed or the aims of that access. “Some may think that these concerns belong to an unlikely dystopian future. And yet there’s no doubt that we’re already being intruded upon at a depth that was unimaginable only a few years ago, and the public should be given the fullest protection possible. Our subconscious mind represents our most private selves and should be completely sealed from outside access. Indeed, we shouldn’t even be discussing it.”
There’s still much we don’t know about how our brain works and how the conscious and subconscious parts of our mind interact. The brain remains a very elusive organ and, although science is making great strides in this field, we still don’t fully understand the ways in which its functioning could be affected by certain stimuli. “We need to be aware of the risk of giving other people and companies access to our inner selves at such deep levels. In the context of the data economy, many public and private institutions are competing for access to our information but, paradoxically, it’s been shown time and time again that individuals place little value on their privacy,” he concluded.
Reference: “Algoritmos y condicionamiento por debajo del nivel consciente: un análisis crítico de la propuesta de Ley de Inteligencia Artificial de la Unión Europea” by Ignasi Beltran de Heredia Ruiz, 5 September 2023, Revista de la Facultad de Derecho de México. DOI: 10.22201/fder.24488933e.2023.286.86406
Source: SciTechDaily