We are constantly bending and being bent to the will of others—and neurotechnology may be enabling new methods for those seeking to bend others to their will. In 2021, Ahmed Shaheed, during his mandate as the UN Special Rapporteur on freedom of religion or belief, presented the first-ever report on freedom of thought, which argued that “freedom of thought” should be interpreted to include the right not to reveal one’s thoughts nor to be penalized for them. He also recommended that freedom of thought include the right not to have our thoughts manipulated. But manipulation is a slippery concept. If ill-defined, an absolute prohibition on it could do more harm to human interactions than good.
About a decade ago, I went down a rabbit hole trying to untangle claims about philosophical and legal free will. The written debate goes back at least two thousand years, but neuroscientists have recently joined the fray by arguing that decision-making is hardwired in our brains. Punishment, they argue, cannot be justified by retributivism—an eye for an eye—because people are not morally culpable for their actions. I disagree and have sought in my own scholarship to explain why freedom of action is a freedom worth defending.
In a well-known 1971 essay titled “Freedom of the Will and the Concept of a Person,” the American philosopher Harry Frankfurt describes what he calls a peculiar characteristic of humans—that we can form “second-order desires.” Besides our subconscious preferences, biases, and desires, we can also “want to have (or not to have) certain desires and motives.” Frankfurt calls this capacity for reflective self-evaluation of those biases and desires “higher-order volition.” We don’t have to be fully aware of our unconscious desires to engage in reflective self-evaluation. We might be completely unaware of some desires, while being mistaken about others. Free will, he argues, is our capacity to form higher-order volitions, by recognizing certain desires as our own.
Frankfurt uses an example of two animals addicted to drugs. One is conflicted about his addiction—he craves the drug but also wants to be free from it. He wants his desire to be free from his addiction to become the one that drives his behavior. The other animal also has conflicting desires but lacks the capacity for self-reflection, and so doesn’t form a preference between them. The first animal is human while the latter is not, because only the first makes one of his desires “more truly his own, and in so doing, he withdraws himself from the other.” Frankfurt implicitly connects this to manipulation, by explaining that when the human addict is unable to break his addiction, he feels like the force “moving him to take the drug is a force other than his own.” When we believe that something other than our free will is driving us to act contrary to a desire we identify with, we feel that we are being manipulated.
Frankfurt’s example helps us distinguish between freedom of will and freedom of action. Freedom of will is our capacity to identify with our desires. Freedom of action enables us to make our will our own through our actions. Our freedom of will may be illusory—we commit to desires, biases, or preferences believing we have done so freely, but we may have chosen that preference because it was unconsciously primed by our environment. Our freedom may also be interfered with, making it harder to make our volition effective, if we are manipulated into acting compulsively with a “force other than [our] own.” We may want to stop checking Instagram every five minutes, but cleverly timed notifications compulsively draw us back in.
In Autonomy and Behavior Control, Gerald Dworkin wrote that a person’s motivation can belong to them without it truly being “their” motivation. This happens when the motivation is created through deception, or by short-circuiting the person’s desires and beliefs, interfering with their ability to reflect rationally on their interests and making them a passive recipient of the change. In a recent article, the philosophers Daniel Susser, Beate Roessler, and Helen Nissenbaum extended manipulation by deception to the digital age, arguing that acceptable influence appeals to our “capacity for conscious deliberation and choice,” while manipulation takes “hold of the controls,” depriving us of “authorship over [our] actions” and driving us “toward the manipulator’s ends.”
Other scholars define manipulation as interfering with our “mental integrity,” which Andrea Lavazza describes as “the individual’s mastery of his mental states and his brain data.” He argues that we should draw a bright line prohibiting unconsented-to interferences that “can read, spread, or alter such states and data in order to condition the individual in any way.” Marcello Ienca and Roberto Andorno take a more tempered view of non-consensual interferences with the brain, focusing on those that have the potential to cause the individual harm.
These accounts all coalesce around a definition of manipulation as hidden attempts to use our cognitive biases, emotions, or subconscious “as vulnerabilities to exploit” by bypassing our capacity for conscious thought. What they get wrong is that they build on an outdated Freudian view that our psyche has “two minds”—a conscious and an unconscious one. We have since learned that unconscious processes use the same brain regions in the same ways as conscious processes. Our unconscious mind is primed all the time through regular stimuli (rather than hidden and subliminal ones). Think of the popcorn and soda advertisements before a movie begins. They are hardly hidden, but they play to our baked-in desires. Advertisers and tech giants have just gotten much better at identifying and targeting them. Indeed, social psychologists have argued for decades that people are unaware of the powerful influences that are brought to bear on their choices and behavior. Which is why it’s critical that we understand what others can and can’t do to change our minds as neurotechnology enables newfound ways to track and hack the human brain.
The starkest examples of manipulation include assaulting our brains with intentional (and nonconsensual) administration of “mind control” drugs, or using weapons to rob us of even having the capacity to choose. These clearly violate our right to self-determination and freedom of thought. The more difficult cases to resolve, however, are the subtler influences that shape our everyday decision-making and that are quickly becoming normalized. It’s much easier to prime us to act in ways that are consistent with our existing goals than to use these weapons. Priming us with cues that are related to our goals will focus our “selective attention” on “goal-relevant features of the environment,” which can shape our choices that follow.
Professors of marketing and psychology Gráinne Fitzsimons, Tanya Chartrand, and Gavan Fitzsimons found compelling evidence of this effect when they subliminally primed study participants with Apple and IBM brand logos. The Apple logo prime led people to act more creatively on subsequent study tasks compared to subliminal IBM logo priming—but only when creativity was a part of the participants’ self-descriptions. Apple evoked in these participants an association of creativity, leading those with a prior stated goal of being creative to act more creatively on subsequent tasks. Because IBM didn’t evoke the same association, even those with creativity as a stated goal didn’t act more creatively when primed with IBM instead.
Even asking us questions about our hidden vices can change our subsequent behavior. We often have conflicting attitudes about behaviors like smoking, drinking, and using drugs. We get a short-term reward (like a dopamine hit in our brain) when we indulge, but we also understand the negative long-term consequences that go with them. When we hold conflicting explicit negative and implicit positive attitudes about a behavior, priming may give us “license to sin.” Frankfurt’s human addict wants to break his addiction, but asking him how often he plans to take the drug in the next week can nudge him toward doing so more often, despite his explicit preference otherwise. When researchers asked students about their attitudes toward skipping class, they reported strongly negative attitudes toward doing so, but then skipped class more frequently in the weeks following. When study participants were asked how often they would go out drinking or watch television instead of studying, they also did so more frequently in the week following. But when framed negatively—telling participants that drinking and wasting time watching television are vices to be avoided—the vice behavior remained the same. How an influencer frames a question can liberate us to sin or increase our ability to avoid doing so.
All of this makes it exceptionally unrealistic at best, or outdated at worst, to define unlawful manipulation as intentionally using hidden influences to affect our decision-making. Most practices are not hidden at all; we just don’t realize how they influence our behavior. When neuromarketers use advances in neurotechnology to discover what makes us tick, and then use that information to make their products more enticing, they don’t render us unable to act consistently with our goals any more than putting tempting candy or gossip magazines near the check-out counter at a store does. As of yet, no one has discovered the so-called buy button in our brains. When the Disinformation Dozen, the twelve people to whom most of the misleading information about vaccines has been attributed, intentionally exploit evolutionary shortcuts in our brains, using clickbait, alarming headlines, or claims couched in pseudoscience to make us more susceptible to fake news, they don’t prevent us from getting vaccinated, even if their bad arguments do appeal to our heuristics.
But if a product is designed to be addictive and becomes actually or nearly impossible to resist, our freedom of action will be hindered and our self-determination and freedom of thought will be put at risk: two of the three rights that comprise our right to cognitive liberty.
Shaheed concedes that freedom of thought cannot and should not be used to prevent “ordinary social influences, such as persuasion.” We may encourage others, advise them, even cajole them, he argues. But at some point, an influence crosses the line from permissible persuasion to impermissible manipulation. He offers a nonexclusive set of factors to consider, including (1) whether the person has consented to the practice with fully and freely informed consent; (2) whether a reasonable person would be aware of the intended influence; (3) whether there is a power imbalance between the influencer and target; and (4) whether there has been actual harm to the person subject to manipulation.
These are helpful but still don’t make clear the nature of the influence we are defending ourselves against. We can’t and shouldn’t attempt to regulate every marketer, politician, artist, or entity who tries to appeal to our unconscious biases, desires, and neural shortcuts, whether those attempts are hidden or visible and targeted at our unconscious or our conscious neural processes, lest we interfere with everyday interactions that are part of what it means to be human. But when a person or entity tries to override our will by making it exceedingly difficult to act consistently with our desires, and they act with the intention to cause actual harm, they violate our freedom of action, and our right to cognitive liberty should be invoked as a reason to regulate their conduct.
However begrudgingly, we must admit that neuromarketing does not inherently violate cognitive liberty, so long as the research is conducted ethically and the findings are not used to intentionally cause us harm. Neuromarketing may help marketers better understand the higher-order goals and preferences we have committed ourselves to, and serve us up more of what we want. We can’t confidently say the same about intentional efforts to exploit our brains by bypassing our goals and preferences to addict us to technology, social media platforms, or other products, all of which are designed to overcome our freedom of action and have harmful consequences for individuals.
While our brains may fall for bad arguments when cleverly framed, we can and should encourage societal interventions that nudge us to slow down and think critically, or to combat intentional efforts to exploit our brains. When Twitter asks “Would you like to read the article first?” before retweeting it, it’s asking us to slow down and think critically before we act. More companies ought to implement mechanisms that encourage users to do the same. And we should aspire to do so ourselves even when we aren’t nudged by others. We can and should become more aware of how others are using clickbait headlines or emotional appeals to tap into shortcuts in how we process information, and use our knowledge of those practices to secure ourselves against them. We should avidly check the accuracy and credibility of information before accepting it as true, and seek out a diversity of sources and perspectives to combat intentional efforts to limit our thinking. Even taking breaks from technology, news, and other sources of information can give our brain time to recharge and process information.
The right to cognitive liberty protects our right to self-determination over our brains and mental processes. It includes the right to be free from manipulation by others, but also the right to resist manipulation and reclaim our brains. Building resilience to tactics that shortcut our thinking will help us exercise this right. But freedom of thought should not be used as an excuse for others to filter information for us.
As for Shaheed’s recommendation that we consider whether a person has freely and voluntarily consented to an intervention? While consent will rarely be enough to shield us from the coming encroachments on cognitive liberty, for at least the newest technique we turn to next, it should be a critical factor in judging its legitimacy.
From The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology by Nita A. Farahany. Copyright © 2023 by the author and reprinted by permission of St. Martin’s Publishing Group.