What Does It Mean to Be a "Free Thinker" When Algorithms Control Your Beliefs?

Ask yourself this question: How do you know you are free? “I can do anything I want within the bounds of the law, and that’s freedom,” you might say. But how did you come to want the things that you want? Are your wants intrinsic, like hunger or thirst? Or did some external force influence you to want one thing over another, like a flashy advertisement? In the wake of the recent controversy surrounding the British political consulting firm Cambridge Analytica, these questions, which have been around since the dawn of TV ad campaigns, have taken on a darker and more urgent tone in the digital age.

Internet marketers working for groups like Cambridge Analytica and political actors across the globe have figured out how to influence the outcomes of democratic elections using targeted social media ad campaigns. This is a massive problem because it attacks the legitimacy of elections: if elections are meant to measure the will of the people, but the people want what corporations and political groups systematically influence them to want, then elections are simply a measure of how effectively the corporations have influenced the people using their marketing budgets. Votes become data points in the ever-expanding data set which marketers use to build ever more influential algorithms. Elections become a technological arms race between investors to see who can capture a larger market share with the most highly refined algorithms. Democracy is twisted into a marketplace in which the group with the best algorithm and the most capital to invest wins, and the convictions of voters are the raw resource to be mined. 

Given the ubiquity of these belief-forming algorithms in our lives, from Google to YouTube to Facebook and beyond, we must ask ourselves: when algorithms control our beliefs, are we really free? After all, in order to be truly free, our wants and desires must be freely acquired. If a large corporation with hundreds of engineers, designers, writers, psychologists, and marketers has figured out how to build a web of interconnected ads and websites effective enough to capture your attention, suck you in, and influence you into wanting something you did not want beforehand, say, a certain political candidate to be president, would you still say that you freely chose that candidate when you voted for them on election day?

One of the most pressing questions facing Americans ahead of election day on November 3rd, 2020 is how to protect our elections from the influence of online marketing campaigns of this sort. The problem is that it is extremely easy to influence election outcomes.

In episode #915 of the Planet Money podcast, titled How To Meddle In An Election, NPR reporters interviewed a researcher named David Goldstein who decided to try to reproduce the results of the Cambridge Analytica strategy in order to test its efficacy. The idea behind the strategy was to create profiles of people using data bought from tech companies like Facebook, and to use those profiles to set up a personalized ecosystem of online content meant to subtly persuade people to change their minds, often without them even knowing it was happening, “kinda like drawing a fly into a web,” said Goldstein.

Moreover, while the ads that draw people into the spider’s web are public, they are not shown to everyone the way they would be on TV or billboards. Instead, each advertisement is shown only to a select group of individuals on social media platforms who are targeted in advance by the ad companies. The ads are designed for specific people using data about what they have clicked on in the past, and they are refined through an iterative process of testing over time to make them more enticing for the targeted individuals to click on. Eventually, each ad generates a predictable number of clicks. When users do click on an ad, they are brought to websites and blogs with articles written in language designed to change their political opinions or influence them to vote for a certain candidate. Importantly, even though not everyone will click on an ad, maybe 3% of the targeted people will, and if a swing of a few percentage points is all that a candidate needs to win an election, then that ad could make the difference.

Worryingly, this method can also be used for voter suppression. The goal of David Goldstein’s experiment, for instance, was to encourage Republican voters in Alabama who supported Roy Moore to stay home on election day during the 2017 special election for the U.S. Senate. The ecosystem of websites and content that people were sucked into when they clicked on one of Goldstein’s ads was designed specifically to discourage them from showing up to the polls. Goldstein describes the results to NPR’s Alex Goldmark as follows:

GOLDSTEIN: The Democrats in our experimental group turned out at a 4% higher level than the Democrats in the control group.
GOLDMARK: Four percent.
GOLDSTEIN: Four percent.
GOLDMARK: So you think that your work caused 4% higher turnout...
GOLDSTEIN: Yes.
GOLDMARK: ...In the places you tried it?
GOLDSTEIN: Yes.
GOLDMARK: That's a lot.
GOLDSTEIN: It is. It is. Most...
GOLDMARK: That is more than the margin of victory in a lot of elections.
GOLDSTEIN: Loads.
GOLDMARK: Yeah.
GOLDSTEIN: (Laughter).
GOLDMARK: It's enough to flip an election.
GOLDSTEIN: Absolutely.
GOLDMARK: And the Republicans, did it work on them?
GOLDSTEIN: It did. The experimental condition was 2.5% lower for the moderate Republicans.
GOLDMARK: OK, so you believe you caused a 2.5% lower turnout for moderate Republicans that you...
GOLDSTEIN: Exactly.
GOLDMARK: ...That you targeted?
GOLDSTEIN: Exactly.
GOLDMARK: OK.
GOLDSTEIN: And then for the conservative Republicans, it was 4.4%...
GOLDMARK: OK.
GOLDSTEIN: ...Of a drop-off.
GOLDMARK: Which is also a lot to take out of someone's base.
GOLDSTEIN: Exactly.
CHANG: Roy Moore loses the election to Doug Jones, the Democrat, by just 1.5 percentage points; that's just a little over 20,000 votes statewide.
GOLDMARK: So I asked David what did he think that his science experiment had caused?
GOLDSTEIN: I would say we could reasonably lay claim to about 8 to 9,000 of those votes.
GOLDMARK: So that sounds like a lot of votes.
GOLDSTEIN: (Laughter).
GOLDMARK: No, I'm serious. Like, it's...
GOLDSTEIN: (Laughter) Yeah, I know.
GOLDMARK: It just - it's a big claim.
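
The mechanics Goldstein describes boil down to a loop: target a narrow audience, serve an ad, count the clicks, refine the ad, and repeat until the click rate becomes predictable. The sketch below is a toy simulation of that loop only; the ad variants, the audience, and the click probabilities are invented for illustration and are not taken from Goldstein’s experiment or NPR’s reporting.

```python
# Toy simulation of the iterative ad-refinement loop described above.
# All numbers and names here are hypothetical, not real campaign data.
import random

random.seed(42)

# The campaign never sees these "true" rates; it can only estimate them
# by serving impressions and counting clicks.
TRUE_CLICK_RATES = {"variant_a": 0.010, "variant_b": 0.022, "variant_c": 0.031}

def serve(variant: str) -> bool:
    """Simulate one ad impression and whether the targeted user clicks it."""
    return random.random() < TRUE_CLICK_RATES[variant]

def refine(rounds: int = 10, impressions_per_round: int = 5_000) -> str:
    """Test every variant repeatedly, then report the one with the best click rate."""
    stats = {v: {"shown": 0, "clicks": 0} for v in TRUE_CLICK_RATES}
    for _ in range(rounds):
        for variant in stats:
            for _ in range(impressions_per_round):
                stats[variant]["shown"] += 1
                stats[variant]["clicks"] += serve(variant)
    # Estimated click-through rate per variant after testing.
    ctr = {v: s["clicks"] / s["shown"] for v, s in stats.items()}
    for variant, rate in sorted(ctr.items(), key=lambda kv: -kv[1]):
        print(f"{variant}: estimated click-through rate {rate:.3%}")
    return max(ctr, key=ctr.get)

winner = refine()
print(f"The campaign converges on {winner}, whose clicks are now predictable.")
```

The sketch leaves out everything that makes real campaigns effective, above all the profiling data used to pick the audience in the first place, but it shows the shape of the process: nothing in it requires the targeted person’s awareness, only enough impressions to learn which message works on whom.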

According to NPR, countless similar campaigns are running behind the scenes, and many of them will never be made public. The inevitable conclusion is that, as things currently stand, elections can be bought in the United States. But there is a deeper lesson to be learned from Cambridge Analytica and Goldstein’s experiment that all Americans need to be aware of going into the 2020 election season: we are facing an attack on our liberty that is unlike anything that has come before it.

That attack comes from the expansion of a new type of power. Dr. Shoshana Zuboff, Charles Edward Wilson Professor Emerita at Harvard Business School and author of the bestseller The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, calls this new type of power “Instrumentarian” power.

According to Dr. Zuboff, Instrumentarian power is different from totalitarian power in several key ways. Totalitarian power strives to control an individual from the inside out. It forces societies to conform to an ideology, combining industrial capitalism with violence to achieve this goal. The industrialization of violence characterized the totalitarian states of the 20th century, manifested in the gulags and work camps of countless nations, which continue even to this day in North Korea, and as some argue, in the prison industrial complex in America. In contrast, as Dr. Zuboff puts it in the Hidden Forces podcast episode from 2/15/19, “The nature of instrumentarian power… is used to shape, tune, herd the behavior of individuals and populations toward the kinds of commercial outcomes that surveillance capitalists and their business customers seek, and it uses the instrumentation of the ubiquitous digital architecture that now surrounds us in our daily lives…. as the means to a sort of global capability for behavioral modification that can push us and herd us and shunt us in the direction that it wants us to go for the sake of its commercial outcomes. But the thing is here that it doesn’t care what we do — it doesn’t care what we believe, feel, if we are in great pain or joy - all it cares about is that, whatever we are doing and thinking, that we are doing those things in ways that it can capture the behavioral data from those activities, translate the behavioral data into predictions, sell those predictions into new markets that trade exclusively in behavioral futures and predictions of what we will do now, soon, and later, sell those to its business customers who have a vested interest in knowing our future behavior.”
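
Stripped of scale, the pipeline Dr. Zuboff describes has three steps: capture behavioral data, translate it into a prediction about what a person will do, and sell that prediction to someone with a stake in the outcome. The sketch below is only meant to make those three steps concrete; the events, the scoring weights, and the packaged output are hypothetical and do not represent any real company’s systems.

```python
# Toy illustration of the "behavioral futures" pipeline: behavior in,
# prediction out, prediction sold. Everything here is invented.
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    action: str     # e.g. "viewed_running_shoes", "searched_marathon_plan"
    weight: float   # assumed strength of this behavior as a purchase signal

# Hypothetical behavioral exhaust collected from one user's ordinary activity.
events = [
    Event("user_123", "viewed_running_shoes", 0.30),
    Event("user_123", "searched_marathon_plan", 0.40),
    Event("user_123", "liked_running_club_page", 0.20),
]

def predict_purchase_score(user_events: list) -> float:
    """Collapse raw behavior into a single prediction score between 0 and 1."""
    return min(1.0, sum(e.weight for e in user_events))

def package_for_buyer(user_id: str, score: float) -> dict:
    """What gets sold is not the behavior itself but a claim about the future."""
    return {"user": user_id,
            "predicted_action": "buys_running_shoes_this_week",
            "confidence": round(score, 2)}

print(package_for_buyer("user_123", predict_purchase_score(events)))
# {'user': 'user_123', 'predicted_action': 'buys_running_shoes_this_week', 'confidence': 0.9}
```

The important part is what changes hands in the last step: not the behavior itself, but a prediction about what the person will do next.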

As instrumentarian power encroaches further and further on our liberties, deep asymmetries of knowledge and power arise. Large tech companies now know everything about us, including our most private details — what we eat, who we love, how we voted, our medical history, and often our deepest inner experiences that not even our closest friends and family know — yet we know nothing about them. These companies have deep insights into our inner worlds, but we have no insight into their inner worlds. They can control our smallest beliefs or reshape our entire worldview, while we can’t even make them pay their taxes. 

Over the past decade, there have been significant attempts to rein in these new Instrumentarians. Activists have called for stricter privacy laws for years with limited success. The European Union adopted the General Data Protection Regulation (GDPR) in 2016 and began enforcing it in 2018, a good first step toward limiting what the Instrumentarians can do with our data, and one whose urgency was underscored by the manipulation surrounding the Brexit vote and the 2016 presidential election in the US. But no such regulation exists in the United States. Part of the reason for this regulatory lag is that, until very recently, we have not had the vocabulary needed to name the problem. Dr. Zuboff’s work is monumentally helpful in developing the conversation around these topics. She has named the Instrumentarians, popularized the concept of surveillance capitalism, and described in detail the economic and political systems of the digital era.

There is more work to be done, of course. Another necessary step in the conversation is a reevaluation of current human rights legislation, and perhaps a redeployment of privacy rights with an expanded understanding of the methods by which companies can violate our privacy using new technologies. But the right to privacy is not the only freedom whose meaning must be updated. Our understanding of freedom of speech must also be reconsidered. After all, the tech giants can now influence what we say, who we say it to, and how we say it. But their control goes further than speech. Not only can they shape conversations between users by deplatforming people or selectively displaying content, and not only can they or their business partners influence individuals to say what they want them to say, but they can also shape our moods, attitudes, and mental health.

A 2014 study of how users respond to content in Facebook’s news feed demonstrated that Facebook could manipulate users’ emotions. In an academic paper published in conjunction with two university researchers, the company reported that, for one week in January 2012, it had altered the number of positive and negative posts in the news feeds of 689,003 randomly selected users to see what effect the changes had on the tone of the posts the recipients then wrote. According to the New York Times, “the researchers found that moods were contagious. The people who saw more positive posts responded by writing more positive posts. Similarly, seeing more negative content prompted the viewers to be more negative in their own posts. Although academic protocols generally call for getting people’s consent before psychological research is conducted on them, Facebook didn’t ask for explicit permission from those it selected for the experiment. It argued that its 1.28 billion monthly users gave blanket consent to the company’s research as a condition of using the service.”
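
The design of the experiment, as described in the quoted passage, can be made concrete with a small simulation: withhold some fraction of positive posts from a treatment group’s feeds and compare the tone of what each group then writes. The sketch below is a toy model under invented assumptions about feeds, sentiment, and how strongly a user’s tone tracks their feed; it is not the study’s actual methodology or code.

```python
# Toy model of a feed-filtering experiment. Feeds, sentiment labels, and the
# "contagion" response are simulated assumptions for illustration only.
import random

random.seed(0)

def make_feed(n_posts: int = 100) -> list:
    """A feed is just a list of sentiment labels: +1 positive, -1 negative."""
    return [random.choice([1, -1]) for _ in range(n_posts)]

def filter_feed(feed: list, withhold_positive: float) -> list:
    """Silently drop a fraction of positive posts; negative posts pass through."""
    return [p for p in feed if p == -1 or random.random() > withhold_positive]

def simulate_user_post(feed: list) -> float:
    """Assume a user's own tone loosely tracks the average tone of their feed."""
    return sum(feed) / len(feed) + random.gauss(0, 0.1)

control = [simulate_user_post(make_feed()) for _ in range(1000)]
treated = [simulate_user_post(filter_feed(make_feed(), 0.3)) for _ in range(1000)]

print(f"control group mean tone:   {sum(control) / len(control):+.3f}")
print(f"treatment group mean tone: {sum(treated) / len(treated):+.3f}")
```

Even in this crude model, a quiet change to what a feed shows produces a measurable shift in what its readers express.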

More recent studies have extended this research beyond Facebook and found similar effects on other platforms like Twitter, Instagram, and Snapchat. With moods, feelings, and mental states in general being manipulated at will by corporate interests, something broader than privacy rights, voting rights, or freedom of speech is being violated here. Something about our freedom of thought is under attack.

A growing group of activists and political commentators in the United States has given a name to this something: cognitive liberty. According to Wikipedia, cognitive liberty, or the “right to mental self-determination,” is the freedom of an individual to control his or her own mental processes, cognition, and consciousness. The term ‘cognitive liberty’ has a controversial reputation following its earlier deployment in legal defenses related to the consumption of psychedelic drugs and the administration of pharmaceutical drugs to participants in legal proceedings. But the term itself has a wider meaning that is useful for describing the particular threat that surveillance capitalism poses to human rights.

Insofar as surveillance capitalism attempts to monitor and influence our beliefs, mental states, and behaviors, it violates our right to control our own mental processes. The reason that ‘cognitive liberty,’ rather than ‘right to privacy,’ ‘freedom of speech,’ or ‘voting rights,’ is the correct term for what is violated when Facebook ad campaigns influence us into voting for a different candidate than we otherwise would have is that, in some real sense, we still chose to vote the way we did. In such circumstances, we chose to give the tech companies access to our private data when we signed the user agreements (to the extent that we understood what we were signing, which we arguably did not), we chose to express our opinions on the platform, and we chose to vote the way we did in the voting booth. What we did not choose was the order in which information was presented to us for cognitive processing.

Presenting information in an organized way in and of itself is not wrong. But by presenting information in a specific order, an order that is tailored specifically to each user according to data about past behaviors, surveillance capitalists can influence what we think. For instance, suppose Facebook marketers want us to think about how rainy it is outside on election day so that we do not go out and vote — they might show us lots of pictures of puppies that are miserably wet. What is wrong is not that they show us puppies or other information in an order of their choosing; what is wrong is that they are doing so in order to manipulate our thoughts, our moods, our opinions — broadly, our cognition. Cognitive liberty is not included in any human rights legislation, but perhaps it should be. After all, the right to mental self-determination is a precondition for the freedom of speech, voting rights, and privacy rights. Without protections for cognitive liberty, it is not clear how we can preserve our democratic institutions.
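
To make the point about the order of presentation concrete, here is a minimal sketch of per-user ranking under invented data: the same pool of posts, ordered differently for each person according to what they clicked on in the past. The users, topics, and scoring rule are hypothetical, not any platform’s actual ranking system.

```python
# Minimal sketch of per-user content ordering based on past clicks.
# Users, topics, posts, and the scoring rule are invented for illustration.

# Hypothetical click history: topics each user has engaged with before.
past_clicks = {
    "alice": {"weather": 5, "puppies": 9, "politics": 1},
    "bob":   {"weather": 1, "puppies": 2, "politics": 8},
}

# The same candidate posts are available to everyone.
candidate_posts = [
    {"id": 1, "topic": "puppies",  "headline": "Soggy puppies caught in the downpour"},
    {"id": 2, "topic": "politics", "headline": "Polls open until 8pm tonight"},
    {"id": 3, "topic": "weather",  "headline": "Heavy rain expected all day"},
]

def rank_for(user: str) -> list:
    """Order the shared pool of posts by this user's past engagement with each topic."""
    interests = past_clicks[user]
    return sorted(candidate_posts,
                  key=lambda post: interests.get(post["topic"], 0),
                  reverse=True)

for user in past_clicks:
    print(user, [post["id"] for post in rank_for(user)])
# alice sees the wet-puppy story first; bob sees the polling-hours story first.
```

Nothing in the ranking function is sinister on its face; the question raised above is what happens when the party choosing the ordering has an interest in what you think, feel, or do on election day.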

So, let’s ask the question we started with again: how do you know you are free? Perhaps you only believe you are free because internet marketers influenced you into believing it. Perhaps internet marketers even gave you many reasons to back up the belief that you are free. Perhaps each reason came in the form of an in-depth article or an emotionally provocative image, and collectively, this content has influenced you into holding the strong conviction that you are free. But believing that you are free and actually being free are two different things.

And after reading this article, if you doubt that you are free, how will you check? You can’t verify whether your cognitive liberty has been violated by writing to the tech giants and asking them to show you how they operate, because they will not tell you. There is no public database for you to check. You signed their user agreement, remember? Instead, will you google “am I free” and see what the Google search algorithm decides to show you? Will you ask a friend on Facebook to read this article and discuss it with you, thereby giving Facebook more data to use to influence you? Will you search for videos about freedom on YouTube and trust the YouTube algorithm to show you content that is free from any ulterior motives?

Whatever you do to check on your freedom, the data will be extracted from your actions, packaged into a larger data set along with all of the other data they have about you — your health, finances, hobbies, appearance, voice, location, loved ones, employment, etc. — and sold to third parties who will then use it to create ads that you will click on in the future. In short, they already know what you will do in the future, and unless you are willing to stop using the internet, get rid of all internet-enabled devices, and cancel all electronic services (online banking, health tracking, GPS, etc.), you currently have no control over that. To be blunt: without new regulations or oversight, your data will be used to influence your future behavior, and you have no choice in the matter. If having no choice other than to be manipulated using behavioral modification techniques is freedom, then yes, you are free.
