As the 2020 presidential campaign cycle intensifies, an increasing number of Americans will volunteer for political campaigns. Maybe you have volunteered or have interacted with a campaign volunteer via text message, email, telephone, or in person at your front door. More than at any other time in history, twenty-first century political campaigns in the United States explicitly focus on and foster these volunteer-voter interactions because they have been found to measurably impact voting behavior, including by swaying votes for specific candidates and turning out voters to the polls. The expansion of these kinds of volunteer practices is made possible by the big data campaign model, which helps campaigns make determinations over whom volunteers interact with and how they interact, including by emphasizing scripted interaction, campaign messaging, and data extraction. These interactions impact how we see and experience democratic practice, political agency, and political efficacy, in ways that further devalue everyday political conversation and exacerbate growing political polarization.
The big data campaign model, which rose to prominence with Barack Obama’s presidential run in 2008, is derived from business and marketing models. Now ubiquitous, big data campaigns foreground the mass collection of personal information about potential voters, as this is considered critical for building and streamlining different arms of campaign infrastructure—from volunteer recruitment, to fundraising, to voter outreach. The collection of a broad range of information about potential voters’ identities and preferences is also considered a vital tool for persuasion, as it aids in the top-down development of what campaign operatives consider to be effective messaging, as well as the deployment of advertisements to specific voters through various forms of media, a phenomenon that is also known as microtargeting. In essence, big data political campaigns prioritize the collection of data on a mass scale, based on the assumption that this data can be meaningfully quantified, measured, and interpreted to create maximally efficient campaign expenditures and maximally effective political messaging and advertising. They apply scientific methods to the control of voting blocs, which are positioned as consumers.
Based on this belief in the power of data and data interpretation, contemporary campaigns have made two moves that critically impact volunteer-voter interactions: they have narrowed who is contacted for one-on-one interaction and attempted to exert more control over how these interactions take place.
First, the who: When volunteers sign up for a campaign, they receive voter information sheets that direct them to engage with potential voters through specific mediums (e.g., door knocks, text messages, phone calls). The seeming randomness of these potential voters is particularly notable when canvassing—instead of targeting an entire neighborhood, volunteers may be asked to knock on the doors of only two houses on one block and three houses on the next. Why these voters? These potential voters have been labeled as worthwhile targets by outside consultants and high-level campaign operatives through a somewhat cryptic combination of predictive modeling and statistical analyses based on individual voter data. Such targeting almost always directs volunteers to interact with potential voters who are more likely to vote for their particular candidate, although the measurement of likeliness can vary, depending on location or campaign timeline or both. Volunteers are frequently directed to interact with potential voters who fall into one of two categories: those labeled as sporadically or infrequently voting partisans, with whom volunteers share party affiliation, or those labeled as frequent voters who are undecided, with whom volunteers do not necessarily share party affiliation.
Second, the how: During one-on-one interactions, volunteers are instructed to follow a script, ask specific questions, and keep their interactions brief. They are also asked to transform their conversations into quantifiable metrics of candidate and issue-based support (e.g., by marking 1 for “strong support”). By following the script as written, volunteers are also asked to convey campaign messaging, which is considered crucial for establishing a candidate’s brand. Although volunteers are told to personalize the interaction with short stories about why they are supporting the candidate, responses by potential voters that cannot be translated into metrics are largely ignored by campaign operatives. Long open-ended conversations are also discouraged in order to maximize the number of contacts that can be made in a given window of time, and thus the amount of data that can be extracted. In sum, campaign volunteers are trained to interact with a specific segment of the American voting population, and to do so primarily as data collection agents and as a form of media for the campaign.
Even as volunteers might see what they are doing as valuable persuasive work, these big data practices can impact how volunteers and potential voters identify specific language practices as persuasive. This can, in turn, impact how they assess their own word and (inter)action choices as politically efficacious, and ultimately how they conceptualize democratic practice itself. Engagement in one-on-one interactions about politics with unknown others is something that scholars position as a critical pillar of democracy and democratic legitimacy. Yet, interactions are normally positioned as most valuable for democracy when speakers are able to equally participate in conversations, engaging in a form of dialectic through which they can reason together over issues of mutual concern. In the big data campaign, volunteers are not only asked to believe in the importance of quantifiable data and metrics for political efficacy, but also in the wisdom of top-down scripting and messaging. This teaches everyday political actors that their own voice and words have less value and less impact in achieving specific political outcomes.
In reality, despite the data focus of volunteer training and voter outreach, we know that some—perhaps many—campaign volunteers deviate from scripts or invent voter data or both in an effort to simply have conversations with other political actors. The tendency toward ditching the scripts and unreliably documenting information drawn from conversations may, in part, contribute to the confusion that persists among researchers and campaign operatives who know that one-on-one interactions are important for achieving campaign goals, but who openly admit that they don’t really know why or how these interactions are meaningful or effective (Nielsen 2012).
However, beyond considering the ways in which campaigns more and less successfully attempt to structure the form and focus of volunteer-voter interactions, we should also note that contemporary campaign volunteerism means that we are being socialized into the belief that only certain political actors—those who are likely to vote for the same person that we are—are worth engaging in political conversation at all. Unless advised to do so by a consultant, campaigns do not use their massive voter-information databanks to foster interactions with political actors who are not thought to share party affiliation or candidate preference, even if and when these political actors might share concerns about similar sets of issues. This kind of voter outreach targeting can exacerbate existing polarization, which is already being intensified by other data-driven mechanisms, such as the algorithms that structure the ways in which we interact with individuals and advertisements in social media spaces.
Ultimately, participation in big data political campaigns pushes volunteers into practices that further devalue political conversation, marginalize individual political agency, and exacerbate polarization. As a number of anthropologists have recently addressed, including Karen Ho and Jillian Cavanaugh, as well as Aaron Graan, Adam Hodges, and Meg Stalcup, the inability of differently positioned Americans to agree on facts or communicate productively is a major concern in assessing the health of American democracy, particularly in the current political environment. While not overlooking the potential impact of data, messaging, and advertising on influencing publics or winning political contests, we need to recognize what the big data campaign structure means for us—especially in terms of shaping the who and how of our interactions with others about politics in everyday life, and ultimately our understandings of our own agency as actors in the political system.
What should we do in light of this information? First, we can volunteer for campaigns with a better awareness of these phenomena and how they might impact our interactions with others as political practice. Second, knowing this, we might work to create more community-based, in-person and virtual spaces for interaction that are explicitly structured around shared concerns. In these spaces, those who can must do the hard work of listening to others and of using language that bypasses the polarizing frames of party-based messaging. These changes to the whos and hows of interaction might help us to recognize and reinforce existing points of commonality in and through everyday political conversations.
Melissa Maceyko is a faculty member at California State University, Long Beach. She teaches and writes on gender, democracy, politics, language, and the United States. She will be starting her position as an APLA Section News contributing editor in June. You can follow her at @maceykom.
Luzilda Carrillo Arciniega and Chandra Middleton are contributing editors for the Association for Political and Legal Anthropology’s section news column.
Cite as: Maceyko, Melissa. 2020. “Volunteering for Political Campaigns Is Impacting Democracy, but Not in the Ways You Might Expect.” Anthropology News website, May 28, 2020. DOI: 10.1111/AN.1408