
Analyzing online ecosystems in real time, teams of anthropologists and data scientists can begin to understand rapid social changes as they happen.

Ask not what data science can do for anthropology, but what anthropology can do for data science. —Anders Kristian Munk, Why the World Needs Anthropologists Symposium 2022

In the last decade, emerging technologies, such as AI, immersive realities, and new and more addictive social networks, have permeated almost every aspect of our lives. These innovations are influencing how we form identities and belief systems. Social media influences the rise of subcultures on TikTok, the communications of extremist communities on Telegram, and the rapid spread of conspiracy theories that bounce around various online echo chambers. 

People with shared values or experiences can connect and form online cultures at unprecedented scales and speeds. But these new cultures are evolving and shifting faster than our current ability to understand them. 

To keep up with the depth and speed of online transformations, digital anthropologists are teaming up with data scientists to develop interdisciplinary methods and tools to bring the deep cultural context of anthropology to scales available only through data science—producing a surge in innovative methodologies for more effectively decoding online cultures in real time.

Tracking far-right extremism in Brazil

On Sunday, January 8, 2023, a group of Jair Bolsonaro supporters stormed prominent government buildings in Brasília in a violent protest against the recently and democratically elected president of Brazil, Luiz Inácio Lula da Silva. Many Brazilians wondered how such an organized collective attack could have taken so many by surprise. But one person who was not surprised was Letícia Cesarino, a digital anthropologist at the Federal University of Santa Catarina (UFSC). She has spent the last year working with a team of data scientists, researchers, and students to track threats around election fraud by blending digital anthropology and data science to understand far-right groups on Telegram.

A few years ago, Cesarino was conducting “live ethnographies” on WhatsApp to understand the digitalization of electoral campaigns and voter behavior, and discovered she was missing a systems-level perspective on how various groups were coming together and operating: “I realized I needed to see how algorithms could add to the conventional ethnographic outlook by showing an ecosystem view from the outside to identify systemic patterns across users, influencers, and algorithms.”

Cesarino needed the tools of data science to see the influencing structures of complex online groups. This ambition and her work with an interdisciplinary team led to the development of an innovative research dashboard that tracks far-right groups on Telegram in real time. The platform was designed by data scientists to collect live data from Telegram, but its insights are driven by search queries designed by anthropologists. Typically, data-driven dashboards use basic search queries that lack cultural context and meaning and so become stale quickly. A search lexicon might be relevant one day and irrelevant the next due to the changing contextual lexicon (hashtags, words, topics) far-right groups use. And when a search query lacks the right lexicon, communities remain invisible in the data results. To avoid this, Cesarino and her team run daily ethnographic immersions into individual far-right ecosystems so they can ensure the lexicon in their search queries remains culturally accurate and in step with online change. As Cesarino put it, “We are trying to create real-time continuous learning between the anthropologist and the search algorithm.”
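The team's dashboard itself has not been published, but the core mechanic described here, an anthropologist-curated lexicon that is refreshed daily and matched against incoming messages, can be sketched in a few lines. The sketch below is a minimal illustration of that idea; the class structure, terms, and message are hypothetical, not the UFSC team's actual code or data.

```python
from dataclasses import dataclass, field


@dataclass
class Lexicon:
    """Anthropologist-curated search terms (hashtags, words, topics)."""
    terms: set[str] = field(default_factory=set)

    def update(self, added: set[str], retired: set[str]) -> None:
        # Daily ethnographic immersions feed these changes in, keeping the
        # query lexicon in step with how the groups actually talk.
        self.terms = (self.terms | added) - retired

    def matches(self, message: str) -> set[str]:
        # Return the lexicon terms that appear in a message.
        text = message.lower()
        return {t for t in self.terms if t.lower() in text}


# Hypothetical usage: the terms and message are invented placeholders.
lexicon = Lexicon({"#exampletag", "stolen ballots"})
lexicon.update(added={"green and yellow"}, retired={"stolen ballots"})

for msg in ["They are meeting at dawn, green and yellow only."]:
    hits = lexicon.matches(msg)
    if hits:
        print(msg, "->", hits)  # surfaced on the dashboard for ethnographic review
```

The point of the design is that the matching machinery stays fixed while the lexicon is continuously rewritten by human observers, which is what keeps the queries from going stale.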


With this powerful new tool, the team might well have been able to predict the January 8 attack; unfortunately, they were all away on holiday and not updating their search queries with the shifting contextual lexicons on Telegram. But Cesarino's phone is ringing off the hook as more and more organizations, including sectors of the judiciary and the media, realize how valuable this interdisciplinary work is for making sense of social change.

Cesarino sits on the edge of something exciting and knows this is only the beginning of a much larger effort to bring digital anthropology and data science together to make sense of changing online ecosystems at real-time speeds and scales.

Exploring machine learning in the United States

Five thousand miles north in New York, digital anthropologist, business anthropologist, and podcaster Matt Artz has been thinking about how the entire body of anthropological knowledge, which exists almost entirely in academic papers locked behind paywalls, is absent from the data training sets used to teach AI tools. Data training sets essentially set the boundaries of what an algorithm can understand about the world. And while game-changing AI innovations like ChatGPT seem eerily human, they are quite literally missing the “anthropological intelligence” garnered from decades of research.

This realization led him to propose building the first anthropological knowledge graph to “translate” the corpus of anthropological research and train algorithms on it.

Artz believes that if anthropologists could find a way to translate their research into knowledge graphs, it would be a significant step toward building smarter and more culturally aware AI. This more anthropologically intelligent AI could one day support the development of better ethnographic technologies for anthropologists: “By building an anthropological knowledge graph of all our research, we could lay the groundwork for better digital tools to assist anthropologists—and others—in the future.”
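Artz's knowledge graph does not yet exist, but the “translation” he describes typically means expressing claims from research as subject-predicate-object triples that algorithms can traverse and query. The following minimal sketch uses the rdflib Python library; the namespace, paper, and concepts are invented purely for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace and entities; no such graph has been published.
ANTHRO = Namespace("http://example.org/anthro/")
g = Graph()

# One invented ethnographic study and the concept it discusses,
# expressed as subject-predicate-object triples.
paper = ANTHRO["doe_2021_online_ritual"]
concept = ANTHRO["liminality"]
g.add((paper, RDF.type, ANTHRO.EthnographicStudy))
g.add((paper, ANTHRO.discusses, concept))
g.add((concept, RDFS.label, Literal("liminality")))
g.add((concept, ANTHRO.observedIn, ANTHRO["tiktok_subcultures"]))

# Query the graph: which studies discuss concepts observed in TikTok subcultures?
query = """
SELECT ?paper WHERE {
    ?paper <http://example.org/anthro/discusses> ?concept .
    ?concept <http://example.org/anthro/observedIn> <http://example.org/anthro/tiktok_subcultures> .
}
"""
for row in g.query(query):
    print(row.paper)
```

Encoding research this way is what would let an AI system retrieve and reason over anthropological claims rather than merely imitating the surface of the text.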

As the digital world continues to evolve and complexify, Artz believes it’s more important than ever that we incorporate anthropological knowledge into the backbone of artificial intelligence, to ensure these innovations advance ethically and empathetically.

Illustration by An Pan: overlapping purple, blue, pink, yellow, black, and white lines evoking a digital glitch on a malfunctioning monitor.

Empowering survivors of online hate in India

A few years ago, Hameeda Syed, a Muslim woman working as a journalist in India, coauthored a report on a new government development project with a male colleague. She was quickly targeted on Twitter with floods of hateful online comments about her identity, religion, and background, while her male colleague was spared such vitriolic attention.

Today, online hate takes various forms but is often identified as the sharing of hateful and prejudiced content that encourages or promotes violence against a person or group. While this kind of content has existed for as long as there have been people to create and spread it, online hate and discrimination have reached unprecedented levels. As UN Secretary-General António Guterres states, “Social media provides a global megaphone for hate.” And even though online discrimination and violence affect all kinds of people and communities around the world, they have had disproportionate effects on women and the LGBTQI+ community. More than a third of women globally experienced abuse online in the year between May 2019 and May 2020, according to the Economist Intelligence Unit. Additionally, the GLAAD Social Media Safety Index released in 2021 reports that 40 percent of LGBTQI+ adults and 49 percent of transgender and nonbinary people do not feel welcomed and safe on social media.

In Syed’s case, she feels she is more vulnerable to online hate because she wears a hijab, making her religion more visible. The comments she received were cruel, and she just wanted to disappear: “I distinctly remember feeling a sense of helplessness; feeling unable to do anything about it, except delete my account.”

Syed felt torn. There was no neutral space to have a conversation or respond because Twitter was not designed for this type of contextual discourse or support. She realized that most methods to combat online hate speech do not actually work, and so, together with a team of researchers from computational social science, journalism, design thinking, data science, and anthropology, she set up Dignity in Difference, an organization that takes an interdisciplinary approach to combating online hate. As cofounder Himanshu Panday explained, “Our team has a diverse set of backgrounds that allow us to see the layered complexity and subjectivity of online hate and triangulate our experiences into new methods to solve it.”

The team recognized two key problems with current approaches to reducing online hate: First, survivors are rarely given agency in the process. And second, the data sets used to identify hate quickly become outdated, missing changing cultural contexts, lexicons, and multilingual data examples. These gaps in data and dynamic context mean that early moments of online hate—and the actors and drivers behind it—go unseen by the algorithms and social platforms.


To change this, the team devised a new method for tracking online hate, one that contextualizes incidents from the survivor’s perspective. The idea centers on a chatbot that anyone can use to report online hate. The chatbot asks the survivor to share a link to the incident and then answer a series of quantitative and qualitative questions to contextualize the incident from their own perspective. The chatbot also crawls the website where the incident occurred and lets the survivor label different aspects of the incident in detail. It then gives them the opportunity to join a survivor support community. Their data is added to a dashboard of all survivors’ data that can be shared (with their consent) with a vetted group of journalists, researchers, counterspeech organizations, and other groups. Each time their data is used to model better classifiers, test the accuracy of data sets, build training data sets in new languages and contexts, or otherwise help counter online hate, the survivor is notified and can see the impact their data had on contributing to a kinder digital world.
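Dignity in Difference has not published its schema, but the workflow described above implies a survivor-centered incident record: a link, the survivor’s own answers and labels, crawled context, explicit consent, and a log of downstream uses. A minimal sketch, with hypothetical field names and entirely fictional data, might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class IncidentReport:
    url: str                           # link to the incident, shared with the chatbot
    survivor_answers: dict[str, str]   # quantitative and qualitative question responses
    survivor_labels: list[str]         # aspects of the incident labeled by the survivor
    crawled_text: str = ""             # content fetched from the page where it occurred
    consent_to_share: bool = False     # whether the data may join the vetted dashboard
    usage_log: list[str] = field(default_factory=list)

    def record_use(self, purpose: str) -> None:
        # Every downstream use (a new classifier, a new training set, an accuracy
        # test) is logged so the survivor can be notified of their data's impact.
        stamp = datetime.now(timezone.utc).isoformat()
        self.usage_log.append(f"{stamp}: {purpose}")


# Entirely fictional example:
report = IncidentReport(
    url="https://example.com/post/123",
    survivor_answers={"severity": "high", "context": "targeted after publishing a report"},
    survivor_labels=["religious slur", "coordinated pile-on"],
    consent_to_share=True,
)
report.record_use("added to a multilingual hate-speech training set")
print(report.usage_log)
```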

“Often the very processes aimed at understanding the survivor’s experiences and preventing online discrimination end up isolating them,” Panday explained. “Our chatbot will provide new contextually rich data to researchers, social scientists, journalists, policymakers, changemakers, and other stakeholders to shape their work and approach accordingly. It will also inform the greater work of digital anthropology and its intersection with big, thick data.”

Social media platforms have struggled for years to manage online hate speech, but the Dignity in Difference team sees the source of these problems as connected not only to the notable biases in big data training sets but also to the fact that many teams and academic disciplines are organized around traditional silos and hierarchies that do not support effective collaborative problem-solving. Working together, thinkers across the human and data sciences can overcome many of the challenges around hate speech and gain a deeper understanding of the needs of online communities and ecosystems.

The Dignity in Difference team is just getting started. In November 2022, they won the UNESCO and Liiv Center Digital Anthropology Design Challenge, which includes a grant to further develop their idea so it can be applied to drive widespread impact in the future.

Illustration by An Pan: overlapping purple, blue, pink, yellow, black, and white lines evoking a digital glitch on a malfunctioning monitor.

Rewiring connection in a world of information bubbles

Using mathematical tools originally developed for analyzing physical and biological complex systems, Cristián Huepe has spent over five years developing an innovative approach to the study of online social networks with his team at the Social Listening Lab (SoL) of the Universidad Católica de Chile. Huepe, a theoretical physicist who is also a researcher at the Northwestern Institute on Complex Systems and the Department of Engineering Sciences and Applied Mathematics at Northwestern University, works with a team of anthropologists, sociologists, communication experts, engineers, and complexity scientists in Chile. Together they blend conceptual and mathematical tools with anthropology to study online social change, cultures, and networks at a systemic level, achieving a depth and scale beyond standard big data statistics by focusing on the structural properties of digital interactions and language.

The team maps the temporal dynamics, interaction networks, and semantic networks of online communities and their conversations, weaving ongoing digital ethnography throughout the research to understand, and sometimes intervene in, the fine-grained social context, beliefs, and motivations driving group actions.
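The lab's pipeline is not public, but the two network views described here, an interaction network of who engages with whom and a semantic network of which terms co-occur, can be illustrated with standard tools. The sketch below uses the networkx Python library and fictional messages; it shows the kind of structural properties (such as centrality) this approach examines, not the SoL team's actual code.

```python
import itertools
import networkx as nx

# Fictional records: author, reply target, and tokenized text of each message.
messages = [
    {"author": "a", "reply_to": "b", "tokens": ["climate", "policy"]},
    {"author": "b", "reply_to": None, "tokens": ["climate", "hoax"]},
    {"author": "c", "reply_to": "b", "tokens": ["policy", "dialogue"]},
]

# Interaction network: directed edges weighted by the number of replies.
interactions = nx.DiGraph()
for m in messages:
    if m["reply_to"]:
        u, v = m["author"], m["reply_to"]
        w = interactions.get_edge_data(u, v, {"weight": 0})["weight"]
        interactions.add_edge(u, v, weight=w + 1)

# Semantic network: undirected edges between terms that co-occur in a message.
semantics = nx.Graph()
for m in messages:
    for t1, t2 in itertools.combinations(sorted(set(m["tokens"])), 2):
        w = semantics.get_edge_data(t1, t2, {"weight": 0})["weight"]
        semantics.add_edge(t1, t2, weight=w + 1)

# Structural measures such as centrality hint at which accounts or terms
# organize the conversation; ethnography then interprets why.
print(nx.in_degree_centrality(interactions))
print(nx.degree_centrality(semantics))
```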

This methodology was successfully used during the UN Climate Change Conference COP25 to understand which groups of people with differing ideological perspectives were open to engaging in productive dialogue during the conference. Huepe and his team were able to identify changing language and viewpoints along with conflicting stories of truth that were driving various groups online across the political and climate divide. These insights were then used to bring diverging groups together in productive and progressive conversations during the event.

The methodology was also applied to analyze online interactions that favored or disfavored vaccination in Chile during the COVID-19 pandemic. Huepe and his team showed that the conversation involved not only pro- and anti-vaccine groups but also others who inadvertently promoted or inhibited vaccination in their discussions, and that while anti-vaccine users constantly attacked pro-vaccine messages, pro-vaccine accounts rarely addressed them, leaving conspiracy theories uncontested. These findings prompted policy and behavioral changes that helped increase immunization rates.

The team believes strongly in the urgent need to advance digital anthropology methods to understand conflict, foster positive social discourse, and avoid the entrenchment of disconnected post-truth realities: “We are just beginning to understand the power of digital anthropology in helping us develop the subtle nudges and legal framework required to ensure that online social media benefit society, bringing us together and helping us make informed decisions instead of dividing us into disconnected factions immersed in different information bubbles.”

Huepe and the team at SoL will include this work in the upcoming UNESCO and Liiv Center Digital Anthropology Toolkit, with the hope that others will be able to apply this method to similar challenges around peaceful online discourse and collaboration.

Illustration by An Pan: overlapping purple, blue, pink, yellow, black, and white lines evoking a digital glitch on a malfunctioning monitor.

Innovation in digital anthropology

Digital anthropology and data science are complementary fields with much to offer each other. As our world continues to change at an unprecedented pace, it is crucial that innovators continue to blend the scale of data science with the depth of anthropology to accurately understand these changes in real time. These insights are essential for leaders and decision-makers who are under increasing pressure to respond to urgent social issues. Without them, they risk misunderstanding communities and creating social policies, services, and solutions that perpetuate bias and inequality and fail to serve the public good. In the words of Christian Madsbjerg, cofounder of strategy consultancy ReD Associates, “When we get our understanding of people wrong, we get everything wrong.”

Collaborations between anthropologists and data scientists provide unique and valuable opportunities to understand rapidly developing online ecosystems, whether the focus is political extremism, machine learning, or online hate. Of course, there is always a risk that innovation in digital anthropology will be used for purposes other than the public good, but that is the case with most technologies. By embracing and developing such interdisciplinary collaborations and their innovative approaches, we might make sense of digital life and work to change it for the better.

Over the last two years, UNESCO and the Liiv Center have partnered to advance digital anthropology for the public good by supporting public research, global design challenges, reports, and academic experiments to develop innovative methodologies, academic training, and career opportunities for digital anthropologists across the public and private sectors. The ideas discussed here (and others) will be included in a new Digital Anthropology Toolkit, where people can access new ideas and methods to drive actionable impact in their research, projects, and platforms. 

Illustrator bio: An Pan is a multimedia designer, illustrator, and culture lover. He is currently a designer-accessory to Chinese consumerism but works with a big dream of decolonizing design. He enjoys traveling and doll collecting.

Authors

Katie Hillier

Katie Hillier is chief digital anthropologist at the Liiv Center, a Social Intelligence Insider Top 50, and a founding member of the World Metaverse Council. She spent a decade running digital anthropology labs at What If Innovation and later at Accenture and believes digital anthropology is essential to building a more equal and ethical digital future.

Cite as

Hillier, Katie. 2023. “Digital Anthropology Meets Data Science.” Anthropology News website, May 8, 2023.
