An anthropologist and a psychologist walk into a bank and decide to study Artificial Intelligence (AI). It sounds like the start of a joke—except, instead of a punchline, the result is a year’s worth of insights and the story of how we created a whole new industry research program to get them.
At this point you might be thinking, what are social scientists even doing at a bank?
As user researchers at TD Bank, a Canadian financial institution, our team specializes in understanding emerging technologies and the future of money. As the world continues to shift and we are catapulted into the technological future, our societal institutions need to adapt. This is where we come in. To design for the future, though, we first need to understand the social trajectories connecting yesterday, today, and tomorrow. We draw on an interdisciplinary mix of methods to ask fundamental questions about people, society, money, and technology.
Most recently, we set our sights on Generative AI (GenAI), the latest technological innovation to “change the game.” However, we were less concerned with what technology like AI actually does; what mattered to us was what people think it does, because what people think shapes how they act. As GenAI hit the mainstream and became a hot-button topic in living rooms and boardrooms alike, we knew we needed the public’s perspective. So we set out to understand: how are everyday people talking about AI?
We wanted to conduct a long-term discourse analysis investigating how people talked about AI online to understand their underlying worldviews, mental models, and values. However, this proved challenging: industry researchers generally juggle multiple simultaneous projects on short timelines, so sustaining this project over the long term would require a new framework. But the questions around AI were critical, and we saw a golden opportunity. ChatGPT had been released to the public only months before. Since much of the public was encountering GenAI for the first time, we had a chance to track the “hype cycle” from its inception. With this research, we could glimpse not just how people engage with AI, but how they react to novel technology. We knew we had to seize the moment, but we could not do it alone.
Every four months, our research team brings on a group of undergraduate interns to work with us for a semester, so we turned to them to help with our vision. In the first semester of 2023, we started with one intern. They had never conducted a discourse analysis before, so we realized that we had to do more than just lead the project—we would have to teach it. Across that semester we developed a program that guided the intern, step by step, to investigate what people were saying online, how they said it, the context surrounding it, and how this changed over time.
In the second semester we had three interns. We continued to build our program, standardizing our training so that the project could easily continue across semesters with different people. Our aim was to develop clear steps to guide interns while still allowing them the freedom to delve into areas of interest, enabling those who had never even heard of a discourse analysis to conduct their own study within a single term.
By the third semester we had seven interns. The Discourse Analysis Program was now fully standardized, and these term-long projects could fit together into a longer, overarching study like a relay race, helping us run the length of AI discourse over the course of 2023.
We investigated key discursive trends, with topics ranging from reactions to AI in the workplace to the ways AI was redefining what it means to be “authentically human.” By looking at everything from YouTube and TikTok to Reddit and LinkedIn, we generated countless reports and artifacts, providing us with a fuller picture of how people talked about AI online.
By tracking the public’s journey, post by post, from their first real exposure to GenAI to their integration of it into their lives, we captured a snapshot of how people react to novel technology and what the development of GenAI means to them, laying out the path into an uncharted future.
Job Loss
Top of mind for many were questions around work. These questions took two forms: “Is AI coming for my job?” and “How can I use AI in my job?”
For many people, GenAI was first and foremost a force that could upend their livelihoods in a tangible way: if it replaced their work, how would they pay their rent? They feared that human labor would be valued less and were apprehensive about the long-term consequences for their careers and lives; everything from their daily routines to their entire career paths could be disrupted. As a result, many engaged with AI from a place of anxiety.
However, people also discussed how to adopt AI in their jobs, working with it to improve their output. Some were curious about reskilling and the new kinds of work that AI might open up for them, seeing it as an opportunity rather than a threat. Still, this was the minority; most saw AI primarily as a danger to their livelihood.
Accountability
Alongside employment anxieties were growing questions around legal accountability for AI advice and AI-driven actions. For instance, who is responsible for an accident caused by a self-driving car? People felt that there was no clear chain of accountability if something “went wrong” because of AI.
This apprehension tapped into broader questions around the policies and institutions that are meant to be regulating AI. With AI-related lawsuits moving through the courts and policies around AI still being written, people are apprehensive not just about immediate accountability but about how different policies will shape the future of employment, entertainment, and safety. In their eyes, the stakes for the future are high.
Nonhuman Other
These concerns about a lack of clear accountability became entangled with fears of AI agency as people increasingly perceived AI as a “nonhuman Other.” They imagined GenAI as sentient but not “human,” and nonhuman sentience was seen as a threat.
This underlying mental model of AI as “Other” framed people’s expectations of what types of systems AI should (or should not) be integrated into. For example, tasks that would require empathy, an ability assumed to be exclusively human, felt off-limits to them. To perceive empathy in a “nonhuman Other” would produce an uncanny valley effect and increase their distrust.
However, people’s discomfort with AI attempting to display empathy and critical thinking was matched, paradoxically, by a fear that it lacked these traits. They were uncomfortable with the idea of advanced technology taking action without them, fearing the consequences of “nonhuman” authority more broadly. This presents a conundrum: if AI displays empathy, people don’t trust it because the empathy cannot be “real,” but if it lacks empathy, they fear it as a threat to humanity.
Additionally, within this human/AI divide, people made a clear distinction between human- and AI-generated work, and we observed a tug of war over how the two were valued. Human output was valued for the effort and expertise it required, while AI output was valued for its speed and cost-effectiveness. Depending on a person’s values, preference went to one output or the other, but the divide was always there.
Usefulness Trumps Fear of Use
However, these fears around job loss, (un)accountability, and the threat of the “nonhuman” did not stop people from using AI. While we observed some staunch holdouts, many of the same people who feared and critiqued AI simultaneously tried it out and eventually integrated it into their lives.
As we probed further, we came to understand the driver behind this seeming contradiction: convenience. Despite AI’s perceived negative implications and many people’s moral objections to it, its ability to offload and streamline tasks and to absorb some of the cognitive load of day-to-day life proved a valuable enough tradeoff. In short, people valued convenience more than they feared AI.
This speaks to a broader trend: people often distrust new technology until they see its tangible benefit to their lives. At first encounter, a new technology represents a disruption to “life as they know it” and is framed as an ambiguous threat to their “normal.” As more people begin to interact with the technology, however, they see how it can fit into their lives, and their mental model shifts from disruption to integration. This process is largely driven by convenience. Critically, adoption often precedes acceptance, and it is only through consistent, convenience-driven exposure that these technologies become a normalized part of daily life.
Distrust Rooted in Inaccuracy
At the same time, there are limits to this integration. While people did start using AI, and their fear was tempered by convenience, a new hesitation emerged: distrust. This was not a broad distrust of an ambiguous and unknown “threat,” but a specific distrust that came from encountering AI’s limits. Whether it was witnessing AI hallucinate or hearing about a mistake it made at a friend’s job, people saw GenAI’s shortcomings. As this happened, they began to treat AI as a useful but limited and unreliable tool.
This distrust hinged on output inaccuracy and was rooted in people’s personal experiences with AI’s shortcomings; as AI continues to advance and its error rate falls, this may shift in turn. People do not currently trust AI completely, but trust is still on the table; it will simply have to be earned. People will have to experience any newfound reliability for themselves to truly believe it, and that will be an uphill battle.
AI Demystification Process
Taken together, these findings depict a journey, as people shifted from engaging with AI as an unknown threat to accepting it as a practical tool. We call this the “AI Demystification Process,” which illustrates the key role convenience plays in the adoption of technology: usefulness trumps fear and drives integration (see Figures 1 and 2).
At the start of 2023, shortly after ChatGPT was first released to the public, people viewed GenAI as an ambiguous and omnipotent threat. They were afraid to use it and did not trust it, in part because it was so new and poorly understood, and people viewed it as a disruption to life as they knew it. However, as GenAI became more familiar and people heard about it from others in their lives, they got curious—trying it out couldn’t hurt, right?
For many, this meant using it for entertainment, something they viewed as “harmless,” such as asking it to create a self-portrait or give them TV recommendations. In their minds, they were just experimenting. In doing so, though, they were transforming their own mental model of GenAI, shifting it from an unknown threat to a known entity and reducing their discomfort in the process.
This allowed people to see past their fears to the potential utility of GenAI. They began to use it for a wider range of tasks that enhanced the speed and efficiency of their day-to-day lives: it created grocery lists, organized their data, and brainstormed essay titles for them. AI quickly went from a toy to a daily tool that helped streamline their lives.
At this point, people often hit a roadblock. As they increasingly integrated GenAI into their lives, using it for more and more complex tasks—not just brainstorming an essay but writing one, for instance—they witnessed it hallucinate, make a critical error, or display an uncomfortable bias that threw the entire output into question. Witnessing these tangible mistakes led to doubt and caused people to scale back how they used GenAI. Rather than prevent further usage, though, this doubt merely modified it. Folks kept using GenAI because of its sheer usefulness for practical tasks, but they restricted their use to specific tasks, aware that GenAI was not infallible. It was still useful, but only useful in certain contexts. They had returned to a place of distrust, but it was now practice-based distrust rather than fear-based distrust.
Zooming out, this journey shows us the trajectory of new technologies: people shift from fear of the unknown, to playful experimentation through entertainment, to integration into daily life for ease and efficiency. Convenience serves as the motor beneath this adoption process, and understanding this can help us guide the design of future technology and anticipate how the public might react to it.
Overall, the past year has been a whirlwind for new technology, sometimes changing “life as we know it” faster than we can keep up. GenAI has taken the world by storm, presenting new challenges but also new opportunities as we learn to work with and think with this new tool. From being understood as a threat to jobs, a challenge to accountability, and a nonhuman “Other,” to being integrated despite these fears, AI continues to present new questions and challenge existing ways of thinking and being. What happens now remains to be seen, and we will continue watching the discourse to see what this next chapter holds.
An anthropologist and a psychologist walk into a bank and decide to study AI. They walk out with a snapshot, a program, and a whole lot more research ahead of them.
Talia Vogt and Zoë Poucher are applied researchers working in the UX field at TD Bank Group. As part of the Bank’s Research Science Team under the Human-Centered Design Practice, their work applies an interdisciplinary approach to understanding societal trends and shifts around money and technology. To learn more about how they’re helping to shape the future of banking at TD, visit tdinvent.td.com.
Acknowledgments: We would like to thank all of the interns who helped us conduct this research throughout their placements with us on the HCDP research team! Masha Aresheva, Karen Eng, Paneet Gill, Jessie Liu, Sophia MacKeigan, Fatima Nazir, Alethea Pook, Pragya Sharma, Emily Shen, and Matthew Yeung, your support was invaluable.