
Generative artificial intelligence (GenAI) is reshaping student-teacher relations in higher education in both exciting and worrying ways. Proponents point to its potential to personalize learning and foster innovative teaching approaches. But others view GenAI as a liability, casting the technology as a new and improved way to cheat, giving in to a stereotype of students as natural-born plagiarists.

Is this fair? Is it right? What does the stereotype tell us about educational culture and structure more broadly? And how can a better understanding of the student position inform higher education’s response to GenAI?

The Spider in the Room

Questions about teaching and learning are not new, especially in anthropology. Take Susan Blum’s work on plagiarism, which documents the disconnection between old-school academic standards and today’s students. Blum paints a picture of plagiarism as a large spider sitting at the center of a web spun from various aspects of higher education and the social structures on which it rests. In other words, cheating is in many ways a fitting response to our educational system’s design.

In relation to this, Blum’s “Learning In and Out of School” initiative highlights the difference Lave and McDermott see between “alienated” and “authentic” learning. The latter entails learning for learning’s sake, or learning because we want to know how to do something. Such self-driven, intrinsically rewarding learning typifies children on their own and in non-industrialized societies.

In school settings, however, where lessons are imposed and students are rated and ranked, something akin to “alienation” (as described in the theory of historical materialism) takes hold: school learners can feel like factory laborers whose lives and creations are experienced as their own only when “off the clock.” Add to alienation the money and time shortages many college students suffer. Layer on the emphasis on “success” and twenty-first-century social media norms of using and adapting others’ posts, à la Blum. Toss in the bleak (but erroneous) “competition” mindset. With all of this, it comes as no surprise that cheating happens.

Cost aside, evidence does suggest that retooling curricula for more authentic learning could help change that. As Jordan Stein observes, rather than focusing on GenAI’s potential for cheating, we should be “working to create environments and provide resources that would help make cheating unnecessary or unthinkable in the first place.”

Credit: Photographer: Alan Decker. Copyright: San Diego State University. Used with permission.
Students in class at San Diego State University.

A Community-Based Response

At San Diego State University (SDSU), we seek to foster authentic learning for our 37,539 students. In other words, we favor “process” over “product” where appropriate to learning objectives. In this spirit, we’ve taken an innovative approach to GenAI. 

Rather than viewing GenAI as an apocalyptic threat to education, we chose not to give in to panic. Instead, we decided to study what was happening on our campus, using a rapid assessment approach to understand how much our students already knew about GenAI, how they used it, and how they felt about it. Initial, informal focus groups and interviews led up to a large-scale survey co-designed with student input.

The students also played a key role in promoting the survey, resulting in a tremendous response: we received 7,811 replies, representing more than a fifth of the student body. This is the largest turnout recorded worldwide, to our knowledge, for a study on students’ interaction with GenAI. Further, to ensure a full understanding of the findings and how we might act on them, we had diverse campus stakeholders review them with us.

Our inclusive methodology empowered us to respond thoughtfully, responsibly, proactively, and in a locally relevant way to the rise of GenAI. One significant outcome has been the creation of a tailored training program for teachers, designed to meet our students’ needs. 

Our campus is part of the California State University (CSU) system, the largest and most diverse four-year university system in the United States. This means our experience could be widely informative for other campuses, too, as we all grapple with how to optimize teaching and learning and settle into the new normal that GenAI entails for student-teacher relations. In April, our training program was rolled out to all 23 CSU campuses; our survey, too, is being used at various CSU campuses as well as at other US universities and worldwide, with instantiations as far away as Peru and Kuwait.

The Student Point of View: Looking to the Future

“I’m excited about AI’s potential in the field of medicine.”

“AI’s role in analytics will be crucial for business.”

“I’m studying arts, and AI offers exciting new ways to create.”

Although publicly available chatbots had been mainstream for only one semester when we surveyed (fall 2023), 71% of students indicated they thought that GenAI would become an essential part of most professions. However, 89% had not yet received any formal training in GenAI’s use. Their responses made clear that students are hungry for such opportunities.

Why? Put simply, “I want to be prepared for using AI in my field,” as one student reported. The need to stay nimble as a career skill was also noted: “AI will require us to continually adapt and learn new skills.” And that requires all parties to engage with GenAI in present coursework. Students wrote things like “There’s a need for educational resources that demystify AI,” calling out the need for workshops and training not only for themselves but also for faculty and staff (e.g., “Teachers should be equipped to integrate AI into their teaching”).

Ethical Awareness

Beyond practical instruction, students want discussions about responsible use and GenAI’s potential problems: “Understanding AI’s limitations and possibilities should be part of our education.” For instance, one student commented, “Concerns about privacy and data security with AI need to be addressed.” Plus, as another noted, “It [GenAI] has internal biases taught to it by users, so it’s not this neutral figure and I worry it will be framed as such.” Indeed, the majority (72%) indicated that ethical use of GenAI was a major concern, 87% agreed that unregulated GenAI development could lead to unforeseen risks, and 80% were concerned about GenAI’s long-term social impact.

One threat students flagged was GenAI’s potential to increase the digital divide due to how it might “give people an unfair advantage.” To offset this, one student suggested that any required GenAI tools should be “in the public domain or somewhere where it is accessible to those who would otherwise not be able to afford college.” If equity is not prioritized, one asked, how “is every student going to be able to afford the technology that is needed to keep up?” Device access, too, is important in enabling GenAI literacy: the fewer kinds of smart devices students owned, the less likely they were to say GenAI “has positively affected learning” and the more likely they were to agree that GenAI “is too complex for me to grasp.”

Credit: Photographer: Alan Decker. Copyright: San Diego State University. Used with permission.
Students by Hepner Hall at San Diego State University.

Proceeding with Caution

Students who did grasp GenAI enough to use it were often cautious. While 44% trusted GenAI algorithms to provide accurate information, 81% felt the need to verify GenAI’s responses. A similar proportion (83%) indicated that GenAI algorithms are insufficiently transparent.

Students already using GenAI pointed to efficiency: “AI tools have significantly reduced the time I spend on research” or “I use AI tools to summarize articles for my assignments.” The ability to get help easily and to confirm answers also increased self-efficacy in some cases: “I’m more confident in my assignments thanks to AI’s assistance.” 

Notably, students generally preferred using GenAI for assistance (for instance, to help with language translations) rather than relying on it completely for task completion. They did not want to shortchange themselves: one student mused that “AI has made studying more efficient but sometimes I feel like I’m learning less.” Others worried that “[GenAI] will make students lazy” or “too dependent.” The advice was to “use it to check your work, not do your work.”

Academic Integrity and a Need for Trust

What about cheating? The survey was anonymous; still, only 16% of students indicated that they were comfortable directly submitting GenAI responses in their courses. Most were interested instead in, as one reported, “How to use it in a way that does not affect academic integrity.”

Students saw a clear difference between using GenAI as a tool for learning and using it as a cheating device. However, as one said, “The rules about AI and cheating are so vague, it’s easy to accidentally cross a line.” They requested in-class discussions as well as overall ethics and use-case guidance. They expressed a desire to do the right thing—once it was clearly defined for them: “Please just tell us what to do and be clear about it.”

The request for “guidelines on how to use AI responsibly” in part reflected awareness that academic integrity is unevenly policed. “I’ve seen some students get away with blatant cheating, while others get harshly penalized for minor mistakes” and “Some students are watched more closely than others when it comes to cheating.” It also demonstrated their knowledge that GenAI plagiarism detectors can be faulty—“they are wildly inaccurate”—and indexed worry about, as one put it, “getting false flagged”: “I worry that teachers will start using checking software that will be wrong and take points off for no reason.” 

Some students did link GenAI to an increase in cheating—by others. They were concerned about the “higher expectations” undetected cheating might underwrite, and there was some indication (in keeping with the above-mentioned “competition” mindset) that in this context one might need to up one’s own GenAI game just to survive. Further, some lamented “unfair treatment [for] those that put in effort vs rely on AI.” One said bluntly: “There needs to be more trust between students and professors about using AI responsibly.”

Credit: Photographer: Alan Decker. Copyright: San Diego State University. Used with permission.
Students in a small group learning session at San Diego State University.

Human Presence

Students appreciated that GenAI could help in “making things more efficient” not just for themselves but for faculty too: “AI can support faculty in administrative tasks, allowing more time for teaching”; “AI could help streamline grading.” But fears regarding the potential loss of the live side of student-teacher relations were palpable.

As one student said, “I am mostly concerned about AI replacing human interactions in the classroom.” Another worried that “AI might make the learning less personal and more automated.” Still another lamented, “I don’t want to be taught and graded by a robot because it removes the human presence from course work.” Such worries were accompanied by statements such as “There should be a balance between AI and traditional education methods.”

Leveraging Student Insights for Faculty Development

Late in 2023, the Academic Senate of the CSU system passed a resolution calling for “Professional development opportunities for faculty to learn about generative AI and its applications to ensure they are prepared to effectively integrate it into their teaching.” Further, various government and regulatory agencies have pressed for widespread AI literacy training. For example, H.R. 6791 (the Artificial Intelligence Literacy Act of 2023) has been introduced in the US House of Representatives.

Our findings indicated that such training must address the value of authentic learning and support the health of the student-teacher relationship itself, rather than perpetuating a distrustful pedagogical environment. So that’s what we did in our Academic Applications of AI (AAAI) micro-credential program. 

Our self-paced online AAAI mini-course comprises five modules, offering participants a pathway to earn a digital badge. Tucked into the practical skills and theoretical foundations are pearls of wisdom from the student survey regarding the student-teacher relationship, the digital divide, bias in and validity of GenAI output, and the need for GenAI skills in today’s job market. 

The comprehensive rapid assessment we conducted before building the mini-course, together with the constant contact we maintained with stakeholder groups and campus partners, helped support high uptake: one in five faculty members had signed up to complete the AAAI even before it launched. Of further importance was our commitment to keeping the content manageable. The course (soon to be available to the public via SDSU Global Campus) can be audited in less than two hours, or completed for badging in just over four, making it feasible for even the busiest faculty members.

By marketing the course as informational—as a tool for making informed decisions regarding if, when, and how to use GenAI—and by attending to skeptical viewpoints, we were able to attract some of even the most AI-resistant faculty members. One such individual wrote in an assessment, “AI tools can really open up some doors of what can be done and the speed [which] in turn that opens up my thinking”; another said, “OMG! my perspective changed 100%, I was not aware of the capability of the apps.” 

Anti-Adversarial Effects

Our efforts to highlight how poorly the students-are-cheaters stereotype sits with students themselves and to promote authentic learning that values process over product seem to be working. As one instructor taking the course said, “Instead of [emphasizing] the students’ cheating using GenAI apps, it is a pleasant surprise to learn AI’s power to improve student learning.” Another said that their original orientation “was about catching plagiarism and discouraging students from using AI at all. Now I am thinking about it as a tool for me and my students. I can use this! Woohoo!” Woohoo indeed.

Authors

Elisa Sobo

Elisa "EJ" Sobo ([email protected]) is professor of anthropology and director of undergraduate research, College of Arts and Letters, San Diego State University (SDSU), and a GenAI Faculty Fellow at SDSU.

David Goldberg

David Goldberg is associate professor of management information systems, Fowler College of Business, SDSU, and a GenAI Faculty Fellow at SDSU.

Sean Hauze

Sean Hauze is senior director of Instructional Technology Services, SDSU.

Abir Mohamed

Abir Mohamed is an SDSU undergraduate majoring in business information systems and Africana studies and a GenAI Student Fellow at SDSU.

Colin Ro

Colin Ro is an SDSU undergraduate majoring in mechanical engineering and a GenAI Student Fellow at SDSU.

James P. Frazee

James P. Frazee is interim chief information officer (CIO) and vice president for information technology at San Diego State University.

Cite as

Sobo, Elisa, David Goldberg, Sean Hauze, Abir Mohamed, Colin Ro, and James P. Frazee. 2024. “‘I Don’t Want to Be Taught and Graded by a Robot’: Student-Teacher Relations in the Age of Generative AI.” Anthropology News website, June 18, 2024.
