Reflections from the Roundtable

Panelists Explore the Role of Generative AI in Higher Education

In the fall of 2023, educators and students are figuring out how to make room in the classroom for a third kind of learner.

Large language models (LLMs) like ChatGPT are not capable of learning in the messy, human sense – but they are increasingly sophisticated. These forms of generative artificial intelligence (AI) “learn” from human inputs, training their algorithms to generate all kinds of content, from essay outlines and syllabi to email templates and dinner recipes.

LLMs can transform and streamline tasks at all levels of higher education. That potential also raises ethical questions about the role of AI in academic spaces, as well as in the workplaces students are preparing to enter.

On Friday, September 22nd, a panel of experts assembled at MSU and wrestled with these questions and possibilities. The four panelists, listed below, come from varied backgrounds in technology, education, and accessibility:

  • Huiling Ding, Professor of English at North Carolina State University and Director of Labor Analytics and Workforce Development at the Data Science Academy
  • Brendan Guenther, Chief Academic Digital Officer at the MSU Center for Learning and Teaching Innovation
  • Kate Sonka, Executive Director at Teach Access
  • Jeremy Van Hof, Director of Learning Technology and Development at the MSU Broad College of Business

The Department of Writing, Rhetoric, and Cultures (WRAC) co-sponsored the roundtable conversation, which was moderated by Caitlin Kirby, Associate Director of Research at the MSU Enhanced Digital Learning Initiative. The event was organized as part of the Prompt Response pop-up exhibition at the MSU Museum CoLab Studio.

Throughout the conversation, the panel responded to moderator- and audience-sourced questions about the use of generative AI at MSU, and more broadly in higher education and industry.

As the panelists responded to these questions and to the fast-changing nature of AI itself, several core themes emerged from their conversation: among them, honesty, assessment, accessibility, and preparedness for the evolution of AI – and, in turn, of the places we all work and learn.

In Lieu of Universal AI Policies, Start with Honesty

Positioning honesty as a guiding principle, Van Hof encouraged educators to be honest with their students about the “current capacity of these tools and what it will grow to be.”

By allocating time to learn about and simply play with generative AI, educators can better understand its potential, integrate AI tools into the classroom, and develop policies that suit their students’ needs and course learning goals. All four panelists envisioned classroom uses of AI that promote academic honesty, critical thinking, and more holistic, process-driven assessment.

As one example, Van Hof described a scenario in which a professor coaches their students to generate an analytical essay using ChatGPT. Students then take time to develop “really deep, meaningful questions about what this essay actually says,” Van Hof imagined, and discuss how they might write their own essays to communicate more effectively.

Review and (Re)Assess the Writing Process

The evolution of AI also demands creative forms of assessing and supporting students throughout the learning process. “We want to think about the different stages of writing and which stage might actually be okay to incorporate ChatGPT,” suggested Ding, mirroring Bill Hart-Davidson’s recommendations for AI use in writing classrooms.

In a recent “Ask the Expert” interview with the College of Arts and Letters, Hart-Davidson highlighted review and revision as the “durable human skills of writing,” drawing from the condensed writing process of write, review, revise, and repeat.

Students should still review, revise, and generate their own research ideas as independent, scholarly acts, Ding argued, but they could use AI for “pulsed writing,” such as copyediting or brainstorming. “ChatGPT produces really mediocre writing,” Ding emphasized, “and students should do better than that, but still think critically about the integration of these tools into the appropriate stage of writing.”

Ultimately, “the human thing is what we should be assessing,” Van Hof reflected. Guenther expressed a similar sentiment, noting that assessment is always contextual: first-year writing courses, for instance, will respond to and use AI differently than graduate-level writing courses or courses in non-writing departments.

Center Accessibility in Conversations About AI

Policies and responses to AI vary widely across disciplines, highlighting the need for interdisciplinary conversations about academic integrity, assessment, and other elements of the collegiate learning environment. At MSU, these conversations are already happening through the AI Club, which seeks to “empower students with the knowledge of AI through an inclusive environment that closes the gap between curiosity and hands-on practice in the field.”

The AI Club’s mission speaks to the importance of accessibility in both theoretical discussions and real-time uses of AI. Accessibility, as Sonka described it at the roundtable, requires thinking about the possibilities and limitations of AI for people with disabilities, as well as for disadvantaged or nontraditional students. Students, faculty, and other stakeholders should have access to spaces to explore AI tools, the panelists asserted, and to discuss their applications. The roundtable itself offered a clear example of this vision.

Prepare Students for a Fast-Changing World (and Jobs That Don’t Yet Exist)

As students prepare to enter the workforce, Sonka expressed hope that learning about accessible AI practices in the classroom means “they’ll know accessibility should be included from the beginning of whatever job they’re doing” – even if that job title doesn’t yet exist in 2023. More broadly, Sonka noted, AI compels its human users to consider how, in an ableist society, we can wield its power “in ways that allow us to be more human.”

Asked to reflect on the future of AI in the classroom, the panelists offered similar takeaways. Noting the automation of many entry-level and apprentice-type jobs, Ding said that educators face the daunting task of preparing students “for a world that requires them to be more productive, efficient, and advanced.” While the panelists acknowledged the uncertainty ahead, Ding and others pointed to the abundance of real-world professional connections at MSU, which empower students to start preparing for this world as early as possible.

In the absence of easy answers, Guenther suggested “case-based learning” as a way for students to gather data about their future industries – and potentially alleviate some of their fears or uncertainties about AI. Instructors can ask students to go out into the world, make observations about AI use and policies in a workplace of their choosing, and then report back to their peers and instructors.

As educators work to create learning experiences that reflect rapidly changing professional environments, students can and should “be allies in this work,” said Sonka. In the personalized setting of a classroom, educators have an opportunity – and perhaps an imperative – to model these efforts, learning how to use these tools critically and reflectively alongside their students.

This roundtable event represents only the beginning of the conversation about AI and academic use. The discussion continues in spaces like the AI Club and the Prompt Response pop-up exhibit at the Museum CoLab Studio, as well as in WRAC and in departments and classrooms across campus. Wherever we find ourselves in the matrix of higher education, AI continues to develop at a rapid clip, urging students and educators to learn, write, review, revise, and repeat – with and without the support of AI.