The Token Writer: ChatGPT in the First-Year Writing Classroom

Type of Presentation

Individual paper/presentation

Conference Strand

Critical Literacy

Target Audience

Higher Education

Second Target Audience

K-12

Location

Ballroom A

Relevance

I'm presenting a classroom activity that fosters information literacy by helping college students understand how AI works. This helps them make informed, deliberate decisions about how to incorporate it into their research and writing process.

Proposal

Since ChatGPT was released in 2022, scholars have used various metaphors to describe how AI affects the writing process. ChatGPT has been described as a “calculator for writing” (Brynjolfsson, 2023), a writing assistant (Aguilar, 2024), and even a co-author (Stokel-Walker, 2023). However, these metaphors risk obscuring a crucial difference between the way humans and machines engage with texts: the former read for context and meaning, while the latter analyze patterns to predict the most likely word in a sequence. If we want college students to understand that ChatGPT will never be a viable substitute for the skills they develop in the first-year writing classroom, we need to create language and pedagogical strategies that foster information literacy by emphasizing the distinction between “reading” and “processing text.”

In my presentation, I will introduce a classroom activity that draws on the work of Harry Frankfurt (2005) to offer college students a different metaphor for thinking about generative AI: ChatGPT as a bullshitter. For Frankfurt, the liar intentionally subverts the truth, whereas the bullshitter is indifferent to the truth and will say anything to please an audience. In my first-year writing classroom, I teach college students to think of ChatGPT as a helpful bullshitter that “misleads to please” by showing them how large language models work. I ask students to “code” a passage using tokens, develop rules based on observed patterns, and then test those rules by decoding another section of the text.

This exercise illustrates how large language models make predictions while teaching students how and why ChatGPT hallucinates. Most importantly, it shows students that ChatGPT processes text without understanding it. This reinforces the importance of human ingenuity in the research and writing process, while also encouraging students to fact-check ChatGPT’s output and to trust it less when they cannot independently verify an answer. Through this activity, students can start developing strategies for incorporating large language models into their writing process (for brainstorming, feedback, and revision) without sacrificing critical thinking skills or becoming over-reliant on AI.
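The pattern-and-prediction idea behind the classroom activity can be mimicked in a few lines of code. The sketch below is an illustrative toy, not part of the presentation materials: it derives “rules” from bigram counts in a passage (which word most often follows each word) and then generates text by applying those rules alone, assuming a simple whitespace tokenizer. The point it makes is the same as the activity's: the output is produced entirely by pattern-matching, with no understanding of meaning.

```python
from collections import Counter, defaultdict

def build_rules(text):
    """Count which token most often follows each token (a toy bigram model)."""
    tokens = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        following[current][nxt] += 1
    # Each token's "rule" is simply its most frequent successor.
    return {tok: counts.most_common(1)[0][0] for tok, counts in following.items()}

def generate(rules, start, length=8):
    """Produce text by repeatedly applying the rules -- no comprehension involved."""
    out = [start]
    for _ in range(length - 1):
        nxt = rules.get(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

passage = ("the writer reads for meaning and the model predicts "
           "the next word and the model repeats the pattern")
rules = build_rules(passage)
print(generate(rules, "the"))
```

Because “and the model” occurs twice in the passage, the toy model confidently predicts “model” after every “the,” producing fluent-looking but circular text. That is the classroom lesson in miniature: statistical prediction can sound right without being grounded in meaning.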

Short Description

How can we teach students to incorporate ChatGPT into the research and writing process without becoming over-reliant on it?

Keywords

ChatGPT, information literacy, Harry Frankfurt, genre theory, composition, writing studies

Publication Type and Release Option

Presentation (Open Access)

Creative Commons License

Creative Commons Attribution 4.0 License
This work is licensed under a Creative Commons Attribution 4.0 License.

Feb 7th, 10:45 AM to 11:30 AM
