ChatGPT Can’t Envision Anything: It’s Actually BS-ing

Since my first post on ChatGPT way back at the end of January (which feels like lifetimes ago), I’ve been keeping up with all things AI-related, as much as I can, anyway. My Zotero folder on the subject feels like it doubles in size all the time. One aspect of AI Literacy that I am deeply concerned about is the anthropomorphizing of ChatGPT; I have seen this more generally across the internet, and now I am seeing it happen in library spaces. What I mean by this is calling ChatGPT a “colleague” or “mentor,” or referring to its output as ChatGPT’s thoughts.

I am seriously concerned by “fun” articles that anthropomorphize ChatGPT in this way. We’re all librarians with evaluation skills, able to think critically about ChatGPT’s answers to our prompts. But our knowledge of large language models varies from person to person, and it feels quite irresponsible to publish something wherein ChatGPT is referred to as a “colleague,” even if ChatGPT is the one that “wrote” that.

Part of this is simply because we don’t have much language to describe what ChatGPT is doing, so we resort to things like “what ChatGPT thought.” A large language model does not think. It predicts the next word based on how words have been arranged in its training data. We can think of it as a giant autocomplete, or, to be a bit crasser, a world-class bullshitter.
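To make the “giant autocomplete” idea concrete, here is a toy sketch (my own illustration, not anything from OpenAI): a tiny word-level model that “learns” only which word tends to follow which in a scrap of training text, then generates by sampling those continuations. Real large language models use neural networks trained on billions of documents rather than raw counts, but the basic move is the same: predict the next token from patterns in past text.

```python
import random
from collections import defaultdict

# Toy "autocomplete": record which word follows which in the training
# text, then generate by sampling from those recorded continuations.
training_text = (
    "the librarian answered the question "
    "the librarian found the source "
    "the student asked the librarian a question"
)

follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Extend `start` by repeatedly sampling an observed next word."""
    output = [start]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no observed continuation for this word
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("the"))
# e.g. "the librarian found the source the student asked the"
```

The output can sound fluent, but nothing in this process understands, intends, or thinks; it only reproduces statistical patterns, which is exactly why “what ChatGPT thought” is the wrong frame.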

Because natural language is used both when engaging with ChatGPT and when it generates answers, we are more inclined to personify the software. During my own recent tests, my colleague pointed out that I said “Oh, sorry” when ChatGPT said it couldn’t do something I asked it to do. It is incredibly difficult not to treat ChatGPT like something that thinks or has feelings, even for someone like me who’s been immersed in the literature for a while now. Given that, we need to be vigilant about the danger of anthropomorphizing.

I also find myself concerned with articles that are mostly AI-generated, with maybe a paragraph or two from the human author. Granted, the author had to come up with specific prompts and ask ChatGPT to tweak its results, but I don’t think that’s enough. My own post back in January doesn’t even list the ChatGPT results in its body; I link out to them, and all 890 words are my own thoughts and musings (with some citations along the way). Why are we giving a large language model a direct platform? And one as popular as ChatGPT, at that? I’d love to say that I don’t think people are going to continue having ChatGPT write their articles for them, but a lawyer just filed a legal argument full of fake sources (Weiser, 2023).

Cox and Tzoc wrote about the implications of ChatGPT for academic libraries back in March, and they do a fairly good job of driving home that ChatGPT is not a “someone”; it’s consistently referred to as a tool throughout. I don’t necessarily agree that ChatGPT is the best tool to use in some of the situations they describe; reference questions are one example. I tried this many times with my own ChatGPT account, using real reference questions we’ve gotten at the desk here at my university. Some answers are just fine, but there obviously isn’t any teaching going on, just ChatGPT spitting out answers. Students will come back to ChatGPT again and again because they aren’t being shown how to do anything, not to mention that ChatGPT can’t guide them through a database’s user interface. To its credit, it will occasionally prompt the user for more information on their question, just as we do as reference librarians, and it suggests that users evaluate their sources more deeply (and consult librarians).

I asked it specifically for journals on substance abuse and social work, and it actually linked out to them and suggested that the patron check with their institution or library. If my prompt asks for “information from a scholarly journal,” ChatGPT will say it doesn’t have access to that; if I ask for research, though, it has no problem spawning a list of (mostly) fake citations. I find it interesting what it will or won’t generate based on the specific words in your prompt. Because of this, I’m really not worried about ChatGPT replacing librarians; ChatGPT can’t do reference.

We need to talk and think about the challenges and limitations that come with using ChatGPT. Algorithmic bias is one of the biggest. ChatGPT is trained on a vast amount of data from the internet, and we all know how much of a cesspool the internet can be. I was able to surface bias by asking for career ideas as a female high school senior: Healthcare, Education, Business, Technology, Creative Arts, and Social Services. In the Healthcare category, physician was not a listed option; nurse was first. I then corrected the model and told it I was male. Its suggestions now included Engineering, Information Technology, Business, Healthcare, Law, and Creative Arts. What was first in the Healthcare category? Physician.
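For anyone who wants to reproduce this kind of probe, here is a minimal sketch using OpenAI’s Python library as it existed in 2023 (the `openai` package’s `ChatCompletion` interface). The model name and prompt wording are my paraphrase of the experiment, not a transcript of the original session, which took place in the ChatGPT web interface.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Two prompts that differ only in the stated gender of the student.
PROMPTS = {
    "female": "I am a female high school senior. What careers should I consider?",
    "male": "I am a male high school senior. What careers should I consider?",
}

for label, prompt in PROMPTS.items():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Reading the two outputs side by side makes any skew easy to spot, for example whether “physician” or “nurse” leads the healthcare suggestions under each framing.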

ChatGPT’s bias would be much, much worse if not for the human trainers who made the software safer to use. An article by Billy Perrigo in TIME magazine goes into the details (Perrigo, 2023): just like social media content moderation, training these models can be downright traumatic work.

There’s even more we need to think about when it comes to large language models – the environmental impact (Li et al., 2023), financial cost, opportunity cost (Bender et al., 2021), OpenAI’s clear intention to use us and our interactions with ChatGPT as training data, and copyright concerns. Personally, I don’t feel it’s worth using ChatGPT in any capacity, but I know the students I work with are going to, and we need to be able to talk about it. I liken it to spell check: useful to a point, but when it tells me my own last name is spelled wrong, I can move on and ignore the suggestion. I want to have conversations with students about potential use cases, and about when it’s not the best idea to employ ChatGPT.

We as academic librarians are in a perfect position to teach AI Literacy and to help those around us navigate this new technology. We don’t need to be computer experts to do this – I certainly am not. But the first component of AI Literacy is knowing that large language models like ChatGPT cannot and do not think. “Fun” pieces that personify the technology only perpetuate the myth that it does.  

References 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Cox, C., & Tzoc, E. (2023). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99. https://doi.org/10.5860/crln.84.3.99

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (arXiv:2304.03271). arXiv. http://arxiv.org/abs/2304.03271 

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/ 

Weiser, B. (2023, May 27). Here’s What Happens When Your Lawyer Uses ChatGPT. New York Times (Online). https://www.proquest.com/nytimes/docview/2819646324/citation/BD819582BDA74BAAPQ/1 

A ChatGPT-generated post (and a first-year librarian’s thoughts)

Note: The ChatGPT-generated content is in a linked Google Doc and labeled as such!

As I’m sure anyone in academia is aware, ChatGPT and its AI counterparts are taking us by storm. I’ve seen it rolling around Twitter, in all-faculty emails at my institution, and of course in places like the Chronicle of Higher Education. I know I’m a bit late to the conversation, but it does feel like AI technology has exploded (or maybe I just wasn’t paying attention before now). This may be a defining point in the first stage of my career.

Truthfully, I don’t yet know how I feel about it or what it means for us, but I asked it to generate a post about engaging with teaching faculty as an academic librarian so that I could play around. There are three versions: the verbatim original, one that I asked it to tailor to ACRLog’s style, and one rewritten as a Twitter thread. I find the differences absolutely fascinating, and the possibilities for teaching endless as well. I have my concerns, too, which I’ll go over.

These were my instructions to the AI, in order:

  • “Can you write a different blog post, this time talking about the nuances of engaging with teaching faculty as an academic librarian? You can talk about the ways we are in classes (like one-shot instruction, embedded in learning management systems, etc.). Can it also contain advice for new academic librarians?” 
  • “Can you tailor it to the style of the blog, ACRLog?” 
  • “Can you make this a Twitter thread instead?”

Here is a Google Doc with all three versions. As you can see, it pulled on the specific keywords I gave it: one-shot instruction and embedded in learning management systems. It even borrowed the language “nuances of engaging” without actually discussing any nuances. You have to be quite specific with the initial ask for the software to give you what you need. The way it tailors to different writing styles is interesting, and I think it could make for a fun class exercise and a learning experience about writing for a specific audience. It spits out a very base-level answer to my request and doesn’t make very smooth transitions (which is perhaps partly a result of the types of writing I chose).

Where this sort of tool can really shine, in my opinion, is as a starting point. Is there an email you’re dreading sending because it’s sensitive somehow? Try prompting ChatGPT to write it. It can give you a starting point and some “professional” language to help you navigate the interaction. Have you been staring at a blank document for hours, unsure where to start? Get ChatGPT to generate something. It’s not going to give you a fully written article, but it puts words on the page, which can perhaps jumpstart your brain (especially if the AI got something wrong!). Maybe you don’t even use what it generated, but reading it gets you thinking. It’s a writing tool, not a writer itself. Will some students misuse it in the academic context? Undoubtedly. I enjoyed the way Christopher Grobe talked about ChatGPT in his article “Why I’m Not Scared of ChatGPT” (Grobe, 2023), which details the many limitations of AI and how it may help students in the writing process.

At the same time, I understand where concerns come in. What if students’ assignments ask for cited sources? If you ask ChatGPT for an essay with citations, it says this: “Unfortunately, as a language model AI, I am unable to do proper citation in an essay format. However, I can provide you some key points and information about the topic.” Which I suppose is good in a way, but giving base information anyway is reminiscent of students writing the essay first and then searching for sources to back it up. This is something I try to address directly in 100- and 200-level classes.

With AI models like this, it’s also important for us as librarians to be mindful of copyright. As I was talking with a friend about the outputs I got, they pushed back on my initial assumption that ChatGPT somehow transforms its data (and is therefore fair use). What is ChatGPT pulling from in order to train the model to answer its prompts? OpenAI’s FAQ says this: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.” We should be asking what “vast amounts of data” entails. The question has already been asked of the art-based AI systems (this Verge article by Vincent goes in depth), and artists are concerned about a loss of income because of it (Heikkilä, 2022). We should ask the same of text-based AI. There’s even a Have I Been Trained? tool that helps artists see whether their work was used to train the models and flag it for removal. This particular aspect of AI tools is huge, and I don’t pretend to know the answers; I was grateful to my friend for reminding me of the questions that need to be asked.

Sound off below with your own thoughts on the subject. I’d love to hear where librarians’ heads are at when it comes to ChatGPT. I’ve linked some resources below (as well as citations for what I mentioned above). 

AI Text Generators: Sources to Stimulate Discussion among Teachers, compiled by Anna Mills and licensed CC BY-NC 4.0.

ChatGPT FAQ. (n.d.). OpenAI. Retrieved January 30, 2023, from https://help.openai.com/en/articles/6783457-chatgpt-faq 

Grobe, C. (2023, January 18). Why I’m Not Scared of ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/why-im-not-scared-of-chatgpt 

Have I Been Trained? Launched by Holly Herndon and Mat Dryhurst.  

Heikkilä, M. (2022, September 16). This artist is dominating AI-generated art. And he’s not happy about it. MIT Technology Review. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/ 

Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge. https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data 

Watkins, R. (2022, December 19). Update Your Course Syllabus for chatGPT. Medium. https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003