ChatGPT Can’t Envision Anything: It’s Actually BS-ing

 Since my first post on ChatGPT way back at the end of January (which feels like lifetimes ago), I’ve been keeping up with all things AI-related. As much as I can, anyway. My Zotero folder on the subject feels like it doubles in size all the time. One aspect of AI Literacy that I am deeply concerned about is the anthropomorphizing of ChatGPT; I have seen this more generally across the internet, and now I am seeing it happen in library spaces. What I mean by this is calling ChatGPT a “colleague” or “mentor” or referring to its output as ChatGPT’s thoughts.   

I am seriously concerned by “fun” articles that anthropomorphize ChatGPT in this way. We’re all librarians with evaluation skills that can critically think about ChatGPT’s answers to our prompts. But our knowledge on large language models varies from person to person, and it feels quite irresponsible to publish something wherein ChatGPT is referred to as a “colleague.” Even if ChatGPT is the one that “wrote” that.  

Part of this is simply because we don’t have much language to describe what ChatGPT is doing, so we resort to things like “what ChatGPT thought.” A large language model does not think. It is putting words in order based on how they’ve been put in order in its past training data. We can think of it like a giant autocomplete, or to be a bit crasser: a worldclass bullshitter.  

Because natural language is used both when engaging with ChatGPT and when it generates answers, we are more inclined to personify the software. In my own tests lately, my colleague pointed out that I said “Oh, sorry,” when ChatGPT said it couldn’t do something I asked it to do. It is incredibly difficult to not treat ChatGPT like something that thinks or has feelings, even for someone like me who’s been immersed in the literature for a while now.  Given that, we need to be vigilant about the danger of anthropomorphizing.  

I also find myself concerned with articles that are mostly AI-generated, with maybe a paragraph or two from the human author.  Given, the author had to come up with specific prompts and ask ChatGPT to tweak its results, but I don’t think that’s enough. My own post back in January doesn’t even list the ChatGPT results in its body; I link out to it, and all 890 words are my own thoughts and musings (with some citations along the way). Why are we giving a large language model a direct platform? And one as popular as ChatGPT, at that? I’d love to say that I don’t think people are going to continue having ChatGPT write their articles for them, but it just happened with a lawyer writing an argument with fake sources (Weiser, 2023).  

Cox and Tzoc wrote about the implications of ChatGPT for academic libraries back in March, and they have done a fairly good job with driving home that ChatGPT is not a “someone.” It’s continuously referred to as a tool throughout. I don’t necessarily agree that ChatGPT is the best tool to use in some of these situations; reference questions are one of those examples.  I tried doing this with my own ChatGPT account many times, and with real reference questions we’ve gotten at the desk here at my university. Some answers are just fine. There obviously isn’t any teaching going on, just ChatGPT spitting out answers. Students will come back to ChatGPT again and again because they aren’t being shown how to do anything, not to mention that ChatGPT can’t guide them through a database’s user interface. It will occasionally prompt the user for more information on their question, just like we as reference librarians do. It also suggests that users evaluate their sources more deeply (and to consult librarians).  

I asked it for journals on substance abuse and social work specifically, and it actually linked out to them and suggested that the patron check with their institution or library. If my prompt asks for “information from a scholarly journal,” ChatGPT will say it doesn’t have access to that. If I ask for research though, it’s got no problem spawning a list of (mostly) fake citations. I find it interesting what it will or won’t generate based on the specific words in your prompt. Due to this, I’m really not worried about ChatGPT replacing librarians; ChatGPT can’t do reference.  

We need to talk and think about the challenges and limitations that come with using ChatGPT. Algorithmic bias is one of the biggest challenges. ChatGPT is trained on a vast amount of data from the internet, and we all know how much of a cesspool the internet can be. I was able to get ChatGPT to give me bias by asking it for career ideas as a female high school senior: Healthcare, Education, Business, Technology, Creative Arts, and Social Services. In the Healthcare category, physician was not a listed option; nurse was first. I then corrected the model and told it I was male. Its suggestions now included Engineering, Information Technology, Business, Healthcare, Law, and Creative Arts. What was first in the Healthcare category? Physician.  

ChatGPT’s bias would be much, much worse if not for the human trainers that made the software safer to use. An article from TIME magazine by Billy Perrigo goes into the details, but just like social media moderation, training these models can be downright traumatic.  

There’s even more we need to think about when it comes to large language models – the environmental impact (Li et al, 2023), financial cost, opportunity cost (Bender et al, 2021), OpenAI’s clear intention to use us and our interactions with ChatGPT as training data, and copyright concerns. Personally, I don’t feel it’s worth using ChatGPT in any capacity; but I know the students I work with are going to, and we need to be able to talk about it. I liken it to SpellCheck; useful to a certain point, but when it tells me my own last name is spelled wrong, I can move on and ignore the suggestion.  I want to have conversations with students about the potential use cases, and when it’s not the best idea to employ ChatGPT. 

We as academic librarians are in a perfect position to teach AI Literacy and to help those around us navigate this new technology. We don’t need to be computer experts to do this – I certainly am not. But the first component of AI Literacy is knowing that large language models like ChatGPT cannot and do not think. “Fun” pieces that personify the technology only perpetuate the myth that it does.  

References 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ?. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922 

Cox, C., & Tzoc, E. (2023). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99. https://doi.org/10.5860/crln.84.3.99

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (arXiv:2304.03271). arXiv. http://arxiv.org/abs/2304.03271 

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/ 

Weiser, B. (2023, May 27). Here’s What Happens When Your Lawyer Uses ChatGPT. New York Times (Online). https://www.proquest.com/nytimes/docview/2819646324/citation/BD819582BDA74BAAPQ/1 

3 thoughts on “ChatGPT Can’t Envision Anything: It’s Actually BS-ing”

  1. This is wonderful insight Emily, way to call out the bias and BS that can come from AI.

  2. Great post! Sort of related – something I’ve noticed that’s been bugging me are these clickbait-y videos and posts from what I am going to call “social media academics.”

    I think videos like this one https://www.youtube.com/watch?v=yklFHtlK4sQ&lc=Ugz4ZjNawPRqonJnkTJ4AaABAg.9s7OkcugPgx9sJ0cila606 are extremely irresponsible. Touting AI and more specifically ChatGPT as some type of revolutionary game changer for academia, but taking no time to mention the limitations of the AI nor (in the case of this particular video) the ethics of inputting any type of data into a third party website.

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.