Generative AI & the Evolution of Academic Librarianship

During my first week as an academic librarian, many faculty discussions on campus centered on generative AI software such as ChatGPT. At a panel discussion held on campus about AI, a majority of the faculty in attendance expressed concerns over plagiarism, copyright, and academic integrity. The panelists, however, commented on how beneficial using AI could be. When asked specifically what faculty should do to combat potential cheating with generative AI, the panel seemed in agreement on an answer: educate your students on how to use AI responsibly.

I will admit: prior to starting my career as an academic librarian, I had never used generative AI. Of course, I saw generative AI blasted all over the news and saw updates on sites and apps like Snapchat, but I never understood what generative AI was. I did not have any interest in learning about it either. After attending the panel discussion, however, I was reminded of a book I read called Who Moved My Cheese? by Dr. Spencer Johnson. I was assigned to read Who Moved My Cheese? by a professor in graduate school and often refer back to it (I highly recommend reading it if you have not already done so). The book explains how change can happen unexpectedly, and when it does, it is better to adapt and move forward than to be left behind. Feeling like I was being left behind while other faculty embraced generative AI, I decided to learn as much as I could about it.

Although I read numerous articles and watched hours of YouTube videos, I was still confused about how generative AI worked. Near the end of August, my dean notified the library faculty of a course offered through ALA’s eLearning platform. The course, titled Exploring AI with Critical Information Literacy, was taught by Sarah Morris. I enrolled and learned about the development and usage of generative AI and machine learning, current discussions around AI, opportunities and challenges for AI usage in higher education, and how to engage with AI as an academic librarian. Throughout the course, we examined AI through a critical lens and discussed strategies for incorporating AI at our own institutions. I enjoyed the course and found the lesson on prompt engineering to be the most intriguing.

One of the ways in which academic librarians can enter the generative AI realm in higher education is by teaching faculty and students prompt engineering. Prompt engineering is the practice of deliberately crafting your input to a generative AI tool to obtain the output you want. While one can simply ask ChatGPT a standard question, prompt engineering recommends telling ChatGPT through what lens to answer the question. For example, if I wanted to craft a lesson for my class on implicit bias, I could plainly input:

“What lesson on implicit bias could I give my college class?”

Using prompt engineering, a better input would be:

“Act like an Academic Librarian teaching a college course on critical thinking. Design a lesson about implicit bias. Include topics for the class to discuss in small groups.”

While the results appeared similar, the detailed prompt elicited a result more applicable to my course by covering topics such as bias in information sources and media literacy.
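The pattern in the detailed prompt above (assign a role, state the task, add constraints) can be sketched as a small helper function. This is purely illustrative: the function name and structure are my own, not part of any ChatGPT interface, and the output is just a text string you would paste into (or send to) the tool.

```python
def build_prompt(role, task, constraints=None):
    """Assemble a role-based prompt from its parts (illustrative only)."""
    parts = [f"Act like {role}.", task]
    if constraints:
        parts.extend(constraints)
    return " ".join(parts)

prompt = build_prompt(
    role="an Academic Librarian teaching a college course on critical thinking",
    task="Design a lesson about implicit bias.",
    constraints=["Include topics for the class to discuss in small groups."],
)
print(prompt)
```

Thinking of a prompt in these three parts makes it easier to see which part to adjust when the output misses the mark: change the role to shift the lens, or add a constraint to shape the format.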

Another way academic librarians can educate faculty and students on generative AI is on responsible use. More specifically, we can create lessons and workshops around copyright, academic integrity, and the reliability of the output. I tried this with my critical thinking class. I first introduced the university’s academic integrity policy, including definitions of cheating and plagiarism. Because the majority of my class was unfamiliar with generative AI, I briefly explained how generative AI worked. Afterwards, I had the students discuss the potential benefits and challenges of using generative AI. Using my personal account (my university does not support the use of ChatGPT), I posed a question to ChatGPT and had the students read the output. I stressed that when used responsibly, ChatGPT can be a great resource for brainstorming; however, I cautioned my students against using it for writing assignments due to plagiarism, copyright infringement, and incorrect information. To illustrate this point further, I told my students about the two attorneys in New York who acquired case law through ChatGPT. The attorneys did not fact-check the case law, and the judge discovered that the cases did not exist; the citations ChatGPT provided were made up. Overall, the lesson was a success. Many students chose to explore generative AI in more depth for their final projects.

By embracing generative AI, academic librarians can increase their skillset and become a useful resource for faculty and students navigating the rapidly evolving world of AI. It will be interesting to learn about how varying universities respond, if they have not done so already. I imagine we will see new policies implemented on campus, positions established, and roles altered.

ChatGPT Can’t Envision Anything: It’s Actually BS-ing

 Since my first post on ChatGPT way back at the end of January (which feels like lifetimes ago), I’ve been keeping up with all things AI-related. As much as I can, anyway. My Zotero folder on the subject feels like it doubles in size all the time. One aspect of AI Literacy that I am deeply concerned about is the anthropomorphizing of ChatGPT; I have seen this more generally across the internet, and now I am seeing it happen in library spaces. What I mean by this is calling ChatGPT a “colleague” or “mentor” or referring to its output as ChatGPT’s thoughts.   

I am seriously concerned by “fun” articles that anthropomorphize ChatGPT in this way. We’re all librarians with evaluation skills who can think critically about ChatGPT’s answers to our prompts. But our knowledge of large language models varies from person to person, and it feels quite irresponsible to publish something wherein ChatGPT is referred to as a “colleague.” Even if ChatGPT is the one that “wrote” that.

Part of this is simply because we don’t have much language to describe what ChatGPT is doing, so we resort to things like “what ChatGPT thought.” A large language model does not think. It is putting words in order based on how they’ve been put in order in its training data. We can think of it like a giant autocomplete, or to be a bit crasser: a world-class bullshitter.
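That “giant autocomplete” framing can be made concrete with a toy sketch: a bigram model that continues a sentence by always picking whichever word most often followed the previous one in its training text. This is a drastic simplification of a real large language model, included only to illustrate the idea that the output is driven by statistics over past text, not by thought.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def autocomplete(follows, word, steps=3):
    """Continue from `word` by repeatedly choosing the most frequent next word."""
    out = [word]
    for _ in range(steps):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(autocomplete(model, "the"))
```

The toy model sounds vaguely fluent for the same reason ChatGPT does: it reproduces patterns of word order. Nowhere in either system is there a step where anything is understood or believed.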

Because natural language is used both when engaging with ChatGPT and when it generates answers, we are more inclined to personify the software. In my own tests lately, my colleague pointed out that I said “Oh, sorry,” when ChatGPT said it couldn’t do something I asked it to do. It is incredibly difficult to not treat ChatGPT like something that thinks or has feelings, even for someone like me who’s been immersed in the literature for a while now.  Given that, we need to be vigilant about the danger of anthropomorphizing.  

I also find myself concerned with articles that are mostly AI-generated, with maybe a paragraph or two from the human author. Granted, the author had to come up with specific prompts and ask ChatGPT to tweak its results, but I don’t think that’s enough. My own post back in January doesn’t even list the ChatGPT results in its body; I link out to it, and all 890 words are my own thoughts and musings (with some citations along the way). Why are we giving a large language model a direct platform? And one as popular as ChatGPT, at that? I’d love to say that I don’t think people are going to continue having ChatGPT write their articles for them, but it just happened with a lawyer writing an argument with fake sources (Weiser, 2023).

Cox and Tzoc wrote about the implications of ChatGPT for academic libraries back in March, and they have done a fairly good job of driving home that ChatGPT is not a “someone.” It’s continuously referred to as a tool throughout. I don’t necessarily agree that ChatGPT is the best tool to use in some of these situations; reference questions are one example. I tried this many times with my own ChatGPT account, using real reference questions we’ve gotten at the desk here at my university. Some answers are just fine. It will occasionally prompt the user for more information on their question, just like we as reference librarians do, and it suggests that users evaluate their sources more deeply (and consult librarians). But there obviously isn’t any teaching going on, just ChatGPT spitting out answers. Students will come back to ChatGPT again and again because they aren’t being shown how to do anything, not to mention that ChatGPT can’t guide them through a database’s user interface.

I asked it for journals on substance abuse and social work specifically, and it actually linked out to them and suggested that the patron check with their institution or library. If my prompt asks for “information from a scholarly journal,” ChatGPT will say it doesn’t have access to that. If I ask for research though, it’s got no problem spawning a list of (mostly) fake citations. I find it interesting what it will or won’t generate based on the specific words in your prompt. Due to this, I’m really not worried about ChatGPT replacing librarians; ChatGPT can’t do reference.  

We need to talk and think about the challenges and limitations that come with using ChatGPT. Algorithmic bias is one of the biggest challenges. ChatGPT is trained on a vast amount of data from the internet, and we all know how much of a cesspool the internet can be. I was able to get ChatGPT to produce biased output by asking it for career ideas as a female high school senior: Healthcare, Education, Business, Technology, Creative Arts, and Social Services. In the Healthcare category, physician was not a listed option; nurse was first. I then corrected the model and told it I was male. Its suggestions now included Engineering, Information Technology, Business, Healthcare, Law, and Creative Arts. What was first in the Healthcare category? Physician.

ChatGPT’s bias would be much, much worse if not for the human trainers that made the software safer to use. An article from TIME magazine by Billy Perrigo goes into the details, but just like social media moderation, training these models can be downright traumatic.  

There’s even more we need to think about when it comes to large language models: the environmental impact (Li et al., 2023), financial cost, opportunity cost (Bender et al., 2021), OpenAI’s clear intention to use us and our interactions with ChatGPT as training data, and copyright concerns. Personally, I don’t feel it’s worth using ChatGPT in any capacity; but I know the students I work with are going to, and we need to be able to talk about it. I liken it to spell-check: useful up to a point, but when it tells me my own last name is spelled wrong, I can move on and ignore the suggestion. I want to have conversations with students about the potential use cases, and about when it’s not the best idea to employ ChatGPT.

We as academic librarians are in a perfect position to teach AI Literacy and to help those around us navigate this new technology. We don’t need to be computer experts to do this – I certainly am not. But the first component of AI Literacy is knowing that large language models like ChatGPT cannot and do not think. “Fun” pieces that personify the technology only perpetuate the myth that it does.  


Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.

Cox, C., & Tzoc, E. (2023). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99.

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (arXiv:2304.03271). arXiv. 

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. 

Weiser, B. (2023, May 27). Here’s What Happens When Your Lawyer Uses ChatGPT. The New York Times.

A ChatGPT generated post (and a first year librarian’s thoughts)

Note: The ChatGPT generated content is in a linked Google Doc and labeled as such! 

As I’m sure anyone in academia is aware, ChatGPT and its AI counterparts are taking us by storm. I’ve seen it rolling around Twitter, in all-faculty emails at my institution, and of course in places like the Chronicle of Higher Education. I know I’m a bit late to the conversation, but it does feel like AI technology has exploded (or maybe I haven’t been paying attention before now). This may be a defining point in the first stage of my career.  

Truthfully, I don’t yet know how I feel about it or what it means for us, but I asked it to generate a post about engaging with teaching faculty as an academic librarian so that I could play around. There are three versions: the output verbatim, one tailored to ACRLog’s style, and one written as a Twitter thread. I find the differences absolutely fascinating, and the possibilities for teaching endless as well. I have my concerns, too, which I’ll go over.

These were my instructions to the AI in order:  

  • “Can you write a different blog post, this time talking about the nuances of engaging with teaching faculty as an academic librarian? You can talk about the ways we are in classes (like one-shot instruction, embedded in learning management systems, etc.). Can it also contain advice for new academic librarians?” 
  • “Can you tailor it to the style of the blog, ACRLog?” 
  • “Can you make this a twitter thread instead?” 

Here is a Google Doc with all three versions. As you can see, it pulled on the specific keywords I gave it: one-shot instruction and embedded in learning management systems. It even took the language, “nuances of engaging,” without actually talking about some nuances. You have to be quite specific with the initial ask in order for the software to give you what you need. The way it tailors to different writing styles is interesting, and I think it could make for a fun class exercise and learning experience about writing for a specific audience. It spits out a very base-level answer to my request and doesn’t make very smooth transitions (which is perhaps a partial result of the types of writing I chose). 

Where this sort of tool can really shine, in my opinion, is as a starting point. Is there an email you’re dreading sending because it’s somehow sensitive? Try prompting ChatGPT to write it. It can give you a starting point and some “professional” language to help you navigate the interaction. Have you been staring at a blank document for hours, unsure where to start? Get ChatGPT to generate something. It’s not going to give you a fully written article, but it puts words on the page, which can perhaps jumpstart your brain. (Especially if the AI got something wrong!) Maybe you don’t even use what it generated, but reading it gets you thinking. It’s a writing tool, not a writer itself. Will some students misuse it in the academic context? Undoubtedly. I enjoyed the way that Christopher Grobe talked about ChatGPT in his article, Why I’m Not Scared of ChatGPT. It details the many limitations of AI, and how it may help students in the writing process.

At the same time, I understand where concerns come in. What if students’ assignments ask for cited sources? If you ask ChatGPT for an essay with citations, it says this: “Unfortunately, as a language model AI, I am unable to do proper citation in an essay format. However, I can provide you some key points and information about the topic.” Which I suppose is good in a way, though providing base information without sources is reminiscent of students writing the essay first and then searching for sources to back it up. This is something I try to address directly in 100 and 200 level classes.

With AI models like this, it’s also important for us as librarians to be mindful of copyright. As I was talking with a friend about the outputs I got, they pushed back on my initial conception that ChatGPT is somehow transforming its data (and therefore within fair use). What is ChatGPT pulling from in order to train the model to answer its prompts? Their FAQ says this: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.” We should be asking what “vast amounts of data” entails. It’s already been asked especially of the art-based AI systems (this Verge article goes in depth), and artists are concerned about a loss of income because of it. We should ask the same of text-based AI too. There’s even a Have I Been Trained? tool that helps artists see if their work was used to train the machines, and flag it for removal. This particular aspect of AI tools is huge, and I don’t pretend to know the answers; I was grateful to my friend for reminding me of the questions that need to be asked.

Sound off below with your own thoughts on the subject. I’d love to hear where librarians’ heads are at when it comes to ChatGPT. I’ve linked some resources below (as well as citations for what I mentioned above). 

AI Text Generators: Sources to Stimulate Discussion among Teachers, Compiled by Anna Mills and licensed CC BY NC 4.0. 

ChatGPT FAQ. (n.d.). OpenAI. Retrieved January 30, 2023.

Grobe, C. (2023, January 18). Why I’m Not Scared of ChatGPT. The Chronicle of Higher Education. 

Have I Been Trained? Launched by Holly Herndon and Mat Dryhurst.  

Heikkilä, M. (2022, September 16). This artist is dominating AI-generated art. And he’s not happy about it. MIT Technology Review. 

Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge. 

Watkins, R. (2022, December 19). Update Your Course Syllabus for chatGPT. Medium.