Note: The ChatGPT generated content is in a linked Google Doc and labeled as such!
As I’m sure anyone in academia is aware, ChatGPT and its AI counterparts are taking us by storm. I’ve seen it rolling around Twitter, in all-faculty emails at my institution, and of course in places like the Chronicle of Higher Education. I know I’m a bit late to the conversation, but it does feel like AI technology has exploded (or maybe I haven’t been paying attention before now). This may be a defining point in the first stage of my career.
Truthfully, I don’t yet know how I feel about it or what it means for us, but to play around, I asked it to generate a post about engaging with teaching faculty as an academic librarian. There are three versions: one verbatim from my first prompt, one that I asked it to tailor to ACRLog’s style, and one reworked as a Twitter thread. I find the differences absolutely fascinating, and the possibilities for teaching endless as well. I have my concerns, too, which I’ll go over.
These were my instructions to the AI in order:
- “Can you write a different blog post, this time talking about the nuances of engaging with teaching faculty as an academic librarian? You can talk about the ways we are in classes (like one-shot instruction, embedded in learning management systems, etc.). Can it also contain advice for new academic librarians?”
- “Can you tailor it to the style of the blog, ACRLog?”
- “Can you make this a twitter thread instead?”
Here is a Google Doc with all three versions. As you can see, it pulled on the specific keywords I gave it: one-shot instruction and embedded in learning management systems. It even borrowed my phrase, “nuances of engaging,” without actually discussing any nuances. You have to be quite specific with the initial ask for the software to give you what you need. The way it tailors to different writing styles is interesting, and I think it could make for a fun class exercise and learning experience about writing for a specific audience. It spits out a very base-level answer to my request and doesn’t make very smooth transitions (which is perhaps partly a result of the types of writing I chose).
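For anyone curious how this kind of step-by-step prompting works outside the web interface, here is a minimal sketch of how the three prompts above map onto a multi-turn conversation with a chat model’s API. The helper function and the commented-out API call are my own illustration and assumptions, not part of the original experiment; the point is simply that each follow-up (“tailor it,” “make it a thread”) only works because the earlier turns stay in the conversation history.

```python
# Sketch: the three prompts from this post as a multi-turn conversation.
# Hypothetical illustration only; the original prompts were entered in
# the ChatGPT web interface.

def build_conversation(prompts):
    """Accumulate user prompts into a multi-turn message list.

    Each follow-up request depends on the model seeing the whole
    history, which is why "Can you tailor it..." makes sense as a
    second turn rather than a standalone prompt.
    """
    messages = []
    for prompt in prompts:
        messages.append({"role": "user", "content": prompt})
        # In a live session, the assistant's reply would be appended
        # here before the next user turn, e.g.:
        # reply = client.chat.completions.create(model=..., messages=messages)
        # messages.append({"role": "assistant",
        #                  "content": reply.choices[0].message.content})
    return messages

prompts = [
    "Can you write a different blog post, this time talking about the "
    "nuances of engaging with teaching faculty as an academic librarian? "
    "You can talk about the ways we are in classes (like one-shot "
    "instruction, embedded in learning management systems, etc.). Can it "
    "also contain advice for new academic librarians?",
    "Can you tailor it to the style of the blog, ACRLog?",
    "Can you make this a twitter thread instead?",
]
conversation = build_conversation(prompts)
```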
Where this sort of tool can really shine, in my opinion, is as a starting point. Is there an email you’re dreading to send because it’s sensitive? Try prompting ChatGPT to write it. It can give you a starting point and some “professional” language to help you navigate the interaction. Have you been staring at a blank document for hours, unsure where to start? Get ChatGPT to generate something. It’s not going to give you a fully written article, but it puts words on the page, which can jumpstart your brain (especially if the AI got something wrong!). Maybe you don’t even use what it generated, but reading it gets you thinking. It’s a writing tool, not a writer itself. Will some students misuse it in the academic context? Undoubtedly. I enjoyed the way Christopher Grobe talked about ChatGPT in his article, “Why I’m Not Scared of ChatGPT,” which details the many limitations of AI and how it may help students in the writing process.
At the same time, I understand where the concerns come in. What if students’ assignments ask for cited sources? If you ask ChatGPT for an essay with citations, it says this: “Unfortunately, as a language model AI, I am unable to do proper citation in an essay format. However, I can provide you some key points and information about the topic.” I suppose that’s good in a way, but handing over base information without sources is also reminiscent of students writing the essay first and then searching for sources to back it up afterward, a habit I try to address directly in 100- and 200-level classes.
With AI models like this, it’s also important for us as librarians to be mindful of copyright. As I was talking with a friend about the outputs I got, they pushed back on my initial conception that ChatGPT is somehow transforming its data (and is therefore fair use). What is ChatGPT pulling from in order to train the model to answer its prompts? The FAQ says this: “These models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.” We should be asking what “vast amounts of data” entails. That question has already been asked of the art-based AI systems (this Verge article goes in depth), and artists are concerned about a loss of income as a result. We should ask the same of text-based AI. There’s even a Have I Been Trained? tool that helps artists see whether their work was used to train the models and flag it for removal. This particular aspect of AI tools is huge, and I don’t pretend to know the answers; I was grateful to my friend for reminding me of the questions that need to be asked.
Sound off below with your own thoughts on the subject. I’d love to hear where librarians’ heads are at when it comes to ChatGPT. I’ve linked some resources below (as well as citations for what I mentioned above).
Mills, A. (Comp.). AI Text Generators: Sources to Stimulate Discussion among Teachers. Licensed CC BY-NC 4.0.
ChatGPT FAQ. (n.d.). OpenAI. Retrieved January 30, 2023, from https://help.openai.com/en/articles/6783457-chatgpt-faq
Grobe, C. (2023, January 18). Why I’m Not Scared of ChatGPT. The Chronicle of Higher Education. https://www.chronicle.com/article/why-im-not-scared-of-chatgpt
Herndon, H., & Dryhurst, M. Have I Been Trained?
Heikkilä, M. (2022, September 16). This artist is dominating AI-generated art. And he’s not happy about it. MIT Technology Review. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
Vincent, J. (2022, November 15). The scary truth about AI copyright is nobody knows what will happen next. The Verge. https://www.theverge.com/23444685/generative-ai-copyright-infringement-legal-fair-use-training-data
Watkins, R. (2022, December 19). Update Your Course Syllabus for chatGPT. Medium. https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003