The Adventures of a Zillennial Librarian

In the past couple of weeks, I’ve attended a few library webinars focused on Generation Z out of my own curiosity. For full transparency, I am a fairly young librarian; I took one gap year between undergrad and library school. I’m in that liminal space of not quite millennial, not quite Generation Z (I’ve seen it called a “zillennial”). My birth year has fallen on either side of the generational cutoff, depending on who you ask. I often relate to the experiences and outlooks of both generations. I still get mistaken for a student, and I am indeed on TikTok like a lot of the typical college-age students I teach.

I felt particularly “Gen Z” in a research consultation I just had with one of my Environmental Studies students. She needed some legislation from the 80s and 90s, and my state’s government website only has the most recent version. My library is a government repository; we have a specific government documents section of our stacks. Was that the first place I went? Nope. I scoured many a website, and eventually did find the 1989 version we were looking for in the appendix of a 1993 thesis from the University of Montana. Thank goodness for OCR, searchable full text, and institutional repositories!

We did, however, have it in our Maryland Register up in the stacks. This allowed us to find the date it was proposed and the date it was passed, for both versions of the law (and cite it properly!). This consultation got me thinking, though, about my instincts as a librarian, and how my world experiences and generation shape the way I go about finding information, even after being trained in it for my master’s degree. Looking in the physical collection is only a thought after I exhaust all of my online searching techniques. I, and I’d wager many of my students, prefer the ease of finding and reading something online. Although I had dial-up internet for perhaps longer than most folks (I had a version of it until about 2013 or so? Living-in-the-middle-of-nowhere problems), the internet in general was a big part of growing up and learning how to research. Yes, I love a physical book as much as the next person – but I’m talking more about answering my own questions or doing research. In a recent ASERL webinar on Gen Z, a presenter noted that “[Gen Z is] so used to finding what they need on their own.” I heavily relate to this. My first impulse is to pull out my phone and run a Google search; I’m sure this is the case for many now, regardless of generation.

Another difference I’ve noticed as a young librarian is that I actively encourage the use of Google Scholar (and actually use it myself). I have attended library sessions where it is discouraged and interacted with faculty who do not want students using it. I personally find that it is a good stepping stone from performing regular Google searches to diving into an academic database that might look completely foreign to students. They can still use natural language in Google Scholar and get some relevant results, but they get better ones when we as information professionals introduce them to Boolean operators and other strategies. It’s also been really useful when a student has too broad a topic – searching in Google Scholar lets them see how different disciplines are discussing it, and how other academics have narrowed things down. They can choose which pathway they’d like to explore further, and once they have a good research question and keywords to try, we can get into the library databases, all the while talking about the differences between Google Scholar and Academic Search Ultimate. The “Cited by” function has also been invaluable in teaching students about the academic conversation as a concept.

Another aspect of Gen Z from the ASERL webinar I attended is that despite being constantly online, we generally prefer face-to-face communication. In my personal experience, this preference was heightened by the pandemic, when face-to-face wasn’t even an option. I will take any and all other forms of communication over a phone call, though; I’m not sure that’s necessarily attributable to being Gen Z so much as it is an “Emily” thing. The reason is that I can’t read the other person’s body language or facial expressions. You might ask: Emily, can’t you also not do that over chat, text, and email? True, but the difference is that there isn’t an expectation to respond immediately – I can have a moment to really take in the other person’s words and consider my response.

As an example of face-to-face communication in my workplace and work life, I would much rather go down to my colleague’s office and ask them my question than send an email. This is partly due to our collective open-door policy, but for some reason, emailing feels overly formal to me in a lot of cases. If that isn’t an option, I might send the message over Slack. Of course, if it’s important to have some sort of paper trail, I’ll gladly email – it is very helpful to have a record of what a professor and I talked about when I’m preparing a lesson plan, for instance. If my email data scarf had tracked all kinds of work communication, I’d be interested in how the percentages broke down! Perhaps that should be my next data project.

These are just a few things I’ve been thinking about as a strange middle-ground zillennial librarian lately, especially since that research consultation. I am endlessly fascinated by generational research as a whole, so if you’ve got any thoughts, please comment them down below.  

Physical data visualization 2: The email data scarf returns

Emily’s email data scarf draped over the back of two chairs.

In December last year, I made a post about tracking how many emails I sent every day from July 27, 2022 to July 27, 2023. This encompasses my entire first year as a professional librarian, and I’m really happy to say that the scarf I was making to embody this data is now finished! Well, I still need to weave in the ends (crafters know the dread), but the actual crocheting is finished.

Looking back on my initial post about the project, it’s funny that I mentioned making my five rows a weekly routine… that responsible way of doing things did not stick. I crocheted the rows for roughly March through July all in the last two weeks or so. I did keep up with entering my data into an Excel sheet during that time, but between some traveling, moving, and life in general, I had to play catch-up with actually crocheting the scarf.

I was also motivated to finish this project because I’m presenting it at the International Visual Literacy Association’s conference in just a few short weeks! The scarf and its color key will be part of a poster presentation. It’s also a chance to really dig into why I did this – and what changes, if any, it led to in my email behavior more generally.  

At first glance at the scarf, the beginning of my year at Salisbury had a lot more emails. I used mail merge twice over the first few months (indicated by bobble stitches as opposed to single crochet), and there are two additional rows of red indicating days when I sent 12 or more emails. There are also more pink tones in the first semester, which indicate six or more emails sent on a given day. After winter break, though, the colors clearly shift toward the purples, which stand for five emails or fewer on a given day.

There are two long stretches of grey, which indicate when I was off: during winter break at Christmastime and at the end of May into early June. I sent at least one email while I was off both times, which you can see by the row of white in between the grey. White isn’t a common occurrence throughout the scarf – based on my own feelings, I’d say my work-life balance is generally quite good, and this visualized data backs that up! I do have to send an email or message occasionally while out of the office, mainly because I supervise student workers who are here in the evenings and on weekends.

On workdays, the average number of emails I sent per day was 3.8. My counts were as follows, where the left column is the number of emails sent in a given workday and the right column is the number of days that count occurred. The color each count represents is in parentheses.

Number of emails (color): number of days
0 (Brown): 15
1 (Brown): 22
2 (Dark purple): 40
3 (Dark purple): 37
4 (Medium purple): 32
5 (Medium purple): 34
6 (Light purple): 11
7 (Light purple):
8 (Light pink):
9 (Light pink):
10 (Bright pink):
11 (Bright pink):
12+ (Red):

These counts are based on email threads. The data quickly got unwieldy when going by the strict number of emails (not to mention Outlook makes this sort of counting difficult), so I chose to count threads instead. If I replied twice in one day to a thread about finals week, for example, that would only be counted once.
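For the curious, the color key boils down to a simple lookup. This little sketch is just an illustration of the mapping, not part of my actual process (the scarf lived in Excel, not Python); grey days off and the white in-between rows depend on the calendar rather than the count, so they’re left out here:

```python
def yarn_color(threads_sent: int) -> str:
    """Map one workday's count of sent email threads to a yarn
    color, following the scarf's color key."""
    if threads_sent <= 1:
        return "brown"
    elif threads_sent <= 3:
        return "dark purple"
    elif threads_sent <= 5:
        return "medium purple"
    elif threads_sent <= 7:
        return "light purple"
    elif threads_sent <= 9:
        return "light pink"
    elif threads_sent <= 11:
        return "bright pink"
    else:
        return "red"

print(yarn_color(4))   # medium purple
print(yarn_color(12))  # red
```

Each row of the scarf is one workday, so a year of data is just this function applied day by day.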

I’m in the process of creating the poster now, and I’m really excited to talk to more folks about it at UIUC on October 6th! I was too excited about finishing the data object itself to wait until the poster was totally done before posting. 🙂

ChatGPT Can’t Envision Anything: It’s Actually BS-ing

 Since my first post on ChatGPT way back at the end of January (which feels like lifetimes ago), I’ve been keeping up with all things AI-related. As much as I can, anyway. My Zotero folder on the subject feels like it doubles in size all the time. One aspect of AI Literacy that I am deeply concerned about is the anthropomorphizing of ChatGPT; I have seen this more generally across the internet, and now I am seeing it happen in library spaces. What I mean by this is calling ChatGPT a “colleague” or “mentor” or referring to its output as ChatGPT’s thoughts.   

I am seriously concerned by “fun” articles that anthropomorphize ChatGPT in this way. We’re all librarians with evaluation skills who can think critically about ChatGPT’s answers to our prompts. But our knowledge of large language models varies from person to person, and it feels quite irresponsible to publish something wherein ChatGPT is referred to as a “colleague” – even if ChatGPT is the one that “wrote” that.

Part of this is simply because we don’t have much language to describe what ChatGPT is doing, so we resort to phrases like “what ChatGPT thought.” A large language model does not think. It is putting words in order based on how they’ve been put in order in its training data. We can think of it like a giant autocomplete, or to be a bit crasser: a world-class bullshitter.

Because natural language is used both when engaging with ChatGPT and when it generates answers, we are more inclined to personify the software. During my own recent tests, a colleague pointed out that I said “Oh, sorry” when ChatGPT said it couldn’t do something I asked it to do. It is incredibly difficult not to treat ChatGPT like something that thinks or has feelings, even for someone like me who’s been immersed in the literature for a while now. Given that, we need to be vigilant about the danger of anthropomorphizing.

I also find myself concerned with articles that are mostly AI-generated, with maybe a paragraph or two from the human author. Granted, the author had to come up with specific prompts and ask ChatGPT to tweak its results, but I don’t think that’s enough. My own post back in January doesn’t even list the ChatGPT results in its body; I link out to them, and all 890 words are my own thoughts and musings (with some citations along the way). Why are we giving a large language model a direct platform? And one as popular as ChatGPT, at that? I’d love to say that I don’t think people are going to continue having ChatGPT write their articles for them, but it just happened with a lawyer who filed a legal brief full of fake sources (Weiser, 2023).

Cox and Tzoc wrote about the implications of ChatGPT for academic libraries back in March, and they do a fairly good job of driving home that ChatGPT is not a “someone”; it’s continuously referred to as a tool throughout. I don’t necessarily agree that ChatGPT is the best tool for some of the situations they describe, reference questions being one example. I tried this many times with my own ChatGPT account, using real reference questions we’ve gotten at the desk here at my university. Some answers are just fine, and to its credit, it will occasionally prompt the user for more information on their question, just like we do as reference librarians, and suggest that users evaluate their sources more deeply (and consult librarians). But there obviously isn’t any teaching going on, just ChatGPT spitting out answers. Students will come back to ChatGPT again and again because they aren’t being shown how to do anything, not to mention that ChatGPT can’t guide them through a database’s user interface.

I asked it for journals on substance abuse and social work specifically, and it actually linked out to them and suggested that the patron check with their institution or library. If my prompt asks for “information from a scholarly journal,” ChatGPT will say it doesn’t have access to that. If I ask for research though, it’s got no problem spawning a list of (mostly) fake citations. I find it interesting what it will or won’t generate based on the specific words in your prompt. Due to this, I’m really not worried about ChatGPT replacing librarians; ChatGPT can’t do reference.  

We need to talk and think about the challenges and limitations that come with using ChatGPT. Algorithmic bias is one of the biggest challenges. ChatGPT is trained on a vast amount of data from the internet, and we all know how much of a cesspool the internet can be. I was able to get ChatGPT to give me bias by asking it for career ideas as a female high school senior: Healthcare, Education, Business, Technology, Creative Arts, and Social Services. In the Healthcare category, physician was not a listed option; nurse was first. I then corrected the model and told it I was male. Its suggestions now included Engineering, Information Technology, Business, Healthcare, Law, and Creative Arts. What was first in the Healthcare category? Physician.  

ChatGPT’s bias would be much, much worse if not for the human trainers that made the software safer to use. An article from TIME magazine by Billy Perrigo goes into the details, but just like social media moderation, training these models can be downright traumatic.  

There’s even more we need to think about when it comes to large language models – the environmental impact (Li et al., 2023), financial cost, opportunity cost (Bender et al., 2021), OpenAI’s clear intention to use us and our interactions with ChatGPT as training data, and copyright concerns. Personally, I don’t feel it’s worth using ChatGPT in any capacity; but I know the students I work with are going to, and we need to be able to talk about it. I liken it to spellcheck: useful up to a point, but when it tells me my own last name is spelled wrong, I can ignore the suggestion and move on. I want to have conversations with students about the potential use cases, and about when it’s not the best idea to employ ChatGPT.

We as academic librarians are in a perfect position to teach AI Literacy and to help those around us navigate this new technology. We don’t need to be computer experts to do this – I certainly am not. But the first component of AI Literacy is knowing that large language models like ChatGPT cannot and do not think. “Fun” pieces that personify the technology only perpetuate the myth that it does.  

References 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Cox, C., & Tzoc, E. (2023). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99. https://doi.org/10.5860/crln.84.3.99

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (arXiv:2304.03271). arXiv. http://arxiv.org/abs/2304.03271 

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/ 

Weiser, B. (2023, May 27). Here’s What Happens When Your Lawyer Uses ChatGPT. New York Times (Online). https://www.proquest.com/nytimes/docview/2819646324/citation/BD819582BDA74BAAPQ/1 

Summer Adventures in Weeding

It’s summer, and there’s barely anyone on campus, let alone in the library. What does that mean?

WEEDING! (Which has been talked about on ACRLog, of course: example 1 and example 2).  

I won’t lie; I love the process of weeding (or deselection). I really enjoy getting to see the thought processes of librarians past: which books have made it this far, and what was purchased when. It’s also fun to spot when a patron clearly had a burst of energy researching a specific topic, because all or many of the books on it share the same return date.

This is the first time that I’ve been able to weed larger swathes of a collection – I have been focusing on my exercise science and recreation/leisure sections. At Salisbury, the department liaisons are responsible for their subjects. As a student worker in my previous libraries, I was never on the Access Services side; I could grab a book, even use GVSU’s Automated Storage and Retrieval System to get something for a patron, but being up in the stacks wasn’t my day-in, day-out work. I think I went into UIUC’s massive Main Stacks maybe 3-4 times total, partly due to the pandemic. (I am a bit saddened by this, as I would’ve loved to just wander.) Collection development and being in the stacks still aren’t part of my daily routine, but I really love it when they get to be.

My process has been to get a call number list from our wonderful Collection Management department, which includes each book’s information, its total number of loans, when it was added, and so on. I don’t have any hard and fast rules about what gets to stay and what doesn’t. I am guided by our overarching collection development policy as well as our subject ones, but some consistent considerations come up while looking through the stats.

Generally, if it hasn’t been checked out in the last twenty years, it’s on the chopping block; but if we only have, say, 4 books on softball, I’m likely going to keep them all (as a sidenote, we have 92 books on baseball but 4 on softball, so I know I’m going to do some purchasing come the new budgets!). If it’s historical, though, like the title Great college football coaches of the twenties and thirties, I’m most likely going to keep it. If it has something to do with Maryland or the Chesapeake Bay, it’s probably going to stay too. I am also checking to see if our other USMAI (University System of Maryland and Affiliated Institutions) colleges have the book as well. If we are the only one in our system to have something, I’ll do even more digging to see if, despite the low circulation stats, it’s something we should hold on to. I’m more stringent with exercise science and anything health-related, since it’s important that those stay up to date.
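For fun, here’s what the first-pass screen looks like if you squint at it as a filter: a twenty-year circulation cutoff, a keep-anyway rule for Maryland/Chesapeake material, and then the retention check. Every field name, title date, and record here is hypothetical – the real call-number list has its own columns, and judgment calls like historical value or “we only have 4 books on this sport” can’t be reduced to code:

```python
from datetime import date

# Hypothetical records; the real list from Collection Management
# includes loans, date added, and more. Dates here are invented.
books = [
    {"title": "1970s Aerobics Manual", "last_loan": date(1998, 4, 2),
     "regional": False, "east_retain": False},
    {"title": "Mall Walking Madness", "last_loan": date(2001, 6, 15),
     "regional": False, "east_retain": True},
    {"title": "Life on the Chesapeake", "last_loan": date(1995, 1, 9),
     "regional": True, "east_retain": False},
    {"title": "Exercise Physiology", "last_loan": date(2021, 10, 5),
     "regional": False, "east_retain": False},
]

def weed_candidates(records, today=date(2023, 7, 1), cutoff_years=20):
    """First pass: no loans in roughly twenty years, and no
    Maryland/Chesapeake tie that would keep it on the shelf."""
    return [b for b in records
            if (today.year - b["last_loan"].year) >= cutoff_years
            and not b["regional"]]

def after_east_check(candidates):
    """Second pass, mirroring the EAST step: drop any candidate
    the library has committed to retain."""
    return [b["title"] for b in candidates if not b["east_retain"]]

print(after_east_check(weed_candidates(books)))
# ['1970s Aerobics Manual']
```

The two-function split mirrors my actual order of operations: I only check EAST for books I’ve already decided to weed, and even then the final call waits until the book is in my hands.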

I am also checking the books against the Eastern Academic Scholars’ Trust (EAST). I only do this for books I already decided to weed. As a member, Salisbury University has to retain certain texts in our collection; there are many books that I deemed ready for deselection that have to stay. All of this is done before I ever go up to the stacks to actually take the books off the shelves! I’ve been staring at a computer screen and Excel sheets a lot this week.  

Once I’m actually up there in the stacks, I’m making decisions as I go along. There are some books I ultimately decided not to weed after paging through them. Catalog entries are sometimes very bare-bones, and especially for older books, it can be hard to find information online, so the final determination comes when I’m holding the book in my hands. Two books that were on the chopping block stayed because they were signed by the author. Finally, some books are marked with a red “RCL” stamp, which also signals to me that they must be retained. This retention differs from EAST, though I’d need to ask Collection Management for details. If I come across books that are in poor condition, I will also pull them to either be deselected or repaired.

Being in the stacks and gathering the books is undoubtedly my favorite part. That’s where I get to see all the weird covers, illustrations, sun damage… I affectionately caption it all “adventures in weeding.” (As a sidenote, I am definitely not the one who came up with that; many a librarian has posted their weird and wonderful books with that phrase.) Here are a few:

Can you believe that Mall Walking Madness is one of the books we have to retain? (I suppose it is a bit of an artifact at this point…) The second photo, with highlighting, points out the “Possibilities for PDAs” section in a book on physical education and technology. The next trio of photos are all from the same book on playground equipment, which is all diagrams with little written instruction (and some slightly disturbing illustrations for different sections). Finally, this short video is of a book with some intense sun damage.

What are some of your best “adventures in weeding”? 

(Academic) Year 1: Complete

Well, that’s about it; we’re on the tail end of finals at Salisbury University, commencement ceremonies start on Wednesday night, and that’s a wrap on my first academic year as a librarian. It’s gone incredibly fast, and as I’ve been working on my first annual evaluation packet, I realize how much I accomplished this year. I know that other institutions do this in January, but for us at SU Libraries, it’s from May to May. I thought it might help others in the evaluation process to see how mine is framed, as well as the experience of gathering it all. As I am many hours deep into playing Nintendo’s newest Zelda title, Tears of the Kingdom, I thought I might set up this post in video game terms. Please forgive the nerdiness to follow. 🙂 

Mainline Quests

I had two individual goals this year. They focus on the key aspects of my job as I settled into the position; they were threaded throughout my typical work week. My philosophy here is that I wanted to get to know my responsibilities before making (big) changes to how I conduct them. Of course, reasonable changes arose (especially with my student supervising duties) but overall, I was learning the controls. Completing the tutorial area, so to speak. These were the goals my chair and I came up with:  

  1. Get to know liaison area faculty and establish relationships.
  2. Get to know the Research Help Desk policies and student workers, with updates as needed.

I do feel that I’ve sufficiently accomplished both objectives. Tallying the number of instruction sessions, the instructors I’ve worked with, and the students reached really puts the work into perspective – I don’t see it when I’m in the thick of my instruction season, doing one-shots left and right. Two of my Public Health professors are working with me more extensively for their classes next year, so relationship building is definitely happening!

With the Research Help Desk, I made the schedule and supervised our undergraduate workers. Since I was once a student worker at my undergrad library, this was a nice full-circle moment, becoming a supervisor and mentor myself. One thing I implemented for everyone who staffs chat is a “Chat Transcript of the Month” email, where I highlight certain questions and examples of excellent patron service.

Side Adventures

Shorter than main quests but longer than side quests are the side adventures. These cover our department goals and my role in them, as well as some of my projects over the year. One was serving on a student survey committee, which had the concrete steps of writing the survey about the library, determining our rollout strategy, and coming back together to discuss the results. Another department goal was to create a learning object repository for all instruction librarians, which is now fully set up and starting to be populated with handouts and worksheets we can share amongst ourselves.

This goes beyond the library, too. I have been deeply involved in the Environmental Studies department this past year, as it’s one of my subject areas. I’m on a committee planning an “ENVR Major for a Day” program for high schoolers sometime next fall, to hopefully bring more students to the environmental programs at SU. This has been a good way to get to know the faculty in that area; many are affiliated with, but not necessarily housed in, the Environmental Studies department because the field is so interdisciplinary.

Side Quests

These are things that took maybe a week or less but still required my time. A perfect example of a side quest is my attendance at the ACRL 2023 conference – only three days, technically, but a large undertaking nonetheless. Additionally, some of the proposals I’ve sent in for conferences and one book chapter could be considered side quests – not part of my job description explicitly, but something I am expected to do given my status as faculty. Side quests are also the totally random stuff that comes up on a day-to-day basis, like making signs for the on-call librarians at the reference desk over winter break. Something like that isn’t listed in my annual evaluation, of course; that would be practically impossible unless I was taking detailed notes of my day-to-day. But as I run with this video game framing, it’s kind of the “other duties as assigned” part of many job descriptions.

My “dump document”

Around August 2022, I started throwing everything I was doing that could go in the annual evaluation in a document on my OneDrive. I went to a webinar? Thrown in there, it can go under my “professional development” section later. Helped with move-in day? Noted for service (and especially the 6am-10am part…). I also went back through my calendar to pull anything I may have forgotten. In hindsight, I’ll just start my next annual evaluation document now, filling things in as I go; it did create more work to organize that initial dump document into our eval template. I’ve also vowed to myself not to simply title something “Webinar” in my calendar.  

Final thoughts

This was the first time that I compiled an annual evaluation like this. In all my past jobs, it took the form of a check-in with my supervisor. Even though the process is a bit tedious, I found it rewarding to dig into the details of how much I actually accomplished as a first-year academic librarian. There’s a lot to celebrate there, and I invite you all to celebrate your own accomplishments from the past academic year – sound off in the comments if you have any you’d like to share, whether it’s a main quest, side adventure, or side quest.