Turn it off and on again: digital literacy in college students

What happened to digital literacy and competency? 

I’ll start this post with some examples of declining digital and computer literacy that my colleagues and I have noticed among students just in the past academic year.  

  • Trying to turn on a lab computer via the monitor, not the tower 
  • Manually adding spaces for a double-spaced paper 
  • Hitting the spacebar to create indents 
  • Being unable to find a downloaded PDF 
  • Saving everything to the desktop instead of using file directories 
  • Being unable to use a browser (only uses phone applications) 
  • Not understanding how to navigate Microsoft OneDrive vs. computer file directories (or: why doesn’t my paper show up on the computer?) 

I’m sure a lot of these, along with many other examples, sound very familiar to academic librarians. Although the IT Help Desk is just a few feet down from the Library Service Desk at my library, we become tech support in so many ways. The technical understanding of computers, programs, and how they work just isn’t there in many young adults, which might be surprising to some. Surely, the kids who have grown up with technology are good at it, right? They’re “digital natives”? Many a librarian, academic or otherwise, could tell you that that’s not the case.  

The 2018 International Computer and Information Literacy Study showed that only 2 percent of students scored at the highest level of computer and information literacy (Fraillon et al., 2020). Yet Global Web Index’s report on Generation Z says that “[they] are clocking up nearly 7 hours a day online” (2019). Those of us who work in universities, whether as faculty, staff, or otherwise, need to remember that students using technology for interaction and leisure doesn’t necessarily translate to familiarity with tools for academic or professional work. As an example: if I’m on TikTok all day, why would I know how to use Microsoft Word to format my paper in APA style? If I’m posting stories to Instagram and direct messaging people, why would I know the difference between cloud storage like Google Drive and the hardware storage of a laptop? 

It’s easier for me to think about this in terms of my own experiences. I had a computer basics class in high school where I learned about the different mechanical parts of a computer, what the abbreviations KB, MB, and GB mean, among other things that I ultimately use every day in my professional and personal life. Someone who came even 2 or 3 years after me at the same high school didn’t have the same thing. Chromebooks were just gaining traction during my senior year, and they were fully implemented a few years after I left. I firmly believe that the rise of this sort of limiting product has constrained the digital literacy and competency of today’s students, but perhaps exploring that relationship can be saved for an entirely different blog post.  

I think the ultimate problem with digital literacy is not necessarily the lack of technical knowledge, but the lack of curiosity. Oftentimes when students come to the desk for help with formatting a paper, they haven’t attempted to figure it out themselves. One way to address the lack of curiosity and digital literacy is something many librarians are already doing: modeling inquiry. We perform reference interviews to get more information about the question or issue at hand, and oftentimes we are figuring out technology issues along with the patron. I always tell students exactly what I do – no, I don’t remember this off the top of my head; I Google things about programs constantly. Even in our instruction sessions, we model curiosity and exploration; I purposely try not to have canned database searches, because I know how messy research is. Students might not yet. If they see that a librarian can get a “no results found” search or results that aren’t quite relevant, they might feel better about continuing to try in their own research process. They can also learn how to search the web for their problems – how many times have you Googled something, gotten completely irrelevant results, and had to change or add keywords? That first attempt is where students tend to stop, if they try to figure it out at all. It’s okay if they can’t find the answer and come ask us anyway – I just want to empower them to try.  

Although they’re of a generation quite familiar with technology, everyone’s experience varies. This is why I don’t really like the term digital native (Prensky, 2001). I prefer the term digital learner – none of us are born knowing natively how to use these tools, but they and we are born learning them (Gallardo-Echenique et al., 2015). Since every student comes to us with different backgrounds, experiences, and access, we should focus our efforts on modeling and teaching with inquiry and curiosity. As fast as technology changes, a solid foundation of curiosity will benefit students for the rest of their lives.  

References 

Fraillon, J., Ainley, J., Schulz, W., Friedman, T., & Duckworth, D. (2020). Preparing for Life in a Digital World: IEA International Computer and Information Literacy Study 2018 International Report. Springer International Publishing. https://doi.org/10.1007/978-3-030-38781-5 

Gallardo-Echenique, E. E., Marqués-Molías, L., Bullen, M., & Strijbos, J.-W. (2015). Let’s talk about digital learners in the digital era. The International Review of Research in Open and Distributed Learning, 16(3). https://doi.org/10.19173/irrodl.v16i3.2196 

Prensky, M. (2001). Digital Natives, Digital Immigrants Part 1. On the Horizon, 9(5), 1–6. https://doi.org/10.1108/10748120110424816 

The Adventures of a Zillennial Librarian

In the past couple of weeks, I’ve attended a few library webinars focused on Generation Z out of my own curiosity. For full transparency, I am a fairly young librarian; I took one gap year between undergrad and library school. I’m in that liminal space of not quite millennial, not quite Generation Z (I’ve seen it referred to as “zillennial”). My birth year has fallen within both generational cutoffs, depending on who you ask, and I often relate to the experiences and outlooks of both generations. I still get mistaken for a student, and I am indeed on TikTok like a lot of the typical college-age students I teach.  

I felt particularly “Gen Z” in a research consultation I just had with one of my Environmental Studies students. She needed some legislation from the 80s and 90s, and my state’s government website only has the most recent version. My library is a government repository; we have a specific government documents section of our stacks. Was that the first place I went? Nope. I scoured many a website, and eventually did find the 1989 version we were looking for in the appendix of a 1993 thesis from the University of Montana. Thank goodness for OCR, searchable full text, and institutional repositories!

We did, however, have it in our Maryland Register up in the stacks. This allowed us to find the date it was proposed and the date it was passed, for both versions of the law (and cite it properly!). This consultation got me thinking, though, about my instincts as a librarian, and how my world experiences and generation relate to the way I go about finding information, even after being trained in it for my master’s degree. Looking in the physical collection is only a thought after I exhaust all of my online searching techniques. I, and I’d wager many of my students, prefer the ease of finding and reading something online. Although I had dial-up internet for perhaps longer than most folks (I had a version of it until about 2013 or so? Living-in-the-middle-of-nowhere problems), the internet in general was a big part of growing up and learning how to research. Yes, I love a physical book as much as the next person – but I’m talking more about answering my own questions or doing research. In a recent webinar on Gen Z by ASERL, it was said that “[Gen Z is] so used to finding what they need on their own.” I heavily relate to this. My first impulse is to pull out my phone and perform a Google search; I’m sure this is the case for many now, regardless of generation.  

Another difference I’ve noticed in being a young librarian is that I actively encourage the use of Google Scholar (and actually use it myself). I have attended library sessions where it is discouraged and interacted with faculty who do not want students using it. I personally find that it is a good stepping stone from performing regular Google searches to getting into an academic database that might look completely foreign to students. They can still use natural language in Google Scholar and get some relevant results, but they get better ones when we as information professionals introduce them to Booleans and other strategies. It’s also been really useful when a student has too broad a topic – searching in Google Scholar lets them see all sorts of disciplinary conversations about a topic, and how other academics have narrowed things down. They can choose which pathway they’d like to explore further, and once they have a good research question and keywords to try, we can get into the library databases, all the while talking about the differences between Google Scholar and Academic Search Ultimate. The “Cited by” function has also been invaluable in teaching students about the academic conversation as a concept.  

Another aspect of Gen Z from the ASERL webinar I attended is that despite being constantly online, we generally prefer face-to-face communication. In my personal experience, this preference was heightened by the pandemic, when face-to-face wasn’t even an option. I will take any and all other forms of communication over a phone call, though; I’m not sure that’s necessarily attributable to being Gen Z so much as it’s an “Emily” thing. The reason is that I can’t read the other person’s body language or facial expressions. You might ask: Emily, can’t you also not do that over chat, text, and email? True, but the difference is that there isn’t an expectation to immediately respond – I can take a moment to really absorb the other person’s words and consider my response.  

As an example of face-to-face communication in my workplace and work life, I would much rather go down to my colleague’s office and ask them my question than email them. This is partly due to our collective open-door policy, but for some reason, emailing feels overly formal to me in a lot of cases. If that isn’t an option, I might send the message over Slack. Of course, if it’s important to have some sort of paper trail, I’ll gladly email – it is very helpful to have a record of what a professor and I talked about when I’m preparing a lesson plan, for instance. If my email data scarf had tracked all kinds of work communication, I’d be interested in how the percentages would break down! Perhaps that should be my next data project.

These are just a few things I’ve been thinking about as a strange middle-ground zillennial librarian lately, especially since that research consultation. I am endlessly fascinated by generational research as a whole, so if you’ve got any thoughts, please comment them down below.  

Physical data visualization 2: The email data scarf returns

Emily’s email data scarf draped over the back of two chairs.

In December last year, I made a post about tracking how many emails I sent every day from July 27, 2022 to July 27, 2023. This encompasses my entire first year as a professional librarian, and I’m really happy to say that the scarf I was making to embody this data is now finished! Well, I need to weave in the ends (crafters know the dread), but the actual crocheting is finished. 

Looking back on my initial post about the project, it’s funny that I mentioned it being a weekly routine to do my five rows… that responsible way of doing things did not stick. I crocheted from about March to July all in the last two weeks or so. I did keep up with entering my data into an Excel sheet during that time, but between some traveling, moving, and life in general, I had to play catch-up with actually crocheting the scarf.  

I was also motivated to finish this project because I’m presenting it at the International Visual Literacy Association’s conference in just a few short weeks! The scarf and its color key will be part of a poster presentation. It’s also a chance to really dig into why I did this – and what changes, if any, it led to in my email behavior more generally.  

At first glance at the scarf, the beginning of my year at Salisbury had a lot more emails. I used mail merge twice over the first few months (indicated by bobble stitches as opposed to single crochet), and there are two additional rows of red indicating that I sent over 12 emails that day. There are also more pink tones in the first semester, which indicate 6+ emails being sent on any given day. After winter break, though, the colors clearly shift toward the purples, which stand for 5 emails or fewer on a given day.  

There are two long stretches of grey, which indicate when I was off: during winter break at Christmastime and at the end of May into early June. I sent at least one email while I was off both times, which you can see by the row of white in between the grey. White isn’t a common occurrence throughout the scarf – based on my own feelings, I’d say my work-life balance is generally quite good, and this visualized data backs that up! I do have to send an email or message occasionally while out of the office, mainly because I supervise student workers who are here in the evenings and on weekends.  

On workdays, the average number of emails I sent per day was 3.8. My counts were as follows, listing the number of emails sent in a given workday, the color representing it, and the number of days that count occurred.  

0 (Brown): 15 days 
1 (Brown): 22 days 
2 (Dark purple): 40 days 
3 (Dark purple): 37 days 
4 (Medium purple): 32 days 
5 (Medium purple): 34 days 
6 (Light purple): 11 days 
7 (Light purple) 
8 (Light pink) 
9 (Light pink) 
10 (Bright pink) 
11 (Bright pink) 
12+ (Red) 

This is based on email threads. The data quickly got unwieldy when going by the strict number of emails (not to mention Outlook makes this sort of counting difficult), so I chose to count threads instead. If I replied twice in one day to a thread about finals week, for example, that would only be counted once.  
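The counting rule above – one scarf row per day, with replies in the same thread deduplicated within a day, then mapped to a yarn color – can be sketched in a few lines of Python. The sample records and thread names here are hypothetical; the color thresholds follow the key in the table.

```python
from collections import defaultdict
from datetime import date

# Hypothetical sample data: (day, thread_id) for each sent email.
sent = [
    (date(2022, 9, 1), "finals-week"),
    (date(2022, 9, 1), "finals-week"),   # second reply, same thread: counted once
    (date(2022, 9, 1), "room-booking"),
    (date(2022, 9, 2), "room-booking"),
]

# One scarf row per day; replies within a thread count once per day.
threads_per_day = defaultdict(set)
for day, thread in sent:
    threads_per_day[day].add(thread)

def row_color(n):
    """Map a day's thread count to a yarn color, per the scarf's key."""
    if n <= 1:  return "brown"
    if n <= 3:  return "dark purple"
    if n <= 5:  return "medium purple"
    if n <= 7:  return "light purple"
    if n <= 9:  return "light pink"
    if n <= 11: return "bright pink"
    return "red"

for day in sorted(threads_per_day):
    count = len(threads_per_day[day])
    print(day, count, row_color(count))
```

Using a set per day makes the thread deduplication automatic – adding the same thread ID twice has no effect on the count.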

I’m in the process of creating the poster now, and I’m really excited to talk to more folks about it at UIUC on October 6th! I was too excited about finishing the data object itself to wait until the poster was totally done before posting. 🙂

ChatGPT Can’t Envision Anything: It’s Actually BS-ing

 Since my first post on ChatGPT way back at the end of January (which feels like lifetimes ago), I’ve been keeping up with all things AI-related. As much as I can, anyway. My Zotero folder on the subject feels like it doubles in size all the time. One aspect of AI Literacy that I am deeply concerned about is the anthropomorphizing of ChatGPT; I have seen this more generally across the internet, and now I am seeing it happen in library spaces. What I mean by this is calling ChatGPT a “colleague” or “mentor” or referring to its output as ChatGPT’s thoughts.   

I am seriously concerned by “fun” articles that anthropomorphize ChatGPT in this way. We’re all librarians with evaluation skills who can think critically about ChatGPT’s answers to our prompts. But our knowledge of large language models varies from person to person, and it feels quite irresponsible to publish something wherein ChatGPT is referred to as a “colleague” – even if ChatGPT is the one that “wrote” it.  

Part of this is simply because we don’t have much language to describe what ChatGPT is doing, so we resort to things like “what ChatGPT thought.” A large language model does not think. It is putting words in order based on how they’ve been put in order in its past training data. We can think of it like a giant autocomplete, or to be a bit crasser: a world-class bullshitter.  

Because natural language is used both when engaging with ChatGPT and when it generates answers, we are more inclined to personify the software. In my own tests lately, my colleague pointed out that I said “Oh, sorry,” when ChatGPT said it couldn’t do something I asked it to do. It is incredibly difficult to not treat ChatGPT like something that thinks or has feelings, even for someone like me who’s been immersed in the literature for a while now.  Given that, we need to be vigilant about the danger of anthropomorphizing.  

I also find myself concerned with articles that are mostly AI-generated, with maybe a paragraph or two from the human author. Granted, the author had to come up with specific prompts and ask ChatGPT to tweak its results, but I don’t think that’s enough. My own post back in January doesn’t even list the ChatGPT results in its body; I link out to it, and all 890 words are my own thoughts and musings (with some citations along the way). Why are we giving a large language model a direct platform? And one as popular as ChatGPT, at that? I’d love to say that I don’t think people are going to continue having ChatGPT write their articles for them, but it just happened with a lawyer writing an argument with fake sources (Weiser, 2023).  

Cox and Tzoc wrote about the implications of ChatGPT for academic libraries back in March, and they do a fairly good job of driving home that ChatGPT is not a “someone”; it is continuously referred to as a tool throughout. I don’t necessarily agree that ChatGPT is the best tool to use in some of the situations they describe; reference questions are one example. I tried this many times with my own ChatGPT account, using real reference questions we’ve gotten at the desk here at my university. Some answers are just fine. It will occasionally prompt the user for more information on their question, just as we do as reference librarians, and it suggests that users evaluate their sources more deeply (and consult librarians). But there obviously isn’t any teaching going on, just ChatGPT spitting out answers. Students will come back to ChatGPT again and again because they aren’t being shown how to do anything, not to mention that ChatGPT can’t guide them through a database’s user interface.  

I asked it for journals on substance abuse and social work specifically, and it actually linked out to them and suggested that the patron check with their institution or library. If my prompt asks for “information from a scholarly journal,” ChatGPT will say it doesn’t have access to that. If I ask for research though, it’s got no problem spawning a list of (mostly) fake citations. I find it interesting what it will or won’t generate based on the specific words in your prompt. Due to this, I’m really not worried about ChatGPT replacing librarians; ChatGPT can’t do reference.  

We need to talk and think about the challenges and limitations that come with using ChatGPT. Algorithmic bias is one of the biggest challenges. ChatGPT is trained on a vast amount of data from the internet, and we all know how much of a cesspool the internet can be. I was able to get ChatGPT to produce biased output by asking it for career ideas as a female high school senior: Healthcare, Education, Business, Technology, Creative Arts, and Social Services. In the Healthcare category, physician was not a listed option; nurse was first. I then corrected the model and told it I was male. Its suggestions now included Engineering, Information Technology, Business, Healthcare, Law, and Creative Arts. What was first in the Healthcare category? Physician.  

ChatGPT’s bias would be much, much worse if not for the human trainers that made the software safer to use. An article from TIME magazine by Billy Perrigo goes into the details, but just like social media moderation, training these models can be downright traumatic.  

There’s even more we need to think about when it comes to large language models – the environmental impact (Li et al., 2023), financial cost, opportunity cost (Bender et al., 2021), OpenAI’s clear intention to use us and our interactions with ChatGPT as training data, and copyright concerns. Personally, I don’t feel it’s worth using ChatGPT in any capacity; but I know the students I work with are going to, and we need to be able to talk about it. I liken it to spellcheck: useful to a certain point, but when it tells me my own last name is spelled wrong, I can move on and ignore the suggestion. I want to have conversations with students about the potential use cases, and about when it’s not the best idea to employ ChatGPT. 

We as academic librarians are in a perfect position to teach AI Literacy and to help those around us navigate this new technology. We don’t need to be computer experts to do this – I certainly am not. But the first component of AI Literacy is knowing that large language models like ChatGPT cannot and do not think. “Fun” pieces that personify the technology only perpetuate the myth that it does.  

References 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922 

Cox, C., & Tzoc, E. (2023). ChatGPT: Implications for academic libraries. College & Research Libraries News, 84(3), 99. https://doi.org/10.5860/crln.84.3.99

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models (arXiv:2304.03271). arXiv. http://arxiv.org/abs/2304.03271 

Perrigo, B. (2023, January 18). Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/ 

Weiser, B. (2023, May 27). Here’s What Happens When Your Lawyer Uses ChatGPT. New York Times (Online). https://www.proquest.com/nytimes/docview/2819646324/citation/BD819582BDA74BAAPQ/1 

Summer Adventures in Weeding

It’s summer, there’s barely anyone here on campus, let alone the library. What does that mean? 

WEEDING! (Which has been talked about on ACRLog, of course: example 1 and example 2).  

I won’t lie; I love the process of weeding (or deselection). I really enjoy getting to see the thought processes of librarians past – which books have made it this far, and what was purchased when. Additionally, it’s fun to see when a patron clearly had a burst of energy researching their specific topic, because all or many of the books on that topic have the same return date.  

This is the first time that I’ve been able to do so for larger swathes of a collection – I have been focusing on my exercise science and recreation/leisure sections. At Salisbury, the department liaisons are responsible for their subjects. As a student worker in my previous libraries, I was never on the Access Services side; I could grab a book, even use GVSU’s Automated Storage and Retrieval System to get something for a patron, but being up in the stacks wasn’t my day-in, day-out work. I think I went into UIUC’s massive Main Stacks maybe 3-4 times total, partly due to the pandemic. (I am a bit saddened by this, as I would’ve loved to just wander.) Collection development and being in the stacks still isn’t a part of my daily routine, but I really love it when it gets to be.  

My process has been to get a call number list from our wonderful Collection Management department, which includes each book’s information, total number of loans, when it was added, etc. I don’t have any hard and fast rules about what gets to stay and what doesn’t. I am guided by our overarching collection development policy as well as our subject ones, but there are some consistent thoughts while looking through the stats.  

Generally, if it hasn’t been checked out in the last twenty years, it’s on the chopping block; but if we only have, say, 4 books on softball, I’m likely going to keep them all (as a sidenote, we have 92 books on baseball but 4 on softball, so I know I’m going to do some purchasing come the new budgets!). If it’s historical, though, like the title Great college football coaches of the twenties and thirties, I’ll most likely keep it. If it has something to do with Maryland or the Chesapeake Bay, it’s probably going to stay too. I am also checking whether our other USMAI (University System of Maryland) colleges have the book. If we are the only library in our system to have something, I’ll do even more digging to see if, despite the low circulation stats, it’s something we should hold on to. I’m more stringent with exercise science and anything health-related, since it’s important that those stay up to date.  

I am also checking the books against the Eastern Academic Scholars’ Trust (EAST). I only do this for books I already decided to weed. As a member, Salisbury University has to retain certain texts in our collection; there are many books that I deemed ready for deselection that have to stay. All of this is done before I ever go up to the stacks to actually take the books off the shelves! I’ve been staring at a computer screen and Excel sheets a lot this week.  
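For fun, the screening rules above can be expressed as a small function. This is only a sketch of my first-pass logic – the field names and helpers are hypothetical, the real list comes from Collection Management, and the final call is always made by a person with the book in hand.

```python
from datetime import date

def weeding_candidate(book, topic_count, held_elsewhere_in_usmai, east_retention):
    """Return True if a book is a *candidate* for deselection.

    book: dict with hypothetical keys "last_loan" (date or None),
    "historical" (bool), and "regional" (bool, e.g. Maryland/Chesapeake Bay).
    """
    if east_retention:
        return False  # EAST-retained titles must stay in the collection
    if book["last_loan"] and book["last_loan"].year >= date.today().year - 20:
        return False  # circulated within the last twenty years
    if topic_count <= 4:
        return False  # thin subject coverage (e.g., softball): keep them all
    if book["historical"] or book["regional"]:
        return False  # historical titles and Maryland/Chesapeake Bay material stay
    if not held_elsewhere_in_usmai:
        return False  # unique in the system: do more digging before deciding
    return True
```

Note that every rule here is a reason to keep; a book only becomes a candidate by falling through all of them, which matches how I actually lean while scanning the spreadsheet.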

Once I’m actually up there in the stacks, I’m making decisions as I go along. There are some books I ultimately decided not to weed after paging through them. Catalog entries are sometimes very bare-bones, and especially for an older book, it can be hard to find information online, so the final determination comes when I’m holding the book in my hands. Two books that were on the chopping block stayed because they were signed by the author. Finally, some books are marked with a red “RCL” stamp, which also signals to me that they must be retained; this retention differs from EAST, though I’d need to ask Collection Management for details. If I come across books in poor condition, I’ll also pull them to be either deselected or repaired.  

Being in the stacks and gathering the books is undoubtedly my favorite part. That’s where I get to see all the weird covers, illustrations, sun damage… I affectionately caption it all “adventures in weeding.” (As a sidenote, I am definitely not the one who came up with that; many a librarian has posted their weird and wonderful books with that phrase.) Here’s a few: 

Can you believe that Mall Walking Madness is one of the books we have to retain? (I suppose it is a bit of an artifact at this point…) The second photo, with the highlighted passage, points out the “Possibilities for PDAs” in a book on physical education and technology. The last trio of photos are all from the same book on playground equipment, which is all diagrams with little written instruction (and some slightly disturbing illustrations for different sections). Finally, this short video is of a book with some intense sun damage.  

What are some of your best “adventures in weeding”?