Category Archives: Information Literacy

Ghosts in the Library – A Collaborative Approach to Game-Based Pedagogy

ACRLog welcomes a guest post from Mandy Babirad, Instructional Services Librarian at SUNY Morrisville State College, Heather Shimon, Science and Engineering Librarian at the University of Wisconsin Madison, and Lydia Willoughby, Reference Librarian, Research and Education, at SUNY New Paltz.

Mandy Babirad (now at SUNY Morrisville), Heather Shimon (now at UW-Madison), and Lydia Willoughby (SUNY New Paltz) created an instructional game called Ghosts in the Library (Ghosts) to use in English Composition I library sessions (Comp I) at SUNY New Paltz in Fall 2015.

The game aligns with established Comp I learning outcomes and includes self-directed learning, problem solving, collaborative learning, and peer review. In the game, students work in groups and use the library catalog and databases to research a notable person with ties to New York State (a “ghost” who is haunting the library), and then create a digital artifact based on that research to appease the ghost. The “ghosts” are people of color and women who have made significant contributions to New York State, yet are underrepresented in the historical record. With the library named for Sojourner Truth, and with student protests against a predominantly white curriculum occurring in Fall 2015, the game was also an attempt to include marginalized voices within the library collection and course syllabi.

The primary goal for Ghosts was to frame a 75-minute one-shot library instruction session with a pedagogy of possibility. Roger Simon[1], in work that drew from deep collaboration with Henry Giroux, thought about student-centered learning as a choice of hope, and teaching as an act of hope. “Hope is the acknowledgement of more openness in a situation than the situation easily reveals… the hopeful person acts.” (3) Being open to possibilities is the only mindful and clear choice for teaching librarians facing technology distraction and student disinterest in a required library session. Bringing in curiosity as play engages inquiry as an affective process, one that asks both student and teacher to act and to reveal a more whole self in the classroom.

Ghosts Game Play

The game has one central goal: to appease your team’s ghost so that the ghost will leave the library and our campus alone. Each team receives a ghost card, team members choose role cards, and the team then uses the tool cards to hunt down information that will help appease its ghost. The final, culminating component of the game is the team’s creation of a historical marker.

All game materials can be downloaded from the Ghosts research guide: newpaltz.libguides.com/ghosts/scholarship.

Players in the Ghosts game receive a packet that contains the following game materials:

  • Map of the Sojourner Truth Library (with a key matching call numbers to floors)
  • Worksheet to be completed during class time (the worksheet contains the rubric that teams use to evaluate their work and how well their historical marker appeases their ghost).
  • Game Rules (like all rules, these are probably more useful to the librarian and teachers than to students; they were a key element of our game design and creation process, but they are likely the least-used part of the game during actual play).
  • A Packet of Cards (each packet contains 1 ghost card, 3 role cards, and 3 tool cards).
    • Ghost cards are randomly given to each group; the ghosts are all women and people of color from New York State history who have a tie to the Hudson Valley region.
    • The 3 role cards are a historian, a presenter, and a facilitator. If the class requires groups of more than 3 people, you can double up on historian role cards. All role cards contribute to information gathering and drafting the text of the historical marker.
      • The historian takes notes on the worksheet and enters the team’s text on the historical marker that the team is working together to create.
      • The presenter is the person that presents the team’s historical marker to the class.
      • The facilitator keeps the team on track and ensures that all tool cards have been used in information gathering and that the team’s work fulfills all the requirements of the rubric.
    • The 3 tool cards correspond to the library research tools students use on the library website to conduct research.
      • A tool card for databases guides students to Academic Search Complete to find scholarly articles.
      • A tool card for the library catalog helps them discover books.
      • A tool card for reference resources helps students find background and biographical information on their ghost using Gale Virtual Reference Library.

The final part of the worksheet is a space where teams can draft the text of their historical marker, a synthesis of their respective roles and tools in the research process. Once a team has completed the worksheet, its members go to our custom historical marker website, https://apps.library.newpaltz.edu/plaque/index/plaque, to enter their text. When they hit “Create” to publish their historical marker, their original text appears on a digital artifact that looks like a ‘real’ NY State Education Department historical marker from 1940. The artifact creation component of the game is designed to encourage student learning with a pedagogy that helps students connect to something ‘real’ and physical in the research process. Students present their historical markers, all game players receive a ghost button and an FAQ zine about the library, and a summary discussion of what kinds of information students gleaned from which kinds of library resources concludes the session.
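For readers curious about what a text-to-marker generator might look like under the hood, here is a minimal, hypothetical Python sketch of the idea: take a team’s drafted text and render it on a marker-styled page. This is only an illustration; it is not the actual application, which was built by Andrew Vehlies and is linked from the research guide. The template styling and function names here are invented for the example.

```python
# Hypothetical sketch of a text-to-marker generator (not the real app).
# Idea: take the team's drafted text and render it on a marker-styled HTML page.

MARKER_TEMPLATE = """<!DOCTYPE html>
<html>
<head><title>{title}</title>
<style>
  .marker {{
    width: 420px; margin: 2em auto; padding: 2em;
    background: #1b3a6b; color: #f2c230;        /* blue field, gold text */
    border: 10px solid #f2c230; border-radius: 12px;
    font-family: Georgia, serif; text-align: center;
  }}
  .marker h1 {{ letter-spacing: 0.1em; }}
</style></head>
<body>
  <div class="marker">
    <h1>{title}</h1>
    <p>{body}</p>
    <p><small>STATE EDUCATION DEPARTMENT 1940</small></p>
  </div>
</body>
</html>
"""

def create_marker(title: str, body: str, outfile: str = "marker.html") -> str:
    """Write the team's text into the marker template and save it as an HTML file."""
    html = MARKER_TEMPLATE.format(title=title, body=body)
    with open(outfile, "w", encoding="utf-8") as f:
        f.write(html)
    return outfile

if __name__ == "__main__":
    # Example input a team might draft for its ghost.
    create_marker(
        "Sojourner Truth",
        "Abolitionist and women's rights activist, born into slavery "
        "in Ulster County, New York, circa 1797.",
    )
```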

The game was tested with library staff, librarians, and student staff before being used in the classroom for the Science and Technology Entry Program and for one Comp I section in Spring 2016. Ghosts launched as a pilot in Fall 2016 and has continued as a pilot for Comp I sessions through Fall 2017. Ghosts has been taught in roughly 42% of Comp I sessions since its launch. The assessment and feedback we have are based on the worksheets completed by students and a survey given to faculty and students.

Student, Teacher Feedback on Ghosts

We found student input on the worksheet question, “Why did you choose this?,” to be the most valuable for assessing student learning. Even though only 52% of students reported that they would definitely use the resources from Ghosts again (and 42% reported ‘kind of’), their worksheets suggested otherwise. The worksheets demonstrated skill in describing the research process in detail, showing an ability to evaluate information sources and needs. Even so, only 35% of students reported that the game had value for their course assignments, and 48% said that the game was ‘kind of’ valuable to their course work. There is a disconnect between students’ ability to reflect on their own research and their view of the usefulness of those skills: as with all library instruction, the value of learning systemic thinking struggles to be visible and relevant to course assignments when it is delivered in required sessions. The students were describing their research process, but not equating that task with the value of learning how to research. Interestingly, 71% of students definitely felt included while playing the game, and 31% ‘kind of’ felt included.

In the future, more evaluative questions will be posed in both the worksheet completed during the game and the post-assessment. The final product, the historical marker, won’t have a word count: editing the marker down to 50 words took up a lot of time and stressed some students out, which in turn may have influenced their evaluation of the game. The game could also be tightened up and the worksheets transferred to online forms, so that the game becomes more of an online tutorial; that would facilitate flipped learning and free class time for a more discussion-based session informed by the work done outside of class. One idea for a follow-up assignment is to have students write a letter or postcard to their ghost describing how they researched its history and what they found, encouraging them to again practice describing their process. It is hard to get students to reflect on process; it is not practiced, and it is rarely evaluated or asked for in their graded assignments.

The code for the historical marker would not have been possible without the work of software developer Andrew Vehlies. He created the marker from scratch in consultation with the librarians and posted the code on GitHub (available here) so that other history enthusiasts can benefit from his work. Once the code was developed and posted publicly, library technician (and human grumpy cat) Gary Oliver was able to deploy it to local servers so that it could be used by students. Many thanks to instruction coordinator Anne Deutsch at SUNY New Paltz for letting us pilot Ghosts in the first place and for supporting the game’s development in the library instruction program.

[1] Simon, R.I. (1992). Teaching against the grain: Texts for pedagogy of possibility. Greenwood Publishing Group.

Narrative as Evidence

This past week I attended the MLGSCA & NCNMLG Joint Meeting in Scottsdale, AZ. What do all these letters mean, you ask? They stand for the Medical Library Group of Southern California and Arizona and the Northern California and Nevada Medical Library Group. So basically it was a western regional meeting of medical librarians. I attended sessions on survey design, information literacy assessment, National Library of Medicine updates, using Python to navigate e-mail reference, and systematic reviews, and saw so many engaging posters! Of course, it was also an excellent opportunity to network with others and learn what different institutions are doing.

The survey design course was especially informative. As we know, surveys are a critical tool for librarians. I learned how certain question types (ranking, for example) can be misleading, how to avoid asking double-barreled questions, and how not to ask a leading question (e.g., “Do you really, really love the library?!”). Of course, these survey design practices reduce bias and help produce the most accurate results possible. The instructor, Deborah Charbonneau, reiterated that you can only do the best you can with surveys. And while this seems obvious, I feel that librarians can be a little perfectionistic. But let’s be real: it’s hard to know exactly what everyone thinks and wants through a survey. So yes, you can only do the best you can.

The posters and presentations about systematic reviews covered evidence-based medicine. As I discussed in my previous post, the evidence-based pyramid prioritizes research that reduces bias. Sackett, Rosenberg, Gray, Haynes, and Richardson (1996) helped to conceptualize the three-legged stool of evidence-based practice: evidence-based clinical decisions should integrate (1) the best research evidence, (2) clinical expertise, and (3) patient values and preferences. As medical librarians, we generally focus on delivering strategies for finding the best research evidence. Simple enough, right? Overall, the conference was informative, social, and not overwhelming – three things I enjoy.

On my flight home, my center shifted from medical librarianship to Joan Didion’s Slouching Towards Bethlehem. The only essay I had previously read in this collection was “On Keeping a Notebook,” which I had been assigned for a memoir writing class I took a few years ago. (I promise this is going somewhere.) In the essay, Didion discusses how she has kept a form of a notebook, not a diary, since she was a child. Within these notebooks were random notes about people or things she saw or heard, sometimes with a time or location. These tidbits couldn’t possibly mean anything to anyone but her. And that was the point. The pieces of information she jotted down over the years reminded her of who she was at that time. How she felt.

I took this memoir class in 2015 at Story Studio Chicago, a lofty spot in the Ravenswood neighborhood of Chicago. It was trendy and up and coming. At the time, I had just gotten divorced, my dad had died two years prior, and I discovered my passion for writing at the age of 33. So, I was certainly feeling quite up and coming (and hopefully I was also trendy). Her essay was powerful and resonated with me (as it has for so many others). After I started library school, I slowed down with my personal writing and focused on working and getting my degree, allowing me to land a fantastic job at UCLA! Now that I’m mostly settled in to all the newness, I have renewed my commitment to writing and reading memoir/creative non-fiction. I feel up and coming once again after all these new changes in my life.

As my plane ascended, I opened the book and saw that I had left off right at this essay. I found myself quietly verbalizing “Wow” and “Yeah” multiple times during the flight. I was grateful that the hum of the plane drowned out my voice, but I also didn’t care if anyone heard me. Because if they did, I would tell them why. I would say that the memories we have are really defined by who we were at that time. I would add that memory recall is actually not that reliable. Ultimately, our personal narrative is based upon the scatterplot of our lives: our actual past, present, and future; our imagined past, present, and future; our fantasized past, present, and future. As Didion (2000) states:

I think we are well advised to keep on nodding terms with the people we used to be, whether we find them attractive company or not. Otherwise they turn up unannounced and surprise us, come hammering on the mind’s door at 4 a.m. of a bad night and demand to know who deserted them, who betrayed them, who is going to make amends. We forget all too soon the things we thought we could never forget. We forget the loves and the betrayals alike, forget what we whispered and what we screamed, forget who we were. (p. 124)

What does this have to do with evidence-based medicine? Well, leaving a medical library conference and floating into this essay felt like polar opposites. But were they? While re-reading this essay, I found myself considering how reducing bias (or increasing perspectives) in research evidence and personal narrative can be connected. They may not seem so, but they are really part of a larger scholarly conversation. While medical librarians focus upon the research aspect of this three-legged stool, we cannot forget that clinical expertise (based upon personal experience) and patient perspective (also based upon personal experience) provide the remaining foundation for this stool.

I also wonder about how our experiences are reflected. Are we remembering who we were when we decided to become librarians? What were our goals? Hopes? Dreams? Look back at that essay you wrote when you applied to school. Look back at a picture of yourself from that time. Who were you? What did you want? Who was annoying you? What were you really yearning to purchase at the time? Did Netflix or Amazon Prime even exist?? Keeping on “nodding terms” with these people allows us to not let these former selves “turn up unannounced”. It allows us to ground ourselves and remember where we came from and how we came to be. And it is a good reminder that our narratives are our personal evidence, and they affect how we perceive and deliver “unbiased” information. I believe that the library is never neutral. So I am always wary of claiming a lack of bias with research, no matter what. I prefer to be transparent about the strengths of evidence-based research and its pitfalls.

A few creative ways I have seen this reflected in medicine are narrative medicine, JAMA’s Poetry and Medicine section, and expert opinion pieces (the bottom of the evidence-based pyramid) in journals. Yes, these are biased. But I think it’s critical that we not forget that medicine ultimately heals the human body, which is comprised of the human experience. Greenhalgh and Hurwitz (1999) propose:

At its most arid, modern medicine lacks a metric for existential qualities such as the inner hurt, despair, hope, grief, and moral pain that frequently accompany, and often indeed constitute, the illnesses from which people suffer. The relentless substitution during the course of medical training of skills deemed “scientific”—those that are eminently measurable but unavoidably reductionist—for those that are fundamentally linguistic, empathic, and interpretive should be seen as anything but a successful feature of the modern curriculum. (p. 50)

Medical librarians are not doctors. But librarians are purveyors of stories, so I do think we reside in more than one leg of this evidence-based stool. I would encourage all types of librarians to seek out these outside perspectives to ground themselves in the everyday stories of healthcare professionals, patients, and ourselves.

 

References

  1. Didion, J. (2000). Slouching towards Bethlehem. New York: Modern Library.
  2. Greenhalgh, T., & Hurwitz, B. (1999). Why study narrative? BMJ: British Medical Journal, 318(7175), 48–50.
  3. Sackett D.L., Rosenberg W.M., Gray J.A., Haynes R.B., & Richardson W.S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ: British Medical Journal, 312(7023), 71–2. doi: 10.1136/bmj.312.7023.71.

 

Small Steps, Big Picture

As I thought about composing a blog post this week, I felt that familiar frustration of searching not only for a good idea, but a big one. I feel like I’m often striving (read: struggling!) to make space for big picture thinking. I’m either consumed by small to-do list items that, while important, feel piecemeal or puzzling over how to make a big idea more precise and actionable. So it feels worthwhile now, as I reflect back on the semester, to consider how small things can have a sizable impact.

I’m recalling, for example, a few small changes I’ve made to some information evaluation activities this semester in order to deepen students’ critical thinking skills. For context, here’s an example of the kind of activity I had been using: I would ask students to work together to compare two sources that I gave them and discuss what made each source reliable or not and whether one source was more reliable than the other. As a class, we would then turn the characteristics they articulated into criteria that we thought generally make for reliable sources. The activity seemed to help students identify and articulate what made those particular sources reliable or not, and it let us abstract evaluation criteria that could be applied to other sources.

While effective in some ways, I began to see how this activity contributed to, rather than countered, the problem of oversimplified information evaluation. Generally, I have found that students can identify key criteria for source evaluation such as an author’s credentials, an author’s use of evidence to support claims, the publication’s reputation, and the presence of bias. Despite their facility with naming these characteristics, though, I’ve observed that students’ evaluation of them is sometimes simplistic. In this activity, it felt like students could easily say evidence, author, bias, etc., but those seemed like knee-jerk reactions. Instead of creating opportunities to balance a source’s strengths/weaknesses on a spectrum, this activity seemed to reinforce the checklist approach to information evaluation and students’ assumptions of sources as good versus bad.  

At the same time, I’ve noticed that increased attention to “fake news” in the media has heightened students’ awareness of the need to evaluate information. Yet many students seem more prone to dismiss a source altogether as biased or unreliable without careful evaluation. The “fake news” conversation seems to have bolstered some students’ simplistic evaluations rather than deepened them.

In an effort to introduce more nuance into students’ evaluation practices and attitudes, then, I experimented with a few small shifts and have so far landed on revisions like the following.

Small shift #1 – Students balance the characteristics of a single source.
I ask students to work with a partner to evaluate a single source. Specifically, I ask them to brainstorm two characteristics about a given source that make it reliable and/or not reliable. I set this up on the board in two columns. Students can write in either/both columns: two reliable, two not reliable, or one of each. Using the columns side-by-side helps to visually illustrate evaluation as a balance of characteristics; a source isn’t necessarily all good or all bad, but has strengths and weaknesses.

Small shift #2 – Students examine how other students balance the strengths and weaknesses of the source.
Sometimes different students will write similar characteristics in both columns (e.g., comments about the evidence used in the source show up on both sides), helping students recognize how others might evaluate the same characteristic as reliable when they see it as unreliable, or vice versa. This helps illustrate the ways different readers might approach and interpret a source.

Small shift #3 – Rather than develop a list of evaluation criteria, we turn the characteristics they notice into questions to ask about sources.
In our class discussion, we talk about the characteristics of the source that they identify, but we don’t turn them into criteria. Instead we talk about them in terms of questions they might ask of any source. For example, they might cite “data” as a characteristic that suggests a source is reliable. With a little coaxing, they might expand, “well, I think the author in this source used a variety of types of evidence – statistics, interviews, research study, etc.” So we would turn that into questions to ask of any source (e.g., what type(s) of evidence are used? what is the quantity and quality of the evidence used?) rather than a criterion to check off.

Despite their smallness, these shifts have helped make space for conversation about pretty big ideas in information evaluation: interpretation, nuance, and balance. What small steps do you take to connect to the big picture? I’d love to hear your thoughts in the comments.

Questioning the Evidence-Based Pyramid

As a first-year health sciences librarian, I have not yet conducted a systematic review. However, as a speech-language pathologist, I learned about evidence-based medicine and the importance of combining clinical expertise with clinical evidence and patient values. As a librarian, I’m now able to combine these experiences, allowing me to see evidence-based medicine more holistically.

In the past month, I attended two professional development courses. The first was a Systematic Review Workshop held by the University of Pittsburgh. The second was an Edward Tufte course titled “Presenting Data and Information”. While these are two seemingly unrelated subjects, I left both reconsidering how we literally and figuratively view evidence-based medicine.

One of my biggest takeaways from the Systematic Review workshop was that a purpose of systematic reviews is to search for evidence on a specific topic in order to limit bias. This is done by searching multiple databases, reviewing grey literature, and having multiple team members screen papers and resolve disputes. One of my biggest takeaways from the Tufte course was that space should be used well to arrange information effectively and that displayed content should have integrity. In his book Visual Explanations, Tufte poses the following questions to test the integrity of information design (p. 70):

  • Is the display revealing the truth?
  • Is the representation accurate?
  • Are the data carefully documented?
  • Do the methods of display avoid spurious readings of the data?
  • Are appropriate comparisons and contexts shown?

When I think about visualization of evidence-based medicine, the evidence-based pyramid immediately comes to mind. It is an image used in many presentations related to evidence-based medicine:

EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.

While there is a lot of information in this image, I don’t think it is very clear. I have spoken to librarians (both in and outside of the health sciences) who agree. I think this is a problem. I don’t think all librarians need to know immediately what cohort studies are, but I do think they should understand where cohort studies fit within the visual.

From what I have gathered and discussed with other professionals, quality of evidence increases and bias decreases as you go up the pyramid. The pyramid is often explained in a hierarchical way: systematic reviews are considered the highest standard of evidence, which is why they sit at the top. There are usually fewer systematic reviews (since they take a long time and gather all the available literature about one topic), so the apex also indicates the smallest quantity. So let’s take a look at each of the integrity questions about information design and investigate this further:

Is the display revealing the truth?

Is it? How do we know whether this truthfully represents the quantity of each type of study or information? I believe that systematic reviews are probably the fewest in quantity and expert opinions the most numerous, which makes logical sense given the level of difficulty involved in producing and dispersing each type of information. However, what about the types of research in between? Also, is one type of evidence inherently less biased than the ones below it? Several studies suggest that systematic reviews may be systematic but are not always transparent or completely reported, and that many are outdated. This includes systematic reviews published in Cochrane, the highest standard of systematic reviews. While there are standards, they are very frequently not followed; following them can be very challenging and even paradoxical. It’s very possible that a cohort study can be designed in a way that is much more systematic and informed than even a systematic review.

Is the representation accurate?

When I see the word “representation,” I think about visual representation – the pyramid shape itself. There is an assumed hierarchy here, not just of evidence but of superiority. This is a simplistic and elitist way of presenting the information rather than an informative and useful one. If you think about it, a systematic review cannot be conducted without supporting RCTs, case reports, and so on; the research had to start somewhere. If this were seen as more of a scholarly conversation, I wonder whether there would be a place for hierarchy at all.

I have learned that the slices of the pyramid represent the quantity of publications at each level of evidence. However, this is not something that can be easily understood by looking at the visual alone. Also, if the sizes of the slices represent quantity, why should they? Quality is indicated in this version by the arrow going up the pyramid, which helps convey the ideas of quality and quantity together. But if evidence-based medicine wants to prioritize quality, maybe the sizes of the slices should represent the quality, not the quantity, of evidence. Viewed from that perspective, the systematic review slice should be the biggest because it is ideally the highest quality. Or should the slices represent the amount of bias? This is all quite unclear.

Are the data carefully documented? Do the methods of display avoid spurious readings of the data?

I don’t believe that any data are actually represented here. Rather, it feels like something we are simply told so that we believe it. I understand this is a visual model, but the image has been circulated so widely that it is taken as the truth. And one cannot avoid spurious readings of the data when no data are represented.

Are appropriate comparisons and contexts shown?

I do think the pyramid provides a visual way to compare information; however, I don’t think contexts are shown. Again, should the size of each level of evidence refer to quantity or quality? Is the context meant to indicate research superiority? If not, perhaps a pyramid isn’t the best shape: by definition, a pyramid has an apex at the top, implying superiority. Maybe a different shape or representation could provide alternate contexts.

So, how should evidence-based medicine be represented?

I have presented my own perceptions sprinkled with perceptions from others. I’m a new librarian, and my opinion has value. However, I also think this concept needs to be re-envisioned collectively with healthcare practitioners, researchers, librarians, and patients.

Another visualization that has been proposed is the Health Care Literature Wedge: a triangle with the apex facing right, indicating progressive research stages (a rough sketch follows below). I do think there are other shapes or concepts to consider. Perhaps concentric circles? Perhaps a sort of spectrum? Something 3D, maybe? I really don’t know. Another concept to consider is that systematic reviews are intended to reduce bias pertaining to a research question. Instead of reducing bias, maybe we can look at systematic reviews as increasing perspectives. How could this change the way evidence-based medicine is visualized?
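To make the wedge idea a bit more concrete, here is a rough, hypothetical sketch in Python (matplotlib) of what such a left-to-right wedge might look like. The stage labels, their order, and the proportions are my own placeholders for illustration; they are not a published standard or the official Health Care Literature Wedge.

```python
# Rough sketch of the "wedge" alternative: a triangle with its apex facing
# right, read left-to-right as progressive research stages rather than
# top-to-bottom as a hierarchy. Labels and proportions are placeholders.

import matplotlib.pyplot as plt

stages = [
    "Expert opinion / case reports",
    "Case-control studies",
    "Cohort studies",
    "Randomized controlled trials",
    "Systematic reviews",
]

fig, ax = plt.subplots(figsize=(9, 3))

# Wedge outline: tall on the left (many studies), narrowing to a point
# on the right (fewer, more synthesized studies).
ax.fill([0, 10, 0], [0, 1.5, 3], color="#cfe0f0", edgecolor="#1b3a6b")

# Spread the stage labels along the wedge from left to right.
for i, label in enumerate(stages):
    x = 1 + i * (8 / (len(stages) - 1))
    ax.text(x, 1.5, label, rotation=60, ha="center", va="center", fontsize=8)

ax.annotate("progressive research stages →", xy=(5, -0.2),
            ha="center", fontsize=9)
ax.set_xlim(-0.5, 10.5)
ax.set_ylim(-0.6, 3.2)
ax.axis("off")
plt.tight_layout()
plt.show()
```

Reading the same evidence types along a horizontal axis removes the built-in “top” of the pyramid, which is exactly the implied superiority questioned above.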

I think the questions posed by Tufte can help to guide this, and I’m sure there are other questions and models that can also help. I would love to hear other epistemologies and/or models, so please share!

References

  1. Chang, S. M., Bass, E. B., Berkman, N., Carey, T. S., Kane, R. L., Lau, J., & Ratichek, S. (2013). Challenges in implementing The Institute of Medicine systematic review standards. Systematic Reviews, 2, 69. http://doi.org/10.1186/2046-4053-2-69
  2. Garritty, C., Tsertsvadze, A., Tricco, A. C., Sampson, M., & Moher, D. (2010). Updating Systematic Reviews: An International Survey. PLoS ONE, 5(4), e9914. http://doi.org/10.1371/journal.pone.0009914
  3. IOM (Institute of Medicine). (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press. Retrieved from http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews.aspx
  4. McKibbon, K. A. (1998). Evidence-based practice. Bulletin of the Medical Library Association, 86(3), 396–401.
  5. The PLoS Medicine Editors. (2007). Many Reviews Are Systematic but Some Are More Transparent and Completely Reported than Others. PLoS Medicine, 4(3), e147. http://doi.org/10.1371/journal.pmed.0040147
  6. Tufte, E. R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.

 

An instruction librarian, a digital scholarship librarian, and a scientist enter a Twitter chat…

A quick note to preface this post: Thank you, Dylan Burns. After reading your post–What We Know and What They Know: Scholarly Communication, Usability, and Un-Usability–I can’t stop thinking about this weird nebula of article access, entitlement, ignorance, and resistance. Your blog post has done what every good blog post should do: Make me think. If you haven’t read Dylan’s post yet, stop, go back, and read. You’ll be better for it. I promise.

I am an instruction librarian, so everything that I read and learn about within the world of library and information science is filtered through a lens of education and pedagogy. This includes things like Dylan Burns’ latest blog post on access to scholarship, #TwitterLibraryLoan, and other not-so-legal means of obtaining academic works. He argues that faculty who use platforms like #Icanhazpdf or SciHub are not “willfully ignorant or disloyal to their institutions, libraries, or librarians. They just want what they want, when they want it,” and that “We as librarians shouldn’t  ‘teach’ our patrons to adapt to our obtuse and oftentimes difficult systems but libraries should adapt to the needs of our patrons.”

My initial reaction was YES, BUT…which means I’m trying to think of a polite way to express dissent. Thankfully, Dylan’s always up for a good Twitter discussion, so here’s what ensued:

My gut reaction to libraries giving people “what they want, when they want it” is always going to be non-committal. I’ve never been one to subscribe to what a colleague a long time ago referred to as “eat your peas librarianship” (credit: Michelle Boulé). I don’t think things should be difficult just for the sake of being difficult because things were hard for me, and you youngin’s should have to face hardships too! But I am also enough of a parent to know that giving people what they want when they want it without telling them how it got there is going to cause a lot of problems (and possibly temper-tantrums) later on. Here’s where the education librarian in me emerges: I don’t want scholars to just be able to get what they want when they need/want it without understanding the deeper problems within the arguably broken scholarly publishing model. In other words, I want to advocate for Lydia Thorne’s model of educating scholars about scholarly publishing problems. To which Dylan responds:

To which I can only respond:

Point: Dylan. Those of us who teach have all had the experience of trying to turn an experience into a teaching moment, only to be met by rolling eyes, blank stares, sighs, huffs, etc. Is the scholarly publishing system so broken that even knowing about the problems with platforms like SciHub, scholars will still engage in the piracy of academic works because, well, it’s all part of the game they need to play? Is this even an issue of usability, then? Creating extremely user-friendly library systems won’t change the fact that some libraries simply can’t afford the resources their communities want and need, yet those scholars still need to engage with the system that produces those resources. Is it always going to be a lose-lose for libraries?

At this point a friend of mine enters the Twitter discussion. Jonathan Jackson is an instructor of neurology and researcher at Massachusetts General Hospital:

Prior to this conversation I’d not thought about #TwitterLibraryLoan and similar efforts at not-so-legal access to scholarship as acts of resistance, but Jonathan’s entrance into the discussion forced me to think about the power of publicly asking for PDFs. I’ll admit that part of me is skeptical that all researchers are as politically conscious as Jonathan and his colleagues. I’m sure there are some folks who just need that article ASAP and don’t care how they get it. But there is power in calling out that one publisher or that one journal again and again on #ICanHazPDF because your library will never be able to afford that subscription.

I’ll admit that the whole Twitter exchange made me second guess motivations all around, which is what a good discussion should do, right?