Small Steps, Big Picture

As I thought about composing a blog post this week, I felt that familiar frustration of searching not only for a good idea, but for a big one. I feel like I’m often striving (read: struggling!) to make space for big picture thinking. I’m either consumed by small to-do list items that, while important, feel piecemeal, or puzzling over how to make a big idea more precise and actionable. So it feels worthwhile now, as I reflect back on the semester, to consider how small things can have a sizable impact.

I’m recalling, for example, a few small changes I’ve made to some information evaluation activities this semester in order to deepen students’ critical thinking skills. For context, here’s an example of the kind of activity I had been using. I would ask students to work together to compare two sources I gave them, talking about what made each source reliable or not and whether one was more reliable than the other. As a class, we would then turn the characteristics they articulated into criteria that we thought generally make for reliable sources. The activity seemed to help students identify and articulate what made those particular sources reliable or not, and it permitted us to abstract to evaluation criteria that could be applied to other sources.

While effective in some ways, I began to see how this activity contributed to, rather than countered, the problem of oversimplified information evaluation. Generally, I have found that students can identify key criteria for source evaluation such as an author’s credentials, an author’s use of evidence to support claims, the publication’s reputation, and the presence of bias. Despite their facility with naming these characteristics, though, I’ve observed that students’ evaluation of them is sometimes simplistic. In this activity, students could easily say evidence, author, bias, etc., but those answers seemed like knee-jerk reactions. Instead of creating opportunities to weigh a source’s strengths and weaknesses on a spectrum, the activity seemed to reinforce the checklist approach to information evaluation and students’ assumption that sources are either good or bad.

At the same time, I’ve noticed that increased attention to “fake news” in the media has heightened students’ awareness of the need to evaluate information. Yet many students seem more prone to dismiss a source altogether as biased or unreliable without careful evaluation. The “fake news” conversation seems to have bolstered some students’ simplistic evaluations rather than deepened them.

In an effort to introduce more nuance into students’ evaluation practices and attitudes, then, I experimented with a few small shifts and have so far landed on revisions like the following.

Small shift #1 – Students balance the characteristics of a single source.
I ask students to work with a partner to evaluate a single source. Specifically, I ask them to brainstorm two characteristics about a given source that make it reliable and/or not reliable. I set this up on the board in two columns. Students can write in either/both columns: two reliable, two not reliable, or one of each. Using the columns side-by-side helps to visually illustrate evaluation as a balance of characteristics; a source isn’t necessarily all good or all bad, but has strengths and weaknesses.

Small shift #2 – Students examine how other students balance the strengths and weaknesses of the source.
Sometimes different students will write similar characteristics in both columns (e.g., comments about the evidence used in the source show up on both sides), helping students recognize how others might evaluate the same characteristic as reliable when they see it as unreliable, or vice versa. This helps illustrate the ways different readers might approach and interpret a source.

Small shift #3 – Rather than develop a list of evaluation criteria, we turn the characteristics they notice into questions to ask about sources.
In our class discussion, we talk about the characteristics of the source that they identify, but we don’t turn them into criteria. Instead we talk about them in terms of questions they might ask of any source. For example, they might cite “data” as a characteristic that suggests a source is reliable. With a little coaxing, they might expand: “well, I think the author in this source used a variety of types of evidence – statistics, interviews, a research study, etc.” So we would turn that into questions to ask of any source (e.g., what type(s) of evidence are used? what is the quantity and quality of the evidence used?) rather than a criterion to check off.

Despite their smallness, these shifts have helped make space for conversation about pretty big ideas in information evaluation: interpretation, nuance, and balance. What small steps do you take to connect to the big picture? I’d love to hear your thoughts in the comments.

Questioning the Evidence-Based Pyramid

As a first year health sciences librarian, I have not yet conducted a systematic review. However, as a speech-language pathologist, I learned about evidence-based medicine and the importance of combining clinical expertise with clinical evidence and patient values. As a librarian, I’m now able to combine these experiences, allowing me to see evidence-based medicine more holistically.

In the past month, I attended two professional development courses. The first was a Systematic Review Workshop held by the University of Pittsburgh. The second was an Edward Tufte course titled “Presenting Data and Information”. While these are two seemingly unrelated subjects, I left both reconsidering how we literally and figuratively view evidence-based medicine.

One of my biggest takeaways from the Systematic Review workshop was that a purpose of systematic reviews is to search for evidence on a specific topic in order to limit bias. This is done by searching multiple databases, reviewing grey literature, and having multiple team members screen papers and resolve disputes. One of my biggest takeaways from the Tufte course was that space should be used well to effectively arrange information and that displayed content should have integrity. In his book Visual Explanations, Tufte poses the following questions to test the integrity of information design (p. 70):

  • Is the display revealing the truth?
  • Is the representation accurate?
  • Are the data carefully documented?
  • Do the methods of display avoid spurious readings of the data?
  • Are appropriate comparisons and contexts shown?

When I think about visualization of evidence-based medicine, the evidence-based pyramid immediately comes to mind. It is an image used in many presentations related to evidence-based medicine:

EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.

While there is a lot of information in this image, I don’t think it is very clear. I have spoken to librarians, both in and outside the health sciences, who agree. I think this is a problem. I don’t think all librarians need to immediately know what cohort studies are, but I do think they should understand where cohort studies fit within the visual.

From what I have gathered and discussed with other professionals, the quality of evidence increases, and bias decreases, as you go up the pyramid. The pyramid is often explained in a hierarchical way; systematic reviews are considered the highest standard of evidence, which is why they sit at the top. There are usually fewer systematic reviews (since they take a long time and gather all the available literature on one topic), so the apex also indicates the least quantity. So let’s take a look at each of the integrity questions about information design and investigate this further:

Is the display revealing the truth?

Is it? How do we know whether this truthfully represents the quantity of each type of study or information? I believe that systematic reviews are probably the fewest in number and expert opinions the most plentiful. That makes logical sense given the difficulty of producing and dispersing each type of information. However, what about the types of research in between? And is one type of evidence inherently less biased than the ones below it? Several studies suggest that systematic reviews may be systematic but are not always transparent, completely reported, or up to date. This includes systematic reviews published in Cochrane, the highest standard of systematic reviews. While there are standards, they are very frequently not followed. Following these standards, however, can be very challenging and even paradoxical. It’s very possible for a cohort study to be designed in a way that is much more systematic and informed than even a systematic review.

Is the representation accurate?

When I see the word “representation,” I think about visual representation – the pyramid shape itself. There is an assumed hierarchy here, not just of evidence but of superiority. This is a simplistic and elitist way of presenting the information rather than an informative and useful one. If you think about it, a systematic review cannot be conducted without supporting RCTs, case reports, and so on. Research had to start somewhere. If this were seen as more of a scholarly conversation, I wonder whether there would still be a place for hierarchy.

I have learned that the slices of the pyramid represent the quantity of publications at each level of evidence. However, this is not something that can be easily understood by looking at the visual alone. And if the sizes of the slices represent quantity, why should they? Quality is indicated in this version by the arrow going up the pyramid, which helps represent the ideas of quality and quantity together. However, if evidence-based medicine wants to prioritize quality, maybe the sizes of the slices should represent the quality, not the quantity, of evidence. Viewed from that perspective, the systematic review slice should be the biggest because it is ideally the highest quality. Or should the slices represent the amount of bias? This is all quite unclear.

Are the data carefully documented? Do the methods of display avoid spurious readings of the data?

I don’t believe that any data are actually represented here. Rather, it feels like something we are told so that we will believe it. I understand this is a visual model, but the image has been floating around for so long that it is taken as the truth. And one cannot avoid spurious readings of the data when no data are represented in the first place.

Are appropriate comparisons and contexts shown?

I do think this pyramid provides a visual way to compare information; however, I don’t think contexts are shown. Again, should the size of each level of evidence refer to quantity or quality? Is the context meant to indicate research superiority? If not, perhaps a pyramid isn’t the best shape. By definition, a pyramid has an apex at the top, suggesting superiority. Maybe a different shape or representation could provide alternate contexts.

So, how should evidence-based medicine be represented?

I have presented my own perceptions sprinkled with perceptions from others. I’m a new librarian, and my opinion has value. However, I also think this concept needs to be re-envisioned collectively with healthcare practitioners, researchers, librarians, and patients.

Another visualization that has been proposed is the Health Care Literature Wedge. It would look like a triangle with the apex facing right, indicating progressive research stages. I do think there are other shapes or concepts to consider. Perhaps concentric circles? Perhaps a sort of spectrum? 3D, maybe? I really don’t know. Another concept to consider is that systematic reviews are intended to reduce bias pertaining to a research question. Instead of reducing bias, maybe we can look at systematic reviews as incorporating increased perspectives. How could this change the way evidence-based medicine is visualized?

I think the questions posed by Tufte can help to guide this, and I’m sure there are other questions and models that can also help. I would love to hear other epistemologies and/or models, so please share!

References

  1. Chang, S. M., Bass, E. B., Berkman, N., Carey, T. S., Kane, R. L., Lau, J., & Ratichek, S. (2013). Challenges in implementing The Institute of Medicine systematic review standards. Systematic Reviews, 2, 69. http://doi.org/10.1186/2046-4053-2-69
  2. Garritty, C., Tsertsvadze, A., Tricco, A. C., Sampson, M., & Moher, D. (2010). Updating systematic reviews: An international survey. PLoS ONE, 5(4), e9914. http://doi.org/10.1371/journal.pone.0009914
  3. IOM (Institute of Medicine). (2011). Finding what works in health care: Standards for systematic reviews. Washington, DC: The National Academies Press. Retrieved from http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews.aspx
  4. McKibbon, K. A. (1998). Evidence-based practice. Bulletin of the Medical Library Association, 86(3), 396–401.
  5. The PLoS Medicine Editors. (2007). Many reviews are systematic but some are more transparent and completely reported than others. PLoS Medicine, 4(3), e147. http://doi.org/10.1371/journal.pmed.0040147
  6. Tufte, E. R. (1997). Visual explanations: Images and quantities, evidence and narrative. Cheshire, CT: Graphics Press.


An instruction librarian, a digital scholarship librarian, and a scientist enter a Twitter chat…

A quick note to preface this post: Thank you, Dylan Burns. After reading your post, “What We Know and What They Know: Scholarly Communication, Usability, and Un-Usability,” I can’t stop thinking about this weird nebula of article access, entitlement, ignorance, and resistance. Your blog post has done what every good blog post should do: Make me think. If you haven’t read Dylan’s post yet, stop, go back, and read. You’ll be better for it. I promise.

I am an instruction librarian, so everything that I read and learn about within the world of library and information science is filtered through a lens of education and pedagogy. This includes things like Dylan Burns’ latest blog post on access to scholarship, #TwitterLibraryLoan, and other not-so-legal means of obtaining academic works. He argues that faculty who use platforms like #ICanHazPDF or SciHub are not “willfully ignorant or disloyal to their institutions, libraries, or librarians. They just want what they want, when they want it,” and that “We as librarians shouldn’t ‘teach’ our patrons to adapt to our obtuse and oftentimes difficult systems but libraries should adapt to the needs of our patrons.”

My initial reaction was YES, BUT…which means I’m trying to think of a polite way to express dissent. Thankfully, Dylan’s always up for a good Twitter discussion, so here’s what ensued:

My gut reaction to libraries giving people “what they want, when they want it” is always going to be non-committal. I’ve never been one to subscribe to what a colleague a long time ago referred to as “eat your peas librarianship” (credit: Michelle Boulé). I don’t think things should be difficult just for the sake of being difficult because things were hard for me, and you youngin’s should have to face hardships too! But I am also enough of a parent to know that giving people what they want when they want it without telling them how it got there is going to cause a lot of problems (and possibly temper-tantrums) later on. Here’s where the education librarian in me emerges: I don’t want scholars to just be able to get what they want when they need/want it without understanding the deeper problems within the arguably broken scholarly publishing model. In other words, I want to advocate for Lydia Thorne’s model of educating scholars about scholarly publishing problems. To which Dylan responds:

To which I can only respond:

Point: Dylan. Those of us who teach have all had the experience of trying to turn a moment into a teaching moment, only to be met by rolling eyes, blank stares, sighs, huffs, etc. Is the scholarly publishing system so broken that, even knowing about the problems with platforms like SciHub, scholars will still engage in the piracy of academic works because, well, it’s all a part of the game they need to play? Is this even an issue of usability, then? Creating extremely user-friendly library systems won’t change the fact that some libraries simply can’t afford the resources their community wants/needs, but those scholars still need to engage in the system that produces those resources. Is it always going to be a lose-lose for libraries?

At this point a friend of mine enters the Twitter discussion. Jonathan Jackson is an instructor of neurology and researcher at Massachusetts General Hospital:

Prior to this conversation I’d not thought about #TwitterLibraryLoan and similar efforts at not-so-legal access to scholarship as acts of resistance, but Jonathan’s entrance into the discussion forced me to think about the power of publicly asking for PDFs. I’ll admit that part of me is skeptical that all researchers are as politically conscious as Jonathan and his colleagues. I’m sure there are some folks who just need that article asap and don’t care how they get it. But there is power in calling out that one publisher or that one journal again and again on #ICanHazPDF because your library will never be able to afford that subscription.

I’ll admit that the whole Twitter exchange made me second guess motivations all around, which is what a good discussion should do, right?

I Can’t Think of Anything to Ask

My family and I have been deep in the health care system these past few weeks, in and out of hospitals and doctor’s offices, on the phone scheduling appointments, and in line at pharmacies. Everyone is home, everyone is as fine as can be expected, and long-term plans are being made for maintenance and healing strategies for my family member.

During every interaction with a medical professional, inevitably someone in a coat or scrubs would ask, “Do you have any questions?” or “Is there anything I can answer for you?” or “Do you need anything from me, right now?” In response I always felt like I should have had a list of questions. Occasionally I’d have one or two to tack on to a question a family member already asked, but more often than not I was struck by the feeling of not knowing what to ask. 

Information is my field. I teach students how to ask questions and engage in inquiry in subjects that are new to them. I know that when someone asks me if I have any questions, they genuinely want to give me information, because when I ask my students if they have any questions, I want to answer them. That doesn’t change the fact that

  1. Questions are hard to ask; and
  2. Anxiety, fear, sadness, and exhaustion turn brains to mush; and
  3. It’s hard to ask questions with mush for brains.

Every time I failed to come up with questions to ask about the future health and well-being of my family member, I felt like a failure. It felt like such a high-pressure, critical moment, as though I could have drastically changed things by simply asking the question that would get to the *right* piece of information and unlock this whole health puzzle. I know it’s an illogical thought, but again, Mush. Brains. Brain Mush.

I don’t want to equate families seeking health care information with all library patrons seeking information. I know that most people would argue that we are not necessarily in the same headspace or seeking information of equal importance, but really, how do we know? We don’t know what’s going on with our students, faculty, staff, and community members. Assumptions are poor substitutes for empathy, openness, and understanding.

One thing I wish were possible with health care professionals is the opportunity to email them or text them a question after an appointment or hospital visit. I am so frustrated by having to wait until our next meeting to rattle off my list of questions, the ones I could never come up with on the spot, without adequate time to research and reflect. We, as librarians, have that opportunity of continued interaction with our community. It’s what makes us special. We don’t need someone to have all the questions at one critical moment. We’re open to questions whenever they arise. I feel as though I could do a better job of making sure my own community knows that there isn’t just one right time to ask me a question. Questions are always welcome, and compassion is a needed response.

Conscientious Engagement and the Framework for Information Literacy

In April 2017, Gender, Place & Culture published an article by geographers Carrie Mott and Daniel Cockayne entitled “Citation matters: mobilizing the politics of citation toward a practice of ‘conscientious engagement’” (http://www.tandfonline.com/doi/abs/10.1080/0966369X.2017.1339022?journalCode=cgpc20). Mott and Cockayne problematized the ways in which certain voices are privileged in scholarly circles. As with many other feminist academic polemics, this article drew the ire of many conservative outlets, including the National Review, which alerted its readers that “Feminist Geographers Warn Against Citing Too Many White Men in Scholarly Articles.”

While yes, technically, the article does talk about moving away from dominant white male voices in geography, the National Review, and many others, miss the point entirely. Mott and Cockayne investigate what they term the “politics of citation,” which, the authors explain, contributes to an “uneven reproduction of academic and disciplinary geographic knowledge” (Mott and Cockayne 2). As the authors see it, the “performativity of citations,” a notion borrowed from Judith Butler and J.L. Austin, leads to a place where “well-cited scholars have authority precisely because they are well-cited” (Mott and Cockayne 13). Certain kinds of scholars, perched upon hegemonic forces, are better represented because of the echo-chamber-like machinations of contemporary scholarship. Mott and Cockayne conclude by suggesting that authors “carefully read through and count the citations in their list of references prior to submitting papers as a way to self-consciously draw attention to whose work is being reproduced” (Mott and Cockayne 13). This has direct ties to how we approach information literacy and, more directly in the scholarly communication field, how we measure the use and quality of materials.

With an eye toward students, the ACRL Framework for Information Literacy for Higher Education maintains the distinction between novice learners and seasoned scholars in its “Scholarship as Conversation” section. It states that students should be able to “contribute to scholarly conversations at an appropriate level, such as local online community, guided discussion, undergraduate research journal…” while acknowledging that “systems privilege authorities and that not having a fluency in the language and process of a discipline disempowers their ability to participate and engage” (Framework 8). Part of this, though, plays on the fears that Mott and Cockayne are exploring; namely, that fluency is required to gain access to the academy. Fluency in what? Acknowledging that students must be fluent in the authorities of a field (somewhat contradictorily) reinforces the barriers that the Framework attempts to disrupt, and echoes discussions in the literary world over canons and canonicity. In some ways, this is simply how it is, and our part as librarians is to prepare students for the academic worlds they inhabit and the games they have to play. While I do not want to retread old debates over library neutrality, fluency as a requirement for contribution is not a neutral act; treating it as neutral reinforces the hegemonic forces that dominate academia.

This is not to say that the Framework is silent on the hegemonic forces behind standardization in academia; it explicitly names biases in the ways authority is constructed and contextual (Framework 4). However, I wonder whether “authority” is always somewhat limiting in the way we approach new concepts. Authority is nearly always, even in fields as traditionally and statistically female as librarianship, heavily skewed toward dominant (male, white, heterosexual, cisgendered, Western) voices. Even if we name authority as contextual or constructed, do we not give in to that construction when we teach to the standards and to fluency in dominant paradigms?

This hegemonic echo chamber is even more visible in scholarly communication. As long as the citation remains the lead indicator of influence for tenure-track faculty, it will represent a have and have-not situation for our newest faculty. The most-cited articles will be the most authoritative and therefore cited more often. On a practical level, this makes it difficult for new or outsider voices to be heard and respected. By encouraging “conscientious engagement” with sources and sourcing, we might be able to spread influence beyond the greatest hits of a genre and beyond the old, white, male, heterosexual forces that have defined authority for centuries. In so doing, we could work to further include younger faculty and historically disenfranchised faculty in the larger conversations, which could greatly benefit the future of individual fields as well as individual tenure cases on a more pragmatic level.

What should a librarian who is interested in conscientious engagement do? I, for one, am going to start demoing and suggesting to my students sources from outside the cultural hegemony. While these sources may not be the “greatest hits,” this small(ish) action will promote larger engagement with new and challenging ideas. In my own work, I will also strive to cite voices outside of the dominant hegemony and use my status to promote work that challenges the status quo. Is there a way to preserve the hallmarks of a field while encouraging new voices? I believe so. I think there could be a middle ground, where disciplinary fluency is possible without the parroting of white male voices only. By being conscientious about who we cite and who we read, we can build a larger and more diverse set of authorities.

Given the outcry on the right over the mere suggestion that we cite non-dominant voices in scholarship, it is difficult to see this as a quick and easy transition. Yet, if we take Mott and Cockayne’s piece beyond the scope of geography and let it influence our own approaches to research and information literacy, it will benefit many of our stakeholders. On one hand, it will increase exposure for those faculty outside the cultural hegemony; on the other, it will encourage diversity of thought and action in our students. Part of encouraging critical thinking should be encouraging conscientious engagement.