New Year, New Weed

I think a lot of us have New Year’s resolutions or goal-setting on our minds as we start the spring semester, but this time of the year has me thinking more about our fiscal year goals. Heading into January means that we’re wrapping up the second quarter, and we can evaluate how the collection is measuring up to goals that were set before I started. The best way for me to determine progress is by looking at the data, and the most effective way to share that with my colleagues is through data storytelling. I’m still growing my data literacy, but narratives (the storytelling part) I can do.

One of the action items in our strategic plan is to incorporate new tools for assessment. I recently found out about Dossiers in BLUECloud Analytics, a SirsiDynix tool powered by MicroStrategy that pulls data and creates visualizations. Using knowledge I gained from a Learning Analytics course at Mizzou during my MLIS, plus guidance from books like Storytelling with Data, Data Science for Librarians, and Data Visualization: A Guide to Visual Storytelling for Libraries, I crafted a brief presentation as an update to the annual collection report. Honestly, compared to other programs like Tableau, this Dossier was tough to make – although between creating it and writing this post, they have upgraded the system to include new features I would have loved to use. I spent a lot of time figuring out the system, making the visualizations, and creating a visually appealing template. Besides finding out how extra I am, I think my colleagues had an easier time understanding the data and gained a better sense of where we stand. This is a small start toward incorporating data storytelling into our work culture.

Page of BLUECloud Analytics Dossier from ERAU

The biggest takeaway from this project was that deselection of materials had a larger positive impact on the age of the collection than adding brand-new materials alone could. It’s like trying to mix a grey paint: you would need to dump a whole lot of white onto your black paint to get it to lighten up, and it’s far more effective to take the old, unused stuff away first. Committing to tracking how we are progressing towards our goals is the only way I would have found out that the time liaison librarians have invested in collection development has been paying off – and more importantly, just how much of an impact their actions made. I think it is so much more valuable to see that quantitative comparison in the data than to simply say “good job.”
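If you want to see the paint math in action, here’s a toy example in Python – every number is invented, not our actual collection data – showing why weeding old titles can move the average publication year more than adding new ones:

```python
# Toy numbers: 3,000 older titles (1950) and 7,000 newer ones (2005).
collection = [1950] * 3000 + [2005] * 7000

def avg_year(years):
    return sum(years) / len(years)

print(f"Starting average pub year: {avg_year(collection):.1f}")   # 1988.5

# Option 1: add 1,000 brand-new titles (pouring in the white paint).
added = collection + [2024] * 1000
print(f"After adding 1,000 new titles: {avg_year(added):.1f}")    # ~1991.7

# Option 2: weed the 1,000 oldest titles instead.
weeded = sorted(collection)[1000:]
print(f"After weeding 1,000 old titles: {avg_year(weeded):.1f}")  # ~1992.8
```

Weeding wins here because the oldest titles sit farther from the collection’s average than even a brand-new title does – exactly the paint-mixing effect.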

There is an IMLS project coming out of the University of Illinois Urbana-Champaign for a “Data Storytelling Toolkit for Librarians” that I am really excited to learn more about. With a resource like that, we can all learn more about how to gain insights from our data, and especially how to share our impact with our stakeholders, whether internal or external. When people ask me which classes during my MLIS were the most beneficial, I always list Learning Analytics among them. We live in a data culture, and in my first year as an academic librarian, I am definitely seeing how it is starting to seep into my everyday work.

Telling the stories of our spaces

Space is a challenge in my library. With limited square footage, we sometimes don’t have enough seating for the number of students seeking to use our space. We can’t accommodate all the furniture types and configurations we need for students’ assorted library space uses. We’re further challenged by competing space uses (read: noise levels) happening in such close proximity. It’s not a surprise, then, that space improvement is a topic that’s on my mind quite often. We’re working to address these issues and needs with both small enhancements and larger-scale improvements, thinking about adjustments to our existing footprint while also advocating for an expansion.

Collecting and using data effectively is vital to our ability to identify, plan, and implement improvements. Relying on our assumptions about how students use and feel about space and services won’t cut it. So we’ve been using a variety of methods, both formal and informal, to inform our understanding of students and space–and how it could better meet their needs. Quantitative data like gate counts and service transactions document foot traffic and usage patterns. Occupancy rates show how many (or how few, as the case may be) seats we have in the library in relation to how many students we have. Enrollment trends and projections for our campus provide important context. Qualitative data–gathered through informal focus group meetings with student government and clubs and through questions posted on whiteboards in the library inviting students’ comments on space use and needs–contribute important, albeit selected, student perspectives to our understanding. And there are surely more data pieces we could gather and fit together in this puzzle. All this data can help us understand our current physical constraints and usage patterns and plan improved spaces.
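As a purely hypothetical illustration (every number below is invented, not our campus’s), this is the kind of quick arithmetic those data points feed into when we frame the seating problem:

```python
# Hypothetical figures: seats per student and peak occupancy,
# the rough math behind our space advocacy.
seats = 180
enrollment = 3200
projected_enrollment = 3600

print(f"Seats per student now: {seats / enrollment:.3f}")           # 0.056
print(f"Seats per student projected: {seats / projected_enrollment:.3f}")

# Hourly headcounts from one (invented) sweep of the building.
sweep = {"10am": 95, "12pm": 150, "2pm": 172, "4pm": 165, "8pm": 110}
for hour, count in sweep.items():
    print(f"{hour}: {count / seats:.0%} of seats occupied")
```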

Of course, space is tight on our campus for many departments and needs, not just the library. And competition for money is stiff. Funding for these improvements hinges, at least significantly if not entirely, on sharing the data in a meaningful and compelling way and effectively demonstrating our students’ needs. I’ve been searching around a bit for some inspiration or insight into how I might best tell the story of our students and space and stumbled across this from Jonathan Harris at just the right moment: “I think people have begun to forget how powerful human stories are, exchanging their sense of empathy for a fetishistic fascination with data, networks, patterns, and total information… Really, the data is just part of the story. The human stuff is the main stuff, and the data should enrich it.” Right when I was drowning in all the data visualization best practices and software recommendations–helpful in their own right to be sure–this timely reminder re-focused my view on the students at the center of our space story.

What has helped you tell the story of your students and spaces? How have you made your story and advocacy most compelling? Your planning most effective? I’d love to hear your thoughts in the comments.

 

Questioning the Evidence-Based Pyramid

As a first year health sciences librarian, I have not yet conducted a systematic review. However, as a speech-language pathologist, I learned about evidence-based medicine and the importance of combining clinical expertise with clinical evidence and patient values. As a librarian, I’m now able to combine these experiences, allowing me to see evidence-based medicine more holistically.

In the past month, I attended two professional development courses. The first was a Systematic Review Workshop held by the University of Pittsburgh. The second was an Edward Tufte course titled “Presenting Data and Information”. While these are two seemingly unrelated subjects, I left both reconsidering how we literally and figuratively view evidence-based medicine.

One of my biggest takeaways from the Systematic Review workshop was that a purpose of systematic reviews is to search for evidence on a specific topic in a way that limits bias. This is done by searching multiple databases, reviewing grey literature, and having multiple team members screen papers and resolve disputes. One of my biggest takeaways from the Tufte course was that space should be used well to effectively arrange information and that displayed content should have integrity. In his book Visual Explanations, Tufte poses the following questions to test the integrity of information design (p. 70):

  • Is the display revealing the truth?
  • Is the representation accurate?
  • Are the data carefully documented?
  • Do the methods of display avoid spurious readings of the data?
  • Are appropriate comparisons and contexts shown?

When I think about visualization of evidence-based medicine, the evidence-based pyramid immediately comes to mind. It is an image used in many presentations related to evidence-based medicine:

EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.

While there is a lot of information in this image, I don’t think it is very clear. I have spoken to librarians (both in and outside the health sciences) who agree. I think this is a problem. I don’t think all librarians need to know off-hand what cohort studies are, but I do think they should understand where they fit within the visual.

From what I have gathered and discussed with other professionals, quality of evidence (and freedom from bias) increases as you move up the pyramid. The pyramid is usually explained hierarchically: systematic reviews are considered the highest standard of evidence, which is why they sit at the top. There are also usually fewer systematic reviews (since they take a long time and gather all the available literature on one topic), so the apex indicates the least quantity as well. So let’s take a look at each of the integrity questions about information design and investigate further:

Is the display revealing the truth?

Is it? How do we know whether this truthfully represents the quantity of each type of study or information? I believe that systematic reviews are probably the fewest in number and expert opinion the most plentiful – that makes logical sense given how difficult each type of information is to produce and disperse. But what about the types of research in between? And is one type of evidence inherently less biased than the ones below it? Several studies suggest that systematic reviews may be systematic but are not always transparent or completely reported, and can be outdated. This includes systematic reviews published by Cochrane, the highest standard of systematic reviews. While standards exist, they are frequently not followed – though following them can be very challenging, even paradoxical. It’s entirely possible for a cohort study to be designed in a way that is more systematic and informed than a given systematic review.
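One way to move from assumption to evidence here is to ask PubMed itself. Below is a rough sketch using NCBI’s E-utilities; the search tags are my own approximations and won’t map perfectly onto the pyramid’s levels, but they give ballpark counts for each type:

```python
# Count PubMed records by (approximate) evidence level via NCBI E-utilities.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# These tags are approximations; e.g., cohort studies have no
# publication type in PubMed, so a MeSH term stands in for one.
levels = {
    "systematic reviews": "systematic review[pt]",
    "randomized controlled trials": "randomized controlled trial[pt]",
    "cohort studies": "cohort studies[mh]",
    "case reports": "case reports[pt]",
}

for label, term in levels.items():
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=30,
    )
    count = resp.json()["esearchresult"]["count"]
    print(f"{label}: {count} records")
```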

Is the representation accurate?

When I see the word “representation”, I think about visual representation – the pyramid shape itself. There is an assumed hierarchy here, not just of evidence but of superiority. That framing is simplistic and elitist rather than informative and useful. If you think about it, a systematic review cannot be conducted without supporting RCTs, case reports, and so on; research had to start somewhere. If this were seen as more of a scholarly conversation, I wonder whether there would be a place for hierarchy at all.

I have learned that the slices of the pyramid represent the quantity of publications at each level of evidence. However, that is not easily understood from the visual alone. And if the sizes of the slices represent quantity, why should they? Quality is indicated in this version by the arrow going up the pyramid, which helps convey the two ideas of quality and quantity together. But if evidence-based medicine wants to prioritize quality, maybe the sizes of the slices should represent the quality, not the quantity, of evidence. Viewed that way, the systematic review slice should be the biggest, because it is ideally the highest quality. Or should the slices represent the amount of bias? It is all quite unclear.

Are the data carefully documented? Do the methods of display avoid spurious readings of the data?

I don’t believe any data is actually represented here. Rather, it feels like something we are told so that we will believe it. I understand this is a visual model, but the image has circulated so widely that it is taken as truth. And one cannot avoid spurious readings of the data when no data are represented in the first place.

Are appropriate comparisons and contexts shown?

I do think this pyramid provides a visual way to compare information; however, I don’t think contexts are shown. Again, should the size of each level of evidence refer to quantity or quality? Is the context meant to indicate research superiority? If not, perhaps a pyramid isn’t the best shape: by definition, a pyramid has an apex at the top, implying superiority. Maybe a different shape or representation could provide alternate contexts.

So, how should evidence-based medicine be represented?

I have presented my own perceptions, sprinkled with perceptions from others. I’m a new librarian, and my opinion has value. However, I also think this concept needs to be re-envisioned collectively by healthcare practitioners, researchers, librarians, and patients.

Another visualization that has been proposed is the Health Care Literature Wedge: a triangle with the apex facing right, indicating progressive research stages. I do think there are other shapes or concepts to consider. Perhaps concentric circles? Perhaps a sort of spectrum? 3D, maybe? I really don’t know. Another concept to consider is that systematic reviews are intended to reduce bias pertaining to a research question. Instead of reducing bias, maybe we could look at systematic reviews as offering increased perspectives? How might that change the way evidence-based medicine is visualized?
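For anyone who wants to sketch along, here is a quick matplotlib mock-up of the concentric-circles idea; the labels and their ordering (systematic reviews at the center, synthesizing everything around them) are just my own guess at one possible arrangement, not a settled proposal:

```python
import matplotlib.pyplot as plt

# Rings from center outward; ordering is a guess, not a standard.
rings = ["systematic reviews", "RCTs", "cohort studies",
         "case reports", "expert opinion"]

fig, ax = plt.subplots(figsize=(6, 6))
for radius in range(len(rings), 0, -1):        # draw outermost ring first
    shade = plt.cm.Blues(0.2 + 0.12 * (len(rings) - radius))
    ax.add_patch(plt.Circle((0, 0), radius, fc=shade, ec="white"))
    ax.annotate(rings[radius - 1], (0, radius - 0.5),
                ha="center", fontsize=9)

ax.set_xlim(-5.5, 5.5)
ax.set_ylim(-5.5, 5.5)
ax.set_aspect("equal")
ax.axis("off")
ax.set_title("Evidence as concentric circles (a sketch)")
plt.show()
```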

I think the questions posed by Tufte can help to guide this. And I’m sure there are other questions and models that can also help. I would love to hear other epistemologies and/or models, so please share!

References

  1. Chang, S. M., Bass, E. B., Berkman, N., Carey, T. S., Kane, R. L., Lau, J., & Ratichek, S. (2013). Challenges in implementing The Institute of Medicine systematic review standards. Systematic Reviews, 2, 69. http://doi.org/10.1186/2046-4053-2-69
  2. Garritty, C., Tsertsvadze, A., Tricco, A. C., Sampson, M., & Moher, D. (2010). Updating systematic reviews: An international survey. PLoS ONE, 5(4), e9914. http://doi.org/10.1371/journal.pone.0009914
  3. IOM (Institute of Medicine). (2011). Finding what works in health care: Standards for systematic reviews. Washington, DC: The National Academies Press. Retrieved from http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews.aspx
  4. McKibbon, K. A. (1998). Evidence-based practice. Bulletin of the Medical Library Association, 86(3), 396–401.
  5. The PLoS Medicine Editors. (2007). Many reviews are systematic but some are more transparent and completely reported than others. PLoS Medicine, 4(3), e147. http://doi.org/10.1371/journal.pmed.0040147
  6. Tufte, E. R. (1997). Visual explanations: Images and quantities, evidence and narrative. Cheshire, CT: Graphics Press.