Vocational Awe and Professional Identity

A few days ago, In the Library with the Lead Pipe published an article by Fobazi Ettarh titled Vocational Awe and Librarianship: The Lies We Tell Ourselves. Ettarh uses the term “vocational awe” to “refer to the set of ideas, values, and assumptions librarians have about themselves and the profession that result in beliefs that libraries as institutions are inherently good and sacred, and therefore beyond critique.” Her article masterfully traces the root of this vocational awe, from the intertwining history of faith and librarianship to our current state, where librarians are expected to literally save lives. Ettarh argues that vocational awe leads to some of the structural problems in our profession, like lack of diversity, undercompensation, and burnout.

I will admit that I initially felt some defensiveness when I started reading this article. One of the reasons I became a librarian is that I wanted to care about and be engaged with the mission of my work, and I do deeply believe in the values that libraries try to uphold. When I got past that initial reaction, I realized how Ettarh’s research allows us to talk about our profession more honestly. As the author clearly states, the article doesn’t ask librarians not to take pride in their work. Nor is it an indictment of our core values (although it does, rightly, point out they are inequitably distributed across society). Rather, it encourages us to challenge the idea that our profession is beyond critique, and therefore opens up space for us to better it.

Although this is not its primary intent, I wonder whether this research direction will help us resolve some of our own tortured professional identity issues. I am among those who became a librarian partly out of passion and partly out of convenience. I didn’t feel called to the profession. Instead, I made a conscious decision based on my interests and the sort of life I wanted for myself. I knew I wanted a job where I would be helping people, with the opportunity for intellectual growth, and I wanted stability and a balance between work and my other personal interests. Librarianship seemed like a very natural fit. But the vocational awe in librarianship means that you’re surrounded by the idea that being a good librarian means being driven solely by passion. Heidi Johnson previously wrote about the isolating feeling of not being a “born librarian” here at ACRLog, and I remember her post resonating deeply with me when I first started to become self-conscious that my professional identity was built less on a sacred calling than some of my peers’ seemed to be. I think that unpacking the vocational awe that makes us feel this way might help to dispel some of the professional identity issues that so many librarians, and particularly new ones, seem to have.

As I was thinking about this article, I also realized that my own version of vocational awe usually manifests when I’m talking to non-librarians. Telling people I’m a librarian produces surprisingly revealing responses. Some people respond with a well-meaning, but misinformed, “how fun! I wish I could read books all day,” while others respond with some variation of “but aren’t libraries dying?” I suspect that this is partially a result of the slew of articles published every year on the decline of libraries and the death of librarianship. After responses like this, I feel compelled to defend librarianship in the strongest terms. I talk about information literacy, intellectual freedom, public spaces, privacy, access to information, democracy, you name it. I turn into a library evangelist. None of my own hesitations, challenges, or frustrations find their way into these conversations. Several people have already written about the exhaustion of constantly defending and explaining our profession. But this article made me wonder if there is some connection between how often we find ourselves needing to defend what we do (to friends, to faculty, to funding agencies, to the public) and our tendency to resist the idea that there is a lot of internal work we need to do to truly uphold the values we claim. Ettarh’s article made me think about how to balance these two ideas: believing in and advocating for my profession, while working to make it better for the people in it.

What does that look like? I’m not entirely sure yet. But I think it entails being more honest. It means advocating for our value, but not pretending that we can do everything. And it means contributing to a culture that doesn’t valorize martyrdom. For me, that means saying no if I don’t have the bandwidth for a project. It means using all my vacation time, and no longer treating busyness as a measure of worth. There is much more to the article than I can unpack here, and I hope that everyone will go read it. I’m looking forward to hearing other people’s thoughts on how vocational awe impacts our profession, and how we might work to stop using it, as Ettarh puts it, as the only way to be a librarian.

From Mentor to Mentee: Navigating Peer-to-Peer Relationships

As I write this post, I am sitting in my bedroom, in an odd combination of hot air coming from the vents and freezing air coming from the window. Like everyone and their grandmas, we’ve had the pipes at our house freeze a couple of times this week. Living in DC, we don’t experience much single-digit weather, but we make do. As I was wallowing in self-pity over my household woes, I caught up with a friend who is currently in her last semester of library school and applying to jobs.

However, this post is not about the job hunt; it’s about informal peer-to-peer mentorship. While there are many formal programs, relationships, and structures for this type of mentorship, a lot of us “fall” into it. When I got my first job at American University, this particular friend was thinking about going to library school, and I found myself giving her the information and mentorship that I wish I had gotten before starting library school.

Almost two years have passed since my friend entered library school, and our friendship has evolved in many ways. While we are in different phases of our lives, our conversations have shifted from library school and recommended classes to the very first job search and the stress and anxiety that come with it.

In the midst of sharing my own interview tips, I was caught by surprise. I noticed that the roles had flipped. She was now mentoring and reassuring me. Since I am geographically bound in my current job search, she was sending me job postings and assuring me that the job market would pick up. She was telling me that everything was going to be alright.

This type of informal peer-to-peer mentorship is important not only because it helps the next generation of librarians, but also because of the experiences, both positive and negative, that the two of you might encounter and share. Peer-to-peer mentorship relationships are an integral part of your growth as a librarian.

Anyone can share their experiences in library land, their journey through that first job hunt, or any other helpful information. However, there are some topics that may be a little hard to discuss and share with just anyone. 

I believe that the foundation of a good mentorship relationship of any kind is trust, empathy, and respect. It takes these three things to be able to talk openly about some difficult topics in librarianship: being one of the very few women of color at your workplace, microaggressions, the unspoken rules of interviewing for a job, or navigating difficult relationships.

Having these conversations allows for reflection, discussion, and action. The great thing about this relationship is that you’re able to see the growth of the other person, both personally and professionally. It’s almost as if you’ve grown up together and become adults. I encourage everyone to cherish these relationships because your education as a librarian is never over. The best teachers are the ones closest to you. 

Small Steps, Big Picture

As I thought about composing a blog post this week, I felt that familiar frustration of searching not only for a good idea, but for a big one. I feel like I’m often striving (read: struggling!) to make space for big-picture thinking. I’m either consumed by small to-do list items that, while important, feel piecemeal, or puzzling over how to make a big idea more precise and actionable. So it feels worthwhile now, as I reflect back on the semester, to consider how small things can have a sizable impact.

I’m recalling, for example, a few small changes I’ve made to some information evaluation activities this semester in order to deepen students’ critical thinking skills. For context, here’s an example of the kind of activity I had been using. I would ask students to work together to compare two sources that I gave them and talk about what made the sources reliable or not and whether one source was more reliable than the other. As a class, we would then turn the characteristics they articulated into criteria that we thought generally make for reliable sources. The activity seemed to help students identify and articulate what made those particular sources reliable or not, and it permitted us to abstract to evaluation criteria that could be applied to other sources.

While effective in some ways, I began to see how this activity contributed to, rather than countered, the problem of oversimplified information evaluation. Generally, I have found that students can identify key criteria for source evaluation such as an author’s credentials, an author’s use of evidence to support claims, the publication’s reputation, and the presence of bias. Despite their facility with naming these characteristics, though, I’ve observed that students’ evaluation of them is sometimes simplistic. In this activity, it felt like students could easily say evidence, author, bias, etc., but those seemed like knee-jerk reactions. Instead of creating opportunities to balance a source’s strengths/weaknesses on a spectrum, this activity seemed to reinforce the checklist approach to information evaluation and students’ assumptions of sources as good versus bad.  

At the same time, I’ve noticed that increased attention to “fake news” in the media has heightened students’ awareness of the need to evaluate information. Yet many students seem more prone to dismiss a source altogether as biased or unreliable without careful evaluation. The “fake news” conversation seems to have bolstered some students’ simplistic evaluations rather than deepened them.

In an effort to introduce more nuance into students’ evaluation practices and attitudes, then, I experimented with a few small shifts and have so far landed on revisions like the following.

Small shift #1 – Students balance the characteristics of a single source.
I ask students to work with a partner to evaluate a single source. Specifically, I ask them to brainstorm two characteristics about a given source that make it reliable and/or not reliable. I set this up on the board in two columns. Students can write in either/both columns: two reliable, two not reliable, or one of each. Using the columns side-by-side helps to visually illustrate evaluation as a balance of characteristics; a source isn’t necessarily all good or all bad, but has strengths and weaknesses.

Small shift #2 – Students examine how other students balance the strengths and weaknesses of the source.
Sometimes different students will write similar characteristics in both columns (e.g., comments about the evidence used in the source show up on both sides), helping students recognize how others might evaluate the same characteristic as reliable when they see it as unreliable, or vice versa. This helps illustrate the ways different readers might approach and interpret a source.

Small shift #3 – Rather than develop a list of evaluation criteria, we turn the characteristics they notice into questions to ask about sources.
In our class discussion, we talk about the characteristics of the source that they identify, but we don’t turn them into criteria. Instead we talk about them in terms of questions they might ask of any source. For example, they might cite “data” as a characteristic that suggests a source is reliable. With a little coaxing, they might expand, “well, I think the author in this source used a variety of types of evidence – statistics, interviews, research study, etc.” So we would turn that into questions to ask of any source (e.g., what type(s) of evidence are used? what is the quantity and quality of the evidence used?) rather than a criterion to check off.

Despite their smallness, these shifts have helped make space for conversation about pretty big ideas in information evaluation: interpretation, nuance, and balance. What small steps do you take to connect to the big picture? I’d love to hear your thoughts in the comments.

Questioning the Evidence-Based Pyramid

As a first-year health sciences librarian, I have not yet conducted a systematic review. However, as a speech-language pathologist, I learned about evidence-based medicine and the importance of combining clinical expertise with clinical evidence and patient values. As a librarian, I’m now able to combine these experiences, allowing me to see evidence-based medicine more holistically.

In the past month, I attended two professional development courses. The first was a Systematic Review Workshop held by the University of Pittsburgh. The second was an Edward Tufte course titled “Presenting Data and Information”. While these are two seemingly unrelated subjects, I left both reconsidering how we literally and figuratively view evidence-based medicine.

One of my biggest takeaways from the Systematic Review workshop was that a purpose of systematic reviews is to search for evidence on a specific topic in order to limit bias. This is done by searching multiple databases, reviewing grey literature, and having multiple team members screen papers and resolve disputes. One of my biggest takeaways from the Tufte course was that space should be used well to effectively arrange information and that displayed content should have integrity. In his book Visual Explanations, Tufte poses the following questions to test the integrity of information design (p. 70):

  • Is the display revealing the truth?
  • Is the representation accurate?
  • Are the data carefully documented?
  • Do the methods of display avoid spurious readings of the data?
  • Are appropriate comparisons and contexts shown?

When I think about visualization of evidence-based medicine, the evidence-based pyramid immediately comes to mind. It is an image used in many presentations related to evidence-based medicine:

EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.

While there is a lot of information in this image, I don’t think it is very clear. I have spoken to librarians (both in and outside the health sciences) who agree. I think this is a problem. I don’t think all librarians need to immediately know what cohort studies are, but I do think they should understand their context within the visual.

From what I have gathered and discussed with other professionals, quality of evidence increases and bias decreases as you go up the pyramid. The pyramid is often explained hierarchically; systematic reviews are considered the highest standard of evidence, which is why they sit at the top. There are usually fewer systematic reviews (since they take a long time and gather all the available literature on one topic), so the apex also indicates the smallest quantity. So let’s take a look at each of the integrity questions about information design and investigate further:

Is the display revealing the truth?

Is it? How do we know whether this truthfully represents the quantity of each type of study or information? I believe that systematic reviews are probably the fewest in number and expert opinion the most plentiful. That makes logical sense given the level of difficulty involved in producing and disseminating each type of information. However, what about the types of research in between? Also, is one type of evidence inherently less biased than the ones below it? Several studies suggest that systematic reviews may be systematic, but they are not always transparent or completely reported, and many are outdated. This includes systematic reviews published by Cochrane, the highest standard of systematic reviews. While there are standards, they are very frequently not followed. However, following these standards can be very challenging and paradoxical. It’s very possible that a cohort study can be designed in a way that is much more systematic and informed than even a systematic review.

Is the representation accurate?

When I see the word “representation,” I think about the visual representation itself: the pyramid shape. There is an assumed hierarchy here, not just of evidence but of superiority. This way of thinking about the information is simplistic and elitist rather than informative and useful. If you think about it, a systematic review cannot be conducted without supporting RCTs, case reports, and so on. Research had to start somewhere. If this were seen as more of a scholarly conversation, I wonder whether there would still be a place for hierarchy.

I have learned that the slices of the pyramid represent the quantity of publications at each level of evidence. However, this is not something that can be easily understood by looking at the visual alone. And if the sizes of the slices represent quantity, why should they? Quality is indicated in this version by the arrow going up the pyramid, which helps convey both quality and quantity. However, if evidence-based medicine wants to prioritize quality, maybe the sizes of the slices should represent the quality, not the quantity, of evidence. Viewed from that perspective, the systematic review slice should be the biggest because it is ideally the highest quality. Or should the slices represent the amount of bias? This is all quite unclear.

Are the data carefully documented? Do the methods of display avoid spurious readings of the data?

I don’t believe that any data are actually represented here. Rather, it feels like something we are simply told so that we believe it. I understand this is a visual model, but the image has circulated so widely that it is taken as the truth. I don’t think one can avoid spurious readings of the data, because no data are represented here.

Are appropriate comparisons and contexts shown?

I do think that this pyramid provides a visual way to compare information; however, I don’t think contexts are shown. Again, should the size of each level of evidence refer to quantity or quality? Is the context meant to indicate research superiority? If not, perhaps a pyramid isn’t the best shape. By definition, a pyramid has an apex at the top, suggesting superiority. Maybe a different shape or representation could provide alternate contexts.

So, how should evidence-based medicine be represented?

I have presented my own perceptions sprinkled with perceptions from others. I’m a new librarian, and my opinion has value. However, I also think this concept needs to be re-envisioned collectively with healthcare practitioners, researchers, librarians, and patients.

Another visualization that has been proposed is the Health Care Literature Wedge. It would look like a triangle with the apex facing right, indicating progressive research stages. I do think there are other shapes or concepts to consider. Perhaps concentric circles? Perhaps a sort of spectrum? Something 3D, maybe? I really don’t know. Another concept to consider is that systematic reviews are intended to reduce bias pertaining to a research question. Instead of framing them as reducing bias, maybe we can look at systematic reviews as incorporating more perspectives. How could this change the way evidence-based medicine is visualized?

I think the questions posed by Tufte can help to guide this. And I’m sure there are other questions and models that can also help. I would love to hear other epistemologies and/or models, so please share!


  1. Chang, S. M., Bass, E. B., Berkman, N., Carey, T. S., Kane, R. L., Lau, J., & Ratichek, S. (2013). Challenges in implementing The Institute of Medicine systematic review standards. Systematic Reviews, 2, 69. http://doi.org/10.1186/2046-4053-2-69
  2. Garritty, C., Tsertsvadze, A., Tricco, A. C., Sampson, M., & Moher, D. (2010). Updating Systematic Reviews: An International Survey. PLoS ONE, 5(4), e9914. http://doi.org/10.1371/journal.pone.0009914
  3. IOM (Institute of Medicine). (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press. Retrieved from http://www.nationalacademies.org/hmd/Reports/2011/Finding-What-Works-in-Health-Care-Standards-for-Systematic-Reviews.aspx
  4. McKibbon, K. A. (1998). Evidence-based practice. Bulletin of the Medical Library Association, 86(3), 396–401.
  5. The PLoS Medicine Editors. (2007). Many Reviews Are Systematic but Some Are More Transparent and Completely Reported than Others. PLoS Medicine, 4(3), e147. http://doi.org/10.1371/journal.pmed.0040147
  6. Tufte, E. R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.


An instruction librarian, a digital scholarship librarian, and a scientist enter a Twitter chat…

A quick note to preface this post: Thank you, Dylan Burns. After reading your post, “What We Know and What They Know: Scholarly Communication, Usability, and Un-Usability,” I can’t stop thinking about this weird nebula of article access, entitlement, ignorance, and resistance. Your blog post has done what every good blog post should do: make me think. If you haven’t read Dylan’s post yet, stop, go back, and read. You’ll be better for it. I promise.

I am an instruction librarian, so everything that I read and learn about within the world of library and information science is filtered through a lens of education and pedagogy. This includes things like Dylan Burns’ latest blog post on access to scholarship, #TwitterLibraryLoan, and other not-so-legal means of obtaining academic works. He argues that faculty who use platforms like #ICanHazPDF or SciHub are not “willfully ignorant or disloyal to their institutions, libraries, or librarians. They just want what they want, when they want it,” and that “We as librarians shouldn’t ‘teach’ our patrons to adapt to our obtuse and oftentimes difficult systems but libraries should adapt to the needs of our patrons.”

My initial reaction was YES, BUT…which means I’m trying to think of a polite way to express dissent. Thankfully, Dylan’s always up for a good Twitter discussion, so here’s what ensued:

My gut reaction to libraries giving people “what they want, when they want it” is always going to be non-committal. I’ve never been one to subscribe to what a colleague a long time ago referred to as “eat your peas librarianship” (credit: Michelle Boulé). I don’t think things should be difficult just for the sake of being difficult because things were hard for me, and you youngin’s should have to face hardships too! But I am also enough of a parent to know that giving people what they want when they want it without telling them how it got there is going to cause a lot of problems (and possibly temper-tantrums) later on. Here’s where the education librarian in me emerges: I don’t want scholars to just be able to get what they want when they need/want it without understanding the deeper problems within the arguably broken scholarly publishing model. In other words, I want to advocate for Lydia Thorne’s model of educating scholars about scholarly publishing problems. To which Dylan responds:

To which I can only respond:

Point: Dylan. Those of us who teach have all had the experience of trying to turn an experience into a teaching moment, only to be met by rolling eyes, blank stares, sighs, huffs, etc. Is the scholarly publishing system so broken that, even knowing about the problems with platforms like SciHub, scholars will still engage in the piracy of academic works because, well, it’s all part of the game they need to play? Is this even an issue of usability, then? Creating extremely user-friendly library systems won’t change the fact that some libraries simply can’t afford the resources their communities want and need, but those scholars still need to engage with the system that produces those resources. Is it always going to be a lose-lose for libraries?

At this point a friend of mine enters the Twitter discussion. Jonathan Jackson is an instructor of neurology and researcher at Massachusetts General Hospital:

Prior to this conversation I hadn’t thought about #TwitterLibraryLoan and similar efforts at not-so-legal access to scholarship as acts of resistance, but Jonathan’s entrance into the discussion forced me to think about the power of publicly asking for PDFs. I’ll admit that part of me is skeptical that all researchers are as politically conscious as Jonathan and his colleagues. I’m sure there are some folks who just need that article asap and don’t care how they get it. But there is power in calling out that one publisher or that one journal again and again on #ICanHazPDF because your library will never be able to afford that subscription.

I’ll admit that the whole Twitter exchange made me second guess motivations all around, which is what a good discussion should do, right?