Don’t Write the Comments?

We had a month of especially active blogging in January and early February this year here at ACRLog. In addition to the regularly scheduled posts from Erin and Lindsay in our First Year Academic Librarian Experience series, there were also great posts about the upcoming Symposium on LIS Education from Sarah, and on better communicating our ideas to different audiences from Jennifer.

But what really pushed us over the top last month was a group of guest posts about the new ACRL Framework for Information Literacy in Higher Education. First we featured the open letter from a group of New Jersey information literacy librarians sharing their concerns about the new Framework replacing the old Standards. Several responses followed: Ian Beilin and Nancy Foasberg wrote in support of the Framework, then Jacob Berg responded to both the open letter and Beilin and Foasberg’s response. Donna Witek contributed a post on the Framework and assessment, and Lori Townsend, Silvia Lu, Amy Hofer, and Korey Brunetti closed out the month with their post expanding on threshold concepts.

Since the Framework was scheduled to be discussed and voted on at Midwinter at the end of January, the timing of this flurry of posts isn’t surprising. These Framework (and related) posts tackled big topics and issues, issues that academic and other librarians have been discussing in many venues. So I have to admit that I was surprised to see that there was practically no discussion of these posts here on ACRLog. One person left a comment on the threshold concepts post sharing a citation, and there were a couple of pingbacks from other blogs around the web linking to these posts.

The absence of discussion here on ACRLog seems even more remarkable given the presence of discussion in other venues. I’m active on Twitter and there have been many, many discussions about the IL Framework as a replacement for (or supplement to) the Standards for months now. Whenever a post is published on ACRLog it’s tweeted out automatically, and these Framework posts sparked many a 140 character response. I’m not on any listservs right now (I know, I know, somewhat scandalous for a librarian), and I’m also not on Facebook, but from what I gather there was discussion of these posts on various listservs and FB too.

Even in our post-Andrew Sullivan era, I still read plenty of real live, not-dead-yet blogs — indeed, trying to keep up with my RSS reader is sometimes a challenge. But it’s been interesting to see the comments, the conversations, move elsewhere on the internet lately. Not that our ACRLog comments have been totally silent, but more often than not I log in to find that the comment approval page is pretty quiet. This is despite some of the obvious advantages to blog comments over other options (though as anyone who’s ever encountered a troll can attest, there are disadvantages too). While Twitter can offer the opportunity to immediately engage with folks over a topic or issue — and there are many, many librarians on Twitter — the 140 character limit for tweets can often feel constraining when the topic or issue is large or complex. Listservs allow for longer-form responses, but of course are limited to those who subscribe to them; as a walled garden, Facebook also suffers from audience exclusivity.

All of which has me wondering if there’s a way to combine these different media to enable interested folks to participate in the conversation using whichever platform they prefer. I know there are plugins out there that can pull media streams together, but can these be combined in a way that’s less about displaying information and more about encouraging discussion? Or is that too much work to solve a problem that’s not really a problem? Should we be concerned that different conversations about the same topics in librarianship are happening in different online places, perhaps with little crossover?

I’d be interested to hear your thoughts in the comments. :)

A Day (or 3) in the Life

Yesterday I spent an hour going through my inbox, turning each email that needs attention into a task, saving the ones that I need into relevant folders on a shared drive, deleting some, categorizing some and then dropping some into an inbox folder so that I can keyword search them if I ever need them again. It was so satisfying. Now my inbox has exactly one (ONE!!) message in it and that message has been in my inbox since my first week here at UNT. I guess I’m saving it for a rainy day. Of course, now my task list is longer than it was before I started doing inbox organizing so…

Anyway, looking at my ever-evolving task list made me realize how varied my days really are. I am preparing for a presentation at the Electronic Resources & Libraries Conference at the end of this month. For the introduction to my presentation I am writing a description of a “day in the life” of an Electronic Resources Librarian in an academic library. I am struggling a bit to do so, however, simply because no two days are alike. That is one of the really great things about my job as an academic librarian, actually! There is very little down time and things are different every day, always interesting. Seriously, if you get bored doing this job then you are not doing it right!

Instead of writing about ONE typical day I thought I would do three days. That way I can summarize a “typical” (really, though, there is not a “typical”) day spent mostly at my computer, a “typical” day that involves more collaboration/meetings with members of my division and a “typical” day that involves more work outside of the division. I would say that a majority of my time is spent working fairly independently or interacting with others mainly online. Interspersed with that, though, are days where I go from one meeting to the next. And one day a week I office in the main library at a desk in Research and Instruction instead of at my official desk in the Collection Management building off campus. So here are my never-typical three days:

Day One: Let’s pretend that this is a Monday. Actually, I just looked at my calendar and completed-task list to see what I did last Monday. (So: Last Monday):
•   Pulled a list of ebooks that were recently added to one of our online reference collections; created a spreadsheet to organize the titles by subject areas and subject librarians; emailed each subject librarian to let them know about our new acquisitions.
•   Spent 30 minutes trying to answer a seemingly-simple question from a professor about ebooks that, as ebook questions usually do, got complicated and involved emails between me and, eventually, five other librarians before the question “How much of an ebook can be put on course reserve and in what format?” was finally answered.
•   Spent another hour or so trying to answer more seemingly-simple questions, these from a student who was having trouble understanding how our ebooks work and how to interact with the various ebook platforms. The question “How do you check out an ebook from the UNT Library?” seems so simple…but, trust, it is not simple to answer.
•   Learned that one of my collaborators on a presentation for the upcoming ER&L Conference is not attending the conference or interested in participating in the presentation at all. Began working on a new outline to restructure the presentation to include content from two instead of three presenters. (Sigh).
•   Fielded some random promotional emails from vendors, decided which products being promoted might be of benefit to various people or departments in the library, emailed various people in various departments to determine if there was interest. Saved all feedback in appropriate files for future product evaluations.
•   Pulled usage statistics for a Graduate Library Assistant to add to our ever-growing database of statistics.
•   Updated the Promotions Workflow. Part of my job is promoting our electronic resources – because what is the point of buying them if nobody knows about them? Another important part of my job is creating workflows for what I do because, in some ways, I’m creating my job every day. I document processes for everything and I keep these updated constantly.

Day Two (What I Did on Wednesday):
•   On Wednesdays I office in the main library with some of the Research & Instruction Librarians. This gives me an opportunity to have some face time with colleagues that I otherwise only communicate with by email and/or phone.
•   Established an inter-departmental workflow for cataloging, maintaining and promoting electronic resources purchased by a faculty support department.
•   Spent a frustrating amount of time trying to figure out if IP authentication was working for a new database and, if not, why not.
•   Chatted with several subject librarians about various ER-related issues including how to get access to the images in a specific journal when the digital access we have only includes text, a possible future research/publication collaboration, and several upcoming trials that were requested by faculty.
•   Created a LibGuide module for a database trial that went live and communicated the availability and parameters of that trial to various subject librarians.
•   Did some last-minute confirmations and planning with a vendor who spent the day at UNT on Thursday.
•   Emailed my student mentees to check in with them, see how their spring semester is going.
•   Attended a meeting of the University Undergraduate Curriculum Committee of which I am a member. Was surprised and pleased to see that there were pastries!

Day Three (Finally Friday):
•   Spent a fair amount of time on email communicating with vendors (got set up for a trial of several interdisciplinary databases we are looking at, followed up on some invoicing issues, etc).
•   Checked in with librarians in my department to determine how close we are to completing recent orders for electronic resources. It is my job to ensure that once an order is begun the process is completed within a reasonable amount of time. Orders involve, at minimum, two other librarians in the division. Noted expected dates of availability and scheduled times to follow up if necessary.
•   Typed up notes from vendor demonstrations I participated in on Thursday.
•   Meeting before lunch to talk about how our budget plan is being implemented and plan for future communications, purchases, reporting, etc.
•   Lunch at a restaurant with librarians from a part of the UNT library world that I don’t typically work closely with: new connections, yay!
•   Meeting after lunch to coordinate a comprehensive evaluation of one of our largest electronic resources, one that we rely on heavily in our day-to-day collection management tasks.
•   Weekly Friday activity of going through my task list in Outlook to make sure I didn’t miss anything, finishing up tasks as possible and marking them complete, changing dates or adding reminders to upcoming tasks as needed.

Obviously, there are many, many details that I did not mention about these days – phone calls, conversations, emails, the unceasing attempt to keep the massive amount of electronic resource information and data organized in a useful fashion, etc. But you get the idea. A day in the life of an Electronic Resources Librarian is a bit unpredictable. Even more interesting is the fact that no two ERLs seem to have the same job descriptions but that may be a topic for another post.

Mixed messages, missed opportunities? Writing it better

At the Bucknell Digital Scholarship Conference a few months ago, Zeynep Tufekci gave a great keynote presentation.  Tufekci, who grew up in Turkey’s media-controlled environment,  researches how technology impacts social and political change.  She described how the accessibility of social media enhanced the scale and visibility of, for example, the Gezi Park protests.  In her talk, Tufekci also advocated for academics to “research out loud,” to make their scholarship visible and accessible for a wider, public audience.  Rather than restrict academic thought to slow, inaccessible, peer-reviewed channels, she said, academics should bring complex ideas into the public sphere for wider dissemination and consumption.  Through her “public” writing (in venues like Medium and the New York Times, for example), Tufekci said she is “doing her research thinking out in the open” and trying to “inject ideas of power, of equity, of justice” to effect change.  There’s a lot of public demand for it, she told us, if you make it accessible and approachable.  We just, she said with a chuckle, have to “write it better.”

In a recent Chronicle of Higher Education article, Steven Pinker explored the various reasons why academic writing generally “stinks.”  Is it because academics dress up their meaningless prattle in fancy language in order to hide its insignificance?  Is it unavoidable because the subject matter is just that complicated?  No, Pinker said to these and other commonly held hypotheses.  Instead, he said, academic writing is dense and sometimes unintelligible because it’s difficult for experts to step outside themselves (and outside their expert ways of knowing) to imagine their subject from a reader’s perspective.  “The curse of knowledge is a major reason that good scholars write bad prose,” he said.  “It simply doesn’t occur to them that their readers don’t know what they know—that those readers haven’t mastered the patois or can’t divine the missing steps that seem too obvious to mention or have no way to visualize an event that to the writer is as clear as day.  And so they don’t bother to explain the jargon or spell out the logic or supply the necessary detail.”

Tufekci and Pinker, then, are on the same page.  The ideas of the academy can and should be accessible to a wider audience, they’re urging.  To reach readers, academics should write better.  In order to write better, academics must know their readers and think like their readers.  Sure, you might be thinking, I could have told you that.  We library folks are rather accustomed to trying to think like our “readers,” our users, aren’t we?  So what message might there be in this for us?  Is it that we should continually hone our communications whether in instruction, marketing, web design, systems, cataloging, or advocacy?  Yes.  Is it that we should stop worrying that if we make things too simple for our users we’ll create our own much-feared obsolescence?  Probably.  Is it that we should reflect on whether we’re truly thinking like our audience or trying to make them think (or work) like us?  That, too.

Just the other day, I was chatting with a friend who is a faculty member at my institution.  We were both expressing frustration about recent instances of not being heard.  Perhaps you know the feeling, too.  During class, for example, a student might ask a question that we just that minute finished answering.  Or in a meeting, we might make a suggestion that seems to fall on deaf ears.  Then just a few minutes later, we hear the very same thing from a colleague across the table and this time the group responds with enthusiasm.  If you’re like me, these can be discouraging disconnects, to say the least.  Why weren’t we heard?, we wonder.  Why couldn’t they hear us?  These are perhaps not so different from those larger scale disconnects, too.  When we might, let’s say, advocate with our administration for additional funding for a new initiative or collections or a redesign of library space and our well-researched, much needed proposal isn’t approved.  Perhaps these are all opportunities we might take to reconsider our audience and “write it better.”

So what does “writing it better” mean exactly?  While it likely varies for each of us, I expect there’s some common ground.  “Writing it better” is certainly about clarity and precision of ideas and language.  But I think it’s also about building and establishing our credibility and making emotional connections to our audience, while thinking strategically.  I think it’s about our relationships and values–to the ideas themselves and to our audience.  It’s about an openness and generosity of mind and heart that helps us to consider others’ perspectives.  What does “write it better” mean to you?

What’s the Matter with Threshold Concepts?

ACRLog welcomes a guest post from Lori Townsend, Learning Services Coordinator at the University of New Mexico; Silvia Lu, Reference and Social Media Librarian and Assistant Professor, LaGuardia Community College, CUNY; Amy R. Hofer, Coordinator, Statewide Open Education Library Services, Linn-Benton Community College; and Korey Brunetti, Librarian at City College of San Francisco.

Recent conversations about ACRL’s draft Framework have raised questions about both the theoretical value of threshold concepts and their usefulness as applied to information literacy instruction. This post responds to some of the arguments against threshold concepts and clarifies why the authors believe that the model can be a productive way to approach information literacy instruction.

Threshold concepts aren’t based on current research about learning
Au contraire: threshold concepts are grounded in research on teaching and learning. The theory initially developed from qualitative research undertaken by education faculty as a part of the Enhancing Teaching-Learning Environments in Undergraduate Courses project in the UK. The references for Meyer and Land’s initial series of papers on threshold concepts represent a well-rounded list of important thinkers in education.

That said, we understand why some might see threshold concepts as “old wine in new bottles” (as Glynnis Cousin puts it) (1). If you have a background in educational theory, threshold concepts may seem like a repackaging of other theories. Threshold concepts might even be understood as a shortcut through the theory thicket for those who don’t possess an advanced degree in education.

It’s also helpful to note that the threshold concept model works well when used alongside other pedagogical approaches. To provide just one excellent example, Lundstrom, Fagerheim, and Benson (2) used threshold concepts in combination with Decoding the Disciplines and backward design as a frame to revise learning outcomes for information literacy in composition courses.

Everything is a threshold concept
Another common objection has to do with the fuzziness of Meyer and Land’s definitional criteria (transformative, irreversible, integrative, bounded, troublesome) and the hedging language Meyer and Land use to articulate the criteria (probably, possibly, potentially). We see the use of these qualifiers as Meyer and Land’s way of saying: just because a proposed threshold concept doesn’t meet X criterion, doesn’t necessarily mean it’s not a threshold concept. Along these lines, Wiggins and McTighe’s work is highly respected as a now-standard approach to curriculum design, but if you look at the chapter on “big ideas” in their classic work Understanding by Design, you’ll see similarly fuzzy, but still useful, language.

However, regarding these fuzzy definitional criteria, some have asked “How can probable characteristics be defining characteristics?” (3) Let’s look at a furry example: dogs. Do dogs bark? Do all barks sound the same? Are there dogs that don’t bark? Yet somehow, we can still identify dogs as dogs for practical purposes. Likewise, instructors can still identify threshold concepts because we possess professional and disciplinary expertise.

Arguing for the existence of threshold concepts that meet none of the definitional criteria is a rhetorical device, not a practical concern. Librarians are just not going to waste precious instructional time on nonsensical learning objectives that aren’t real teaching content.

Threshold concepts are unproven
Threshold concepts are an emerging theory. However, many disciplines have used them to effectively re-think curricula, including Computer Science and Economics. We maintain that much of the value of threshold concepts lies in encouraging instructors to re-engage with and re-examine teaching content. They are a wonderful catalyst to spark discussion among colleagues and encourage deep and creative thinking about instruction.

Nevertheless, some librarians are bothered by an approach that isn’t supported by a certain kind of evidence. There are many possible pedagogical approaches out there and we don’t have a stake in people adopting threshold concepts if the model doesn’t work for them. At the same time we can also ask, how much of what librarians do effectively in the classroom is supported by positivist proof?

To take one example, we don’t need a double-blind study to know that the Cephalonian method works in our classes. We know it works because students who came in slouching and checking their email are paying attention, sitting up straight, and asking their own questions. This is a form of evidence. Other kinds of evidence are forthcoming for threshold concepts (for example, we are slowly writing up the results of a Delphi study on threshold concepts for information literacy), but that does not mean we cannot use them now to improve our teaching.

Threshold concepts don’t address skill development
We want our students to demonstrate new skills and abilities based on our instruction. Which is to say, we want them to learn. Threshold concepts help us think about where students may encounter stumbling blocks in understanding difficult or transformative concepts that underlie skill development. Wiggins and McTighe’s big ideas share a similar aim:

What we are claiming, based on both common sense and the research in cognition, is that no skill can be integrated into a powerful repertoire unless the learner understands the big ideas related to using the skill wisely. (4)

We find it nearly impossible to teach a skill-based learning objective effectively if we don’t have a firm grasp on why it’s important because of its connection to an overarching concept. Students can smell busywork a mile away. And transferrable skills are the ones anchored in conceptual understanding.

Threshold concepts ignore the diversity of human experience
Threshold concepts have been characterized as monolithic dictates that impose one linear path to one correct understanding. In fact, threshold concepts leave room for variability for instructors as well as for learners.

In applying the anthropological concept of liminality to learning, Meyer and Land imagine and explore a liminal space that learners pass through in the process of crossing a threshold. They write about how individuals will move through this liminal space in different ways, spend more or less time there, and experience affective dimensions of learning there. (5) As Silvia’s diagram below shows, some people encounter a learning threshold and walk right across; others will take a few steps forward and a few steps back before crossing; others will sit down in one spot for weeks when the threshold comes into view.

Learners do not start a course in the same place, nor do they learn at the same pace.

On the other hand, to suggest that student experiences are so fundamentally different that there are no common points of confusion is anathema to the possibility of curricular design. Moreover, what then would be the point of teaching and learning in communities? We can focus our teaching efforts by pinpointing the places where students are most likely to get stuck, without ignoring their differences.

Threshold concepts are hegemonic
Threshold concepts are not tools of oppression. Or at least, they may be so, but only to the extent that an individual practitioner using threshold concepts is oppressive.

Threshold concepts expose the tacit knowledge that we expect our students to absorb along with our stated learning objectives. This approach forces us to consider the implications of asking students to look through our disciplinary lens. For example, if a student wants to search the catalog using the keyword “drag queens,” you can imagine how well the Library of Congress subject headings reflect the current thinking on respectful ways to talk about this topic. Ignoring such an issue would implicitly validate the problematic subject terms. It is paramount to acknowledge the language problem and explain that the subject terms reflect the point of view of a certain group of people. We run into trouble when we don’t acknowledge our particular lens and act as if it’s the natural way to see the world.

Our disciplinary lens has scratches, deformed areas, and blind spots — all rich fodder for teaching and exploration — and yet it still offers something of value to our students. Admitting that we are asking students to risk their identity and take a leap of faith with us as teachers is only being honest.

Threshold concepts require us to agree on all the things
Do we all agree on what constitutes our disciplinary content? Does every discipline share a unified body of knowledge? Threshold concepts don’t claim so. However, we all make choices when teaching. If you consider your content with the threshold concepts criteria in mind, it helps identify some things that might prove problematic for students and stall their learning, yet that are needed in order to move forward in their understanding.

Individual subject experts will have differing perspectives on their disciplines and will thus choose to teach different content, but there are transformative, irreversible, troublesome, and integrative moments along many strands of knowledge. Your curriculum doesn’t have to be identical to mine for both of them to include threshold concepts that challenge our students and enlarge their perspectives.

In conclusion
We see the Framework draft as a part of an ongoing conversation and an attempt to nudge our profession in a positive direction toward conceptual teaching. Threshold concepts gave the Task Force one starting place to think about big ideas in information literacy. As we all know, many librarians already take a challenging, big picture approach to content and have been teaching that way for years without threshold concepts or the new Framework.

Nobody asserts that the frames are The Only Frames forever and ever. So please, engage with them. Think of new ones. Rewrite them to fit your context and your students. Think hard about what you teach and how you teach it. We have interesting, transformative, transferrable content to teach and it is grounded in our own disciplinary area — threshold concepts or no.

And finally, it’s useful to think of threshold concepts as a model for looking at the content we teach in the context of how learning works. “…(A)ll models are wrong; the practical question is how wrong do they have to be not to be useful.” (6) We’re less interested in breaking down the model and examining its component parts exhaustively than in trying it out and seeing if it’s useful. And then maybe tweaking it. For us, despite its flaws, the threshold concepts model continues to be useful. Your mileage may vary.


Notes:

  1. Cousin, G. (2008). Threshold concepts: Old wine in new bottles or a new form of transactional curriculum inquiry? In R. Land, J. Meyer, & J. Smith, (Eds.) Threshold Concepts within the Disciplines. Rotterdam: Sense Publishers.
  2. Lundstrom, K., Fagerheim, B.A., & Benson, E. (2014). Librarians and instructors developing student learning outcomes: Using frameworks to lead the process. Reference Services Review, 42(3).
  3. Wilkinson, L. (2014, June 19). The problem with threshold concepts [Web log post]. Retrieved from https://senseandreference.wordpress.com/2014/06/19/the-problem-with-threshold-concepts/.
  4. Wiggins, G. P., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.
  5. Meyer, J.H.F., & Land, R. (2008). Threshold concepts and troublesome knowledge (5): Dynamics of assessment. 2nd International Conference on Threshold Concepts, Threshold Concepts: From Theory to Practice, Kingston, Ontario, Canada.
  6. Box, G. E. P., & Draper, N. R. (1987). Empirical model-building and response surfaces. New York: Wiley, p. 74.

“Sunrise, Sunset”: A Reflection on Assessment and the Framework for Information Literacy for Higher Education

ACRLog welcomes a guest post from Donna Witek, Associate Professor and Public Services Librarian at the University of Scranton.


Photo by Moyan Brenn on Flickr

When I first learned about assessment at the very beginning of my professional work as a librarian, there was one aspect of the process that made complete sense to me. I was instructed that an assessment plan is just that–a plan–and that it is not only OK but expected for the plan to change at some point, either during or after it’s been put into action.

Now, the specifics on how these changes happen, what are best practices in altering an assessment plan, and the relationship between the integrity of the assessment data gathered and any changes made, are all complex questions. I am in my seventh year working as an instruction librarian in an academic library, and I consider myself at best an engaged learner-practitioner when it comes to assessment–I am by no means an expert, and I offer this as a disclaimer as I share some thoughts on assessment and the Framework for Information Literacy for Higher Education [pdf].

In the years since I was first trained in basic assessment practices, I still find the recursive, cyclical nature of assessment to be the aspect of the process that legitimizes the rest. Learning is a messy process, and as instructors we understand that there are multiple ways to reach the same goal–or learning outcome–and that different learners learn differently. It could mean our approach to teaching (i.e., our pedagogy) needs to be adapted–sometimes on the fly!–to meet the needs of the students in front of us. Or, maybe the way I articulated one of the learning outcomes for an instruction session turns out to be way too ambitious for the scope of the instruction, and ten minutes in I realize I need to change the formulation of the outcome in my mind in order for my teaching and the students’ learning to harmonize.

What I love about the principle that an assessment plan is meant to be changed (at some point) is that it means the above scenarios are not failures, but part of an authentic teaching and learning process. This is empowering for teachers and students alike.

Now, it is my understanding that all assessment plans change eventually. In the case of an assessment plan that from the outset is harmonized perfectly to the learning context to which it is applied, it isn’t changed until the end of the assessment cycle, but it still changes and develops in response to the information (call it data if you’d like) gathered throughout the process.

At the end of this week and after almost two years of development and review by the profession, the Framework will be considered for adoption by the ACRL Board of Directors during ALA Midwinter. The Framework is not conceived as an assessment document, as it “is based on a cluster of interconnected core concepts, with flexible options for implementation, rather than on a set of standards or learning outcomes or any prescriptive enumeration of skills” (Framework [pdf], p.2).

This raises the questions: What is the relationship between the Framework and assessment? And how does this in turn relate to the revision task force’s recommendation that the Information Literacy Competency Standards for Higher Education be sunsetted in July 2016 (Board of Directors Action Form [pdf], p.3)?

Before I share some ideas in response to these questions, I want to point to Megan Oakleaf’s “A Roadmap for Assessing Student Learning Using the New Framework for Information Literacy for Higher Education” [pdf] (JAL 40.5 2014). I highly recommend reading Oakleaf’s roadmap, as my own ideas touch on many of the same points found in her “Ok, So Now What?” section, though I want to fold into the discussion the relationship between this process and the proposed sunsetting of the Standards.

Here I offer just one of many possible paths toward incorporating the Framework into your local information literacy instructional practice. It is a theoretical model, because it has to be at this point: the Framework is not yet adopted. As will hopefully be made clear, not enough time has passed for this model to have been fully implemented, though some libraries have begun the process. (1)

The first step I would recommend, based on evidence from libraries that have taken this approach and found it fruitful for both student learning and programmatic practices, is to read the Framework, both individually and as a group with colleagues in your instruction program. Then, through reflection and discussion, identify intersections between the Framework and the information literacy instruction work you are already doing. (2) Rather than feel pressured to overhaul an entire instruction program overnight, use the Framework as a new way to understand and build upon the things you’re already doing at both the individual and programmatic levels.

If your current practices are heavily situated within the Standards, I think this exercise will surprise you by unearthing the connections that do in fact exist between the Standards and the Framework, even as the latter represents a significant shift in our collective approach to teaching and learning. (3)

The next step would be to review your learning outcomes for individual instruction sessions in light of the Framework, to be inspired by the connections, and to be challenged by the gaps–and to rewrite these outcomes based on both your engagement with the Framework and your recent assessment of your own students’ learning using these outcomes. The cycle of assessment for learning outcomes tied to individual instruction is short–these outcomes can and should be reviewed and revised in the period of reflection that immediately follows each instruction session.

In many ways, this makes individual instruction the most immediately fertile context in which to use the Framework to be inspired and to transform your instructional practice, keeping in mind that the complex concepts anchoring the frames require learners to engage with them in multiple learning contexts throughout the curriculum in order to be fully grasped. Still, even a one-shot can incorporate learning outcomes that will help learners progress toward understanding of these concepts in a manner appropriate to the learner’s current level of training in a discipline or disciplines.

But what of your programmatic information literacy learning outcomes? What about the places where information literacy has been integrated into curricular programs within or across the disciplines? And what about those (fortunate!) institutional contexts in which information literacy is integrated explicitly into the learning outcomes for the institution as a whole?

The beauty of assessment, as I suggest above, is that it is cyclical. Just as all ACRL guidelines and standards undergo cyclical review, so too do our local assessment and curriculum plans–or at least, they should. As each assessment plan comes up for review, librarians who have been engaging the Framework in their individual instructional practice can share “upwards” their experiences and the impact on student learning they observed through that engagement, and so fold the concepts underpinning the Framework into each broader level of assessment.

In this way, the Framework’s influence will cascade upwards within a local institutional context according to a timeline determined by the review cycles of that institution. The revision task force’s recommendation to the ACRL Board is for the Standards to be sunsetted a year and a half after the Framework’s recommended adoption, but I would argue that it is in the spirit of the Framework for local timelines to trump ACRL’s. As long as librarians are engaging the Framework, both individually (in instruction) and collaboratively (as local assessment plans and curricular documents come up for review), and doing so in light of the information literacy instruction work their libraries have been doing since (or even prior to) the adoption of the Standards fifteen years ago, (4) the worry associated with sunsetting the Standards on the national level will be eclipsed by the particular, robust influence the Framework is having on your own campus, with your own students.

And anyway, we do our best work when we’re focusing on the students in front of us. So, let’s get to work.


Notes:

(1) Nicole Pagowsky shares the first steps of a similar process underway at the University of Arizona.

(2) The first example of this I’ve encountered is at Trinity College; librarians leading in different areas of Trinity’s information literacy instruction program presented at the 2014 Connecticut Information Literacy Conference their success with this initial approach to implementing the Framework (video and prezi).

(3) Amanda Hovious has created a helpful series of Alignment Charts for ACRL Standards and Proposed Framework, which represent one practitioner’s approach to connecting these two documents. I would argue that there are as many potential charts/models for connecting the Standards to the Framework as there are practitioners interpreting the meaning and content of each. It is for this reason I believe it was prudent for the revision task force to abstain from developing a model for alignment themselves, as such a model would run the danger of being wrongly interpreted as “canonical” because of its association with the task force that developed the Framework. That being said, Hovious’ charts are informed by her training as an instructional designer, and coupled with her notes for interpretation at the beginning of the document, represent a valuable perspective on how these two approaches to information literacy instruction relate. Another, equally compelling example is offered by Emily Krug of King University; in this case the alignment is anchored to locally developed core competencies. It is compelling because it models (literally) the notion that information literacy is locally situated, using King University’s core competencies as the concrete bridge between Standards and Framework.

(4) Barbara Fister offers an historical perspective in which she recalls the anticipated reception of the Standards when they were first adopted in 2000, and the remarkably similar conversations we are having now in relation to the Framework.