Small Steps, Big Picture

As I thought about composing a blog post this week, I felt that familiar frustration of searching not only for a good idea, but a big one. I feel like I’m often striving (read: struggling!) to make space for big-picture thinking. I’m either consumed by small to-do list items that, while important, feel piecemeal, or puzzling over how to make a big idea more precise and actionable. So it feels worthwhile now, as I reflect on the semester, to consider how small things can have a sizable impact.

I’m recalling, for example, a few small changes I’ve made to some information evaluation activities this semester in order to deepen students’ critical thinking skills. For context, here’s an example of the kind of activity I had been using. I would ask students to work together to compare two sources I gave them, discussing what made each source reliable or not and whether one was more reliable than the other. As a class, we would then turn the characteristics they articulated into criteria that we thought generally make for reliable sources. The activity seemed to help students identify and articulate what made those particular sources reliable or not, and it allowed us to abstract evaluation criteria that could be applied to other sources.

While effective in some ways, I began to see how this activity contributed to, rather than countered, the problem of oversimplified information evaluation. Generally, I have found that students can identify key criteria for source evaluation such as an author’s credentials, an author’s use of evidence to support claims, the publication’s reputation, and the presence of bias. Despite their facility with naming these characteristics, though, I’ve observed that students’ evaluation of them is sometimes simplistic. In this activity, it felt like students could easily say evidence, author, bias, etc., but those seemed like knee-jerk reactions. Instead of creating opportunities to balance a source’s strengths/weaknesses on a spectrum, this activity seemed to reinforce the checklist approach to information evaluation and students’ assumptions of sources as good versus bad.  

At the same time, I’ve noticed that increased attention to “fake news” in the media has heightened students’ awareness of the need to evaluate information. Yet many students seem more prone to dismiss a source altogether as biased or unreliable without careful evaluation. The “fake news” conversation seems to have bolstered some students’ simplistic evaluations rather than deepened them.

In an effort to introduce more nuance into students’ evaluation practices and attitudes, then, I experimented with a few small shifts and have so far landed on revisions like the following.

Small shift #1 – Students balance the characteristics of a single source.
I ask students to work with a partner to evaluate a single source. Specifically, I ask them to brainstorm two characteristics about a given source that make it reliable and/or not reliable. I set this up on the board in two columns. Students can write in either/both columns: two reliable, two not reliable, or one of each. Using the columns side-by-side helps to visually illustrate evaluation as a balance of characteristics; a source isn’t necessarily all good or all bad, but has strengths and weaknesses.

Small shift #2 – Students examine how other students balance the strengths and weaknesses of the source.
Sometimes different students will write similar characteristics in both columns (e.g., comments about the evidence used in the source show up on both sides), which helps students recognize that others might evaluate the same characteristic as reliable when they see it as unreliable, or vice versa. This illustrates the different ways readers might approach and interpret a source.

Small shift #3 – Rather than develop a list of evaluation criteria, we turn the characteristics they notice into questions to ask about sources.
In our class discussion, we talk about the characteristics of the source that they identify, but we don’t turn them into criteria. Instead we talk about them in terms of questions they might ask of any source. For example, they might cite “data” as a characteristic that suggests a source is reliable. With a little coaxing, they might expand, “well, I think the author in this source used a variety of types of evidence – statistics, interviews, research study, etc.” So we would turn that into questions to ask of any source (e.g., what type(s) of evidence are used? what is the quantity and quality of the evidence used?) rather than a criterion to check off.

Despite their smallness, these shifts have helped make space for conversation about pretty big ideas in information evaluation: interpretation, nuance, and balance. What small steps do you take to connect to the big picture? I’d love to hear your thoughts in the comments.
