The biggest news in higher education yesterday, at least in the technology sector, was the merger of Blackboard and WebCT. The impact on academic libraries seems far smaller than on our colleagues in IT, who will be dealing with the cascading consequences of the merger to a much greater extent. There is no immediate impact, as all of the merged company’s products and platforms will be maintained. For academic librarians who are actively involved in their campus courseware at some level – and I hope this is the case at a growing number of institutions, particularly at the administrative and support levels – the eventual impact may be more significant, especially for those at WebCT institutions. It’s a merger, but it appears the new company will be called Blackboard, and I would expect that the products and resources supplied by WebCT will eventually diminish, perhaps even more so than for customers of Dynix after the creation of SirsiDynix – at least that one keeps the Dynix name intact. One common element in these mergers: all the companies say “the merged company will give customers the best features of both products, no matter which system you own now.” That sounds great, but can they deliver? Or will this elimination of one more competitor allow the new company to grow more powerful, eliminate smaller competitors (as Blackboard has done in the past through outright purchases), and ultimately raise product costs? The Blackboard discussion lists (and I imagine those for WebCT folks as well) were abuzz with speculation on the meaning and potential outcomes of the merger. One comment got me thinking, though. The writer said, “These guys must really be worried about Moodle.” If you’re not familiar with it, Moodle is an open source courseware system.
It initially was used more heavily in K-12 settings, but in the last year more IHEs (institutions of higher education) started using it as well – primarily to save money (well, there is that argument that open source has its costs too) but also to escape the bureaucracy and control of behemoth system vendors. This leads me to look at our own library automation systems industry and ask why no open source solution has evolved. There is certainly no dearth of OPAC complainers. You have Andrew Pace (“OPACs suck”) and Roy Tennant (“You Can’t Put Lipstick on a Pig”) writing and presenting about the need for change (more simplicity) in the OPAC world. I can appreciate their arguments for a simpler OPAC (not to mention the rest of the system), but other than presenting their arguments, neither has offered much in the way of suggestions, nor have they sparked a movement among librarians or the automation vendors to do anything about the situation. I’m not criticizing Andrew or Roy; after all, someone needs to at least start the ball rolling. But what’s not happening is any development, coming from within our profession, of an open source library automation system. I am not sure why that is, but I can think of a few reasons. Perhaps these systems are far too complicated for someone to step up and create an open source version (you mean courseware systems are not that complicated – right!). Is it possible that because we’re not IT folks we lack the programming knowledge needed to create an open source library automation system? Perhaps doing something of this sort requires extensive organization and some financial support from IHEs (think of the SAKAI project). Heck, with all of our associations and networks we’re so overly organized that almost nothing happens in our world without some sort of inter-organizational collaboration. You’d think we could get organized over this issue.
I suppose these are all possible reasons why we will continue to complain about our automation systems while the vendors continue to hold us over a barrel and give us products that too frequently do not work for us or our user communities. Perhaps the number one barrier is organizational support. Until our parent institutions truly understand the extent to which our libraries depend on these systems, until they truly recognize how deeply these systems affect each and every student, and until they are willing to provide the financial and human resources for a coalition effort to develop an open source solution, I don’t think much will happen, and we’ll all just continue to complain and hear complaints from folks like Andrew and Roy. I can only imagine what might be happening in the library automation arena if we had an open source alternative emerging as profoundly as Moodle has in the courseware marketplace. What is even more amazing about Moodle – a lesson about open source that we must follow – is that it was simple enough for K-12 schools to implement. An open source automation system that requires a team of programmers to implement and support will be of use only to ARL libraries and their peers. I am looking forward to a talk next month at PALINET’s annual user conference. A good colleague, Gregg Silvis, the systems librarian at the University of Delaware, will be presenting on an intriguing topic, “The Impending Demise of the Local OPAC.” I will be interested in what Silvis has to say, and I wonder if he’s been thinking about the potential of an open source library automation system.
I have to say I am somewhat confused by this press release issued by the University of Calgary. Perhaps I’m just not thinking broadly enough. The University announced a plan to build the $113-million Campus Calgary Digital Library. Now that’s clearly enough money to build a fine facility, but isn’t a digital library, by its very definition, something that exists only in electronic format? In fact, they are building a new physical facility. It will offer 3,500 student spaces, loads of computers, and, of course, access to digital resources. Does this make sense? Can a physical library building be named the “digital library”? Is this the start of a trend? And just what sort of message are they trying to send to users? That their building is so advanced it’s not physical, but digital? Clearly there is a digital library somewhere at the University of Calgary; referring to a library’s electronic holdings as the “digital library” is fairly commonplace. But I think this is the first time I’ve heard of an actual, physical library building that will be called the digital library. Am I missing something here? I hope someone else can clear up the confusion for me. Maybe I am just a Luddite after all.
An article that has been discussed recently on the ILI-L discussion list (sponsored by the Instruction Section of ACRL) is well worth reading. “Librarians as Disciplinary Discourse Mediators: Using Genre Theory to Move Toward Critical Information Literacy” by Michelle Holschuh Simmons (published in portal: Libraries and the Academy, 5.3: 297-311) shifts the focus in information literacy efforts from finding and using information to the interpretive work of understanding both the context of the texts students use and the disciplinary conventions that shape them. Simmons argues that librarians are uniquely situated as mediators among disciplinary discourses, and that by helping students understand the rhetorical underpinnings of texts we will help them “see that information is constructed and contested not monolithic and apolitical.” It’s well worth a look, since we frequently stumble when it comes to the aspects of information literacy that involve evaluation and understanding the ethical, economic, and social issues surrounding information called for in the IL Standards. This article is not available free online but can be found in some libraries through Project Muse.
I admit I thought of this article when reading a story in today’s Inside Higher Ed. In “Too Much Information?” Scott Jaschik raises the issue of faculty members blogging before they have tenure. In part, this is really a genre question: Will scholars take blogging seriously as a form of expression? How do blogs blend otherwise distinct genres – opinion, scholarship, personal narrative? Is blogging invading the space previously owned by journalists and public intellectuals, where speech is limited to those who hold the proper credentials? The more our genres morph and reinvent themselves, and the more new kinds of discourse communities arise, the more agile we all need to be in focusing information literacy on the critical work involved.
Seems there has been more attention on teaching with technology just recently. Between the article and colloquy in the recent Chronicle of Higher Education that discussed the Millennial Generation and a new article in this week’s U.S. News & World Report, that’s two high-profile pieces on how the needs of learners are changing, and how higher education is experimenting with technology as a means of revolutionizing how students learn. If your campus is like mine, faculty are likely divided on both issues related to learners and the impact technology has on their learning process. Many are firm advocates and do all they can to push technology to its limits. Others are more skeptical and would like to see concrete evidence that technology improves learning. And of course the majority are somewhere in the middle, dabbling with technology and implementing changes to their teaching methods a small step at a time. I’m a firm believer that technology has a place in the classroom, and students increasingly expect to find it being used. The divide tends to occur when we debate the extent to which technology should be used, and whether or not it truly helps to enhance student learning. While it’s important to pay attention to changing demographics (e.g., Millennials) and the ways in which technology might better fit changing student expectations, one needs to be cautious about buying the argument that learning needs to change right now, and dramatically so. Based on what I’ve learned in the instructional technology courses I’ve taken at my institution, there is an entire spectrum of learning methods and media at our disposal for creating a memorable learning experience for students. Some involve technology and others do not (lecture and discussion still have their place). Sometimes group learning is best; at other times individual effort is more effective. The U.S. News article reflects that segment among both faculty and students who believe that technology, when used inappropriately or simply because it is there, can hurt learning more than it may help. We’ve all heard stories on our own campuses about students who will explode if they have to sit through one more set of PowerPoint slides (and you’ve surely seen them printing out endless pages of those slides that are now embedded in course texts and courseware sites). Our role as teaching librarians (or “blended librarians,” as I like to say) is to become familiar with all the teaching tools and techniques at our disposal – just as we want our user communities to be aware of all the information databases and retrieval systems at theirs – and to work at using them wisely to help students achieve learning outcomes. We also need to help our faculty do the same when it comes to library technology. Let’s remember that in addition to podcasts, tablet PCs, discussion boards, smartboards, clickers, and all the rest, library databases have a place in the universe of classroom learning technologies. You won’t see them mentioned in most mainstream articles about teaching and learning technologies. It’s our job to make sure students and faculty are integrating them into what happens in the classroom.
The Chronicle has reported that, like Google, Yahoo is embarking on a project to digitize library collections – with a difference. Yahoo is partnering with a number of players in something called the Open Content Alliance, inspired by Brewster Kahle of the Internet Archive. Libraries will pay a minimal cost per page to add their volumes to the collection, but the works must either be out of copyright or have the permission of the copyright holder. The New York Times calls it a “challenge to Google” – for a number of reasons. Results won’t be available through only one search engine, entire texts rather than snippets will be visible, and – gasp! – books will be chosen specifically for the project, rather than entire libraries being scanned wholesale.
First we had Amazon’s Search Inside, then the Google project, and now this alliance. All of them are interestingly different takes on making the full text of books searchable online, each with a different commercial bent and different strengths and weaknesses. One thing they do reveal, though, is that these publicly traded corporations all seem to believe there is a future in making books searchable. And each offers various challenges to traditional notions of copyright in a digital world.