Wednesday, September 19, 2007

Symposium on the Future of ILSs: Let's see numbers

I had the good fortune to attend the Symposium on the Future of Integrated Library Systems recently. It was an excellent, excellent conference. The Lincoln Trail folks should be proud of themselves. I have pages and pages of notes, but I figured I would do a couple of posts on the points that seemed common among several of the talks. We'll start with the one that has the most resonance with me: "We need to see the evidence".

This point kept coming up over and over, although no one was obnoxious about it. We need more evidence to support our actions and decisions as we move forward. One part of this is that we simply need better information on our own costs and expenditures. Chip Nilges from OCLC mentioned the value of a link in one of his talks. Do you know how many people view an individual catalog record? Can you estimate how much that space is worth? This seems vague and fuzzy in the library world, but I'm not in administration. Perhaps that's just the view from below.

Perhaps even more important is a gap in our knowledge about our own users. We see organizations like Google and Amazon rise because they focus intently on the average user. They are constantly studying logs, creating and reading usability studies, and just talking with people. That's not to say that librarians haven't done this in the past. I know there are some excellent papers out there from libraries and from some of the Information Retrieval folks. However, it seems that in our day-to-day planning we make wild guesses, ones that are frequently wrong.

It's difficult to get funding or budgets for usability studies. Some of this seems to be changing recently, but it's difficult to tell if this is a general trend or just a local one. I'd like to think some people at least have gotten used to me trying to figure out what our users are actually doing and have started to try to find better evidence for changes they'd like to make, but I'm really not that important. More likely it's become clear to some who resisted things like this that our current ways just aren't working.

Now, I want to clarify something. The need for evidence shouldn't be a chilling factor. I've seen some people recently become overly critical of fledgling efforts, seemingly requiring usability studies and the like before a project even starts. This is a severe burden when someone is just starting a cycle of development. Usability should come early, but you need experimentation as well. It shouldn't be something that each researcher and experimenter needs to be an expert on, but something that gets built into the overall process for research and development. Ideally there's a constant cycle of development, experimentation, and feedback.

To clarify, this is one time when not having much data shouldn't be a sin. It shouldn't be an excuse to kill a project before it starts. Yes, it's a good indication if there are existing studies suggesting that users might like recommendations. But it's madness not to move forward with at least examining, experimenting, and researching the idea of recommendations just because there are no documented usability studies about how people like them. The foundations for the actual usability and user studies should be allowed to be created.

In an attempt to stave off the book I could probably write about this, let me just conclude: user testing and user-oriented design are great. They should be much, much more involved at all levels of the library. They should be a recurring part of the feedback loops within the library. A healthy institution has a feedback loop between it and the real world. It feels like a living, breathing, reacting thing. An unhealthy one seems like a machine shambling along blind, deaf, and oblivious to its surroundings. Keep working at incorporating actual information about patrons and your own people, and your library might just start to feel a little bit more alive, maybe even a little more human.
