I had the good fortune to attend the Symposium on the Future of Integrated Library Systems recently. It was an excellent, excellent conference. The Lincoln Trail folks should be proud of themselves. I have pages and pages of notes, but I figured I would do a couple of posts on the points that seemed common across several of the talks. We'll start with the one that resonated most with me: "We need to see the evidence".
This point kept coming up over and over, although no one was obnoxious about it. We need more evidence to support our actions and decisions as we move forward. Part of this is that we simply need better information on our own costs and expenditures. Chip Nilges from OCLC mentioned the value of a link in his talk. Do you know how many people view an individual catalog record? Can you estimate how much that space is worth? This seems vague and fuzzy in the library world, but I'm not in administration. Perhaps that's just the view from below.
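For what it's worth, even a crude first answer to "how many people view a record?" doesn't take a big project. Here's a minimal sketch in Python, assuming a hypothetical OPAC whose record-detail pages live at /catalog/record/<id> and an ordinary web server access log; the URL pattern is made up, so substitute whatever your catalog actually uses.

import re
from collections import Counter

# A rough first pass: count views per catalog record by scanning a
# web server access log. The URL pattern below is hypothetical;
# substitute whatever your OPAC uses for record-detail pages.
RECORD_URL = re.compile(r'GET /catalog/record/(\d+)')

views = Counter()
with open('access.log') as log:
    for line in log:
        match = RECORD_URL.search(line)
        if match:
            views[match.group(1)] += 1

# The ten most-viewed records: a crude estimate of which "spaces"
# in the catalog patrons actually use.
for record_id, count in views.most_common(10):
    print(record_id, count)

It won't tell you what that space is worth in dollars, but it turns "vague and fuzzy" into a ranked list you can argue about.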
Perhaps even more important is the gap in our knowledge about our own users. We've watched organizations like Google and Amazon rise because they focus intensely on the average user. They are constantly studying logs, running and reading usability studies, and just talking with people. That's not to say librarians haven't done this in the past. I know there are excellent papers out there from libraries and from some of the Information Retrieval folks. But in our day-to-day planning we seem to make wild guesses, ones that are frequently wrong.
It's difficult to get funding or budgets for usability studies. Some of this seems to be changing recently, but it's hard to tell whether that's a general trend or just a local one. I'd like to think people have at least gotten used to me trying to figure out what our users are actually doing and have started to look for better evidence for the changes they'd like to make, but I'm really not that important. More likely it's become clear to some who resisted this sort of thing that our current ways just aren't working.
Now, I want to clarify something. The need for evidence shouldn't be a chilling factor. I've seen some people recently become overly critical of fledgling efforts, seemingly requiring usability studies and the like before a project even starts. That's a severe burden on someone just starting a cycle of development. Usability should come early, but you need experimentation as well. It shouldn't be something every researcher and experimenter has to be an expert in, but something built into the overall process for research and development. Ideally there's a constant cycle of experimentation, feedback, and further development.
To be clear, this is one time when not having much data shouldn't be a sin, and it shouldn't be an excuse to kill a project before it starts. Yes, existing studies suggesting that users might like recommendations are a good indication. But it's madness to refuse to even examine, experiment with, and research the idea of recommendations just because no one has documented a usability study about how people like them. The foundations for the actual usability and user studies have to be allowed to be built first.
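To make that concrete: a throwaway "patrons who borrowed this also borrowed..." prototype can be built from nothing but co-occurrence counts, long before any formal study. A minimal sketch, assuming a hypothetical input of one set of item IDs per patron; real circulation data would obviously need anonymization and far more care.

from collections import Counter, defaultdict
from itertools import combinations

# A toy recommender built purely from co-occurrence: for each item,
# count which other items appear in the same patrons' histories.
def build_cooccurrence(patron_histories):
    co_counts = defaultdict(Counter)  # item -> Counter of co-borrowed items
    for items in patron_histories:
        for a, b in combinations(sorted(items), 2):
            co_counts[a][b] += 1
            co_counts[b][a] += 1
    return co_counts

# Made-up sample data: each set is one patron's borrowing history.
histories = [
    {'b101', 'b205', 'b340'},
    {'b101', 'b205'},
    {'b205', 'b340', 'b777'},
]
recs = build_cooccurrence(histories)
print(recs['b205'].most_common(3))  # candidates to show next to record b205

Put something this simple in front of a few patrons, watch what they click, and you've started generating exactly the evidence the critics say is missing.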
So... in an attempt to stave off the book I could probably write about this, let me just conclude: user testing and user-oriented design are great. They should be much, much more involved at all levels of the library. They should be a recurring part of the feedback loops within the library. A healthy institution has a feedback loop between itself and the real world. It feels like a living, breathing, reacting thing. An unhealthy one seems like a machine shambling along blind, deaf, and oblivious to its surroundings. Keep working at incorporating actual information about your patrons and your own people, and your library might just start to feel a little more alive, maybe even a little more human.
Saturday, September 08, 2007
Open Source Misconceptions: Evaluating Software
I've noticed lately in the library software world that there seems to be a false distinction being made between commercial and open source software. I'm not saying there's no difference, but when evaluating software it's often useful to ignore whether it's open source or proprietary and actually decide what qualities you're looking for. I watch a lot of Food Network. They constantly have food contests where judging is done by choosing some qualities (originality, smell, flavor, flammability, whatnot). There's a table where the food is marked with numbers or letters. Judges taste the food and rate it in every category.
So, ok, we can't do it blind. But judging against explicit criteria can help avoid some biases (there's a rough scorecard sketch after the list of categories below).
So, for example, let's look at some possible categories.
Support:
Don't be fooled here: multiple vendors can service proprietary software, just as they can open source software. True, open source supporters will be quick to remind people that you can pay someone to develop any software, but an actual vendor with experience really is required.
Reputation is important here. Very important. What's the use of going with a vendor that's infamous for taking money and bug reports and doing nothing for years? Of course, people in the library world seem terrified of complaining about bad vendors. That's another post, though.
Ease to modify:
Difficult to judge if you're not experienced. Some software is really easy to configure but a pain to extend and modify. If it has an API, direct database access, or visible code, it's probably easier to modify than something locked behind a vendor's service.
Ease to configure:
A bit different from the above. Is there any way to change how the software functions? Do you have to stumble through badly documented, bizarre text files? A screen full of unexplained little icons?
Expense of the software itself:
Well, it's a consideration. Really.
Quantity of customers/community:
Are there a lot of people using the software, or just some guy and his friend?
Quality of customers/community:
Are they enhancing it, tweaking it, generally loving it? Or do they mostly buy it, install it on some server, and then write a bit in the newsletter and forget about it?
Longevity:
How long has the vendor/community/software been around? How healthy does it look?
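Here's the scorecard sketch promised above: a back-of-the-envelope version of the Food Network judging table, in Python. Score each candidate 1-5 in each category, weight the categories by what matters to your library, and compare totals. All of the names, weights, and scores here are made up for illustration.

# Weights reflect local priorities; a one-person shop might weight
# "support" higher, a dev-heavy shop "ease_to_modify".
WEIGHTS = {
    'support': 3,
    'ease_to_modify': 2,
    'ease_to_configure': 2,
    'expense': 1,
    'community_quantity': 1,
    'community_quality': 2,
    'longevity': 2,
}

# Hypothetical candidates with 1-5 scores per category.
candidates = {
    'Vendor ILS X': {'support': 4, 'ease_to_modify': 2, 'ease_to_configure': 4,
                     'expense': 2, 'community_quantity': 4, 'community_quality': 3,
                     'longevity': 5},
    'Open Source ILS Y': {'support': 3, 'ease_to_modify': 5, 'ease_to_configure': 2,
                          'expense': 5, 'community_quantity': 2, 'community_quality': 4,
                          'longevity': 2},
}

for name, scores in candidates.items():
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    print(name, total)

The point isn't the final number; it's that you have to commit to the weights before you look at the logos.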
You'll probably see some general trends that distinguish open source from proprietary solutions, but you might be surprised when you start examining them. Some open source projects have a vibrant community with lots of users; others are dead on arrival in an undergraduate's dorm room. A vendor might have built up an excellent product with a high level of quality, or it could have transformed into a company full of managers and salesmen striving to milk every dollar out of a product they no longer know how to enhance or fix.
So, hopefully we can start moving beyond simplifications.