[Jan. 3rd, 2007|09:09 pm]
My workshop is about forecast integration. That is, taking various pieces and interconnecting them so everyone gets more value. We've gotten to "usability" pretty quickly, which is to say, not just "throwing data over the transom", but engaging with users to find out what they need. Of course, that's not just asking people what they want, because sometimes you need to push back on what people ought to be asking for, so that means getting involved in a two-way dialogue.
People on both sides would benefit from dialogue, so why isn't it happening already? The usual answer to "why doesn't X happen" is "because nobody's being paid to do it". That applies here, I think. Scientists don't engage end-users in dialogue because they're not rewarded for doing so, and there's an opportunity cost for doing that instead of something that will get you more funding. So I think the big question for tomorrow is: how do we pay for the dialogue? That, and: will my poster have a demo?
Unrelatedly, I really can't cope with non-tabbed browsing anymore. Even though I regard IE with great scorn, I'm glad the latest version copied it from Firefox. It lessens the annoyance of being stuck with it.
I think scientists and engineers discount engaging with users because the users they talk to aren't 'statistically significant'. Chances are, even if you go out and talk to users, you'll only talk to three or four or five -- and it feels like it takes a lot of time to do it. How can those three or four or five be 'statistically significant' compared to the enormous body of users that the scientists/engineers know are out there? I think this is a huge block deep at the heart of the way a lot of scientists and engineers think about doing user research.
The answer is, of course, that you're not doing a scientific experiment in which things need to be statistically significant; rather, you're doing something closer to ethnographic research, in which it's engaging with the users that counts. (Some flavors of ethnographers, such as more traditional anthropologists, would point out that it's only long-term engagement with users that counts, in a not unrelated sort of "many users * short time = few users * long time" kind of way. But let's ignore that for now; those are about different kinds of knowledge creation.)
The important bit is that spending any time at all with real, honest-to-god users while they use your product/device/system is a *positive* investment of time, because it will reduce the amount of time you spend fixing stuff you screwed up on later by more than the amount of time you spend talking to users.
That really depends: are you producing a system/product/device to show that one can be produced, or to influence your users' experiences? It is quite common in my field to do the former...
Absolutely. But I think -- definitely in my field -- it's unfortunately common to simultaneously claim that one is producing a system/product/device to show it can be produced, and yet also have real, honest-to-god users trying to use the thing. And in that case, I (personally) feel it's a moral imperative to talk to users.
Yeah, I agree then. (And as you say, if you're going to actually have real users, it will save vast quantities of time to have them involved early. But this is one of those basic software engineering things, like having a decent spec, etc., that people are always atrocious at, and at which academics and scientists are the worst.)
Interesting - I'd never thought about it this way.
I guess my response would be that if you're doing a random sampling of users (which is what I would presume if they're trying for statistical significance), then you're not thinking beforehand. You should have some idea of who your "typical" user is (i.e. have an Alan Cooper persona or three), and go find examples of that.
And for usability testing, you only need 5 users before you start to lose value anyway.
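The five-user rule of thumb comes from a diminishing-returns model of problem discovery (Nielsen and Landauer's curve): each additional tester re-finds many of the problems earlier testers already hit. A minimal sketch, assuming the commonly cited average per-user discovery rate of about 0.31:

```python
# Nielsen-Landauer problem-discovery model: the expected fraction of
# usability problems found by n testers is 1 - (1 - L)^n, where L is
# the average per-user discovery rate (~0.31 is the oft-cited estimate;
# real values vary by product and test protocol).

def problems_found(n, discovery_rate=0.31):
    """Expected fraction of usability problems uncovered by n test users."""
    return 1 - (1 - discovery_rate) ** n

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} users: {problems_found(n):.0%}")
```

With these assumptions, five users already uncover roughly 85% of the problems, and each user after that adds only a few percentage points -- which is why running a second small round after fixing what you found beats one big round.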