I think there's also more to it than just that. Scientists tend not to engage end users because our skills tend not to be especially strong for that.
To give an analogous example, here in Ontario, most grants of any reasonable size require "matching funding" to be secured from industry, as a way of ensuring that the science that is developed is "relevant". Even though this would come with lots of $$, many of us refuse to go after those kinds of grants. It pushes us in directions we're just not good at.
And yes, IE sucks.
I agree with the above comment - a lot of the reason it doesn't happen is that scientists and engineers don't like dialogue. Talking to people? Yuck!
Also, many tech-oriented people are unable to conceptualize that non-tech people might be different than themselves. At one point, I was in a meeting with a software team where the software lead said "Oh, that's not a bug! There's a workaround for it!" He detailed the _15-step_ workaround, and crossed it off as a bug, because anybody could do those 15 steps to avoid the crashing behavior.
The hardest thing for me is always trying to figure out what a user is trying to do. And I don't mean what function they are trying to execute. What task are they trying to perform? If I can figure that out, then I can design a good solution for them. But it's so difficult because they often can't articulate what they want, so it requires a lot of dialogue and language matching to get to that point. Anyway.
I think scientists and engineers discount engaging with users because it's not 'statistically significant'. Chances are, even if you go out and talk to users, you'll only talk to three or four or five -- and it feels like it takes a lot of time to do it. How can those three or four or five be 'statistically significant' compared to the enormous body of users that the scientists/engineers know are out there? I think that this is a huge block that's deep at the heart of the way a lot of scientists and engineers think about doing user research.
The answer is, of course, that you're not doing a scientific experiment in which things need to be statistically significant; rather, you're doing something closer to ethnographic research, in which it's engaging with the users that counts. (Some flavors of ethnographers, such as more traditional anthropologists, would point out that it's only long-term engagement with users that counts, in a not unrelated sort of "many users * short time = few users * long time" kind of way. But let's ignore that for now; those are about different kinds of knowledge creation.)
The important bit is that spending any time at all with real, honest-to-god users while they use your product/device/system is a *positive* investment of time, because it will reduce the amount of time you spend fixing stuff you screwed up on later by more than the amount of time you spend talking to users.
That really depends: are you producing a system/product/device to show that one can be produced, or to influence your users' experiences? It is quite common in my field to do the former...
Absolutely. But I think -- definitely in my field -- it's unfortunately common to simultaneously claim that one is producing a system/product/device to show it can be produced, and yet also have real, honest-to-god users trying to use the thing. And in that case, I (personally) feel it's a moral imperative to talk to users.
Yeah, I agree then. (And as you say, if you're going to actually have real users, it will save vast quantities of time to have them involved early. But this is one of those basic software engineering things, like having a decent spec, etc., that people are always atrocious at, and at which academics and scientists are the worst.)
Interesting - I'd never thought about it this way.
I guess my response would be that if you're doing a random sampling of users (which is what I would presume if they're trying for statistical significance), then you're not thinking beforehand. You should have some idea of who your "typical" user is (i.e. have an Alan Cooper persona or three), and go find examples of that.
And for usability testing, you only need 5 users before you start to lose value anyway.
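Since people often push back on the "five users" number, here's a quick sketch of where it comes from, assuming Nielsen and Landauer's model: the share of usability problems found by n testers is roughly 1 - (1 - L)^n, where L is the average per-user discovery rate (their reported average was about 31%, though it varies by project -- that 0.31 is an assumption here, not a constant):

```python
def problems_found(n, per_user_rate=0.31):
    """Estimated fraction of usability problems uncovered by n testers,
    under the Nielsen/Landauer model 1 - (1 - L)^n."""
    return 1 - (1 - per_user_rate) ** n

for n in range(1, 9):
    print(n, round(problems_found(n), 2))
# With L = 0.31, five users already uncover roughly 84% of problems,
# and each additional user adds less and less.
```

That's the diminishing-returns argument: past five or so users, you mostly keep rediscovering the same problems, so it's usually better to fix things and run another small round.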
Unfortunately, IE did it wrong. I drag a link into another tab, and it opens in the same tab I'm in. It basically makes tabs worthless and I end up using multiple windows anyway. (Well, mostly I use Firefox, but I have reasons for using IE sometimes.)
I've never tried dragging a link into the tab-bar. I've always middle-button-clicked to add a new tab. I suppose this is because I always have monotonically increasing numbers of tabs until I kill the browser.
Still, I suppose it's nice to know I could replace a tab, if I ever want to. :)
I typically have monotonically-or-nearly-so-increasing numbers of tabs too, but they increase much slower than the rate at which I want to look at another page without losing the one I'm on. For example, when reading my LJ friends page, I typically drag a comments page into the second tab, then return to the first tab to go back to my friends page (rather than using the back button, which is slower). I'm happy to reuse the second tab for every comments page, though - no need to open a new tab for each one!
OK, and I also see that as useful to avoid losing focus on the current tab.
Just to toss some more ideas into the mix:
Some marketing firms do software UI testing as their bread and butter. The better ones would be able to push hard and get what the users really wanted not what they're asking for.
And of course there are scientists studying UI; if an area of research fits in with some Human/Computer Interaction researcher's work, they might be persuaded to do research on your research.
Finally: one way to pay for the dialog is to convince Google that it's valuable as a public service and furthermore they might find it useful for them. I don't know how they handle their UI, but I feel fairly certain they do a lot of it.
Hey, that's what a support tech is for!
Sometimes I think a quarter of my job is explaining to programmers why the software is buggy, and a quarter of my job is figuring out what the user really wants to do.
Seriously, though, isn't it the development team that talks to the users and defines the need in the spec, then the programmers that build it to spec, then the development team that implements it with the users and gets feedback? Or am I talking about the wrong industry?