Suppose you wanted to find out what needs to be improved in a computer program..... - Input Junkie
September 19th, 2014, 10:47 am

Perhaps I'm especially thinking about computer programs with user interfaces.

There's always introspection by the programmer. What's been annoying you? What do you think might please users? This has limits, partly because the programmer is just one person, not necessarily much like anyone else, and in particular may differ from non-programmers in general. Also, sometimes people get used to annoyances.

I can think of two more approaches. One would be semantic-- looking for complaints (online, in company records, maybe in additional places) and having a program which looks for common themes. Or human beings could do this with their naked minds. I hope at least that much is being done.
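
(A very rough sketch, in Python, of what that theme-finding program might amount to; the sample complaints and the stopword list are invented for illustration:)

    # A very rough sketch of the "program which looks for common themes" idea.
    # The complaint texts and the stopword list are made up for illustration.
    from collections import Counter
    import re

    complaints = [
        "The password reset page asked for my email twice.",
        "Search is slow and the password reset link is broken.",
        "Why do I have to type my email again to reset my password?",
    ]

    stopwords = {"the", "is", "and", "for", "my", "to", "do", "i", "have", "a", "of", "again", "why"}

    words = Counter()
    for text in complaints:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in stopwords:
                words[word] += 1

    # The most common leftover words hint at recurring themes ("password", "reset", "email"...).
    print(words.most_common(5))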

Another would be to go over the records from the programs themselves, and see whether there are repetitious patterns (especially if there are errors) from the users. Something like this might already exist. Let me know.
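
(Again as a rough sketch: if the program wrote one log line per action, spotting repetition could be as simple as this. The tab-separated format with a timestamp, user, level, and message is an invented example, not any particular program's log format:)

    # A minimal sketch of scanning a program's own logs for repeated errors.
    # The log format (timestamp, user, level, message) is invented for illustration.
    from collections import Counter

    repeated_errors = Counter()
    with open("app.log") as log:
        for line in log:
            try:
                timestamp, user, level, message = line.rstrip("\n").split("\t", 3)
            except ValueError:
                continue  # skip lines that don't match the assumed format
            if level == "ERROR":
                # Count how often each user hits the same error; repetition is the signal.
                repeated_errors[(user, message)] += 1

    for (user, message), count in repeated_errors.most_common(10):
        if count > 1:
            print(f"{user} hit '{message}' {count} times")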

Here's something that I haven't gotten used to. I enter my email address. I can't remember my password. I click on the can't remember your password link. I'm asked to enter my email address again. Why?

Any other approaches to finding out what could use improvement?

This entry was posted at http://nancylebov.dreamwidth.org/1054722.html. Comments are welcome here or there.


Comments
 
From: kalimac
Date: September 19th, 2014 03:25 pm (UTC)
I've hated user interfaces ever since the GUI was introduced.
From: supergee
Date: September 19th, 2014 10:03 pm (UTC)
Likewise.
From: madfilkentist
Date: September 19th, 2014 04:15 pm (UTC)
It's surprisingly hard to get user feedback. At the last place I worked, we often discovered things that weren't working by looking at the logs and fixed them, all without getting a single customer complaint.

One of the best ways to get input is with an in-person usability study. Bring in some people and have them try out the software. Ask them in detail what they like and don't like. Most software never gets this kind of study.

Programmers can make suggestions about useful changes that less technical people might not think of, but we shouldn't be the final judges of the user interface. We've got too much investment in the work we've already done.

I'd never really thought about the question of entering the email address again, but the main reason I can think of is to slow down the users and get them to think before resetting their passwords. In typing it again, they might realize they typed in the wrong address the first time, or they might remember that they really do know the password and had just left caps lock on or used the wrong password. Both of those have happened to me.
From: nancylebov
Date: September 19th, 2014 05:12 pm (UTC)
Users don't necessarily give *you* feedback, but they may have a thing or two to say online.
From: madfilkentist
Date: September 20th, 2014 12:31 pm (UTC)
Our customers were child welfare clinics dealing in confidential data. I rather hope they'd talk to us before talking about the system's flaws online.
From: siderea
Date: September 20th, 2014 11:33 pm (UTC)
"At the last place I worked, we often discovered things that weren't working by looking at the logs and fixed them, all without getting a single customer complaint."

Most businesses go to great, if often unconscious, lengths to prevent customers from complaining to them.

It can be as simple as having no indicated way for making a complaint. I'm enrolled in a beta test right now -- a beta test! -- for which users were not in any way solicited for feedback, and provided with no contact information or way to communicate with the project team, except admin@theprojectinquestion.com.

These can be things like burying the contact information, making the "report a bug" field have a length limit of 200 characters, or other mechanical nonsense.

It can also be things like punishing users who send feedback. LJ did this with its Suggestions area (I don't recall if DW still does this): the only way to submit a feature request is to do so through an interface which subscribes you to the public discussion of the feature, no opt out. So if you don't have the spoons/interest in getting all those notifications or participating in a public discussion about it, doing so becomes aversive.

I just had an interesting discussion with another developer about trying to submit bugs and feature requests to open source projects. Both of us had done so many times, and neither of us could recall ever having a bug we reported fixed or a feature implemented without a huge fight, and even then only rarely. I reported a security hole once and had the lead maintainer reply in the bug tracker that that wasn't a bug, that was the auto-re-log-in convenience feature. A bunch of other users saw that in the bug tracker and were like "HOLY HELL, THAT'S NOT OKAY" and jumped on him, and it got (I think) fixed. But that kind of response is typical.

Maybe your place of employment did none of these things -- I don't mean to harsh on you -- but the great majority of people claiming to want user feedback go to significant lengths to make sure they don't get it, or at least not from the same person twice.
From: agrumer
Date: September 19th, 2014 07:35 pm (UTC)

From my (limited) experience in software development:

  • You often start out with a list of Features You Would Like The Software To Have. This gets winnowed down to a list of Features You Can Afford To Implement Now. If the software is successful, there’s stuff left over from the first list, and more development money for version 2.0.
  • A company of any decent size has a Quality Assurance (QA) department going over the software as it’s in development, looking for bugs. Not all bugs get fixed before the first official release.
  • User complaints. There are lots of ways this can get done. I once had a problem with a program being developed by a single guy, and I just emailed him, and we narrowed down what the problem was, and he fixed his code and sent me the fixed version.
From: madfilkentist
Date: September 20th, 2014 12:34 pm (UTC)
It's more like: You start with a list of features you would like the software to have. This gets expanded to a list of features marketing insists is absolutely necessary.
From: metahacker
Date: September 20th, 2014 01:16 am (UTC)
Welcome to my career! This is one of the key aspects that a User Experience professional provides.

You've put your finger on the reason we exist: programmers do not resemble their users, and even when they do, they resemble only one of their users. When making something, it turns out to be _really_ challenging to step outside the design and envision how someone unlike you will use it. It also generally requires different skills and aptitudes than programming, so it's become its own career.

There are a number of approaches UX uses to identify areas for improvement; you've mentioned some of the key ones. Here's a quick overview:

1. User complaints, aka "active feedback". You have to translate these back from what they say, to what they need (requirements). People generally don't know what they need; they want things, but those things might not actually help them. They also tend to report things which are painful ("it forgot my password!") rather than things that impair their productivity but are less painful.

2. Direct or indirect observation of people using your stuff, "passive feedback". This can be "in the field" ("contextual inquiry" is a fancy name for "I'm going to watch you work for a while and see what you do"). Direct observation is intrusive and costly and has problems but provides very rich data.

Indirect observation can include automated observation, for example, watching what pages people visit (or more notably, what they don't visit); you can often log what commands people issue, what buttons they click on, and so forth. You can look at what they produce, and try to work backward from there.

We definitely also use automated observation to look at error patterns; for example, if an app crashes, these days it often sends a report back to HQ, where it is bucketed along with all the other crash reports and used to improve software quality. When Facebook says "something bad has happened, it's our fault, and it's been reported!", this is surely what is occurring.
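
(The bucketing step, stripped down to a toy sketch: key each report on the exception plus its top few stack frames so identical crashes pile up together. The report fields and the three-frame signature are my assumptions, not any particular crash reporter's scheme:)

    # A toy sketch of bucketing crash reports by a signature derived from the stack trace.
    # The report structure and the "top three frames" signature are illustrative assumptions.
    from collections import defaultdict

    incoming_crash_reports = [
        {"exception": "NullPointerException", "stack": ["save()", "on_click()", "main()"]},
        {"exception": "NullPointerException", "stack": ["save()", "on_click()", "main()"]},
        {"exception": "IOError", "stack": ["read_config()", "startup()", "main()"]},
    ]

    def signature(report):
        # Key the bucket on the exception type plus the top few stack frames,
        # so the same crash from many users lands in one bucket.
        return (report["exception"], tuple(report["stack"][:3]))

    buckets = defaultdict(list)
    for report in incoming_crash_reports:
        buckets[signature(report)].append(report)

    # The biggest buckets are the crashes worth fixing first.
    for sig, reports in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
        print(len(reports), sig)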

If users are working with other people, you can watch what they talk about (this was the thrust of my PhD thesis); what they talk about is a valuable resource that tells you both what the work itself is, and what parts of it are hard to do. People tend to talk more about things that are challenging to accomplish, and the structure of what they say tells you something about what to build in response.

You can look at the modifications they add themselves to make their own lives easier. The obvious example is when someone takes a piece of paper and writes "PULL" on the door above the handle that screams 'push' by its design. Fix that. With software it's harder to modify, but you often see adjacent tools like the infamous "password on a sticky on the monitor" pattern.

3. Pro-active feedback: surveys, focus groups, brand investigations, etc. I'm less of a fan of these, but they can be useful when shaping new products and when evaluating subjective impressions of your software. It's really hard to get good data from these, but surveys are easy to send out, and everyone thinks they know how to write one. Okay, I hate surveys. And focus groups quickly converge on groupthink. Moving on.

4. Experiments. Take one of the above methods, and deploy it in a semi-controlled environment. Usability testing is one example: give a user a version of your software (possibly even just a fake version made out of pieces of paper), give them a task, and ask them to think aloud as they try to perform it.

Or put two people in a room together and watch what they say as they perform the task. ("co-discovery")

Or make ten versions of a web page, surreptitiously select a random one of them to show to each user, and compare their task completion rates. (Google is notorious for overdoing this particular method.)
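
(The bookkeeping behind that kind of experiment can be surprisingly small; here's a bare sketch in which the variant names and completion counts are invented. Hashing the user id keeps each user on one variant:)

    # A bare-bones sketch of "show each user a random variant, compare completion rates".
    # Variant names and the completion numbers below are invented for illustration.
    import hashlib

    VARIANTS = ["A", "B", "C"]

    def assign_variant(user_id: str) -> str:
        # Hash the user id so the same user always lands on the same variant.
        digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
        return VARIANTS[digest % len(VARIANTS)]

    print("user-42 sees variant", assign_variant("user-42"))

    # Invented tallies of (users shown, users who completed the task) per variant.
    results = {"A": (1000, 412), "B": (1000, 447), "C": (1000, 390)}
    for variant, (shown, completed) in results.items():
        print(f"variant {variant}: {completed / shown:.1%} completion")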

Experiments are great when you have some way to measure success, but there's a big trade-off triangle between prep time, user time, and feedback quality.

Err. I could probably go on for a few more hours, but perhaps that gives you a taste?
From: darius
Date: September 22nd, 2014 09:12 pm (UTC)
Why you're asked for the email address again: that info isn't automatically passed along to the site by your browser when you click a link instead of a button. (As far as the web browser knows, the link has nothing to do with the form you started filling out.) They could design a form with multiple buttons, or address the problem in other ways, such as with JavaScript, but that'd take conscious design.
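
(To make that concrete: if the "can't remember your password" control were a second submit button on the login form instead of a bare link, the address the user already typed would be posted along with it. A rough sketch using Flask; the route and field names are made up for illustration:)

    # A minimal sketch of the "form with multiple buttons" fix, using Flask.
    # Route and field names ("email", "login", "forgot") are made up for illustration.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        # The login form would have two submit buttons, named "login" and "forgot".
        # Either way, the "email" field the user already typed is posted with it.
        email = request.form.get("email", "")
        if "forgot" in request.form:
            return f"Sending a password reset link to {email}"
        return f"Checking the password for {email}"

    if __name__ == "__main__":
        app.run()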