Welcome to my career! This is one of the key aspects that a User Experience professional provides.
You've put your finger on the reason we exist: programmers do not resemble their users, and even when they do, they resemble only one of their users. When making something, it turns out to be _really_ challenging to step outside the design and envision how someone unlike you will use it. It also generally requires different skills and aptitudes than programming, so it's become its own career.
There are a number of approaches UX uses to identify areas for improvement; you've mentioned some of the key ones. Here's a quick overview:
1. User complaints, aka "active feedback". You have to translate these back from what they say, to what they need (requirements). People generally don't know what they need; they want things, but those things might not actually help them. They also tend to report things which are painful ("it forgot my password!") rather than things that impair their productivity but are less painful.
2. Direct or indirect observation of people using your stuff, "passive feedback". This can be "in the field" ("contextual inquiry" is a fancy name for "I'm going to watch you work for a while and see what you do"). Direct observation is intrusive, costly, and prone to observer effects (people behave differently when watched), but it provides very rich data.
Indirect observation can include automated observation, for example, watching what pages people visit (or more notably, what they don't visit); you can often log what commands people issue, what buttons they click on, and so forth. You can look at what they produce, and try to work backward from there.
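The instrumentation behind that is usually just an event log plus counting. A minimal sketch (assuming Python; all names here are illustrative, not any real analytics SDK):

```python
import time
from collections import Counter

class EventLog:
    """Tiny in-memory event logger; a real app would batch these and ship them to a server."""

    def __init__(self):
        self.events = []

    def record(self, action, target):
        # Each event: when it happened, what the user did, and to which UI element.
        self.events.append({"ts": time.time(), "action": action, "target": target})

    def usage_counts(self):
        # Which UI elements are (and, just as importantly, aren't) getting used?
        return Counter(e["target"] for e in self.events)

log = EventLog()
log.record("click", "save_button")
log.record("click", "save_button")
log.record("visit", "settings_page")
```

The interesting analysis is usually the features with a count of zero, which never show up in complaint-driven feedback at all.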
We definitely also use automated observation to look at error patterns; for example, if an app crashes, these days it often sends a report back to HQ, where it is bucketed along with all the other crash reports and used to improve software quality. When Facebook says "something bad has happened, it's our fault, and it's been reported!", this is surely what is occurring.
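The "bucketing" step is the key idea: thousands of individual crash reports collapse into a handful of distinct bugs by grouping on a stack signature. A rough sketch of how that grouping works (hypothetical names, not any real crash-reporting service's API):

```python
import sys
import hashlib
import traceback
from collections import Counter

crash_buckets = Counter()

def bucket_key(exc_type, tb):
    """Bucket crashes by exception type plus the frame that raised it,
    roughly how crash-report backends deduplicate."""
    frame = traceback.extract_tb(tb)[-1]
    signature = f"{exc_type.__name__}:{frame.filename}:{frame.name}"
    return hashlib.sha1(signature.encode()).hexdigest()[:8]

def report_crash(exc_type, exc_value, tb):
    # A real reporter would POST the full stack trace to a collection
    # server here; we just count duplicates locally.
    crash_buckets[bucket_key(exc_type, tb)] += 1

# Install as the last-chance handler for uncaught exceptions.
sys.excepthook = report_crash
```

Two crashes from the same line of the same function land in the same bucket, so the team sees "bug X, hit 40,000 times" rather than 40,000 separate reports.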
If users are working with other people, you can watch what they talk about (this was the thrust of my PhD thesis); what they talk about is a valuable resource that tells you both what the work itself is, and what parts of it are hard to do. People tend to talk more about things that are challenging to accomplish, and the structure of what they say tells you something about what to build in response.
You can look at the modifications users make themselves to ease their own work. The classic example is a piece of paper reading "PULL" taped to a door whose handle screams "push" by its design. Fix that. Software is harder for users to modify, but you often see adjacent workarounds, like the infamous password-on-a-sticky-note-on-the-monitor pattern.
3. Pro-active feedback: surveys, focus groups, brand investigations, etc. I'm less of a fan of these, but they can be useful when shaping new products and when evaluating subjective impressions of your software. It's really hard to get good data from these, but surveys are easy to send out, and everyone thinks they know how to write one. Okay, I hate surveys. And focus groups quickly converge on groupthink. Moving on.
4. Experiments. Take one of the above methods, and deploy it in a semi-controlled environment. Usability testing is one example: give a user a version of your software (possibly even just a fake version made out of pieces of paper), give them a task, and ask them to think aloud as they try to perform it.
Or put two people in a room together and watch what they say as they perform the task. ("co-discovery")
Or make ten versions of a web page, surreptitiously select a random one of them to show to each user, and compare their task completion rates ("A/B testing"; Google is notorious for overdoing this particular method).
Experiments are great when you have some way to measure success, but there's a big trade-off triangle between prep time, user time, and feedback quality.
Err. I could probably go on for a few more hours, but perhaps that gives you a taste?