Future psychological theories - Input Junkie
matociquala says:
I am writing a book set a thousand years in the future. Do I actually believe that our modern-day theory of mind and mental illness will hold any water then?
Do I have characters blithely going on about sociopathy anyway? Anyway, 900 words. Going to bed now, because I can't keep my eyes open.
The only hint of a real (as distinct from mere satire) future theory of psychology I've ever seen was in a Delany novel (almost certainly Stars in My Pocket Like Grains of Sand)-- a character's apparent stupidity was explained by his belief that he shouldn't try to take in more information than he could output. I asked Delany, and he said that this wasn't part of a larger theory, but it seems to imply a rich cybernetic system of how minds could work or go wrong.
In John Barnes' Timeline Wars series, there's a character from an utterly nasty culture who describes himself as having some sort of excessive compassion disorder. It's described in a very familiar sort of jargon.
I don't think a sequel is possible, though I may take another crack at reading the fragment. Aside from Delany seeming to have lost interest in the project, Stars predicted the net, but only as a sort of encyclopedia look-up. I don't see how it could be retrofitted into a plausible future where everyone can add to the web. I'd still like to know what cultural fugue is.
Date: March 2nd, 2010 02:09 pm (UTC)
Re: Cultural fugue.
I think there's some pre-1988 sf which comes near the idea of the Singularity without developing it or making it explicit.
Sooner or later, I'll take another crack at Cyteen. For no obvious reason, I bounce off it near the beginning.
Date: March 2nd, 2010 02:27 pm (UTC)
Re: Cultural fugue.
You're not the only one.
Re: Cultural fugue.
So do I. Possibly the completely new theory of mind is why--I certainly spent a lot of time explaining why conditioning Does Not Work That Way. I wonder if it's possible to come up with far future psychological theory that doesn't seem totally implausible to someone familiar with the current ones.
I'd be tempted to blow the question off by saying that anything set more than 20 minutes into the future is entirely in the author's pocket anyway: write the story you want to write, use the reasoning you want to use, and if it doesn't look futuristic enough, make it weird. What was happening a thousand years ago? Millennialism, crusades, iconoclasm, the occultation of the 12th imam. Elliptical references to these should be enough to keep the reader playing catch-up, so that they'll be grateful for your reference to sociopathy as a foundation they can build on.
At least, that's how I am as a reader. I love being in the position of having to figure out what's going on - I say when in doubt, throw away all the wordy explanation of how your characters got here and just have them talk about what they're going to do.
That didn't really help, did it?
It's Elizabeth Bear's problem, not mine. I think part of the problem is the tension between wanting something to be futuristic, comprehensible to the modern reader, fit with the story in mind, and be a manageable amount of work for the writer.
I'm not saying that all of these are necessarily pulling in different directions.
Yes. That's a whole set of reasons why I'd start by looking long and hard at my assumption: "this story is set a thousand years in the future".
I also think "mind" might dissolve if we poke at it, but I feel the same way about secular rationalism, which takes just as much effort to maintain as religion.
But I'm still ducking what I take to be the point of your post, because the only example I can think of is Stanislaw Lem's Futurological Congress, which neatly anticipates the way pharmaceuticals seem to be displacing psychology and psychoanalysis.
Date: March 2nd, 2010 01:58 pm (UTC)
Paul Churchland has some interesting speculations on a future science of "mind" in one of his books. Basically, Churchland is an eliminative materialist: he thinks that the answer to the problem of mind and body is that the concept of mind is a "folk science" idea, like caloric fluid or impetus or phlogiston, and that a scientific theory will dissolve the whole schema into nothing. He envisions future people saying things like "I'm feeling my serotonin level increase."
That's interesting-- if it can be conveniently summarized, what does he make of the way people are affected by memetic input?
Date: March 3rd, 2010 04:28 am (UTC)
It's been a few years since I read him closely, so I can't pin it down, but I don't think he talks much about memetic input at all. He's primarily interested in mind-body theory and the implications of rejecting dualism. In particular, he says that there are two ways a scientific theory can be dealt with when you gain an understanding at a more basic level: reduction, as when all the concepts of thermodynamics were reinterpreted as referring to average states of large populations of molecules; and elimination, as when chemists decided that there was no such element as phlogiston and that combustion was not giving off fire but taking on oxygen. He favors elimination of all the language of traditional mentalistic psychology; he thinks its concepts are founded on a folk theory of cognition and motivation that has no relation to reality.
On the other hand, if you read him closely, what he's proposing to "eliminate" seems to be primarily propositional attitudes. That is, we can take a sentence, such as "You are playing with the dog"; relate it to a "proposition" which is effectively the mental core of meaning that is still present if you translate it into French, Japanese, or Sumerian; and then describe mental stances toward the proposition, such as "He thinks you are playing with him" or "He wants you to play with him." And we tend to say such things even about animals that have no linguistic capability and cannot actually be forming mental propositions. Churchland wants to (a) talk about some sort of internal vector state of the nervous system as describing animal cognition and (b) take that level as basic to human mentality, with "propositions" resulting when we come up with verbal expressions for our cognitive states. That is, he doesn't think our natural cognitive processes take place by "All A are B," "No B are C," "So no A are C," or by imagining, remembering, believing, etc. such propositions, but by some more basic form of neural processing. It's kind of like thinking of the brain not as running explicit step by step algorithms but as running neural net programs (funnily enough).
A lot of treatments of "memes" seem to treat them as "propositions," so I suppose Churchland might say that they are not basic, any more than the cursor on the screen as I type this is basic to the functions of the CPU.
Does that make any sense? This is hard stuff to explain. . . .
There's probably something to the idea that propositions aren't really the way thinking works.
Prototype theory (the idea that we think in terms of best examples rather than sharp definitions) and the pervasive habit of using generalizations and then saying "but I didn't mean all x are y" suggest that people default to some loose system of processing.
That being said, I think animals use some sort of propositional thinking occasionally, even if it isn't in words.
Date: March 3rd, 2010 05:43 am (UTC)
Both boundaries and central points are important to thinking, but in different ways. And I think some people may find it more natural than others to think in boundaries. Mathematicians, theoretical economists, logicians, and similar people have to come from somewhere.
He envisions future people saying things like "I'm feeling my serotonin level increase."
Given that I talk like that sometimes, it seems plausible. On the other hand, I don't think we'll ever entirely replace discussion of emotions with discussion of neurotransmitters, any more than we talk about exact wavelengths rather than color. Most of the time, it's more useful to talk about the subjective experience rather than the underlying causal system.
It seems much harder to create a plausible psychology of the future than a physics of the future. We can imagine all kinds of scientific discoveries, which people as we understand them today try to deal with. But fiction is about understanding people, no matter what else it's about, and offering a much better understanding of people than we have today without actually having that understanding is really hard to pull off.
It doesn't have to be plausibly good-- if you think psychology is nonsense, then you could have a new sort of nonsense, though even coming up with the sort of nonsense people might believe is apparently difficult.
Slight sidetrack: I've read one story (by Tom Purdom, can't remember the title) which keeps the new psychology completely offstage. All we're told is that it works perfectly and is very expensive. The story is about people wrecking their lives trying to get enough money together.
Date: March 3rd, 2010 01:51 am (UTC)
It doesn't have to be better, it just has to be plausibly different.
It seems to me that if you want to sketch a future psychology that’s plausible enough to suspend disbelief, you can approach it from two angles:
(1) What would characters in this universe consider the boundaries of normal behavior or attitudes? (Of course, “normal” could be parameterized by gender, occupation, social class, phenotype version number, etc.)
(2) What features of this universe would be used as metaphors for the mind? (For example, pop-Freudian psychology seems to rely heavily on the metaphor of the steam engine: if you repress something, it leaks out into places you don’t want it to go.)
Memory is almost always described in terms of the current state-of-the-art information storage technology. Aristotle thought it was a wax tablet; modern cognitive science is built around computer metaphors.
Another set of variables, related to your (1), would be how atypical behavior and thought get framed, what the culture-bound syndromes are, et cetera.