Wednesday, June 06, 2012

What Happens After Death?

At death I suddenly awake from the dream, so compelling, so long. I am disoriented. I would blink, try to rejoin the dream, but I have no body. I call out, but no one hears my screams. Then I realize it is pointless, for I can no longer distinguish myself from anyone else. There is only we, no me. I dissolve into you. You must remember me, for that keeps the dream alive for you.

Friday, April 20, 2012

Five Kinds of God

I was in a bar, having a beer with some friends, and the conversation turned to the existence of God. Only one person of eight claimed to be a believer. Most were dismissive, name-calling atheists, along with a few rational, unmoved skeptics.

For myself, before jumping in, I wanted to know what we were talking about. Did we have a working definition of God? The atheists insisted we were talking about a delusion, a mental disorder of psychiatric proportions. They cited books by Richard Dawkins and Christopher Hitchens, among others. There was no possible definition of God, they said, unless we wanted to talk abnormal psychology.

I suggested we stipulate the Biblical God of the Old Testament as a working definition, for the sake of discussion. The atheists would not agree. One woman with a charming foreign accent insisted that we would just be discussing “a fairy in the tooth.”

“What about Thomas Aquinas’s five proofs of the existence of God?” suggested the only person in the group with any formal training in theology. Even the atheists were familiar with these famous arguments (not really “proofs”) from 13th century Europe. “All disproven, every one of them,” the atheists insisted, and that settled it.

So the discussion went nowhere; never got started really. When students ask me if I believe in God, my standard answer is, “You tell me what God is, and I’ll tell you if I believe in that.” So after my frustrating discussion at the bar, I decided to formulate a list of definitions of “God,” and see which ones, if any, I could justify.

Five Kinds of God:

1. A readily available definition of God is the anthropomorphic, monotheistic God of the Old Testament and the Koran, a blend of many ancient pagan gods. This God is superhuman and supernatural, in other words, divine and not of this earthly world. It is the creator of all things, the bringer of death, the perfection of Good and The Righteous, and the ultimate judge of each person’s merit. This God is omnipotent (all-powerful), omniscient (all-knowing), and omnipresent (present everywhere). It is personal, hears prayers, responds to them (sometimes), directs individual lives, dispenses ultimate reward and punishment, and can intervene into personal, social, and natural events, either on request, or arbitrarily. This God also requires a great deal of praise and worship, and in return, might grant a strange kind of partial immortality.

A variation on this theme is the God of the Christian New Testament, Jesus, a God cloaked in flesh for better communication with humans, but basically a messenger for the Old Testament God.

Do I believe in those Gods, or anything like them? Definitely not. I think the monotheistic God of Christianity and Islam is at best crusty tradition, at worst a projection of monumental human egocentrism. When we were children, each of us was, for a while, the unquestioned center of the universe. A lot of people never get past that. As adults they project and reify a paternal figure or tribal leader that will continue their infantilization. I think Freud nailed that analysis.

I don’t believe there is any need for such a God, except among those people who cling to childish egocentrism, believing that a benevolent, all-powerful parent still watches over them and assures them that everything is all right. That’s why this kind of God persists in so many cultures: it indulges a real human need. But taken at face value as a deity, it is too incoherent to be believed, or even understood, as anything but a human projection, and there is no evidence or non-circular reason to justify it otherwise.

2. An alternative is a social god, what Anthony Freeman called “God In Us” (Imprint Academic, 2001). According to Freeman, God is not "out there," in heaven, outside of history, distant, aloof, and silent. No, God is a force within human beings, alive and present to us. What kind of a force is it? Freeman is vague on that. It is whatever is the source of our highest values. What are those? The usual suspects: goodness, truth, justice, beauty, compassion and so on.

This approach has the advantage of dispensing with the trappings of churches, priests, idols, sacrifices, rituals, superstition, hierarchy, paternalism, and all the murky mumbo-jumbo that goes with traditional religion, while retaining the best of human values. Freeman's God is warm and fuzzy, but on the down side, it’s hard to say what makes a set of values into a kind of “God.” All values are culturally-agreed-upon principles. No educated person would propose that there are universal, cross-cultural, non-historical, transpersonal, absolute values. Would they? What would be the evidence?

A related idea of God, quite a bit more convincing, in my opinion, arises from the psychology of deep intersubjectivity, articulated by philosophers and theologians such as Emmanuel Levinas and Martin Buber. According to this idea, when you encounter another person honestly and authentically, not defensively, not egocentrically, but in the other person’s space, you find yourself facing something holy. You feel love, humility, and a sense of being in a sacred presence. This is not a personal divinity, but an inter-personal divinity. It’s not in you, but in us. The immanence of the other’s subjectivity defines that spiritual encounter. Meeting someone like that is what Buber called an I-Thou relationship, to distinguish it from the more mundane I-You relationship. Your response to the presence of that spirit defines ethics and morality in the rest of life.

Do I believe in that kind of God? Yes. It is attractive because while it is a transpersonal spiritual experience, larger than the individual, it is not supernatural, because it is a naturally occurring phenomenon in human experience. It is a well-acknowledged and documented experience among psychotherapists and other counselors, and ordinary people with a honed intersubjective sensitivity.

On the down side, encountering intersubjective holiness is not an everyday occurrence, at least not for me. For someone not susceptible to deep intersubjectivity, it is basically an inaccessible kind of spirituality. And I must admit, it is a “thin” God as far as deities go. In other words, it does not provide for worship, prayers, burning bushes, everlasting life, or any of the other alleged benefits attributed to a more traditional god. But it does exist, it is transpersonal and holy, and its existence can be verified on demand, empirically, not scientifically, but observationally, by direct personal experience. I believe it.

3. A third kind of God is a transcendent spiritual experience that some scientists attribute to specialized activity in certain parts of the brain, but which other people attribute to God. Psychotic patients hear God’s voice all the time, talk to God, and get along in jolly conversations with the Big Guy. But they’re crazy, right?

Social science and medical research reveal that most ordinary (non-crazy) people have had auditory hallucinations at least once or twice in their life (“Visions for All,” Science News, April 7, 2012, 22-25). These are non-psychotic hallucinations, and people having them report experiencing God in a physical, sensory way. Such experiences can be correlated with heightened activity in certain parts of the brain, the so-called “God-spot.” (Scientific American, October, 2007). For these people, God is real, “out there” and He/She presumably tells them things, like what to do or what is right.

Do I believe in that kind of God? Yes and no. I believe this is a natural phenomenon, a genuine neurological and cultural event that occurs in some people, and for those people, there is no denying their experience that they have encountered a self-transcendent “otherness,” which they name God. Do I believe these phenomena are best interpreted as evidence for a divine, supernatural God? No. A side-effect of brain activity is a better explanation, in my best judgment.

Dreams, which we have every night, are also mental phenomena that arise from activity in certain areas of the brain (e.g., J. Allan Hobson: Dreaming as Delirium, MIT Press, 1999). Dreams have that quality of otherness, that is, the feeling that they arise not from me, but from somewhere outside of me. Yet that feeling alone does not justify, in my view, the claim that dreams originate from a divine source. Throughout most of history, dreams were thought to have divine origin. But today, I think the evidence favors a brain origin.

4. A fourth kind of God is approached from one of Aquinas’ five “proofs,” the argument from contingency. Aquinas did not have a convincing argument, but I believe it can be fixed up into a fair argument for the existence of God. First, Aquinas’ argument:

A. Contingent things exist (those things that just happen to exist now, but might not have, and didn’t exist in the past, and probably won’t in the future. You are an example of a contingent thing, as am I, and as is everything in human experience).

B. Each contingent thing at some time does not exist (by definition – that’s what contingent means).

C. If everything were contingent, there would be a time when nothing existed (by definition of contingent).

D. If there was a time when nothing existed, that would still be the situation today (because nothing comes from nothing).

E. Hence if everything were contingent, nothing would exist today.

F. Things do exist today. Hence, everything cannot be contingent. Therefore a non-contingent (eternal) being must exist and that is God.

There are two obvious errors in the argument, in my judgment (and I am neither a logician, a philosopher, nor a theologian). The first is in statement C. It presupposes without justification that there had to be a moment when all contingent things did not exist. But why? Animals and plants, for example, go out of existence (die) at different times. Subatomic particles go in and out of existence at different times and rates, all through the vacuum of space. There is no reason to suppose that there must have been a single instant when nothing existed.
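The error in statement C is an instance of what logicians call the quantifier-shift fallacy. Using a made-up predicate Exists(x, t), meaning “thing x exists at time t,” the invalid inference can be displayed plainly:

    (for every x)(there is a time t) not-Exists(x, t)
        does NOT entail
    (there is a time t)(for every x) not-Exists(x, t)

That every contingent thing fails to exist at some time or other does not mean there is some one time at which they all fail to exist together, just as “every student fails some exam” does not mean “there is one exam that every student fails.”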

The second error is in statement D, which supposes that nothing comes from nothing. We know that in the quantum world, particles pop into existence all the time, for no reason at all. And more obviously, we know that creativity exists, and one definition of creativity is to make something exist where it did not before, like a bridge or a television. Aquinas would not have known about quantum mechanics, but he surely would have known about creativity.

I think an argument similar to Aquinas’ can be made without these errors:

A. Creativity exists as a natural phenomenon, observed in nature, and known personally by introspection and other human experience.

B. Creativity produces something out of nothing. This depends on how you define “something” and “nothing,” but surely we can say with confidence that humans have produced gunpowder, airplanes, and computers which did not exist before.

At a fine grain of psychological analysis, we can argue that human creativity is inherent in every act of intentionality (e.g., Brentano, Psychology From An Empirical Standpoint, 1874), and that creativity therefore is a fundamental property of human psychology. That can be verified by introspective observation and reasonable generalization.

C. Everything that exists was created. Nothing comes from nothing (in other words, nothing is uncaused), but creativity is something rather than nothing.

D. Humans exist; the world exists.

E. Therefore, there is a supernatural creator of all things not created by humans or other natural sources of creativity.

This argument establishes the existence of a superhuman, supernatural creator, which easily falls within the scope of entities that can be called “God.” This is not a personal god, only a divine creator, like Brahman, or the Creator of the Deists.

By implication, we humans are gods also, demiurges, if you will, because we are also endowed with the quality of creativity, the power to make something out of nothing. This in turn suggests, but does not prove, that we humans can know God the creator, inasmuch as we have the same or similar power of creativity.

The weak point of the argument is statement C, nothing comes from nothing, or more exactly, nothing is uncaused. That can’t be proved and it might be wrong, but I side with Einstein, who said, “God does not play dice.” I don’t think there is such a thing as pure randomness, only limits to our powers of pattern recognition. That’s an article of belief based on life experience, but I admit it could be mistaken. Certainly it is contradicted by principles of statistics and the physics derived from statistics, but so be it.

If we allow the assumption in statement C, I think this makes a pretty good argument for the existence of a divine Creator – not the bearded guy on the ceiling of the Sistine Chapel, and not the Old Testament creator whose actions were documented (by whom?) in Genesis. This argument establishes only a principle of supernatural creativity. It’s not much, but it’s something, not nothing. Do I believe it? Yes.

5. The fifth and final kind of God is approached in a way similar to the previous one, and also derives from Aquinas, this time his argument from design. Like the previous one discussed, I think Aquinas’ argument here is fatally flawed, but can be fixed up. First, Aquinas’ argument:

A. We see around us evidence of intelligently designed objects, such as the wing of a bird.

B. Things do not design themselves.

C. They must have been designed.

D. Hence a great (superhuman, supernatural) intelligent designer must have designed complex natural things.

The rebuttals to this argument are well-known and well-worn today. Chief among them is the fact that the theory of evolution shows that natural objects, even those as complex as the wing of a bird, come about not through the efforts of a divine designer, but as a result of accidental mutations and arbitrary environmental selection pressures.

Evolution is an extremely compelling theory that has withstood many thousands of empirical tests and observations. People who do not accept this rebuttal do not sufficiently understand the theory of evolution.

So basically, Aquinas’ argument fails because its first premise is unsound: the estimation that a complex thing was intelligently designed is an opinion, a judgment based on ignorance. It is not a necessary inference or a defensible assumption. A wing looks complex, yes, but that is not sufficient reason to say it was created by an intelligent designer. A brain is complex too, but also a natural product of evolution. There is no basis for the assumption of statement A.

A common counter-argument is a question: If you found a pocket watch on a deserted beach, would you assume it was the natural fruit of some exotic tree, or would you assume it had an intelligent designer? I would assume intelligent designer, of the human variety. Humans are intelligent and we design and build many complex things, from calendars to computers. The things we design and build are indeed the products of intelligent designers. But we’re not God, and the existence of our own complex products is not an argument for the existence of God, unless you want to argue, as some have, that humans invent God by analogy from themselves.

And yet, I think there is some merit in Aquinas’ argument from design, that can be salvaged and reshaped.

What if there were empirical evidence of a transcendent creator/designer, one that designed and produced complex natural phenomena and whose existence was independent of human intentionality (or that of any other animal or plant, just to be complete)?

To say again, what if you could verify, at any time, by repeatable, personal observation and conservative logical inference, that complex natural phenomena were being produced “de novo” (not by evolution, but apparently from nothing)?

I’m not talking about near-infinitesimal subatomic particles out in space, but great big, human-scale phenomena that you can bump into and which fit perfectly into the course of your life as if designed for it. And these phenomena are produced without a shred of ordinary human creativity, intentionality, or consciousness; and they are not produced over the span of millennia, but in the frame of hours, days, and weeks; and not gradually, in some drawn-out evolutionary trial-and-error, but right now, fully-formed.

You may be thinking it is a trick, a word-game. How about an apple? It fits perfectly into my life as if it belonged, keeps me alive, is tangible and real, and no human created it. Close, but wrong. Apples are products of natural selection, evolution over millennia. So not an apple, not an egg, not a horse. I’m talking about natural phenomena that do not arise from biological evolution (nor from the geological and cosmological processes of the environment that complement biological evolution by natural selection).

Okay, then it must be ideas or cultural products: Fermat’s Last Theorem, Shakespeare’s tragedies, Beethoven’s string quartets, Maxwell’s equations. Those are well-designed things that change lives. Close, but no. They are explicit products of human effort, intention, creativity and consciousness, all ruled out in this scenario. I’m talking about phenomena that do not arise from human intentionality.

It seems to me, if we could verify a process of production for important, well-designed, complex natural things such as I have in mind, outside the principles of evolution, and without the slightest touch of human intentionality, it would be justifiable to concede that there must be a superhuman, transcendent, intelligent designer of those phenomena.

Okay, that’s the setup, here’s the answer: the products are luck and insight. They are real and important phenomena of human experience, and famously they are not produced by human intentionality. You cannot “do” luck, and you cannot force insight. They happen to you, sometimes, for no reason, often when you least expect them.

In the vague category of “luck,” I include events that are positive, desirable, and fit into your life in an important way, like winning the lottery or falling in love or finding the perfect parking space. I’m ruling out so-called “bad luck” for now, because I’m not sure what that is. Luck is what we call the source of an outcome that is desired but apparently did not come about as a result of intentional effort or known natural processes. Saying that something was just lucky is tantamount to saying it was uncaused by nature or by oneself.

In the nebulous category of “insight” I include sudden knowledge about the nature of things, or of something in particular, or of relationships among things, or about how to do something, or what something means. Again, insight may come to you, but you can’t make it happen. Insight may favor the so-called “prepared mind,” but to say something occurred to you by insight is tantamount to saying it was uncaused by nature or by yourself.

Now all we need is to demonstrate that there is an identifiable, empirically verifiable, non-human, non-evolutionary source of those two products. Then we’d be in a position to say we had a case for the existence of a God, an intelligent designer.

Elsewhere, I have called this source, “the black hole of non-experience” (Adams: The Three-In-One Mind, Paperless Press, revised edition, 2012, ISBN 978-0-9837177-1-3).

It is essentially what it sounds like, a suppression of intentional consciousness that terminates all known experience. It is the culmination of certain well-known (among practitioners) meditative techniques. It is analogous to non-dreaming sleep, in which the sleeper has no experience, no awareness of self or world, and performs no action directed at self or world. The difference is that the Black Hole is accessible from full wakefulness.

If this so-called Black Hole is a non-experience, what justifies identifying it as the source of self-transcendent design? That comes retrospectively, after the encounter with the Black Hole. During hours, days, or weeks after the encounter, one notices a meaningful increase in the frequency and intensity of good luck and sudden insight in one’s experience. These are large effects, easily identifiable. Could they be mere coincidence? Yes.

However, the effects are repeatable and reversible (in the sense that their frequency and intensity decline over time). So it is possible to perform a traditional ABA quasi-experimental reversal study. When that is done, one finds that the effects are reliably prevalent and intense after an encounter with the Black Hole, noticeably less so in the control condition (the “B” condition), in which the Black Hole is not encountered for a period of time.
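The logic of that ABA reversal design can be sketched in a few lines of Python. The daily event counts below are invented purely for illustration (they are not data from the book), and the phase_mean helper is a hypothetical name:

```python
# Sketch of an ABA (reversal) analysis of daily "luck/insight" event counts.
# All counts below are invented for illustration only.

def phase_mean(counts):
    """Average daily event count for one phase."""
    return sum(counts) / len(counts)

# A1: days following an encounter with the "Black Hole"
# B:  control days, no encounter
# A2: reinstated condition, encounter resumed
a1 = [5, 6, 4, 7, 5]   # hypothetical daily counts
b  = [2, 1, 2, 3, 1]
a2 = [6, 5, 7, 4, 6]

m_a1, m_b, m_a2 = phase_mean(a1), phase_mean(b), phase_mean(a2)

# The reversal design supports the claimed effect only if it appears
# in A1, declines in the B (control) phase, and reappears in A2.
effect_replicated = m_a1 > m_b and m_a2 > m_b

print(m_a1, m_b, m_a2, effect_replicated)  # 5.4 1.8 5.6 True
```

If the control-phase mean were comparable to the A-phase means, or the effect failed to reappear in A2, the design would count against the claim rather than for it; that reversibility is what distinguishes this from a one-shot coincidence.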

So based on that personal-empirical evidence and the arguments proposed above, there is reasonable justification to identify a self-transcendent creator/designer that is not a product of evolution by natural selection. That qualifies as a category of God. Do I believe it? Yes.

Monday, September 05, 2011

Consciousness Before Birth and After Death

What is the difference, to consciousness, between being dead and being not yet born, or more exactly, being not yet even conceived? Are they equivalent states of consciousness?

After you die, your individual consciousness ceases to exist. Religion generally denies this, but that is wishful thinking. The main purpose of religion, after all, is to deny death. For believers, who are alive, not dead, this is a comforting, though delusional idea that has no basis in evidence or logic.

So let us agree, for purpose of this essay, that individual consciousness ceases to exist when, or sometime very soon after, all the systems of the physical body cease to function.

Now, how is that different, in principle, from an individual’s state of consciousness prior to being conceived? In that condition (or non-condition), there is no physical body to define the boundary of an individual, and with no body, no individual consciousness. Functionally then, being unborn (unconceived) is equivalent to being dead. In both cases there is no individual consciousness because the functioning, individual embodiment to support it does not exist. Individual consciousness depends on individual embodiment – which is not to say that consciousness is caused by embodiment; there is no evidence for that. There is simply a dependency, of an unknown kind, between embodiment and consciousness.

It is only during a brief segment of less than 100 years, while we have a functioning body, that we have a functioning individual consciousness. Prior to the beginning of my tiny moment of individualism, the world extended backward in time beyond history and took place entirely without my presence (difficult though that is to imagine). And after my flicker of time is over, the world will continue on, in some form or other, without me (difficult though that is to imagine). Beyond the boundaries of my particular individual life, my consciousness simply does not exist in the universe. Why then, do we so carefully distinguish between being dead, and being not yet born?

An easy, and wrong, answer, is that the unborn are full of “potential” while the dead are not. This is a linguistic confusion, for “the unborn” do not exist. What the expression means is that some hypothetical individual who might be conceived and born at a future date, would have the potential to have experience, and to cause things to happen in the world. But that is a fact about someone who hypothetically will be alive, not an entity actually unconceived and unborn at this time. That entity does not literally exist yet. Something that does not exist has no potential for anything.

A person might take a God’s-eye view of human life and declare from that omniscient mountaintop that new individuals will be born, and when they are, will have “potential” for life whereas the dead never will again (assuming that dead is forever). But there is no God’s-eye view. We are humans, not gods, and we only have a human point of view, which is not omniscient. To take a God’s-eye view is either imagination or self-delusion. If you’re going to pretend you have a God’s-eye view of life and death, you might as well imagine reincarnation, or zombies and vampires if you like, because it is unconstrained fabrication anyway.

From actual human, not presumptive divine, knowledge, we can again only conclude that there is no functional difference between the state of (non-) consciousness prior to conception and after death. That conclusion is an inference based on evidence available to living humans.

However, there is a psychological difference that matters to living humans. I have memory of personal experience that seems to extend backward in time before my birth. This is possible through the magic of history. By contrast, except for religious stories, I do not imagine any personal experience beyond my death, since, unlike for history, there is no human evidence that any experience continues beyond death.

Of the uncountable billions and billions of people who have died on this planet, and among the millions who die every day, not a single person has ever “come back” to the living and reported any experience beyond death, or even communicated with us “from the other side” about what postmortem experience is like. In this assertion, I rule out fictional stories, religious fabrications, fraudulent reports, and tales from the mentally abnormal. For history, by comparison, we have written records, fossils, geology, astronomy, genetics, and so forth, which give us verifiable, scientific evidence of what happened or probably happened before my individual experience began.

World War II ended before I was born, which seems odd to me, because I feel like I remember it, but that’s because of having studied history. My father fought in WWII and he actually remembers it (or would, if he were not dead). But what would he remember? He would remember his naval experience, his buddies, the situations he was in. He would not remember the entire war, though, because nobody could, because nobody experienced the entire war. People can only literally remember their own experience, not somebody else’s. And yet, after a lifetime of reading about the war, and watching uncountable movies and newsreels covering all aspects of it, I feel I have a personal memory of it, although that is not literally possible because I wasn’t yet born when the war ended. Still, that quasi-memory, a function of internalized history, extends my memory of collective human experience back in time beyond the moment of my conception.

There is a complementary, if not parallel, kind of quasi-experience after death. After a person dies, their memory continues in the collective experience of those who knew that person. In cultures that emphasize ancestor worship, this mnemonic persistence can last quite a long time. Eventually, and inevitably, it fades from the collective memory. Even if a family has an extensive, documented genealogy, we can be confident there is little, if any, collective memory of individuals who lived thousands of years ago, or who lived before history. Some individuals who are deemed noteworthy by a cultural tradition may be remembered less intimately for much longer than average. We collectively remember Albert Einstein, Thomas Aquinas, Jesus Christ, Socrates, and a collection of Egyptian Pharaohs. As more time elapses since a person’s death, the less detailed is the historical record of them and the dimmer the collective memory.

Nevertheless, there is a sense in which an individual’s experience persists beyond death in the collective consciousness of the community in which that individual lived. The dead individual has no personal consciousness or memory, but as long as the community persists, there is yet a persistent psychological trace of that individual’s experience.

To the extent that an individual, while living, defines himself or herself as a member of that community, psychologically constituted of it, then the individual can anticipate being remembered in the collective consciousness after death. That is, in a sense, another form of quasi-memory, an imagined future memory in the minds of the community. That is why some people are so extremely motivated to “leave a mark,” “make a difference,” “leave a legacy,” or otherwise make a noteworthy impression on their community so that their imagined, future, collective memory will persist longer than average.

The quasi-memory after death is actually an imagination of the community’s future remembrance, not a literal individual postmortem memory, but it can be conceived as a hybrid form of postmortem consciousness. In comparison, the quasi-memory of experience before birth feels like an individual form of consciousness, but it is derived from the collective experience of historians, scientists, and the like, and so is also a hybrid of personal and collective consciousness. The two kinds of hybrid quasi-experience have different qualitative feels.

Thus, there is, after all, a difference in consciousness between being dead and not yet having been conceived. While there are hybrid forms of quasi-personal consciousness before birth and after death, they are strangely different, and complementary rather than parallel or equivalent.

Sunday, August 14, 2011

Why Solipsism is Impossible

Solipsism is a huge problem for anyone interested in promoting introspection as a way to understand the mind (which includes me). You can only introspect on your own mind, not on anybody else’s. So technically, all you really know for sure is your own mind. The existence of any other minds is purely hypothetical.

The same would go for the existence of the entire world. If you accept introspectively known sense impressions as valid information, you realize that you have no other information. All your sensory data are known to you and only you, by mental impressions. A touch on the arm is known as the mental feeling of a touch on the arm. The arm itself knows nothing. All you can know for sure is the mental impressions you have of the world. You can’t know if anything else is really “out there.”

In the most extreme form, a solipsist asserts, “I am the only self that exists. All the rest of the world is, at best, a hypothesis, or possibly just a figment of my imagination.”

There is no way to refute solipsism. Any counter-argument against it would just be another figment of my imagination. If it is false, I could never know it, because my own mind is the only thing known to me. Solipsism is an extreme form of idealism, which says that only mental events can be known to exist (or, only mental events do exist).

Once you take introspective findings as valid knowledge, you are confronted with the question, How is introspective knowledge different from other, empirical knowledge, such as scientific knowledge? The difference is that introspective knowledge of one’s own mind is certain, whereas scientific knowledge is hypothetical, merely a set of agreed-upon propositions. Scientific knowledge cannot be certain because it is not acquired through introspection, which gives the only direct, certain knowledge.

Consequently, in scientific psychology, introspection is not allowed. No introspective observations can be accepted into discussion of how the mind works because introspection is private, and if you accept private data as valid, it takes precedence over hypothetical, consensus-based scientific data, and no further scientific agreement or progress can be expected or achieved. In other words, introspection implicitly carries the threat of idealism, and then solipsism, which is ultimately nihilistic. If my own mind is the only mind that can be known directly for sure, how is a scientific psychology possible? It isn’t. The threat of solipsism therefore is serious. It would destroy everything else. That’s why it is simply outlawed, and so is introspection. And that’s why there is no generally accepted methodology like “scientific introspection.” (Despite that, I have published a book by that title, explaining how it would be possible).

The threat of solipsism is false; not a real threat at all. It is based on a misunderstanding of the human mind, which does not, and cannot exist in isolation from other human minds. One's own self and mind are learned (acquired) from socialization and cannot ever be separated from that context. The image of Rodin’s solitary thinker is profoundly misleading. We are not monads, and never have been.

The philosophical problem of solipsism is posed by abstracting one’s own mind from that of others, but this abstraction presupposes that the world is already given as a shared world. Hence solipsism presupposes its own refutation. It is a confusion, not a valid proposition.

True solipsism would require that I do not experience myself as a single self in distinction from other selves, but that I experience myself as the only self that exists. But that is impossible, for self is only defined by other. So again, solipsism is impossible in principle.

What about a person, say, an infant, who has virtually no self-awareness? Could that person be a solipsist? Such a person does not have the resources to contemplate the possibility of solipsism. So the thesis of solipsism is impossible in principle in this case also.

Suppose a philosopher, using reason and analysis, abstracts the personal self away from its social origins and maintenance, and considers it as an absolute, transcendental ego, disconnected from all others. From that position of the abstracted transcendental ego, could solipsism be taken seriously?

Husserl, inventor of the transcendental ego, might seem to have believed that. But he also wrote that only his reflections on intersubjectivity make “full and proper sense” of the transcendental ego (Husserl cited by Zahavi, 1996). This is why Husserl claims that a phenomenological discussion of subjectivity in the end turns out to be a discussion not simply of the I, but of the we. Thus once again, even from the position of the transcendental ego, solipsism is not possible in principle.

What is possible: An object can be experienced in different mental attitudes. Hegel noted that a book can be experienced by the senses not as a book, but as merely an existent object with properties, not as a social, historical object with meaning. So it is possible to “pretend” or imagine that one’s own self is merely an existent object, divorced from its social meaning. But that is imagination. We can imagine flying pigs, too, but that doesn’t prove a thing. We can imagine an isolated, monadic self, but to take that fantasy seriously is the delusion that constitutes solipsism. So that solves a problem you didn't even know you had. Don't thank me.

Zahavi, D. (1996). Husserl's Intersubjective Transformation of Transcendental Philosophy. Journal of the British Society for Phenomenology 27 (3), 228-245.

Thursday, June 23, 2011

What do you know when you know you are going to sneeze?

What causes a sneeze? Is it a tickle in your nose? The sneeze is a surprise, a reflex, not a response to a stimulus comparable to the one that leads you to brush a mosquito from your arm. When you’re going to sneeze, you open your mouth and get ready. Sometimes nothing happens and the sneeze “goes away.”

We should assume that a sneeze is a response to some biological event. You can’t sneeze at will. It is a reflex response to something going on in the body. Most probably a sneeze is a response to an irritation of the mucous membranes somewhere in the nasal passages.

But I have no awareness of my mucous membranes, in the nose or anywhere else. I can’t visualize them; they don’t give me any information; and I am unable to introspect on their state of being. This is true for most of the inside of the body. We have no direct mental access to its various states of being. You can feel a pain in your knee, but you cannot introspect on the various parts of the knee itself. You know when your bladder is full, but you have no direct mental communication with your bladder.

Yet there is warning for a sneeze. Rarely, if ever, is a sneeze completely a surprise. We are aware that a sneeze “is coming.” What is the nature of that awareness?

My hypothesis is that we are aware of a particular kind of brain activity that is distinctive enough to be discriminated from others, and associated with the actual sneeze. How that could be so is a mystery. The brain gives off no sensory data the way the heart does. I can hear and feel my heart beating, so to that extent I am aware of its location and activity in my body. But I have no direct awareness of my brain. I only know it’s in my head because I have been told. I can’t feel it in there. It doesn’t make any noise and doesn’t jump around.

But somehow, we can discriminate brain states from each other. We know the difference between having a full bladder and a pain in the knee and being about to sneeze. But since we do not have direct awareness of the brain, we have no easy way of describing these brain states, so we talk about them in terms of associated effects. For example, the sneeze itself is sensory and observable, so we say of the pre-sneeze condition, “I am going to sneeze.” All the talk is about the sneeze. But actually, what we’re referring to is not the sneeze-to-be, but the pre-sneeze condition of the brain, which we have learned to discriminate but not name.

Other examples of awareness of, and discrimination of, specific brain states include being aware of blood sugar level, pre-orgasm, pain, a feeling of nervousness or restlessness, being overcaffeinated, and being drunk. We talk about these brain states in terms of their observable bodily effects, but actually, we can discriminate the phenomena as specific brain conditions before there are overt bodily effects.

I think the most dramatic example of being aware of a brain condition, without being able to name it directly, is dreaming. We make up all sorts of fantastical stories upon awakening, because we have no vocabulary for naming or discussing the brain activity that we just experienced.

It is inconceivable that someone properly socialized would not be aware of their heart. We have anatomy books, the history of medicine, the doctor’s stethoscope, Poe’s story of the “Tell-tale heart,” and so on. But we do not have comparable socialization in our culture to name and discuss brain activities. We don’t even have any reliable visual imagery to attach to different brain states. That’s too bad. If we did have appropriate vocabulary, we could contribute a lot to understanding of the brain simply by discriminating and naming its various states when they occur.

We don’t understand the interface between biological neurology and mental experience, but the answer is, when you are about to sneeze, you are aware of a particular brain state.

Wednesday, May 11, 2011

Why is Logic Logical?

For years I have puzzled over the validity of logic. Why does one idea compel another? What is the nature of that compulsion? For example, why is the “law” of the excluded middle true? (Strictly, “a thing cannot be, and not be, simultaneously” is Aristotle’s law of non-contradiction; the law of the excluded middle is its companion: a thing either is or is not, with nothing “in the middle” between A and not-A.) A equals A, and A does not equal not-A. That’s what Aristotle said, and it’s been true ever since. But is it only true by convention, or does logic follow some natural laws, either laws of the world or laws of the mind?
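In two-valued logic, Aristotle’s principles can even be checked mechanically by enumerating every truth value of A. A minimal sketch (my illustration, not part of the original argument):

```python
# Check both classical laws for every possible truth value of A.
for A in (True, False):
    assert not (A and not A)   # non-contradiction: A cannot both be and not-be
    assert A or not A          # excluded middle: A either is or is not

print("Both laws hold for every truth value of A")
```

Of course, this only shows the laws are consistent within the formal system; whether that system is convention or natural law is exactly the question the essay pursues.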

In day-to-day experience, the middle is not excluded. There is the luxury car and there is the economy car, and plenty of choices in between. There is one dollar, and no dollars, and fifty cents in between. There are guilty and innocent, and shades in between. So why is it true that there is nothing in between A and not-A?

At first consideration it seems that the difference is that the law of the excluded middle is about existence. It says a thing cannot BE and not-BE simultaneously. That’s about what IS. By contrast, everyday examples are all about degrees of qualities that all exist. The economy car exists, and so does the luxury car, and all the ones in between. The qualities of price and value vary along some (abstract) dimension, but all of it exists.

But we cannot say that THIS particular car (not in the abstract, but this one right here) exists and doesn’t exist at the same time. Why not? Because that would be illogical. But why? That is the question.

Is it a matter of abstraction? In algebra, which is very abstract, we all agree that A cannot be equal to not-A. That is uncontroversial. But we refuse to say the same about a particular stone.

The difference seems to boil down to what exists and doesn’t exist. But how is that determined? How do we know what exists and doesn’t exist? Do flying elephants exist? Well, yes and no. It depends on what you mean by “exist.” They exist in animated movies and in the minds of millions of children, but not on game reserves in Africa.

So do we restrict the scope of the question to things that exist physically, not mentally? That would seem an arbitrary restriction. Anyway, it would make algebra and logic, and science, higher mathematics, and law, and much else, not susceptible to the law of the excluded middle, and by extension, not susceptible to logic and reason. The purpose of logic is to bring the order of reason to abstraction. So it can’t be right to exclude mental abstraction from logic.

Besides, even in the so-called physical world, there are counterexamples to the law of the excluded middle. Light exists as light waves and as photons, simultaneously. That seems to violate the rule, doesn’t it? Hawking radiation around a black hole exists and doesn’t exist at the same time. There aren’t many examples like that, however, and in general we tend to quarantine the principles of quantum theory when we consider logic in general.

I think the answer lies not in abstraction itself, but in the human capacity for discrimination. When we are ignorant of a thing or a topic, we cannot perceive distinctions. Someone who does not know wine literally cannot distinguish between cabernet and merlot. A person who does not know philosophy cannot tell the difference between Kantian and Cartesian ideas. Someone who does not know airplanes cannot tell if they are about to board a Boeing or an Airbus. I remember once looking over a locksmith’s shoulder as he fixed a lock on my door. “Look at that!” he exclaimed in disgust when he took off the outer cowling to expose the insides of the lock. “The quality these days is just disgusting.” I saw nothing but a jumble of metal parts. I wasn’t disgusted because I didn’t know what I was supposed to be seeing. I failed to discriminate what he did.

After training or other experience however, it becomes possible to discriminate parts from wholes and parts from other parts. Then a person can discuss the merits of cabernet and merlot, or well-made from poorly-made lock mechanisms. It works the same in the world of abstract ideas. It takes instruction or experience to discriminate democracy from authoritarianism and A from not-A.

Simple sensory discriminations enable abstraction. A door lock is a door lock, but a well-made lock is an abstraction: it is a kind of lock, or a category of locks. Once the discrimination has been made and conceptualized, multiple instances of a like kind can be grouped into an abstract category.

Thus “dog” is a category of animals, but that abstraction was developed only after I became able to discriminate dogs from cats, and from other kinds of animals. In turn, that discrimination was explicitly taught by parents and teachers, who dwell obsessively on helping children discriminate categories of animals. Why that is considered important is a separate mystery. Finally, there must have been some sensory discrimination at the bottom, by which I learned to identify my dog, a particular, concrete, sensory dog, as a “dog” and discriminated it from myself. So the sequence of abstraction goes from a particular, sensory being that exists right now in my presence, to a category of all such animals, which are then further discriminated and contrasted with other animals, and so on up the chain of abstraction.

The sequence of discrimination, conceptualization, and categorization is so automatic that I suspect it is a faculty of the human mind. Teachers teach us how to discriminate, identify, and categorize dogs, cats, forms of government, and much else, but nobody teaches us how to discriminate in the first place. We just do it.

Other animals discriminate in a similar way. In classical conditioning, a type of learning, the dog learns to salivate when the bell rings. Why? We say the dog has “associated” the bell with forthcoming food. However the dog first had to discriminate the bell from the general background noise, and also the occurrence of food from other events, and also the fact that the bell sounds just before food appears. Those are all sensory discriminations that the dog learns fairly easily, without the benefit of language. As far as we know the dog does not conceptualize any of it, but does manage somehow to generalize a more-or-less abstract category about what we would call the conditioned stimulus, because if a buzzer is sounded instead of a bell, the dog salivates in the same way he would to the bell. He obviously has an abstract category of sorts.
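The dog’s generalization from bell to buzzer can be sketched with a toy learning model. The Rescorla–Wagner update rule below is my illustration, not anything the essay proposes; it assumes the bell and buzzer share a generic “sound” feature, so strength learned to the bell partially transfers to the novel buzzer:

```python
# Toy Rescorla-Wagner conditioning sketch (illustrative assumptions throughout).
def train(weights, features, reward, lr=0.3):
    prediction = sum(weights[f] for f in features)
    error = reward - prediction          # surprise drives learning
    for f in features:
        weights[f] += lr * error
    return weights

weights = {"sound": 0.0, "bell_timbre": 0.0, "buzzer_timbre": 0.0}
for _ in range(30):                      # bell repeatedly precedes food
    train(weights, ["sound", "bell_timbre"], reward=1.0)

bell = weights["sound"] + weights["bell_timbre"]
buzzer = weights["sound"] + weights["buzzer_timbre"]   # never trained
print(f"bell: {bell:.2f}  buzzer (novel): {buzzer:.2f}")
# The buzzer evokes a partial response purely through the shared feature:
# generalization without any concept or language.
```

The design choice worth noting is that “abstraction of sorts” falls out of overlapping features, with no vocabulary required, which is the essay’s point about the dog.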

I’m not aware of any animal species with a nervous system that is not susceptible to classical conditioning, so I would have to conclude that discrimination and abstraction are built into the architecture of animal neurology.

Does that answer the question of what compels one idea to follow another and why logic is logical? Partially, it does. But the rules of logic are themselves so abstract that it is difficult to believe they are neurological manifestations. Suppose a proposal says that if p exists, then q will always occur. But if we look and find that q did not occur, what is the only logical conclusion? It has to be that p does not exist. This rule is the absolute foundation of reasoning in science and statistics. What makes it valid?

According to the analysis given here, that rule of logic, called modus tollens, is valid because it is an abstraction of sensory, bodily experience that many humans have discriminated and agreed is universal. We have all observed that if the bulb inside the refrigerator is working, then when you open the door, there will be light. If you do not see light, the conclusion is that the bulb is not working. Enough people have had experience like this, so that as a community, we have agreed the relationships involved are worthy of becoming a “law,” the law of modus tollens. It’s logical because we all say it is, not because of neurology.
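The refrigerator-bulb reasoning can be written out as a tiny exhaustive check. This is a sketch of classical two-valued logic, in which “if p then q” is treated as material implication:

```python
# Modus tollens: from (p -> q) and not-q, conclude not-p.
def implies(p, q):
    return (not p) or q      # material implication

# In every case where (p -> q) holds and q is false, p must be false too.
for p in (True, False):
    for q in (True, False):
        if implies(p, q) and not q:
            assert not p

print("modus tollens holds in every case")
```

The check succeeds by enumeration, which fits the essay’s claim: the rule’s “validity” is a pattern that anyone who examines the cases (light on, light off, bulb working, bulb dead) can agree upon.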

The implication of this finding is that reason compels one idea to follow from another because of generalization of discriminations that many people have similarly made and conceptualized and categorized. The validity of logic is a social construct, not a natural phenomenon.

So what are we to make of the situation where people do not agree? Different groups insist that their god and only their god exists. Is there any concrete sensory discrimination at the bottom of those abstractions? I would say no, and virtually all scientists would agree with me. Are there neurological differences supporting the abstractions? No. The human nervous system and brain are 99.999% similar across individuals.

But are there discriminations among abstract ideas beneath the disagreements? Of course there are. Different groups have different ideas about history, justice, virtue, beauty, and many other abstract categories, and they assiduously teach these discriminations to their children. Higher abstractions are based on discriminations made among lower abstractions and it is around these higher abstractions that wars are fought. Fundamentally though, the mid-level abstractions upon which they are based do not rest upon sensory discriminations. The validity of logic in the abstract realms is socially constructed.

At the bottom we are all the same kind of animal and make the same kinds of sensory discriminations and the same kinds of basic abstractions. It is only our teachers that guide us to abstractions among the abstractions, and therefore to differences we will kill for. Anybody can discriminate a brown skin from a white skin, narrow eyes from round eyes, male from female, but what those differences mean must be taught to us. There is no universal sensory or neurological basis, and therefore no intrinsic rationality that justifies what our teachers make of those differences. Whether my god or your god is the true god, is essentially culturally constructed, and we would say, “not logical.”

Ideas compel other ideas then, not because there is some intrinsic validity to the rules of logic that make it so, but only for two reasons.

One, because concrete, sensory discriminations that anyone, even a dog, can make, seem universal, as in classical conditioning. Red is different from blue, and we all agree on that, regardless of culture. Therefore it is “logical” to insist that Red cannot be Blue and vice versa.

And Two, logic is logical because the teachers in a cultural tradition decide, based on contingent values (that is, arbitrarily), that some abstract ideas “should” compel other abstract ideas. That compulsion is valid inasmuch as everybody lives in a culture and nobody can live outside of culture, so nobody is immune from cultural values. So if “The Bible is the word of God,” it follows that the Biblical God is the “correct” God. That is cultural logic.

These two kinds of logic are both valid, but for different reasons.

Saturday, January 08, 2011

New Evidence for ESP?

The Journal of Personality and Social Psychology, a respected academic journal published by the American Psychological Association, will soon release an article by Cornell psychologist Daryl Bem that supposedly demonstrates the existence of “extrasensory perception,” or ESP. A preprint of the paper is available online.

ESP is a term used in popular culture for unexplained psychic effects. It is used exclusively, for example, in the New York Times article of Jan. 5, 2011, reporting on Bem’s paper. Academics refer to such effects as “paranormal,” “parapsychological,” or “psi” phenomena. These psi phenomena allegedly include a potpourri of unexplained effects, such as mental telepathy, remote viewing, clairvoyance, telekinesis, precognition, and communication with the dead, to name just a few varieties. Bem’s paper focuses on precognition, which is unexplained knowledge of the future, and premonition, which is the same thing only felt emotionally instead of known intellectually.

The paper reports nine experiments, only four in any detail, conducted over a decade with a thousand people tested. In a typical experiment, participants had to guess whether a picture (called the “stimulus”) would appear on the left or the right side of a computer screen. If the prediction was correct, then either it was a lucky guess, or the person had a premonition of where the stimulus was going to be. Random guessing would produce a 50% correct rate, but the guesses were correct 53% of the time, a rate greater than chance. That doesn’t seem like much of a difference, but since the test was run many times on each person, the finding is statistically rare enough that it is probably meaningful. Therefore, according to Bem, a slight, but scientifically proven, result of precognition, or premonition of the future, was demonstrated.
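The claim that a mere three-point edge over chance can be “statistically rare” is easy to illustrate numerically. The trial counts below are hypothetical, chosen only to show the logic of a one-sided test against the 50% chance rate (the paper’s exact totals are not reproduced here):

```python
# Back-of-envelope significance check: normal approximation to the binomial.
from math import sqrt, erfc

def one_sided_p(n, hits, p=0.5):
    """Approximate P(at least `hits` successes in n fair trials)."""
    z = (hits - n * p) / sqrt(n * p * (1 - p))
    return 0.5 * erfc(z / sqrt(2))

# Hypothetical totals: say 3600 guesses overall (e.g., 100 people x 36 trials).
n = 3600
hits = int(round(0.53 * n))      # a 53% hit rate
print(f"one-sided p-value: {one_sided_p(n, hits):.5f}")
# With enough trials, even a small edge over 50% becomes very unlikely to be
# pure luck -- which is why the dispute turns on method, not arithmetic.
```

This is why the essay’s criticisms focus on methodology and counting rather than on whether 53% differs “significantly” from 50%.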

Bem notes in his paper that “Psi is a controversial subject, and most academic psychologists do not believe that psi phenomena are likely to exist.” That is correct, and I am one of those psychologists. I do not believe any psi phenomena have ever been demonstrated scientifically, nor indeed that they exist at all. How, then, do I explain scientific findings such as Bem’s (and there have been many such supposed demonstrations of psi phenomena over the years)? There are four obstacles to acceptance that any such scientific demonstration must overcome:

1. Methodological. The experiment must be designed and conducted in such a way that the best, most reasonable conclusion is that psi phenomena have been demonstrated, rather than some other explanation, such as pure chance, lurking (uncontrolled) variables, equipment or procedural error, biased sampling, unintended clues being given to participants, inadequate experimental controls, or other kinds of unintended bias or error (deliberate fraud is not considered, as that is rare and easy to detect).

2. Statistical. The experimental data must be conceptualized, analyzed, and reported in a simple, correct, and non-controversial way, so that the best, most reasonable conclusion is that psi phenomena have been demonstrated. Even if the experimental procedure was sound, the statistical handling of the data can introduce biases that lead to invalid conclusions, such as when the data are manipulated inappropriately (e.g., leaving out some data from the analysis), or conceptualized strangely (such as counting certain results in one way, other results in another), or analyzed with controversial or questionable statistical techniques, or when the outcome is interpreted in obscure or inappropriate ways.

3. Theoretical. The findings must be coherent with an existing body of scientific data, or if they are not, some revision in understanding of the existing data must be specified which accommodates the new, anomalous finding. There are two reasons for this requirement.

One is that according to the scientific method (a consensus model of scientific reasoning), the hypothesis that an experiment tests is drawn from the existing body of scientific data. A scientist does not just wake up one morning with a hypothesis that says, “I suspect that giraffes would float in water as well as raspberry marshmallows.” That is not how science is done. Instead, the scientist finds areas in the existing body of scientific knowledge where there are questions, errors, gaps, unexplained connections, or incomplete understanding. A hypothesis is then generated that could extend the existing knowledge or make it more understandable or more internally consistent.

The second reason for requiring that scientific findings must mesh with existing knowledge is that science is a cumulative exercise in knowledge production. Even if some arbitrary and idiosyncratic hypothesis were experimentally tested and confirmed, the result would be uninterpretable because it would not connect to existing knowledge, would not further general understanding, and would not even contradict what is already known. There would be no context for making sense of the experimental result, making it essentially meaningless, no matter what it purports to demonstrate.

Historically, strange things have sometimes been found in nature that could not be explained until much later, such as lightning or x-rays. But technically, those discoveries were anomalous observations, not scientific findings, until some explanation was proposed that could be tested as a scientific hypothesis.

4. Philosophical. A scientific finding that meets all of the foregoing requirements still must be interpreted in a scientific way. For example, a finding that concludes, “All human beings are therefore merely ideas in the mind of God,” cannot be accepted without a great deal of further explanation. The interpretation of the finding must conform to principles of scientific reasoning and evidence. This example fails on both counts, because there is no scientific evidence of God, and to characterize humans as ideas rather than as biological objects is not consistent with scientific reasoning.

Alternatively, the interpretation of the finding can go too far in the other direction, being so scientifically overspecified that the result admits of no generalization, an error of “external validity.” An example would be an experiment that claims to study “violence in children” but defines violence as a high frequency of button presses on laboratory equipment. Since that does not describe what we normally understand as violent behavior, even if the study meets all other criteria, we are unable to say anything about the result beyond the specific experimental procedure.

Another common error is that a study defines its variables in terms of laboratory procedures but interprets its results in different terms, an error of “internal validity.” In the example above, if college students were used as participants, it is not valid to conclude anything about violence in children.

Bem’s studies that purport to demonstrate psi phenomena fail to overcome any of the obstacles described, and therefore I remain unconvinced of the existence of so-called psi-phenomena.

To prove this definitively, I would have to study the experimental protocol, data, and statistics to make detailed criticisms, and that would require either access to Bem’s laboratory notebooks, which is not going to happen, or repeating his experiments step by step in order to understand what he did and what kind of data he obtained. That is also not going to happen. So, like any other ordinary consumer of scientific information, I must base my acceptance or criticism of the experiments on the scant information provided. Here are some criticisms, then, within that constraint.

Summary of Bem’s Experiment 1

1. Methodological factors. In Experiment 1, a featured experiment supposedly demonstrating precognition, participants had to guess which of several pictures would be randomly shown. I’ll start by summarizing the experimental procedure.

One hundred Cornell undergraduates were self-selected (volunteer) participants, half men and half women, and were paid for their participation. They all knew it was an experiment in ESP.

A picture of starry skies was shown on the screen for three minutes, while new-age music played. Then that picture was replaced (and presumably the music terminated, although that is not stated), with two pictures of curtains, presumably side-by-side (although that is not stated). When a participant clicked on one of the pictures of a curtain, it was replaced with another picture, either a picture of a wall, or a picture of something other than a wall.

The content of the “other than a wall” pictures is not described, except to say that 12 of the 36 non-wall pictures showed humans (presumably – this is not specified) engaging in “sexually explicit acts” (not further described), while another 12 of the non-wall pictures were “negative” in emotionality (not further described), and the final 12 non-wall pictures were “neutral” (but not described).

All these pictures had previously been (although when, is not stated) rated by other people not in this experiment as being reliably “arousing” for males and females (although “arousability” is not defined), or as being reliably “emotional.” There is no information about whether any arousing pictures were also emotional, and it is hard to imagine that they were not. There is no definition of what constituted a “neutral” photograph, and there is no description of the arousability or emotionality of the wall picture or of the curtain pictures.

Part way through the experiment, some or all (not specified) of the “arousing” pictures were replaced by more intense (not otherwise described) internet pornography pictures, which were not reported to be scientifically rated for arousability and emotionality, so in the end, the nature of these pictorial stimuli is essentially unknown. (We assume that among the 36 non-wall photographs none was, in fact, a picture of a different sort of wall, although this is not actually stated.)

The non-wall pictures were selected at random from the group of 36, with randomness defined by a software algorithm. Whether the wall or non-wall picture was placed on the left or the right of the screen was also randomized by the computer.

Each participant’s task was to click on one of the two pictures of curtains to indicate which one they thought would be replaced by a non-wall photograph. They were told that some of the pictures were sexually explicit and allowed to quit the experiment if that was objectionable. No information is given on how many participants quit. After the participant’s choice was made, the curtain picture was replaced by another picture, either of a wall or a picture of non-wall content.

Errors of Internal Validity

That summarizes the experimental protocol. According to Bem, this methodology made “the procedure a test of detecting a future event, that is, a test of precognition” (p. 9). However, that is not how the results were recorded. You would think that the scientist would simply record whether or not the participant had correctly predicted which side of the screen the non-wall picture had appeared on (since that was the instruction given to the participant, and that was the hypothesis to be tested). Instead, some other, strange measure was recorded: the number of correct predictions of which side of the screen the “erotic” (meaning sexually explicit) pictures appeared on, even though that was not the hypothesis being tested. This odd recording of the results constitutes an error of internal validity.

The hypothesis that college students will be better at predicting the location of a sexually explicit picture is unconnected to the introductory literature review, which referred only to a previous body of findings that asked for straightforward predictions of visual content, with no special reference to sexually explicit material. This new (implicit) hypothesis is then, essentially like the “giraffe and marshmallow” hypothesis, arising “out of the blue” rather than being logically derived from existing knowledge. This is another methodological error. If there is, in fact, a previous body of knowledge about predicting the locations of sexually explicit photographs, then the error is one of scientific reporting, since the literature review was obviously grossly incomplete.

One other, rather minor error, is the experimenter’s referring to the participants’ prediction of the location of a non-wall photograph as a “response” to that photograph. But this is a semantic distortion, since the participant’s choice is made before the photograph is shown. Ordinary, common-sense language would call that choice a “prediction” not a “response.” For the experimenter to call it a “response,” presupposes the validity of his belief that the participants are seeing into the future, but until that is proven by the experimental results, it is scientifically inappropriate to use the language in a non-standard way without justification.

Statistical Errors
Next, Bem reports that participants correctly predicted the position of the sexually explicit pictures significantly more frequently than the 50% rate expected by chance, and in fact were correct 53.1% of the time. But this is an incorrect analysis. To be counted as correct, a prediction would have to correctly say on which side of the screen a non-wall photograph would appear (one chance out of two, a 50% chance rate) AND, if that guess were correct, it would also have to predict that the photograph was sexually explicit (12 chances out of 36, or 33%), for an overall probability of 0.50 x 0.33 = 0.165, which means that one would expect a person to guess correctly fewer than 17 times out of a hundred.
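The compound-probability arithmetic here can be checked directly. A sketch using exact fractions (the exact value is 1/6, about 0.167, consistent with the rounded 0.165 in the text):

```python
# Verify the compound chance rate: correct side AND sexually explicit content.
from fractions import Fraction

p_side = Fraction(1, 2)       # correct side of the screen: 1 in 2
p_erotic = Fraction(12, 36)   # picture is sexually explicit: 12 of 36
p_both = p_side * p_erotic    # joint probability, assuming independence
print(p_both, float(p_both))  # 1/6, i.e. fewer than 17 chances in 100
```

Note the sketch assumes the two events are independent, which the randomization described in the protocol would provide.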

Did that happen? No information is reported on how many times the participants DID actually predict the location of sexually explicit photographs. It was not 53.1%. That is the number you get when you ignore, or leave out of the calculation, all the wrong predictions of the non-wall photograph. But that is an illegitimate way to count the results, unless there is a very good reason, and none is given.

Still, can we at least say that the participants correctly predicted the location of ANY non-wall photograph better than chance (53.1% vs. 50%)? No, we can’t, because that information is not reported either. Instead, what is reported is that participants predicted the location of only the sexually explicit photographs at 53.1%. But that leaves out all the results for the non-sexual predictions, which is not a legitimate way to count the results. So in the end, the results that bear on the experiment’s stated hypothesis are not reported at all.

This kind of anomalous counting of the results constitutes a statistical error and makes the experimental findings uninterpretable.

The same kind of anomalous, illegitimate, and uninterpretable counting of results is given for non-sexually-explicit pictures, emotional pictures, and neutral pictures, and even for “romantic but non-erotic” pictures, a category that was never defined in the description of the pictures (let alone in any experimental hypothesis).

The experiment also reports that there were no significant differences in response findings between males and females. That is a legitimate “control variable” to report, although the experimental hypothesis being tested has nothing to do with gender. So that is not an error so much as an irrelevance.

Then the report turns to a history of findings from other experiments showing a small correlation between the ability to predict the occurrence of visual materials at a rate greater than chance and the participant’s score on an extraversion test, with extraverts supposedly being better at making such predictions than introverts. There are two problems with this so-called result.

One is that it is based on a statistical technique called meta-analysis, in which the main findings of individual experiments are treated as if they were individual response data points observed in individual participants. While this technique is now widely used in the medical literature, it is by no means without controversy when applied to psychological experiments, and I reject it as a valid statistical technique for psychology.

The main reason for my rejection is that the technique generally does not take into account the quality of the underlying experiments, or if it does, does so inadequately. For example, if some future meta-analysis includes this experiment, that will introduce significant undocumented error into the meta-analysis because this experiment does not actually report any valid results, despite its claim to the contrary.
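The quality-weighting worry can be illustrated in miniature. Here is a hypothetical, quality-blind pooling of effect sizes; every number is invented for illustration, and real meta-analyses are more elaborate than a plain average:

```python
# Hypothetical effect sizes from four studies; the last comes from a
# low-quality study reporting an inflated effect. All numbers invented.
effects = [0.02, 0.00, 0.04, 0.50]

# A quality-blind meta-analysis pools them with equal weight,
# letting the one bad study dominate the pooled estimate.
pooled = sum(effects) / len(effects)
print(pooled)  # roughly 0.14, far above the three credible studies
```

The point of the sketch is only that a pooled estimate inherits the flaws of its worst inputs unless study quality is explicitly weighted.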

The second problem with this so-called reported result between predictive success and personality is that it is irrelevant to any scientific hypothesis, implicit or explicit, that was supposed to be tested by this experiment.

Errors of Interpretation:
The experimental report goes on at great length to determine what “kind” of psi phenomenon had been demonstrated by the test results (which were never properly reported). Was it simple clairvoyance, or was it a subtle form of psychokinesis? Or was it actually pure chance? (Admirably, the report does consider that possibility.)

But a simpler explanation is hinted at by the experimental procedure itself. After the participant made his or her prediction of where the non-wall photograph would appear, the curtain picture the participant had chosen was replaced with either the wall or a non-wall photograph. This essentially gave the participant feedback on the correctness of the prediction. But why was that necessary or desirable?

The experimenter knew immediately upon the participant’s click whether the prediction was correct or not. That could be scored right on the spot by the computer. Why was it important to give the participant “feedback”? The experimental hypothesis was about the ability to see into the future: precognition. Why is feedback necessary for that? Was the hypothesis really that the ability to predict the future can be taught by a computer and learned with practice? There is no theoretical or practical reason to believe so, and the experimental report does not suggest it.

The only reason I can think of to give the participants feedback on the correctness of their predictions is so that they might learn from their mistakes and improve their performance. That is a standard learning procedure going back over a hundred and fifty years in experimental psychology and thousands of years in human experience. The experiment thus introduced a spurious learning paradigm into a procedure that was supposed to test only the ability to predict the future. That is a serious error of internal validity, one that renders the experiment uninterpretable (if it was not already).

What would the participants be learning, with this embedded learning procedure? I am unable to say without more detail about the experiment. Could they be learning (even if only implicitly) to detect a non-random pattern in the order of presentation of the materials? Such a pattern could have been present. Either the random number generator could have been imperfect (there is, theoretically, no such thing as a perfect random number generator), or an identifiable non-random pattern could have emerged within the pseudo-random sequence of events, just as a fair coin sometimes comes up “heads” 7 or 8 times in a row purely by chance. These things happen. It wouldn’t take much non-randomness to produce a mere 3% deviation from chance expectations.

Or, more likely in my opinion, the participants could have been learning something else, some other clue that was unintentionally left in the procedure by the experimenters. I cannot say what that might be. For example, it would be interesting to know if an experimenter was in the room while the participant performed. There is no reason why one should have been, but if one was, there are all kinds of opportunities for subtle, unintended clues (or “experimenter effects”) to be transmitted to the participants.

Bem reports that he re-ran the experiment using randomized, simulated computer inputs for the “predictions,” with no human participants involved. Under those conditions, no psi effects were detected. I am not surprised, but that result deepens my skepticism about the human-based findings: if there really were any legitimate ones (which I doubt), they were due entirely to unintended experimenter effects or performance biases.
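The logic of such a control run can be sketched as follows. This is an illustrative simulation, not Bem’s actual procedure, and the trial count is invented:

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Simulated control condition: both the "prediction" and the picture's
# screen position are generated at random; no human participant involved.
n_trials = 100_000
hits = sum(
    random.choice(["left", "right"]) == random.choice(["left", "right"])
    for _ in range(n_trials)
)
print(hits / n_trials)  # hovers around 0.50, the chance rate
```

With no human in the loop, the hit rate has nowhere to go but chance, which is exactly what Bem reports for the simulated inputs.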

The only way to satisfy my skepticism on this point would be to re-run the experiment, with humans, but omitting the spurious learning component from the procedure, and isolating the participant completely from any contact with the experimenter or any other participant. I would be extremely surprised if any so-called “psi effect” were reported under those conditions.

Theoretical and Philosophical Errors:
Aside from the methodological and statistical problems with this study, there are additional theoretical and philosophical problems. First, I must emphasize again that no psi phenomena were demonstrated by any of these experiments, as reported. But even if there were such a thing as a psi phenomenon, for example the ability to predict the future at a rate better than chance, what sense would it make?

There is no known mechanism, whether biological, physical, or psychological, by which that would be possible. Human beings are simply not able to predict the future very well. Would that it were otherwise! Bem does some hand-waving around quantum indeterminacy and the earth’s magnetic field to suggest possible explanations of psi phenomena (if they existed), but that verbiage constitutes, most generously, only loose metaphor, nothing close to an explanation.

Could the explanation of psi effects, if there were any, just turn out to be something bizarre, something we have never thought of yet, not related to anything familiar, not like anything ever reported in the accepted scientific literature? Well, yes, that is possible in principle. I’m sure Socrates himself would not have been able to understand a butane lighter or a sheet of plastic food wrap, let alone some of our more complex technological marvels. So it is not a denial of the possibility of psi phenomena to assert that there is presently no conceivable explanation of them, as they have been described. But it is utterly idle to speculate on explanations until the phenomenon to be explained has been demonstrated, and I am not convinced it ever has been.

In his forthcoming paper, Bem describes three additional experiments, similar to the first, in some detail, and refers to five others not fully described. However, as is always the case when I take the trouble to read such experimental reports, after analyzing the first one (an analysis that was by no means exhaustive), I simply have no energy to go on to the rest. The quality of the first is so poor that there is little promise the others will be much better. So I give up at this point and return to my default belief, which has not been challenged by Bem or anybody else: that no psi phenomenon has ever been scientifically demonstrated. Show me a proper demonstration and I’ll change my mind.

Sunday, May 02, 2010

What is the purpose of the cerebral cortex?

The main part of the human brain is the cerebrum, the big piece of folded, wrinkly meat that covers the older, more primitive “snake brain,” the limbic system and brainstem. Different areas of the cerebrum support different cognitive and bodily functions. In nearly all mammals, the brain has an extremely thin wrapper of neurons around it, no more than two-tenths of an inch thick, and that is the cerebral cortex (“cortex” is Latin for “bark”). The cortex, thin though it is, is itself made of even thinner layers of cells, up to six distinct layers of so-called gray matter. While there are connections in and out of the cortex to the cerebrum underneath, more than 99% of cortical activity takes place strictly within the cortex alone.

Most animals do not have a cerebral cortex. Only mammals do, and among mammals, the human version is the largest and most complex. If billions of animals get along just fine without a cerebral cortex, that raises the question: what is it for? That is a mystery.

We know that sensory signals coming from the receptors eventually end up in the cortex. Visual data, for example, ends up at the back of the head in the so-called visual cortex. What it does there, we do not know. And we know that parts of the cortex send signals out to the muscles, presumably as part of coordinated actions. But what about the 99% of a cortex’s activity that goes on within the cortex itself? What is that about?

We don’t know what the function of the cortex is, but scientists believe, based on observations of people with brain damage and on animal studies, that activity in the cerebral cortex somehow produces meaningful experience of the world, and also, somehow, abstract thinking, planning, and language. How that could be possible is a mystery, but that seems to be what is going on.

What’s the great mystery? The mystery is that we don’t know how a physical organ like the cortex could produce mental functions like thinking, planning, and language understanding. According to the principles of science, it is not actually possible. No physical activity can produce any nonphysical effect (energy is “physical”) like a thought. Why not? Because if it did, that would violate the law of conservation of matter and energy (and many other laws of nature besides), and if that can happen, well, then we don’t know anything about anything.

E = mc² is only true because of the law of conservation of matter and energy, for example. Violate that law and you have nothing.

And it’s not just a matter of preserving the integrity of science’s precious little formulas. We can’t even conceive of how a physical thing like a group of neurons, which are just protein, fat, and a few chemicals, could cause or create something as intangible as an abstract thought, or even the experience of color. How would that work? It would have to be magic. We can’t think of any example of any machine, no matter how complex or fantastic, that could do such a thing.

Some scientists have become so frustrated with this problem that they have just declared that thoughts, experiences, and other intangible mental phenomena do not exist, except as illusions. But that is just crazy talk. Even an “illusion” is a mental phenomenon.

Despite this impenetrable mystery, we still want to ask what the cortex is for and why we have one, because its occurrence in evolution is quite rare.

1. Does the cortex produce or create the conscious mind in some way? That is scientifically impossible, and even unintelligible, for the reasons just described. Parts of the cortex have been shown to correlate with aspects of the conscious mind, but we cannot explain that correlation.

2. Does the cortex create and store a map of the whole body, including its history and modifications? Some scientists think so (e.g., Antonio Damasio). That would require an awful lot of capacity, since the body has a lot of parts and a very long history. Still, it might be possible. But what good would it do to have such a map? Who would look at it? There is no little man in the head.

3. Could the cortex have, or be, a historical record of bodily connections as suggested above, used not as a map but as some kind of switchboard, so that signals incoming to the brain get routed to the correct output action signals? That seems highly implausible to me, since there is a virtually infinite number of possible combinations and sequences of sensory information that one encounters every day, and just as large a number of movements that could be made in response. The brain is very large and complex, but it is not infinite in capacity, and the cortex is, after all, only a few millimeters thick. Also, such a “switchboard” or “blackboard” hypothesis allows no scope for creative action if every input is wired to an output, or even to a selection of outputs. Some scientists deny that there is any such thing as creativity, but I am quite sure they are wrong.

4. Here is my hypothesis about what the cortex is for. I think it supports intersubjective social life. Intersubjectivity is a kind of empathy that allows humans to understand each other, and that is what is necessary to have a complex civilization like ours. Without empathy, there could be no poetry, no arts of any kind, no jurisprudence, no government, no sports, no teaching and learning, not even symbolic language.

Since we are the only species that indulges in such things with such intensity, it makes sense that we have the most developed cerebral cortex. Chimps have societies, and maybe elephants grieve over their dead. Most mammals have a cerebral cortex, and so most are intersubjective to some degree. But no other mammals use symbolic language, or have courts of law, or try to entertain each other. We are the only ones with a hyper-developed cortex.

How would it work? It has been proven that the brain does physically change in response to learning and adaptation. So it is plausible to imagine that the cortex is a matrix for social learning. It stores all the intermediate states on the long social journey each one of us takes from infancy to adulthood and on to the grave.

The cortex does not store individual experiences the way you would store marbles in a bag; rather, it stores developing subsystems. You need some kind of storage to accumulate and integrate experience over time, experience like complex social understanding, like intersubjective social learning. It is these skills of social mind-reading that are accumulated, integrated, and refined in the cerebral cortex.

Those cortical representations of complex social understandings are not retrieved, as from a file (because there is nobody to read such a file anyway). Rather the representations are the basis for creatively responding to new social situations. They form the basis for creative projection beyond what is known, to what might be, and at the same time, they constrain creativity to what is feasible and acceptable within the social community. So each time a new situation comes up (and every situation is new in some way), you do not need to start from square one. You start your response from what you already have in the vast network of your cerebral cortex and creatively project something from that.

Where does that creative urge or impetus come from? I don’t know. That’s the magic part.