

Morals Without God (or Gods)

An Online Discussion

I engaged in an online debate about these issues at IntellectualConservative.com in July of 2007. I'd suggest reading my paper from this link, since the footnotes are clickable hyperlinks and the formatting is a bit cleaner, but the published version is here. Much of it is adapted from my discussion below.

I was responding to the paper What kind of car would Jesus drive to take his girlfriend to an abortion clinic? by Dr. Phillip Jackson; his response to my response is here: The True Nature of Human Morality: A Response to the Critique "Universal Morality And The Morality Of The Universe".

The comments following my paper are interesting and hopefully enlightening.

How can anything be 'good' or 'evil' without a Lawgiver?

Many theists assert that morals cannot exist apart from an absolute standard, a 'lawgiver' who dictates the rules. Without a God, they claim, there can be no way of defining 'good' and 'evil'. At best, it's just a matter of opinion.

Obviously, I disagree. Aside from the fact that this runs smack into the Euthyphro Problem, I think that morality can be derived, and justified, without recourse to a supernatural 'beat cop' to enforce it.

The Logical Conclusion

Some people actually claim that they see no reason to be good without God telling them to do so. I have my doubts. I wonder if they really wake up in the morning and think, "What a beautiful day! I think I'll go down to the mall and shoot a bunch of people. Oh, wait - God says I shouldn't do that, darn it all. Guess I'll just go fishing."

Consider your own life. Do you really behave morally most of the time because you're scared that God will punish you? No? Neither do most people.

"There's a general assumption that people make that religious people are more honest than non-religious people," says [Michael] Josephson. "They are," he says, pausing for emphasis. "Slightly."

Most people behave morally most of the time, or society would utterly collapse. Studies like the one above show that religion has some effect, but not a strong one. What else might cause people to have an inclination to behave morally?

The Chess Analogy

David Hume is generally credited with describing the so-called is-ought problem. The claim here is something like "it's impossible to determine 'how things ought to be' from 'how things are'", or "morals cannot be derived from simple facts about the world". Of course, I disagree rather strongly.

Consider the game of chess. There are certain fundamental structures of chess that define it - the 'rules of the game'. An 8x8 board, 8 pawns per side that move in certain ways, two rooks per side that move in other ways, castling, the initial configuration of the pieces, etc.

Now, when playing chess, there is no rule that you can't sacrifice your queen in the first few moves of the game. It's illegal to move your king to a threatened square, but it's perfectly acceptable by the rules to stick your queen in front of a pawn at the start of the game.

However, if you want to win the game, you shouldn't do that. There are almost no situations (at least, assuming evenly-matched opponents) where giving up your queen at the start will lead to your victory. Similarly, it's rarely a good idea to move your king out to the center of the board. It's usually a bad move.

Note words like "shouldn't" and "bad". They are value judgements. They prescribe 'oughts'. They are not part of the 'rules' of chess. From where do they come?

They arise from the combination of two things - first, the rules and structure of chess, and second, the player's desire to win the game. They are strategic rules. Players are free to disregard them, but they do so at their peril - it's unlikely to further their goal.

Hopefully the parallel to wider life is obvious. We have 'rules of the game' in life, too - the laws of physics, for example. We are not free to violate these strictures. (Well, technically, if we find a case where they are violated, we reformulate the laws and our theories to take into account the anomalous case.) Many of them are so well-established that it's difficult to see how they could be wrong to a significant degree. (Unless you can produce a magic carpet, I think we can expect to have to obey the laws of gravity, for example.)

We have desires and goals as well. Some are very basic, inborn, and apparently universal (air, water, food, sleep, shelter, etc.); some are so common that only extremely rare individuals seem not to need them (e.g. the company of other people); and some are deeply personal and not common at all (a desire to write a novel, say).

Might there be strategies that would arise from the combination of natural laws, and our own desires? What might they look like?

A Brief Digression - 'Objective' and 'Contingent'

There's an objection that frequently pops up at this point - some people claim that a 'strategic rule' like "don't sacrifice your queen for a pawn at the start of the game" isn't 'objective'. It's contingent on your actual goals. If your goal isn't to win the game, then it might be a good move. Maybe you're playing with a small child and just want to let them win.

But this is a confusion about 'objective' and 'contingent'. Let's consider a different situation. Imagine a room full of different opaque objects scattered around. You have a spotlight, and you point it in some direction.

Now, the objects are going to cast shadows, right? The specific shadows that get cast will depend on the shape of the objects and what direction you point the spotlight; if you point it in a different direction, you'd get different shadows.

But that doesn't mean the shadows aren't real! It's objectively true that, given (a) particular objects, and (b) a particular direction the spotlight's pointed, then there will be (c) particular shadows. If someone walks into one of those shadows, they won't be imagining that less light is falling on them. There's an actual, objective fact-of-the-matter about those shadows.

Shadows are cast when light smacks into opaque objects. Similarly, strategies arise when goals smack into fixed constraints. And they are equally 'objective'.

I'm not a big fan of Philip K. Dick's fiction, but he did say something once that I found profound. He said, "Reality is that which, when you stop believing in it, doesn't go away." If you want to win a chess game, then by the rules of chess, throwing away your queen at the start is a bad idea - and it makes no difference if you believe it's a bad strategy or not. If you play that way, you will be more likely to lose - objectively.

Now, there is a sense in which moral strategies wouldn't necessarily be universal - what if people have different goals? But let's return to our 'shadow room' analogy for a moment.

Let's say you have to stand in a particular half of the room. And the spotlight's on a mount so that you can't point it at the walls behind you - you have to point it into the room. You still have plenty of freedom to wave it around, but not unlimited freedom. There are some limits on your range.

Probably there'd still be a lot of variation in the shadows in the room. But it also seems likely that there'd be some constants, too. Some areas of the room would never be illuminated, because from the angles you have available, the spotlight is always blocked by some object or another. Maybe some areas would always be lit up, because there just was no way to put an object between it and the light.

Now, is there such a thing as a 'human nature'? Does it actually mean something to say, "That person is human"? Certainly there's a rather wide range for the category 'human' - humans are quite diverse. But the range isn't unlimited. Perhaps we can find some universals, some strategies that apply equally to all humans.

Game Theory

Game theory attempts to analyze interactions among competing and cooperating agents in the context of systems of rules governing the options available to them. It's exactly the tool we need if we're going to investigate strategies with the potential to be widely applicable.

The Prisoner's Dilemma

One useful model in game theory is the Prisoner's Dilemma. Basically, two players have the option of cooperating or betraying each other. If both cooperate, there is a moderate payoff, e.g. 3 points. If one cooperates and one betrays, the betrayer gets a large payoff (say, 5 points) and the cooperator gets nothing. If both betray, there is a small payoff (e.g. 1 point each).

What's the optimal strategy in this case? Note that betrayal dominates: if your opponent cooperates, betraying pays 5 instead of 3; if your opponent betrays, betraying pays 1 instead of 0. (Put another way: against an opponent who's equally likely to do either, betraying averages 3 points (5 * 50% + 1 * 50%), while cooperating averages only 1.5 points (3 * 50% + 0 * 50%).) Rationally, if you're playing the game once, it's in your best interest to betray.
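To make that concrete, here's a minimal sketch in Python (the payoff numbers are the ones above; everything else is just bookkeeping):

    # One-shot Prisoner's Dilemma. PAYOFF[(my_move, their_move)] -> my score.
    # 'C' = cooperate, 'B' = betray.
    PAYOFF = {('C', 'C'): 3, ('C', 'B'): 0,
              ('B', 'C'): 5, ('B', 'B'): 1}

    # Whatever the opponent does, betraying scores strictly higher:
    for their_move in ('C', 'B'):
        print("they play", their_move,
              "| betray pays", PAYOFF[('B', their_move)],
              "| cooperate pays", PAYOFF[('C', their_move)])
    # they play C | betray pays 5 | cooperate pays 3
    # they play B | betray pays 1 | cooperate pays 0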

The Iterated Prisoner's Dilemma

But what if we change the situation slightly, by repeating the game over and over with the same opponent? If you both betray all the time, your payoff is 1 point per game. If you both cooperate all the time, your payoff is much better, 3 points per game. What about more complex strategies?

It turns out that an extremely simple strategy is also among the best. It is called "Tit for Tat". It starts out cooperating, and simply repeats the move that its opponent played the last time. Note that if the other player cooperates, TfT will be friendly, but if it is betrayed, TfT will retaliate. In the rules given above, it is very difficult to beat TfT's usual score.

If we vary the rules a bit, and allow for imperfect players where there can be occasional 'accidents' where someone mistakenly betrays when they 'intended' to cooperate, or vice versa, things get even more interesting. It is possible for even a pair of TfTs to get caught in a loop of 'mutual recrimination', with both betraying over and over. Their payoff per game plummets from 3 to 1. In such situations, a more 'forgiving' strategy actually does better.
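Here's a small simulation of that effect - a sketch, not anyone's canonical implementation; the 5% 'accident' rate and the one-in-three forgiveness chance are assumptions picked purely for illustration:

    import random

    PAYOFF = {('C', 'C'): 3, ('C', 'B'): 0, ('B', 'C'): 5, ('B', 'B'): 1}

    def play(strat_a, strat_b, rounds=100000, noise=0.05):
        # Iterated game; 'noise' is the chance a move gets flipped by accident.
        score_a = score_b = 0
        last_a = last_b = 'C'                  # everyone starts out friendly
        for _ in range(rounds):
            move_a, move_b = strat_a(last_b), strat_b(last_a)
            if random.random() < noise:        # oops - an accidental betrayal
                move_a = 'B' if move_a == 'C' else 'C'
            if random.random() < noise:
                move_b = 'B' if move_b == 'C' else 'C'
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            last_a, last_b = move_a, move_b
        return score_a / rounds, score_b / rounds

    def tit_for_tat(their_last):               # echo the opponent's last move
        return their_last

    def generous_tft(their_last):              # like TfT, but forgives 1 time in 3
        if their_last == 'B' and random.random() < 1 / 3:
            return 'C'
        return their_last

    print(play(tit_for_tat, tit_for_tat))      # well below 3: retaliation loops
    print(play(generous_tft, generous_tft))    # noticeably closer to 3

Run a long enough game and the plain TfT pair spends much of its time locked in cycles of mutual retaliation, while the forgiving pair keeps steering back to mutual cooperation.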

The Conspiracy Of Doves

Another model worth noting is the "Hawks and Doves" model. Imagine a population of Hawks and Doves. Hawks always fight for resources, until seriously injured. Doves run away instead of fighting, and split resources 50/50 when paired with another Dove. Imagine that each 'resource' is worth 100 points. If a Hawk fights a Dove, the Hawk gets 100 points and the Dove gets nothing. If a Dove encounters another Dove, each gets 50 points. If a Hawk fights another Hawk, it has even odds of getting 100 points (if it wins) or -300 (from being injured losing the fight).

Given these ratios, everyone should be a Dove - everyone would average a 50 point payoff. But a lone Hawk among Doves gets a hugely disproportionate payoff; the "Conspiracy of Doves" is not stable. In the above situation, the stable state of the population is 1/3 Hawk, 2/3 Dove, and the average payoff is only 33 points. (Note that this still applies if there's really only one kind of bird, but each bird has some chance of acting like a Hawk or a Dove.)
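If you want to check that 1/3 figure, the equilibrium condition is simply that Hawks and Doves must earn the same expected payoff - otherwise the better-paid type would spread. A minimal sketch (assuming, as the numbers above imply, that a Hawk-vs-Hawk fight is won or lost with even odds):

    # Hawk-Dove with the values above: resource = 100, losing a fight = -300.
    def hawk_payoff(p):        # p = fraction of Hawks in the population
        # vs a Hawk (probability p): win 100 or lose 300 -> -100 expected
        # vs a Dove (probability 1-p): take the whole resource
        return p * -100 + (1 - p) * 100

    def dove_payoff(p):
        # vs a Hawk: run away, 0;  vs a Dove: split the resource, 50
        return (1 - p) * 50

    # Setting the two equal: -100p + 100(1-p) = 50(1-p)  =>  p = 1/3
    p = 1 / 3
    print(hawk_payoff(p), dove_payoff(p))      # both ~33.3 - down from the Doves' 50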

The Real World

There is a key difference between games like chess and games like the Iterated Prisoner's Dilemma. In games like chess, there is a winner and a loser; one player has a positive outcome, the other negative. Such games are called 'zero-sum'; the benefits one player receives are equal and opposite to the penalties the loser suffers. Games like the IPD, on the other hand, are 'non zero-sum'; it's possible for all players to lose, or all to profit, or a mix of both.

I think we can agree that, overall, our lives are 'non zero-sum'. We have the option of cooperating with others, fighting, betraying, helping, lending and borrowing, and so forth. We can see analogs of Hawk, Dove, and Tit for Tat strategies in daily life.

For example, it sure looks to me like Israelis and Palestinians are stuck in a loop of mutual recrimination that results in an overall worse situation for both groups. But neither side is willing to forgive the other, and they've been at it so long they can't even imagine forgiving each other.

An article in Scientific American ("The Dynamics of Social Dilemmas", March 1994, pp. 76-81) summarized some research in this field. It points out that Tit-for-tat is not directly applicable to some kinds of group interaction, since it can be difficult to punish one defector in particular. However, there are analogous results in broader groups. In an extremely large group, the effect of any one defection is diluted, but the longer one expects the interaction to go on, the less attractive antisocial behavior is. Your willingness to cooperate is affected by your estimate of how many people around you are cooperating, too.

Another Scientific American article points out some interesting evidence derived from studying how people respond to the Traveler's Dilemma, a variation on the Prisoner's Dilemma. In this game, people - all kinds of people, from countries around the world - routinely and apparently instinctively behave in a way that is not strictly rational but leads to a better outcome than if everyone was playing in a ruthlessly rational manner.
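For reference, the usual formulation (this is the standard textbook version, due to Kaushik Basu, rather than anything spelled out above): both players independently claim a value in some range; both are paid the lower claim, with the lower claimant collecting a small bonus and the higher claimant paying a matching penalty. A sketch:

    # Traveler's Dilemma, standard form: claims from 2 to 100, bonus/penalty of 2.
    def payoffs(claim_a, claim_b, bonus=2):
        low = min(claim_a, claim_b)
        if claim_a == claim_b:
            return low, low
        if claim_a < claim_b:
            return low + bonus, low - bonus    # the undercutter gets the bonus
        return low - bonus, low + bonus

    print(payoffs(100, 100))   # (100, 100): what real people tend to play
    print(payoffs(99, 100))    # (101, 97): undercutting by one pays...
    print(payoffs(2, 2))       # (2, 2): ...but that logic unravels to the bottom

Since each player always has an incentive to undercut the other by one, the strictly 'rational' equilibrium is the minimum claim - yet real players overwhelmingly claim high numbers, and do far better for it.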

The "Moral Sense"

Are humans similar enough to each other in fundamental desires and capabilities that a basic 'universal moral framework' is possible? I believe so; while there may be aberrant individuals who have some inborn need to become a serial killer, I think such people are few and far between. Most people understand the Golden Rule and similar guidelines. Almost everyone is willing to cheat sometimes, under some circumstances, but I think nearly everyone understands the reasons for moral behavior.

There exists at least some research to support this view. An interesting point made in that article - most people make a distinction between "right and wrong" and "permitted and forbidden", between following morals and following conventions. Raising your hand before you speak is conventional in class, but not at a romantic dinner for two. However, it's not morally wrong to raise your hand for permission to speak to your date (though you probably won't make a good impression). People do understand that table manners vary across cultures, but stealing is considered to be wrong in just about every culture.

Some people don't make this distinction - they don't understand that some things are just conventions (e.g. driving on the left side of the road) and some are morals. They are called 'psychopaths'. What does this tell us about those who try to say that morals are fundamentally just a matter of authority, of following the rules of whoever happens to be most powerful (e.g. God)?

The fact that psychopaths are exceptional shows that there is some kind of innate "moral sense" that most people have. Some theists point to this as a sign that some kind of god(s) planted it there.

But we've already established that moral behavior has - or, at the very least, can have - practical consequences and practical justification. People willing to cooperate and behave morally with each other - willing to trust and work with one another as part of a group - have a powerful advantage over those who don't, in a very wide range of situations. So there's a perfectly reasonable evolutionary reason for such a "moral sense" to exist. And, additionally, there is evidence that such a "moral sense" does, in fact, exist.

Asking why we have such a moral sense is like asking why we have a desire to eat. And an evolutionary account of such a sense seems more reasonable, overall, than a theological one. We all have a desire to eat, and there is a great deal of commonality in what and how we eat... but there are a lot of differences as well. I had some truly excellent steak on my trip to Brazil... but I was quite surprised at the sour cream on it. If you want an example of culinary diversity, just look at the variety of pizza toppings that are popular around the world.

Similarly, the details of moral behavior vary quite widely across the world. Consider the differences in dress style between a Polynesian islander, a typical American, and an Afghan woman in a burqa. And none of them feels any twinge of conscience about how they dress. Why should this be the case if a single deity were impressing the same moral sense on everyone?

That's not to say that there aren't a lot of things in common among morals across cultures. There are structures that are common to nearly all human languages, too. That doesn't mean that translating between languages is generally easy. However, while it may take a fair amount of time and effort on occasion, there doesn't appear to be a concept that can't be translated, or at least conveyed. English might not have a single word conveying the concept of schadenfreude, but once the definition is explained, there's no 'limitation' of the English language that prevents English-speakers from grasping the idea.

This article by Steven Pinker discusses the evolutionary basis for the moral sense (and the evidence for it) in great detail.

We All Reason Like This Anyway

Every theology actually uses pragmatic reasoning. Christianity, Islam, etc. propose punishments for disobeying God and rewards for following It (and note: not vice-versa[1]), Buddhism talks about escaping suffering and reincarnation, and so forth. Every one of them claims that it's in a person's best interest to follow their teachings.

When preachers condemn the state of the world and how terrible things are, they are very explicitly appealing to self-interest to encourage people to behave morally. Obviously I think they are wrong about the actual nature of reality, and this leads to a lot of silly prohibitions and dubious 'goods' like, say, blowing up people who don't believe like them, but you'd expect erroneous conclusions from false premises.

An Objection

Some would argue that developing such moral codes from practical consequences is a hopeless task, that reasoning out morals from the uncountable 'facts of nature' and our own desires is too big a problem to tackle, that no final moral code of Ultimate Truth could be developed. The following quote elaborates on this. It's from Daniel Dennett's book "Darwin's Dangerous Idea", which is well worth reading:

"I do not intend this to be a shocking indictment, just a reminder of something quite obvious: no remotely compelling system of ethics has ever been made computationally tractable, even indirectly, for real world moral problems. So, even though there has been no dearth of utilitarian (and Kantian, and contrarian, etc.) arguments in favor of particular policies, institutions, practices, and acts, these have all been heavily hedged with ceteris paribus clauses and plausibility claims about their idealizing assumptions."

It's important to realize that there's quite a difference between 'computationally intractable' and 'nonexistent'. Newton's laws of motion have no general closed-form solution for any system of three or more bodies. We can't solve them exactly even for the Earth/Moon/Sun system, let alone for the solar system with its myriads of objects whizzing around. Add in General Relativity and you're really screwed.

And yet, we can still get space probes to Neptune (and, in 2015, Pluto). For many specific cases, there are very reliable solutions. And progress is continually being made - read about, for example, the so-called "Interplanetary Transport Network".

Dennett was simply stating that the real world - and hence, real-world morality - is very complex. This is, indeed, "quite obvious", as he says. (He then goes on to elaborate on ways to manage that complexity, similarly to how physicists have worked on ways to manage the complexity of the real world for centuries now.) No simple moral code will work everywhere. Sane individuals realize this - it's the fanatics who follow the letter of every single rule they receive that, for example, blow themselves and other people up. How many difficult moral quandaries have you wrestled with in your life?

Engineers have an even more complex job than physicists, putting together working mechanisms in the face of many uncertainties and unknowns. They frequently have to resort to 'rules of thumb', approximations, and techniques that have historically worked, even if why they work isn't always fully understood. As Alan Cox pointed out, "[P]eople built perfectly good brick walls long before they knew why cement works." Engineers generally have to design conservatively and build in redundancy and margins for error.

Engineering moral (and legal) codes is similarly complicated... but that does not imply that it's impossible. Engineering continually improves and finds new ways of doing things, sometimes better than the old, sometimes merely applicable in certain special cases. There may never be an Ultimate Engineering that can accomplish all possible things... but that doesn't mean we should abandon engineering.

Empirical Evidence

There is good evidence that this style of thinking actually does work in the real world. As that link makes clear, violence in real terms has been decreasing for centuries - decreasing by huge amounts. E.g. "24 homicides per 100,000 Englishmen in the fourteenth century to 0.6 per 100,000 by the early 1960s".

The article proposes several causes for the reduction in everything from personal homicide to wars between nations. First, effective government that imposes reliable order helps prevent people from taking defense and vengeance into their own hands. Second, when life isn't cheap, when everyone has a potential to live longer and healthier, there's less incentive to initiate violence on others. Third, as societies become more complex, and become non-zero-sum games, other people become much more valuable alive than dead. There's far more incentive to cooperate than to violate.

It's difficult to argue that religion had much influence on this; indeed, not many people argue that we're more religious now than in the fourteenth century - just the opposite, in fact. It seems that technology and economics and medicine have had far greater a civilizing effect on people than religion. And as noted before, there's no good evidence that religion makes that much of a difference in people's actual behavior. As H. L. Mencken said, "People say we need religion when what they really mean is we need police." Case in point:

As a young teenager in proudly peaceable Canada during the romantic 1960s, I was a true believer in Bakunin's anarchism. I laughed off my parents' argument that if the government ever laid down its arms all hell would break loose. Our competing predictions were put to the test at 8:00 A.M. on October 17, 1969, when the Montreal police went on strike. By 11:20 A.M. the first bank was robbed. By noon most downtown stores had closed because of looting. Within a few more hours, taxi drivers burned down the garage of a limousine service that competed with them for airport customers, a rooftop sniper killed a provincial police officer, rioters broke into several hotels and restaurants, and a doctor slew a burglar in his suburban home. By the end of the day, six banks had been robbed, a hundred shops had been looted, twelve fires had been set, forty carloads of storefront glass had been broken, and three million dollars in property damage had been inflicted, before city authorities had to call in the army and, of course, the Mounties to restore order.
- The Blank Slate, by Steven Pinker

Now, presumably, God didn't go away that day. But the police did. Which had a stronger influence on people's behavior?

There are other studies that indicate that religion - at least, too much religion - is bad for a society and its members. To quote, "In general, higher rates of belief in and worship of a creator correlate with higher rates of homicide, juvenile and early adult mortality, STD infection rates, teen pregnancy and abortion in the prosperous democracies." (Actual study here.)

The Conclusion

By cooperating with others, we improve our own lives. I enjoy living in a house that I could never have built by myself, and eating food that I could never have grown by myself, and using a computer that I could never have built on my own, and listening to music I could never have composed, and so forth.

I contend that I am ethical and moral, that people in general are ethical and moral, because the alternative is running naked in the woods fighting over scraps of food.


[1] If you wanted to find the really altruistic people, wouldn't you punish them for doing good? Only someone with genuine concern for their fellows would do good then...