Richard Rorty’s review of Marc Hauser’s Moral Minds is pretty good. Hauser argues for a fairly strong moral nativism, involving a dedicated moral capacity analogous to a Chomsky-style linguistic capacity. (Rawls floats the idea in A Theory of Justice.) Rorty is pretty widely known, but not a lot of non-philosophers know he was a top-flight philosopher of mind back when he was a philosopher (one of the first eliminative materialists), and he makes a pretty good case against the Chomsky analogy:
Hauser thinks that Noam Chomsky has shown that in at least one area — learning how to produce grammatical sentences — the latter sort of circuitry [i.e., general purpose] will not do the job. We need, Hauser says, a “radical rethinking of our ideas on morality, which is based on the analogy to language.” But the analogy seems fragile. Chomsky has argued, powerfully if not conclusively, that simple trial-and-error imitation of adult speakers cannot explain the speed and confidence with which children learn to talk: some special, dedicated mechanism must be at work. But is a parallel argument available to Hauser? For one thing, moral codes are not assimilated with any special rapidity. For another, the grammaticality of a sentence is rarely a matter of doubt or controversy, whereas moral dilemmas pull us in opposite directions and leave us uncertain. (Is it O.K. to kill a perfectly healthy but morally despicable person if her harvested organs would save the lives of five admirable people who need transplants? Ten people? Dozens?)
According to Chomsky, the parameters of the universal linguistic capacity can be set in different ways to produce the grammars of the various natural languages. But any setting of the parameters produces grammaticality, and is fully on a par, linguistically speaking. No language is better qua language, or more authentically languagey. Now, it may be that Yanomamo warriors, queer-stoning Islamists, and gay Dutch vegans are all living out various dialects of morality, but if so, then it turns out that morality is a pretty useless category. The liberal morality of sympathy, reciprocity, and fairness isn’t just an equivalent way of deploying moral judgment and emotion. It’s better than the alternatives. That’s basically the problem I’ve had with moral psychology based on Chomsky, such as John Mikhail’s and Sue Dwyer’s [pdf]. Rorty sums it up nicely.
Now, I’m a fan of Jonathan Haidt’s social intuitionist theory, according to which specific moral emotions and moral judgments are a function of different settings on several general dimensions of moral emotion. This is also a kind of parameters approach, but, unlike Chomsky-based theories, it is grounded in emotion rather than in a kind of innate knowledge (or “cognizance,” to use Chomsky’s dodge word). But the same critique applies. Certain ways of calibrating the dimensions of moral emotion are evidently, and seemingly paradoxically, immoral. Obviously, if you’re going to say that, you’re assuming the authority of one calibration as a secure basis for passing judgment on the others. Isn’t that arbitrary? Well, I think one thing to say is that it is possible to determine, in evolutionary terms, what moral capacities are for. As the environment of human interaction changes through history, certain ways of calibrating the moral sense fail to function in the appropriate way. So while we can say that a certain calibration is “a morality,” in the sense that it is a way of deploying the moral capacity, it is not authoritatively moral, in the sense that it violates the principles of a calibration that does serve the proper function of morality given the present social and institutional setting.
Now, I don’t actually think that’s quite right, because it’s not clear why the proper biological function of the moral capacity ought to have normative force. But I think it’s a place to start when trying to think through the bindingness of morality in a non-spooky natural world.
For a different view, John Mikhail defends Hauser’s book (which I haven’t read yet, by the way) on the Georgetown Law blog.