Sketchpad post…
The creature moved solely by instrumental, self-regarding rationality has a name: sociopath. A sociopath with supernatural epistemic and computational capacities is called homo economicus or “economic man.” A handful of sociopaths exist, but gods live only in myths and textbooks. Flesh and blood human animals are, like the naked mole rat, “hypersocial.” The idea of a species of hypersocial sociopaths is as close as one comes to biological contradiction, which may be why homo economicus has not been observed in the wild. Normal humans are born cooperators — “strong reciprocators” in the language of Gintis and Bowles. “Homo reciprocans” is a conformist beast freighted with culture. A norm sponge. But we humans are not socially programmed robots. We are clever conformists. We can glimpse the advantages in “defection,” in pretending to pull our weight and writing our own rules when it suits us. But why can we do this? Why can we defect? Why aren’t we socially programmed robots? Maybe this: the point of such high-fidelity conformism is the ability to adapt to our environments (or to adapt our environments to us) at the speed of cheetahs compared to natural selection’s dumb glacial grope. The point of high-fidelity conformism is to take advantage of adaptive innovation. So we are equipped with the ability to imagine a better way, which happens to include the ability to imagine shirking or bucking the norm. Sociopathy is not our problem. Imagination—the engine of adaptive conformism—is. Nature’s solution is our taste for “altruistic punishment,” the disposition to hammer norm shirkers despite the personal cost. How not self-interested are we? This not self-interested: We are so obsessed with conformity that we will hurt ourselves to hurt those who refuse to conform. And we don’t even need to know the point of conforming, or whether or not it helps. The stone heaved through the window of the suspicious gentlemen bachelors: this too is altruism. 
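The "altruistic punishment" disposition described here is the one studied in public-goods experiments (the strong-reciprocity literature of Gintis and Bowles; Fehr and Gächter's punishment design). A toy simulation of the mechanic, with every payoff and parameter invented for illustration:

```python
# Minimal sketch of a one-shot public-goods game with costly ("altruistic")
# punishment. All numbers are illustrative, not drawn from any experiment.

def payoffs(contributions, multiplier=2.0, punish_cost=1.0, fine=4.0,
            punishers=None):
    """Each player's payoff: endowment kept, plus an equal share of the
    multiplied pot, minus fines received (if a defector) and any
    punishment costs paid."""
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n
    punishers = punishers or []
    out = []
    for i, c in enumerate(contributions):
        pay = (10 - c) + share           # endowment of 10; keep what you don't give
        if c == 0:                       # defectors are fined by every punisher
            pay -= fine * len(punishers)
        if i in punishers:               # punishers pay a cost per defector
            pay -= punish_cost * sum(1 for x in contributions if x == 0)
        out.append(pay)
    return out

# Three cooperators contribute their full endowment; one defector free-rides.
no_punish = payoffs([10, 10, 10, 0])
with_punish = payoffs([10, 10, 10, 0], punishers=[0, 1, 2])
```

Without punishment, the defector walks away with more than any cooperator. With punishers willing to pay the cost, every cooperator's payoff drops too, but the defector's drops further: we hurt ourselves to hurt those who refuse to conform.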
To learn that humans are not sociopaths, are capable of other-regarding acts, are willing to sacrifice ourselves to keep hearts and minds in harmony, is not to discover there is no problem. A main lesson of the Scottish Enlightenment is the possibility and necessity of recruiting potentially destructive self love into the service of public happiness. But if self love is not in fact the mainspring, or the only spring, of human action, then maybe their lesson for us now is that we must also learn to civilize the capacity for norm-enforcing self-sacrifice. What matters is not how we are motivated. What matters is how our higher-order norms (our institutions?) channel and coordinate our various motives to produce the elements of flourishing. What matters is which norms we’re willing to pay dearly to enforce.
I'm sorry, but I find this passage immensely confusing. Are we the sociopaths? Or is it homo economicus? In any event, your comparison of a sociopath to homo economicus is a false one. Cooperation and reciprocity are not necessarily irrational behaviour given certain preferences. If we were truly irrational and had a high sense of reciprocity, then when someone stole from us we would say “good for us” and commence to hand them our possessions while resenting the entire process.
Let me try to explain my understanding of what Will is saying here: homo economicus is, if you think about it, basically a sociopath. But people aren't sociopaths, which is why homo economicus isn't such a great model. People are conformist, both in terms of themselves conforming and in terms of getting others to conform. In fact, people are so driven to make others conform that they're willing to sacrifice their own immediate self-interest to do so. The Scottish Enlightenment (Adam Smith and such) model was based on using individuals' enlightened self-interest to produce an outcome that is better for everyone. We need to put “norm-enforcing self-sacrifice” to such a good use too.
A possibly-incoherent sketchpad response: Of course it's possible that we could have evolved to be Goody Two-Shoes rule-followers. But we didn't. So why not? The question poses a real problem because social cooperation is so beneficial. Assume X is the observed quantity of social cooperation in humans. If X quantity of cooperation is so astonishingly great, isn't X + 1? And why didn't nature select for X + 1? It seems unproblematic to assume that the world would be better if we could just follow rules designed to make life optimally harmonious. Maximally efficient work. No wasted efforts — and less effort to begin with. Minimal suffering. Nothing wasted on conflict and strife. Marxist/libertarian utopia without the technological eschatology. So why didn't we evolve to be Goody Two-Shoes? Because in practice the Marxist/libertarian utopia is underspecified. It imagines a world without law. Man is a craftsman in the morning, fisher in the afternoon, and scholar by the dusklight; but not according to any rules — he just does what he wants. But in no imaginable world could people agree on the extraordinarily complex rules that would produce and distribute the economic output in a maximally efficient way. The coordination problems are just too hard to solve. Even in very small populations, the number of permutations of choices over time in the social game is mind-boggling. Makes Go look like a child's blocks. I'm assuming that to make everyone perfectly content with the Goody Two-Shoes utopian order, everyone would have to have more-or-less perfect knowledge and therefore complete trust in the others (X will go hunt, return, and share; Y will gather; Z will &c.). But then why didn't we evolve perfectly reliable rule-following? A few explanations jump to mind.
Maybe there are computational limits that prevent us from selecting the optimal or right rules (and changing them when appropriate), either in human biology (we didn't evolve to be smart enough, or haven't yet, to develop and follow sufficiently complex rules, maybe because there hasn't been enough time) or in biology in general (it isn't possible for computers built out of amino acids to develop and follow such complex rules — if it were, and there are no limits to computational growth inherent in human design, and there's been enough time, we'd already have developed that capacity unless it's impossible or, for unknown reasons, catastrophic) or in nature itself (it's possible but extremely unlikely that human cognition is the upper limit of computational sophistication, or that computational capabilities more sophisticated than a human being's aren't very useful in practice). Maybe technology will dissolve the problem and bring on the utopia — we'll all just do what the computers say. But until then, conventional moral rules and law, and the human instinct to blindly enforce both, are ugly substitutes. They serve to solve the coordination problem well enough not to have killed us yet. But there's a problem. We can select the wrong rules because we aren't smart enough to see which ones are the right ones, and that can get us killed. One of the ways human nature copes with this problem is by giving us the desire to cheat under certain circumstances. It's not clear to me whether the cheating impulse is in equilibrium with the norm-enforcing impulse, and even if it was in the ancestral environment, what does that equilibrium mean now that we've invented nuclear weapons? I tend to agree that the Scottish Enlightenment's teaching — that the content of institutions, broadly understood, is important here — is a valuable one. Perhaps the valuable one. But why doesn't this just kick the can down the road?
Given the original reasons for the defection impulse, how can we ever know we've encouraged the right defectors?
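The "mind-boggling permutations" point above can be made concrete with a back-of-envelope count (all numbers invented): with n agents each picking one of k actions per round over t rounds, the number of joint histories is k to the power n·t, which dwarfs Go's often-quoted roughly 10^170 legal positions almost immediately.

```python
# Joint action histories for n agents, k actions each, over t rounds.
# Exponential in both group size and time horizon.

def joint_histories(n_agents, k_actions, t_rounds):
    return k_actions ** (n_agents * t_rounds)

# A 30-person band, 10 choices each, 20 rounds: 10**600 possible histories,
# versus roughly 10**170 legal Go positions.
small_band = joint_histories(n_agents=30, k_actions=10, t_rounds=20)
```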
Will, you are confusing the libertarians. What they want to hear is stuff about guns and John Galt and the first two pages of the Econ 101 textbook (not more!) and why global warming is the one part of science they ought to disregard and why every other mode of thought or policy besides libertarianism is a slippery slope to The Road to Serfdom. Posts like this one undermine the carefully constructed libertarian mental model, where square reality is hammered into round theoretical holes, and cannonballs fly through frictionless vacuums. Down your road is the messiness of reality, complete with human irrationality and collective action problems and moral ambiguity. Have some compassion for your Reason-subscribing readers in their safe, dream-like state.
(Snark aside… great post, this is going somewhere. One day you wonder whether WW2 was worth fighting and I think you're another hopeless libertarian following elegance over a cliff. The next moment you come up with stuff like this.) Here's a challenge: if you follow this line of thinking and end up on the left (or the right, for that matter), will you reflexively disown it?
“But people aren't sociopaths, which is why homo economicus isn't such a great model.” The map isn't the terrain, so it's a bad map? Isn't it possible that for the collective behavior we want to understand (or most of it), hypersociality is second-order? The beehive acts as if it reasons about the optimal division of labor between indoor and outdoor bees when in fact individual bees are unthinkingly reacting to certain pheromones. Ascribing the ability to reason to the hive doesn't do injury to our ability to predict its behavior. Make no mistake: economics is about predicting the behavior of the hive, not the individual bees. Humans respond to incentives… or they appear to at astonishingly low levels of aggregation. There may be a few exceptions (name one!), and maybe then we need to bring in more complex models of human behavior, but let's not throw the baby out with the bathwater.
Hat tip to anonymouse, clarified a lot!!!
What is the empirical basis for saying people are conformists, or are hypersocial? Or anything else, for that matter? We all differ in our tastes, our convictions, our behaviour, our virtues and vices, etc. And that's just within one culture at one period in human history. If you widen your observations to take in different cultures and societies at different times and places, you see an even greater array of human motivations and behaviours. Which leads me to ask how anyone can take seriously this kind of pseudo-scientific reductionism? Especially when the real work of philosophy should be to argue for an ethical position (altruism versus egoism, say) and not assume that we are programmed to act irrespective of the norms our culture accepts, or the values we choose to adopt.
Wall of text CRITS you for 100,000 health points!
You die so hard you get punted from the game and bant.
This is brilliant stuff. This line: “The creature moved solely by instrumental, self-regarding rationality has a name: sociopath.” reminded me of GK Chesterton's epigram: 'A lunatic is a man who has everything but his reason.' This: “The stone heaved through the window of the suspicious gentlemen bachelors: this too is altruism.” is brilliant, profound and unforgettable. It will be on my mind for weeks.
Terrific, insightful stuff.
Craig, great post. To further your thought: nature or nurture? Even in a benign, altruistic utopia with a history of peace and harmony, don't you think there will be selfish, antagonistic pains-in-the-ass whom everyone (nice as they are) will just have to give up on and avoid? Face it: we are all different in as many ways as we are alike. We have the capability of responding to the same situation in unlimited ways, based on our unique experiences as well as that most unique and tough-to-predict self. To some degree, who or what we are is just embedded in us. Does our environment influence the always-evolving product? You bet. But there is an important ingredient that we are born with: self.
Humans are not eusocial hive creatures like naked mole rats (mole rats being the only known eusocial mammals). There are no hermit mole rats.
Bryan Caplan argued in The Economics of Szasz that the insane are often more rational in the economic sense than average:http://econlog.econlib.org//archives/2006/09/th…
Interesting. So, what would it mean to set up a world in which norm-enforcement is harnessed for the greater good? The simplest answer is that norm-enforcement is a resistance to change, so we should set up a good world with good norms, and then norm-enforcement will help us by protecting that world against bad change. But that's not really interesting, because we already know that we want a good world with good norms. It's not interesting that conformism could help stabilize that world. A second answer would be that we could use this understanding of conformity to create a psychologically-aware theory of social change. If you put this theory together with the right vision for society, you get a blueprint for beneficial social change. It would be worth mentioning that social change is a deeply-studied topic. Anyone who wants to change the world has to know something about conformity (and self-interest, and all sorts of other psychological phenomena). Is Will proposing to extend this field of study in a certain way? Maybe sic the economists on it? A quantitative (economics-style) approach can certainly be helpful, although I would guess some work has been done in this direction… Finally, conformity and economics seem to intersect especially in the fields of advertising, innovation, and politics. It might be worth investigating the psychological or economic subfields relevant to these topics, to see whether the social scientists are using fuller models of humans when they study what happens here. The thing is, my guess is that they already are. But it's possible they might not be.
Will, there really is no such thing as altruism. Every action someone does, they do for their own reasons. Even the person who wants to help the poor does it because they get off on helping the poor (it makes them feel better or less guilty, or helps them get to heaven). Even the person who jumps on the grenade does it because they want to see their friends live (that provides greater joy, meaning, etc. than preserving his or her life).
This is the other possible response. Rather than enriching Homo Economicus by allowing him to pursue ends which reduce his expected utility, enrich Homo Economicus by widening the set of “goods” she pursues, to include abstract goods like social cohesion, or another person's well-being. In other words, we preserve the notion that Homo Economicus is solving an optimization problem, but now it is solving a less crude optimization problem. There's something to this, and it seems useful to preserve the idea of optimization. In fact, I doubt there's any alternative. When you make a decision you weigh options, and you must reject some options as bad. That's optimization. Every model of a human must have this property.
I have just noticed that I missed the crucial word from the GK Chesterton epigram that I quoted above making nonsense of it. It should read:'A lunatic is a man who has lost everything but his reason.'D'oh!
This post was referred to as a “sketchpad post,” not a formal philosophical paper. I think it is an unfair burden to ask for empirical warrants on it. The give-and-take of a blog forum isn't the best place to ask someone for empirical proof of every single claim every time it is uttered. It's too quick of a moving forum for that. What, for example, is your proof that the “real work” of philosophy is arguing for ethical positions, particularly black-and-white manifestations of ethical positions (“altruism OR egoism” … “Coke OR Pepsi” … why not neither?)? At this point, I think Will is exploring one of the premises that we base our ethics on. If another premise is true, then perhaps another intellectual direction, and thus another ethical position, might be justified.
This isn't useful at all. It replaces an oversimplified view of human nature (people are selfish) with a content-less tautology (people's revealed preferences show they do what they do).
First, it's not entirely a tautology to say that a human's behavior is reducible to a computational process. That's more or less what this is saying: humans can be simulated as actors that quantitatively weigh options. Second, revealed preference is a red herring. It is perfectly reasonable to have a research program which says “I don't really care what preferences are; all I care about is the accuracy of my model's predictions of observed behavior.” Third, simulation is a good thing, not “tautological.” Once you can simulate humans you can predict the impact of various policies/actions. If you can simulate their happiness you can predict which policies or actions will make people happiest. The name of the game is simulation and making good predictions. Enriching the “homo economicus” actor to create a higher-fidelity simulation can only be a good thing. Of course, sometimes you get good enough simulation with a low-fidelity homo economicus. In such cases “simulation” degenerates into solving some simple linear equation (demand/supply, etc.). So the goodness of different varieties of homo economicus is entirely a question of how appropriately they trade off between simplicity and accuracy, for the purposes of a particular analysis.
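A minimal sketch of what "enriching the actor while keeping optimization" could look like, with invented payoffs and an invented `other_weight` parameter: the agent still maximizes, but its utility function can price in another person's payoff, as in the "widen the set of goods" proposal above.

```python
# Sketch of an enriched homo economicus: still an optimizer, but its utility
# function can weigh abstract goods (here, another person's payoff).
# Payoffs and weights are purely illustrative.

def choose(options, other_weight=0.0):
    """Pick the option maximizing own_payoff + other_weight * other_payoff."""
    return max(options, key=lambda o: o["own"] + other_weight * o["other"])

options = [
    {"name": "keep",  "own": 10, "other": 0},  # pocket everything
    {"name": "share", "own": 6,  "other": 8},  # split, at a cost to self
]

selfish = choose(options)                        # classic homo economicus
reciprocal = choose(options, other_weight=0.8)   # other-regarding preferences
```

Both agents solve the same kind of optimization problem; only the utility function differs, which is the point: the model stays a maximizer while the crude "own payoff only" objective is replaced by a richer one.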
“A sociopath with supernatural epistemic and computational capacities is called homo economicus or ‘economic man.’” An insult to sociopaths. Good stuff, Will.