Colin Camerer
Robert Kirby Professor of Behavioral Finance and Economics
California Institute of Technology

7th Annual Lecture
February 25, 2005

Behavioral Game Theory

Camerer
So I define behavioral game theory – first, what is game theory? It’s very useful, I think, to be reminded, as in almost any good course, that game theory has two components. There’s the idea of a game: there are players, there are strategies, the players have information, there’s an order of moves that could be endogenous - usually it’s exogenous - and the moves together yield a set of outcomes. In the lab we usually talk about the outcomes in terms of dollars or other currencies, depending on where we’re running the experiments. But they could be fitness in biology, territory in war, status and prestige, and so forth. So it’s pretty important that game theory is not just about people doing things for money, and often the games are very complex.

Now the idea of a game and the taxonomy - the idea that games parse the social world - is by itself extremely useful. One of the most basic things you can teach is just the parts of a game. The theory part of game theory, which we spend most of our time on in teaching, with mathematics and so on, is a set of ideas about what might actually happen in a game. And most of game theory - really, almost all of game theory up until the eighties and nineties, and perhaps increasingly less so now - revolves around the idea of equilibrium, almost in the physics sense of equilibrium. An equilibrium is where people guess correctly what others will do, and they’re choosing their own best response based on those guesses. So the way to think of this is as a kind of “no surprise” condition: when everyone’s strategies are announced, everyone should say “Oh yeah, I figured that out already”. And I should say - because the rest of the talk might be misunderstood as belittling the usefulness of the idea - that it’s a very useful idea. It often gives you tremendous precision, and one way to think of it is as a limiting point of some adaptive learning, imitation, or evolutionary process. So it’s a very useful thing. But in the short run it is probably not a good description of how people coming into the lab are likely to play.

So what is a good description? Behavioral game theory. That is, it’s a name for theories that are meant to be about what actually happens. And we’re trying to do a bunch of things. These are meant to be mathematical theories - usually they’re no less precise than equilibrium theories, and actually sometimes more precise, as you’ll see. I think the important thing is that they be constrained by facts. And here I mean an extremely broad range of facts, including anthropological evidence; neuroscience evidence - I’ll show you some tidbits, a smorgasbord of these; and evidence from special populations: people with autism, who have deficits in theory of mind - in imagining what others think; sociopaths, who don’t really care about other people, who will keep everything that’s given to them and exploit others who don’t do so; and Caltech undergraduates (laughter), who are extraordinarily skilled mathematically, and also very homogeneous. So they’re very useful for studying behavioral economics, because anything they can’t do naturally is almost beyond basic human capacity - not that some people couldn’t do it, but it would have to come from expertise or special training, or some sort of exogenous thing. Okay.

So I’m going to talk about three components today, what I call thinking, learning, and feeling and sharing. Thinking goes to the question: what happens the first time? If people aren’t reasoning fully through some complex game, what are they doing? And I’ll suggest a very simple algorithmic model - sort of a mixture model, a statistician would say - of steps of reasoning that seems to fit lots of data pretty well. The second thing is learning. It may be that people don’t compute an equilibrium or figure it out, but some adaptive process leads them toward it, and that focuses our attention on learning. This has obviously been a huge topic of research in almost every field - in machine learning and computer science, in psychology, and increasingly in economics. I’m going to talk about behavior based on a kind of two-part model - we now call it a dual-process model - which involves a component of reinforcement and a component of regret, or imagination. And then hopefully we’ll have time to talk a little bit about feeling and sharing. By that I mean the following: a maintained assumption in game theory - by no means a bedrock assumption, but a maintained one - is usually that whatever currency we pay people in, or whatever it is that they’re getting, they just care about getting the most for themselves. That means they’re not willing to sacrifice to help other people or to hurt other people. And there’s a lot of evidence that goes against that, so the goal here is to build up alternative theories of norms of sharing and social preference that may involve social emotions like envy, altruism, and so on.

Let me start with an extremely simple example. I’m going to assume that some of you know very little game theory, so I’ll walk through some basics, and others are real experts on it, so I’ll try to say things that will interest you as well. This is a game called “matching pennies”. Two players simultaneously choose either heads or tails. If they match - on heads or on tails - the row player gets a payoff of one; the first payoff in each cell is the row player’s, the second is the column player’s. So if they match the row player wins, and if they mismatch the column player wins. It’s an extremely simple game. In fact it’s so simple that game theorists might say “I don’t know why people would be amused by playing it”, because it has a very simple equilibrium. So let me talk about how subjects often go about thinking about this. If you play these sorts of games with actual people they’ll say things like: “Well, I pick heads, because if they do too I’ll get one. But wait a minute - if they think I’ll do that, then they’ll pick tails. So I should pick tails. But wait a minute, if they think I’ll do that…” - and so the reasoning goes around and around in circles. So let me give you an example. This is a scene from The Princess Bride. How many people have seen it? So some of you will see it again and some for the first time. Anyway, you’ll get the idea.

Okay, so technical difficulties aside, Wallace Shawn - Vizzini - dies. The trick turns out to be something interesting as well, which is that our hero, the actor Cary Elwes, had actually built up immunity to iocane powder over the years, a la Rasputin, and had put powder in both goblets. That’s an example of a concept that’s not really in mainstream game theory - you might call it secret information: something that never even occurred to people as being a possibility and never entered into their calculations. Anyway, the point of the scene is that steps of thinking are a very natural thing to do. At some point it gets cognitively very difficult, and maybe not even conclusive. And hence, one way to think about games like this is as various people doing various steps of reasoning - perhaps in a regular way and perhaps changing across time (some games are simple, some games are not, and sometimes you’re tired, sometimes you’re not) - and the behavior in a game that we observe in the world and in the lab being the outcome of a mixture of some people doing a lot of thinking and some people doing not as much.

So here’s a slight variation that’s a bit more interesting than matching pennies. You might call it asymmetric matching pennies. It’s the same game I showed you before, except - and I’ve changed the column labels just to remind you of the asymmetry - the row player chooses T or B, for top or bottom, and the column player left or right, and again if they match then the row player gets some money and the column player gets zero. But the row player prefers to match on T/L, which pays two, rather than on B/R, which pays one. So there’s an asymmetry in which of the matches they prefer, kind of a bias. And Nash equilibrium - which is a concept that says people will choose strategies that are best responses to their beliefs about one another, and their beliefs are correct - makes the following kind of odd, or I’ll say half-odd, prediction. It predicts - and this is the unique Nash equilibrium, there is no other - that the column player will choose right two-thirds of the time and left one-third of the time. And the logic there, which we’ll see emerge from an alternative theory, is this: the column player, who gets one on either mismatch, says “I might be able to get one or I might be able to get one. Those seem equally good to me.” But a column player thinking about the row player might say “I think the row player will probably try to choose T and win two, but knowing that, I should choose R”. So that introduces a little bias toward R for the column player. Interestingly, however, if the column player mixes this way, one-third/two-thirds, then T really gives a one-third chance of getting two, and B a two-thirds chance of getting one, so the row player should be indifferent. And that leads to a very bizarre prediction - by bizarre I mean unlikely to be true statistically even when people play for the first time, and difficult to reconcile with alternative theories of rationality - that top and bottom should be played equally often. If you think this is likely to be true, I’ll just take your money.
And what’s even odder is that if I change this two to any number above one, the fifty-fifty still holds. If I change the two to x, the two-thirds becomes x/(1+x) and the one-third becomes 1/(1+x). So if I change the two to a billion, the column player is almost always going to play right, in exactly such a way that makes the row player indifferent between T and B, and the row prediction is still fifty-fifty. So not only do you get a kind of odd prediction parametrically, but for the row player it is completely insensitive to the parameter, which is the row player’s own payoff. That just seems like a behaviorally very curious property.
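The indifference logic just described - the column player mixing x/(1+x) on right to keep the row player indifferent, and the row player mixing fifty-fifty to keep the column player indifferent - can be checked in a few lines. This is just an illustrative sketch; the function name and the use of exact fractions are mine, not from the talk:

```python
from fractions import Fraction

def asymmetric_matching_pennies_nash(x):
    """Mixed Nash equilibrium of the asymmetric matching-pennies game:
    row earns x for matching on T/L, 1 for matching on B/R, 0 otherwise;
    column earns 1 on any mismatch, 0 on a match. Each player mixes so
    the *other* player is indifferent between their two strategies."""
    x = Fraction(x)
    # Column must make row indifferent: x * P(L) = 1 * P(R),
    # with P(L) + P(R) = 1, so P(R) = x / (1 + x).
    p_right = x / (1 + x)
    # Row must make column indifferent: column's two mismatch payoffs
    # are both 1, so row mixes fifty-fifty regardless of x.
    p_top = Fraction(1, 2)
    return p_top, p_right

p_top, p_right = asymmetric_matching_pennies_nash(2)
print(p_top, p_right)  # 1/2 2/3
```

Raising x to a billion changes `p_right` but leaves `p_top` stuck at one-half, which is exactly the curious insensitivity described above.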

So let me quickly sketch a model that sometimes matches up with Nash equilibrium when Nash predicts accurately, and sometimes accounts for deviations in a way that rests on simpler principles of what I would call cognitive science, or part of cognitive science. Reinhard Selten, the Nobel laureate, said in 1998 that the natural way of looking at game situations is not based on circular concepts [or] a so-called fixed point or recursion, but rather on a step-by-step reasoning procedure. So our model, which we call CH for cognitive hierarchy - it’s also related to models others have worked on for quite a few years now in experimental economics, and it will probably look familiar from entirely different domains - is a model with discrete steps of thinking. The zero-step players just choose completely at random. Think of them as just choosing heuristically. In one of our studies we measure response times, and sometimes players who look to be doing zero steps of thinking are actually taking a long time. So maybe, like Shawn in The Princess Bride, they’re doing a lot of thinking but it’s leading them to something that looks random - they’re kind of overthinking the problem. One-step players think that others are zero-step players choosing randomly, and best-respond to them.

Here’s the hierarchy building up, and you can write this down in a simple little program with a loop, so it’s very easy to compute algorithmically - when you’re a scientist, and maybe for the human brain. Two-step players think that others are one- and zero-step players, and best-respond to that; three-step players think everyone does zero, one, or two steps, and so forth. Notice, by the way, that if you think people are really quite strategic, people of that sort are in this model too. Those are the highest-step thinkers - like a fifth-order thinker, who thinks that some people are random, some people respond to the randoms, and some respond to the responders to the randoms. So there are very strategic people in the game, the highest-step thinkers, and generally they’re going to make the most money. In order to pin this theory down we need to say something about the percentages of these players and where they come from.

One way to do it would be to endogenize that through some kind of cost-of-thinking model, which is something we’ve never done but would be a very useful step to tie the project up in a nice way, or to tie it to something like evolutionary adaptation - how many steps were valuable in the games people played in the ancestral past. Again, that’s something we haven’t done but would be very useful to do. Instead we do it in a very simple axiomatic way: suppose that the ratio of players doing k steps relative to k-1 steps - think of it as a graduation rate, how many players graduate from k-1 to k - declines like 1/k. So if you think of it as like school: going from fifth grade to sixth grade isn’t so hard, sixth to seventh is a little bit harder, eighth to ninth, twelfth to thirteenth, and so forth - that proportion goes down. This simple axiom yields a Poisson distribution: the fraction of players doing k steps is f(k) = e^(-τ)τ^k/k!. What’s nice about Poisson is it’s just one parameter. We really don’t want to complicate game theory; Nash equilibrium, after all, has zero parameters, which in a way is the perfect thing - although you often have multiple equilibria, so we’d like to be able to have some precision there. So if we do have parameters, we don’t want too many; we like to add as few as we can. I call this a kind of minimal parameterization: τ. τ is both the mean and the variance. I’ll show you a bunch of numbers suggesting that for average groups of subjects playing simple types of games it’s around one and a half, reflecting a mean of one or two steps of reasoning - some people do more, but not too much more, not more than four or five. So it’s very parsimonious, which means you can prove theorems about it relatively easily: with various values of τ, certain things happen.
Also, if you’re at Caltech in social science and you use e in your work, you get a special note from the provost: “thank you for [being so scientific]”. We have an online calculator that’s really pretty easy to use - if you go and plug in a game, it can spit out predictions for different values of τ. Here are some Poisson distributions for various τ. I’ll focus on one and a half, in red. Again, we think that’s often a good parametric guess - at least as good a guess as Nash equilibrium, and often much better. With τ of one and a half the mode is one: the most typical thing, around a third of players, is just best responding to random play by everyone else. That might seem like kind of a dumb rule, by the way, but at least it uses all the data, and it will never violate dominance, so it’s a simple heuristic that avoids making dumb mistakes. Another thing about this Poisson is that it drops off very fast - that’s because of the proportionality to 1/k - so if you think that five or six or seven steps of reasoning are typically beyond average computation for modest stakes, then you don’t get very many of those.
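The distribution of thinking steps is easy to compute directly from the Poisson formula f(k) = e^(-τ)τ^k/k!. A small sketch (the function name is mine), showing that with τ = 1.5 the mode is one step and five-plus steps are rare:

```python
import math

def poisson_steps(tau, k_max=8):
    """Fraction of players doing k = 0..k_max steps of thinking under the
    CH model's Poisson assumption, f(k) = e^(-tau) * tau^k / k!.
    The axiom f(k)/f(k-1) = tau/k gives exactly this form."""
    return [math.exp(-tau) * tau**k / math.factorial(k) for k in range(k_max + 1)]

for k, frac in enumerate(poisson_steps(1.5)):
    print(k, round(frac, 3))
# 0 0.223
# 1 0.335
# 2 0.251
# 3 0.126
# 4 0.047
# 5 0.014
# ...
```

Note how fast the fractions fall off past two or three steps, which is the 1/k graduation rate at work.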

Okay, in the asymmetric pennies game I showed you, here’s what happens. I’m just going to focus on the row thinkers. The zero-step row thinkers randomize across the two strategies - that’s by definition zero-step. Again, that’s just a starting point; it’s not meant to be much of a theory, it’s just a way to get the recursion, the hierarchy, going. The one-step thinkers think that they’re playing column players who randomize equally. So they say “If I pick top I might get two or I might get zero; if I pick bottom, zero or one. Well gee, I’m going to pick top.” So, a hundred percent of the time, they choose top. Two-step row players think that they’re playing zero-step column players who randomize and one-step column players who also randomize, so they also choose top. Three-step players start to get a little more cagey. The three-step players say “Hey, wait a minute - two-step column players play right, because they’ve figured out that they’re playing someone who is playing top. So I’m going to outfox them and play bottom, because I know they’re playing right.” And that pushes those row players to switch over.

When you mix these together using a τ of 1.5 - so you put in 22% zeros, 33% ones, and so forth - you get a prediction of 68% choosing top. Remember the mixed-equilibrium prediction is fifty; the data, from one little study with Caltech students, is 72%. So the CH model biases results in the right direction, I think. It says that if one of the matching payoffs is two instead of one, people will move toward it - but not everybody. And notice, interestingly, that the CH model and the mixed equilibrium make very similar predictions here for the column player. CH says that the column player most likely chooses right, because a lot of the column players - the level twos and above - think they’re playing people who are choosing T. So there is a fair amount of strategic thinking pushing the results. The data in this case is 67%, so they’re very close; Nash is right on, as a matter of fact. So the way to think of it is that the logic underlying more column players choosing R - because they think people are choosing T and they’re trying to get this one - seems to be correct in the data, and the CH model reproduces it. So here the cognitive hierarchy model and Nash make similar predictions, even though they have a very different basis. For the row player, the Nash prediction is kind of odd, and the CH model I think makes a better one.
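Here is a sketch of the kind of loop that mixes the levels together: each level best-responds to the normalized Poisson mixture of lower levels, and the aggregate prediction weights all levels by the Poisson fractions. The tie-breaking rule (indifferent players randomize equally) and the truncation at ten levels are my assumptions, and the exact aggregate percentages are sensitive to such details - this sketch won't necessarily reproduce the 68% figure quoted above - but it does produce the qualitative pattern: row players biased toward top, column players mostly choosing right.

```python
import math

def ch_predictions(row_payoffs, col_payoffs, tau=1.5, k_max=10):
    """Cognitive-hierarchy play for a 2x2 game (an illustrative sketch).
    row_payoffs[r][c] and col_payoffs[r][c] are the players' payoffs when
    row plays r and column plays c. Returns the aggregate probabilities
    that row plays strategy 0 and column plays strategy 0."""
    f = [math.exp(-tau) * tau**k / math.factorial(k) for k in range(k_max + 1)]
    row_p = [0.5]  # level-0 row randomizes
    col_p = [0.5]  # level-0 column randomizes
    for k in range(1, k_max + 1):
        # level-k beliefs: Poisson weights over levels 0..k-1, renormalized
        w = [f[h] / sum(f[:k]) for h in range(k)]
        p_col0 = sum(wh * ph for wh, ph in zip(w, col_p))
        p_row0 = sum(wh * ph for wh, ph in zip(w, row_p))
        # expected payoff of each strategy against the believed mixture
        er = [row_payoffs[s][0] * p_col0 + row_payoffs[s][1] * (1 - p_col0)
              for s in (0, 1)]
        ec = [col_payoffs[0][s] * p_row0 + col_payoffs[1][s] * (1 - p_row0)
              for s in (0, 1)]
        # best respond; split fifty-fifty on exact ties (my tie-break rule)
        row_p.append(1.0 if er[0] > er[1] else 0.0 if er[1] > er[0] else 0.5)
        col_p.append(1.0 if ec[0] > ec[1] else 0.0 if ec[1] > ec[0] else 0.5)
    agg_row = sum(f[k] * row_p[k] for k in range(k_max + 1)) / sum(f)
    agg_col = sum(f[k] * col_p[k] for k in range(k_max + 1)) / sum(f)
    return agg_row, agg_col

# Asymmetric matching pennies: row gets 2 on T/L, 1 on B/R; column 1 on mismatch.
row_pay = [[2, 0], [0, 1]]
col_pay = [[0, 1], [1, 0]]
p_top, p_left = ch_predictions(row_pay, col_pay)
```

With τ = 1.5 this loop gives a top rate above one-half but below one, and a left rate below one-half, which is the direction both the CH story and the data point.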

Here’s a game - I was going to skip this since we don’t have too much time, but I’ll do it very fast. It’s a very simple game called the beauty contest game. It’s a really easy game to illustrate steps of reasoning, and one we happen to have collected a lot of data on - so if you’re interested in different groups of people, I’ll show you a whole bunch in a minute. This is a game in which everyone chooses a number from zero to a hundred, and we compute a target, which is the average of the chosen numbers times two-thirds. If we had more time and a smaller group we would play it here - that’s where some of the data I’ll show you come from - and the person closest to the target wins a fixed prize, like twenty dollars. So if you think about the logic of this game, you want to choose two-thirds of the average of what everyone’s picking. But everyone else wants to do the same thing. So what people often do is say “I don’t know what’s going to happen, so I’ll assume the average will be fifty and I’ll pick thirty-three”. But if you think other people are picking thirty-three, then you should pick twenty-two; but if the others pick twenty-two… and so forth.

That reasoning leads you inevitably to the unique Nash equilibrium. Remember, a Nash equilibrium is an outcome where, when all our numbers are announced, nobody is surprised and everyone has optimized. The only such outcome here is zero - after all, if you pick twenty-two, believing others will pick thirty-three, and the other people also pick twenty-two, then you’d be surprised. This iterated process yields a guess of zero. Okay, this is eight thousand observations. How do you get eight thousand observations? At Caltech that’s ten times the undergraduate population. What you do is run newspaper contests - in Expansión, which is a Spanish business magazine, in Spektrum, which is German, and in the Financial Times, which you probably know, that famous salmon-colored financial newspaper from the UK. You write about this game and invite people to enter, and there’s some big prize for the winner - in the FT it was round-trip first-class tickets from London to New York. And you get eight thousand observations. What’s nice about eight thousand is you can see these spikes very clearly. Very few people pick above fifty. Occasionally you’ll get people picking one hundred. Either they’re dyslexic - they just got the two-thirds backwards, because if I changed it to three-halves then one hundred would be the equilibrium - or they’re trying to vandalize and scar your graph, or they might be trying to collude, which is very hard in a three-thousand-person contest. Sometimes in small groups - like if you do it by electronic mail - a couple of people will say “I’ll pick a hundred”, and since no one will expect that, a confederate picks a higher number that takes it into account: a kind of conspiracy. Anyway, it’s not very many people, but it’s interesting that it happens at all.
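The iterated reasoning - start at fifty, take two-thirds, take two-thirds of that, and so on - converging toward the Nash equilibrium of zero can be sketched directly (the function name is mine):

```python
def beauty_contest_levels(start=50.0, factor=2/3, steps=10):
    """Iterated best responses in the 2/3-of-the-average beauty contest:
    a level-k guess is start * factor**k, which converges to the unique
    Nash equilibrium of zero as k grows."""
    return [round(start * factor**k, 1) for k in range(steps + 1)]

print(beauty_contest_levels())
# [50.0, 33.3, 22.2, 14.8, 9.9, 6.6, 4.4, 2.9, 2.0, 1.3, 0.9]
```

The first few entries are exactly the spikes in the data: thirty-three for one step, twenty-two for two, and a long slow crawl toward zero for the many-step thinkers.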

Mostly you’ll see people picking thirty-three and twenty-two, and then a bunch of people choosing one, and various little things here and there that are not distinguishable statistically. So, a simple model - here’s the CH model. The CH prediction isn’t as smooth; it’s got this sort of stubble here and then a bunch of spikes. But compared to the Nash equilibrium, which puts a spike of a hundred percent at zero, it basically says that a lot of people will be choosing numbers like these. So it’s not getting the fine detail right at all - it’s not crafted to do that. So I promised you lots and lots of data; I’m going to go over that. The point of this graph is that there are a lot of rows, number one. Number two, the data aren’t that dramatically different - though you will get differences across groups in certain variations of the game.

This is PCC, which is a city college near Caltech. They choose around fifty - they seem to be confused, or maybe our instructions weren’t very good; that game was on a computer screen. At the low end you get [-] - actually, Mike Kearns might have been there. I think this was data I got from [-], who worked on a machine-learning model of this game for the NIPS meeting. This is the mean, by the way; sometimes the mean is pulled upward by an outlier like a fifty or a seventy. And notice that even at Caltech - and we see it in the newspaper data - if you estimate separately for each group the thinking-steps parameter τ that, given our model, best fits that particular data set, you get a range. Here they’re close to zero, so it basically looks like people are just very confused; in the middle it’s around 1.5 or 1.6; and you get as high as maybe 3 or 4. And what’s important about this is: suppose you were to give people advice about what to do in this game - say Mr. Nash opened up his consulting firm. If you take Nash equilibrium as prescriptive advice - I don’t know if Nash would say that or not - it would say pick zero. But you never win picking zero. In fact, at the NIPS meeting, there was somebody who got very upset and said “I picked zero, because I didn’t think you guys were so stupid!” The point is, prescriptive advice should be based on a good behavioral theory of what’s likely to happen.

But that’s also a theory most likely to change over time - that’s what learning will do. Let me inject a little bit of neural data. Remember, I gave a mathematical definition of equilibrium: beliefs being correct guesses about what others will do, and optimization given beliefs. So in a sense it’s a mathematical construct. But thinking now like a neuroscientist: at Caltech we’ve been thinking a lot about neuroeconomics, which looks for neural underpinnings of the basic ideas of economics and game theory, and maybe even of market exchange - price bubbles in stock markets and things like that. So we’re now trying to redefine a lot of things in economics in terms of neural instantiations. What does it mean, neurally, for a brain to be in equilibrium? Well, one way to think of it is that in equilibrium I’m correctly guessing what other players will do, and I’m making a choice based on that guess - and for those to match up, in a loose sense, you might think they need to be using a lot of shared circuitry. Now let me illustrate that by the opposite.

If you're a level-one player, who just best-responds to random play, you don’t need to think about the other person at all. For example, if there is a [-] on the screen, you don’t need to look at their payoffs, because you don’t use those in your computation. So a level-one player could make a choice without really forming a belief in any deep sense - you could be using two separate circuits. It’s possible, if you're out of equilibrium, that you choose your favorite number, or you choose the top row, or the row with one of the highest averages, without thinking very much about the other player. And when I ask what you think the other player is doing, you say “I don't know - any old thing”. So the choice and belief circuitry might not be operating in parallel. But to be in equilibrium I need to be figuring out what others will do, and maybe there’s overlapping neural apparatus in use.

So these next two slides will show you - in the most tentative possible way, I certainly can't oversell this - that maybe there's something to that. This is the difference in activation from fMRI brain scans of 16 subjects at Caltech. These are areas that are significantly different in activation - this is the only area different at .001, which is a typical standard for these types of analyses - when people are making a choice versus when they are asked to guess what other people are doing. This is from a sample of normal-form matrix games that are very standard in game theory. So you ask people “what do you want to pick?”, and they say “oh, top row”. And “what is a good player going to do?” - they'll pick the right column. So we ask them two different questions. In equilibrium, in a sense, the circuitry used in those two tasks should overlap a little. And in fact, it does. These are trials pooled across all the subjects - only the trials in which they're in equilibrium in the mathematical sense: the choices are best responses to beliefs, and the beliefs are correct. And the activity in the areas active when they're making choices and when they're stating beliefs overlaps a lot.

What do I mean by that? I mean, when we subtract and ask which areas are differentially active when they're choosing versus making a guess about what others will do, there's only one area - it's called the ventral striatum, and it's an area involved in predicted reward. Speaking very glibly - forgive me - it's almost like an internal cash register in the brain going “ca-ching, ca-ching”: “I've figured out what's going to happen in this game, and I'm guessing how much this game will pay me”. And that's the only area that's different. So you might say equilibrium is a state of mind, in the sense that we can now define equilibrium neurally: trials in which circuitry activation is highly overlapping in the belief and choice tasks are likely to produce equilibrium behavior in the purely mathematical sense. I'm trying to link two very different things here - a purely behavioral/mathematical definition, as revealed by choices and beliefs, and brain activity. Here's the brain when you're out of equilibrium. These are trials where their beliefs about what others will do are wrong - they're not guessing correctly what others will do. And there's a little more to it - we also measure second-order beliefs, or beliefs about beliefs - but I'm going to leave that aside.

And we see four areas. There's an area called the frontal insula, which has spindle cells - long, big elongated cells that suck up a lot of information from throughout the brain, and that are unique to humans, chimps, and bonobos - so in a sense, anything we do that seems uniquely human, or almost uniquely human and primate, is going on in there. There's the dorsolateral prefrontal area, which we'll see a little later; that's involved in planning and higher-order cognition. And there are two areas in the cingulate, the anterior cingulate and the posterior cingulate - regions involved in conflict resolution of various types. The first point is that there is a lot more lit up here than on the previous slide. So when you're out of equilibrium, I suggest, the choice and belief activity is somehow different. Let me explain what this measure is. On the x-axis - what's nice about these games is that players actually play against other people. Part of the experimental-economics rules is that there really is another person there, they're instructed, and they play for money like you do. So we can compute what we call strategic IQ: how much money did you make? And some people seem to be persistently pretty good at these games. They make choices that give them high payoffs; when they're guessing what other people will do, they guess more accurately; when they form second-order beliefs - guesses about what other people guess they will do - they do so more accurately; and when you add these up you get strategic IQ. That's on the x-axis. Then we can ask: "Gee, across subjects, are there some brain areas whose activity when people are making choices is positively or negatively correlated with strategic IQ?"
So a way to think about that is - and I hope I don't sound like I'm overselling this, because I really don't intend to - we at least ask the question: are there some areas of brain activity that seem to be positively correlated with strategic IQ?

There are, for example, studies of g, or general intelligence, where certain brain areas are correlated with working memory and things like that. The positive areas here are the precuneus and the caudate - the caudate is in the striatum, basically the same region you saw before. The precuneus is an area that seems to be involved in reward and also in moral-dilemma judgments, things like that. So the people who have the highest strategic IQ have the highest activity in these areas; these are candidate areas for important kinds of [skills]. The insula is an interesting area - we're going to see it again later. The insula is an area that gets bodily discomfort and comfort signals: if you're smelling a disgusting odor, or you're in pain, or you're socially excluded - you're not invited to a party you expected to be invited to - the insula will be activated. It's sort of an empathy/pain/discomfort area. It also seems to be involved in creating a sense of self: when you're tracking something on a screen with a mouse, relative to when somebody else is doing the same tracking, the insula is active. It's sort of "me, me, me". And it turns out that the people with high strategic IQ - well above average in how much they earned in these games - have less activity in the insula, and the people with the lowest strategic IQs tend to be, maybe, too self-absorbed. Maybe they're thinking too much about themselves and not enough about the other person, which is something very important to do in game theory.

This is just a picture of where the insula is - between the frontal lobe and the temporal lobe. By the way, in your brain these are not green and purple, at least not in a normal brain. So we think people who are too self-absorbed are doing badly in these games - they have a lot of activity in the insula - while people with less activity in the insula are doing very well. It might also be about concentration: these fMRI scanners are very loud and claustrophobic and kind of uncomfortable, so it might be that the good players are concentrating more while playing, and that's deactivating the insula. So on the self-absorption story we'd predict that someone like Paris Hilton - she's a big star in Los Angeles and on the internet - would have a very low strategic IQ, maybe lower than this, and a lot of insula activity. By the way, one reason to study these brain regions is that people often ask, "do you really need to know what the brain regions are in order to make predictions and understand things?" Knowing the regions means we can then study things like patients with brain damage - people with insulectomies, or damage to the insula, may function differently. Some of these areas we can also stimulate. We don't do this in our lab, but many people do, with something called TMS: a coil that stimulates the brain, enough to create a sort of temporary lesion.

I'm going to talk about learning very fast - I'm going to go through this very quickly. This is stuff we've done that maybe has some interest - most interest in computer science - and I'll skip a little bit on social norms. So, our theory of learning. I'm going to present ours because of my insula activity, and also because it encompasses a bunch of other theories. So it's a way to get across quickly, all in one fell swoop, some of the theories people in economics have been using. Our theory we call EWA, or now we call it dual-process theory. Here's the idea: each strategy is going to have a numerical attraction. It's some number; maybe it's neurons firing in the brain or something. The attraction for player i - the attractions can be different for different people, so i indexes players - for strategy j is going to change according to this formula that we propose. If we actually chose strategy j, we take the old attraction from t-1, multiply it by φ, which is kind of a forgetting factor, and N, which is a strength-of-previous-experience factor; we then add in the actual money you earned, and then kind of pseudo-normalize by [-], in this way. However, if strategy j being updated is something you did not pick, we multiply the forgone payoff - that is, what you would have gotten if you had picked that strategy; think of it as counterfactual reasoning, or regret - by a factor δ. Delta (δ) is sort of the strength of imagining what you would have gotten if you had done something different, relative to what you actually earned.
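The verbal update rule can be sketched in a few lines of code. This is a minimal illustration, not Camerer and Ho's actual implementation; the function name, parameter names, and the default values of φ and δ are my own illustrative choices.

```python
# Minimal sketch of an EWA-style attraction update for one player.
# phi is the forgetting factor, delta the imagination/regret weight,
# and N the strength-of-previous-experience counter described above.

def ewa_update(attractions, N, chosen, payoffs, phi=0.8, delta=0.5):
    """One round of learning.

    attractions : attractions A_j(t-1), one per strategy
    N           : experience weight N(t-1)
    chosen      : index of the strategy actually played
    payoffs     : realized (chosen) or forgone (unchosen) payoff per strategy
    """
    N_new = phi * N + 1  # experience weight grows, then saturates
    new_attractions = []
    for j, (A, pay) in enumerate(zip(attractions, payoffs)):
        # The chosen strategy gets its full payoff; unchosen strategies get
        # their forgone payoff weighted by delta ("what I would have earned").
        weight = 1.0 if j == chosen else delta
        new_attractions.append((phi * N * A + weight * pay) / N_new)
    return new_attractions, N_new

# Example: two strategies, strategy 0 was played and paid 5; strategy 1
# would have paid 3.  Starting from zero attractions and N = 1:
A, N = ewa_update([0.0, 0.0], 1.0, chosen=0, payoffs=[5.0, 3.0])
# A[0] = 5/1.8, A[1] = (0.5 * 3)/1.8
```

The only asymmetry between chosen and unchosen strategies is the δ weight, which is what lets the same formula nest pure reinforcement and belief learning, as discussed below.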

And then we use what you guys nicely call a softmax response function, or what in economics we call logit, to map these numerical attractions onto a zero-one type of probability. So delta, again, is sort of the strength of imagination or regret; phi is sort of the rate of forgetting. We'd like to study cases of anterograde amnesia, people who seem to forget everything really fast - if you saw the movie Memento, that's about an anterograde amnesiac. We're just starting to think about how to do that. Now, for computer science, it occurred to us that you could rewrite the equations I just showed you in exactly the same way - simple algebra. One way to think of it is that the change in attraction for strategy j from t-1 to t has to do with a prediction error: π, the payoff you earned from j, minus the previous attraction. So if you think of these attractions as predictions, as sort of values, then this is a prediction error. But it's divided by a weight, φN+1, and since this divisor generally increases, the effective weight on the error generally decreases. For example, if you start with the counter N at one and phi is .8, the initial weight - one over φN+1, that is one over 1.8 - will be .56. That means the first prediction error gets a weight of .56; it really updates the attractions a lot. By the end it's .2, so you're not updating as much. So it's slowing down the rate of learning, which is almost always useful.
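The logit rule and the shrinking prediction-error weight can both be checked with a few lines. This is a sketch; the response-sensitivity parameter `lambda_` and the loop length are my own illustrative choices.

```python
import math

# Logit ("softmax") rule: map numerical attractions onto choice
# probabilities that sum to one.
def logit_probs(attractions, lambda_=1.0):
    expA = [math.exp(lambda_ * a) for a in attractions]
    total = sum(expA)
    return [e / total for e in expA]

# The prediction error is divided by phi*N + 1.  With the counter N
# starting at 1 and phi = 0.8, the first weight is 1/1.8, about .56;
# as N converges to its steady state 1/(1 - phi) = 5, the weight
# falls toward 1/(0.8*5 + 1) = .2.
phi, N = 0.8, 1.0
weights = []
for _ in range(30):
    weights.append(1.0 / (phi * N + 1.0))
    N = phi * N + 1.0
# weights[0] ~= 0.556, weights[-1] ~= 0.200
```

Equal attractions give equal probabilities, and the declining weight sequence is exactly the "learning slows down" property described in the talk.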

The reason we built the model this way is that people in game theory studying learning have focused most of their attention on two types of theories. One is reinforcement - that's basically delta equals zero, you only learn from what you did - which comes from behaviorist models and animal psychology. The other is fictitious play, a form of belief learning. Fictitious play means you build up a history of what other people have done, arithmetically. Like, if you chose top two times and then bottom one time, I'll say there's a two-thirds chance you'll play top next time, and then I best respond to that. In reinforcement learning I'm learning about my strategies - I may not even realize I'm playing a game, if I'm an animal pecking or pressing a lever. In fictitious play I'm learning what you're likely to do; I'm building knowledge about you and then plugging that into my best-response process. Well, it turns out that if delta equals one in these models, then this kind of generalized reinforcement model is exactly the same as fictitious play. So really there's no conflict, in a sense, between belief learning and reinforcement; belief learning is just, you might say, generalized reinforcement where you reinforce everything equally strongly.
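The fictitious-play bookkeeping in that top/bottom example is just frequency counting. A minimal sketch (function name is mine):

```python
from collections import Counter

# Fictitious-play belief formation: beliefs about the opponent's next move
# are just the empirical frequencies of their past moves.
def fictitious_beliefs(opponent_history):
    counts = Counter(opponent_history)
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()}

# The example above: top twice, bottom once -> believe "top" with
# probability 2/3.  A best responder then plugs these beliefs into
# their payoff calculation.
beliefs = fictitious_beliefs(["top", "top", "bottom"])
# beliefs == {"top": 2/3, "bottom": 1/3}
```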

And what we're about to do now, using eye tracking - that is, seeing what parts of the payoff matrices you're looking at - and eventually fMRI and maybe some other tools, is try to see these two separate processes. We think that probably the reinforcement process is some very old thing in the limbic system that uses dopamine and so on - like: pick something, numbers flash on the screen, "ooh, 5, that's great!" And the delta process, which is crucial in belief learning, is using some kind of pre-frontal imagination. Let me show you some evidence for that. These are kind of gruesome brain slices. This is a picture from a different paper, by Angela [-], about patients with orbitofrontal brain damage. The orbitofrontal cortex is the part of the frontal cortex that is just above and between the eyeballs. These are slices in a standard coordinate system, and the purple parts are locations of brain damage in the actual people who were subjects in this experiment.

So we had them play the beauty contest game five times. The blue bars, dark blue and lighter blue, are confidence intervals - plus or minus one standard error - for the OFC patients, people with brain damage in those areas actually playing this game with others. And these are control subjects with brain damage in other areas. What you'll see is that the OFC patients start out too high and are a little bit slower to learn; by about the fourth period they catch up. And if you talk to them - I wasn't there when they did this; this is from the University of Iowa lesion pool, which is a very large, well-understood pool, and Antonio Damasio and Ralph Adolphs are my collaborators on these - what happens is something like this: after the first round they'll pick thirty-five, and the answer is twenty; the next time they play they pick thirty-five again, even as the average goes down to ten, and the subjects say "why would I pick a lower number?" That's exactly what you would say if you didn't have the delta part of the brain that says "gee, I picked thirty-five and the winner was twenty; if I had picked twenty I would have won". If you don't have that part of the brain you'll just kind of perseverate and keep picking things that worked or didn't work, over and over and over. So again, I'm stitching together a few speculations from different domains, but I'm suggesting that OFC might be a part of the brain you have to have in order to do this imagination-regret part of dual-process learning.

Next I want to shift gears and talk briefly about what we call feeling and sharing. I've talked about limits on strategic thinking, and also on learning. We see the limits-on-strategic-thinking part as: here's a game, what's the best possible guess of what's going to happen? We've looked at many games, about a hundred and fifty. Generally the CH model we developed - and there are lots of variants; it's an active area of research, so there's lots of normal science about which things fit a little better here and there - the CH model seems to fit better, and the learning models give us some sense of how these things track across time. So now I'm going to talk about some extremely simple games. There are many games people have used to study what you might call social preferences, and I'll just talk about the ultimatum game. How many people have heard of this? Okay, not everybody, but most of you. So I'll go really fast.
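As a concrete sketch of the CH idea, here is a Poisson cognitive-hierarchy calculation for the 2/3-average beauty contest game. The value of τ and the number of levels are illustrative choices, not the fitted estimates from the talk.

```python
import math

def poisson_pmf(k, tau):
    return math.exp(-tau) * tau ** k / math.factorial(k)

def ch_beauty_contest(tau=1.5, p=2 / 3, max_level=6):
    """Average choice of each thinking level in a p-beauty contest.

    Level-0 players pick uniformly on [0, 100] (average 50); a level-k
    player best responds to the normalized Poisson mix of levels 0..k-1.
    """
    choices = [50.0]
    for k in range(1, max_level + 1):
        freqs = [poisson_pmf(h, tau) for h in range(k)]
        total = sum(freqs)
        perceived_avg = sum(f / total * c for f, c in zip(freqs, choices))
        choices.append(p * perceived_avg)
    return choices

# Level-1 picks 2/3 * 50 ~= 33.3; higher levels move toward zero but
# never reach the Nash equilibrium of 0.
```

The key disequilibrium feature is visible in the output: every level best responds, yet no level's choice is zero, because each level believes some others are thinking fewer steps.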

The ultimatum game is really - oh, and by the way, a lot of these games... there's a famous old paper by Pruitt called something like "are the simplest games the most interesting psychologically?" - a lot of these games are not strategically very complicated or interesting. These are not things that show up on final exams in Andy [-]'s game theory course, because they're not mathematically complex. They're just socially useful for measuring social preferences. So the ultimatum game is very simple: a proposer, we'll call them, is endowed with a sum of money, and in the base case it's commonly known what that amount is (that could be important). Usually it's something like ten dollars. You can go to far-off foreign countries, as I'll show you in a minute, and use similar stakes that represent a lot of purchasing power - so if you're worried about what happens in games with really big stakes, we can test that far more cheaply overseas. The proposer offers money to the responder, who just accepts it or rejects it. That's it. If they reject it, nobody gets any money. And - a note for the social psychologists and people like that - it's not meant to be a model of lifelike bargaining; it's meant to be a little piece, like the end game of a take-it-or-leave-it bargain. Really it's a tool to measure, when people say "it really bothers me when people treat me unfairly, I would never accept unfair treatment", whether they'll put their money where their mouth is. Would you really give up money in order to punish somebody who has kept a lot more, because you think that's unfair? If you think that people are purely self-interested - that's not really an essential part of game theory per se, it's an auxiliary assumption about the nature of values or outcomes - and you think people can understand that and plan ahead and strategize, then the responder should accept anything, and the proposer should offer very little; should exploit that.
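The game's structure, and the contrast between the two responder types just described, can be written down in a few lines. This is a sketch; the $10 pie and the 25% rejection threshold are illustrative numbers, not estimates.

```python
def ultimatum_payoffs(pie, offer, accepted):
    """Return (proposer, responder) payoffs; rejection wipes out both."""
    return (pie - offer, offer) if accepted else (0, 0)

def self_interested_accepts(offer):
    # Pure self-interest: any positive amount beats zero.
    return offer > 0

def fairness_minded_accepts(offer, pie, threshold=0.25):
    # Gives up money to punish offers below some share of the pie.
    return offer >= threshold * pie

# A $1 offer out of $10: the self-interested responder takes it
# (payoffs 9 and 1); the fairness-minded responder rejects (0 and 0).
```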

Q.: (inaudible)

Camerer: That's right. Yes, yes. So if I offer you two and I'm going to keep eight and you accept it, you get two and I get eight. We actually get paid and we leave. But if you say no, nobody gets anything, okay? By the way, you can think of lots of variants of this - like, if you reject, you lose your amount but I get my amount anyway, and so on. I'm going to jump immediately into the most exotic data, from a first study - a cooperative study of mostly anthropologists and some economists; I was sort of one of the coaches who gave advice on it - and a second study is in the works.

This is a picture of Lamalera, Indonesia, where people hunt for whales. If you know the game Stag Hunt - that's the name of a game, sometimes called the assurance game - it's a game in which, if we all put in high effort, we all benefit, and if we all put in low effort, nobody benefits. But high effort is risky and low effort is safe. So it's sort of like the prisoner's dilemma, but it's different. Because in the prisoner's dilemma, even if I think you'll cooperate, I'll defect anyway, because I just want to make the most money. In stag hunt, if I think you'll put in high effort, I'll go along too. So high effort, or cooperation, is also an equilibrium. So here's a picture of whale hunting: whale hunting is stag hunt. The reason is that you need a whole lot of people on the boat. Somebody has to row the boat, somebody spots the whales, this guy has to actually leap - this is pretty dangerous - he has to leap out into the air and harpoon the whale, and usually land on top of it. And the whales - well, they're not hunting orcas, killer whales, but I imagine the whales aren't happy if you really harpoon them in a deadly way, and if you land on top it's really not a good thing. So whale hunting requires a lot of people to cooperate. So everybody goes around the village [and they say] "do you want to go today, do you want to go today?", and they kind of all show up. And if you know seven people that whale hunt, and they need an eighth, it pays for you to go. And if not all eight go, they might not be able to go at all, or their chances of catching a whale are diminished, because everybody has to be doing their job. And they don't usually catch whales - it's only maybe every three weeks they catch a whale. Another interesting thing, just as an aside: they have extremely clear sharing norms for who gets what if you do catch a whale. So it's almost sort of textbook contract theory in economics.
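The stag hunt structure just described can be made concrete with a toy payoff table. The payoff numbers are illustrative, purely to show why high effort is an equilibrium only when you expect high effort from the other side.

```python
# Toy stag hunt: keys are (my move, their move), entries are
# (my payoff, their payoff).  High effort pays best only if the other
# player also puts in high effort; low effort is safe.
STAG_HUNT = {
    ("high", "high"): (4, 4),
    ("high", "low"): (0, 3),
    ("low", "high"): (3, 0),
    ("low", "low"): (3, 3),
}

def my_best_reply(their_move):
    return max(("high", "low"),
               key=lambda mine: STAG_HUNT[(mine, their_move)][0])

# If I expect high effort, my best reply is high effort; if I expect low,
# it's low.  So both (high, high) and (low, low) are equilibria -- unlike
# the prisoner's dilemma, where defecting is best no matter what.
```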
So I couldn’t find this picture, but my collaborator Herb Gintis has a picture showing the marking, so when they bring this whale into town there's a guy with a machete who is a combination butcher, IRS tax auditor, and judge, who sort of carves the thing up, and it's like “Ok, this is going to go to – who spotted the whale?”

“Oh, it's me, me”

“Oh, okay. So here, you get this much” - and he sort of starts to carve, and then the guy starts to yell “No, that's not enough, you know I need more, I need more!” “Well, how much more?” - and then somebody else yells. So he sort of adjudicates, and they divide the whale up in a very pre-specified way. This is a big whale, too. This is a picture of the Orma, who live in Kenya, where my colleague [-] goes every year. And here are the twelve sites from the first study, which is in this book that came out last year. Your colleague Francisco Gil-White worked here, in Mongolia, and there are quite a few sites in the Amazon basin. The original study was with the Machiguenga, in Peru, by Joe Henrich - he's really a pioneer and deserves lots of credit here. Here are the whale hunters in Indonesia, and I mentioned quite a few in Africa. The reason people do these studies, by the way, is that if you're interested in evolutionary psychology and how people behave in simple societies, we'd like to be able to build a time machine and go back to the societies in which the brain was evolving; but until we can do that, the best we can do is to go to places in the world now that are similarly small-scale - or ‘primitive’, which apparently is politically incorrect, you can't say that - and these places are developing pretty rapidly. So the anthropologists have to go out there quickly and study as much as they can.

So in this study they went out to all these sites and ran ultimatum games to see, number one: are these norms of fair sharing universal? I forgot to mention something important. For those of you who know these studies, the typical finding in the ultimatum game in most countries is that, contrary to the perfect self-interest prediction, most people offer around four or five dollars out of ten. They're reluctant to offer too little. And when they do offer too little, like ten or twenty percent, it's rejected a high percentage of the time. So if you offer two out of ten it might be rejected, say, half the time - but not always; some people will accept anything. So again, the game theory prediction is sort of an extreme prediction, but it does seem to account for some percentage of behavior. So this is an action-packed picture of ultimatum offers, as percentages of the pie, on the x-axis. So fifty percent, which stands out here - that's offering half. And remember, offering half is being kind of timid, but that's what you offer if you're afraid that people think anything less than half is unfair and will reject it.

In a minute I'll show you brain areas that may become active when we're thinking that. On the other hand, there are groups that offer very little, like ten percent - that's what you offer if you think ten percent is the minimal amount you can get away with. You do see some offers of ten percent, and also there are very few rejections in here. If anything, if you compare the offers that are made with what you would offer if you were trying to get away with the most you could, people are actually not being quite as aggressive as they could be, in a mathematical sense. The main effect is that there are a lot of fifty percents. This is a bubble chart, by the way. So let's take an "exotic, strange" place with a "weird" culture that hasn't changed much over time, and bizarre eating habits... Pittsburgh! Okay, here's Pittsburgh. In Pittsburgh you get a big bubble at fifty percent. The size of the bubble is the percentage of offers at that level, okay? And the grey bars, I think, show the medians. So the mode is fifty percent, the median is forty-five percent, and then there's a little bit of a tail of people trying to get away with keeping a bit more by offering less. Then there are all these other groups. These are ordered from top to bottom by the mean. So as you can see, Pittsburgh is a little more generous; in most of these places people are a little more aggressive. In a sense, as you go down the list you're getting closer and closer to the prediction of pure game theory and self-interest. So I like to tease the economists sometimes: "Hey, we found a group of people who obey the [-] prediction, assuming pure self-interest." "Oh, are they PhD students at Penn?" "No! It's the Machiguenga in Peru and the Quichua in Ecuador!" One of these groups is head-hunters - it might be the Quichua or the Achuar.

And the reason is - according to Joe, although some anthropologists have emailed me disputing this, so let's say it's controversial - when Joe Henrich came back to UCLA, he showed this to his next-door neighbor Alan Johnson, who is a real expert on this group of people. Alan said "yeah, they're completely asocial". They basically don't hang out together, they don't share, they don't feel any sense of community; they really don't have proper names for people other than kin terms. They'll just say 'the tall one' or 'the one who smiles a lot'. Because they don't need names, right? If you don't need to address people by name or tell somebody who somebody else is, you don't necessarily need proper names. So this is like the opposite of Cheers, where everybody knows your name - this is where nobody knows your name. And as a result they have a sharing norm that says: if you have some money and you are forced by some white person to offer it to somebody else, you don't have to offer very much, and they'll take whatever they're given.

So that just shows you that the self-interest equilibrium does exist somewhere in the world, in a place where the sharing norm permits very unequal sharing - and responders don't get mad, because they don't expect to get much. In other places you get very strong sharing norms. You also get a little bit of hyper-fair offers, or potlatch offers. These are often places, like with the Ache, where my interpretation is that too big an offer is kind of an insult (and often these offers are rejected). In some of these societies there's also the potlatch convention, which is this: if John goes hunting for pig and comes back with two wild boar, he says "hey, I have lots of wild boar, I'll give you an extra one. Why don't you take it? This is my gift." It's not really a gift, right? It's like Tony Soprano offering to help you out in your bar. What it really is is John's way of saying "I'm a really good hunter, I know you're not" - so it's an insult - and next week he comes back and says "hey, remember that gift?" Apparently this really happens in some of these cultures: it's an IOU. The extra pig he gave me as a gift can kind of be transmitted; it's like convertible debt, convertible at the whim of the debt holder into some other currency. So he comes back a month later and says "Hey, remember that pig I gave you a few weeks ago? I want your son to marry my daughter." So people often reject these gifts. The newest study is very cute. We actually used the strategy method - although I should say "they", all these incredible anthropologists who did all this work. The strategy method means that for every offer you might get, they ask people "will you take this, will you take this, will you take this?", and you get a lot more information. And you get v-shaped curves. Basically people say "I reject the really low ones, I definitely would accept half", and some people say "I'd reject ones that are too high".
And what they say is "that's just unacceptable" - deviating from the norm in either direction is bad. Most people don't say that, but many do.
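A strategy-method acceptance profile of the v-shaped kind just described might look like this in code; the 20% and 80% cutoffs are invented for illustration.

```python
# Sketch of a norm-following responder under the strategy method: asked
# in advance about every possible offer, they reject both too-low and
# too-high shares of the pie.
def norm_follower_accepts(offer, pie, low_cut=0.2, high_cut=0.8):
    share = offer / pie
    return low_cut <= share <= high_cut

# Acceptance profile over every possible whole-dollar offer out of $10.
profile = {offer: norm_follower_accepts(offer, 10) for offer in range(11)}
# Rejects 0 and 1 (too low) and 9 and 10 (too high); accepts 2 through 8.
```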

A brain picture, and then I'm done. This is a beautiful study from Princeton. These are differential areas of brain activity when players got low offers, one dollar out of ten, relative to when they got even offers, like five out of ten. So this is the extra activity in brain areas when you're thinking "should I take the low offer?" And those low offers were rejected a fair amount of the time, like a third, while the fifty-percent offers were always accepted. So these are the brain areas that are thinking "is that too little?" Just three areas. The dorsolateral prefrontal area: we saw that before, if you recall, as an area that's active when you're out of equilibrium, choosing relative to believing, so that maybe is an important area for strategizing and weighing other people. Here's the anterior cingulate, which is a kind of conflict-monitoring area. If you're trying to overcome a natural instinct - like you really have to go to the bathroom, or you're playing Simon Says and you're concentrating on the "Simon says raise your right hand" - the cingulate is active in those kinds of so-called Stroop tasks. And here's the insula we saw before - the left and right insula. They interpret the insula activity as discomfort. Remember, the insula is an area that gets bodily signals from the nervous system and relays them to the brain. So if you smell a disgusting odor or you feel uncomfortable, the insula is active. And so one way to think of this is that when people say 'that offer was really disgusting' they mean it almost in a literal sense: the same brain regions are activated when you're being treated disgustingly in a social way as when you're smelling something disgusting. And also, very interestingly, when there's more activity in the insula relative to the DLPFC, the dorsolateral prefrontal cortex, people are more likely to reject.
So, in a crude sense - these are not very high correlations; the correlations in these types of things are never more than like .5 or .6 or .7 - but that tells you that you could look at the brain evidence, make a guess about whether they'd reject, and tie it to the behavioral evidence as well.

Let me conclude - this is just to wrap up. I talked about three things. One was a thinking model. I see there being at least three components in behavioral game theory, based on the things we've been trying to study, although there are many things we're leaving out that are huge, interesting problems too. One problem is: somebody comes into the lab and plays [-]. The theory we have there, which competes with equilibrium thinking and in a sense uses pieces of it, is the steps-of-thinking (CH) model. We've also talked about how equilibrium might be thought of as a neural state of mind, as a pattern that's in the brain, rather than just mathematical connections between [-].

I talked about learning. We have these learning models - kind of dual-process models, maybe involving two brain areas - that seem to fit data sets pretty well. I didn't show you lots and lots of data, but we've studied many games and also compared them with lots of other models. And finally, I talked about feeling and sharing. I put the feeling part in because the insula area seems to really be involved in bodily discomfort and is showing up in that one study. There aren't too many other studies - Rachel [Posen] is a big expert on these sorts of things, on measuring emotions - but in these types of bargaining games, and in things like public-goods contribution games, emotions seem to be part of what's going on, and we can measure those and link them to behavior. So there's not a lot of direct, clear measurement of feelings as evidence, but surely that's what's going on. And we see really interesting cross-cultural variation there.

Some applications and new directions. One is, if you're interested in war games and things like that: I think we now have a pretty good idea how to build a kind of lifelike simulated opponent - we could create a computerized opponent that would behave the way some people actually do, and that could be useful. These theories also may have economic value. What that means is - and we've done this in a few numerical exercises - you could ask the question: if somebody used this theory to forecast what people would do, and then best responded, would they earn more money than the average subject? Like in the beauty contest game, given all the data we've studied, if you called me up and said "I have to play one of these games for millions of dollars, what number should I pick?", I would make some prediction that might be like 19, and you would do better than if you followed Nash's advice and picked zero, and you might do better than the average person, who isn't using a model when they're reasoning. So this is like a Wharton-school test, the market test: can these theories actually generate value because they embody more knowledge than the average participant in the economy has?
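That "market test" can be sketched numerically. Here the opponents' picks and the forecast are invented numbers, just to show that best responding to a behavioral forecast can beat following the Nash advice of zero.

```python
# Sketch of the "economic value" test in a 2/3-average beauty contest:
# whoever is closest to 2/3 of the group average wins, so we score a pick
# by its distance from that target.
def distance_from_target(my_pick, others_picks, p=2 / 3):
    avg = (my_pick + sum(others_picks)) / (1 + len(others_picks))
    return abs(my_pick - p * avg)

others = [50, 33, 25, 30, 22]                 # hypothetical opponent picks
nash_gap = distance_from_target(0, others)    # Nash advice: pick 0
model_gap = distance_from_target(19, others)  # behavioral forecast ~19
# model_gap < nash_gap: the behavioral advice lands closer to the target.
```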

By the way, the highest-level thinkers in the CH model - the people doing the highest level of thinking - will give you exactly the same advice as the model will give. That's sort of a property of the model. So that tells you, again, that we're not saying all people are dumb, or anything like that. There are agents in the models doing really thoughtful, intelligent things, but they understand that not every other agent is as thoughtful - that's the disequilibrium part. Herb Gintis has pointed out that there's a sense in which game theory can unify the behavioral sciences, and maybe even areas like computer science, because the basic building blocks of games are meant to work at various different levels. Theoretical biologists talk about evolutionary game theory, and so does Andy [-]. So even though we may not be studying the same games - you know, how genetic evolution occurs over large time scales and how imitation occurs as companies introduce new products may not really be exactly the same thing - we might be able to share a kind of pool of ideas and mathematical tools. And in that sense there's unification, in the sense that there's a certain amount of shared enterprise. That would be useful, because the behavioral sciences in some ways really are the most contradictory. Imagine you were an undergraduate at most places, and you took a course in sociology, another course in anthropology, and another in game theory, and the final exam question in each was: in general, should you trust people? It might be that the correct answer in one class gets you an F in another class. So there should be some language by which you could reconcile these things that are of common interest across many levels of analysis, and across the biological sciences as well as the behavioral sciences.

And finally, I can't help but point out that the standard theory in economics may be useful as a limiting case, and also there do appear to be some ironic cases in which the predictions of standard theory actually explain the behavior of unusual groups of people. I have another neural study we're trying to finish, about the Ellsberg paradox and ambiguity versus risk - that is, what happens when you know a lot about a probability versus when you don't know very much. Some decision theories argue that you should really treat those two the same: even though you don't know much about a probability, the missing information shouldn't matter. We did find a group of people who behave that way, who are kind of ambiguity-neutral: patients with this orbitofrontal damage. In the case of the ultimatum game, we also have two other groups who seem to obey the standard theory. There's a group I didn't tell you about: autistic subjects often offer zero. They basically act as if they're playing a game where there's no other person - a kind of mistakenly truncated game in which the other person doesn't get to react - because they don't have the theory of mind to imagine what others will do. So here's the Machiguenga, and... well, that's it.

Questions & Answers

Q: So in the first example you gave, the more sophisticated players played a zero-one strategy, and that seems to contradict your claim that the more sophisticated players match the prediction from your CH model. I wonder, if you actually look at more sophisticated players, do they actually play zero-one?

Camerer: Those are two good questions. Well, first of all, in games with mixed equilibria the claim I made doesn't hold in quite the same way, because our model assumes best response, so what happens is these things always cycle around. In a steady-state sense it might come to match, but not in as precise a sense as I implied - I wasn't precise enough. Do people actually do that? Yeah, we've tried that a couple of times. We're going to go back with some more tools; we've tried to measure steps of sophistication across people and correlate them with brain measures and response times. It's not that easy to do. Statistically it's kind of tough: if you look at one person playing 22 games, they don't seem to be sticking to, say, three-step behavior every time. For people interested in personality psychology this is nothing new at all - if you think of these as sort of stable abilities or traits, they're not highly correlated item to item. So we need to go back and do better psychometrics or classification. It does look like there's some predictive power. If you take a person's apparent number of thinking steps in, say, the first eleven games and the next eleven games, and you correlate them - using a mixture of Caltech students, who are really analytically skilled (the median math SAT is 800, so we actually select based on verbal scores), and PCC students, from an open-admissions city college, so there's nice big variation - you get a cross-correlation between the first batch of games and the second batch of around .5. That's pretty good for these sorts of things. We were kind of disappointed about the response times. We classified people by how many steps they seem to be doing and looked at response times; we thought more steps would mean longer times. First, it's not that simple, because there may be skill differences. It's like if you took a famous chef and me cooking bouillabaisse - the chef is much faster than me.
It takes me a long time because it's a skill. So it could be that people who are high-step thinkers have so much skill that they can do more steps faster. This is a classic problem in labor economics, basically, in figuring out marginal returns. We did find out, as I think I alluded to in the talk, that there are people who look like they're doing zero steps - they're hard to distinguish statistically in a Bayesian sense, but in the beauty contest game, if you choose above sixty-seven, I don't know what to call you other than thoughtless or mistaken - and they often take a long time. Zero-step thinkers have longer response times than one-step thinkers, and then two- and three-step thinkers have longer response times again. It's a U-shape; we thought it would just go up. We've now modified our view a little bit. Remember, the zero-step thing is just a starter, a starting point. And some of those zero-step thinkers are just people doing things that are very complex and don't fit the classification structure. In a statistical sense the zero category is sort of a residual category. For example, in the beauty contest game, if you pick zero because it's the Nash equilibrium, you'll get misclassified as a zero-step thinker, because there's no other place to classify you. So the zero step is sort of a garbage category, and we need to go back in there and think a little more carefully about what's going on. Now we think of it as a combination of heuristic types - people doing something quick and dirty - and other types that are being mistakenly classified into that category.

Q: Maybe you've said this, but with the offers in the sharing games across the different societies, was the total amount they played for changing, or was it adjusted to the amount of money people can normally earn?

Camerer: No, I did not mention that. That's a good point. Usually I'd ask people what their prediction is, because often people have something in mind. The anthropologists worked pretty hard at trying to maintain constant purchasing power across the groups. There are two things about money. One is that maintaining constant purchasing power is pretty hard, actually. This gives me an opportunity to make a sort of speech. I learned a lot about how to do experimental economics from anthropologists doing experimental economics for the first time, and the reason is that they face some serious challenges. One is language. First you have to worry about translating these things; the standard method, of course, is what's called back translation. You translate from English into Spanish, and then a local speaker translates from Spanish into Machiguenga, and you have another person listen to him and translate that back. And you see, when the text makes a round trip from English to Spanish to Machiguenga back to English, how much it suffers in the translation. If it doesn't come back to you the same, then you think something might have been mistranslated along the way. For some of these types of games the translation might really matter a lot, because there is some evidence that single words might really affect what people perceive as the sharing norm. If I say “this is a game in which I'm giving you money that you can keep or share,” that might be different than if I say “here's some community money that you're in charge of,” right? So the wording could be activating quite different things. These are tough experiments because they might be very sensitive to things which are really hard to control. This is really hard work. So they worked pretty hard on income as well, the purchasing power. The tricky part is that some of these economies are so simple they weren't even really monetized.
Usually there is some money floating around, but a lot of it is just barter. And they'll go work at an oil pipeline packing weeds for money so they'll have some cash, so when the guy comes by in the boat once a week they can buy some salt and batteries and Michael Jackson posters and stuff. So they work pretty hard at trying to match the purchasing power. To answer a question that may be implicit in yours, even if you didn't explicitly ask it: there have been quite a few studies looking at raising the stakes, and obviously doing that across these cultures. My sense of the evidence, and again Rachel is your local expert on many of these things, is that as you raise the stakes in ultimatum games, if we could do it in the US for ten dollars, a hundred, a million, the dollar amounts that people reject would go up. For example, no one would reject five out of ten, but many people would reject five out of a hundred. So the dollar threshold goes up in absolute terms. But the percentage amounts they would reject would go down. So people will reject ten percent of ten dollars, that's one buck out of ten, but many fewer will reject ten percent of a million. Any theory of what's going on will have to respect those regularities: people reject bigger dollar amounts but smaller percentages. But there are still quite a few rejections. John List has an experiment with four hundred dollars at stake where people rejected fifty dollars a couple of times. And by the way, we work extremely hard to be sure that the subjects are not confused, in the sense that they understand how the decisions will lead to money. At Caltech we usually go in with a big batch of money, and we wave it around. We have a long history of this, and again, Penn has an experimental economics lab that’s been operating for fifty years or something, and we try to make sure they understand they really will be paid according to their earnings.
We usually have a quiz after we give the instructions, saying “if you're offered five dollars and the other person gets fifteen and you say no, how much do you both get?” And if they don't write down zero and zero, we either kick them out of the experiment or explain the correct answer. So I really bristle when people say “oh, they must be confused,” because we really try to make sure they're not confused.
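[Editor's note] The stakes regularity above, rejection thresholds that rise in dollars but fall as a percentage of the pie, can be reproduced by simple toy models. Here is one hypothetical specification of my own (not a model from the talk): the responder gets concave utility from money minus a disutility that depends on the share received, and rejects when accepting is worse than the (0, 0) outcome.

```python
# Toy ultimatum-game responder: accept offer x out of stake T iff
# sqrt(x) - A * (T - 2x)/T >= 0.  Rejecting gives both players zero.
import math

A = 1.25  # hypothetical inequity-aversion weight

def accept_utility(x: float, T: float) -> float:
    return math.sqrt(x) - A * (T - 2 * x) / T

def rejection_threshold(T: float) -> float:
    """Smallest acceptable offer, by bisection (utility is increasing in x)."""
    lo, hi = 0.0, T / 2
    for _ in range(100):
        mid = (lo + hi) / 2
        if accept_utility(mid, T) < 0:
            lo = mid
        else:
            hi = mid
    return hi

for T in (10, 100, 1_000_000):
    x = rejection_threshold(T)
    print(f"stake {T:>9}: reject below ${x:.2f} ({100 * x / T:.4f}% of stake)")
```

Under these assumed parameters the dollar threshold rises with the stake while the rejected percentage falls, qualitatively matching the evidence Camerer describes; the specific functional form and the value of A are illustrative only.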

Q: (inaudible) ...correlating brain regions with strategic IQ, or performance in the game as measured by how much money people won. I wondered how you separated cause and effect there. So I think the insula was the region that correlated with poor performance. How do you know, or can you tell that insula activation is in a sense causing poor performance because people are thinking about themselves too much, or is it that poor performance is causing them to feel bad about themselves and therefore the insula is activated?

Camerer: This is a good question. One thing that makes neuroscience powerful, I think, is that people never really accept anything from one tool, so usually you try to go in with a couple of different types of tools or studies that triangulate. That's why I mentioned something like TMS: the insula is hard to reach, but in principle, suppose we could deactivate the insula; that might tell us a lot about whether it's really a cause or sort of a byproduct. For example, one interpretation of what's going on is that the interesting part is not really the Paris Hilton region right here, but that these people have deactivation of the insula relative to activity levels across the whole brain. That means they're actually not feeling much bodily discomfort when they're thinking about how to play the game. That may just be that they're good at concentrating. In a scanner there's a normal level of discomfort, because it's noisy and claustrophobic, and they manage to put that aside. If that's the case, we can then look at, say, performance in and out of the scanner. We could see whether reducing scanner discomfort, or creating other kinds of discomfort, loud distractions and noises, changes performance. I really put these up to show that in principle one could make some graphs and someday learn something from them. And I think in the case of the insula both things are going on: some people are being too self-absorbed, thinking about themselves and not playing the game as well, and for some people it’s something very different, like you suggested. So we'd try to stimulate it, turn the insula on or off through some other mechanism, or look at insulectomy patients and patients with damage in that region to see if they're doing something different.
One other important thing about the neuroscience: economists have always been comfortable with the notion of individual differences but didn't know much about how to measure them in a psychometrically interesting way. Usually in our textbook theories there's no subscript ‘i’ for different people; it's understood that, well, let's assume everyone's the same, because that’s mathematically convenient. On the other hand, specialization in games of exchange is the core driver in economics. So we understand that people are different in interesting ways, but we haven't had a scientific language to get at it. These sorts of studies invite you to look both at deficit patients on one hand and at experts on the other. We want to do some studies at Caltech with lots of fantastic chess players. Do they have extra-developed systems, or do they have other domain-specific knowledge? What is it that they're doing? There are many studies like that.

Q: (inaudible)

Camerer: We try to do a little bit of everything, because we like to have at least some answers to all the questions. A lot of the CH data I mentioned come from playing once, because we really wanted to build up a lot of data from people who know they're only playing once; maybe they're thinking extra hard. But when we're studying learning we're almost always playing the same game many times, from ten to maybe fifty or sometimes a hundred repetitions. In some labs other than ours people play five hundred times, so they have a huge history, more like the mathematical psychology tradition of one person doing something many times and averaging. We're also interested in, and have studied, going back to when I was at Penn, repeated games like repeated trust games. A trust game is like a sequential prisoner’s dilemma: I cooperate and then I hope you cooperate back. And there's a big difference between playing once and playing ten times with the same person, knowing you're playing the same person, so they can build up a reputation. So we're interested in whether the behavior fits Bayesian/Nash reputation-type models, and how it is that they build that reputation. We've been studying what’s called strategic teaching: what if one person is sophisticated and knows the other person is learning from experience? Then the sophisticated player can influence what the learner comes to believe. That could be bad for the learner, if they're taught something that gets exploited, or it could be beneficial: in a trust game I teach you that I'm always going to repay. So we've been studying all those kinds of things. Modifying the CH model is exactly the kind of thing we're working on now. In the CH model, suppose you know you're going to play a game that unfolds over, say, five periods. It seems odd to say “well, I'm a level-one player but I look ahead to the fifth period.” So we've been playing with models in which you yoke the number of steps of reasoning about others to the number of steps of planning ahead. Or you could treat them as separate features.
And that's actually pretty interesting because, and there are some theorems like this in game theory as well, you might be able to get what looks like cooperative behavior not out of social preferences or a taste for cooperation, but out of limited looking ahead plus some other stuff on top of it.
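[Editor's note] The CH model Camerer refers to can be sketched in a few lines. In the standard version, the frequency of level-k players is Poisson-distributed, and a level-k player best responds to the mix of all lower levels, with their Poisson weights renormalized. The rate τ = 1.5 below is a commonly cited illustrative estimate from the CH literature, not a number from this talk; the game is again the 2/3-of-the-average beauty contest.

```python
# Cognitive-hierarchy (Poisson-CH) choices in the 2/3-of-the-average
# beauty contest on [0, 100].
import math

TAU, P, LEVELS = 1.5, 2.0 / 3.0, 6

def poisson(k: int) -> float:
    """Poisson(TAU) frequency of level-k players."""
    return math.exp(-TAU) * TAU ** k / math.factorial(k)

choices = [50.0]  # level-0: random play, averaging 50
for k in range(1, LEVELS):
    # Level-k believes opponents are levels 0..k-1, weighted by their
    # Poisson frequencies renormalized over those lower levels, and
    # best responds by choosing P times the perceived mean.
    total = sum(poisson(h) for h in range(k))
    perceived_mean = sum(poisson(h) / total * choices[h] for h in range(k))
    choices.append(P * perceived_mean)

print([round(c, 1) for c in choices])
```

Notice that the choices fall with each level but converge well above the Nash equilibrium of zero, because higher-level players still believe they face a mix of lower-level opponents; that is the sense in which CH departs from equilibrium in the short run.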

 

University of Pennsylvania