Behavioral science received a nice pat on the back in September, when California Institute of Technology neuroeconomist Colin Camerer won a $625,000 no-strings-attached MacArthur Foundation “genius grant.”
Drawing on fMRI, game-theory laboratory experiments, and a variety of other empirical tools, Camerer’s work reaches across several disciplines and has been instrumental in ongoing attempts to rewrite standard economic accounts of human decision-making so that they better map onto real-life behavior. As the Foundation put it, Camerer’s “innovative thinking and modeling acumen are fostering an even more nuanced analysis of individual behavior and the practical policy implications of neuroscientific insights about human decision making.”
Camerer recently sat for an email interview with Pacific Standard that touched on everything from financial regulation to the origins of brain-scan pseudoscience to the ongoing problems with economic orthodoxy. The interview has been shortened and lightly edited from its original form.
One of the cool things about behavioral economics is that it offers insights that just about everyone can use in their daily life. So let’s start with a simple, practical question: If you could pick just one surprising finding from your work about how we make decisions that could get magically transmitted to everyone, what would it be?
In behavioral economics we often see that people focus myopically on a small unit of evaluation or achievement and are overly sensitive to falling short on that unit (perceived “loss aversion”). Kids are distraught over losing a toy (even if it’s one of dozens); teens are crushed by one disappointing social encounter. Sometimes this myopic focus makes sense—when you’re studying for the huge exam that is crucial for your career, for example. But usually such a focus vastly overemphasizes what’s happening now and crowds out the longer view. We call this decision isolation. Remedy: Always take the long view.
How about the world of finance? In light of your research, what are the easiest, most straightforward tweaks that could be made either to U.S. financial regulations or to how finance professionals are taught best practices to make markets safer and more resilient?
In principle this is easy: One type of professional wants to help people make simple low-cost decisions which will not lead to a constant flow of fees. For example, my advice for retirement is to find the lowest-fee funds with substantial diversification. You can’t control the return but you can control the fees.
Another type of professional wants to stoke your optimism, will encourage trading and chasing returns, and will charge high fees. The best evidence is very clear: It is very hard to predict which funds or managers will beat the market, so if you stick with low-fee conservative funds you will do OK. The regulatory tweak here is to force better disclosure of fees and educate consumers to look at fees.
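To put a rough number on the fee point, here is a minimal sketch (ours, not Camerer’s) of how a one-percentage-point difference in annual fees compounds over a 30-year horizon. The starting balance, gross return, and fee levels are illustrative assumptions.

```python
# Illustrative sketch only: starting balance, gross return, and fee levels are
# hypothetical assumptions, not figures from the interview.
def ending_wealth(initial, gross_return, expense_ratio, years):
    """Compound an initial balance at (gross_return - expense_ratio) per year."""
    return initial * (1 + gross_return - expense_ratio) ** years

start = 100_000                      # hypothetical starting balance
gross = 0.07                         # assumed gross annual return, identical for both funds
low_fee, high_fee = 0.001, 0.01      # 0.1% index fund vs. 1.0% actively managed fund

low = ending_wealth(start, gross, low_fee, 30)
high = ending_wealth(start, gross, high_fee, 30)
print(f"Low-fee fund after 30 years:  ${low:,.0f}")
print(f"High-fee fund after 30 years: ${high:,.0f}")
print(f"Lost to the extra fees:       ${low - high:,.0f}")
```

Under these assumed numbers the higher-fee fund ends up well over $100,000 behind, even though both funds earn the same gross return.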
More generally, your financial institutions are frenemies: They want your business and don’t want you to leave, but they often create hidden fees, upsell, and hawk products you don’t need. Now banks are under a lot of consumer and media pressure, so if you push back they will often retract some fees or be more reasonable. Usually I ask, “What would you do if you were me?” It is hard for financial professionals to then stick to the script (especially if you put on the sad face) since they know the script answer does not address your question.
As for financial regulation, the core issue is how to “price” systemic risk, which means: If a firm buys a credit default swap from one seller, and that seller trades with another counterparty, what is the risk for the entire chain of trades? Who should pay to insure against this type of risk, and what happens when risks go bad? The 2008 crisis was a big wake-up call to everyone—academics, companies, regulators—that we don’t know very much about pricing systemic risk and how to broadcast those kinds of risk prices. A lot of smart academics are working on this and making rapid progress. My sense is that regulators are paying attention to good ideas that academics are coming up with. However, there is a political economy issue because Too Big to Fail institutions will always resist reform.
This seems like straight textbook “moral hazard” (in economic jargon): Big financial institutions believe that if their business goes badly, governments will bail them out. (TARP probably strengthened their confidence, though it could have scared them instead.) One regulatory idea is to have big institutions pay into a bailout fund; if you haven’t paid in, you will not get bailed out. (Sound familiar? It’s called insurance.) That could create the political will to allow a non-payer bank to fail, because the public would not be sympathetic if a big bank did not pay in. After one such failure they will all get in line.
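As a toy illustration of why the chain of trades described above matters (a sketch under our own assumptions, not a model from Camerer’s work), consider protection bought through a chain of counterparties, each of which performs only with some probability. The per-counterparty reliability figure below is made up, and independence is assumed purely to keep the arithmetic simple.

```python
# Toy illustration with made-up numbers: how protection bought through a chain of
# counterparties decays if each counterparty performs only with some probability.
# Independence across counterparties is assumed purely to keep the arithmetic simple.
def chain_reliability(per_link_prob, links):
    """Probability that every counterparty in the chain makes good, assuming independence."""
    return per_link_prob ** links

for links in (1, 3, 5, 10):
    p = chain_reliability(0.98, links)   # hypothetical 98% per-counterparty reliability
    print(f"{links:2d} counterparties: protection actually pays off {p:.1%} of the time")
```

Even with each individual counterparty looking very safe, the probability that the whole chain holds up falls noticeably as the chain grows, which is one intuition for why pricing risk trade by trade misses the systemic picture.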
What explains the historical tendency in economics (including game theory) to reduce human beings to simple, utility-maximizing robots free of such human traits as anxiety and myopia? How much of it comes down to the fact that homo economicus is simply easier, quantitatively, to model?
In economics, the reason for avoiding realistic modeling was exactly that a simpler caricature of human nature is easier math and—importantly—therefore easy to aggregate into market level demand or supply, which are key economic concepts. For example, if people don’t care what anybody else buys (there are no fashions, fads, or “participation externalities” from using the most popular software) then it is easy to add up individual demand to get market demand. If there are social influences it is not so easy.
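In symbols (notation added here, not Camerer’s): with independent preferences, market demand at a price is just the sum of individual demands, but with a participation externality each consumer’s demand also depends on aggregate demand, so market demand is only defined implicitly, as a fixed point.

```latex
% No social influences: market demand is a simple sum of individual demands.
Q(p) = \sum_{i=1}^{N} q_i(p)

% Participation externality: each q_i also depends on aggregate demand Q(p),
% so market demand is only defined implicitly, as a fixed point.
Q(p) = \sum_{i=1}^{N} q_i\bigl(p, Q(p)\bigr)
```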
In addition to simplicity, part of the long-standing appeal of the rational model in academic economics is technology and scientific conservatism: Until relatively recently, economists did not produce much of their own data. Most of the data studied were census and panel data collected by governments or by various non-scientific organizations. High-quality experimental data became available in the 1970s or so, but even those experiments have taken a long time to diffuse. (Most Ph.D. students at top economics programs are not required to learn anything about experimental methods. This is true even at Caltech, where we helped pioneer experimental economics.)
So historically, economics as a profession has been timid about measuring and collecting new kinds of data. This is an unfortunate contrast with scientific innovation in most other fields, where new methods and data lead the way. As a result, when most graduate students in economics are making up their minds about what theories are plausible and worth working on, they are not being exposed to the clearest behavioral evidence or thinking about new ways to measure.
Kvetching aside, there is plenty of good news for those of us in behavioral economics. Virtually every theorist who begins to do experiments becomes “behavioralized” by the results. And there are a lot of senior economists who are enthusiastic about behavioral economics, and also neuroeconomics, even if they aren’t using the methods themselves.
More broadly, where exactly are we in the struggle, or collaboration, or whatever else it is between the rational actor model and behavioral economics? I know it’s a bit of an overstatement to say the two are simply at loggerheads and one clear winner will prevail, given the extent to which researchers from both sides of the fence borrow from the other’s insights, but what’s your take on where this will all end up? What role should the rational actor theory play when all is said and done?
The central struggle over whether a behavioral view is achievable and useful is over. Over about three decades we went from: “If you move away from rationality, anything can happen” to “You have a clear alternative psychological theory, but what’s the evidence?” to “You have convincing lab evidence, what about natural markets and important choices?” to where we are now. Behavioral ideas are well-received in top journals and ambitious graduate students are working on these ideas. However, there is an unfortunate trend in which “anti-behavioral” papers that are inconclusive (in my view) also have been easy to publish.
The next challenges are on two fronts:
• Core orthodoxy and textbooks. Behavioral economics has not made much impact in basic texts. In other fields I know well (such as cognitive psychology and neuroscience) textbooks at all levels are updated briskly. Economists are slowpokes on updating the basic books.
• Measurement. Theoretical frontiers in economic theory discuss thinking costs, limited attention, impactful macroeconomic uncertainty, in-group preferences, anxiety, social norms, and so on. All these constructs have biological and other markers. Figuring out which theories are right will be more rapid if we measure biological correlates of these variables. Imagine doing medicine, having lots of theories about what causes disease, but refusing to use a microscope to look at tissue samples! That’s where economics is now with respect to individual decisions. Unfortunately, economists are slow on technology adoption. Fortunately, graduate students are very sensitive to social authority. A few well-placed authority calls will speed up adoption.
So, brain scans: They’re hugely important to your research, and can reveal a lot about human actions. But we’re also in something of an age of neuro-shysters—people slap brain scans up on projection screens and make wild claims that don’t have much of a connection to reality—which has led to some soul-searching about what these scans can and can’t tell us. How do you think those of us who lack scientific expertise should approach claims involving brain scans? Given current technology and neurology, what are the limits of this sort of knowledge?
The reasonable view here is that brain scanning is noisy, sometimes underpowered (i.e., samples are too small), and that the flashiest studies may get oversold in mass media (not always by scientists). At the same time, the basic method is sound and the accumulation of knowledge is rapid and systematic.
Furthermore, brain scans per se are not the only ingredient in economic neuroscience. Measuring brain activity in different ways is what is very important. Every method is fantastic in one way and obviously weak in others. EEG is fast but cannot resolve activity that well in the “old” inner brain. fMRI is expensive at the margin (each additional subject is expensive) and can see the whole brain, but with slow temporal resolution. Animal models (monkeys, rats) are great for neural measurement and, in some species, genetic manipulation, but we are never quite sure how their activity and behavior generalize to humans. The behavior of people with lesions in certain brain areas also helps validate what we see from fMRI.
Most neuroskepticism has been directed, often reasonably, at human studies that overclaim: flashy results built on weak data. It is reasonable to be skeptical about every “the first new imaging…” study you read about. Those of us who are serious about self-policing human neuroscience don’t like these studies either. However, we have limited control over how studies are described in the media and even over where they’re published.
In neuroeconomics we get criticized a lot because most economists don’t talk to journalists regularly. So our peers don’t always appreciate the extent to which a cautious scientific claim can be wildly paraphrased for a large audience. For example, a recent fMRI paper of ours in Neuron on mentalizing in experimental stock bubbles was described by the CNBC headline “Science Proves Why the Best Traders Are Jerks.” We did not write that headline and disavow it.
Any questions I should have asked, but didn’t? Anything else you’d like to add?
We speculate a lot about when neuroscience findings might have broad impact in economics. I am optimistic about the science quality but not optimistic about how rapidly innovation will diffuse. The main reason for optimism about decision neuroscience is the rapid pace of progress, and the ability to check mistakes rapidly and across methods. Bad ideas don’t last long. Journals publish rapidly. There is good communication across substantial language gaps.
But the acceptance of neural ideas in economics has often been discouraging. Here’s an example: Last year a top journal invited two articles on neuroeconomics. This was an ideal opportunity to present the latest evidence and explain why it could be important in economics. However, the editors apparently wanted a balanced view, and made one of the two articles a skeptical piece by van Rooij and Van Orden. (These articles are not peer-reviewed, by the way.)
When I showed their article to my weekly research group, the students burst out laughing. Nobody had heard of these authors, which meant they were not qualified to write about economics. The article was full of stale, poorly sourced, and bizarre arguments. The authors included an appendix with brain region locations that were marked incorrectly … and then refused our offer to help correct the labeling! This is just one article, but it indicates some inability within mainstream economics to organize even a useful debate.
I am still optimistic that neural measures can improve economic analysis on its own terms, but I am discouraged by the economics profession’s tendency to sit on the sidelines and ignore these results while wonderful work is being done in neuroscience, work that would benefit from more collaboration.