### Risk Taking and Intelligence

Common wisdom says it's "stupid" to take unnecessary risks, but some surprising results from the *Journal of Economic Perspectives* suggest that intelligent people may be the most likely to make these "stupid" decisions. In an NYT interview with Professor Shane Frederick of the MIT Sloan School of Management, the author relates how a short math test has unearthed some fascinating insights into individual differences in risk taking. Consider the following problems:

1) A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?

2) If it takes five machines five minutes to make five widgets, how long would it take 100 machines to make 100 widgets?

3) In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?

If you got all three answers right, you may be good at math, but you're also probably someone who reflects on your answers: each question has an intuitive "foil" answer that is in fact completely wrong. Frederick found that the score on this test predicted the amount of risk that individuals would take in choosing between various financial payoffs - the higher your score, the more likely you are either to wait for a reward or to take risks in order to get a better one. In addition, there was an interaction with gender, such that high-scoring women showed slightly more willingness to wait for a payoff than high-scoring men, while low-scoring women were even more risk averse than low-scoring men.

These differences are not well explained by current theories of decision making, such as Kahneman & Tversky's "prospect theory," which states that subjects evaluate risky choices according to an asymmetric utility curve in which losses are weighted more heavily than equivalent gains. Frederick suggests that these deviations from prospect theory may actually be a result of differences in intelligence between the groups, although there are problems with this interpretation.
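For context, the asymmetry at the heart of prospect theory can be written down in a few lines. This sketch is mine, not Frederick's; the parameter values are Tversky & Kahneman's 1992 estimates, used here purely for illustration:

```python
# Sketch of the prospect-theory value function (parameters from
# Tversky & Kahneman, 1992; not from Frederick's paper).
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain/loss of x dollars."""
    if x >= 0:
        return x ** alpha          # gains: concave (diminishing returns)
    return -lam * (-x) ** alpha    # losses: steeper by a factor of lam

# A $100 loss "hurts" about 2.25x as much as a $100 gain pleases:
gain = value(100)    # roughly 57.5
loss = value(-100)   # roughly -129.4
```

The loss-aversion coefficient `lam` is what makes the curve asymmetric; with `lam = 1` the function would treat gains and losses symmetrically.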

First, many of the scales used by Frederick explicitly rely on introspection and self-report, two processes known to be inaccurate in exactly this context (e.g., the self-report of SAT scores). Second, any scale containing only three questions is likely to show a lot of variability, especially one containing math problems (since "smarter" participants could simply have been exposed to the questions more frequently), but Frederick does not report the statistics that bear on this question. Third, his analysis is full of references to vague terms like "cognitive ability," along with the suggestion that cognitive ability reflects an interaction of working memory, processing speed, and IQ - but with no actual explanation of how those factors interact to produce it, nor of which of them is particularly relevant to correctly solving these math problems.

Many of his control analyses are also problematic, above and beyond the use of relatively undefined terms. For example, there are no reaction time measures for the math problems: what if good performance is more strongly related to how much time you spend reviewing your answer than to 'cognitive ability'? Or consider one of the results from a question meant to control for individual differences in time preference: those in the low-scoring group "[the 'cognitively impulsive'] were willing to pay significantly more for the overnight shipping of a chosen book (item 1) which *does* seem like an expression of an aspect of *pure* time preference (the psychological 'pain' of waiting for something desired)." [emphases in original] Even if you assume that terms like "psychological pain" could actually be measured in a way that bears on this question, it's difficult to know whether the low scorers are actually impatient or whether they just happen to be voracious readers - especially since other measures of "time preference" were unrelated to score.

In summary, this is an interesting starting point for the study of individual differences in decision making, but it is plagued by several methodological problems. Nonetheless, the idea that intelligence (or other measures of executive control, such as self-monitoring) may interact with risk aversion is intriguing; more elegant designs will be needed to confirm it.

EDIT: Today COGBlog is also running a very nice story on this study's methodological flaws - and finds even more!

The correct answers to the math problems are 5 cents, 5 minutes, and 47 days.
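For anyone who wants to check the arithmetic, here is a quick verification of all three answers (my sketch, not part of the original post):

```python
# 1) Bat + ball = $1.10 and bat = ball + $1.00, so 2*ball + 1.00 = 1.10.
ball = (1.10 - 1.00) / 2           # 5 cents, not the intuitive 10 cents
assert abs(ball - 0.05) < 1e-9

# 2) Each machine makes one widget in 5 minutes, so 100 machines
#    make 100 widgets in parallel in those same 5 minutes.
time_for_100 = 5                   # minutes, not the intuitive 100

# 3) The patch doubles daily, so one day before covering the whole lake
#    it covered half of it.
days_to_half = 48 - 1              # 47 days, not the intuitive 24
```

The "foil" in each case comes from applying a simple proportional rule (subtract, scale up, halve) where the problem's structure doesn't support it.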

## 2 Comments:

Stimulating article, thanks.

The "doubling" problem may be unfairly biased in favor of anyone who's taken computer science or binary math courses. More men than women take those courses, so men's better scores can be partially explained by that. Otherwise, the author's assumption that men being better at math explains the difference may be right. Not that your friend over at cogblog would agree. Heh.

Nooo!! The doubling problem is just common sense. What stops most people is that they look for more complexity than is there.
