The willingness to take risks is often treated as a stable personality trait. That’s not entirely true: risk seeking depends on what we need and when we need it.
Individuals close to starvation take greater risks for food than they otherwise would (risk taking here meaning choosing actions with highly variable outcomes). This risk seeking is predicted by a theory from animal foraging (Houston & McNamara, 1988). The theory holds only partially in animals (Kacelnik & Bateson, 1996), but it is increasingly confirmed for human risk taking (Korn & Bach, 2018). A typical experiment looks like this: you have five choices and must accumulate at least 10 points through repeated choices between two lotteries whose payoff distributions are described to you. After each choice, the selected lottery’s draw is added to your point total. You only receive a payoff if you reach 10 points.
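To make the task concrete, here is a minimal simulation of one episode. The two lotteries (a safe option that always pays 2 points, a risky one paying 0 or 4 with equal probability) are hypothetical stand-ins, not the ones used in the actual experiments:

```python
import random

# Hypothetical lotteries as (outcome, probability) pairs --
# illustrative only, not the experimental stimuli.
SAFE = [(2, 1.0)]
RISKY = [(0, 0.5), (4, 0.5)]

def draw(lottery, rng):
    """Sample one outcome from a lottery."""
    outcomes, probs = zip(*lottery)
    return rng.choices(outcomes, weights=probs)[0]

def play(policy, goal=10, trials=5, seed=0):
    """Run one episode: `policy(trial, points)` picks a lottery each round.

    Returns True only if the minimum requirement is met at the end --
    otherwise there is no payoff at all.
    """
    rng = random.Random(seed)
    points = 0
    for t in range(trials):
        points += draw(policy(t, points), rng)
    return points >= goal

# With these stand-in lotteries, always choosing safe yields exactly
# 2 * 5 = 10 points, so the requirement is just met.
print(play(lambda t, pts: SAFE))  # -> True
```

The all-or-nothing payoff is what makes the task interesting: the attractiveness of the risky lottery depends on how far you are from the requirement and how many choices remain.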
Here comes the interesting bit. Computationally, solving this puzzle is quite complex, yet humans achieve near-optimal performance. To solve it exactly, one must determine, at each step, which lottery has the higher value given how many trials remain and how many points are still needed. The solution involves dynamic programming (Houston & McNamara, 1988), which enumerates all possible future choices and lottery draws. This raises the question: how does the human mind solve the task?
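The dynamic-programming solution works by backward induction over the state (trials left, current points): the value of a state is the success probability under the best choice, which in turn depends on the values of the successor states. A minimal sketch, again using hypothetical safe/risky lotteries rather than the experimental ones:

```python
from functools import lru_cache

# Hypothetical lotteries as (outcome, probability) pairs.
SAFE = [(2, 1.0)]                 # always yields 2 points
RISKY = [(0, 0.5), (4, 0.5)]      # 0 or 4 points, equally likely

GOAL = 10      # minimum points required for any payoff
TRIALS = 5     # number of choices available

@lru_cache(maxsize=None)
def success_prob(trials_left, points):
    """P(reaching GOAL) under the optimal policy, via backward induction."""
    if points >= GOAL:
        return 1.0
    if trials_left == 0:
        return 0.0
    # For each lottery, average the value of the successor states;
    # the optimal choice is whichever lottery maximizes that average.
    return max(
        sum(p * success_prob(trials_left - 1, points + x) for x, p in lottery)
        for lottery in (SAFE, RISKY)
    )

print(success_prob(TRIALS, 0))   # five safe choices suffice -> 1.0
print(success_prob(3, 0))        # too few trials for safe play -> 0.25
```

Note how the optimal choice flips with the state: with five trials left the safe lottery guarantees success, but with only three trials left the safe option can no longer reach 10 points, so the risky lottery becomes the rational choice. The cost of this exactness is that the recursion touches every reachable (trials, points) state, which is what makes human near-optimality puzzling.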
We test how humans solve risky decisions (decisions from description) when they face time constraints and minimum requirements. We test prospect theory (Tversky & Kahneman, 1992) and show that the standard functional forms and parameters do not suffice to solve this task, although a simple modification does. We further explore cognitive process models that implement the steps for solving this task efficiently. I cannot detail the final results because the analyses are ongoing; I will update this post once the manuscript is accepted.
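For reference, the standard functional forms we start from are the power value function and the inverse-S probability weighting function of Tversky & Kahneman (1992), here with their published median parameter estimates (the modification our paper explores is not shown):

```python
def value(x, alpha=0.88, lam=2.25):
    """Power value function: concave for gains, steeper (loss-averse)
    and convex for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: small probabilities are
    overweighted, large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def pt_value(x, p):
    """Prospect-theory value of a simple lottery: win x with
    probability p, else nothing."""
    return weight(p) * value(x)
```

A model with fixed forms like these evaluates each lottery the same way regardless of how many trials remain or how many points are still needed, which is one intuition for why the unmodified version struggles in this state-dependent task.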