Judging two events in combination (A&B) as more probable than one of the events (A) is known as a conjunction fallacy. According to dual-process explanations of human judgment and decision making, the fallacy is due to the application of a heuristic, associative cognitive process. Avoiding the fallacy has been suggested to require the recruitment of a separate process that can apply normative rules. We investigated these assumptions using functional magnetic resonance imaging (fMRI) during conjunction tasks. Judgments, whether correct or not, engaged a network of brain regions identical to that engaged during similarity judgments. Avoidance of the conjunction fallacy additionally, and uniquely, involved a fronto-parietal network previously linked to supervisory, analytic control processes. The results lend credibility to the idea that incorrect probability judgments are the result of a representativeness heuristic that requires additional neurocognitive resources to avoid.
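The normative constraint at issue, the conjunction rule, can be made concrete with a minimal sketch (the probability values below are hypothetical, chosen only for illustration):

```python
# Conjunction rule: P(A and B) can never exceed P(A), because the
# conjunction is a subset of either conjunct.
p_a = 0.30          # P(A), e.g. a single event (hypothetical value)
p_b_given_a = 0.10  # P(B | A), the second event given the first

p_a_and_b = p_a * p_b_given_a  # chain rule: P(A and B) = P(A) * P(B | A)

# Judging the conjunction as MORE probable than the single event,
# as participants committing the fallacy do, violates this inequality.
assert p_a_and_b <= p_a
```

A similarity-based ("representativeness") score need not respect this subset relation, which is why a heuristic process can rank A&B above A.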
When people use rule-based integration of abstracted cues to make multiple-cue judgments they tend to default to linear additive integration of the cues, which may interfere with efficient learning in non-additive tasks. We hypothesize that this effect becomes especially pronounced when cues are presented numerically rather than verbally, because numbers elicit expectations about a task with a simple numerical solution that can be appropriately addressed by linear and additive integration. This predicts that, relative to a verbal format, a numerical format should be advantageous for learning in additive tasks, but detrimental for learning in non-additive tasks. In two experiments, we find support for the hypothesis that a verbal format can improve learning in non-additive tasks. The division-of-labor between cognitive processes observed in previous research (Juslin et al., 2008), with cue abstraction in additive tasks and exemplar memory in non-additive tasks, was only present in conditions with numeric information and may therefore in part be driven by the use of numeric formats. This illustrates how surface characteristics of stimuli can elicit different priors about the nature of the variables and the generative model that produced the cues and the criterion. We fitted cue-abstraction and exemplar algorithms by PNP-modeling (Sundh et al., 2021). At the end of training both cue abstraction and exemplar memory processes primarily involved exact analytic processes marred by occasional error, rather than the noisy and approximate intuitive processes typically assumed in previous studies – specifically, cue abstraction was primarily implemented by number crunching and exemplar memory by rote memorization.
Six- and 12-month-old infants’ eye movements were recorded as they observed feeding actions being performed in a rational or non-rational manner. Twelve-month-olds fixated the goal of these actions before the food arrived (anticipation); the latency of these gaze shifts depended (r = .69) on infants’ life experience of being fed. In addition, 6- and 12-month-olds dilated their pupils during observation of non-rational feeding actions. This effect could not be attributed to light differences or differences in familiarity, but was interpreted to reflect sympathetic-like activity and arousal caused by a violation of infants’ expectations about rationality. We argue that evaluation of rationality requires less experience than anticipation of action goals, suggesting a dual-process account of preverbal infants’ everyday action understanding.
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations. It is hypothesized that, whereas the additive, independent linear effect of each cue can be explicitly abstracted and integrated by a serial, additive judgment process, a variety of sophisticated task properties, like non-additive cue combination, non-linear relations, and inter-cue correlation, are carried implicitly by exemplar memory. Three experiments investigating the effect of additive versus non-additive cue combination verify the predicted shift in cognitive representations as a function of the underlying combination rule.
While a wealth of evidence suggests that humans tend to rely on additive cue combination to make controlled judgments, many of the normative rules for probability combination require multiplicative combination. In this article, the authors combine the experimental paradigms on probability reasoning and multiple-cue judgment to allow a comparison between formally identical tasks that involve probability vs. other task contents. The purpose was to investigate whether people have cognitive algorithms for the combination, specifically, of probability, affording multiplicative combination in the context of probability. Three experiments suggest that, although people show some signs of a qualitative understanding of the combination rules that are specific to probability, in all but the simplest cases they lack the cognitive algorithms needed for multiplication, but instead use a variety of additive heuristics to approximate the normative combination. Although these heuristics are surprisingly accurate, normative combination is not consistently achieved until the problems are framed in an additive way.
Research on probability judgment has traditionally emphasized that people are susceptible to biases because they rely on "variable substitution": the assessment of normative variables is replaced by assessment of heuristic, subjective variables. A recent proposal is that many of these biases may rather derive from constraints on cognitive integration, where the capacity-limited and sequential nature of controlled judgment promotes linear additive integration, in contrast to many integration rules of probability theory (Juslin, Nilsson, & Winman, 2009). A key implication of this theory is that it should be possible to improve people's probabilistic reasoning by recasting probability problems into logarithmic formats that require additive rather than multiplicative integration. Three experiments demonstrate that recasting tasks in a way that allows people to arrive at the answers by additive integration decreases cognitive biases: while people can rapidly learn to produce the correct answers in an additive format, they have great difficulty doing so with a multiplicative format.
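The manipulation rests on a simple identity: a product of probabilities becomes a sum of their logarithms. A minimal sketch (the example probabilities are hypothetical):

```python
import math

# Multiplicative format: the conjunction of two independent events
# requires multiplying probabilities.
p1, p2 = 0.8, 0.5
product = p1 * p2  # 0.4

# Logarithmic (additive) format: the same problem can be solved by
# adding, since log(p1 * p2) = log(p1) + log(p2).
log_sum = math.log(p1) + math.log(p2)

# Exponentiating the sum recovers the multiplicative answer.
assert abs(math.exp(log_sum) - product) < 1e-12
```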
In a recent paper Wagenaar (1988) suggested that overconfidence can be used as an indicator of reconstructive processes which allow responses based on inference to be distinguished from responses based on retrieval. The ecological models (Björkman, in press; Gigerenzer, Hoffrage, & Kleinbölting, 1991; Juslin, 1993a, 1993b, 1994) provide a more positive view of the calibration of reconstructive responses. In this paper we compare these two views and argue that overconfidence cannot be considered a reliable indicator of reconstructive processes since people may be well calibrated for tasks that require inference, provided that tasks are selected in an unbiased manner. Instead, we discuss two different models: the response-independence model which is appropriate to retrieval, and the response-dependence model which applies to inference. These two models predict different distributions of solution probabilities and they therefore provide a criterion by which we can distinguish between direct retrieval and reconstruction. In two empirical studies modelled after Experiment 1 in Wagenaar's (1988) paper it is shown that calibration can be very similar and quite reasonable both for tasks that are dominated by inference and tasks that are dominated by retrieval processes. In Experiment 2 we show that the two conditions nevertheless differ in regard to the distributions of solution probabilities in the manner predicted by the two response models presented in the paper. It is proposed that the question of the appropriate interpretation of solution probabilities has been neglected, and that the criterion should be of interest also to applications outside the domain of calibration research.
Behaviour benefitting others (prosocial behaviour) can be motivated by self-interested strategic concerns as well as by genuine concern for others. Even in very young children such behaviour can be motivated by concern for others, but whether it can be strategically motivated by self-interest is currently less clear. Here, children had to distribute resources in a game in which a rich but not a poor recipient could reciprocate. From four years of age participants strategically favoured the rich recipient, but only when recipients had stated an intention to reciprocate. Six- and eight-year-olds distributed more equally. Children allocating strategically to the rich recipient were less likely to help when an adult needed assistance but was not in a position to immediately reciprocate, demonstrating consistent cross-task individual differences in the extent to which social behaviour is self- versus other-oriented even in early childhood. By four years of age children are capable of strategically allocating resources to others as a tool to advance their own self-interest.
We examined 6-month-olds’ abilities to represent occluded objects, using a corneal-reflection eye-tracking technique. Experiment 1 compared infants’ ability to extrapolate the current pre-occlusion trajectory with their ability to base predictions on recent experiences of novel object motions. In the first condition infants performed at asymptote (≈2/3 accurate predictions) from the first occlusion passage. In the second condition all infants initially failed to make accurate predictions. Performance, however, reached asymptote after two occlusion passages. This is the first study that demonstrates such rapid learning effects during an occlusion task. Experiment 2 replicates these effects and demonstrates a robust memory effect extending 24 h. In occlusion tasks such long-term memory effects have previously only been observed in 14-month-olds (Moore & Meltzoff, 2004).
We hypothesized that women with Turner syndrome (45,X) with a single X-chromosome inherited from their mother may show mentalizing deficits compared to women of normal karyotype with two X-chromosomes (46,XX). Simple geometrical animation events (two triangles moving with apparent intention in relation to each other) which usually elicit mental-state descriptions in normally developing people, did not do so to the same extent in women with Turner syndrome. We then investigated the potential role in this deficit played by monoamine oxidase B (MAO-B) enzymatic activity. MAO-B activity reflects central serotonergic activity, and by implication the functional integrity of neural circuits implicated in mentalizing. Platelet MAO-B was substantially reduced in Turner syndrome. However, contrary to prediction, in this (relatively small) sample there was no association between MAO-B enzymatic activity and mentalizing skills in participants with and without Turner syndrome.
Two recent studies - one of which was published in this journal - claimed to have found that learning on a non-symbolic arithmetic task improved performance on a symbolic arithmetic task (Park & Brannon, 2013, 2014). This finding has potentially far-reaching implications, because it would constitute evidence for a causal link between the Approximate Number System (ANS) and symbolic-math ability. Here, we argue that, due to the methodology used in both studies, the interpretation of data in terms of an improvement in ANS performance is problematic. We provide arguments and simulations showing that the trends in the data are similar to what one would expect for a non-learning observer. We discuss the implications for the original interpretation in terms of causality between non-symbolic and symbolic arithmetic performance.
Math anxiety (MA) involves negative affect and tension when solving mathematical problems, with potentially life-long consequences. MA has been hypothesized to be a consequence of negative learning experiences and cognitive predispositions. Recent research indicates genetic and neurophysiological links, suggesting that MA stems from a basic level deficiency in symbolic numerical processing. However, the contribution of evolutionary ancient purely nonverbal processes is not fully understood. Here we show that the roots of MA may go beyond symbolic numbers. We demonstrate that MA is correlated with precision of the Approximate Number System (ANS). Individuals high in MA have poorer ANS functioning than those low in MA. This correlation remains significant when controlling for other forms of anxiety and for cognitive variables. We show that MA mediates the documented correlation between ANS precision and math performance, both with ANS and with math performance as independent variable in the mediation model. In light of our results, we discuss the possibility that MA has deep roots, stemming from a non-verbal number processing deficiency. The findings provide new evidence advancing the theoretical understanding of the developmental etiology of MA.
This study examines how numeracy and probability denominator (a direct-ratio probability, a relative frequency with denominator 100, a relative frequency with denominator 10,000) affect the evaluation of prospects in an expected-value based pricing task. We expected that numeracy would affect the results due to differences in the linearity of number perception and the susceptibility to denominator neglect with different probability formats. An analysis with functional measurement verified that participants integrated value and probability into an expected value. However, a significant interaction between numeracy and probability format and subsequent analyses of the parameters of cumulative prospect theory showed that the manipulation of probability denominator changed participants’ psychophysical response to probability and value. Standard methods in decision research may thus confound people’s genuine risk attitude with their numerical capacities and the probability format used.
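One way to see how probability format can distort pricing is through the probability-weighting function of cumulative prospect theory (Tversky & Kahneman, 1992). The sketch below is illustrative only: the one-outcome prospect is an assumption, and the default gamma is Tversky and Kahneman's published estimate for gains, not a parameter fitted in this study.

```python
def weight(p, gamma=0.61):
    """Inverse-S-shaped probability weighting (Tversky & Kahneman, 1992).
    With gamma < 1, small probabilities are overweighted and large
    probabilities underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def price(value, p, gamma=0.61):
    """Stated price for a simple prospect (value with probability p),
    assuming a linear value function for simplicity."""
    return value * weight(p, gamma)

# With gamma = 1 the price equals expected value; with gamma < 1 rare
# gains are over-priced and near-certain gains under-priced.
assert weight(0.01) > 0.01
assert weight(0.99) < 0.99
```

A format that changes the psychophysical response to probability can be modeled as a change in gamma, which is why format and risk attitude can be confounded.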
While five-month-old infants show orientation-specific sensitivity to changes in the motion and occlusion patterns of human point-light displays, it is not known whether infants are capable of binding a human representation to these displays. Furthermore, it has been suggested that infants do not encode the same physical properties for humans and material objects. To explore these issues we tested whether infants would selectively apply the principle of solidity to upright human displays. In the first experiment infants aged six and nine months were repeatedly shown a human point-light display walking across a computer screen up to 10 times or until habituated. Next, they were repeatedly shown the walking display passing behind an in-depth representation of a table, and finally they were shown the human display appearing to pass through the table top in violation of the solidity of the hidden human form. Both six- and nine-month-old infants showed significantly greater recovery of attention to this final phase. This suggests that infants are able to bind a solid vertical form to human motion. In two further control experiments we presented displays that contained similar patterns of motion but were not perceived by adults as human. Six- and nine-month-old infants did not show recovery of attention when a scrambled display or an inverted human display passed through the table. Thus, the binding of a solid human form to a display in only seems to occur for upright human motion. The paper considers the implications of these findings in relation to theories of infants’ developing conceptions of objects, humans and animals. ?? 2006 Elsevier B.V. All rights reserved.
Base rate neglect refers to people's apparent tendency to underweight or even ignore base rate information when estimating posterior probabilities for events, such as the probability that a person with a positive cancer-test outcome actually does have cancer. While often replicated, almost all evidence for the phenomenon comes from studies that used problems with extremely low base rates, high hit rates, and low false alarm rates. It is currently unclear whether the effect generalizes to reasoning problems outside this "corner" of the entire problem space. Another limitation of previous studies is that they have focused on describing empirical patterns of the effect at the group level and not so much on the underlying strategies and individual differences. Here, we address these two limitations by testing participants on a broader problem space and modeling their responses at a single-participant level. We find that the empirical patterns that have served as evidence for base-rate neglect generalize to a larger problem space, albeit with large individual differences in the extent to which participants "neglect" base rates. In particular, we find a bimodal distribution consisting of one group of participants who almost entirely ignore the base rate and another group who almost entirely account for it. This heterogeneity is reflected in the cognitive modeling results: participants in the former group were best captured by a linear-additive model, while participants in the latter group were best captured by a Bayesian model. We find little evidence for heuristic models. Altogether, these results suggest that the effect known as "base-rate neglect" generalizes to a large set of reasoning problems, but varies widely across participants and may need a reinterpretation in terms of the underlying cognitive mechanisms.
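The two model classes can be sketched as follows: the normative response follows Bayes' rule, while a linear-additive judge combines the same three cues as a weighted sum. The weights and intercept below are hypothetical, chosen only to mimic base-rate neglect; in the study such parameters are fitted per participant.

```python
def bayes_posterior(base_rate, hit_rate, false_alarm_rate):
    """Normative P(disease | positive test) by Bayes' rule."""
    p_positive = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / p_positive

def linear_additive(base_rate, hit_rate, false_alarm_rate,
                    w=(0.0, 0.6, -0.4), intercept=0.4):
    """Weighted additive cue integration; w[0] = 0 ignores the base rate."""
    est = (intercept + w[0] * base_rate
           + w[1] * hit_rate + w[2] * false_alarm_rate)
    return min(max(est, 0.0), 1.0)  # clip to a valid probability

# Classic "corner" problem: rare disease, good test.
normative = bayes_posterior(0.001, 0.95, 0.05)  # about 0.019
neglect = linear_additive(0.001, 0.95, 0.05)    # far too high, about 0.95
```

Outside the corner (e.g., with a moderate base rate) the two models make much more similar predictions, which is why a broader problem space is needed to separate them.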
In this study, we explore how people integrate risks of assets in a simulated financial market into a judgment of the conjunctive risk that all assets decrease in value, both when assets are independent and when there is a systematic risk present affecting all assets. Simulations indicate that while mental calculation according to naïve application of probability theory is best when the assets are independent, additive or exemplar-based algorithms perform better when systematic risk is high. Considering that people tend to intuitively approach compound probability tasks using additive heuristics, we expected the participants to find it easiest to master tasks with high systematic risk – the most complex tasks from the standpoint of probability theory – while they should shift to probability theory or exemplar memory with independence between the assets. The results from 3 experiments confirm that participants shift between strategies depending on the task, starting off with the default of additive integration. In contrast to results in similar multiple cue judgment tasks, there is little evidence for use of exemplar memory. The additive heuristics also appear to be surprisingly context-sensitive, with limited generalization across formally very similar tasks.
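The structure of the task can be sketched with a small Monte Carlo simulation (all parameters hypothetical): with independent assets the conjunctive risk is the product of the marginal risks, while a shared systematic-risk factor makes joint losses far more likely than the product implies.

```python
import random

def p_all_decrease(n_assets=3, p_down=0.5, p_systematic=0.0,
                   n_sim=100_000, seed=1):
    """Estimate P(all assets decrease) by simulation.
    With probability p_systematic a market-wide shock hits and every
    asset falls; otherwise assets fall independently with p_down."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        if rng.random() < p_systematic:
            hits += 1  # systematic shock: all assets fall together
        elif all(rng.random() < p_down for _ in range(n_assets)):
            hits += 1  # independent falls coincide
    return hits / n_sim

independent = p_all_decrease(p_systematic=0.0)  # near 0.5**3 = 0.125
systematic = p_all_decrease(p_systematic=0.3)   # much higher joint risk
```

Under independence the multiplicative rule is exact, but with high systematic risk the conjunctive probability is dominated by the shared factor, which is where additive approximations fare comparatively well.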
Because action plans must anticipate the states of the world which will be obtained when the actions take place, effective actions depend on predictions. The present experiments begin to explore the principles underlying early-developing predictions of object motion, by focusing on 6-month-old infants' head tracking and reaching for moving objects. Infants were presented with an object that moved into reaching space on four trajectories: two linear trajectories that intersected at the center of a display and two trajectories containing a sudden turn at the point of intersection. In two studies, infants' tracking and reaching provided evidence for an extrapolation of the object motion on linear paths, in accord with the principle of inertia. This tendency was remarkably resistant to counter-evidence, for it was observed even after repeated presentations of an object that violated the principle of inertia by spontaneously stopping and then moving in a new direction. In contrast to the present findings, infants fail to extrapolate linear object motion in preferential looking experiments, suggesting that early-developing knowledge of object motion, like mature knowledge, is embedded in multiple systems of representation.
Bayesian approaches presuppose that following the coherence conditions of probability theory makes probabilistic judgments more accurate. But other influential theories claim accurate judgments (with high "ecological rationality") do not need to be coherent. Empirical results support these latter theories, threatening Bayesian models of intelligence; and suggesting, moreover, that "heuristics and biases" research, which focuses on violations of coherence, is largely irrelevant. We carry out a higher-power experiment involving poker probability judgments (and a formally analogous urn task), with groups of poker novices, occasional poker players, and poker experts, finding a positive relationship between coherence and accuracy both between groups and across individuals. Both the positive relationship in our data, and past null results, are captured by a sample-based Bayesian approximation model, where a person's accuracy and coherence both increase with the number of samples drawn. Thus, we reconcile the theoretical link between accuracy and coherence with apparently negative empirical results.
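The sample-based account can be sketched as follows (parameters hypothetical): a judge estimates each queried probability from an independent set of mental samples, so with few samples the estimates are both noisier (less accurate) and more likely to violate the conjunction rule (less coherent).

```python
import random

def estimate(p_true, n_samples, rng):
    """Relative frequency of the event among n_samples mental samples."""
    return sum(rng.random() < p_true for _ in range(n_samples)) / n_samples

def judge(n_samples, p_a=0.6, p_ab=0.3, seed=0):
    """One simulated judge: independent samples per query, so the
    estimate of P(A and B) can incoherently exceed that of P(A)."""
    rng = random.Random(seed)
    est_a = estimate(p_a, n_samples, rng)
    est_ab = estimate(p_ab, n_samples, rng)
    incoherence = max(0.0, est_ab - est_a)  # conjunction-rule violation
    error = abs(est_ab - p_ab)              # distance from the truth
    return incoherence, error

def mean_stats(n_samples, reps=2000):
    inc = err = 0.0
    for s in range(reps):
        i, e = judge(n_samples, seed=s)
        inc += i
        err += e
    return inc / reps, err / reps

few, many = mean_stats(5), mean_stats(50)
# More samples: judgments become both more coherent and more accurate.
assert few[0] > many[0] and few[1] > many[1]
```

Because sample size drives both quantities, the model predicts a positive coherence-accuracy relationship across individuals (experts drawing more samples than novices) while allowing null results when sample sizes vary little.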