




The representational theory of measurement concerns the types of data that can be summarized in some numerical way.

Much general theory has been developed and exposited in Foundations of Measurement (vols. I–III, with Krantz, Suppes, and Tversky) and is still being actively explored. Although some of my efforts continue on general topics of measurement, I have mostly been applying some of these ideas to individual decision making, where the numerical measures are called utility and subjective probability (or weights), and to global psychophysics, which spans full dynamic ranges of intensity such as loudness. Recently, with colleagues, I have examined the kinds of behavioral laws that link riskless and risky utility and, in particular, have been developing theories that address the utility of gambling.

Accompanying the theoretical work is an empirical program of Michael Birnbaum's, in which these plausible behavioral properties are tested in computer-based laboratory experiments. In the auditory domain, Ragnar Steingrimsson and I have extensively and successfully evaluated the psychophysical model, including the forms of the psychophysical and weighting functions.

Currently we are working on the time-order error, and he is evaluating the general model in the brightness domain. Many tricky and interesting questions arise about how best to evaluate these properties.


Furthermore, the BAttM predicts direct, intuitive relations between the internal uncertainties of a decision maker and the absolute level of confidence that can be reached: larger uncertainties lead to smaller confidence.
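This inverse relation can be illustrated with a toy calculation that is not part of the BAttM itself: if the posterior over a scalar decision state is Gaussian, the confidence in one alternative is the posterior mass on that alternative's side of a criterion, and it shrinks toward chance as the posterior variance grows. All numbers below are illustrative.

```python
import math

def confidence(mu, sigma, threshold=0.0):
    # Posterior probability that the decision state favours alternative 1,
    # i.e. P(z > threshold) for a Gaussian posterior N(mu, sigma^2).
    return 0.5 * (1.0 - math.erf((threshold - mu) / (sigma * math.sqrt(2.0))))

# Same mean evidence, increasing internal uncertainty:
for sigma in (0.5, 1.0, 2.0):
    print(round(confidence(1.0, sigma), 3))  # prints 0.977, 0.841, 0.691
```

With the mean held fixed, larger uncertainty monotonically lowers the attainable confidence, which is the qualitative relation the model predicts.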

As these uncertainties simultaneously control choices, response times, and re-decision times, we propose to validate the consistency of these predicted relations in future experiments. We fitted the BAttM to the average behaviour reported in [ 54 ] and found that it explains decision making behaviour well (Fig 12B and 12C), even though we assumed a simplified representation of the stimulus. This was expected, because (1) a similar, abstract stimulus representation was sufficient to fit human behavioural data before [ 23 ], and (2) [ 54 ] originally used a similar computational representation to fit a drift-diffusion model to the data considered here.

For the BAttM, estimates of the reliability of the parameter fits indicate that the fitted parameter values are highly reliable for experimental conditions in which subjects exhibit intermediate accuracy, that is, for intermediate coherences. An optimal Bayesian decision maker should have a generative model in which, ideally, r would equal s; our observation that s exceeds r therefore suggests that subjects indeed perform suboptimal inference in the corresponding choice task. We expect that parameter estimates would become more reliable in the remaining experimental conditions if reaction time distributions, rather than only mean reaction times, were used for fitting [ 54 ].

In the original fits of behaviour in [ 54 ], the drift was constrained to be a linear function of coherence ([ 54 ], Supp.). In contrast, in our fits of the BAttM to the same data we allowed both sensory uncertainty r and noise level s to vary freely across coherences. Although this increased flexibility of the BAttM could, in principle, have led to overfitting, it is unlikely that this is the case for our results: the noise in the data is small compared to the effect of coherence, because the data are averages based on 15, trials ([ 54 ], Fig 1). The low variance of parameter estimates for intermediate coherences (Fig 12A) also indicates that our fitting method identified unique parameter values for these coherences.

It is currently unclear why the parameters for high coherences do not follow the previously assumed linear relation between drift and coherence. One possible explanation is that the urgency signal, which we did not model in the BAttM, has a larger effect for high coherences than for low ones. The estimated shape of the urgency signal is reported in [ 54 ], Supp. However, clearly further research is required to substantiate this potential mechanism.

The BAttM explains different behaviour in response to stimuli of different strengths using particular combinations of the input noise level s and the sensory uncertainty r (Table 2, Fig 12). It therefore appears that decision makers adapt their expectations about the stimulus (r) to the stimulus strength even before they experience the stimulus (we fixed r within trials).

In experiments in which trials with the same stimulus strength are blocked, or in which stimulus strength is cued before onset of the stimulus, this is plausible. In experiments in which stimulus strength changes randomly across trials, this assumption seems flawed. This consideration has led others to discuss whether the brain implements Bayesian models [ 72 ]. Here, we speculate that decision makers rapidly adapt their expectations in parallel with decision making as they sample observations from the stimulus.

Such adaptation is compatible with the timescale of short-term synaptic plasticity in the brain [ 73 ]. Also, it has previously been demonstrated that a sensory reliability akin to r can be inferred together with stimulus identity in a Bayesian model [ 25 ]. Even though we believe that decision makers adapt their stimulus expectations within a trial, the BAttM currently does not employ such a mechanism. Nevertheless, assuming a fixed r led to good fits of accuracy and mean RTs as recorded in [ 54 ] (cf. Fig 12). This is not very surprising: the behavioural data were originally fitted by a drift-diffusion model with constant drift throughout a trial [ 54 ].

Such constant drift implements the assumption that the average amount of evidence extracted from the stimulus at a given moment is constant throughout the trial. Therefore, the assumption of a constant drift throughout a trial is, in the BAttM, equivalent to maintaining stable expectations about the stimulus throughout the trial. As a result, keeping r fixed in the BAttM is a simplification that follows previous approaches based on drift-diffusion models and still allows the model to fit subjects' behaviour (accuracy and mean RTs) well (see Fig 12). As with the within-trial effects of top-down gain modulation, however, future work may aim at elucidating potential effects of within-trial variations in the expected sensory uncertainty r due to adaptation of stimulus expectations.
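The constant-drift assumption can be made concrete with a minimal drift-diffusion simulation; this is a generic sketch, not the model fitted in [ 54 ], and all parameter values are made up.

```python
import random

def ddm_trial(drift, noise=1.0, bound=1.0, dt=0.001, max_steps=10000, rng=None):
    """Simulate one drift-diffusion trial with the drift held constant
    throughout the trial; returns (choice, decision_time_in_steps)."""
    rng = rng or random.Random()
    x = 0.0
    for step in range(1, max_steps + 1):
        # Constant drift: the mean evidence per unit time does not
        # change within the trial (stable stimulus expectations).
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        if x >= bound:
            return 1, step   # correct bound reached
        if x <= -bound:
            return -1, step  # error bound reached
    return 0, max_steps      # timed out

choice, steps = ddm_trial(2.0, rng=random.Random(0))
```

Stronger drift (here standing in for higher coherence) yields faster and more accurate decisions, which is the behaviour the linear drift-coherence constraint in [ 54 ] encodes.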

In particular, experiments with longer re-decision trials and continuously changing stimulus reliability may induce strong adaptations of stimulus expectations that have measurable behavioural effects.

One of the strengths of the original pure attractor models is their link to possible neurobiological implementations in networks of spiking neurons (cf. Section: pattm). We have abstracted from this perspective and embedded a pure attractor model in a dynamic Bayesian inference framework. Consequently, the question arises how this apparently more complicated construct may map onto a neurobiological substrate. The BAttM is a probabilistic filter that recursively updates posterior beliefs by evaluating the likelihood of the state of a dynamic generative model given a stream of observations (cf. Section: Bayesinf).
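The recursion described here, predicting with the generative dynamics and then correcting by the likelihood of each new observation, is the generic probabilistic-filter cycle. The BAttM itself uses an unscented Kalman filter over nonlinear attractor dynamics; as a sketch, the same predict-update cycle for the scalar linear-Gaussian case looks as follows (all parameter values illustrative).

```python
def kalman_step(mu, P, y, a=0.9, q=0.1, h=1.0, r=1.0):
    """One predict-update cycle of a scalar linear-Gaussian filter:
    prediction by the dynamic model, correction by the likelihood of y."""
    # Predict: propagate the posterior through the (here: linear) dynamics.
    mu_pred = a * mu
    P_pred = a * P * a + q
    # Update: weight prediction against observation via the Kalman gain.
    S = h * P_pred * h + r      # innovation variance
    K = P_pred * h / S          # gain: large sensory uncertainty r -> small gain
    mu_new = mu_pred + K * (y - h * mu_pred)
    P_new = (1.0 - K * h) * P_pred
    return mu_new, P_new
```

Note how a large sensory uncertainty r shrinks the gain K, so each observation moves the posterior less; this is the handle by which the BAttM's uncertainty parameters shape behaviour.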

A wide range of proposals have been made for how probabilistic filters can be implemented by networks of neurons [ 47 , 74 — 81 ]. For example, [ 80 ] discusses how computations defined by predictive coding approaches, which derive from probabilistic filters cf. Section Bayesinf , can map onto canonical microcircuits in cortex. More abstractly, [ 47 , 77 , 79 ] show how networks of rate neurons may implement probabilistic filters and [ 74 — 76 , 78 , 81 ] provide implementations based on spiking neuron networks. Given these proposals, it seems reasonable to assume that the computations defined by the BAttM can be implemented by the brain.

We have presented a novel perceptual decision making model, the Bayesian attractor model, which combines attractor dynamics with a probabilistic formulation of decision making. The model captures important behavioural findings and makes novel predictions that can be tested in future experiments. In particular, we have highlighted a re-decision paradigm which can be used to investigate the tradeoff between flexibility and stability in perceptual decisions.

Furthermore, the BAttM predicts a particular within-trial modulation of sensory gain, which may explain recent experimental findings. Finally, the BAttM predicts experimentally testable links between choice, response times, and confidence. We used a Hopfield network as an example of a pure attractor model.

Hopfield networks were originally suggested as neurobiologically plausible firing-rate models of recurrently connected neurons [ 44 ]. This choice increases the range of values for which the sigmoid is approximately linear and increases the robustness of inference with the generative model. The network is driven by a constant input g and modulated by self-inhibition and lateral inhibition between the state variables z 1 and z 2. When modelling perceptual decisions, we follow [ 26 , 28 ] and initialise the attractor dynamics in a neutral state.
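As a minimal numerical sketch of such dynamics, consider two units with constant input g and mutual inhibition through a sigmoid; the parameter values below are invented for illustration and are not those used in the paper.

```python
import math

def sigmoid(x, beta=4.0):
    return 1.0 / (1.0 + math.exp(-beta * x))

def hopfield_step(z, g=1.0, k=3.0, dt=0.01):
    """Euler step of a two-unit winner-take-all attractor network:
    constant input g, leak, and lateral inhibition between z1 and z2."""
    z1, z2 = z
    dz1 = -z1 + g - k * sigmoid(z2)   # z2 inhibits z1
    dz2 = -z2 + g - k * sigmoid(z1)   # z1 inhibits z2
    return (z1 + dt * dz1, z2 + dt * dz2)

# From a slightly asymmetric start the state falls into one attractor:
z = (0.51, 0.49)
for _ in range(2000):
    z = hopfield_step(z)
# z has now settled near the fixed point favouring alternative 1
```

The symmetric (neutral) state is unstable, so any small asymmetry in the input is amplified until one unit wins: the winner-take-all behaviour that makes such networks usable as decision models.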

We set the covariance of the initial decision state to p0·I and call p0 the initial state uncertainty, a parameter of the model that controls the susceptibility of the decision state to incoming evidence at the beginning of a trial. In Fig 6 we plotted contour lines. These were approximated from the noisy data points underlying the grey-scale maps as follows.

We defined four values, one for each of the four contours of each map, as reported in the caption of Fig 6. For each value, we fitted a Gaussian process to the corresponding data points. In particular, the Gaussian process mapped the logarithm of the noise level, log s, onto the logarithm of the sensory uncertainty, log r, and used a standard squared-exponential covariance function with a Gaussian likelihood [ 82 ]. The contour lines in Fig 6 represent the mean predictions of the sensory uncertainty obtained from the fitted Gaussian processes for the corresponding noise levels.
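The smoothing step can be sketched with a hand-rolled squared-exponential GP instead of the toolbox of [ 82 ]; the (log s, log r) points below are synthetic stand-ins for the map data.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, amp=1.0, noise=0.1):
    """GP regression posterior mean with a squared-exponential covariance
    and a Gaussian likelihood (observation noise)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return amp * np.exp(-0.5 * (d / length) ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_test, X_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical noisy (log s, log r) pairs along one contour:
log_s = np.linspace(-1.0, 1.0, 20)
log_r = 0.5 * log_s + 0.1 * np.random.default_rng(0).normal(size=20)
smooth = gp_predict(log_s, log_r, np.array([0.0]))
```

The posterior mean averages out the simulation noise in the individual points, which is exactly what is needed to draw smooth contour lines through noisy map data.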

To fit the data from the experiment reported in [ 54 ], we defined a temporal scaling between our discrete model and the times recorded during the experiment. It was chosen as a tradeoff between sufficiently small discretisation steps and computational efficiency, and means that a modest number of time steps suffices to cover the full range of reaction times observed by [ 54 ]. The non-decision time captures delays that are thought to be independent of the time it takes to make a decision. These delays may be due to initial sensory processing, or to the time it takes to execute a motor action.
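The mapping described here can be written down directly; the step duration dt and non-decision time t_nd below are placeholders, not the fitted values.

```python
def steps_to_rt(n_steps, dt=0.01, t_nd=0.3):
    """Observed reaction time = non-decision time (sensory and motor
    delays) + number of discrete model steps times the step duration."""
    return t_nd + n_steps * dt

def rt_to_steps(rt, dt=0.01, t_nd=0.3):
    """Inverse mapping: how many model steps fit into a recorded RT."""
    return max(0, round((rt - t_nd) / dt))
```

This separation lets the fitted decision dynamics be compared directly against recorded reaction times.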

We used a form of stochastic optimisation based on a Markov chain Monte Carlo (MCMC) method to find parameter values that best explained the observed behaviour in the experiment, independently for each coherence level. This was necessary because we could not analytically predict accuracy and mean reaction times from the model and had to simulate from the model to estimate these quantities. In particular, we simulated 1, trials per estimate of accuracy and mean RT, as done to produce Fig 6.

We then defined an approximate Gaussian log-likelihood of the parameter set used for simulation, taking the estimated values as means (Eq 12), where A and RT are the accuracy and mean RT, respectively, measured in the experiment for one of the coherences, and the corresponding estimates come from the model. P(s, r) is a penalty function that returned very large values (greater than 10,) when more than half of the simulated trials timed out (cf. Fig 5A).

We then ran the MCMC method for 3, samples, discarded the initial samples as burn-in, and kept every 5th sample to reduce correlations within the Markov chain. The resulting set of parameter samples is a rough approximation of the posterior distribution over the parameters given the data. It is not statistically exact, because of the approximate likelihood, but it still indicates when parameter estimates become unreliable, as demonstrated in Fig 12. The parameter values reported in Table 2 are those of the sample that fitted the behaviour for a given coherence best, as determined by Eq 12. Note that, unlike [ 54 ], we did not a priori assume a particular relationship between coherence and the parameters of the BAttM during fitting.
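The fitting scheme can be sketched as an approximate Gaussian log-likelihood built from simulated summary statistics, sampled with random-walk Metropolis. Function names, tolerances, and the sampler settings below are illustrative; Eq 12 and the penalty P(s, r) are not reproduced exactly.

```python
import math
import random

def approx_loglik(A_obs, RT_obs, simulate, params, sd_A=0.05, sd_RT=0.05):
    """Approximate Gaussian log-likelihood in the spirit of Eq 12:
    score observed accuracy and mean RT against simulated estimates."""
    A_hat, RT_hat = simulate(params)
    return (-0.5 * ((A_obs - A_hat) / sd_A) ** 2
            - 0.5 * ((RT_obs - RT_hat) / sd_RT) ** 2)

def metropolis(loglik, x0, n_samples=3000, burn_in=500, thin=5,
               step=0.3, rng=None):
    """Random-walk Metropolis over a single parameter, with burn-in
    and thinning as described in the text."""
    rng = rng or random.Random(0)
    x, lx = x0, loglik(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, step)
        lp = loglik(prop)
        if math.log(rng.random()) < lp - lx:  # accept w.p. min(1, e^(lp - lx))
            x, lx = prop, lp
        samples.append(x)
    return samples[burn_in::thin]
```

With a toy quadratic log-likelihood peaked at some parameter value, the retained samples concentrate around that value; in the paper, the single best-scoring sample per coherence was then reported.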


In [ 54 ], coherence linearly scaled the drift in their drift-diffusion model, using a scaling parameter K that was common across coherences ([ 54 ], Supp.). In the BAttM, the fitted parameters, sensory uncertainty r and noise level s, determine how stimulus features are translated into momentary evidence. Since we did not want to assume, a priori, a specific relationship between the level of coherence and the parameters s and r, we chose to let the parameters vary independently of coherence during fitting.

However, we investigated whether an equivalent relation between r and coherence holds for the fitted values of r. As suggested by one reviewer, it may be useful to assume such a relation between r² and c as a constraint when fitting noisy data.

Performed the experiments: SB JB. Analyzed the data: SB JB. Data Availability: All relevant data are within the paper and its Supporting Information files.

Abstract. Even for simple perceptual decisions, the mechanisms that the brain employs are still under debate.

Author Summary. How do we decide whether a traffic light signals stop or go?

Introduction. Research in perceptual decision making investigates how people categorise observed stimuli.

Models. The BAttM consists of four major components: (i) an abstract model of the experimental stimuli used as input to the decision process of a decision maker, (ii) a generative model of the stimuli implementing the expectations of the decision maker, (iii) a Bayesian inference formalism, and (iv) a decision criterion; see also [ 23 ].

Pure attractor models. Attractor models of perceptual decision making were originally proposed as neurophysiologically plausible implementations of noisy decision making [ 26 ]. (Figure: Schematic comparing a pure attractor model of decision making with the Bayesian attractor model.)

Input model. Bayesian models infer the state of an unobserved variable (here, the identity of a stimulus) from realisations of an observed variable [ 24 , 45 - 47 ]. (Figure: Example stimulus of the single-dot task, with a switch of target location.)

Generative model with attractor dynamics. The generative model of the decision maker implements its expectations about the incoming observations.

Bayesian inference. By inverting the generative model using Bayesian inference, we can model perceptual inference. (Fig 3: Illustration of the inference scheme used for decision making in the BAttM.)

Decision criterion. The final component of the Bayesian attractor model is its decision criterion.


(Fig 4.) Speed-accuracy tradeoff in the BAttM. In the BAttM, the speed and accuracy of decisions are primarily controlled by the noise level of the sensory input s, the sensory uncertainty r, and the dynamics uncertainty q. (Fig 5: Example trajectories for the Bayesian attractor model on a binary decision task for varying sensory uncertainty r.) (Fig 6: Mapping from sensory uncertainty r and noise level s to behavioural measures.)

Re-decisions. As our environment is dynamic, a specific stimulus may suddenly and unexpectedly change its category. (Fig 7: Re-decision behaviour of the Bayesian attractor model for switching stimuli.)

Top-down gain modulation. There is growing evidence that higher-level cognitive processes modulate neural responses already in early sensory areas [ 36 - 38 , 55 - 58 ]. (Fig 8: Example of a decision making trial with the evolution of cross-covariance and gain for the parameters of point B in Fig 7.)

Confidence-based decision criterion. A graded feeling of confidence appears to be a fundamental aspect of human decision making. (Fig 9: Evolution of the decision state for the pure attractor model (left) and the Bayesian attractor model (right) for different input strengths or different uncertainty parameters, respectively.) (Figure: Example evolution of the posterior density of the decision state and the associated confidence values for one trial with a switch of stimulus (vertical dotted line).)


(Figure: Confidence in relation to stimulus strength as predicted by the BAttM for the experiment of [ 54 ].)

Fitting of a reaction time experiment. To establish the validity of the proposed model and show that it can be used to analyse data from decision making tasks, we fit behavioural macaque monkey data on the RDM two-alternative forced-choice task presented in [ 54 ]. (Figure: Model fit to the experimental data presented in [ 54 ].) (Table 2: Fitted parameter values (best-fitting sample) for each coherence.)

Discussion. We have embedded an attractor model into a Bayesian framework, resulting in a novel Bayesian attractor model (BAttM) for perceptual decision making.

Re-decisions.

Benefits of a probabilistic formulation. As stated above, although there may be differences in detail, pure attractor models can, in principle, explain re-decisions as well.

Uncertainty and top-down modulation. In the BAttM, there are two different ways in which top-down gain modulation of sensory processing emerges.


(Figure: Network diagram for the two-alternative Hopfield network (cf. Eqs 9, 10) with interpolated output, used as the generative model.)

References 1. Annual Review of Neuroscience — Journal of Neuroscience — Newsome WT, Pare EB A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience 8: — Pilly PK, Seitz AR What a difference a parameter makes: a psychophysical comparison of random dot motion algorithms.

Vision Res — John ID A statistical decision theory of simple reaction time. Australian Journal of Psychology 27— 6. Number 8 in Oxford Psychology Series. Oxford University Press. Ratcliff R, McKoon G The diffusion decision model: theory and data for two-choice decision tasks.

Neural Comput — Trends Neurosci — Roitman JD, Shadlen MN Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. Neuron — Nature — Nat Rev Neurosci 9: — Curr Biol — Front Hum Neurosci 5: J Neurosci — Psychol Rev — Ratcliff R A theory of memory retrieval. Psychol Rev 59— Bayesian brain: Probabilistic approaches to neural coding: — Frontiers in Human Neuroscience 8. Cogn Affect Behav Neurosci 8: — Front Neurosci 6: Wang XJ Probabilistic decision making by slow reverberation in cortical circuits. Roxin A, Ledberg A Neurobiological models of two-choice decision making can be reduced to a one-dimensional nonlinear diffusion equation.

PLoS Comput Biol 4: e Albantakis L, Deco G Changes of mind in an attractor network of decision-making. PLoS Comput Biol 7: e Miller P, Katz DB Accuracy and response-time distributions for decision-making: linear perfect integrators versus nonlinear attractor-based neural circuits. J Comput Neurosci — Wang XJ Decision making in recurrent neuronal circuits. Nature —U Annu Rev Neurosci — Nature 89— Academic Press series in cognition and perception.

Academic Press. Kepecs A, Mainen ZF A computational framework for the study of confidence in humans and animals. Hopfield JJ Neurons with graded response have collective computational properties like those of 2-state neurons. Cambridge University Press. MIT Press. Neural Comput 9: — In: Haykin [48]. In: Proc. Murphy KP Machine learning: a probabilistic perspective. Adaptive computation and machine learning series. Nat Neurosci 6: — Nature Neuroscience — Nat Rev Neurosci 2: — Nat Rev Neurosci — Summerfield C, de Lange FP Expectation in perceptual decision making: neural and computational mechanisms.

Ann N Y Acad Sci 88— Neuroimage — Nat Neurosci — Friston K The free-energy principle: a unified brain theory? Rao RP, Ballard DH Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci 2: 79— Dayan P Twenty-five lessons from computational neuromodulation. Front Hum Neurosci 4: Lund FH The criteria of confidence. The American Journal of Psychology pp.


Kiani R, Shadlen MN Representation of confidence associated with a decision by neurons in the parietal cortex. Science — PLoS One 9: e Ding L, Gold JI Neural correlates of perceptual decision making before, during, and after decision commitment in monkey frontal eye field. Cereb Cortex — Neuron 30— Annu Rev Physiol — Neural Computation — Wilson R, Finkel L A neural implementation of the kalman filter. Bitzer S, Kiebel S Recognizing recurrent neural networks rrnn : Bayesian inference for recurrent neural networks.

Biological Cybernetics — Legenstein R, Maass W Ensembles of spiking neurons with noise support optimal probabilistic inference in a dynamically changing environment. PLoS Comput Biol e