Cognitive Dynamics Lab
What is this about?

This website is a companion app for a journal article in preparation. We are researching how humans decide where to place their attention, depending on context variables (e.g., how quickly demands change and how rewarding it is to adapt behavior to the environment) and intraindividual variables (e.g., processing speed). This Welcome page features a discussion of the general framework and theoretical background. The website also includes a Computation/Simulation tool for you to test the effect of different variables, and a presentation of our findings.

Welcome


Decision-Making about our Attention

We are interested in how our cognitive system controls where to focus attention. The answer to this question depends on the specific context, but our general approach to this question is through the lens of rational decision-making.

For this project, I am highlighting two contexts in which humans decide on an attentional checking-for-information policy. Our research has found that they weigh the costs and benefits of possible strategies to make this attentional decision.


Two Contexts of Attentional Decision-Making

Choose a context below to explore its theoretical background:

EXAMPLE SCENARIO: You're driving a car with the assistance of a GPS. On a familiar route, the GPS offers little information (~ the environment does not demand frequent updating of information; you can navigate from memory alone). Attending to the GPS will also be rather risky if encoding the information takes rather long (~ time cost). This cost may be mitigated if you're moving rather slowly and need to make few turns (~ long primary task duration). As another factor, if you're on your way to an important meeting, you might rely on the GPS more (~ benefits of correct, and costs of incorrect, performance).


Characteristics of the context of interest:

To more generally characterize the context that we present research on:

  1. The primary task must be ambiguous and might benefit at least to some degree from additional information.
  2. Completing a primary task takes time, and obtaining additional information costs additional time.
  3. Performing the primary task correctly leads to benefits, performing incorrectly leads to costs.
  4. Time for completing the task is limited.

Decision Variables to consider:

  • Payoff for correct responses vs. Losses for incorrect responses: High payoffs for correct responses and high losses for incorrect responses will bias optimal behavior to more checking.
  • Uncertainty: Higher switch rates, or in other words, greater uncertainty about the rule that leads to correct task performance, will bias optimal behavior to more checking.
  • Information retrieval time: Lower time costs for obtaining additional information will bias optimal behavior to more checking.
  • Your response speed: Slower primary task response rates will bias optimal behavior to more checking. This may appear counter-intuitive, but the intuition is that if tasks take longer while the information retrieval time remains constant (lower relative checking cost), it is more adaptive to check.
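The response-speed intuition above can be illustrated with a toy calculation. This is a minimal sketch with assumed numbers (not the paper's parameters): a fixed-duration check inflates total trial time much less when the primary task itself is slow.

```python
# Toy illustration (assumed numbers): the relative cost of a 0.5 s check
# shrinks as the primary task slows down, making checking more attractive.
check_time = 0.5  # seconds spent retrieving additional information

for trial_time in (1.0, 4.0):  # fast vs. slow primary task
    relative_cost = (trial_time + check_time) / trial_time
    print(f"trial {trial_time:.1f}s -> relative time cost {relative_cost:.2f}x")
```

The same 0.5 s check inflates a 1 s trial by 50 % but a 4 s trial by only 12.5 %, so at constant retrieval time, slower task completion lowers the relative checking cost.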

EXAMPLE SCENARIO: You're sitting in a classroom and your main task is to pay attention to the instructor. However, you know that your classmates are talking about something that is relevant to you. You don't need to monitor their conversation for extended periods of time; you just want to briefly check what they are talking about. They are not always talking about something relevant: sometimes you will check and it comes at no benefit, sometimes you check and it's useful information, but most importantly, if you miss something, this can come at smaller or greater consequences or costs ('you're missing out').


Characteristics of the context of interest:

  1. The primary task is completely independent of the state.
  2. Completing a primary task takes time, and obtaining additional information costs additional time
  3. Performing the primary task correctly leads to benefits, and missing out on additional information (i.e., the active state) represents a cost.
  4. Time for completing the task is limited.

Decision Variables to consider:

  • Payoff for correct responses vs. Losses for incorrect responses: High payoffs for correct responses and high losses for missing an active state will bias optimal behavior to more checking.
  • Information retrieval time: Lower time costs for obtaining information about the state will bias optimal behavior to more checking.
  • Your response speed: Slower primary task response rates will bias optimal behavior to more state checking. This may appear counter-intuitive, but the intuition is that if tasks take longer while the information retrieval time remains constant (lower relative checking cost), it is more adaptive to check.

Computational Model and Simulations

We have developed a computational model and a Monte-Carlo simulation that allow us to calculate the payoff for different attentional strategies. From a rational decision-making perspective, humans should select the attentional strategy that yields the maximum payoff. However, as you can see in the Computation/Simulation tool, many contexts yield an optimality curve that is fairly broad, i.e., a function where a relatively wide range of checking strategies comes relatively close to the optimum.


This website

The goal of this website is to help readers of the paper explore how different settings of the value context change the decision landscape.

You can read more about our work on our lab website, and the pre-registrations for experiment 1, experiment 2, and experiment 3 on OSF. You can find posters that I presented on this work under my ResearchGate profile.

Funding

Funding for this work comes from NSF grant 2120712.


Scenario Context

In this scenario, participants perform a task with an ambiguous task stimulus. The task rule that allows for correct (and rewarded) responses can change from trial to trial with a specified switch probability. To find out the correct task rule, task rule cues are present on the screen. In the most typical scenario, checking the task rule cues reduces uncertainty about the current task, but also comes at a time cost. Here, you can insert various parameters to identify the optimal strategy.

How to use this tool:

1. Tweak the 'Model Parameters' in the sidebar.

2. Click the 'Compute' and/or 'Simulate' button below the parameters.

3. The resulting graph will show you the optimal checking rate (the peak of the curve).

[Interactive plots: Theoretical (Computed) and Stochastic (Simulated); Simulated vs. Computed Rewards]
Model Results

This page is designed to compare a Monte-Carlo simulation with our computational model. The key idea of the model is that the relative payoffs for different run lengths of trials are compared (e.g., a run length of 1 means checking on every trial, or 100 %; 2 means 50 %; 3 means 33 %; and so forth). The simulation and computational model agree very nicely in their predictions.

Simulation

The simulation that can be plotted here generates task sequences, then simulates an agent performing with a stochastic check rate, assigns corresponding RTs to simulated trials with and without cue checks, assigns gains and losses for simulated correct and incorrect trials, and then summarizes the iteration in the payoff.
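A minimal sketch of that simulation loop, under assumed parameter values and variable names (this is not the paper's implementation):

```python
import random

def simulate_block(p_switch=0.15, check_rate=0.3, gain=1.0, loss=-1.0,
                   rt=0.8, check_time=0.6, iti=0.2, block_dur=150.0,
                   rng=None):
    """One block: the agent checks the task cue stochastically (sketch)."""
    rng = rng or random.Random(0)
    task, known_task = 0, 0          # hidden rule vs. the rule the agent believes
    t, payoff = 0.0, 0.0
    while t < block_dur:
        if rng.random() < p_switch:          # the task rule may switch each trial
            task = 1 - task
        checked = rng.random() < check_rate  # stochastic check decision
        if checked:
            known_task = task                # a check reveals the current rule
        payoff += gain if known_task == task else loss
        t += rt + iti + (check_time if checked else 0.0)  # time cost of checking
    return payoff

# Sweep check rates; averaging over seeds smooths the stochastic payoff.
rates = [i / 10 for i in range(1, 10)]
mean_payoff = {r: sum(simulate_block(check_rate=r, rng=random.Random(s))
                      for s in range(100)) / 100 for r in rates}
best = max(mean_payoff, key=mean_payoff.get)
print("best check rate in this sweep:", best)
```

The sweep over stochastic check rates mirrors how the tool traces out the optimality curve empirically.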

Computational Model

The model calculates the payoff for all attentional strategies such that one can visualize the optimality curve. It takes into account the probability of a task switch, p, the gains g per correct trial and losses l for incorrect trials. To compute the relative time cost, individual RTs for trials with and without cue checks are needed. To interpolate over the entire block duration, the duration of the inter-trial-interval and the total block duration are needed.

First, we calculate the probability that the task remains the same at trial \(n\):

$$p_{same}(n) = \frac{1}{2} [1 + (1 - 2p)^n]$$
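As a sanity check (a minimal sketch, not the paper's code), this closed form agrees with the trial-by-trial recursion, in which the task is still the same after trial \(n\) if it was the same and did not switch, or had switched and switched back:

```python
# Verify p_same(n) = 0.5 * (1 + (1 - 2p)**n) against the recursion
# p_same(n) = (1 - p) * p_same(n-1) + p * (1 - p_same(n-1)).
p = 0.25      # switch probability (illustrative value)
p_same = 1.0  # at n = 0 the task is the same by definition

for n in range(1, 11):
    p_same = (1 - p) * p_same + p * (1 - p_same)
    closed_form = 0.5 * (1 + (1 - 2 * p) ** n)
    assert abs(p_same - closed_form) < 1e-12
print("closed form matches recursion")
```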

The expected payoff for a single trial \(n\) without checking:

$$PONC_n = p_{same}(n) g + (1 - p_{same}(n)) l$$

Where \(g\) is gain and \(l\) is loss.


The average payoff over a run of length \(r\) without checking:

$$APONC_r = \frac{1}{r} \sum_{n=1}^{r} PONC_n$$

If a check occurs on the last trial (guaranteeing a win), the average payoff is:

$$APOC_r = \frac{1}{r} (\sum_{n=1}^{r-1} PONC_n + g)$$

We calculate a time-cost multiplier based on RT, ITI, and check duration:

$$CC_r = \frac{RT + ITI + \frac{CT+D}{r}}{RT + ITI}$$

The average payoff of checking is adjusted by this time cost:

$$APOCTA_r = \frac{APOC_r}{CC_r}$$

Finally, we compare the relative payoff of NOT checking vs. checking:

$$PNCC_r = APONC_r - APOCTA_r$$

Decision Policy:

If \(PNCC_r < 0\), the optimal strategy is to check (Cue Fixation).


Scenario Context

In this scenario, participants perform a simple task. Additionally, a state can be on or off. When an 'on-state' is not checked and missed entirely, this leads to a loss (as specified in the parameters). You can manipulate the probabilities that the state turns on and off (increasing both will lead to relatively short states). Because checking on the state comes at a time cost, the payoff depends on the frequency of checking. You can explore here the optimality of different strategies.

Pilot Mode: Model Under Development

Please note that this specific computational model is currently in a pilot phase. While functional, the underlying parameters and simulation logic are still being refined. Use these preliminary results for exploration purposes only. You can check on the newest developments of the more comprehensive models here.

How to use this tool:

1. Tweak the 'Model Parameters' in the sidebar.

2. Click the 'Compute' and/or 'Simulate' button below the parameters.

3. The resulting graph will show you the optimal checking rate (the peak of the curve).

[Interactive plots: Theoretical (Computed) and Stochastic (Simulated); Simulated vs. Computed Rewards]
Model Results

This page is designed to compare a Monte-Carlo simulation with our computational model. The key idea of the model is that the relative payoffs for different run lengths of trials are compared (e.g., a run length of 1 means checking on every trial, or 100 %; 2 means 50 %; 3 means 33 %; and so forth).

Simulation

The simulation that can be generated here produces task and state sequences, then simulates an agent performing with a stochastic check rate, assigns corresponding RTs to simulated trials with and without checks, assigns gains and losses for simulated trials, and then summarizes the iteration in the payoff.
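A minimal sketch of that loop for the state-monitoring scenario (names and parameter values are illustrative assumptions, not the paper's code):

```python
import random

def simulate_state_block(p_go_on=0.1, p_go_off=0.2, check_rate=0.3,
                         gain=1.0, loss=-2.0, rt=0.8, check_time=0.6,
                         iti=0.2, block_dur=120.0, rng=None):
    """State flips on/off as a Markov chain; an 'on' state that ends
    without ever having been checked counts as a missed state (loss)."""
    rng = rng or random.Random(0)
    state_on, missed = False, False
    t, payoff = 0.0, 0.0
    while t < block_dur:
        payoff += gain                    # primary task is state-independent
        checked = rng.random() < check_rate
        if checked and state_on:
            missed = False                # the active state was caught in time
        # state transition at the end of the trial
        if state_on and rng.random() < p_go_off:
            if missed:
                payoff += loss            # state came and went unchecked
            state_on = False
        elif not state_on and rng.random() < p_go_on:
            state_on, missed = True, True
        t += rt + iti + (check_time if checked else 0.0)
    return payoff
```

Raising both transition probabilities shortens the states, which makes a given check rate more likely to miss an 'on' episode entirely.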

Computational Model

The model calculates the payoff for all attentional strategies such that one can visualize the optimality curve. It takes into account the probability of a state change, p, the gain g per correct trial, and the loss l for missed states. To compute the relative time cost, individual RTs for trials with and without checks are needed. To interpolate over the entire block duration, the duration of the inter-trial interval and the total block duration are needed. The probability of a state change is here based on the steady state of a Markov chain with known probabilities (here, P(Go On) and P(Go Off)). In reality, the probability of a given state and accurate reward calculations depend on the frequency of checking and the trial number within a block, but for simplicity, I am using the steady state here. I am currently working on a more accurate model that calculates reward based on these additional factors.

First, we calculate the steady-state probability that the state is off:

$$p_{\text{steady state - off}} = \frac{p_{\text{go off}}}{p_{\text{go off}} + p_{\text{go on}}}$$

Therefore, $$p_{\text{steady state - on}} = 1 - p_{\text{steady state - off}}$$

Because the states are instantaneous, a Loss will happen with the probability that a state is on and turns off in the next trial:

$$p_{\text{Loss}} = p_{\text{steady state - on}} \cdot p_{\text{go off}}$$

The expected payoff for a single trial \(n\) without checking:

$$PONC_n = r \cdot g + r \cdot p_{\text{Loss}} \cdot l$$

Where \(g\) is gain and \(l\) is loss.


The average payoff over a run of length \(r\) without checking:

$$APONC_r = \frac{1}{r} \sum_{n=1}^{r} PONC_n$$

If a check occurs on the last trial (guaranteeing a win), the average payoff is:

$$APOC_r = \frac{1}{r} \left( \sum_{n=1}^{r-1} PONC_n + g + p_{\text{Loss}} \cdot l \right)$$

We calculate a time-cost multiplier based on RT, ITI, and check duration:

$$CC_r = \frac{RT + ITI + \frac{CT+D}{r}}{RT + ITI}$$

The average payoff of checking is adjusted by this time cost:

$$APOCTA_r = \frac{APOC_r}{CC_r}$$

Finally, we compare the relative payoff of NOT checking vs. checking:

$$PNCC_r = APONC_r - APOCTA_r$$

Decision Policy:

If \(PNCC_r < 0\), the optimal strategy is to check (Cue Fixation).

Project Overview

In Experiment 1, we manipulated four variables:

  • Task completion rate (perceptual difficulty was low vs. high)
    • Rate was manipulated by making the primary task easier or more difficult to encode. The response-relevant information needed to be decoded from a random dot motion task. In the slow condition, dot coherence built up over the course of one second, significantly delaying the availability of task-relevant information. The normative model predicts higher check rates when task response rates are slow, because the relative time cost of a check is smaller when the primary task takes longer to complete.
  • Task rule switch probability (10 % vs. 25 %)
  • Cue onset delay (task cue shows up instantly or with 1 s delay)
  • Placeholders (cues were present on the screen, or absent)
    • To manipulate bottom-up salience of cues, we implemented two conditions that determined whether cues were present on the screen by default, or absent. We expected that participants would check the task cues more frequently when they were present, even though the model predicted no or only a small difference in optimal check rates.

Participants had 150 seconds to complete a block of trials.


Findings

The normative model makes individual predictions for each person in each condition, given individual RT parameters. These predictions are displayed in the upper graph, here colored by the level of the rate manipulation. As can be seen, when RTs are relatively slow, the optima are shifted to the right. The violin plots indicate the distribution of optimal check rates.

The lower figure plots the empirically measured check rates (y axis) against the optimal check rates predicted by the model. Nearly all subjects have positive slopes, though there is quite some variance in slope intercepts.

Main Effects

As predicted, participants checked task cues more frequently when (a) the task completion rate was slow, (b) the task switch probability was high, (c) task cue delays were short, and (d) placeholders were present on the screen.

Interestingly, when placeholders were present, this led to a small but significant relative cost benefit (due to the easier findability of task cues). However, the main effect of this manipulation was drastically larger on observed checking rates than on the optima.

Participants

We recruited 46 subjects (43 were included in the analysis) from the human subjects pool at the University of Oregon. Participants were compensated with course credit and paid according to rewards accrued in the study.

Visualizations
Project Overview

In Experiment 2, we manipulated three variables:

  • Task completion rate (response difficulty was low vs. high)
    • Rate was manipulated by making the primary task easier or more difficult to respond to. While the critical stimulus display remained the same, the response had to be made by dragging a cursor from the center of the screen to one of four rectangles that corresponded to the choice but varied in size and distance. In the difficult (and slow) condition, these 'response squares' were small and relatively far from the center of the screen, whereas in the easy (and fast) condition, they were close to the respective option and big in size.
  • Task rule switch probability (5 vs. 15 %)
  • Placeholders (cues were present on the screen, or absent)
    • We again manipulated bottom-up salience of cues and implemented two conditions that determined whether cues were present on the screen by default, or absent. We expected that participants would check the task cues more frequently when they were present, even though the model predicted no or only a small difference in optimal check rates.

Participants had 120 seconds to complete a block of trials.


Findings

The first figure again shows the prediction for the levels of the response difficulty/rate manipulation.

The lower figure plots the empirically measured check rates (y axis) against the optimal check rates predicted by the model. Nearly all subjects have positive slopes, though there is quite some variance in slope intercepts.

Main Effects

Again, as predicted, participants checked task cues more frequently when (a) the task completion rate was slow, (b) the task switch probability was high, and (c) placeholders were present on the screen.

Similar to Experiment 1, when placeholders were present, this led to a small but significant relative cost benefit (due to the easier findability of task cues). However, the main effect of this manipulation was drastically larger on observed checking rates than on the optima.

Participants

We recruited 41 subjects from the human subjects pool at the University of Oregon. Participants were compensated with course credit and paid according to rewards accrued in the study.

Visualizations
Project Overview

In Experiment 3, we manipulated two variables:

  • Task completion rate (inter-trial-interval was short (0.1 s) vs. long (1.6 s))
    • Rate was manipulated by varying the inter-trial interval duration. Longer ITIs lower the relative checking costs because fewer trials can be completed per unit of time, reducing the opportunity cost incurred during cue checks.
  • Placeholders (cues were present on the screen, or absent)
    • Given the consistent placeholder salience effect in Experiments 1 and 2 on both the optimal check rate and the outcome, we decided to manipulate salience on three levels. As a new level, we introduced a sudden-onset placeholder that flickered at the beginning of the trial. Given the results of the prior experiments, we were interested in whether this additional level is associated with more checking and a reduced relative cost. Finding increased check rates and lower relative costs could have implications for how salience-related increases of 'capture' are interpreted.

Participants had 120 seconds to complete a block of trials.


Findings

The first figure again shows the predictions for the levels of the rate manipulation.

The lower figure plots the empirically measured check rates (y axis) against the optimal check rates predicted by the model. Nearly all subjects have positive slopes, though there is quite some variance in slope intercepts.

Main Effects

Again, as predicted, participants checked task cues more frequently when the task completion rate was slow, and as placeholder salience increased.

As placeholder salience increased, we again replicated the effect of salience from absent to present task cues on both reduced relative costs and increased checking rates. However, while sudden onsets resulted in more checking of task cues, this was not accompanied by a further reduction in relative checking costs.

Participants

We recruited 44 subjects (41 subjects were included in the final sample) from the human subjects pool at the University of Oregon. Participants were compensated with course credit and paid according to rewards accrued in the study.

Visualizations

People

Dominik Graetz

Doctoral Candidate

Dominik is a researcher specializing in modeling attentional decision-making. He developed the simulation framework powering this application.

His current work focuses on empirical research on top-down controlled, bottom-up-driven, and context-dependent human attention.


Ulrich Mayr

Principal Investigator

Ulrich Mayr is a Professor of Psychology. His research focuses on the architecture and developmental trajectory of executive control processes, as well as the intersection of attention and decision-making.

He leads the Cognitive Dynamics Lab in exploring how humans navigate complex task environments.