People make decisions that have consequences. Those decisions vary in complexity and in the amount of time available to address them. Complex problems can be solved given enough time, but when time is limited, the quality of decision making quickly degrades. Cosimmetry uses Cognitive Digital Integration (CDI) to bring the human and the technical together in harmony, helping people make better decisions faster in those complex contexts. It is worth looking more closely at the cognitive part of CDI: How do we make sense of the human in the equation? How do they best fit with the technical tools? Ultimately, how do they and their tools become more than the sum of their parts?

The Art of the Abductive

Imagine you are responsible for defending a large corporate IT network under cyber attack, or for managing a rapidly changing situation like public health during a pandemic. These problems are complex, and time is critical - the longer you spend deciding, the bigger the problem gets. But a quick decision isn’t always the best one.

Uncertainty is a natural part of any system that involves human behaviour. The cognitive psychological approaches of the nineties and aughts aimed to model human cognition as if it were computation: a series of inputs produces an output that is mappable back to those inputs. Nice and tidy, right? These computational models work reasonably well for autonomic responses (reflex or knee-jerk behaviours), where there is no opportunity for deliberative thought. You don’t think about whether to move your knee; your body just does it. However, the models fail to capture the complexity of the cognitive process once conscious decision making is factored in, i.e. when the somatic system is engaged.

Here’s an example: you’re thirsty, and you feel like a treat. You go to the milkshake parlour. You look at the menu. There are three choices (it’s 1954 - we don’t have matcha flavoured milkshakes yet). You base your decision on your previous preferences, built from experience. Let’s call it your “milkshake schema”. You have an existing preference for strawberry and you order accordingly. But how did you develop that schema? How do you know you prefer strawberry over chocolate unless you sometimes order chocolate? When do you choose to experiment with new flavours? How do you factor in others’ recommendations? How does your mood affect your choice? Maybe you’ve never tasted chocolate, but your arch enemy likes chocolate, and there’s no way you will associate yourself with them by ordering a chocolate milkshake. All of these issues could affect your final decision.
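
One way to make that explore/exploit tension concrete is an epsilon-greedy preference model borrowed from reinforcement learning. This is our own illustrative framing, not something the original schema idea prescribes; the flavours, scores, and function names below are all invented. A minimal Python sketch:

```python
import random

# Illustrative "milkshake schema": estimated enjoyment of each flavour,
# learned from experience (flavours and numbers invented for this post).
schema = {"strawberry": 0.8, "chocolate": 0.5, "vanilla": 0.6}

def choose_flavour(schema, epsilon=0.1):
    """Mostly exploit the best-known flavour, but occasionally explore
    so the schema can be revised (epsilon-greedy selection)."""
    if random.random() < epsilon:
        return random.choice(list(schema))   # explore: try anything
    return max(schema, key=schema.get)       # exploit: known favourite

def update_schema(schema, flavour, enjoyment, rate=0.2):
    """Nudge the stored preference towards the latest experience."""
    schema[flavour] += rate * (enjoyment - schema[flavour])

flavour = choose_flavour(schema)
update_schema(schema, flavour, enjoyment=0.9)  # today's shake was great
```

Even this richer model only captures experimentation. It still has no slot for mood, recommendations, or arch enemies, which is precisely the point.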

Let’s add in a new variable: creativity. What possesses someone to put espresso, matcha, or even wasabi into that milkshake for the first time? This is where we encounter the limitations of input-output models. For instance, Nudge Theory lacks validity[1] because it assumes human decision making is linear. These models fail to factor in individual proclivities, differences, creativity, or variance over time. If we want our models to be useful in load-bearing situations, we have to find a way of incorporating uncertainty.

Accounting for Uncertainty

Finding the right balance isn’t easy. I’ve talked about what happens if you don’t have enough uncertainty built into your model. If you include too much uncertainty, however, your model becomes chaotic. Everything and anything is possible: results are true, false, and neither all at once. Fine for quantum physics perhaps, but less helpful for transport planners trying to anticipate the flow of holidaymakers and hauliers through a major port in the event of train strikes or bad weather delays. Here’s a parallel: when assessing a dream, the psychotherapist asks the client what they see in their inherently fantastical narrative. The aim is to reveal the motivations underlying the client’s behaviour based on what their unconscious mind prompts them to see in their sleep. However, there is no way to validate the client’s commentary: we can neither prove that a psychological condition is related to their dreams, nor disprove it. Models of decision making must feature a degree of uncertainty, but once they stop being verifiable, they can do more harm than good.
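
To see why excessive uncertainty is as useless as none, picture that port planner’s forecast as a simple prediction interval. The numbers below are invented purely for illustration; a minimal sketch:

```python
def forecast_interval(mean, sigma, z=1.96):
    """Approximate 95% interval for tomorrow's lorry arrivals."""
    return (mean - z * sigma, mean + z * sigma)

# Calibrated uncertainty: the interval still supports a staffing decision.
print(forecast_interval(mean=1200, sigma=80))   # roughly (1043, 1357)

# Excessive uncertainty: the interval rules nothing out. It even includes
# negative lorry counts, so no plan can be built on it.
print(forecast_interval(mean=1200, sigma=900))  # roughly (-564, 2964)
```

The first forecast can be checked against reality and acted upon; the second is consistent with almost any outcome, which is exactly the unverifiability problem.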

Prioritising Immediacy and Integration

The volume and timeliness of data collection are increasing: a Splunk survey found that nearly two thirds of businesses said they cannot keep up with the growth in the volume of their data. As organisations address this change, they are embracing new and sophisticated data analysis tools that do allow for uncertainty.

There are two new challenges here: immediacy and integration.

Let’s take immediacy first. When technical specialists factor in that uncertainty using machine learning, they develop ‘black boxes’. These may have predictive power, but they are rarely understood by the decision makers they are intended to serve. Specifically, decision makers struggle to recognise when the black box is failing; because they don’t understand its technical strengths and weaknesses, they do not feel empowered to critique it. This leads to a trust gap.

To compensate, we place an expert (or a team of them) between the tool and the decision maker. These expert analysts bring huge potential value to decision making, but they also introduce new complexities: the costs of building, developing and maintaining their tools, and all the human frailties of adding new people into the mix, particularly if the scenario is existential for the organisation. Imagine a car chase where the turn-by-turn decisions are taken by the passengers (plural) and not the driver (also, potentially, plural). How much control does that driver have over the vehicle any more?

New tools can force experts into providing data in a form that fits the decision makers’ existing understanding of the world. However, to have a transformational impact on the speed and quality of decision making, a completely new way of thinking may be required. The tool needs to be fully integrated, not retro-fitted. For instance, leaders of educational institutions are currently trying to work out how to respond to the rise of large language models such as ChatGPT. Students can now create a uniquely worded essay in seconds without having to demonstrate mastery of the subject. Under these new conditions, assessing a student’s learning by means of coursework-based essays will inevitably lose validity. University staff are being tasked with generating essays with ChatGPT and submitting them to plagiarism software, which works by identifying verbatim strings of text, in the hope of training the software to spot ChatGPT-produced essays in future. This is futile, because large language models don’t plagiarise in this way: they generate novel wording rather than copying strings from a source.
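
To illustrate why, here is a back-of-the-envelope sketch of how verbatim string matching scores overlap. The texts and the scoring function are deliberately simplified inventions of ours (real detectors are more sophisticated), but the principle holds:

```python
def ngrams(text, n=5):
    """All runs of n consecutive words in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(submission, source, n=5):
    """Fraction of the submission's word 5-grams copied verbatim from
    the source: roughly the signal string-matching detectors rely on."""
    sub, src = ngrams(submission, n), ngrams(source, n)
    return len(sub & src) / max(len(sub), 1)

source = ("the industrial revolution transformed british society by "
          "drawing workers from farms into rapidly growing cities")
copied = ("as historians note the industrial revolution transformed "
          "british society by drawing workers from farms into cities")
rephrased = ("britain urbanised quickly during industrialisation as "
             "agricultural labourers migrated to expanding towns")

print(verbatim_overlap(copied, source))     # high: flagged as plagiarism
print(verbatim_overlap(rephrased, source))  # 0.0: same idea, no match
```

A detector tuned to fire on the first score will never fire on the second, however similar the underlying ideas, and model-generated prose looks like the second.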

Moving Forward with CDI

So how do we make the most of the new digital tools at our disposal? Complex, time-pressured decision making must lean into understanding the cognitive alongside the digital in order to integrate the two. The CDI approach that we use at Cosimmetry focuses on mapping decision-making activity and output parametrically: taking inputs and outputs from a range of sources and pulling together a picture of how each part interacts with everything else, so that the decision maker can see the overall picture. Doing this well gives the right primary decision maker access to the right digital tools at the right time, and thereby genuinely augments decision-making effectiveness. If you want to dig deeper into some of the technical thinking, our CTO, Chris Major, writes in his blog about writing efficient code that’s intuitive to use, creating higher levels of immediacy and integration.
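
To give a flavour of what “mapping parametrically” might look like, here is a toy structure we have constructed for this post. It is emphatically not Cosimmetry’s actual CDI tooling; every class, field, and example value below is invented. The idea it illustrates is that each signal feeding a decision stays an explicit, inspectable parameter rather than disappearing into a black box:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    source: str        # where the input comes from (invented examples below)
    value: float       # current reading, normalised to 0..1
    confidence: float  # how much the reading should be trusted

@dataclass
class DecisionPicture:
    """Toy illustration only; not Cosimmetry's CDI implementation."""
    signals: dict = field(default_factory=dict)

    def add(self, name, signal):
        self.signals[name] = signal

    def summary(self):
        """One line per input, so the decision maker can see, and
        challenge, what is driving the overall picture."""
        return [
            f"{name}: {s.value:.2f} (confidence {s.confidence:.2f}, via {s.source})"
            for name, s in self.signals.items()
        ]

picture = DecisionPicture()
picture.add("network_load", Signal("traffic sensor", 0.72, 0.90))
picture.add("threat_level", Signal("analyst report", 0.40, 0.60))
print("\n".join(picture.summary()))
```

Because every input carries its own provenance and confidence, the decision maker can interrogate the picture directly instead of deferring to an intermediary, which is the immediacy-and-integration point made above.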

If you think we can help you with the decision-making challenges you are facing, then get in touch at info@cosimmetry.co.uk; we would love to discuss how we can help.


[1] Bakdash, J. Z., & Marusich, L. R. (2022, February 25). Left-truncated effects and overestimated meta-analytic means (comment on Mertens et al., “The effectiveness of nudging”). https://doi.org/10.31234/osf.io/9q67c