Cognitive Biases Part 1
Digital investigators all face a common challenge: the influence of cognitive biases on their work. In an age of abundant data, the human factor of bias can skew investigative findings and lead analysts astray. Recognizing and mitigating these mental pitfalls is important for protecting accuracy, as well as the integrity of your investigative work. Formal standards like the U.S. Intelligence Community’s Directive 203 explicitly require analysts to work “with objectivity and with awareness of [their] own assumptions and reasoning,” using techniques to reveal and mitigate bias. We're going to talk about the challenge of cognitive bias in digital investigations and offer practical strategies for awareness and mitigation (in two parts, since this is a big topic!). We will define key biases (confirmation, anchoring, availability, groupthink, and more), illustrate how they manifest with real-world scenarios, discuss how you can recognize your own blind spots, and outline proven mitigation techniques, from structured analytic methods to red-teaming and assumption checks.
Understanding Cognitive Bias in Investigations
Cognitive biases are systematic errors in thinking that affect decisions and judgments. They stem from the brain’s use of heuristics (mental shortcuts for processing information) which can sometimes mislead us. In other words, even the most rational investigator is prone to subconscious biases simply because of how human brains handle complex information. Unlike overt prejudices or intentional deception, cognitive biases operate subtly, often without our awareness. They cause us to see patterns that aren’t there, favor certain information over other information, or make snap judgments based on limited data. One cybersecurity blog defines cognitive bias as “when an investigator’s own assumptions or beliefs impact their judgment of information and evidence.” This means our interpretation of open-source data, criminal evidence, intelligence reports, or social media content can be skewed by pre-existing beliefs or mental shortcuts, rather than grounded purely in facts and logic.
Research shows that people often recognize bias in others but not in themselves, a phenomenon termed the “bias blind spot.” Even highly trained analysts are not immune. Simply knowing about biases does not eliminate them; like optical illusions, biases remain compelling even when we are aware of them. Richards J. Heuer, Jr., in Psychology of Intelligence Analysis, emphasizes that awareness of bias alone is “not an adequate antidote.” Experiments find that biases persist even after people are informed of them. Therefore, investigators must go beyond passive awareness and actively employ techniques to check and correct their thinking. Before exploring those techniques, let’s first examine some of the most common cognitive biases that can derail investigations.
Confirmation Bias
Confirmation bias is the tendency to seek out or interpret information in a way that confirms one’s pre-existing beliefs or hypotheses, while ignoring or discounting contrary evidence. Once an investigator develops a theory, there is a natural pull to find data that supports it and to gloss over information that challenges it. This bias is automatic and unintentional. Our brains subconsciously favor reinforcing information because it’s mentally easier to stick with an initial story than to overturn it.
In digital investigations, confirmation bias is extremely common. For example, consider a corporate incident response team investigating a series of network breaches. Suppose the team quickly settles on the hypothesis that a recently disgruntled employee is behind the attacks (perhaps because the company had layoffs). Investigators might then focus their inquiry on confirming an insider threat and start scrutinizing internal logs and employee communications that support this suspicion, while overlooking signs of an outside hacker.
Confirmation bias can creep into OSINT research and due diligence as well. An OSINT investigator might start with a preconceived idea about a person or organization, for instance, suspecting someone of extremist leanings, and then unknowingly tailor their searches to prove that assumption. Instead of neutrally gathering all relevant posts or data, they may subconsciously choose keywords, platforms, and sources likely to yield confirming evidence (like finding the subject’s possible ties to one extremist group) while ignoring disconfirming evidence (like the subject’s other affiliations). If you begin with the hypothesis that “Person X is involved in Y,” you risk collecting only the information that supports that narrative, ending up with a one-sided analysis. In one example, an analyst investigating whether an individual had become radicalized started with the hypothesis that the person was an ISIS sympathizer. They focused only on evidence of Islamist extremism, overlooking earlier evidence that the individual had actually been involved with a neo-Nazi group. The result was a faulty conclusion tailored to the initial assumption.
The antidote to confirmation bias is to constantly remind yourself to seek out contradictory information. A skilled analyst intentionally looks for disconfirming evidence and alternative explanations. For instance, when reviewing social media posts about a developing event, an OSINT investigator on guard against confirmation bias will actively search for posts or sources that conflict with their working theory, not just those that agree with it. They will cross-check between multiple platforms and sources (e.g., verifying a story on Twitter against Facebook posts, Telegram channels, or official statements) to ensure they aren’t cherry-picking facts that fit their narrative. By deliberately asking “What would I expect to see if my hypothesis is wrong?” and then looking for those signs, investigators can catch confirmation bias early. Always challenge your initial assumptions; if you only find evidence supporting your theory, that is a warning sign that confirmation bias may be at work.
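If you want to make that habit concrete, here is a minimal Python sketch of the idea: every confirming search gets paired with a search aimed at proving the hypothesis wrong, and a hypothesis with no recorded disconfirming evidence gets flagged. The platform list, query phrasing, and class names are illustrative placeholders, not a prescribed tool or workflow.

```python
# Minimal sketch: pair every confirming search with a disconfirmation pass.
# Platform names and query phrasing are illustrative placeholders.
from dataclasses import dataclass, field

PLATFORMS = ["Twitter/X", "Facebook", "Telegram", "official statements"]

@dataclass
class HypothesisCheck:
    hypothesis: str                                   # e.g. "Person X is involved in Y"
    confirming_hits: list = field(default_factory=list)
    disconfirming_hits: list = field(default_factory=list)

    def search_plan(self):
        """Pair every confirming query with one aimed at proving the hypothesis wrong."""
        plan = []
        for platform in PLATFORMS:
            plan.append((platform, f"evidence supporting: {self.hypothesis}"))
            plan.append((platform, f"evidence contradicting: {self.hypothesis}"))
        return plan

    def bias_warning(self):
        """Treat an absence of logged disconfirming evidence as a red flag, not a conclusion."""
        if self.confirming_hits and not self.disconfirming_hits:
            return "Warning: no disconfirming evidence recorded. Did you actually look for it?"
        return "Both supporting and contradicting evidence were considered."

check = HypothesisCheck("the subject is affiliated with extremist group Z")
for platform, query in check.search_plan():
    print(f"[{platform}] {query}")
check.confirming_hits.append("post praising group Z (unverified account)")
print(check.bias_warning())
```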
Anchoring Bias
Anchoring bias refers to the tendency to rely too heavily on the first piece of information encountered (the “anchor”) when making decisions. An initial clue or assessment can unduly influence all subsequent analysis. The danger is that once the mind latches onto an anchor, it becomes difficult to adjust away from it, even as new information emerges.
A classic example of anchoring in intelligence work was seen in the pre-war assessments of Iraq’s weapons of mass destruction (WMD) capabilities in 2002. Analysts latched onto early reports and assumptions that Iraq must be hiding WMD stockpiles. This idea became an anchor that framed all incoming intelligence. As later investigations revealed, because analysts had anchored on the WMD hypothesis, they gave disproportionate weight to weak pieces of evidence that seemed to support it and ignored significant evidence to the contrary. The first information (some suspicious activities, defectors’ claims, etc.) set a narrative that Iraq had WMD, and analysts found it exceedingly difficult to “reset” their thinking as new reports came in. Looking back now, this anchoring (compounded by groupthink, as we’ll discuss next) was a major factor in one of the most costly intelligence misjudgments in recent history. If you haven't read the 9/11 Commission Report, I highly recommend it; it is eye-opening with respect to several of these biases (and to larger intelligence failures in general).
In everyday investigative scenarios, anchoring bias can manifest in various ways. A digital forensics examiner might be told by police upfront that “we think the suspect’s phone contains evidence of X.” That initial briefing serves as an anchor. The examiner, even unintentionally, may then focus on finding X on the phone, potentially overlooking other evidence not related to X. In fact, studies in forensic science have shown that contextual information can bias experts’ interpretations of evidence. If a forensic analyst knows a suspect has confessed, they might be more inclined to interpret ambiguous DNA or fingerprint evidence as a match, because the confession becomes an anchoring context. The FBI found that even subtle expectations (like a hint at a suspect’s guilt) can sway how examiners evaluate forensic evidence, a phenomenon termed contextual bias. To combat this, some forensic labs now implement “blind” analysis procedures, where examiners are kept unaware of the initial investigative theory or the side that submitted the evidence. By removing the anchoring context, examiners evaluate the raw data first on its own merits, reducing the risk that an initial narrative will color their judgment.
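As a rough illustration of what blind intake could look like in practice, the Python sketch below strips contextual fields that might anchor an examiner before the evidence item is assigned for first-pass analysis. The field names are hypothetical and not drawn from any real lab system.

```python
# Minimal sketch of "blind" evidence intake: remove anchoring context before first-pass review.
# Field names are hypothetical placeholders.
CONTEXT_FIELDS = {"submitting_party", "investigative_theory", "suspect_confessed", "case_notes"}

def blind_intake(evidence_record: dict) -> dict:
    """Return a copy of the record with contextual (potentially anchoring) fields removed."""
    return {k: v for k, v in evidence_record.items() if k not in CONTEXT_FIELDS}

record = {
    "item_id": "PH-0042",
    "item_type": "mobile phone image",
    "submitting_party": "prosecution",
    "investigative_theory": "device contains evidence of X",
    "suspect_confessed": True,
}

examiner_view = blind_intake(record)
print(examiner_view)  # the examiner sees the artifact, not the narrative around it
```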
Anchoring bias often works hand in hand with confirmation bias: the first piece of evidence shapes your theory, and then you seek confirming data for that theory. To counter it, investigators should delay firm conclusions in the early stages. It helps to consider multiple hypotheses at the outset rather than zeroing in on one. If you find yourself thinking “I’m pretty sure suspect A did it” based on an initial clue, pause and force yourself to also imagine scenarios where suspect B or an unknown third party could be responsible. If you consider multiple hypotheses from the beginning (using techniques like Analysis of Competing Hypotheses, which we will talk about later), you prevent the first plausible idea from unjustifiably dominating your mind.
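As a small preview of the ACH idea, the sketch below scores every piece of evidence against all competing hypotheses and then looks for the hypothesis with the fewest inconsistencies, so the first plausible theory cannot quietly win by default. The hypotheses, evidence items, and scores are illustrative placeholders.

```python
# Simplified ACH-style consistency matrix: each evidence item is scored against every hypothesis.
# +1 = consistent, -1 = inconsistent, 0 = neutral / not diagnostic. All values are illustrative.
hypotheses = ["insider (suspect A)", "external actor (suspect B)", "unknown third party"]

evidence = {
    "breach began shortly after layoffs":   [+1,  0,  0],
    "logins from foreign IP ranges":        [-1, +1, +1],
    "malware matches a known criminal kit": [-1, +1,  0],
}

# Classic ACH looks for the hypothesis with the *fewest* inconsistencies,
# not the one with the most supporting evidence.
inconsistencies = [
    sum(1 for scores in evidence.values() if scores[i] < 0)
    for i in range(len(hypotheses))
]

for hyp, count in zip(hypotheses, inconsistencies):
    print(f"{hyp}: {count} inconsistent item(s)")
```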
Availability and Selection Bias
Availability bias is a cognitive bias where people judge the likelihood or importance of something based on how readily examples come to mind or how easily information can be retrieved. In an investigation, this often means giving undue weight to evidence that is most accessible, memorable, or prevalent, while undervaluing information that is harder to find or less immediately present. In OSINT work, availability bias can lead us to rely on the easiest-to-find sources or the most loudly repeated information, under the false impression that what’s readily available is the most representative or true.
One manifestation of availability bias is when an analyst equates quantity of information with quality of evidence. For instance, if there is a flood of open-source data supporting a particular theory, one might assume the theory is true simply because so much information is available on it.
An OSINT researcher reading about a breaking news event (say, a large industrial fire) might see a great volume of social media posts and commentary proposing Cause X for the fire. The sheer volume and visibility of those posts can create a subconscious conviction that Cause X is the correct explanation; after all, “everyone is talking about it.” However, this can be misleading: the most popular theory is not necessarily the correct one. Important evidence (like a sober official report or a minority eyewitness account indicating a different cause) may be buried amid the noise. The investigator who succumbs to availability bias might focus on the well-publicized narratives and miss the quiet clues.
Selection bias occurs when the data you collect (the sample) is not representative of the reality you seek to analyze. This can happen because of the sources and channels you choose. If an analyst only uses one search engine, or only monitors Twitter and ignores other platforms, the information gathered will be skewed and incomplete. If you always rely on the same few tools or keywords, you might miss whole swaths of information that fall outside that narrow funnel. This was dubbed the “law of the instrument” bias by Ntrepid, meaning an over-reliance on familiar tools such that “if you always use Twitter, Facebook, and Instagram, your results will always be limited to those sources.” Your go-to tools and habits pre-select the evidence, creating a biased picture before analysis even begins.
Availability and selection biases lead to overconfidence in what is easily found. A threat intelligence analyst might disproportionately focus on threats that have been in the recent news or in their Twitter feed (since those come to mind right away), while overlooking emerging threats that haven't hit the news yet. In due diligence research on a company or individual, an investigator might find a ton of readily available positive information, like press releases, a polished LinkedIn profile, and a clean criminal record, and decide there's nothing important there. If you stop at this step, you might never find the foreign corporate filings, civil litigation records, or dark web mentions that are the actual red flags. The most accessible facts are not always the most relevant. In fraud investigations, there’s an old saying that “absence of evidence is not evidence of absence.” If an investigator only looks in obvious places (and finds nothing bad), availability bias might lull them into thinking all is well, when in truth a more exhaustive search would have revealed critical issues.
To counter availability bias, investigators should consciously expand their search and consider information that is not immediately at hand. This means checking a variety of sources and platforms, not just the ones that pop up first. It also means weighing information by its credibility and relevance, not just by how eye-catching or prevalent it is. As a best practice, ask yourself: “Am I focusing on this piece of evidence because it’s truly important, or just because it was easy to find or remember?” If the latter, make sure to look further. Important information often requires digging: translating foreign-language sources, querying specialized databases, or consulting archival material. These are the tasks our brains may not default to because they’re harder than reading the first page of Google results. Be wary of recency as well: just because something happened recently (and is fresh in memory) doesn’t mean it’s more significant than older data.
Availability bias reminds us that what you see is not all there is. Always assume there is additional information out there that might tell a different story; you just have to look for it. Use multiple search engines, explore alternative social media platforms, and include less common sources (academic papers, niche forums, local news, etc.) in your research. By broadening your collection pipeline, you reduce the risk that an investigative conclusion is merely a product of the easiest-to-find data rather than the most accurate data.
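One low-tech way to enforce that breadth is a simple coverage check: list the source categories you intend to consult and flag any you never actually touched. The sketch below assumes a hypothetical collect() stub and an illustrative category list, not any particular tool or API.

```python
# Minimal sketch of a collection-coverage check. The categories and collect() stub
# are hypothetical placeholders, not real APIs.
SOURCE_CATEGORIES = [
    "mainstream search engines",
    "alternative search engines",
    "mainstream social media",
    "niche forums",
    "local / foreign-language news",
    "academic and archival sources",
    "specialized databases (corporate filings, court records)",
]

def collect(query: str, category: str) -> list[str]:
    """Placeholder for whatever tooling you actually use for this source category."""
    return []  # pretend nothing has been gathered yet

def coverage_report(consulted: set[str]) -> None:
    """Flag every source category that was never consulted for this investigation."""
    for category in SOURCE_CATEGORIES:
        status = "covered" if category in consulted else "NOT CONSULTED"
        print(f"{category}: {status}")

consulted = set()
for category in ["mainstream search engines", "mainstream social media"]:
    collect("Acme Corp due diligence", category)  # the familiar, easy-to-reach sources
    consulted.add(category)

coverage_report(consulted)  # anything NOT CONSULTED is a reminder that what you see is not all there is
```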
Groupthink
While the biases discussed above occur at the individual level, groupthink is a bias phenomenon that occurs in team settings. Groupthink is a psychological phenomenon in which the desire for group cohesion or consensus leads members to suppress dissenting opinions and overlook alternative solutions. A close-knit team may prematurely settle on one explanation or course of action because questioning the majority view feels unwelcome. This results in poor decisions that no single member might have made on their own. Groupthink is dangerous because it can amplify other biases (like confirmation or anchoring) and give them collective momentum. Once a team embraces a theory, contrary evidence may never even be brought up due to social or organizational pressure.
A well-documented case of groupthink impacting analysis again comes from the 2002 Iraq WMD intelligence failure. Within the analytical groups, a prevailing narrative took hold that Iraq had active WMD programs, and a combination of pressure and consensus-seeking discouraged analysts from speaking out against this view. Even those who had doubts felt it was futile or even career-risky to voice them. As a result, dissenting data was not considered. According to the 9/11 Commission Report, analysts “didn’t feel empowered to question the prevailing narrative” and so they “accepted weak evidence that supported their theory and dismissed conflicting data.” Their desire to all be on the same page (and maybe to deliver a clear, confident assessment to policymakers) overrode the normal critical evaluation process. The groupthink dynamic meant that once the popular opinion was clear, it became self-reinforcing, and the usual analytic rigor suffered. As we all know now, there were never any WMDs in Iraq. Peer pressure (even unspoken) can make investigators hesitant to be the lone voice saying “I’m not convinced we’re right, what if it’s something else?”
OSINT analysts worldwide often collaborate informally on forums and Twitter to analyze events. This community is a strength, but it also has the potential for echo chambers. If a few respected voices propose a theory about, say, the identity of a threat actor or the origin of a disinformation campaign, there is a tendency for many others to rally around that theory. Alternative hypotheses might get less attention or be met with skepticism not on their merits, but because they go against the established narrative endorsed by group consensus. It’s essentially groupthink at the community level, a form of collective confirmation bias where a group of independent analysts all fall prey to the same bias, reinforcing each other.
In team settings, leaders should encourage constructive dissent and remind everyone that challenging each other strengthens the analysis rather than undermining it. Some intelligence agencies have institutionalized this through designated “devil’s advocates” or red teams whose job is to argue alternative viewpoints (we will discuss this in mitigation strategies). The presence of a formal Analytic Ombudsman in the U.S. Intelligence Community (as required by ICD-203) is also a structural attempt to counter groupthink: analysts can escalate concerns if they feel analytic objectivity is being eroded by consensus or pressure. On an everyday level, simply creating a safe environment for debate is crucial: team members should be able to say “I disagree” or “Have we considered this other angle?” without fear of ridicule or repercussion. Each investigator should remember that loyalty is to the truth, not to the team’s initial theory. If you’re in a group and notice that everyone is quickly agreeing and patting each other on the back, it might be time to pause and play devil’s advocate with yourselves.
Groupthink is combated by actively seeking diverse opinions. Bringing in an outsider to review the case, or even splitting the team to analyze the case independently and then compare findings, can force consideration of different viewpoints. Keep in mind that analytical consensus should be a result of evidence, not a goal in itself. A healthy investigative process sometimes involves internal debate and shifting perspectives as new evidence comes in.
I have a quick reference table below summarizing the five cognitive biases we've talked about so far:

| Bias | What it is | Key mitigation |
| --- | --- | --- |
| Confirmation bias | Seeking or interpreting information so it confirms an existing theory, while discounting contrary evidence | Actively hunt for disconfirming evidence; ask what you would expect to see if your hypothesis were wrong |
| Anchoring bias | Relying too heavily on the first piece of information encountered | Delay firm conclusions and keep multiple hypotheses in play from the start |
| Availability bias | Judging importance by how easily information comes to mind or is found | Weigh evidence on credibility and relevance, not prominence or recency |
| Selection bias | Collecting unrepresentative data because of habitual tools, platforms, or keywords | Broaden the collection pipeline across engines, platforms, and source types |
| Groupthink | Suppressing dissent to preserve team consensus | Encourage constructive dissent, devil’s advocates, and independent review |
Understanding these biases and recognizing their warning signs is the first step toward mitigating them. In Part II of our discussion on Cognitive Bias, we discuss how investigators can stay alert to their own biases during an investigation and what practical strategies can reduce their impact.