When the British first unleashed tanks on the Somme in 1916, they were heralded as the weapon that would break the stalemate of trench warfare. Instead, most broke down, bogged in mud, or were hurled forward without any real understanding of how to integrate them into combined arms. What was meant to be a breakthrough often became a liability until years of experimentation and doctrinal refinement revealed how tanks could be used effectively. Half a century later, US troops in Vietnam were handed M16 rifles under similar conditions: fielded widely before their flaws were understood, with no cleaning kits and little training. The result was tragic. Rifles jammed in firefights, eroding trust in a weapon meant to save lives. Both cases illustrate the point: New technology should not simply be deployed and trusted. Rather, these technologies and innovations, along with their shortcomings and vulnerabilities, must be understood by the soldiers who use them. Beyond that, leaders must be trained to know what to do when these systems fail. Looking to the future, the incorporation of artificial intelligence is no different from the introduction of tanks or the fielding of the M16. If we rush it to the battlefield without deliberate preparation, its promise could quickly turn into a liability when soldiers need it most.
How, then, can the Army better prepare its soldiers to fight more effectively in an environment shaped by AI?
Understanding the Vulnerabilities of Human-AI Systems
The appeal of AI on the battlefield is clear. In mere seconds it can process sensor data, intelligence reports, and targeting information that would take highly trained soldiers hundreds of hours to work through. There is no doubt that in future conflicts, the side that can leverage human-AI systems more effectively will hold a decisive advantage. However, the very efficiency that makes AI so valuable on the battlefield also creates serious vulnerabilities because it plays directly into the human tendency to rely on heuristics. Heuristics are mental shortcuts—simple rules people rely on to make quick judgments under stress and uncertainty. Psychologically, heuristics speed up decision-making, especially in high-stress environments where there is little time to deliberate. But heuristics also carry risk. In the context of battlefield AI applications, heuristics can lead soldiers to accept an AI’s output at face value without questioning it, especially when that output is delivered confidently. This can manifest in different ways—namely, passing the buck, accept-and-forget thinking, and mental dependency on AI systems.
Passing the Buck / Deferring to Authority
In high-stakes environments, soldiers and commanders may unconsciously shift accountability from themselves to AI, rationalizing that the system recommended or confirmed their choice. This phenomenon is exacerbated by authority bias, the psychological tendency to place trust in sources that appear authoritative. Because AI delivers output with extreme confidence and precision, its recommendations often masquerade as definitive guidance rather than the statistical predictions they are. This dynamic is evident in modern operations. For instance, the Israel Defense Forces have utilized Habsora (“the Gospel”), an AI targeting system that generates rapid target lists alongside color-coded indicators to estimate collateral damage. Psychological deference to algorithmic output becomes a critical organizational risk when decision-support systems such as Habsora or the US Army’s Maven Smart System are integrated into targeting operations. The legal and career implications of a wrong choice only reinforce this dynamic. Commanders who follow their own expertise or intuition and are incorrect may be judged negligent or reckless, whereas those who follow an AI recommendation can argue that they used the most advanced tool available. In a profession where careers are shaped by the outcomes of high-risk decisions, it is easy to see how AI could become a shield against personal accountability. Research bears this out: a recent study found that people who rely too heavily on AI tend to “pass the buck” when making decisions, spending less time evaluating explanations. In high-stakes leadership contexts, this increases uncritical acceptance and weakens personal ownership and accountability by introducing an assumption that the system will absorb culpability for mistakes.
Combat conditions amplify this risk. Under stress, fatigue, and information overload, leaders are already predisposed to lean on heuristics to help lighten their load. AI provides the perfect shortcut, a quick, confident answer in the chaos of battle. Yet when responsibility diffuses, so does ownership of judgment. The result is a force that may act faster, but not necessarily smarter, and one in which no one fully claims accountability when AI is wrong.
Accept-and-Forget Thinking / Information Overload
A second vulnerability in the use of AI on the battlefield can be described as accept-and-forget thinking: the tendency for leaders to quickly accept an automated system’s output as correct and move forward without double-checking it. When information is delivered rapidly by technology, the brain shifts into a low-review, high-speed decision cycle, assuming the machine has already vetted the answer. This pattern is also known as automation bias, in which users accept AI’s recommendations even when other evidence suggests the system may be wrong. This can lead to two kinds of dangerous mistakes.
Omission errors occur when leaders fail to act because the system did not provide a recommendation and they are too overwhelmed with information to make a decision on their own. Without the ability to use AI tools effectively to parse data, information overload leads to decision paralysis. This paralysis stems from a basic problem of data utilization: the sheer volume of data collected exceeds the capacity of traditional analysis methods, making it difficult for decision-makers to identify critical insights. Communication breakdowns exacerbate the issue, as staff members struggle to convey the implications of the data they have gathered, and vital information may not reach commanders in a timely or understandable form. The combination of overwhelming data, ineffective communication, and a lack of actionable insight hinders operational effectiveness and leads to missed opportunities in critical situations.
In contrast, commission errors happen when leaders follow an automated cue even when signs indicate it is incorrect. Both errors are dangerous in combat, where hesitation or misjudgment can cost lives. NASA flight-deck experiments captured this problem clearly. Crews were given electronic checklists that included automated decision cues. When those cues presented wrong information, crews not only followed the incorrect cues more often but also talked less with one another before deciding than crews using a traditional paper checklist. Instead of double-checking or debating, pilots deferred to the system and acted with less coordination. The very presence of automation changed the way the team interacted and decreased the redundancies that usually help humans catch mistakes.
On the battlefield, the same pattern could easily emerge. Faced with an inordinate amount of data to parse and a confident AI output, soldiers may feel less need to question the result or discuss it with their peers. This bias does not just affect individuals; it can reshape team dynamics, inhibit communication, and accelerate bad decisions. Critically, we still do not know the full extent of how automation bias will manifest in Army units operating under the stress of combat. That uncertainty is itself a vulnerability, and one the Army must confront through deliberate preparation before AI systems are fielded widely.
Mental Dependency / Reduced Critical Thinking
The third vulnerability is the gradual erosion of soldiers’ critical thinking when AI is used as a constant crutch. Preliminary studies in education and cognitive science have shown that when people rely heavily on AI assistance, they devote less mental effort to problem-solving. Over time, this creates dependency. Tasks that once required deliberation and judgment become reflexive acts of deferring to AI. For an Army that already recognizes critical and creative thinking as an essential leader attribute, this potential adverse effect is particularly concerning.
As mentioned previously, the NASA study found that individuals communicate less in highly automated situations, even when the automation’s output is wrong. That reduction in communication made it harder to challenge mistakes and weakened the redundancy normally created by open dialogue. More recent studies with generative AI reveal a parallel risk: while individuals may produce sharper and more polished outputs, collectively their solutions converge and become noticeably more alike. Taken together, these dynamics compound one another. Less dialogue within units means fewer chances to catch errors, and greater uniformity in AI’s outputs makes those same units’ actions easier for an adversary to anticipate.
In combat, where uncertainty is the norm, a force dulled by dependence on AI will be less adaptable. If soldiers are conditioned to accept AI’s guidance at the expense of their own judgment, the Army risks producing leaders who can execute quickly but struggle to adapt when the algorithm fails, is degraded, or is actively deceived by an enemy. Unless this tendency is countered, essential Army competencies and attributes will atrophy in the next generation of leaders.
What Army Leaders Can Do
Effectively leveraging AI on the battlefield does not relieve leaders of responsibility; rather, it magnifies it. The immediate counter to the risks posed by AI is not more technology but the fundamental, deliberate application of Army Doctrine Publication 6-22, Army Leadership and the Profession, and a focus on the human dimensions of human-AI systems integration. Army leaders already have a framework for anticipating and managing the risks AI will introduce on the battlefield: the leadership requirements model.
The leadership requirements model provides a direct answer to the vulnerabilities identified earlier.
Intellect requires leaders who can think critically, prepare themselves to question AI’s recommendations, and deliberately train the cognitive skills most relevant to working with or without AI. By strengthening their own judgment and innovative thinking, leaders ensure they are not sidelined when the algorithm produces flawed or biased outputs.
Leads emphasizes the human role in communication and accountability. Because automation can reduce dialogue within teams, leaders must actively foster communication in decision-making. They must also model disciplined questioning of AI in front of subordinates, demonstrating that leadership means owning decisions and reinforcing trust within the formation even when an AI system is involved.
And achieves underscores that mission success cannot depend on a single point of failure. Leaders must prepare their units to operate in environments where AI is degraded, denied, or deliberately manipulated. This means not only training on technical redundancy but also developing initiative and adaptability in their soldiers so that operations continue when the AI is wrong, slow, or silent.
By deliberately studying and applying the leadership requirements model, leaders can frame AI as a support tool rather than an oracle. Just as units practice navigation with compasses and maps when GPS is degraded, they must also be prepared to accomplish missions when AI outputs are denied, manipulated, or simply incorrect. Leaders who model this behavior and demonstrate when to leverage AI and when to question its outputs set the tone for their formations. In doing so, they counteract the diffusion of responsibility, mitigate automation bias, and safeguard critical thinking skills within their teams.
Call to Action: What the Army Needs to Do
Leaders will need to remain vigilant, but vigilance alone, without institutional action, will fall short. Biases in human-AI interaction will not be fully understood until they are studied under the stress and friction of combat-like conditions. Traditional academic studies can illuminate patterns of automation bias and cognitive dependency, but they cannot capture the unique conditions under which soldiers fight: extreme fatigue, sustained danger, degraded communications, and the constant pressure of life-and-death decisions. To prepare for the future fight, the Army must deliberately stress test the relationship between humans and AI in its most realistic training environments: combat training centers and other high-fidelity simulations.
The Army already understands the need to safeguard its equipment under demanding conditions. M1 Abrams tanks, for example, are designed with systems that regulate operating temperature to ensure optimal performance and reduce the risk of overheating. Leaders should be treated no differently. Just as the Army studies and sets limits for its machines, it must also understand how human performance shifts under stress when paired with AI. For example, a unit placed under sleep deprivation might show a sharp increase in AI-induced vulnerabilities and errors when compared to a well-rested unit. By integrating AI evaluations into comprehensive human performance efforts like the MASTR-E study, the Army can establish the experimental framework needed to uncover these unknown thresholds. Establishing them is essential not only for operational safety but also for building a feedback loop spanning the entire DOTMLPF-P (doctrine, organization, training, matériel, leadership and education, personnel, facilities, and policy) spectrum to help prepare Army leaders to recognize and mitigate these vulnerabilities.
The field of psychology offers methodological tools to accomplish this. Unlike observational studies, experimental designs can isolate cause and effect by manipulating conditions such as stress, fatigue, or tempo and measuring their impact on how soldiers interact with AI. These experiments must also examine how AI’s presentation of information, whether as confidence scores, ranked lists, or color-coded indicators, influences soldier decision-making. By leveraging this tradition of experimentation in the social sciences, the Army can move beyond speculation and generate the data it needs to set safeguards, design leader training, and adapt doctrine.
This is the call to action: The Army cannot assume that lessons from civilian studies or commercial applications of AI will automatically translate to the battlefield. It must generate its own evidence by embedding AI into combat training center rotations, professional military education wargames, and other high-fidelity training environments. Only then will leaders understand not just when AI can be trusted, but when it must be challenged and how to build the redundancy necessary to prevail when it fails.

History is clear: When the Army has rushed untested technology into the fight, it has paid the price in lives and mission failure. The vulnerabilities are already visible. Passing the buck, accept-and-forget thinking, and mental dependency have the potential to negatively shape how soldiers interact with AI. But the Army is not powerless. As an institution, it must do what it has always done best: train under conditions that mimic the stresses of war, this time incorporating AI into realistic, high-fidelity training scenarios to uncover where technology helps and where it misleads. The goal is not to reject AI but to master it. Understood and applied correctly, AI can sharpen the intellect of soldiers rather than dull it. Just as we safeguard our machines to keep them operating at their limits, we must safeguard our leaders so they can think clearly, question confidently, and adapt when the algorithm is wrong. The Army’s future advantage will not come from the speed of its machines, but from the resilience of its leaders when those machines fail.
Dr. Michael Hay is a research psychologist with the Center for the Army Profession and Leadership at Fort Leavenworth, Kansas, and holds a doctorate in industrial-organizational psychology. His work involves designing Army leadership development tools and leveraging data to inform Army leaders.
The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Pfc. Thomas Nguyen, US Army

