It was the twelfth hour of the planning cycle when our lead plans officer stood between a whiteboard and a map, pulled on a headset, and started describing a complex defensive operation. He talked through the scheme of maneuver in sequence—terrain, routes, phase lines, objectives, support-by-fire positions, transitions, branches. The intelligence and fire support officers interjected with detail where their warfighting functions were involved. Over a twenty-minute period, a transcription tool captured every word. Thirty minutes later, the staff had a first-draft brigade operations order in correct doctrinal format—drafted not by any officer in the room, but by an artificial intelligence tool trained on the structure of Army operations orders, working from the transcript of what the humans had actually said.
That episode characterized our rotation at the Joint Readiness Training Center at Fort Polk, Louisiana. The three of us served on the staff of 3rd Brigade Combat Team, 101st Airborne Division, the Rakkasans. Across the rotation, we integrated artificial intelligence tools into the military decision-making process, the seven-step planning methodology the Army uses to convert higher headquarters guidance into an executable order. The results were three complete brigade operations orders, each produced in roughly twenty-three hours, compliant with the one-third/two-thirds rule that reserves two-thirds of available planning time for subordinate units. That pace significantly exceeds what most brigade staffs achieve under the pressure of a combat training center rotation.
The result is not the finding. What matters is what the compressed timeline created: cognitive time. Time for the commander and staff to do the conceptual work that no machine can adequately perform—understanding the problem, visualizing the fight, accepting risk, and deciding. AI inside a command post is most appropriately employed as artificial staff support, never as a substitute for command judgment. Treating it as anything more breaks doctrine and invites operational risk. Our rotation demonstrated both halves of that argument—where AI strengthens the staff, and where, if permitted past its proper boundary, it would corrupt the very process it is meant to support.
Where AI Earned Its Place
Army doctrine draws the relevant line clearly. Army Doctrine Publication 5-0, The Operations Process, establishes that planning is commander driven and staff supported. The commander owns the conceptual dimension: framing the problem, defining the end state, developing the operational approach, selecting the decisive operation, accepting risk. The staff owns the supporting labor: running estimates, synchronizing operations, comparing options, and converting commander guidance into executable products. Where AI belongs in a command post, and where it does not, tracks that line exactly. Both Major Michael Zequeira and Colonel Jason Adler, writing in Military Review, have argued much the same point: that AI’s most immediate military value lies in unburdening staffs without severing the human role in judgment.
During our rotation, AI delivered the greatest value in five planning functions, each characterized by high clerical labor and limited conceptual judgment.
The first was during receipt of mission. After receiving our division-level operation order, we fed it and supporting products into an AI tool that surfaced specified and implied tasks, constraints, restraints, command relationships, and critical deadlines. The output required human interpretation and validation, but the raw extraction was faster and less error-prone than manual scanning. We published a complete warning order to subordinate battalions within one hour of receipt—a pace that exceeds standard brigade performance for a warning order subordinates can actually execute against.
The second was during mission analysis, where we employed a separate AI-enabled tool that ingested documents, extracted mission analysis factors, and generated structured outputs for staff refinement. Essential tasks still required human validation. Assumptions still required human judgment. Risks still required human assessment. What changed was the starting point: The operations officer walked into the mission analysis brief with a working product in hand rather than a folder of marked-up notes. A staff working from a structured draft under deadline behaves fundamentally differently from one staring at empty paragraphs at hour four of a twenty-four-hour clock.
The third and most consequential use case we observed—and the one we believe most warrants Army attention—was voice-to-doctrine translation. Commanders and planners do not naturally think in finished five-paragraph doctrinal prose. Tactical thought emerges verbally, in narrative, correction, and refinement. Traditional staff work forces that thought through a bottleneck: Someone must record it, interpret it, convert it into doctrinal structure, and build the operation order, warning orders, tasks to subordinate units, synchronization matrix, and timeline from it. The process consumes hours and introduces translation friction at every step. Commander’s intent gets lost in translation. It happens in headquarters across the Army every day.
What the AI did, working from the transcript of the plans officer’s verbal walkthrough, was take what we had actually said and put it into the format the Army requires. The machine did not invent the tactical concept. It preserved and formalized command thought—narrowing the gap between how commanders speak and how doctrine requires headquarters to publish.
The fourth function came after the scheme of maneuver was defined, when an AI tool drafted warning orders, structured order paragraphs, built timelines, and produced first drafts of synchronization tools. In most headquarters, the concept’s creation is not where product labor ends; it is where product churn accelerates. Every refinement cascades into changes across timelines, tasks, matrices, and rehearsal materials. Manual staffs spend disproportionate energy just keeping their own products aligned with one another. Using AI to absorb that churn let us spend our energy assessing the quality of the plan rather than maintaining the mechanics of the plan.
Finally, the most underappreciated use case required the machine to interrogate rather than to decide. Our brigade executive officer prompted an AI tool to generate pointed questions across the warfighting functions—what is our biggest vulnerability during a forward passage of lines, for example, or what happens to the scheme of fires if the main effort breaches thirty minutes early—and walked those questions into the synchronization meeting. The machine enhanced rigor without displacing judgment. It functioned as a staff coach, not a staff commander.
Where AI Stayed Back
During course of action development, AI took a back seat. This was not a failure of integration but the correct doctrinal relationship—and it was the most important finding from the rotation. Course of action development depends on tactical imagination, doctrinal literacy, terrain appreciation, understanding of the enemy, and the commander’s sense of acceptable risk. It is the stage at which the headquarters decides what is decisive, what is shaping, where combat power is massed, how tempo is created, what the reserve does. Our plans officer did not ask the AI to generate courses of action. He stood at the map with the commander and the operations officer and built the concepts by hand—against terrain, against enemy disposition, against the commander’s intent. The AI was in the room. The AI was not the author.
The same boundary applied at every node of commander authority. AI was not used to determine commander’s intent, select the decisive operation, authorize fires, accept risk, or sign orders. These functions are nondelegable to machines—not because of regulatory prohibition but because command responsibility is not an administrative task. It is the function for which an officer is commissioned.
The Risk the Training Environment Masked
Speed without governance produces faster confusion, not better plans. Even in a disciplined rotation, we observed several failure modes. Three warrant particular attention from any brigade considering adoption.
The most dangerous is what we have come to call hidden confidence. Generative AI models sometimes produce output that is grammatically polished, stylistically correct, and factually wrong. They do not flag these errors. They do not know they are wrong. Over the course of our planning cycle, we observed incorrect unit designations, inverted phase lines, fabricated control-measure names that sounded plausible but did not exist in the original order, and time-distance calculations off by thirty to sixty minutes. Any one of these errors, published in an operations order and executed by a subordinate unit, could produce fratricide, a missed linkup, or a breakdown in synchronization at a critical moment. Our section chiefs caught these errors because they refused to treat AI output as validated. That outcome reflected training and discipline, not luck. A polished paragraph is not a correct paragraph. A clean matrix is not an approved matrix. Staff professionalism in an AI-enabled headquarters is measured less by production speed than by validation rigor—an expectation consistent with the Department of Defense framework for responsible AI, which requires AI-enabled systems to remain traceable, reliable, and governable, with humans accountable for their use.
The second failure mode is architectural. Our tools operated in a benign, connected, unclassified training environment. They are not necessarily deployable in a classified, contested operational one. Cloud-hosted commercial models require connectivity the Army may not have in denied, degraded, intermittent, or limited conditions. They also risk exposing classified information to nonsecure systems if used carelessly. The Army’s emerging secure-network generative tools are a starting point, but the force still needs hardened, on-premises or edge-deployable models that operate in a contested electromagnetic spectrum. A headquarters that depends on a commercial software service to produce orders has introduced a logistics dependency as critical to operations as water or fuel. The planning question in combat is not only whether AI works; it is whether it works when the enemy is actively trying to prevent it from working.
The third failure mode is strategic. Russian and Chinese forces face the same cognitive-time problem US forces do. Their staffs read comparable volumes of higher guidance, build comparable products, face comparable clerical friction. Both have prioritized AI-enabled command-and-control development for several years. If they integrate AI into their planning cycles faster or more effectively than we do, the tempo advantage we observed at Fort Polk evaporates. The relevant question is not whether AI will change tactical planning. It is whose force will learn to govern it first—and relative governance discipline, more than relative speed of adoption, will shape which force presents faster, better-synchronized problems to the other at the start of the next conflict.
What This Means for the Army
The broader conclusion is not that AI can replace the military decision-making process. It is that AI can make that process more executable under modern conditions by reducing friction in the parts of staff work that consume time without adding proportional conceptual value. Four imperatives follow for brigade commanders and staffs adopting these tools now.
First, employ AI aggressively at receipt of mission, mission analysis drafting, voice-to-doctrine translation, orders production, and red teaming. These are the stages where machine labor buys the most cognitive-time return for humans, and where clerical consolidation most directly translates into planning advantage.
Second, prohibit AI authorship of commander’s intent, the decisive operation, or any approval-authority product without explicit human validation. Codify these limits in unit standard operating procedures before operational pressure tests them. Before, not after.
Third, establish product-governance discipline as a prerequisite to AI adoption—not as a follow-on refinement. A single source of truth, a version-control process, and named validators by product class are the conditions under which AI integration succeeds. A brigade that adopts AI without these in place is worse off than one that does not adopt at all. Governance first. Tools second.
Fourth, assume connectivity will fail. Identify which AI tools function in degraded conditions and which do not, and rehearse planning processes without them. A staff that cannot plan without its tools has not adopted AI. It has become dependent on them.
The institutional Army must move in parallel. Professional military education, from the Command and General Staff College to the captains career courses and precommand courses, should treat AI governance as core curriculum rather than elective material, covering prompt discipline, output validation, and the specific failure modes identified above. Field Manual 5-0, Planning and Orders Production, will eventually require a chapter on AI-enabled staff processes. The Army should not wait through a decade of emergent practice before codifying lessons the operating force is already generating.
What we observed at Fort Polk was not artificial command, and it was not the replacement of tactical judgment. It was the preservation of time for tactical judgment. AI belongs inside the military decision-making process if—and only if—it strengthens the commander’s ability to command and the staff’s ability to support command. When properly bounded, it does precisely that.
Captain Chris Lajeunesse is a US Army infantry officer serving as a brigade operations planner and innovation officer for the 3rd Mobile Brigade, 101st Airborne Division. Captain Lajeunesse is a West Point graduate and Ranger-qualified officer. He will take command of a rifle company this summer in the 2nd Battalion of the 506th Infantry Regiment.
Captain Joseph Palazini is a US Army engineer officer serving as the deputy operations officer for the 3rd Mobile Brigade, 101st Airborne Division. Captain Palazini is a graduate of the University of New Hampshire and both a Ranger- and Sapper-qualified officer. He has served as both an engineer and brigade headquarters company commander.
Captain James Tulskie is a US Army engineer officer serving as the engineer officer and chief of protection for the 3rd Mobile Brigade, 101st Airborne Division. Captain Tulskie is a graduate of the Virginia Military Institute and both a Ranger- and Sapper-qualified officer. He will take command of the brigade’s headquarters company this summer.
The authors participated in their brigade’s recent rotation at the Joint Readiness Training Center, where they integrated artificial intelligence tools into brigade-level execution of the military decision-making process.
The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
Image credit: Master Sgt. Whitney Hughes, US Army

