Should the United States buy more destroyers, or would the money be better spent developing bases, munitions, or software? These are the kinds of questions the Department of Defense confronts every year when building its budget. Yet the Pentagon’s process still lacks a clear way to answer the most fundamental question: Which force structure delivers the greatest strategic effect for every dollar spent? And just as important: How should the Pentagon work with industry to generate the concepts, data, and prototypes needed to answer that question?
This challenge is not the result of neglect but of how the U.S. military has traditionally measured performance. The armed services tend to compare platforms by attributes (e.g., range, payload, speed, stealth). These comparisons matter tactically, but they rarely capture whether a capability meaningfully contributes to campaign-level outcomes or national objectives. This is not because defense planners are incapable, but because campaign-level interdependencies are too complex to evaluate without robust modeling. The result is a force design process optimized for tactical performance rather than strategic return on investment.
As strategic readiness debates shift the focus from unit-level metrics towards long-term strategic objectives, cost-per-effect applies the same logic to force design. In theory, cost-per-effect should provide a unified way to link dollars and outcomes. In practice, however, the Department of Defense does not have an analytical framework capable of evaluating force structures at scale. High-fidelity campaign models can examine a handful of options in exquisite detail, but they are too slow and rigid to explore the vast decision space real force design demands. What’s missing is a standardized, tiered cost-per-effect methodology — one that integrates fast exploratory models with detailed simulations and establishes new mechanisms for working with industry to generate credible, testable alternatives earlier in the process.
Why Cost and Effect Are Inherently Difficult to Define
At its core, cost-per-effect asks: How much money does it take to achieve a given outcome? In practice, neither side of that equation is straightforward. “Cost” can mean many things, from procurement and acquisition dollars to the full life-cycle expense of operating, maintaining, and sustaining a capability. Even small analytical choices — how to frame indirect costs, whether to include replacement costs for systems no longer in production, or how to treat sunk costs — can dramatically alter the result.
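As a purely notional sketch of how much the cost basis matters, the snippet below (every figure is invented, not real program data) computes dollars per target struck for two hypothetical systems under procurement-only and life-cycle accounting. The ranking flips depending on which definition is used.

```python
# Notional illustration only: every figure below is invented, not real program data.
def cost_per_effect(procurement, annual_ops, years, targets_struck, life_cycle=True):
    """Dollars per target struck under two different cost definitions."""
    cost = procurement + (annual_ops * years if life_cycle else 0)
    return cost / targets_struck

# System A is cheap to buy but expensive to operate; System B is the reverse.
for name, buy, ops in [("A", 50e6, 8e6), ("B", 90e6, 2e6)]:
    acquisition_only = cost_per_effect(buy, ops, 20, 400, life_cycle=False)
    full_life_cycle = cost_per_effect(buy, ops, 20, 400, life_cycle=True)
    print(f"System {name}: ${acquisition_only:,.0f} vs ${full_life_cycle:,.0f} per target")
```

On procurement dollars alone, System A looks cheaper; once 20 years of operating costs are counted, System B does. Neither answer is wrong, but they point to different buying decisions.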
Defining “effect” is just as challenging. Effects range from a discrete tactical outcome to the cumulative results of a multi-domain campaign. Because broader effects are harder to measure, discussions often collapse to the lowest level: one-on-one comparisons between systems. While these comparisons make performance easier to quantify, they say little about what determines success in combat. A missile that is 10 percent more lethal in a tactical fight might yield no meaningful improvement operationally if the real constraint is an aircraft’s sensors rather than the missile itself.
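A back-of-the-envelope kill-chain calculation makes the point. The probabilities below are invented solely to illustrate the bottleneck, not drawn from any real system.

```python
# Purely illustrative probabilities, invented to show how a bottleneck caps the payoff.
def expected_targets_destroyed(sorties, p_detect, p_hit):
    # A target must be detected before missile lethality matters at all.
    return sorties * p_detect * p_hit

baseline       = expected_targets_destroyed(100, p_detect=0.3, p_hit=0.80)  # 24.0
better_missile = expected_targets_destroyed(100, p_detect=0.3, p_hit=0.88)  # 26.4 (10% more lethal missile)
better_sensor  = expected_targets_destroyed(100, p_detect=0.5, p_hit=0.80)  # 40.0 (fixing the real constraint)
```

In this toy case, the “better” missile buys a couple of extra kills, while improving the sensor that actually constrains the kill chain buys sixteen.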
For this reason, meaningful cost-per-effect analysis must be conducted at the campaign level, where the full set of operational interactions and constraints determines outcomes. Even then, cost-per-effect is not a panacea. Its purpose is to identify which force structures are most effective in a future conflict, not to dictate how those forces are built, sustained, or employed over time. Moreover, as effects become more ambiguous — such as deterring adversaries or shaping political behavior — the ability to measure them degrades rapidly. How does one compare an aircraft to a carrier in terms of deterrence? In these cases, quantitative analysis must be complemented by human judgment. Fortunately, because major defense acquisitions are overwhelmingly focused on warfighting capabilities, cost-per-effect remains a powerful and relevant analytic tool.
A Simple Example: Stealth vs. Non-Stealth
The 1991 Gulf War offers a clear illustration. On paper, legacy fighters looked far cheaper than the F-117 stealth aircraft. But cost-per-effect turns that thinking on its head. It took a strike package of 41 non-stealth aircraft to hit eight targets, whereas 20 F-117 stealth fighters were able to strike 28 targets in the same amount of time. Compared aircraft to aircraft, the non-stealth jets were cheaper. But at the operational level, those non-stealth aircraft required escorts, jammers, and other suppression platforms to survive the mission. Once those additional costs were added up, the F-117, while more expensive per airframe, delivered the better cost-per-effect.
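The sortie and target counts above are enough to sketch the package-level arithmetic. In the illustration below, the per-sortie costs are hypothetical placeholders (the comparison cites no dollar figures), but even if a stealth sortie costs twice as much, the package-level cost per target favors the F-117.

```python
# Sortie and target counts come from the Gulf War example above; the per-sortie
# costs are hypothetical placeholders, included only to show package-level accounting.
non_stealth = {"sorties": 41, "targets": 8}    # full package: strikers, escorts, jammers
stealth     = {"sorties": 20, "targets": 28}   # F-117s flying without support packages

COST_PER_SORTIE = {"non_stealth": 60_000, "stealth": 120_000}  # assumed, not historical data

def cost_per_target(package, cost_per_sortie):
    return package["sorties"] * cost_per_sortie / package["targets"]

print(cost_per_target(non_stealth, COST_PER_SORTIE["non_stealth"]))  # ~307,500 per target
print(cost_per_target(stealth, COST_PER_SORTIE["stealth"]))          # ~85,714 per target
```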
But this was just one mission. A single operation can have hundreds of missions, and a campaign can have thousands, all unfolding simultaneously across air, land, sea, space, and cyber. While that one strike mission was easy enough to compare, the number of interdependencies at the campaign level explodes to the point where no human can weigh all the interactions properly. These questions quickly outstrip human capabilities and demand sophisticated modeling.
The Pentagon Needs a Tiered Cost-Per-Effect Modeling Architecture
Every few years, the Department of Defense develops Defense Planning Scenarios to anticipate future conflicts and guide force development. One purpose of these scenarios is to evaluate which force structures are most likely to succeed. To do this effectively, the Office of the Secretary of Defense and the Joint Staff — working with service futures and analytical organizations — must be able to explore an enormous range of force combinations quickly and systematically.
But today’s tools sit at two extremes: high-fidelity, time-intensive models that can evaluate only a handful of cases, and lower-fidelity models that don’t capture campaign-level interdependencies. What’s missing is a tiered system — a modeling architecture that moves from fast exploratory tools to high-confidence validation.
This architecture should include three complementary layers:
Tier 1: Rapid Exploration
Tier 1 is designed to search the force-design trade space. Planning organizations — either through their internal analytic shops or in partnership with organizations like the Office of Cost Assessment and Program Evaluation or the Joint Staff’s Force Structure Directorate — would use fast exploratory models to examine millions of force combinations. The Air Force’s Combat Forces Assessment Model is one such tool. Instead of simulating every engagement, it uses optimization techniques to sweep millions of possible force structures, allowing analysts to identify promising force mixes. The output of Tier 1 is not a decision, but a ranked set of feasible force structures.
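To make the idea concrete, here is a minimal sketch of what Tier 1 exploration could look like, with invented unit costs, a notional additive effect score, and a simple brute-force search. It is not a description of how the Combat Forces Assessment Model actually works; a real tool would replace the scoring function with campaign dynamics and use far smarter optimization than enumeration.

```python
from itertools import product

# Hypothetical unit costs (in billions) and notional per-unit campaign-effect scores.
UNITS = {
    "destroyer":    (2.0, 5),
    "fighter_sqdn": (1.5, 4),
    "missile_bty":  (0.5, 2),
    "tanker_sqdn":  (1.0, 3),
}
BUDGET = 20.0  # billions, assumed

candidates = []
for counts in product(range(11), repeat=len(UNITS)):      # brute-force the trade space
    mix = dict(zip(UNITS, counts))
    cost = sum(UNITS[u][0] * n for u, n in mix.items())
    effect = sum(UNITS[u][1] * n for u, n in mix.items())
    if 0 < cost <= BUDGET:
        candidates.append((effect / cost, mix))           # notional effect per dollar

# Rank the feasible mixes and hand the best ones to Tier 2 for high-fidelity validation.
ranked = sorted(candidates, key=lambda c: c[0], reverse=True)[:10]
```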
Tier 2: Validation
Once a few promising force structures have been identified, analytic organizations like the Office of Cost Assessment and Program Evaluation need to test them in the most realistic modeling environment the U.S. military has. The gold standard for campaign analysis is the Synthetic Theater Operations Research Model, a high-fidelity campaign simulation — essentially a massive computer model of a conflict, with detailed interactions of forces across domains. But its scenarios are enormously time-consuming to build and run. That constraint means the model must be used sparingly — to validate the most promising options identified by faster exploratory models.
Tier 3: Force Structure Innovation
Even Tiers 1 and 2 cannot search the entire decision space of modern war. This third tier focuses on expanding that frontier. The Defense Advanced Research Projects Agency’s Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency is an early example. The idea is to field artificial intelligence assistants as battle managers that can help human commanders evaluate courses of action in a dynamic, complex battlespace — something far beyond the ability of today’s models. These tools should be pushed to planning organizations across the military, where they can become sandboxes for experimentation at every level of war.
Together, these three tiers form a coherent, scalable cost-per-effect ecosystem. But when insights diverge across tiers, resolution must rest with senior joint decision-makers, whose job is to be above parochial interests. This framework is designed to inform judgment, not replace it.
What This Means for Industry
A tiered cost-per-effect system does not just change how the Pentagon analyzes force design — it changes how industry participates in shaping it. Rather than narrowing innovation, this approach gives companies clearer problems to solve, faster feedback on their ideas, and a more predictable path from concept to acquisition.
First, cost-per-effect pushes the Pentagon toward problem-centric demand signals rather than platform-centric requirements. Today, companies are often asked to build a preselected platform: a new fighter, a new destroyer, or a new missile. Cost-per-effect reverses that logic. When the Pentagon defines the effect it needs — survivable long-range fires, resilient basing, contested logistics — companies gain the freedom to propose a wider range of solutions. Materiel and non-materiel ideas can be tested early in Tier 1 exploratory models, giving industry clearer insight into what problems matter most and how their concepts might contribute at the campaign level.
Second, cost-per-effect gives industry faster feedback by creating a demand for its input and venues for early iteration. Campaign analysis only succeeds when companies provide performance data for the models to use. In return, industry can see how its systems perform in realistic campaign scenarios, eliminating guesswork for defense analysts and reducing uncertainty for firms. Through venues similar to hackathons and wargames, the Pentagon can let companies test concepts without compromising proprietary data. Analysts can work with performance ranges rather than specifications, preserving competitive integrity while identifying which attributes matter most.
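One way to picture analysis on performance ranges is a simple Monte Carlo sweep, sketched below with invented performance bands and a stand-in outcome function: sample from the bands a vendor is willing to share, then see which attribute moves the notional outcome most.

```python
import random

# Invented performance bands a vendor might share instead of exact specifications.
RANGES = {"range_nm": (400, 800), "p_kill": (0.5, 0.9), "sortie_rate": (1.0, 3.0)}

def notional_outcome(range_nm, p_kill, sortie_rate):
    # Stand-in for a campaign model: more reach, lethality, and tempo mean more effect.
    return (range_nm / 800) * p_kill * sortie_rate * 100

draws = [{k: random.uniform(*v) for k, v in RANGES.items()} for _ in range(10_000)]
results = [(draw, notional_outcome(**draw)) for draw in draws]

# Crude sensitivity check: compare outcomes in the bottom and top thirds of each attribute.
for attr in RANGES:
    ordered = sorted(results, key=lambda r: r[0][attr])
    low  = sum(o for _, o in ordered[:3333]) / 3333
    high = sum(o for _, o in ordered[-3333:]) / 3333
    print(f"{attr}: bottom-third avg effect {low:.1f}, top-third avg effect {high:.1f}")
```

The attribute with the widest gap between its bottom-third and top-third averages is the one worth refining first, which is exactly the kind of signal industry rarely gets today.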
Finally, cost-per-effect creates a clearer analytic bridge to acquisition. In many programs today, analytical insights remain siloed within assessment offices and struggle to directly influence program or budget choices. A cost-per-effect framework changes that by tying modeling results directly to requirements writing and budget development. For industry, this means a more transparent connection between demonstrated effect and funding decisions.
Why This Has Never Been Done
If modeling is so powerful, why hasn’t cost-per-effect become the backbone of force design? The short answer is that analysis inside the Pentagon is fragmented, and perfection often becomes the enemy of progress.
Even though the Department of Defense submits one unified budget, the underlying analysis is largely generated within separate service bureaucracies. Cost analysis happens in one set of offices, operational analysis in another, and programming decisions in a third, each using different models, assumptions, and incentives. These communities collaborate, but they rarely operate from a shared framework, which makes it difficult to link cost and effect into a single decision-making architecture.
A second barrier is cultural. All models are imperfect, but the fear of being wrong can discourage analysts from using fast-turn exploratory tools that are good enough to guide force design. Instead, analysts default to exquisite, highly detailed models that are too slow to inform early decisions. Senior leaders routinely speak about the need to take intelligent risk, and the same mindset must apply to analysis. The relevant question for cost-per-effect is not whether a model is perfect but whether it is better than the status quo.
Designing the Future Force Through Cost-Per-Effect
If cost-per-effect is to guide force design, the analysis needs to be done at the strategic level — not the tactical one. Otherwise, services risk buying weapons that look efficient in isolation but don’t contribute meaningfully to victory. A tiered modeling framework allows the Pentagon to compare billions of force combinations and identify innovative options that traditional methods might miss.
Just as important, a cost-per-effect approach brings industry into the design process in a more focused and constructive way. Clearer problem statements and early integration would allow companies to test ideas against real operational challenges rather than pre-defined platform-centric requirements. Instead of industry guessing what the Pentagon wants, cost-per-effect provides a continuous analytic signal of what matters.
Every year, each service crafts its program objective memorandum — a five-year spending plan that guides how billions in taxpayer dollars are allocated. A robust cost-per-effect methodology would complement this process by providing an analytically grounded force design against which service proposals could be evaluated. In practice, this likely requires vesting clear ownership of cost-per-effect analysis at the level of the Office of the Secretary of Defense, with the authority to set common assumptions, adjudicate across services, and ensure that analytic results meaningfully inform requirements and budget decisions. No longer would the services be able to fund projects that do not advance national objectives. With such a framework, the Department of Defense could explain to Congress not just what it wants to buy, but why those investments deliver the best return in military capability. It would also help identify programs whose costs outweigh their contribution, allowing resources to be redirected toward more effective options.
In the end, cost-per-effect is a concept that sounds simple but is extremely difficult to do properly. Without faster, standardized tools to calculate cost-per-effect, the Pentagon risks making the same old mistakes — judging weapons by their price tags rather than their value in war. The payoff for getting this right is enormous: a force that is not only lethal, but truly cost-effective in an era of constrained budgets and growing threats.
Erik Schuh is an Air Force officer serving as an operations research analyst. The views expressed are those of the author and do not reflect the official guidance or position of the U.S. government, the Department of Defense, the U.S. Air Force, or the U.S. Space Force.
**Please note, as a matter of house style War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.
Image: Navy Petty Officer 1st Class Thomas Gooley via the Defense Department.

