"Just as the simplest and most natural of movements, walking, cannot be easily performed in water, so in war it is difficult for normal efforts to achieve even moderate results."
Clausewitz
A hypothesis is a model of how the world works that makes a prediction. If you believe that X implies Y, you can write this in logical notation as X → Y, or "if X then Y."
Assume X → Y = True
X.
Therefore Y.
The strength of a hypothesis (or any belief) is how accurately and specifically it predicts the future. If X → Y = True, and you observe indicator X, then you should be confident that you will also observe Y. If you have a high level of confidence in your hypothesis, then you can make plans to exploit future state Y when it occurs. If your hypothesis is correct, you will be better positioned to exploit Y because you will be ready to implement your plan while your competitors are still orienting themselves to the change in situation. This is true whether you are shorting stocks or mounting combat team attacks. This is the competitive form of making your beliefs pay rent in anticipated experiences.
"War" writes British Army researcher and doctrine writer Jim Storr, "is adversarial, highly dynamic, complex, and lethal." We make war to change the world, or at least, to change the future state of the world as it relates to us. A desired future state of the world is called an “end state".
If we are prepared to fight, it is because we assume that our efforts to achieve the desired end state will be opposed - with force - by other people, hence, war is adversarial. In order for us to achieve the desired end state despite the forceful efforts of our adversary, we must also be prepared to exert force against them. This can be done by pre-empting their action and countering their reaction.
Our aim is to create the right conditions, a change in the situation (the status of and relationships between friendly forces, enemy forces, and the environment) which can be exploited to achieve our desired end state. This dance of action-reaction-counter-exploitation, when performed by organized groups who can coordinate the simultaneous and mutually-supporting activities of many sub-components, is what makes war "highly dynamic". Trying to do all of this under conditions of uncertainty makes war "complex". The use of force makes it lethal.
In Army-speak, hypotheses are called plans. Operations, roughly speaking, are the actions taken to carry out those plans. The real (not predicted) outcomes of those actions are called effects. Plans will succeed if they correctly predict which actions will cause effects that can be exploited to achieve the end state. Plans that fail to predict which actions will cause exploitable effects must either be adapted faster than the adversary can react or they will fail, perhaps terminally.
When predictions fail, which they inevitably do, and assuming the failure wasn't terminal, we change or update our hypothesis and carry on. This continues until we either achieve our desired end state or one of our predictions fails with terminal results i.e. we lose. Quitting the game may or may not be an option.
At the level of the individual combatant, every failure of prediction is potentially lethal. There is a range of possible outcomes when individual combatants fail to predict what their next action ought to be, from near-misses to maiming to total obliteration. Sometimes a combatant does everything right and still ends up dead, because probability had them in the wrong place at the wrong time. Even so, it's accurate enough to say that in an adversarial, highly dynamic, complex, and lethal environment, when n = 1, every risk is existential risk.
Imagine that you are a Russian soldier in the spring of 2022. You forgot to pack baby wipes and hand sanitizer, or maybe these hygiene items simply weren't available to you. As a result, you get sick at the front. You risk squatting in broad daylight to avoid soiling yourself. A UA drone spots you and drops a grenade. You die with your pants around your ankles covered in shit. This is a hypothetical scenario based on a real event (the targeted individual survived, whether or not this is a good thing is left as an exercise to the reader).
At the sharp end, a failure of prediction is known as a casualty. For those of us who work in the organs of the Army responsible for doctrine and training, a failure of prediction is known as Tuesday.
II
The point of this post isn't to take cheap shots at the Army's managerial class. Please understand that I am making an observation, not a moral judgement (that comes later).
Plans will fail, so it's important to wrap our heads around this and mitigate the risk of terminal failure. “Protection against a decisive blow,” warned Aleksandr Svechin, “is the first rule of any conflict.” How do plans fail? An infinite number of ways. It would be impossible to list them all. Now, asking why do plans fail? That's a better question.
We've already established that plans fail when they do not predict which actions will cause effects that can be exploited to achieve the desired end state. Therefore, the failure of a plan is a failure of prediction. The problem is that humans are awful at predicting outcomes, or more specifically, humans are awful at predicting second- and third-order outcomes which will have significant impact on the most carefully laid plans.
It’s worth pausing to consider how much time and effort the Canadian Army invests into Professional Military Education (PME) specifically focused on planning. We have a manual dedicated to the subject - B-GL-335-001/FP-001 Decision-Making and Planning at the Tactical Level - which provides doctrinal guidance on, well, exactly what it says in the title. For General Service Officers, i.e. not professional specialists like doctors and lawyers, the Army Tactical Operations Course (ATOC) is primarily dedicated to planning tactical tasks for sub-units (companies, squadrons, or combat teams). The Army Operations Course (AOC) repeats this at the unit and formation level and consumes six months of your life which you'll never get back.
This brings us back to the question of why the "best laid plans" still fail. Daniel Kahneman and his longtime collaborator Amos Tversky made significant contributions to behavioral economics, work for which Kahneman received the Nobel Prize and which he summarized for laypersons in his bestseller Thinking, Fast and Slow. Kahneman describes a common bug in the minds of planners, what he calls the "inside view". The inside view is focused on specific circumstances and relies on evidence drawn from personal experience. Planners stuck in the inside view make predictions based on the information that is directly in front of them, without referring to similar recorded cases from outside their personal experience, i.e. they don't take an outside view of the problem. As a result, planners fall into the trap of failing to account for the unknown-unknowns and in doing so, situate their estimates in the best-case scenario.
Recounting his personal brush with the inside view while heading a textbook-writing project, Kahneman recalls, "There was no way for us to foresee, that day, the succession of events that would cause the project to drag out for so long. The divorces, the illnesses, the crises of coordination with bureaucracies that delayed the work could not be anticipated."
What Kahneman is describing is known to military theorists as “friction”, after Clausewitz. To normal people with healthy social lives, friction is known as “shit happens”. A retired legend of the Royal Canadian Armoured Corps, Colonel Charles S. Oliviero, provides the clearest description of friction that I know of in his primer Praxis Tacticum:
"[Friction] is the resistance that living in the real world presents, the innumerable minor and major obstacles that rise up to have an impact upon the plans and intentions of the commander. Unlike what some may believe, it is not Murhpy's Law. It is the unexpected flat tire, the sudden deluge of rain that reduces visibility, the radio that suddenly stops working and the thousands of other influences that impact our actions. These are only the physical aspect of the term. There is also the mental component of friction. Thus it also refers to the unexpected forgetfulness of a subordinate, the mental fatigue of the commander who has gone too long without sleep, the slight variations of understanding that each subordinate may have of the commander's intent and, again, the thousands of other minor and major influences that affect us all."
More concisely, F = P(Shit Happens) x Time, where F is Friction and P is probability. The longer the project, the more likely it is that something unforeseen, an unknown unknown, will occur. Planners who base their estimates on best-case scenarios without checking their estimate against similar reference cases commit what Kahneman and Tversky call the planning fallacy (the planner's fallacy, as I'll call it here). It's worth noting that even people who know about the planner's fallacy will continue to commit it unless checked, including Kahneman himself in the example above. This explains why no matter how much brain-sweat the brigade staff pour into an op order, H-hour always gets pushed to the right.
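To put rough numbers on that intuition (my own illustration, not anything from Kahneman, Tversky, or Storr): if each week of a project carries even a small, independent chance that something unforeseen happens, the probability of at least one friction event climbs quickly with duration. A minimal sketch, assuming a notional 5% weekly chance:

```python
# Hypothetical illustration of friction accumulating over time.
# The 5% weekly probability is an assumption for the sake of the example,
# not a figure from this post or from any doctrine.

def p_at_least_one_disruption(weekly_p: float, weeks: int) -> float:
    """Probability that at least one unforeseen event occurs over `weeks`
    independent weeks, each with probability `weekly_p` of a disruption."""
    return 1.0 - (1.0 - weekly_p) ** weeks

if __name__ == "__main__":
    weekly_p = 0.05
    for weeks in (4, 12, 26, 52):
        p = p_at_least_one_disruption(weekly_p, weeks)
        print(f"{weeks:>2} weeks: P(shit happens at least once) = {p:.0%}")
```

Run it out to a year and even that modest weekly rate makes friction a near-certainty, which is exactly the blind spot the inside view produces.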
The planner's fallacy accounts for friction and unknown-unknowns but not enemy action. Storr, true to form, provides two more fallacies common in military planning in Battlegroup! He doesn't name the two fallacies, so for ease of reference, I've given them nicknames in brackets:
“At their best, USAREUR formations clearly displayed the spirit of Patton in the 1980s. What seems to have gone wrong is that the Army attempted to capture that dynamism and elan by describing it, and trying to plan for it, in minute detail. Formations, and their units, displayed a growing tendency to over-plan. That tendency had been apparent at times during the Second World War. Logically it depended on two major fallacies:
- the first [intelligence fallacy] is that we can predict which course of action an enemy will adopt; in circumstances that we can only assume.
- the second [commander's fallacy] is that a detailed description of the actions intended to be conducted within an engagement contributes to their success.”
Recall Storr’s characterization of war: adversarial, highly dynamic, complex, and lethal. You can make whatever plan you want, but you don't control the environment or the enemy. If you think that this is self-evident and we know better by now, then you've never seen a battalion headquarters waste days (and reams of paper) going through the Operational Planning Process, leaving mere hours for subordinates to receive orders and go through their own planning cycles, just for the attack to go tits-up in the first five minutes.
So, we have three fallacies that explain why our plans fail as predictions:
1. Planner's fallacy: the failure to look beyond specific information and account for statistical patterns from general reference cases. The planner's fallacy results in plans being based on best-case scenario assumptions. Example: when drafting training plans for courses, writing boards are not allowed to budget extra periods for re-testing students, even for assessments which are known to have historically high failure rates. When some (or most) students inevitably fail an assessment, time has to be stolen from other parts of the course (or made up on nights and weekends) to conduct the re-tests. Now consider that most students are granted at least four attempts for any given assessment, and you have a recipe for frustration and burnout for everyone involved. If you want another example, take a look at literally every major CAF procurement project ever.
2. Intelligence fallacy: the belief that you can make accurate and specific predictions about what a thinking adversary will do next, based on assumptions you've made under conditions of uncertainty. Examples: Russia's invasion plan in February 2022. Another example would be whatever bullshit you wrote under Enemy Most Dangerous Course of Action in your ATOC estimate worksheet (the enemy's real MDCOA is the one you don't predict).
3. Commander's fallacy: the belief that by ordering things to be done, often in exhaustive detail, those things somehow become achievable if not inevitable. Examples: the Strengthening the Army Reserves (StAR) program and Operation HONOUR.
The effects of friction, enemy action, and the psychological factors affecting planners themselves combine to reduce the likelihood that any operation or project will be executed as planned. Because operation and project plans are divided into phases (or God-help-me, "spirals") based on what must be accomplished sequentially, it's possible to do some napkin math to illustrate the problem.
Let's say that you are planning a classic three-phase operation. Moving to the next phase is contingent on completing the current phase, because if you have the means to execute two phases simultaneously, then that's actually just one phase. So you need to finish Ph1 before you can move to Ph2, and you need to finish Ph2 before you can move to Ph3. Finishing Ph3 should result in your desired end state. Let's assume that the odds of success in each phase are 3:1. These odds may seem pretty good at first glance, but since Ph2-3 are contingent on prior success, the overall probability of success is:
P(Ph1, Ph2, Ph3) = 0.75 x 0.75 x 0.75 ≈ 0.42
The probability of achieving your end state as planned is 42%, or worse than a coin toss. Breaking up your phases into stages doesn't change this, since each stage is also contingent on prior success; you would just be falling for the commander's fallacy. This example is obviously oversimplified, but it illustrates how the probability of your initial plan becoming obsolete rapidly approaches 1 over time.
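For readers who want to poke at the napkin math themselves, here is a minimal sketch of the same arithmetic. The 0.75 per-phase figure is the 3:1 odds above; the longer phase counts are my own extrapolation to show how fast the compound probability decays.

```python
# The napkin math above: a sequential plan only survives intact if every phase
# succeeds, so the compound probability is the product of the per-phase odds.
from math import prod

def p_plan_survives(phase_probs):
    """Probability that every phase succeeds, assuming each phase is
    contingent on the one before it."""
    return prod(phase_probs)

if __name__ == "__main__":
    for n_phases in (3, 5, 8):
        p = p_plan_survives([0.75] * n_phases)
        print(f"{n_phases} phases at 0.75 each: {p:.0%} chance of executing as planned")
```

Three phases already put you below a coin toss; add a couple more and the odds of the plan surviving as written fall toward single digits.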
III
Plans are predictions about which actions will create the necessary conditions for success, but these predictions are being made under time constraints with limited information and limited processing capacity. As we've seen, even the most detailed plans made with the utmost care struggle to account for unknown variables and the fact that sometimes the enemy just doesn't do what you want them to do.
By now you may be asking yourself, if all of the above is true, then how do plans succeed? The answer is straightforward: the plan changes. This is the function of branch plans ("what will we do if this goes wrong?") and sequel plans ("what will we do if this goes right?"). The thing is, branch and sequel plans are still… plans. They're minor predictions which are created as accessories to the major prediction, usually by the same people, so they will suffer from the same systemic errors.
Per Eisenhower, planning is critical; you need to plan in order to set things in motion and synchronize the activity of all the sub-components in a military formation. But plans are also worthless, because they inevitably come apart over time. You must plan just enough to initiate, integrate, and synchronize activity, but not so much that you're wasting time and attention. Time is precious: it is the only resource that you can never get more of.
The keys here are simplicity, flexibility, and agility. Simple plans are easy for subordinates to understand and are easier to modify on the fly in response to a change in the situation. I challenge the reader to come up with a real-world example where a ground force commander pausing the advance to ponder "am I in Phase 3 Stage 2 or Phase 4 Stage 1?" ever resulted in mission success.
Early in my officer years, I was a sucker for the commander's fallacy. I thought that if I could just write the best, most doctrinal, most schoolhouse set of orders, then my platoon would surely grok whatever I was saying and achieve tactical enlightenment (whatever enlightenment is for a pack of 18-25 year olds who smoke, dip, and consume energy drinks all at the same time). A wise and right-thinking Newfie Warrant Officer set me straight: "Sir, the boys don't give a fuck about your perfect mission task verb. They need to know who they're gonna fuck up, where they're gonna fuck him up, and when they'll get to eat and sleep." Orders are written for your subordinates, not for your boss, which is something that I think gets forgotten when the closest you get to combat are validation exercises in Wainwright.
Flexibility describes how well you can adapt to change. The situation will change, either due to enemy action, friendly mistakes, or unforeseen environmental factors (including the actions of civilians and media). Identifying and seizing upon tactical opportunity is essential. This requires a clear-eyed understanding of the higher-level intent for the mission (which comes from the commander) and the freedom to make a decision on the spot (which comes from judicious use of control measures). I've heard this phrased as "following the orders you should have received". Flexibility and simplicity go hand-in-glove: if your intent is vague or requires a master's degree to interpret correctly, then it will be very difficult for your subordinates to carry it out. If you try to backseat drive by imposing too many control measures on your subordinates, they’ll be tied in knots and unable to act. There are material aspects to flexibility, such as keeping some forces in depth and maintaining force elements that are trained and equipped to tackle a wide variety of tasks so they don't need to wait on specialists showing up to fix the problem.
Agility is the ability to rapidly (re-)direct assets, attention, and combat power from low-value to high-value tasks. Tactical agility comes from reserves and fire support. Reserves are uncommitted forces which can be dynamically tasked by commanders to do such things as fill gaps in a defensive position or exploit a penetration. Fires are inherently agile; you can't outrun a bullet. Indirect fire, being less restricted by terrain and visibility than direct fire, is the most agile form of fire support. It's trivial for artillery to traverse and engage targets that are kilometers apart, from tens of kilometers away. Not so much for a squadron of tanks that has to drive cross-country, through potentially contested ground.
IV
By this point, the reader can credibly accuse me of being a hack for getting this far without mentioning Auftragstaktik, which is (mis-)translated in English as mission command. On a superficial level, mission command is gospel in the Canadian Army along with the rest of NATO, but you would be terribly naive to believe that we actually apply it as praxis in our plans and operations. In this respect, we could learn a thing or two from our allies. We over-plan everything and then act surprised when our plans don't survive first contact with reality.
All too often in professional settings, I hear some variation of "we will never need X because it's not how we ought to fight" and the corollary "we will always have X because that's how we ought to fight." You can replace "ought" with "want" in these statements; it's the same thing. Arguments along these lines are symptoms of diseased thinking. They are shallow appeals to the authority of doctrine which conceal a rotten sub-structure of assumptions and magical thinking. The highest principle of mission command is that you cannot predict outcomes from on high; it’s down to the commander on the ground to act based on what is there, what is really happening, to do the right thing in the midst of chaos.
Knowing why a particular piece of doctrine or policy was written is necessary for making sound judgements about how to apply it. Was it validated in the real world or is it a latticework of assumptions made by an apparatchik gazing into a crystal ball? Which do you think makes better predictions?
Note 081152ZFEB23: I have edited the title and format of this post in order to maintain a common formatting standard. The body text has been left unaltered.
Tremendous, Maples! Like Coglianese, I followed Bruce Gudmundsson's link to you and was exultant reading your post! So I ask you, why are we like this? Why do our organizations lean toward more process and more paper? I used to think it was a simple "peace-time armies" versus "war-time armies" problem, but after twenty years of actual operations, we (the USMC in my case) seem worse than ever at constructing useful plans and orders. Storr may be old and bitter (my assumption), but his thesis is correct: We over-plan, we over-predict, and we take too much time doing it!
Looking forward to you sharing more thoughts in the future; good stuff so far. I am here thanks to Bruce Gudmundsson giving you a shout out in TACTICAL NOTEBOOK. I actually own a full set of the original paper version of it from the mid 90s.
Two comments:
1) Are you familiar with Paul Fussell's use of the term "chickenshit" from his time as a rifle platoon leader in WW2? Tracks well with your previous post on "...Bullshit."
From Wikipedia: "According to Paul Fussell in his book Wartime, chickenshit in this sense has military roots: "Chickenshit refers to behavior that makes military life worse than it need be: petty harassment of the weak by the strong; open scrimmage for power and authority and prestige; sadism thinly disguised as necessary discipline; a constant 'paying off of old scores'; and insistence on the letter rather than the spirit of the ordinances ... Chickenshit is so called—instead of horse—or bull—or elephant shit—because it is small-minded and ignoble and takes the trivial seriously."
2) I am much less enamored by Jim Storr's work of late and find some of his conclusions disconnected from reality, showing a lack of experience at echelon. His first book, THE HUMAN FACE OF WAR, was quite good, but I found his most recent work, SOMETHING ROTTEN, to be chock-full of errors and woefully dated observations about the U.S. Army and the Bundeswehr (das Heer) in particular. I had intended to write a critique and send it to him, but the errors and/or poorly formed conclusions just kept accumulating.