
News | Dec. 12, 2024

OPINION: 12 battle-tested lessons from Army acquisition

By Col. Matthew Paul, Project Manager, Integrated Personnel and Pay System – Army

The following content is a single person’s perspective and not intended to replace any official guidance. No warranties, either expressed or implied, as to the merchantability, authoritativeness, correctness or even entertainment, are claimed.

If you’ve been in the Army Acquisition Corps for more than a minute, you’ve accumulated battle scars.  

My acquisition adventure began in 2009. I’ve supported and led a wide range of acquisition programs, including software-intensive applications and major defense systems.  

My body is covered in acquisition battle scars from lessons I learned the hard way. The following lessons learned guide the way I approach acquisition.  

1. Competition is better than no competition. I've been part of programs that embraced competition and programs that didn’t. Programs that embrace competition are more successful. Competition tends to drive innovation, risk reduction and cost containment. 

2. Requirements are overrated. I view requirements as a starting point. Typically, requirements are written by one or two people who make a lot of assumptions. Many of those assumptions are not accurate and become invalid six months down the road. 

I’ve been part of programs that organize the entire acquisition strategy and contracts around specific requirements, and we didn't have the ability to pivot as Army priorities changed. And the Army’s priorities change every six months. It’s hard to turn an aircraft carrier around if you are on a fixed path and can’t adjust your requirements to keep pace with the Army’s rate of change.

Programs should not organize around requirements; they should organize around customer value. It’s more important to give Soldiers what they want than cling to requirements written six months ago.

The Army has pivoted to a new norm. The Army Software Directive, a policy memo released in March, calls for high-level capability needs statements that can be adjusted instead of detailed, prescriptive requirements that are infrequently reassessed.  

3. Make the test community part of the team. Testing is important. Good acquisition programs embrace a culture of testing. Our test community needs to be part of the sausage-making and part of the team. Programs that undervalue testing suffer the consequences. Programs that keep the test community at arm's length suffer the consequences. Programs that wait until their final graduation test to engage the end user suffer the consequences.  

4. Direct Soldier feedback is important. Testing early and often — and with the end user, the Soldier — has tremendous value. Testing should be routine, continuous and automated. 
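As a minimal sketch of what "routine, continuous and automated" looks like in practice, here is an illustrative automated check of the kind a program might run on every code change. The function, the pay table and its rules are entirely made up for this example and are not drawn from any real Army system:

```python
# Hypothetical example only: a toy pay-lookup function and an automated
# regression test. In a real program, checks like this run in a CI pipeline
# on every change, instead of waiting for a final graduation test.

def monthly_base_pay(grade: str, years_of_service: int) -> int:
    """Toy pay lookup used only to demonstrate automated testing."""
    table = {"E-4": 2600, "E-5": 2900, "O-3": 5500}
    pay = table[grade]
    # Illustrative longevity bump: +2% per full year of service.
    return round(pay * (1 + 0.02 * years_of_service))

def test_known_values():
    # Pin down known-good answers so any regression fails loudly and early.
    assert monthly_base_pay("E-5", 0) == 2900
    assert monthly_base_pay("E-5", 5) == round(2900 * 1.10)

test_known_values()
print("all checks passed")
```

The point is not the arithmetic; it is that the check runs by itself, every time, with no human in the loop.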

5. System success in the field with Soldiers breeds program success. A system’s success in the field is how we should measure program success. Some programs have a warped sense of success. Staffing documents and spending money are critical, and we must do those things. But that’s not how success should be measured. Success should be Soldier-centric. The Soldier gets 51% of the vote in determining whether a program is successful. Are we delivering value or not?  

6. Most lackluster programs measure the wrong things or nothing at all. Measuring the wrong things is dangerous because what gets measured gets done.  

It’s important to have program metrics connected to customer value.  

The best measurements are forward-looking, leading indicators aligned to customer value. We want to identify ways to keep pace with our customer or even get ahead of our customer. If we are looking back, our customer is always going to be ahead of us, and the tail is always going to wag the dog.   

7. Most program cost and schedule problems are caused by poor estimates. When a program overruns its cost or schedule, it's usually not the result of a real technical problem; it's the result of poor estimates. Real technical issues, when they do occur, are usually caused by poor initial planning, flawed design or bad requirements. 

In a new program there's always a lot of optimism, and that optimism backs your cost estimates into an unrealistic scenario. There are always cognitive biases: you have your blinders on and are not seeing the risks staring you in the face. Bad assumptions drive bad estimates, and bad estimates result in cost and schedule overruns. 

When there is a real technical problem, you can almost always trace it back to a bad requirement. For example, the requirement may not have been grounded in reality or technical maturity. We let the requirement stand and then fought like hell during program execution to try to meet it. 

8. Integration is hard. I've been part of programs that make bad assumptions about integration. These programs assumed integration would not be hard and didn’t provide sufficient resources to accomplish the mission. Integration is usually the hardest part of a program.  

9. Data sharing and interoperability are hard. Programmers frequently don't invest time upfront plotting out a good architecture and data-sharing design. Agile does not get you off the hook from evaluating architecture, design and dependencies upfront. Investing that time in those big-picture items at the beginning will pay dividends and enable you to go faster later. 

10. Technical debt is everywhere and hard to overcome. Technical debt is rampant and a big, systemic problem. Every minute that a program lives and breathes, it’s accumulating technical debt. Experts — people much smarter than me — advise organizations to invest at least 20% of their resources in mitigating or paying down their technical debt. And if they fail to do so, at some point down the road, they become encumbered by tech debt and get strangled. We can’t just pay interest on our technical debt. We must always be thinking about our technical debt and ways to mitigate it.  
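The 20% guidance above comes down to simple capacity arithmetic. The sketch below illustrates it; the team size, sprint hours and function names are made up for the example:

```python
# Illustrative capacity split: reserve at least 20% of a team's sprint
# capacity for paying down technical debt. All numbers are hypothetical.

def capacity_split(engineers: int, hours_per_engineer: int,
                   debt_share: float = 0.20):
    """Return (debt_hours, feature_hours) for one sprint."""
    total = engineers * hours_per_engineer
    debt_hours = round(total * debt_share)     # the 20% paydown reserve
    return debt_hours, total - debt_hours

debt, features = capacity_split(engineers=10, hours_per_engineer=60)
print(f"debt paydown: {debt} h, new features: {features} h")
# For 10 engineers at 60 h each: 120 h on debt, 480 h on features.
```

Skipping the reserve feels free in any one sprint; the compounding cost shows up later, which is why the paydown has to be budgeted explicitly rather than left to leftover time.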

11. Strategic communication is important. Communication is incredibly important. I've been part of programs that did really well in the field and earned positive Soldier reviews, but the messaging was poor, nonexistent or not tailored to the right audience. How we communicate about our program should be based on who we are talking to — whether it’s Congress, our functional teammates or the Army staff. We need to have a good plan. Programs that aren’t messaged well end up paying the consequences, usually during the budget season.  

12. It’s impossible to deploy a new capability to the field without bending or breaking a rule. This is a little bit controversial. I have a reputation for being a rule breaker. I believe it’s impossible to deploy a new capability that delivers a lot of customer value on cost and on schedule without breaking a rule or two. There are many rules, and some of them conflict with each other. Programs that are obsessive-compulsive about following every rule to the letter get tied up in knots and end up never delivering anything. 

Challenge the status quo. If a rule doesn’t make sense, raise a hand to bring it to light and work towards corrective action. 

Recent acquisition reforms are game changers.
The Army Software Directive — championed by Army leaders, including Honorable Gabe Camarillo, Young Bang and Jennifer Swanson — has empowered program managers to embrace modern software development practices. 

I’ve seen more software acquisition reform in the Army in the last 18 months than the past 18 years. It’s truly an exciting time to be in this business.

SOURCE: https://www.eis.army.mil/12-battle-tested-lessons