Systems Engineering Effectiveness Measures
Barry Boehm, Director of Research, Stevens-USC Systems Engineering Research Center, Dan Ingold
Objectives: Early Identification of SE-Related Program Risks
DoD programs need effective systems engineering (SE) to succeed.
DoD program managers need early warning of any risks to achieving effective SE.
This SERC project has synthesized analyses of DoD SE effectiveness risk sources into a lean framework and toolset for early identification of SE-related program risks.
Three important points need to be made about these risks.
- They are generally not indicators of "bad SE." Although SE can be done badly, more often the risks are consequences of inadequate program funding (SE is the first victim of an underbudgeted program), of misguided contract provisions (when a program manager is faced with the choice between allocating limited SE resources toward producing contract-incentivized functional specifications vs. addressing key performance parameter risks, the path of least resistance is to obey the contract), or of management temptations to show early progress on the easy parts while deferring the hard parts till later.
- Analyses have shown that unaddressed risk generally leads to serious budget and schedule overruns.
- Risks are not necessarily bad. If an early capability is needed, and the risky solution has been shown to be superior to the alternatives, accepting and focusing on mitigating the risk is generally better than waiting for a better alternative to show up.
Unlike traditional schedule-based and event-based reviews, the SERC SE EM technology enables sponsors and performers to agree on the nature and use of more effective evidence-based reviews. These reviews enable early detection of missing SE capabilities or personnel competencies with respect to a framework of Goals, Critical Success Factors (CSFs), and Questions, which the EM task derived from the leading DoD early-SE CSF analyses. The EM tools enable risk-based prioritization of corrective actions: a shortfall in evidence for a question is an early uncertainty which, when combined with the relative system impact of a negative answer to that question, translates into a degree of risk that must be managed to avoid system overruns and incomplete deliveries.
(USC-CSSE-2009-518, p. 1)
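The prioritization step described above follows the standard risk-exposure pattern: treat the evidence shortfall for each question as a probability proxy, multiply by the system impact of a negative answer, and rank corrective actions by the product. The sketch below illustrates that calculation; the question text, 0-to-1 scoring scales, and function names are illustrative assumptions, not the actual SEPRT/SECRT tool content.

```python
from dataclasses import dataclass

@dataclass
class QuestionAssessment:
    question: str
    evidence_shortfall: float  # 0.0 = full evidence, 1.0 = no evidence (assumed scale)
    system_impact: float       # 0.0 = negligible, 1.0 = critical (assumed scale)

    @property
    def risk_exposure(self) -> float:
        # Risk exposure = uncertainty (evidence shortfall) x impact of a negative answer
        return self.evidence_shortfall * self.system_impact

def prioritize(assessments):
    """Order assessments by descending risk exposure, highest-priority first."""
    return sorted(assessments, key=lambda a: a.risk_exposure, reverse=True)

# Hypothetical assessments for three illustrative questions
assessments = [
    QuestionAssessment("KPPs validated with stakeholders?", 0.7, 0.9),
    QuestionAssessment("External interfaces defined?", 0.4, 0.6),
    QuestionAssessment("Staffing plan covers key SE skills?", 0.9, 0.5),
]

for a in prioritize(assessments):
    print(f"{a.risk_exposure:.2f}  {a.question}")
```

With these illustrative scores, the KPP-validation shortfall (0.63) outranks the staffing gap (0.45) and the interface question (0.24), so corrective resources would go to it first.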
USC-CSSE-2010-501 - Barry Boehm, Dan Ingold, Kathleen Dangle, Rich Turner, Paul Componation, "Early Identification of SE-Related Program Risks," CSER 2010 (pdf)
USC-CSSE-2009-518 - Barry Boehm, Dan Ingold, A. Winsor Brown, Jo Ann Lane, George Friedman, Kathleen Dangle, Linda Esker, Forrest Shull, Rich Turner, Jon Wade, Mark Weitekamp, Paul Componation, Sue O'Brien, Dawn Sabados, Julie Fortune, Alfred Brown, "Early Identification of SE-Related Program Risks," SERC, September 30, 2009 (pdf)
SEPRT Tool (v2.4.1) (xls)
SECRT Tool (v2.4.1) (xls)
SERC EM Capability Pilot User's Guide (11/8/09) (doc)
Based on the Conclusions, we recommend a two-step approach for achieving a SE EM initial operational capability and transitioning it to a sustaining organization. Phase IIa is proposed to begin with research on three tasks. The first task would involve experimentation with domain extensions and larger-scale pilots. The second task would involve analyzing further completed successful and unsuccessful projects to test the hypothesis that a small set of critical success-failure factors could serve as top-level early warning indicators. The third task would involve extended analyses of the commonalities and variabilities between the SERC SE EM capabilities and the DAPS methodology and SADB results; this could strengthen both and enable them to be used in complementary ways.
Phase IIb would continue with incremental elaboration, experimentation, and refinement of the preferred approaches, and with coordination with complementary efforts. Candidate tasks would include adding EM tool top-risk summaries and suggestions for mitigating the identified risks, easing the creation of domain-specific extensions, creating further user's-guide and tutorial material, creating and populating a knowledge base of results, planning the transition of tool support and evolution to an appropriate organization such as DAU, and continuing to coordinate the tools' content with complementary initiatives such as the INCOSE Leading Indicators upgrade, the NDIA enterprise-oriented personnel competency initiative, and the SERC SE Body of Knowledge and Reference Curriculum RT.
(USC-CSSE-2009-518, pp. 41-42)
If you have any questions about Systems Engineering Effectiveness Measures, please contact Dan Ingold (firstname.lastname@example.org) or Barry Boehm (email@example.com).