University of Southern California
    
Center for Systems and Software Engineering



Technical Reports

USC-CSSE-2009-534

Ivo Krka, George Edwards, Yuriy Brun, Nenad Medvidovic, "From System Specifications to Component Behavioral Models," New Ideas and Emerging Results Track, 31st International Conference on Software Engineering (ICSE 2009) (pdf)

Early system specifications, such as use-case scenarios and properties, are rarely complete. Partial models of system-level behavior, derived from these specifications, have proven useful in early system analysis. We believe that the scope of possible analyses can be enhanced by component-level partial models. In this paper, we outline an algorithm for deriving a component-level Modal Transition System (MTS) from system-level scenario and property specifications. The generated MTSs capture the possible component implementations that (1) necessarily provide the behavior required by the scenarios, (2) restrict behavior forbidden by the properties, and (3) leave the behavior that is neither explicitly required nor forbidden as undefined. We discuss how these generated models can help discover system design flaws, support requirements elicitation, and help select off-the-shelf components.
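
As a concrete illustration of the kind of model the paper targets, the sketch below (in Python, using a hypothetical ATM example, not the paper's derivation algorithm) represents an MTS as a structure that distinguishes required ("must") transitions from merely possible ("may") ones, leaving all other behavior undefined.

    # Minimal illustration of a Modal Transition System (MTS): transitions are
    # either required ("must") or merely possible ("may"); unspecified behavior
    # is left undefined. Hypothetical example, not the paper's algorithm.
    class MTS:
        def __init__(self):
            self.must = {}   # (state, action) -> next state: behavior required by scenarios
            self.may = {}    # (state, action) -> next state: neither required nor forbidden

        def add_must(self, state, action, target):
            self.must[(state, action)] = target
            self.may[(state, action)] = target   # every required transition is also possible

        def add_may(self, state, action, target):
            self.may[(state, action)] = target

        def is_required(self, state, action):
            return (state, action) in self.must

        def is_possible(self, state, action):
            return (state, action) in self.may

    # Toy ATM component: "insertCard" is required; "printReceipt" is optional.
    atm = MTS()
    atm.add_must("idle", "insertCard", "authenticating")
    atm.add_may("dispensing", "printReceipt", "idle")
    print(atm.is_required("idle", "insertCard"))          # True
    print(atm.is_possible("dispensing", "printReceipt"))  # True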

Added December 15th, 2008


USC-CSSE-2009-533

Tim Menzies, Steve Williams, Oussama Elrawas, Barry Boehm, Jairus Hihn, "How to Avoid Drastic Software Process Change (using Stochastic Stability)," ICSE 2009, Vancouver, BC, May 16-24, 2009, pp. 540-550 (pdf)

Before performing drastic changes to a project, it is worthwhile to thoroughly explore the available options within the project's current structure. An alternative to drastic change is a set of internal changes that adjust current options within a software project. In this paper, we show that the effects of numerous internal changes can outweigh the effects of drastic changes. That is, the benefits of drastic change can often be achieved without disrupting a project.

The key to our technique is SEESAW, a novel stochastic stability tool that (a) considers a very large set of minor changes using stochastic sampling; and (b) carefully selects the right combination of effective minor changes.

Our results show that, using SEESAW, project managers have more project-improvement options than they currently realize. This result should be welcome news to managers struggling to maintain control and continuity over their projects in the face of multiple demands for drastic change.
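
SEESAW itself is not described in detail in this abstract; the sketch below only illustrates the general idea of stochastically sampling combinations of minor changes and keeping the best-scoring combination. The option list and scoring function are hypothetical.

    import random

    # Hypothetical sketch of stochastic sampling over combinations of minor
    # project changes; the options and objective are illustrative, not SEESAW's
    # actual model.
    OPTIONS = {                        # candidate minor changes -> assumed savings (person-months)
        "increase tool support": 3.0,
        "improve team continuity": 2.5,
        "reduce multisite development": 4.0,
        "relax schedule pressure": 1.5,
    }

    def score(selected):
        # toy objective: total savings, discounted for adopting many changes at once
        return sum(OPTIONS[o] for o in selected) - 0.5 * max(0, len(selected) - 2)

    best, best_score = set(), float("-inf")
    random.seed(0)
    for _ in range(1000):                             # stochastic sampling of subsets
        sample = {o for o in OPTIONS if random.random() < 0.5}
        s = score(sample)
        if s > best_score:
            best, best_score = sample, s

    print(best, round(best_score, 1))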

Added February 2nd, 2009


USC-CSSE-2009-532

A. Winsor Brown, Ramin Moazeni, Barry Boehm, "Realistic Software Cost Estimation for Fractionated Space Systems," AIAA SPACE 2009, Pasadena, California, September 14-17, 2009 (pdf)

Fractionated Space Systems, as exemplified by the Defense Advanced Research Projects Agency's Future Fast, Flexible, Fractionated, Free-Flying Spacecraft united by Information eXchange (DARPA's System F6), represent real challenges to software cost estimation.  The concept is that a traditional 'monolithic' spacecraft is replaced by a cluster of wirelessly interconnected spacecraft modules that create a virtual satellite delivering capability at least equivalent to that of the monolithic spacecraft.  Concurrently, they significantly enhance flexibility and system robustness, and reduce risk throughout the mission life and the spacecraft development cycle.  The challenges such systems present to software cost estimation arise both from the concept of a Directed System of Systems (DSOS) and from the reduced risk that is primarily achievable only through application of an Incremental Commitment Model (ICM).  This paper briefly introduces a next-generation synthesis of the spiral model and other leading process models into the ICM, which is being piloted or considered for adoption in some parts of the Department of Defense (DoD).  The ICM emphasizes architecting systems (or DSOSs) to encapsulate the subsystems (or systems) undergoing the most rapid change, and architecting the incremental development process by having agile systems engineers handle longer-range change traffic to rebaseline the plans for future increments while largely plan-driven teams develop and continuously verify and validate (V&V) the current increment, as is usually required for safe or secure software.

One approach being followed for estimating the software development cost of Fractionated Space Craft Systems of Systems (FSSOS) is based on the Constructive Incremental Commitment Cost Model (COINCOMO) and its tool, which currently implements in one tool the Constructive Cost Model (COCOMO II), the Constructive Phased Schedule and Effort Model (COPSEMO), and the Constructive Security Cost Model (COSECMO).  COINCOMO adds a superstructure to accommodate multiple suppliers of independent software subsystems and multiple builds (or incremental deliveries).  Many software system components today rely on Commercial-Off-the-Shelf (COTS) sub-components, on reuse of (potentially proprietary) open-source or legacy components that are treated like COTS components rather than as reused or modified components, and on adapted/reused open-source subcomponents, as well as new code.  The COINCOMO model accommodates these as well, without the name-space explosion that happened in COCOMO II.  COINCOMO 2.0 is implemented in a database-centric fashion with the ability to save and reuse any of the sets of parameters, making it much easier to produce what-if and evolving estimates.
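
COINCOMO builds on COCOMO II, whose published effort equation is PM = A x Size^E x product(EM_i), with E = B + 0.01 x sum(SF_j) and A = 2.94, B = 0.91 in the COCOMO II.2000 calibration. The sketch below applies that nominal equation to illustrative inputs; it does not model COINCOMO's multi-supplier, multi-build superstructure.

    # COCOMO II.2000 effort equation (which COINCOMO builds on):
    #   PM = A * Size^E * product(EM_i),  E = B + 0.01 * sum(SF_j)
    # A = 2.94 and B = 0.91 are the published calibration constants; the scale
    # factors and effort multipliers below are illustrative values only.
    A, B = 2.94, 0.91

    def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers):
        E = B + 0.01 * sum(scale_factors)
        pm = A * (ksloc ** E)
        for em in effort_multipliers:
            pm *= em
        return pm

    # Hypothetical 50-KSLOC build with roughly nominal ratings.
    print(round(cocomo_ii_effort(50, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0, 1.1, 0.9]), 1))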

Added January 18th, 2011


USC-CSSE-2009-531

Tim Menzies, Steve Williams, Oussama Elrawas, Daniel Baker, Barry Boehm, Jairus Hihn, Karen Lum, Ray Madachy, "Accurate Estimates without Local Data?" Software Process: Improvement and Practice Volume 14, Issue 4, July/August 2009, pp. 213-225 (pdf)

Models of software projects input project details and output predictions via their internal tunings. The output predictions, therefore, are affected by variance in the project details P and variance in the internal tunings T. Local data is often used to constrain the internal tunings (reducing T).

While constraining internal tunings with local data is always the preferred option, there exist some models for which constraining tunings is optional. We show empirically that, for the USC COCOMO family of models, the effects of P dominate the effects of T; i.e., the output variance of these models can be controlled without using local data to constrain the tuning variance (in ten case studies, we show that the estimates generated by constraining only P are very similar to those produced by constraining T with historical data).

We conclude that, if possible, models should be designed such that the effects of the project options dominate the effects of the tuning options. Such models can be used for the purposes of decision making without elaborate, tedious, and time-consuming data collection from the local domain.
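
The P-versus-T argument can be illustrated with a toy Monte Carlo comparison: vary the project inputs with the tunings fixed, then vary the tunings with the inputs fixed, and compare the output variances. The model form and ranges below are hypothetical, not the paper's experimental setup.

    import random, statistics

    # Toy comparison of variance from project inputs (P) versus internal
    # tunings (T) for a generic effort model effort = a * size^b.
    random.seed(1)

    def effort(size, a, b):
        return a * size ** b

    # Vary P (size) with tunings fixed at nominal values.
    var_P = statistics.pvariance([effort(random.uniform(10, 100), 2.94, 1.10)
                                  for _ in range(10000)])

    # Vary T (a, b) with the project input fixed.
    var_T = statistics.pvariance([effort(50, random.uniform(2.5, 3.5), random.uniform(1.05, 1.15))
                                  for _ in range(10000)])

    print(f"variance from P: {var_P:.0f}, variance from T: {var_T:.0f}")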

Added January 13th, 2011


USC-CSSE-2009-530

Barry Boehm, "Applying the Incremental Commitment Model to Brownfield System Development," 7th Annual Conference on Systems Engineering Research 2009 (CSER 2009) (pdf)

The Incremental Commitment Model (ICM) is a risk-driven process model generator for which common risk patterns generate common special-case system definition and development process models. The increasing importance of Brownfield (legacy-constrained) vs. Greenfield (unconstrained) application systems created a challenge of determining an ICM Brownfield risk pattern and an associated set of special-case process guidelines. This paper presents a case study of a failed Brownfield project, and shows how the concurrent-engineering activities, risk assessments, and anchor point milestones of the ICM could have avoided the failure. It compares the resulting ICM special-case Brownfield process with two leading Brownfield approaches, the CMU-SEI SMART approach and the IBM Brownfield approach, and finds them compatible and complementary.

Added January 12th, 2011


USC-CSSE-2009-529

Tim Menzies, Oussama El-Rawas, Jairus Hihn, Barry Boehm, "Can We Build Software Faster and Better and Cheaper?" PROMISE '09 Proceedings of the 5th International Conference on Predictor Models in Software Engineering, Vancouver, British Columbia, Canada, 2009, pp. 1-9 (pdf)

"Faster, Better, Cheaper" (FBC) was a development philosophy adopted by the NASA administration in the mid to late 1990s. that lead to some some dramatic successes such as Mars Pathfinder as well as a number highly publicized mission failures, such as the Mars Climate Orbiter & Polar Lander.

The general consensus on FBC was "Faster, Better, Cheaper? Pick any two." According to that view, it is impossible to optimize all three criteria at once; pursuing any two compromises the third. This paper checks that view using an AI search tool called STAR. We show that FBC is indeed feasible and produces similar or better results when compared to other methods. However, for FBC to work, there must be a balanced concern for and concentration on the quality aspects of a project. If not, FBC becomes CF (cheaper and faster), with the inevitable loss in project quality.

Added January 12th, 2011


USC-CSSE-2009-528

Alexander Lam, Barry Boehm, "Experiences in Developing and Applying a Software Engineering Technology Testbed," Empirical Software Engineering, Volume 14, Number 5, 2009, pp. 579-601 (pdf)

A major problem in empirical software engineering is to determine or ensure comparability across multiple sources of empirical data. This paper summarizes experiences in developing and applying a software engineering technology testbed. The testbed was designed to ensure comparability of empirical data used to evaluate alternative software engineering technologies, and to accelerate the technology maturation and transition into project use. The requirements for such software engineering technology testbeds include not only the specifications and code, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments. The requirements and architecture to build a particular software engineering technology testbed to help NASA evaluate its investments in software dependability research and technology have been developed and applied to evaluate a wide range of technologies. The technologies evaluated came from the fields of architecture, testing, state-model checking, and operational envelopes. This paper presents for the first time the requirements and architecture of the software engineering technology testbed. The results of the technology evaluations are analyzed from the point of view of how researchers benefited from using the SETT; in their original findings, the researchers reported only how their technology performed. The testbed evaluation showed (1) that certain technologies were complementary and cost-effective to apply; (2) that the testbed was cost-effective to use by researchers within a well-specified domain of applicability; (3) that collaboration in testbed use by researchers and practitioners resulted in comparable empirical data and in actions to accelerate technology maturity and transition into project use, as shown in the AcmeStudio evaluation; and (4) that the software engineering technology testbed's requirements and architecture were suitable for evaluating technologies and accelerating their maturation and transition into project use.

Added January 12th, 2011


USC-CSSE-2009-527

Magne Jorgensen, Barry Boehm, Stan Rifkin, "Software Development Effort Estimation: Formal Models or Expert Judgment?" IEEE Software, Volume 26, Issue 2, March-April 2009, pp. 14-19 (pdf)

Which is better for estimating software project resources: formal models, as instantiated in estimation tools, or expert judgment? Two luminaries debate this question in this paper, taking opposite sides and trying to help software project managers figure out when, and under what conditions, each method is best.

Added January 12th, 2011


USC-CSSE-2009-526

Supannika Koolmanojwong, "The Incremental Commitment Model process patterns for rapid-fielding projects," Qualifying Exam Report, November 2009 (pdf)

To provide better services to customers and avoid being left behind in a competitive business environment, a wide variety of ready-to-use software and technologies is available for building up a software system at a very fast pace. Rapid fielding thus plays a major role in developing software systems that provide a quick response to the organization. This research investigates the appropriateness of current software development processes and develops new software development process guidelines, focusing on four process patterns: Use single NDI, NDI-intensive, Services-intensive, and Architected Agile. Currently, there is no single software development process model that is applicable to all four process patterns, but the Incremental Commitment Model (ICM) can help a new project converge on a process that fits its process scenario. The output of this research will be implemented as an Electronic Process Guide for USC Software Engineering students to use as a guideline in developing real-client Software Engineering course projects. An empirical study will be conducted to verify the suitability of the newly developed process as compared to results data from previous course projects.

Added August 13th, 2010


USC-CSSE-2009-525

Steven Crosson, Barry Boehm, "Adjusting Software Life-Cycle Anchorpoints," Proceedings, SSTC 2009 (pdf)

Evaluating the state of a software-centric program based on paper-based reviews is always challenging. Predicting future challenges from those reviews is even more so. The Army's Future Combat Systems (FCS) program has implemented Life Cycle Objective and Life Cycle Architecture events to provide continuous software reviews and risk analysis for individual software products. However, through lessons learned from those events, a new process was implemented for a System of Systems level review.

The review model was expanded to include reviews of individual system test data aimed at evaluating key program risks and providing data-based course corrections. Additionally, in-depth software productivity data was reviewed and analyzed to provide a projection of future capability development and recommendations for software build plan adjustments. This article will identify key changes made to the review process and why these changes, based on lessons learned, provided a more complete assessment of the FCS software effort.

Added August 10th, 2010


USC-CSSE-2009-524

Thomas Tan, Qi Li, Barry Boehm, Ye Yang, Mei He, Ramin Moazeni, "Productivity Trends in Incremental and Iterative Software Development," Proceedings, ACM-IEEE ESEM 2009 (pdf)

In an investigative study tracing the productivity changes of a commercial software project that uses an incremental and iterative development model, we have found evidence that attributes such as staffing stability, design compatibility/adaptability to incremental and iterative development, and integration and testing have a significant impact, either positive or negative, on the productivity outcome. In this report, we present an empirical analysis to review, evaluate, and discuss these influential attributes with regard to their correlations with productivity in incremental and iterative software development. We also hope that our approach and results will help initiate more research in this subject area.

Added June 22nd, 2010


USC-CSSE-2009-523

Ramin Moazeni, A. Winsor Brown, Barry Boehm, "Productivity Decline in Directed System of Systems Software Development," ISPA/SCEA 2009 (pdf)

Incremental Commitment Model (ICM)
–Overview
–Multi-Build Software and Overlap across builds
–Directed Systems of Systems
–Systems
–COINCOMO
Incremental Development Productivity Decline (IDPD)
–Overview
–Cost Drivers
–Effect on number of increments
Conclusion

Added June 22nd, 2010


USC-CSSE-2009-522

Ali Afzal Malik, Barry Boehm, "Quantifying Requirements Elaboration to Improve Early Software Cost Estimation," Information Sciences, 2009 (pdf)

This paper describes an empirical study undertaken to investigate the quantitative aspects of the phenomenon of requirements elaboration which deals with transformation of high-level goals into low-level requirements. Prior knowledge of the magnitude of requirements elaboration is instrumental in developing early estimates of a project’s cost and schedule. This study examines the data on two different types of goals and requirements – capability and level of service (LOS) – of 20 real-client, graduate-student, team projects done at USC. Metrics for data collection and analyses are described along with the utility of results they produce. Besides revealing a marked difference between the elaboration of capability goals and the elaboration of LOS goals, these results provide some initial relationships between the nature of projects and their ratios of elaboration of capability goals into capability or functional requirements.

Added December 29th, 2009


USC-CSSE-2009-521

Ali Afzal Malik, Supannika Koolmanojwong, Barry Boehm, "An Empirical Study of Requirements-to-Code Elaboration Factors," 24th International Forum on COCOMO and Systems/Software Cost Modeling, Cambridge, Massachusetts, November 2009 (pdf)

During a software development project's lifecycle, requirements are specified at multiple levels of detail, e.g., objectives, shall statements, use cases, etc. The elaboration of requirements proceeds from one level of detail to the next until the final level, lines of code, is reached. This empirical study conducts a quantitative analysis of this hierarchical process of requirements elaboration. Multi-level requirements data from 25 small e-services projects are examined and the ratios between the numbers of requirements at consecutive levels are determined. Knowledge of these ratios, called elaboration factors, can play a crucial role in early software cost estimation. A case study demonstrating the utility of this approach in producing an early size estimate of a large commercial project is also presented.
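
A hypothetical worked example of elaboration factors (the numbers are invented, not the study's data): counts at consecutive requirement levels give the factors, and their product projects an early size estimate.

    # Hypothetical elaboration-factor example: counts of requirements at
    # consecutive levels of detail for one completed project.
    levels = {"objectives": 5, "shall statements": 20, "use cases": 80, "SLOC": 8000}

    counts = list(levels.values())
    factors = [counts[i + 1] / counts[i] for i in range(len(counts) - 1)]
    print(factors)            # [4.0, 4.0, 100.0] -- elaboration factors between levels

    # Early size estimate for a new project with 7 objectives, reusing the factors:
    estimate = 7
    for f in factors:
        estimate *= f
    print(estimate)           # predicted ~11200 SLOC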

Added December 29th, 2009


USC-CSSE-2009-518

Barry Boehm, Dan Ingold, A. Winsor Brown, Jo Ann Lane, George Friedman, Kathleen Dangle, Linda Esker, Forrest Shull, Rich Turner, Jon Wade, Mark Weitekamp, Paul Componation, Sue O'Brien, Dawn Sabados, Julie Fortune, Alfred Brown, "Early Identification of SE-Related Program Risks," SERC, September 30, 2009 (pdf)

DoD programs need effective systems engineering (SE) to succeed.
DoD program managers need early warning of any risks to achieving effective SE.
This SERC project has synthesized analyses of DoD SE effectiveness risk sources into a lean framework and toolset for early identification of SE-related program risks.
Three important points need to be made about these risks.
• They are generally not indicators of "bad SE." Although SE can be done badly, more often the risks are consequences of inadequate program funding (SE is the first victim of an underbudgeted program), of misguided contract provisions (when a program manager is faced with the choice between allocating limited SE resources toward producing contract-incentivized functional specifications vs. addressing key performance parameter risks, the path of least resistance is to obey the contract), or of management temptations to show early progress on the easy parts while deferring the hard parts till later.
• Analyses have shown that unaddressed risk generally leads to serious budget and schedule overruns.
• Risks are not necessarily bad. If an early capability is needed, and the risky solution has been shown to be superior to the alternatives, accepting and focusing on mitigating the risk is generally better than waiting for a better alternative to show up.
Unlike traditional schedule-based and event-based reviews, the SERC SE EM technology enables sponsors and performers to agree on the nature and use of more effective evidence-based reviews. These enable early detection of missing SE capabilities or personnel competencies with respect to a framework of Goals, Critical Success Factors (CSFs), and Questions determined by the EM task from the leading DoD early-SE CSF analyses. The EM tools enable risk-based prioritization of corrective actions: shortfalls in evidence for each question are early uncertainties, which, when combined with the relative system impact of a negative answer to the question, translate into the degree of risk that needs to be managed to avoid system overruns and incomplete deliveries.
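
This prioritization resembles the classic risk exposure calculation RE = probability of loss x size of loss, with the evidence shortfall standing in for probability. The questions and ratings below are hypothetical, for illustration only.

    # Risk exposure as RE = P(loss) * S(loss), using the evidence shortfall for
    # a question as a probability proxy and an impact rating as the size of
    # loss. Questions and numbers are invented.
    questions = [
        # (question, evidence shortfall 0..1, relative impact 0..10)
        ("Are KPP feasibility analyses complete?", 0.7, 9),
        ("Is the staffing plan validated?",        0.4, 6),
        ("Are external interfaces baselined?",     0.2, 8),
    ]

    ranked = sorted(questions, key=lambda q: q[1] * q[2], reverse=True)
    for q, p, s in ranked:
        print(f"RE={p * s:4.1f}  {q}")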

Added December 11th, 2009


USC-CSSE-2009-517

Daniel Popescu, Joshua Garcia, Kevin Bierhoff, Nenad Medvidovic, "Helios: Impact Analysis for Event-Based Systems" (pdf)

Event-based software systems contain highly-decoupled components that interact by exchanging messages via implicit invocation, thus allowing flexible system composition and adaptation. At the same time, these inherently desirable properties render an event-based system more difficult to understand and evolve since, in the absence of explicit dependency information, an engineer has to assume that any component in the system may potentially interact with, and thus depend on, any other component. Software analysis techniques that have been used successfully in traditional, explicit invocation-based systems are of little use in this domain. In order to aid the understandability of, and assess the impact of changes in, event-based systems, we propose Helios, a technique that combines component-level (1) control-flow and (2) state-based dependency analysis with system-level (3) structural analysis to produce a complete and accurate message dependence graph for a system. We have applied Helios to applications constructed on top of four different event-based implementation platforms. We summarize the results of several such applications. We demonstrate that Helios enables effective event-based impact analysis and quantify its improvements over existing alternatives.
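
The structural portion of such an analysis can be pictured as connecting publishers to subscribers that share an event type. The toy sketch below shows only that piece (Helios additionally uses control-flow and state-based analyses), with invented components and events.

    # Minimal sketch of a message dependence graph built from declared
    # publications/subscriptions (structural analysis only; not Helios itself).
    publishes = {"Sensor": {"TempReading"}, "Controller": {"FanCommand"}}
    subscribes = {"Controller": {"TempReading"}, "Fan": {"FanCommand"},
                  "Logger": {"TempReading", "FanCommand"}}

    edges = set()
    for src, events in publishes.items():
        for dst, wanted in subscribes.items():
            shared = events & wanted
            if shared:
                edges.add((src, dst, tuple(sorted(shared))))

    for src, dst, evts in sorted(edges):
        print(f"{src} -> {dst} via {evts}")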

Added November 24th, 2009


USC-CSSE-2009-516

Qi Li, "Using Additive Multiple-Objective Value Functions for Value-Based Software Testing Prioritization" (pdf)

Testing is one way of validating that the customers have specified the right product and verifying that the developers have built the product right. Traditional testing methodologies such as path, branch, instruction, mutation, scenario, or requirement testing usually treat all aspects of software as equally important [1]; testing is often reduced to a purely technical issue, leaving the close relationship between testing and business decisions unlinked and the potential value contribution of testing unexploited [2]. Value-based software testing serves to bridge the gap between the internal software testing process and the business value that comes from customers and the market [3]. The essence of value-based software testing is feature prioritization, which takes into consideration the business importance supplied by CRACK key customers, the software quality risk identified by the development team, the software testing cost and process estimation and control from the testing team, and external commercial factors such as market pressure. This decision-making process can be seen as a multiple-objective decision-making process that tries to maximize business importance, identify the riskiest features, minimize testing cost, and minimize market erosion. The prioritization strategy can be reflected in and influenced by different multiple-objective value functions. In [3], a multiplicative multiple-objective value function is used to generate the testing priority ranking. In this paper, we use an additive multiple-objective value function to generate the testing priority ranking and compare the results in the end.
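
A minimal sketch of an additive multiple-objective value function for ranking features follows. The weights, features, and scores are invented; cost is simply subtracted here as one illustrative way of treating a "minimize" objective.

    # Additive multiple-objective value function for test prioritization:
    # value(f) = sum_i w_i * score_i(f). All data below are hypothetical.
    weights = {"importance": 0.4, "risk": 0.3, "cost": 0.2, "market": 0.1}
    features = {
        #              importance  risk  cost  market (higher = more urgent)
        "login":       (9,         7,    3,    8),
        "reporting":   (6,         4,    6,    5),
        "export":      (4,         8,    2,    3),
    }

    def value(scores):
        imp, risk, cost, market = scores
        # cost contributes negatively: cheaper-to-test features rank higher
        return (weights["importance"] * imp + weights["risk"] * risk
                - weights["cost"] * cost + weights["market"] * market)

    for name, scores in sorted(features.items(), key=lambda kv: value(kv[1]), reverse=True):
        print(f"{value(scores):5.2f}  {name}")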

Added October 1st, 2009


USC-CSSE-2009-515

Vu Nguyen, Barry Boehm, Phongphan Danphitsanuphan, "Assessing and Estimating Corrective, Enhancive, and Reductive Maintenance Tasks: A Controlled Experiment," Proceedings of 16th Asia-Pacific Software Engineering Conference (APSEC 2009), December 2009 (pdf)

This paper describes a controlled experiment of student programmers performing maintenance tasks on a C++ program. The goal of the study is to assess the maintenance size, effort, and effort distribution of three different maintenance types and to describe estimation models that predict the programmer's effort on maintenance tasks. The results of our study suggest that corrective maintenance is much less productive than enhancive and reductive maintenance. Our study also confirms previous results concluding that corrective and reductive maintenance require large proportions of effort on the program comprehension activity. Moreover, the best effort model we obtained from fitting the experiment data can estimate the time of 79% of the maintainers with an error of 30% or less.
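
Accuracy figures of the "79% within 30% error" form are conventionally computed as PRED(0.30) over the magnitude of relative error (MRE); the snippet below shows that calculation on invented data.

    # MRE = |actual - estimated| / actual; PRED(0.30) is the fraction of
    # estimates with MRE <= 0.30. Data below are hypothetical.
    actual    = [30, 45, 60, 25, 90, 50]
    estimated = [33, 40, 80, 24, 70, 52]

    mre = [abs(a - e) / a for a, e in zip(actual, estimated)]
    pred_30 = sum(m <= 0.30 for m in mre) / len(mre)
    print([round(m, 2) for m in mre], f"PRED(0.30) = {pred_30:.0%}")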

Added August 18th, 2009


USC-CSSE-2009-514

George Edwards, Nenad Medvidovic, "Model Interpreter Frameworks" (pdf)

One of the practical challenges in utilizing domain-specific modeling technologies is the construction of model interpreters that leverage domain-specific models for code generation and quality analysis. Model interpreter frameworks (MIFs) simplify and reduce model interpreter development effort by providing off-the-shelf analysis and synthesis capabilities, while still permitting domain-specific customization. The XTEAM model-driven design environment implements highly extensible MIFs that generate system simulations and source code from domain-specific models. Drawing on our experience building MIFs within XTEAM, as well as the related (though independent) experience of other research groups, we identify the shared, essential characteristics of MIFs.

Added July 15th, 2009


USC-CSSE-2009-513

Qi Li, "The Effort Distribution Pattern Analysis of Three Types of Software Quality Assurance Activities and Process Implication: an Empirical Study" (pdf)

Review, process audit, and testing are the three main quality assurance (QA) activities in the software development life cycle. They complement each other in examining work products for defects and improvement opportunities to the largest extent. Understanding the effort distribution and inter-correlation among them will facilitate project planning in software organizations, improve software quality within budget and schedule, and support continuous process improvement. This paper reports some empirical findings on the effort distribution pattern of the three types of QA activities from a series of incremental projects in China. The results of the study offer some implications on how to identify which type of QA activity is insufficient while others might be overdone, how to balance effort allocation and planning for future projects, how to improve the weak parts of the QA activities, and finally how to improve overall process effectiveness in the specific organizational context.

Added July 5th, 2009


USC-CSSE-2009-512

Ali Afzal Malik, Barry Boehm, "Predicting Understandability of a Software Project: A Comparison of Two Surveys" (pdf)

This paper summarizes the results of an empirical study conducted to explore the relative importance of eight pertinent COCOMO II model drivers in predicting the understandability of a software project. Here understandability of the project is measured solely from the perspective of the development team. This empirical study employed a survey targeted at experienced practitioners. Results from this survey are compared with the corresponding results from a prior similar survey conducted on graduate students. While this comparison reveals some interesting differences between the importance given to each of the eight model drivers by these two categories of individuals, it also highlights some subtle similarities.

Added June 10th, 2009


USC-CSSE-2009-511

George Edwards, Nenad Medvidovic, "A Highly Extensible Simulation Framework for Domain-Specific Architectures" (pdf)

Discrete event simulation is a powerful and flexible mechanism for analyzing both the functional characteristics and quality properties of software design models. However, software engineers do not utilize simulations in many situations where they have significant potential benefits because (1) the programming model used by existing discrete event simulators is not well-suited for capturing domain-specific software designs, and (2) customizing and optimizing core simulation algorithms based on domain-specific requirements and constraints is difficult and, in some cases, not possible. In this paper, we describe a new approach to constructing discrete event simulations of software systems that removes both of these obstacles. First, our simulation approach supports a software architecture-based programming model and allows software engineers to define domain-specific modeling constructs. Second, our simulation approach allows domain-specific optimization of simulation engine functions through modification or replacement of core framework components. To demonstrate the implementation of our approach, we present XDEVS, a highly extensible discrete event simulation framework that provides both of the above capabilities. We evaluated the utility and efficiency of XDEVS using a large-scale simulation of a volunteer computing system, and this evaluation confirmed that XDEVS represents an improvement over the current state-of-the-art.
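
The core mechanism of any discrete event simulator is a time-ordered event queue and a dispatch loop; the generic kernel below illustrates that mechanism only and is not the XDEVS API.

    import heapq

    # Generic discrete event simulation kernel: a time-ordered event queue and
    # a dispatch loop. Illustrative only; not XDEVS.
    class Simulator:
        def __init__(self):
            self.clock = 0.0
            self.queue = []            # heap of (time, seq, handler, payload)
            self.seq = 0

        def schedule(self, delay, handler, payload=None):
            self.seq += 1              # unique tiebreaker keeps the heap comparable
            heapq.heappush(self.queue, (self.clock + delay, self.seq, handler, payload))

        def run(self, until=float("inf")):
            while self.queue and self.queue[0][0] <= until:
                self.clock, _, handler, payload = heapq.heappop(self.queue)
                handler(self, payload)

    # Toy components: a client sends a request; the server replies 2.0 time units later.
    def send_request(sim, _):
        print(f"{sim.clock:5.1f} request sent")
        sim.schedule(2.0, receive_reply)

    def receive_reply(sim, _):
        print(f"{sim.clock:5.1f} reply received")

    sim = Simulator()
    sim.schedule(1.0, send_request)
    sim.run(until=10.0)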

Added May 13th, 2009


USC-CSSE-2009-510

Yuriy Brun, George Edwards, Jae young Bang, Nenad Medvidovic, "Online Reliability Improvement via Smart Redundancy in Systems with Faulty and Untrusted Participants" (pdf)

Many software systems today, such as computational grids, include faulty and untrusted components.  As faults are inevitable, these systems utilize redundancy to achieve fault tolerance.  In this paper, we present two new, "smart" redundancy techniques: iterative redundancy and progressive redundancy.  The two techniques are efficient, adaptive, and automated.  They are efficient in that they leverage runtime information to improve system reliability using fewer resources than existing methods.  They are automated in that they inject redundancy in situations where it is most beneficial and eliminate it where it is unnecessary.  Finally, they are adaptive in that they increase redundancy on-the-fly when component reliability drops and decrease redundancy when component reliability improves.  We enumerate examples of systems that can benefit from our techniques but focus in this paper on computational grid systems.  We present formal analytical and empirical analyses, demonstrating our techniques on a real-world computational grid and comparing them to existing methods.
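
The general flavor of adaptive redundancy can be sketched as dispatching copies of a task until enough results agree, so that unreliable workers trigger extra work only when needed. The code below shows only that generic idea with invented reliability numbers; it is not the paper's iterative or progressive redundancy algorithms.

    import random

    # Sketch of adaptive redundancy: dispatch copies of a task until some
    # number of agreeing results is collected. Hypothetical illustration.
    random.seed(3)

    def execute(task, worker_reliability=0.8):
        # Faulty/untrusted worker: returns the correct answer with probability p.
        return task * 2 if random.random() < worker_reliability else random.randint(0, 100)

    def redundant_execute(task, agreement_needed=3, max_copies=15):
        votes = {}
        for copies in range(1, max_copies + 1):
            result = execute(task)
            votes[result] = votes.get(result, 0) + 1
            if votes[result] >= agreement_needed:
                return result, copies          # stop as soon as consensus is reached
        return max(votes, key=votes.get), max_copies

    print(redundant_execute(21))               # e.g. (42, 3) when workers behave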

Added May 13th, 2009


USC-CSSE-2009-507

Qi Li, Mingshu Li, Ye Yang, Qing Wang, Thomas Tan, Barry Boehm, Chenyong Hu, "Bridge the Gap between Software Test Process and Business Value: A Case Study," ICSP2009, Vancouver, Canada, May 16-17, 2009 (pdf)

For a software project to succeed, acceptable quality must be achieved within an acceptable cost, while providing business value to the customers and keeping delivery time short. Software testing is a strenuous and expensive process and is often not organized to maximize business value. In this article, we propose a practical value-based software testing method which aligns the internal test process with the value objectives coming from the customers and the market. Our case study in a real-life business project shows that this method helps manage the testing process effectively and efficiently.

Added April 20th, 2009


USC-CSSE-2009-506

Ali Afzal Malik, Barry Boehm, "An Empirical Analysis of Effort Distribution of Small Real-Client Projects" (pdf)

This paper presents an analysis of the weekly effort distribution of 23 small real-client projects. The procedure used in collecting the effort data is described and the major findings are summarized. The results indicate the crests and troughs of project effort in relation to the major project milestones. Possible reasons for the peculiarities in the effort distribution are also discussed. Moreover, an attempt is made to analyze the impact of project type on the effort distribution by comparing the effort profiles of web-based projects with those of non-web-based projects.

Added April 15th, 2009


USC-CSSE-2009-505

Ali Afzal Malik, Barry Boehm, "Software Size Analysis of Small Real-Client Projects" (pdf)

This paper presents an analysis of the results of an empirical study conducted to examine the relationship between the estimated and actual sizes of small real-client projects. Estimates of software size are collected at two different points of the software development life cycle: one immediately before rebaselining the project mid-stream and the other immediately after rebaselining. It is found that estimates made after rebaselining are usually more accurate. The accuracy of estimates based on the category of projects is also analyzed. The results indicate that the correlation between estimated and actual size is much stronger in the case of web-based projects vis-à-vis non-web-based projects.

Added April 7th, 2009


USC-CSSE-2009-504

Supannika Koolmanojwong, Barry Boehm, "Using Software Project Courses to Integrate Education and Research: An Experience Report," Proceedings of the 2009 22nd Conference on Software Engineering Education and Training - Volume 00, CSEET, pp. 26-33 (pdf)

At the University of Southern California (USC), CSCI577ab is a graduate software engineering course that teaches best software engineering practices and allows students to apply the learned knowledge in developing real-client projects. The class is used as an experimental test-bed to deploy various research tools and approaches for validation of new methods and tools. Various research data have been collected as a partial basis for twelve PhD dissertations. This paper reports how research and education are integrated via project experiments and how the results strengthen future educational experiences.

Added March 23rd, 2009


USC-CSSE-2009-503

Jo Ann Lane, "Cost Model Extensions to Support Systems Engineering Cost Estimation for Complex Systems and Systems of Systems," 7th Annual Conference on Systems Engineering Research 2009 (CSER 2009) (pdf)

Considerable work has been done in the areas of software development, systems engineering, and Commercial Off-the-Shelf (COTS) integration cost modeling, as well as some preliminary work in the area of System of System Engineering (SoSE) cost modeling. These cost models focus on the engineering product, the processes used to develop the product, and the skills and experience levels of the technical staff responsible for the development of the product. The nature of complex systems and systems of systems (SoS) is that they are often composed of many subsystems, some relatively simple and others rather complex. The Constructive Systems Engineering Cost Model (COSYSMO) is a calibrated cost model that most closely estimates the systems engineering effort associated with these complex systems and SoS engineering activities. However, the current version of COSYSMO only allows the user to characterize the system using a single set of parameters, with no ability to generate multiple characterizations for the various subsystems comprising a complex system or SoS. This paper presents an extension to the COSYSMO cost model that allows a user to characterize multiple subsystems within a complex system or multiple systems within an SoS and illustrates how this can be applied to an example SoS.
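
The extension's central idea, characterizing each subsystem separately and rolling the sizes up into one estimate, can be illustrated as below. The driver weights, calibration constants, and subsystem data are placeholders, not calibrated COSYSMO values.

    # Sketch of per-subsystem size characterization and roll-up in a
    # COSYSMO-style model. All numbers are illustrative placeholders.
    WEIGHTS = {"requirements": 1.0, "interfaces": 3.0, "algorithms": 4.0, "scenarios": 20.0}

    def subsystem_size(counts):
        return sum(WEIGHTS[d] * n for d, n in counts.items())

    subsystems = {
        "flight_software": {"requirements": 120, "interfaces": 12, "algorithms": 8, "scenarios": 4},
        "ground_station":  {"requirements": 60,  "interfaces": 20, "algorithms": 2, "scenarios": 3},
    }

    sizes = {name: subsystem_size(c) for name, c in subsystems.items()}
    total_size = sum(sizes.values())

    A, E = 0.25, 1.06                      # illustrative constants only
    effort_pm = A * total_size ** E        # person-months at nominal effort multipliers
    print(sizes, round(effort_pm, 1))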

Added March 16th, 2009


USC-CSSE-2009-502

Barry Boehm, Jo Ann Lane, Supannika Koolmanojwong, "A Risk-Driven Process Decision Table to Guide System Development Rigor," INCOSE 2009 (pdf)

The Incremental Commitment Model (ICM) organizes systems engineering and acquisition processes in ways that better accommodate the different strengths and difficulties of hardware, software, and human factors engineering approaches. As with other models trying to address a wide variety of situations, its general form is rather complex. However, its risk-driven nature has enabled us to determine a set of twelve common risk patterns and organize them into a decision table that can help new projects converge on a process that fits well with their particular process drivers. For each of the twelve special cases, the decision table provides top-level guidelines for tailoring the key activities of the ICM, along with suggested lengths between each internal system build and each external system increment delivery. This paper elaborates on each of the twelve cases and provides examples of their use.
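
A decision table of this kind can be pictured as a rule lookup from project risk drivers to a process case and its guidance. In the sketch below, the four case names are borrowed from USC-CSSE-2009-526 above, while the driver conditions and guidance strings are invented and are not the ICM's published twelve-case table.

    # Hypothetical sketch of a risk-driven decision-table lookup.
    DECISION_TABLE = [
        # (predicate over project drivers, case name, top-level guidance)
        (lambda d: d["ndi_coverage"] > 0.9,   "Use single NDI",     "adopt the NDI; tailor only glue-code reviews"),
        (lambda d: d["ndi_coverage"] > 0.5,   "NDI-intensive",      "focus risk reviews on NDI interoperability"),
        (lambda d: d["delivered_as_service"], "Services-intensive", "negotiate SLAs; skip platform-level builds"),
        (lambda d: True,                      "Architected Agile",  "agile teams inside an architected backbone"),
    ]

    def select_case(drivers):
        for predicate, name, guidance in DECISION_TABLE:
            if predicate(drivers):
                return name, guidance

    print(select_case({"ndi_coverage": 0.6, "delivered_as_service": False}))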

Added March 16th, 2009


USC-CSSE-2009-501

Barry Boehm, Jo Ann Lane, "Better Management of Development Risks: Early Feasibility Evidence," 7th Annual Conference on Systems Engineering Research 2009 (CSER 2009) (pdf)

The Incremental Commitment Model (ICM) organizes more rapid and thorough concurrent systems engineering and acquisition processes in ways that provide points at which they can synchronize and stabilize, and at which their risks of going forward can be better assessed and fitted into a risk-driven stakeholder resource commitment process. In particular, its concurrent activities of Understanding Needs, Envisioning Opportunities, System Scoping and Architecting, Feasibility Evidence Development, and Risk/Opportunity Assessment enable projects to focus specifically on their system constraints and environments and on opportunities to deal with them. This paper describes in detail the content, preparation, and role of feasibility evidence at key decision points in the system development cycle and how this can be used to effectively identify and manage risks throughout the development cycle. The feasibility evidence is not intended to assess just a single sequentially developed system definition element, but rather to assess the consistency, compatibility, and feasibility of several concurrently-engineered elements. To make this concurrency work, a set of anchor point milestone reviews are performed to ensure that the many concurrent activities are synchronized, stabilized, and risk-assessed at the end of each phase using developer-produced, expert-validated feasibility evidence.

Added March 16th, 2009


USC-CSSE-2009-500

Barry Boehm, Jo Ann Lane, "Guide for Using the Incremental Commitment Model (ICM) for Systems Engineering of DoD Projects" (pdf)

Future mission-critical DoD combat platforms, systems of systems, and network-centric services will have many usage uncertainties and emergent characteristics. Their hardware, software, and human factors will need to be concurrently engineered, risk-managed, and evolutionarily developed to converge on cost-effective system operations and mission success.

The recent revision of DoDI 5000.02 provides a much improved policy for dealing with these future challenges, including its starting with Needs and Opportunities, its emphasis on early Analysis of Alternatives as an entry condition for Milestone A, its inclusion of a passed Preliminary Design Review as an entry condition for Milestone B, and its emphasis on evolutionary versus single-increment or prespecified increments of development.

These emphases of DoDI 5000.02 are also key emphases of the Incremental Commitment Model (ICM), which adds more detailed guidance on how to apply them. The ICM has been developed based on best commercial practices for addressing such future challenges; on discussions with contributors to DoDI 5000.02; on successful use of the ICM principles on previous DoD projects; and on its exploratory use on highly complex, future-precursor, net-centric DoD systems of systems.

The ICM builds on the experience-based critical success factor principles of stakeholder satisficing, incremental definition, iterative evolutionary system growth, concurrent engineering, and evidence-based, risk-driven management milestones. It is not a one-size-fits-all process model, but has risk-driven options at each milestone decision point to enable projects to adapt to their particular situations. It provides a decision table for early determination of common special-case life cycle processes to avoid overkill in project ceremony and documentation.

It also incorporates the strengths of existing V, concurrent engineering, spiral, agile, and lean process models. As with these other models, it is not fully tested for its ability to support DoDI 5000.02 in all project situations, but it has been formulated to make it straightforward to do so.

Added March 16th, 2009


Copyright 2008 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.