University of Southern California
    
Center for Systems and Software Engineering



Technical Reports

USC-CSSE-2006-643

Hoh Peter In, Jongmoon Baik, Sangsoo Kim, Ye Yang, Barry Boehm, "A Quality-Based Cost Estimation Model for the Product Line Life Cycle," Communications of the ACM, Volume 49, Number 12, December 2006 (pdf)

In reusing common organizational assets, the software product line (SPL) provides substantial business opportunities for reducing the unit cost of similar products, improving productivity, reducing time to market, and promoting customer satisfaction [4]. As organizations adopt effective product line practices, return on investment (ROI) becomes increasingly critical in the decision-making process. The majority of SPL cost estimation and ROI models [5-9] confine themselves to software development costs and savings. However, if software quality cost is considered across the SPL life cycle, product lines can yield considerably larger payoffs than non-product lines.

This article proposes a quality-based product line life cycle cost estimation model, called qCOPLIMO, and investigates the effect of software quality cost on the ROI of SPL. qCOPLIMO is derived from two COCOMO suite models, COPLIMO and COQUALMO, as presented in Figure 1. COPLIMO [2] provides a baseline cost estimation model for the product line life cycle, and COQUALMO [3] estimates the number of residual defects; together they are used to estimate software quality cost. Both models are extensions of COCOMO II [1].
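As a rough illustration of how a baseline reuse cost model and a defect-based quality cost term can be combined, the sketch below computes product line savings and ROI for a family of similar products. All formulas, parameter names (rcwr, rcr), and numbers are simplified stand-ins for illustration, not the published qCOPLIMO equations.

```python
# Hypothetical sketch of a quality-inclusive product line ROI calculation.
# Formulas and numbers are illustrative only, not the published qCOPLIMO model.

def product_line_cost(n_products, base_cost, reuse_fraction,
                      rcwr=1.5, rcr=0.2):
    """Cost of n similar products sharing a reused portion.
    rcwr: relative cost of writing reusable assets (first product);
    rcr: relative cost of reusing them (later products)."""
    unique = 1.0 - reuse_fraction
    first = base_cost * (unique + reuse_fraction * rcwr)
    later = base_cost * (unique + reuse_fraction * rcr)
    return first + (n_products - 1) * later

def quality_cost(n_products, residual_defects_per_product, cost_per_defect):
    """Field-defect cost, with residual defects estimated a la COQUALMO."""
    return n_products * residual_defects_per_product * cost_per_defect

n = 4
non_pl = n * 100.0                     # four stand-alone products
pl = product_line_cost(n, 100.0, 0.6)  # 60% of each product is shared
# assume reuse of matured assets also halves residual defects per product
savings = (non_pl + quality_cost(n, 40, 0.5)) - (pl + quality_cost(n, 20, 0.5))
roi = savings / pl
print(round(pl, 1), round(roi, 2))     # → 286.0 0.54
```

Note that the quality cost term enlarges the estimated payoff beyond development savings alone, which is the effect the article investigates.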

Added November 8th, 2007


USC-CSSE-2006-642

Mingshu Li, Barry Boehm, Leon J. Osterweil, "Unifying the Software Process Spectrum," Journal of Software, Vol.17, No.4, April 2006, pp. 649-657 (pdf)

The Software Process Workshop (SPW 2005) was held in Beijing on May 25-27, 2005. This paper introduces the motivation for organizing the workshop, along with its theme and its paper solicitation and review process. It then summarizes the main content and insights of the 11 keynote speeches; the 30 regular papers in the five sessions “Process Content”, “Process Tools and Metrics”, “Process Management”, “Process Representation and Analysis”, and “Experience Reports”; the 8 software development support tool demonstrations; and the closing panel “Where Are We Now? Where Should We Go Next?”.

Added November 8th, 2007


USC-CSSE-2006-641

Gan Wang, Philip Wardle, Aaron Ankrum, "Architecture-Based Drivers for System-of-Systems and Family-of-Systems Cost Estimating," INCOSE 2006 (pdf)

As the industry undergoes a paradigm shift from a system-based procurement model to a capability-based acquisition model with a focus on integration of legacy systems and interoperability of systems of systems and families of systems, new challenges have emerged for the field of cost estimating. What is the cost of an operational capability in a net-centric environment based on enterprise architecture? This paper explores a set of enterprise architecture-based drivers for estimating the life cycle cost or total ownership cost of operational capabilities from integration of complex systems of systems and families of systems. It attempts to extend the traditional systems engineering practices and to address the new challenges from capability-based engineering and interoperability of systems of systems.

Added May 12th, 2008


USC-CSSE-2006-640

Barry Boehm, "Value-Based Software Engineering: Seven Key Elements and Ethical Considerations," Value-Based Software Engineering, Springer Berlin Heidelberg, Part 2, 2006, pp. 109-132 (pdf)

This chapter presents seven key elements that provide candidate foundations for value-based software engineering:

1. Benefits Realization Analysis
2. Stakeholder Value Proposition Elicitation and Reconciliation
3. Business Case Analysis
4. Continuous Risk and Opportunity Management
5. Concurrent System and Software Engineering
6. Value-Based Monitoring and Control
7. Change as Opportunity

Using a case study, it then shows how some of these elements can be used to incorporate ethical considerations into daily software engineering practice.

Added March 31st, 2005


USC-CSSE-2006-639

Barry Boehm, "Value-Based Software Engineering: Overview and Agenda," Value-Based Software Engineering, Springer Berlin Heidelberg, 2006, pp. 3-14 (pdf)

Much of current software engineering practice and research is done in a value-neutral setting, in which every requirement, use case, object, test case, and defect is equally important. However, most studies of the critical success factors distinguishing successful from failed software projects find that the primary critical success factors lie in the value domain.

The value-based software engineering (VBSE) agenda discussed in this chapter and exemplified in the other chapters involves integrating value considerations into the full range of existing and emerging software engineering principles and practices. The chapter then summarizes the primary components of the agenda: value-based requirements engineering, architecting, design and development, verification and validation, planning and control, risk management, quality management, people management, and an underlying theory of VBSE. It concludes with approaches for moving toward VBSE at the project, organization, national, or global level.

Added March 31st, 2005


USC-CSSE-2006-638

Barry Boehm, Apurva Jain, "An Initial Theory of Value-Based Software Engineering," Value-Based Software Engineering, Springer Berlin Heidelberg, 2006, pp. 15-37 (pdf)

This chapter presents an initial “4+1” theory of value-based software engineering (VBSE). The engine in the center is the stakeholder win-win Theory W, which addresses the questions of “which values are important?” and “how is success assured?” for a given software engineering enterprise. The four additional theories that it draws upon are utility theory (how important are the values?), decision theory (how do stakeholders’ values determine decisions?), dependency theory (how do dependencies affect value realization?), and control theory (how to adapt to change and control value realization?). After discussing the motivation and context for developing a VBSE theory and the criteria for a good theory, the chapter discusses how the theories combine into a process for defining, developing, and evolving software-intensive systems. It also illustrates the application of the theory to a supply chain system example, discusses how well the theory meets the criteria for a good theory, and identifies an agenda for further research.

Added March 31st, 2005


USC-CSSE-2006-637

LiGuo Huang, Barry Boehm, "How Much Software Quality Investment is Enough: A Value-Based Approach," IEEE Software, Volume 23, Number 5, September/October 2006, pp. 88-95 (pdf)

A classical problem facing many software projects is how to determine when to stop testing and release the product for use. We have found that risk analysis helps to address such "how much is enough?" questions, by balancing the risk exposure of doing too little with the risk exposure of doing too much. However, people have often found it difficult to quantify the relative probabilities and sizes of loss in order to provide practical approaches for determining a risk-balanced "sweet spot" operating point.

We provide a quantitative approach based on the COCOMO II cost estimation model and the COQUALMO quality estimation model to help project decision-makers determine "how much software quality investment is enough?" We also provide examples of its use under differing value profiles. Further, we use the models and some representative empirical data to assess the relative payoff of value-based testing as compared to value-neutral testing.
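The "sweet spot" idea can be illustrated numerically: total risk exposure is the sum of a falling curve (loss from shipping residual defects) and a rising curve (loss from delayed release), and the operating point minimizes that sum. The curves below are invented for illustration; the paper derives them from the COCOMO II and COQUALMO models.

```python
# Illustrative "sweet spot" search: total risk exposure RE = P(loss) * S(loss)
# for under-testing (field defects) plus over-testing (market-share delay).
# The curves are made up; the article calibrates them with COCOMO II / COQUALMO.

def re_defects(test_effort):      # falls as testing removes more defects
    return 10.0 * (0.5 ** test_effort)

def re_delay(test_effort):        # rises as the release slips further
    return 0.8 * test_effort

efforts = [i * 0.25 for i in range(0, 41)]   # effort levels 0 .. 10
total = {e: re_defects(e) + re_delay(e) for e in efforts}
sweet_spot = min(total, key=total.get)       # risk-balanced operating point
print(sweet_spot)
```

Different value profiles (e.g. a safety-critical system vs. a startup racing to market) change the shapes of the two curves, moving the sweet spot toward more or less testing.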

Added November 10th, 2005


USC-CSSE-2006-636

LiGuo Huang, Barry Boehm, "Value-based Feedback in Software and Information Systems Development," Software Evolution and Feedback, John Wiley & Sons, 2006 (pdf)

The role of feedback control in software and information system development has traditionally focused on a milestone plan to deliver a pre-specified set of capabilities within a negotiated budget and schedule. One of the most powerful approaches available for controlling traditional software projects is the Earned Value system. However, the Earned Value Management process is generally good only for tracking whether a project is meeting its original plan: it becomes difficult to administer if the project plans change rapidly and, more significantly, it says nothing about the actual value being earned for the organization by the results of the project.

This chapter begins by summarizing a set of four nested feedback and feedforward loops that have been successfully used to scope, estimate, control, and improve the predictability and efficiency of software development and evolution. It then proposes an alternative approach for project feedback control that focuses on the actual stakeholder value likely to be earned by completing the project, and provides a framework for monitoring and controlling that value in terms of a Benefits Realization Approach [Thorp, 1998] and business case analysis. An order processing system is used as an example to illustrate the value-based feedback control mechanisms. The chapter concludes with directions for future research and development.

Added November 11th, 2005


USC-CSSE-2006-635

Ye Yang, Barry Boehm, Betsy Clark, "Assessing COTS Integration Risk Using Cost Estimation Inputs," Proceedings of the 28th International Conference on Software Engineering (ICSE 2006), 2006, pp. 431-438 (pdf)

Most risk analysis tools and techniques require the user to enter a good deal of information before they can provide useful diagnoses. In this paper, we describe an approach that enables the user to obtain a COTS glue code integration risk analysis with no inputs other than the set of glue code cost drivers the user submits to get a glue code integration effort estimate with the COnstructive COTS integration cost estimation (COCOTS) tool. The risk assessment approach is built on a knowledge base of 24 risk identification rules and a 3-level risk probability weighting scheme obtained from an expert Delphi analysis. Each risk rule is defined as a critical combination of two COCOTS cost drivers that may cause an undesired outcome if both are rated at their worst-case levels. The 3-level nonlinear risk weighting scheme represents the relative probability of the risk occurring with respect to the individual cost driver ratings from the input. Further, to determine the relative risk impact, we use the productivity range of each cost driver in the risky combination to reflect the cost consequence of the risk occurring. We also developed a prototype called COCOTS Risk Analyzer to automate our risk assessment method. The evaluation of our approach shows that it does an effective job of estimating the relative risk levels of both small USC e-services and large industry COTS-based applications.
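A minimal sketch of this style of rule-based risk scoring appears below. The driver names, the two rules, and all weights are hypothetical; the actual COCOTS Risk Analyzer uses 24 expert-derived rules with calibrated probability weights and productivity ranges.

```python
# Hypothetical sketch of rule-based risk scoring in the style described above.
# Driver names, rules, and weights are invented for illustration only.

# risk level contributed by a single driver rating (0 = not risky)
RATING_LEVEL = {"very_low": 2, "low": 1, "nominal": 0, "high": 0, "very_high": 0}
PROB_WEIGHT = {1: 1, 2: 3, 4: 9}   # nonlinear 3-level weighting of combined level

# each rule: (driver_a, driver_b, productivity-range impact factor)
RULES = [
    ("integrator_experience", "product_maturity", 1.5),
    ("architecture_match", "product_maturity", 1.3),
]

def risk_score(ratings):
    score = 0.0
    for a, b, impact in RULES:
        combined = RATING_LEVEL[ratings[a]] * RATING_LEVEL[ratings[b]]
        if combined:                   # rule fires only if both drivers are risky
            score += PROB_WEIGHT[combined] * impact
    return score

print(risk_score({"integrator_experience": "very_low",
                  "product_maturity": "very_low",
                  "architecture_match": "nominal"}))
```

Because the inputs are exactly the cost driver ratings already supplied for the effort estimate, the risk diagnosis comes "for free" with the estimate, which is the point of the approach.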

Added November 11th, 2005


USC-CSSE-2006-634

Apurva Jain, Barry Boehm, "SimVBSE: Developing a Game for Value-Based Software Engineering," CSEET 2006 (pdf)

The development of games aimed at improving and enriching a student's learning experience is again on the rise. The beer game [6] in the field of system dynamics was developed to instill the key principles of production and distribution. SimSE [5] provides a simulated game in which players take on the role of a project manager and experience the fundamentals of software engineering through cause-effect models. In this paper we present an initial design of SimVBSE as a game for students to better understand value-based software engineering [1] and its underlying theory [3].

Added November 15th, 2005


USC-CSSE-2006-633

Nenad Medvidovic, Vladimir Jakobac, "Using Software Evolution to Focus Architectural Recovery," Automated Software Engineering, Springer Netherlands, Volume 13, Number 2, April 2006, pp. 225-256 (pdf)

Ideally, a software project commences with requirements gathering and specification, reaches its major milestone with system implementation and delivery, and then continues, possibly indefinitely, into an operation and maintenance phase. The software system's architecture is in many ways the linchpin of this process: it is supposed to be an effective reification of the system's technical requirements and to be faithfully reflected in the system's implementation. Furthermore, the architecture is meant to guide system evolution, while also being updated in the process. However, in reality developers frequently deviate from the architecture, causing architectural erosion, a phenomenon in which the initial, “as documented” architecture of an application is (arbitrarily) modified to the point where its key properties no longer hold. Architectural recovery is a process frequently used to cope with architectural erosion whereby the current, “as implemented” architecture of a software system is extracted from the system's implementation. In this paper we propose a light-weight approach to architectural recovery, called Focus, which has three unique facets. First, Focus uses a system's evolution requirements to isolate and incrementally recover only the fragment of the system's architecture affected by the evolution. In this manner, Focus allows engineers to direct their primary attention to the part of the system that is immediately impacted by the desired change; subsequent changes will incrementally uncover additional parts of the system's architecture. Second, in addition to software components, which are the usual target of existing recovery approaches, Focus also recovers the key architectural notions of software connector and architectural style. Finally, Focus not only recovers a system's architecture, but may in fact rearchitect the system.
We have applied and evaluated Focus in the context of several off-the-shelf applications and architectural styles to date. We discuss its key strengths and point out several open issues that will frame our future work.

Added June 10th, 2008


USC-CSSE-2006-632

Barry Boehm, "A Theory and Process for Realizing Successful Systems," INSIGHT INCOSE, Volume 8, Issue 2, March 2006, pp. 11-12 (pdf)

Three key themes in the INCOSE Technical Vision [Crisp et al., 2005] are:

• The INCOSE definition of Systems Engineering as "an interdisciplinary approach and means to enable the realization of successful systems."
• The objective of providing an underlying theory of systems engineering.
• The objective of determining the distinguishing intellectual content of systems engineering as compared to other engineering disciplines.

The paper introduces the essentials of an initial underlying theory of systems engineering. Its Fundamental System Success Theorem provides necessary and sufficient conditions for a system to be successful. These conditions involve determining and reconciling the value propositions of the system's success-critical stakeholders. Their elaboration leads to a System Success Realization Theorem, and a process that involves other components of the theory, including utility theory, dependency theory, decision theory, and control theory. The theory's emphasis on stakeholder value provides a key distinction between systems engineering and the essentially value-neutral orientation of other engineering disciplines.

Added July 25th, 2008


USC-CSSE-2006-631

Gan Wang, Jo Ann Lane, Ricardo Valerdi, Barry Boehm, "Towards a Work Breakdown Structure for Net Centric System of Systems Engineering and Management," 16th INCOSE Symposium, Orlando, FL, July 2006 (pdf)

As the systems engineering industry sees an increasing focus on the lifecycle development, acquisition, and sustainment of net-centric Systems of Systems (SoS) and Families of Systems (FoS), organizations find the need to evolve current processes and tools to better handle the increased scope, scale, and complexity of these efforts. One such tool, the Work Breakdown Structure (WBS), is important in the planning and execution of program activities as the requirements and goals of the program evolve. This paper provides an overview of the limitations of current WBSs with respect to SoS efforts and presents a proposed WBS structure that more adequately reflects the evolving processes and cross-organizational complexities.

Added May 12th, 2008


USC-CSSE-2006-630

Barry Boehm, Hasan Kitapci, "The WinWin Approach: Using a Requirements Negotiation Tool for Rationale Capture and Use," Rationale Management in Software Engineering, Springer Berlin / Heidelberg, Part 2, 2006, pp. 173-190 (pdf)

A highly cost-effective approach for rationale capture and management is to provide automated support, and capture the resulting artifacts of the process by which software and system requirements and solutions are negotiated. The WinWin process model, equilibrium model, and collaborative negotiation tool provide capabilities for capturing the artifacts. The MBASE software process model provides an approach for using and updating the rationale artifacts and process to keep them in a win-win state. Supporting requirements negotiation with attached rationale can have a high impact on all phases of development by enabling much better context for change impact analysis as increasingly frequent requirements changes arrive. The WinWin approach involves having a system's success-critical stakeholders participate in a negotiation process so they can converge on a mutually satisfactory or win-win set of requirements. The WinWin framework in essence captures stakeholder-oriented objectives, options, and constraints in the form of a decision rationale.

Added April 29th, 2008


USC-CSSE-2006-629

Barry Boehm, Apurva Jain, "A Value-Based Software Process Framework," Lecture Notes in Computer Science, Springer Berlin / Heidelberg, Volume 3966/2006, pp. 1-10 (pdf)

This paper presents a value-based software process framework that has been derived from the 4+1 theory of value-based software engineering (VBSE). The value-based process framework integrates the four component theories (dependency, utility, decision, and control) with the central theory W, and organizes them into a 7-step process guide for practicing value-based software engineering. We also illustrate the application of the process framework to a supply chain organization through a case study analysis.

Added April 21st, 2008


USC-CSSE-2006-628

Zhihao Chen, Daniel Port, Yue Chen, Barry Boehm, "Evolving an Experience Base for Software Process Research," Lecture Notes in Computer Science, Springer Berlin / Heidelberg, Volume 3840/2006, pp. 433-448 (pdf)

Since 1996 the USC Center for Software Engineering has been accumulating a large amount of software process experience through many real-client software engineering projects. Through the application of the Experience Factory approach, we have collected and evolved this experience into an experience base (eBASE) that has been leveraged successfully for empirically based software process research. Through eBASE we have realized tangible benefits in automation, organizational learning, and strategic advantages for software engineering research. We share our rationale for creating and evolving eBASE, give examples of how eBASE has been used in recent process research, discuss its current limitations and challenges, and describe what we hope to achieve with it in the future.

Added April 16th, 2008


USC-CSSE-2006-627

Raymond Madachy, Barry Boehm, Jo Ann Lane, "Spiral Lifecycle Increment Modeling for New Hybrid Processes," Lecture Notes in Computer Science, Springer Berlin / Heidelberg, Volume 3966/2006, pp. 167-177 (pdf)

The spiral lifecycle is being extended to address new challenges for Software-Intensive Systems of Systems (SISOS), such as coping with rapid change while simultaneously assuring high dependability. A hybrid plan-driven and agile process has been outlined to address these conflicting challenges with the need to rapidly field incremental capabilities. A system dynamics model has been developed to assess the incremental hybrid process and support project decision-making. It estimates cost and schedule for multiple increments of a hybrid process that uses three specialized teams. It considers changes due to external volatility and feedback from user-driven change requests, and dynamically re-estimates and allocates resources in response to the volatility. Deferral policies and team sizes can be experimented with, and it includes tradeoff functions between cost and the timing of changes within and across increments, length of deferral delays, and others. Both the hybrid process and simulation model are being evolved on a very large scale incremental project and other potential pilots.

Added April 16th, 2008


USC-CSSE-2006-626

Barry Boehm, "A View of 20th and 21st Century Software Engineering," Proceedings of the 28th International Conference on Software Engineering (ICSE 2006), 2006, pp. 12-29 (pdf)

George Santayana's statement, "Those who cannot remember the past are condemned to repeat it," is only half true. The past also includes successful histories. If you haven't been made aware of them, you're often condemned not to repeat their successes.

In a rapidly expanding field such as software engineering, this happens a lot. Extensive studies of many software projects such as the Standish Reports offer convincing evidence that many projects fail to repeat past successes.

This paper tries to identify at least some of the major past software experiences that were well worth repeating, and some that were not. It also tries to identify underlying phenomena influencing the evolution of software engineering practices that have at least helped the author appreciate how our field has gotten to where it has been and where it is.

A counterpart Santayana-like statement about the past and future might say, "In an era of rapid change, those who repeat the past are condemned to a bleak future." (Think about the dinosaurs, and think carefully about software engineering maturity models that emphasize repeatability.)

This paper also tries to identify some of the major sources of change that will affect software engineering practices in the next couple of decades, and identifies some strategies for assessing and adapting to these sources of change. It also makes some first steps towards distinguishing relatively timeless software engineering principles that are risky not to repeat, and conditions of change under which aging practices will become increasingly risky to repeat.

Added November 9th, 2007


USC-CSSE-2006-625

Donald J. Reifer, Barry Boehm, "Providing Incentives for Spiral Developments: An Award Fee Plan," Defense Journal, Supplemental Issue, Volume 12, Number 1, 2006 (pdf)

This article describes a set of award fee criteria and an award fee process and plan that enable buyers to provide suppliers with incentives for using evolutionary acquisition and spiral development approaches when developing large-scale, software-intensive systems per DoD Directive 5000.1 and DoD Instruction 5000.2. Most Senior Program Managers agree that spiral development is a good idea; however, many quickly become confused when trying to provide contractual incentives for large system acquisitions. To reduce this confusion, the authors have developed an award fee plan that Program Managers can use to stimulate on-budget, on-schedule, and technically sound performance by supplier teams who are pursuing system development and deployment under contract to the government or a Lead System Integrator.

Added November 9th, 2007


USC-CSE-2006-622

Dan Wu, "Security Functional Requirements Analysis for Developing Secure Software," Qualifying Exam Report (pdf)

In the past decade, the usage of Commercial Off The Shelf (COTS) products has increased significantly in building software systems. The empirical results at the Center for Systems and Software Engineering (CSSE) reveal that the percentage of COTS Based Applications (CBA) in CSSE e-Services projects increased from 28% in 1997 to 70% in 2002 [Boehm et al. 2002], which generally matches the Standish Group’s survey results for the IT field at large [Standish 2001].

At the same time, according to US Computer Emergency Response Team statistics, the number of annually published COTS product vulnerabilities also increased dramatically, from 417 in 1997 to 5990 in 2005 [CERT Statistics]. Today, COTS security has become more important than ever for many organizations whose daily business relies heavily upon healthy IT infrastructures. Faced with often limited IT resources and fast-changing internet threats, the ability to prioritize security practices correctly and efficiently has become a critical success factor for every modern organization.

Added December 6th, 2006


USC-CSE-2006-621

Yue Chen, "Stakeholder/Value Driven Security Threat Modeling for COTS Based System," Qualifying Exam Report (pdf)

In the past decade, the usage of Commercial Off The Shelf (COTS) products has increased significantly in building software systems. The empirical results at the Center for Systems and Software Engineering (CSSE) reveal that the percentage of COTS Based Applications (CBA) in CSSE e-Services projects increased from 28% in 1997 to 70% in 2002 [Boehm et al. 2002], which generally matches the Standish Group’s survey results for the IT field at large [Standish 2001]. At the same time, according to US Computer Emergency Response Team statistics, the number of annually published COTS product vulnerabilities also increased dramatically, from 417 in 1997 to 5990 in 2005 [CERT Statistics]. Today, COTS security has become more important than ever for many organizations whose daily business relies heavily upon healthy IT infrastructures. Faced with often limited IT resources and fast-changing internet threats, the ability to prioritize security practices correctly and efficiently has become a critical success factor for every modern organization.

Added December 6th, 2006


USC-CSE-2006-619

Barry Boehm, Apurva Jain, "A Value-Based Theory of Systems Engineering," Proceedings, INCOSE 2006 (pdf)

The INCOSE definition of “systems engineering” is “an interdisciplinary approach and means to enable the realization of successful systems.” The Value-Based Theory of Systems Engineering presents necessary and sufficient conditions for realizing a successful system and elaborates them into an executable process. The theory and process are illustrated on a supply-chain system example, and evaluated with respect to criteria for a good theory.

Added April 16, 2008


USC-CSE-2006-618

Paul Carlock, Jo Ann Lane, "System of Systems Enterprise Systems Engineering, the Enterprise Architecture Management Framework, and System of Systems Cost Estimation," 21st International Forum on COCOMO and Software Cost Modeling (pdf)

Today's need for more complex, more capable systems in a short timeframe is leading more organizations toward the integration of existing systems, Commercial-Off-the-Shelf (COTS) products, and new systems into network-centric, knowledge-based Systems of Systems (SoSs). With this development approach, system development processes to define the new architecture, identify sources to either supply or develop the required components, and eventually integrate and test these high-level components are evolving and are being referred to as SoS Engineering (SoSE). Recent reports indicate that SoSE activities are considerably different from the more traditional systems engineering (SE) activities, and various researchers are working to describe these differences in SoSE process models. One of these models is the SoS Enterprise Systems Engineering (ESE) and associated Enterprise Architecture Management Framework (EAMF) developed by Dr. Paul Carlock and Robert Fenton. In addition, efforts are underway at the University of Southern California (USC) Center for Systems and Software Engineering (CSSE) to develop a cost model to estimate the effort required to define, architect, and integrate component systems into an SoS framework. This paper provides an overview of the SoS ESE and EAMF, provides an overview of the USC CSSE SoSE cost model, attempts to evaluate how well the EAMF captures the unique aspects of SoSE identified in recent SoSE studies, and shows how the cost model addresses some of the unique aspects of SoSE identified in both the EAMF and recent SoSE studies.

Added October 30th, 2006


USC-CSE-2006-616

Ray Madachy, Barry Boehm, Dan Wu, "Comparison and Assessment of Cost Models for NASA Flight Projects," 21st International Forum on COCOMO and Software Cost Modeling (pdf)

This research is assessing the strengths, limitations, and improvement needs of existing cost, schedule, quality, and risk models for critical flight software for the NASA Ames project Software Risk Advisory Tools. This particular report focuses only on the cost model aspect and supersedes the cost model sections in a previously delivered draft report [USC-CSE 2006].

A comparative survey and analysis of cost models used by NASA flight projects is described. The models include COCOMO II, SEER-SEM and True S. We look at evidence of accuracy, the need for calibration, and the use of knowledge bases to reflect specific domain factors. The models are assessed against a common database of relevant NASA projects. The overriding primary focus is on flight projects, but part of the work also looks at related sub-domains for critical NASA software. They are assessed as applicable in some of the following analyses. This report also addresses the critical NASA domain factors of high reliability and high complexity, and how the cost models address them.

Added October 28th, 2006


USC-CSE-2006-614

Jo Ann Lane, Barry Boehm, "System-of-Systems Cost Estimation: Analysis of Lead System Integrator Engineering Activities," Inter-Symposium 2006, The International Institute for Advanced Studies in Systems Research and Cybernetics (pdf)

As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know "how much effort" and "how long" to implement the SoS. In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations. This paper provides results of research conducted to determine the types of SoS Lead System Integrator (LSI) activities and how these differ from the more traditional systems engineering activities described in Electronic Industries Alliance (EIA) 632 (“Processes for Engineering a System”). This research further analyzed effort and schedule issues on “very large” SoS programs to more clearly identify and profile the types of activities performed by the typical LSI and to determine organizational characteristics that significantly impact overall success and productivity of the LSI effort. The results of this effort have been captured in a reduced-parameter version of the Constructive SoS Integration Cost Model (COSOSIMO) that estimates LSI SoS Engineering (SoSE) effort.

Keywords: System of Systems, System of Systems Engineering, Lead System Integrator, Cost Model.

Added September 19th, 2006


USC-CSE-2006-613

Barry Boehm, Jo Ann Lane, "21st Century Processes for Acquiring 21st Century Software-Intensive Systems of Systems," CrossTalk, May 2006 (pdf)

Our experiences in helping to define, acquire, develop, and assess 21st century software-intensive system of systems (SISOS) have taught us that traditional 20th century acquisition and development processes do not work well on such systems. This article summarizes the characteristics of such systems, and indicates the major problem areas in using traditional processes on them. We also present new processes that we and others have been developing, applying, and evolving to address 21st century SISOS. These include extensions to the risk-driven spiral model to cover broad (many systems), deep (many supplier levels), and long (many increments) acquisitions needing rapid fielding, high assurance, adaptability to high change traffic, and complex interactions with evolving Commercial Off-the-Shelf (COTS) products, legacy systems, and external systems.

Added September 19th, 2006


USC-CSE-2006-611

Jo Ann Lane, Barry Boehm, "Synthesis of Existing Cost Models to Meet System of Systems Needs," Proceedings of Conference on Systems Engineering Research (CSER), 2006 (pdf)

Today’s need for more complex, more capable systems in a short timeframe is leading more organizations toward the integration of existing systems into network-centric, knowledge-based systems-of-systems (SoS). Software and system cost model tools to date have focused on the software and system development activities of a single software system, but none adequately estimates the integration of multiple systems into an SoS. This paper presents an overview of the activities that must be included in an SoS cost model and describes an approach for estimating SoS effort using the Constructive Cost Model (COCOMO) suite of estimation tools to estimate SoS Lead System Integrator (LSI) effort as well as the total SoS development effort.
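The compositional view in this abstract, LSI integration effort layered on top of the constituent systems' development effort, can be illustrated with a toy calculation. The nominal COCOMO II effort form PM = A * Size^E is real, but the constants below are generic defaults and the flat LSI overhead factor is a hypothetical placeholder, not the paper's actual model:

```python
# Illustrative sketch of composing SoS effort from constituent-system
# efforts plus a Lead System Integrator (LSI) term. The lsi_overhead
# value and the additive structure are hypothetical placeholders.

def cocomo_effort(ksloc, a=2.94, e=1.10):
    """Nominal COCOMO II-style effort in person-months: PM = A * Size^E."""
    return a * ksloc ** e

def sos_effort(component_ksloc, lsi_overhead=0.25):
    """Total SoS effort: sum of component-system efforts plus a
    hypothetical LSI term proportional to the combined effort."""
    component_pm = sum(cocomo_effort(k) for k in component_ksloc)
    return component_pm + lsi_overhead * component_pm

# Three hypothetical component systems of 50, 120, and 30 KSLOC.
total_pm = sos_effort([50, 120, 30])
```

The point of the sketch is only the structure: SoS estimation must account for integration effort beyond the sum of the individual system estimates.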

Added September 19th, 2006


USC-CSE-2006-609

Yue Chen, Barry Boehm, Luke Sheppard, "Measuring Security Investment Benefit for COTS Based Systems - A Stakeholder Value Driven Approach," 29th International Conference on Software Engineering, September 8th, 2006 (pdf)

This paper presents an improved version of the Threat Modeling method based on Attacking Path Analysis (T-MAP), which quantifies security threats by calculating the total severity weights of the relevant attacking paths for Commercial Off-The-Shelf (COTS) based systems.

Added September 9th, 2006


USC-CSE-2006-608

Jesal Bhuta, "A Framework for Intelligent Assessment and Resolution of Commercial Off-The-Shelf (COTS) Product Incompatibilities" (pdf)

Boehm and Scherlis [Boehm and Scherlis 1992] introduced megaprogramming, the practice of software construction in a component-oriented fashion heavily based on software reuse. It is an effective technique for reducing long-term software development cost, improving software quality, and reducing development time. One critical factor that influences the success of megaprogramming is the effort required to actually reuse available software components. This process entails identifying the requirements to be satisfied by the component, selecting a component that satisfies these requirements, and using it appropriately in the system. Past reuse attempts have faced challenges in identifying the amount of effort required to develop reusable components, estimating the number of components to reuse, effectively selecting these components, and adapting the components to differences in domain and/or architectural assumptions.

Added June 16th, 2006


USC-CSE-2006-607

Yue Chen, Barry Boehm, Luke Sheppard, "Value Driven Security Threat Modeling Based on Attacking Path Analysis," 40th Hawaii International Conference on System Sciences, June 15, 2006 (pdf)

Security threat modeling has been an important but difficult topic. This paper presents a novel quantitative threat modeling method, the Threat Modeling method based on Attacking Path Analysis (T-MAP), which quantifies security threats by calculating the total severity weights of relevant attacking paths for Commercial Off The Shelf (COTS) systems. Compared to existing approaches, T-MAP is sensitive to an organization's business value priorities and IT environment. It distills the technical details of thousands of software vulnerabilities into high-level, management-friendly numbers. T-MAP can help system designers evaluate the security performance of COTS systems and analyze the effectiveness of security practices. In the case study, we demonstrate the steps of using T-MAP to analyze the cost-effectiveness of IT system patching and upgrades in improving security. In addition, we introduce a software tool that automates the T-MAP framework.
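The core T-MAP idea described in this abstract, summing severity weights over the relevant attack paths, can be sketched minimally. The rating scheme, the product form of a path weight, and the example values here are illustrative assumptions, not the paper's calibrated model:

```python
# Minimal sketch of the T-MAP idea: each attack path gets a severity
# weight, and a system's total threat is the sum of the weights of its
# relevant attack paths. Ratings and structure below are illustrative.

def path_weight(ratings):
    """Weight of one attack path as the product of its ratings
    (e.g. asset business value, vulnerability severity,
    exploitability), each normalized to [0, 1]."""
    w = 1.0
    for r in ratings:
        w *= r
    return w

def total_threat(attack_paths):
    """Total severity weight over all relevant attack paths."""
    return sum(path_weight(p) for p in attack_paths)

# Hypothetical example: patching removes one path and lowers another's
# exploitability rating, so the total threat number drops.
before = total_threat([[0.9, 0.8, 0.7], [0.6, 0.5, 0.9]])
after = total_threat([[0.9, 0.8, 0.2]])
```

Comparing the total before and after a countermeasure is what makes the metric usable for the cost-effectiveness analysis the abstract mentions.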

Added May 11th, 2006


USC-CSE-2006-605

Dan Wu, Ye Yang, "Towards An Approach for Security Risk Analysis in COTS Based Development," Proceedings of Software Process Workshop/Workshop on Software Process Simulation 2006, Shanghai, China, May 2006 (pdf)

Due to resource limitations, more and more companies tend to use secure products as COTS to develop their secure systems. The security concerns add more complexity as well as potential risks to the COTS selection process, and it is always a great challenge for developers to make the selection decisions. In this paper, we provide a method for security risk analysis in COTS based development (CBD) based on Common Criteria and our previous work in identifying general risk items for CBD. The research result provides useful insights for developers in identifying security risks, so that it can be used to aid COTS selection decisions.

Added May 10th, 2006


USC-CSE-2006-603

Barry Boehm, "Some Future Trends And Implications for Systems And Software Engineering Processes," Systems Engineering, Wiley Periodicals, Inc., Volume 9, Issue 1, 2006, pp. 1-19 (pdf)

In response to the increasing criticality of software within systems and the increasing demands being put onto 21st century systems, systems and software engineering processes will evolve significantly over the next two decades. This paper identifies eight relatively surprise-free trends - the increasing interaction of software engineering and systems engineering; increased emphasis on users and end value; increased emphasis on systems and software dependability; increasingly rapid change; increasing global connectivity and need for systems to interoperate; increasingly complex systems of systems; increasing needs for COTS, reuse, and legacy systems and software integration; and computational plenty. It also identifies two wild card trends: increasing software autonomy and combinations of biology and computing. It then discusses the likely influences of these trends on systems and software engineering processes between now and 2025, and presents an emerging scalable spiral process model for coping with the resulting challenges and opportunities of developing 21st century software-intensive systems and systems of systems.

Added May 8th, 2006


USC-CSE-2006-602

Sam Malek, Nenad Medvidovic, Chiyoung Seo, Marija Mikic-Rakic, "A User Centric Approach for Improving A Distributed Software System's Deployment Architecture" (pdf)

The quality of service (QoS) provided by a distributed software system depends on many system parameters, such as network bandwidth, reliability of links, and frequencies of software component interactions. A distributed system's deployment architecture can have a significant impact on its QoS. Furthermore, the deployment architecture will influence user satisfaction, as users typically have varying QoS preferences for the system services they access. Finding a deployment architecture that will maximize the users' overall satisfaction is a challenging, multi-faceted problem. In this paper, we present a framework and a set of generic algorithms that can be tailored and instantiated to address this problem. We also provide an evaluation of our approach by applying it to a large number of representative scenarios.
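The deployment problem described in this abstract, placing components on hosts to improve an overall satisfaction measure, can be illustrated with a toy greedy pass. The utility function, host names, and capacity data below are hypothetical, and the paper's generic algorithms are more general than this single greedy strategy:

```python
# Illustrative greedy sketch of the deployment problem: assign each
# software component to the host that maximizes a marginal utility.
# The utility function and all data here are hypothetical.

def greedy_deploy(components, hosts, utility):
    """Assign each component to the currently best host by utility."""
    deployment = {}
    for comp in components:
        best = max(hosts, key=lambda h: utility(comp, h, deployment))
        deployment[comp] = best
    return deployment

# Hypothetical utility: prefer the host with the most remaining capacity.
capacity = {"h1": 10, "h2": 4}

def utility(comp, host, deployment):
    used = sum(1 for h in deployment.values() if h == host)
    return capacity[host] - used

plan = greedy_deploy(["ui", "db", "cache"], ["h1", "h2"], utility)
```

A real instantiation would fold per-user QoS preferences and link reliability into the utility, which is exactly where the framework's tailoring comes in.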

Added May 8th, 2006


USC-CSE-2006-601

Chris A. Mattmann, "Software Connectors for Highly Distributed and Voluminous Data Intensive Systems," PhD Qualifying Exam Report (pdf)

We describe a research agenda for selecting combinations of software connectors in order to quantifiably satisfy different use-case scenarios for large-volume data distribution. We outline the necessity for an appropriate categorization framework that allows a user to confidently select among the different distribution connectors available. The categorization framework is based on a classification of distribution connectors along eight key dimensions of data distribution. Finally, we describe our approach for testing and validating quantifiable functional properties of data distribution connectors, and their ability to satisfy specified data distribution scenarios.

Added February 6th, 2006


USC-CSE-2006-600

Ed Colbert, Dan Wu, Yue Chen, Barry Boehm, "Cost Estimation for Secure Software & Systems," ISPA 2006 (pdf)

The Center for Software Engineering (CSE) at the University of Southern California (USC) is extending the widely used Constructive Cost Model version 2 (COCOMO II) [Boehm, Abts, et al. 2000] to account for developing secure software. CSE is also developing a model for estimating the cost to acquire secure systems, and is evaluating the effect of security goals on other models in the COCOMO family. We will present the work to date.
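The kind of extension this abstract describes can be sketched against the standard COCOMO II Post-Architecture equation, PM = A * Size^E * product(EM_i) with E = B + 0.01 * sum(SF_j). The constants A = 2.94 and B = 0.91 are the published COCOMO II.2000 nominal values, but the security cost driver and its 1.5 rating below are hypothetical illustrations, not the calibrated driver the paper develops:

```python
# COCOMO II Post-Architecture effort with a hypothetical security
# effort multiplier appended to the usual cost drivers.

def cocomo2_effort(ksloc, effort_multipliers, a=2.94, b=0.91, scale_factors=()):
    """PM = A * Size^E * product(EM_i), where E = B + 0.01 * sum(SF_j)."""
    e = b + 0.01 * sum(scale_factors)
    pm = a * ksloc ** e
    for em in effort_multipliers:
        pm *= em
    return pm

# Hypothetical security driver: a rating above 1.0 models the extra
# effort of developing to a higher assurance level. The value 1.5 is
# illustrative, not a calibrated COCOMO II constant.
baseline = cocomo2_effort(100, effort_multipliers=[1.0])
secure = cocomo2_effort(100, effort_multipliers=[1.5])
```

Because effort multipliers are applied multiplicatively, calibrating a single security driver is enough to scale the whole estimate for secure development.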

Added January 16th, 2006


Copyright 2008 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.