University of Southern California
Center for Systems and Software Engineering

Technical Reports

USC-CSSE-2008-840

Yuriy Brun, "Nondeterministic Polynomial Time Factoring in the Tile Assembly Model," A subsequent version has been published in Theoretical Computer Science, Volume 395, Issue 1, April 17, 2008, pp. 3-23 (pdf)

Formalized study of self-assembly has led to the definition of the tile assembly model.  Previously, I presented ways to compute arithmetic functions, such as addition and multiplication, in the tile assembly model: a highly distributed parallel model of computation that may be implemented using molecules or a large computer network such as the Internet.  Here, I present tile assembly model systems that factor numbers nondeterministically using a constant number of distinct components.  The computation takes advantage of nondeterminism, but theoretically, each of the nondeterministic paths is executed in parallel, yielding the solution in time linear in the size of the input, with high probability.  I describe mechanisms for finding the successful solutions among the many parallel executions, explore bounds on the probability of such a nondeterministic system succeeding, and prove that this probability can be made arbitrarily close to 1.
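
To make the final claim concrete (a sketch under the stated independence assumption, not the paper's full analysis): if each nondeterministic assembly succeeds with probability p, then k independent parallel assemblies succeed with probability

    \Pr[\text{success}] = 1 - (1 - p)^{k} \ge 1 - e^{-kp},

so choosing k \ge \ln(1/\varepsilon)/p pushes the success probability above 1 - \varepsilon, i.e., arbitrarily close to 1.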

Added April 11th, 2007


USC-CSSE-2008-839

Chiyoung Seo, George Edwards, Sam Malek, Nenad Medvidovic, "A Framework for Estimating the Impact of a Distributed Software System’s Architectural Style on its Energy Consumption," Seventh Working IEEE/IFIP Conference on Software Architecture (WICSA 2008), pp. 277-280 (pdf)

The selection of an architectural style for a given software system is an important factor in satisfying its quality requirements. In battery-powered environments, such as mobile and pervasive systems, efficiency with respect to energy consumption has increasingly been recognized as an important quality attribute. In this paper, we present a framework that (1) facilitates early estimation of the energy consumption induced by an architectural style in a distributed software system, and (2) consequently enables an engineer to use energy consumption estimates along with other quality attributes in determining the most appropriate style for a given distributed application. We have applied the framework to five distributed system styles to date, and have evaluated it for precision and accuracy using a particular middleware platform that supports the implementation of those styles. In several application scenarios, our framework exhibited excellent precision, in that it was consistently able to correctly rank the five styles and estimate the relative differences in their energy consumption. Moreover, the framework has also proven to be accurate: its estimates were within 7% of the actual measured energy consumption of the different style implementations.
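
As an illustration of the kind of early estimate such a framework produces, the sketch below decomposes a style's energy cost into computational and communication parts and ranks candidate styles; all names, workloads, and coefficients are hypothetical stand-ins, not the paper's calibrated framework.

    # Hypothetical sketch: a style's energy cost as computation plus communication.
    # Coefficients and workloads are illustrative, not the paper's calibration.

    def estimate_energy(components, interactions):
        """components: (instructions_executed, joules_per_instruction) pairs;
        interactions: (bytes_transmitted, joules_per_byte) pairs."""
        compute = sum(n * j for n, j in components)
        communicate = sum(b * j for b, j in interactions)
        return compute + communicate

    # Rank two candidate styles for the same workload (illustrative numbers).
    pub_sub = estimate_energy([(2.0e9, 1.2e-9)], [(5.0e6, 3.0e-7)])
    client_server = estimate_energy([(2.4e9, 1.2e-9)], [(2.0e6, 3.0e-7)])
    print(sorted([("publish-subscribe", pub_sub), ("client-server", client_server)],
                 key=lambda style: style[1]))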

Added November 29th, 2007


USC-CSSE-2008-838

Ye Yang, Qi Li, Mingshu Li, Qing Wang, "An Empirical Analysis on Distribution Patterns of Software Maintenance Effort," 24th IEEE International Conference on Software Maintenance (ICSM 2008), Beijing, China, September 28 - October 4, 2008 (pdf)

The distribution of effort across the software engineering process is a basis for more realistic software project planning. This paper reports empirical results on the activity effort distribution patterns of a series of industrial software maintenance projects. The results show that, with respect to different influencing factors, the projects demonstrate large variations in their activity effort distribution, which necessitates appropriate adjustments to strategic planning.

Added April 20th, 2009


USC-CSSE-2008-837

Qi Li, Qing Wang, Ye Yang, Mingshu Li, "Reducing Biases in Individual Software Effort Estimations: A Combining Approach," Empirical Software Engineering and Measurement (ESEM 2008), Kaiserslautern, Germany, October 9-10, 2008 (pdf)

Software effort estimation techniques abound, each with its own set of advantages and disadvantages, and none proves to be the single best answer. Combining estimates is an appealing approach: avoiding the difficult problem of choosing the single “best” technique, it instead asks which techniques would help to improve accuracy, assuming that each has something to contribute. In this paper, we first introduce the systematic “external” combining idea into the field of software effort estimation and estimate software effort using the Optimal Linear Combining (OLC) method in an experimental study based on a real-life data set. The results indicate that combining different techniques can significantly improve the accuracy and consistency of software effort estimation by making full use of the information provided by all components, even the markedly “worse” ones.
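
One standard way to realize such “external” combining is the minimum-variance optimal linear combination (Bates-Granger weights), sketched below with numpy on toy data; the paper's exact OLC formulation may differ in detail.

    import numpy as np

    # Toy relative errors of three estimation techniques on four past projects.
    E = np.array([[ 0.10, -0.20,  0.35],
                  [-0.05,  0.15,  0.40],
                  [ 0.12, -0.10,  0.30],
                  [ 0.02,  0.25,  0.45]])

    cov = np.cov(E, rowvar=False)   # error covariance across techniques
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    w /= ones @ w                   # minimum-variance weights, summing to 1

    estimates = np.array([100.0, 120.0, 80.0])  # person-months from each technique
    print(w, w @ estimates)         # weights and the combined estimate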

Added April 20th, 2009


USC-CSSE-2008-836

Ye Yang, Mei He, Mingshu Li, Qing Wang, Barry Boehm, "Phase Distribution of Software Development Effort," Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, Kaiserslautern, Germany, 2008, pp. 61-69 (pdf)

Effort distribution by phase or activity is an important but often overlooked step in the cost estimation process. Poor effort allocation is among the major root causes of rework due to insufficiently resourced early activities. This paper provides results of an empirical study on the phase effort distribution data of 75 industry projects from the China Software Benchmarking Standard Group (CSBSG) database. The phase effort distribution patterns and variation sources are presented, and analysis results show some consistency in the effects of software size and team size on code and test phase distribution variations, and some considerable deviations in the requirements, design, and transition phases, compared with the recommendations in the COCOMO model. Finally, this paper discusses the major findings and threats to validity and presents general guidelines for directing effort allocation. Empirical findings from this study are beneficial for stimulating discussions and debates to improve cost estimation and benchmarking practices.

Added February 5th, 2009


USC-CSSE-2008-835

Ali Afzal Malik, Barry Boehm, A. Winsor Brown, "Predicting Understandability of a Software Project Using COCOMO II Model Drivers," 23rd International Forum on COCOMO and Systems/Software Cost Modeling, Los Angeles, October 2008 (pdf)

This paper presents the results of an empirical study undertaken to investigate the utility of COCOMO II model drivers in predicting the understandability of a software project. Understandability is defined as the degree of clarity of the purpose and requirements of a software system to the developers of that system at the end of the Inception phase. COCOMO II scale factors and cost drivers relevant for prediction are shortlisted, and a weighted-sum formula relating these model drivers to understandability is derived through voting. The utility of this formula is judged by examining the COCOMO II model drivers of 24 real-client, MS-student, team projects done at USC. It is found that the weighted-sum formula correctly predicts the understandability of a software project in more than 80% of the cases, suggesting a strong relationship between the shortlisted COCOMO II model drivers and the understandability of a software project. This objective way of measuring the understandability of a software project can be extremely useful in determining the time when it is safe to minimize the effort spent on requirements engineering activities.
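
A minimal sketch of the weighted-sum idea follows; the driver shortlist, weights, rating scale, and threshold are all illustrative placeholders rather than the values derived in the paper.

    # Hypothetical weighted-sum predictor of project understandability from
    # COCOMO II model driver ratings (1 = very low .. 5 = very high).
    WEIGHTS = {"PREC": 0.30, "TEAM": 0.25, "RELY": 0.15, "CPLX": 0.30}  # sum to 1

    def understandability(ratings):
        return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

    project = {"PREC": 4, "TEAM": 5, "RELY": 3, "CPLX": 2}
    score = understandability(project)
    print(score, "understandable" if score >= 3.0 else "more requirements work needed")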

Added February 2nd, 2009


USC-CSSE-2008-833

Tim Menzies, Oussama Elrawas, Barry Boehm, Raymond Madachy, Jairus Hihn, Daniel Baker, Karen Lum, "Accurate Estimates without Calibration?" Lecture Notes in Computer Science, Making Globally Distributed Software Development a Success Story, Volume 5007, 2008, Springer Berlin / Heidelberg, pp. 210-221 (pdf)

Most process models calibrate their internal settings using historical data. Collecting this data is an expensive, tedious, and often incomplete process.

Is it possible to make accurate software process estimates without historical data? Suppose much of the uncertainty in a model comes from a small subset of the model variables. If so, then after (a) ranking the variables by their ability to constrain the output and (b) applying a small number of the top-ranked variables, it should be possible to (c) make stable predictions in the constrained space.

To test that hypothesis, we combined a simulated annealer (to generate random solutions) with a variable ranker. The results were quite dramatic: in one of the studies in this paper, we found process options that reduced the median and variance of the effort estimates by a factor of 20. In ten case studies, we show that the estimates generated in this manner are usually similar to those produced by standard local calibration.

Our conclusion is that while it is always preferable to tune models to local data, it is possible to learn process control options without that data.
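
The rank-then-constrain idea can be seen in a few lines of Monte Carlo simulation; the toy "process model" below is just a product of uniformly sampled effort multipliers with made-up ranges, not the models studied in the paper.

    import random, statistics

    # Illustrative ranges for five uncertain process variables (effort multipliers).
    RANGES = {"pmat": (0.7, 1.3), "resl": (0.8, 1.4), "team": (0.9, 1.1),
              "tool": (0.95, 1.05), "site": (0.97, 1.03)}

    def simulate(fixed, n=10000):
        """Median and variance of effort with some variables fixed, others sampled."""
        out = []
        for _ in range(n):
            effort = 100.0
            for v, (lo, hi) in RANGES.items():
                effort *= fixed.get(v, random.uniform(lo, hi))
            out.append(effort)
        return statistics.median(out), statistics.pvariance(out)

    # (a) Rank variables by how much fixing each one constrains the output,
    # (b) apply only the top-ranked options, (c) compare the estimate spread.
    rank = sorted(RANGES, key=lambda v: simulate({v: RANGES[v][0]})[1])
    top = {v: RANGES[v][0] for v in rank[:2]}
    print("unconstrained:", simulate({}), "constrained:", simulate(top))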

Added January 29th, 2009


USC-CSSE-2008-832

Supannika Koolmanojwong, Pongtip Aroonvatanaporn, Itti Charoenthongtrakul, "Incremental Commitment Model Process Guidelines for Software Engineering Class" (pdf)

Effectively communicating a software process model to software engineers is essential in enabling them to understand the overall process as well as specific areas of focus. To help students learn software processes, an Electronic Process Guide (EPG) for the Incremental Commitment Model (ICM) has been developed by the University of Southern California (USC) Center for Systems and Software Engineering (CSSE) and has been evaluated for efficiency and effectiveness by software engineering students. This paper reports the experimental results of using the EPG of the ICM process to develop software systems, compared with the use of traditional paper-based guidelines in the past. The analyses cover both quantitative and qualitative aspects of the software development process, based on the objectives defined by Humphrey and Kellner [8], the process model characteristics defined by Fuggetta [1], the people-oriented process information aspects of Heidrich et al. [6], and students’ performance and feedback.

Added January 28th, 2009


USC-CSSE-2008-831

Barry Boehm, Ricardo Valerdi, "Achievements and Challenges in Cocomo-Based Software Resource Estimation," IEEE Software, Volume 25, Number 5, September 2008, pp. 74-83 (pdf)

This article summarizes major achievements and challenges of software resource estimation over the last 40 years, emphasizing the Cocomo suite of models. Critical issues that have enabled major achievements include the development of good model forms, criteria for evaluating models, methods for integrating expert judgment and statistical data analysis, and processes for developing new models that cover new software development approaches. The article also projects future trends in software development and evolution processes, along with their implications and challenges for future software resource estimation capabilities.
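
For reference, the core estimating form shared by the Cocomo II suite discussed here is

    PM = A \times \mathrm{Size}^{E} \times \prod_i EM_i, \qquad
    E = B + 0.01 \sum_j SF_j,

where effort PM in person-months grows with size (in KSLOC) through an exponent E driven by scale factors SF_j and is adjusted by effort multipliers EM_i (A \approx 2.94 and B \approx 0.91 in the COCOMO II.2000 calibration).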

Added January 26th, 2009


USC-CSSE-2008-830

Ray Madachy, Barry Boehm, "Assessing Quality Processes with ODC COQUALMO," Lecture Notes in Computer Science, Making Globally Distributed Software Development a Success Story, Volume 5007, 2008, Springer Berlin / Heidelberg, pp. 198-209 (pdf)

Software quality processes can be assessed with the Orthogonal Defect Classification COnstructive QUALity MOdel (ODC COQUALMO) that predicts defects introduced and removed, classified by ODC types. Using parametric cost and defect removal inputs, static and dynamic versions of the model help one determine the impacts of quality strategies on defect profiles, cost and risk. The dynamic version provides insight into time trends and is suitable for continuous usage on a project. The models are calibrated with empirical data on defect distributions, introduction and removal rates; and supplemented with Delphi results for detailed ODC defect detection efficiencies. This work has supported the development of software risk advisory tools for NASA flight projects. We have demonstrated the integration of ODC COQUALMO with automated risk minimization methods to design higher value quality processes, in shorter time and with fewer resources, to meet stringent quality goals on projects.
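
In the static version of the model, the defect flow reduces to a simple computation: defects introduced per ODC type are attenuated by each removal activity's detection efficiency. The sketch below uses the model's three removal activity classes but purely illustrative numbers, not the calibrated values.

    # Residual defects per ODC type = introduced * prod(1 - DRE) over activities.
    introduced = {"function": 120, "interface": 80, "timing": 25}
    removal_efficiency = {
        "automated_analysis": {"function": 0.10, "interface": 0.25, "timing": 0.30},
        "peer_reviews":       {"function": 0.40, "interface": 0.30, "timing": 0.20},
        "execution_testing":  {"function": 0.45, "interface": 0.50, "timing": 0.55},
    }

    residual = {}
    for odc_type, n in introduced.items():
        remaining = float(n)
        for activity in removal_efficiency.values():
            remaining *= 1.0 - activity[odc_type]
        residual[odc_type] = remaining
    print(residual)  # expected defects escaping, by ODC type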

Added January 26th, 2009


USC-CSSE-2008-829

Barry Boehm, Jesal Bhuta, "Balancing Opportunities and Risks in Component-Based Software Development," IEEE Software, November-December 2008, Volume 25, Issue 6, pp. 56-63 (pdf)

The increasingly rapid change in information technology makes it essential for software development projects to continuously monitor and adapt to new sources of opportunity and risk. Software projects and organizations can increase their success rates in software development by better assessing and balancing their opportunities and risks. The authors summarize the incremental commitment model (ICM), a process framework for improved project monitoring and decision making based on balancing opportunities and risks. They give an example of how the ICM framework can improve component-based development choices based on assessment of opportunities and risks. They show how different opportunistic solutions result from different stakeholder value propositions. They elaborate on the risks involved in architectural mismatches among components and present a tool called the Integration Studio (iStudio) that enables projects to assess the most common sources of architectural mismatch between components. Finally, they present representative examples of its use.

Added January 26th, 2009


USC-CSSE-2008-828

Barry Boehm, Jo Ann Lane, "A Process Decision Table for Integrated Systems and Software Engineering," Conference on Systems Engineering Research, 2008 (pdf)

The Incremental Commitment Model (ICM), developed in a recent National Research Council study on integrating human factors into the systems development process, organizes systems engineering and acquisition processes in ways that better accommodate the different strengths and difficulties of hardware, software, and human factors engineering approaches. As with other models trying to address a wide variety of situations, its general form is rather complex. However, its risk-driven nature has enabled us to determine a set of ten common risk patterns and organize them into a decision table that can help new projects converge on a process that fits well with their particular process drivers. For each of the ten special cases, the decision table provides top-level guidelines for tailoring the key activities of the ICM, along with suggested lengths between each internal system build and each external system increment delivery. This paper elaborates on each of the ten cases and provides examples of their use.
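
Mechanically, such a decision table is a lookup from a project's dominant risk pattern to top-level process guidance, as in the toy rendering below; the patterns and guidance shown are illustrative placeholders, not the paper's actual ten cases.

    # Toy risk-driven process decision table (entries are hypothetical).
    DECISION_TABLE = {
        "stable requirements, suitable NDI available": ("NDI/COTS-based process", "n/a"),
        "rapid change, small project":                 ("agile increments", "2-4 weeks"),
        "rapid change, high assurance":                ("architected agile", "2-6 months"),
        "multi-owner system of systems":               ("ICM with SoS architecting", "6-18 months"),
    }

    def recommend(risk_pattern):
        process, increment = DECISION_TABLE[risk_pattern]
        return f"{process} (suggested build/increment length: {increment})"

    print(recommend("rapid change, high assurance"))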

Added January 21st, 2009


USC-CSSE-2008-827

Judith Dahmann, Jo Ann Lane, George Rebovich Jr., "A Model of Systems Engineering in a System of Systems Context," Conference on Systems Engineering Research, 2008 (pdf)

Systems engineering is a key enabler of defense system acquisition. Current Department of Defense (DoD) systems engineering policy and guidance focus on the engineering of new systems. At the same time, the defense environment is increasingly characterized by networks of systems which work together to meet user capability needs. Individual systems are no longer considered as individual bounded entities, but rather as components in larger, more variable, ensembles of interdependent systems which interact based on end-to-end business processes and networked information exchange. This paper presents a model of systems engineering which provides a framework for supporting the systems engineer in this systems-of-systems (SoS) environment.

Added January 21st, 2009


USC-CSSE-2008-826

Jo Ann Lane, Barry Boehm, "Modern Tools to Support DoD Software Intensive System of Systems Cost Estimation," SoftwareTech, Volume 11, Number 3, 2008, pp. 4-13 (pdf)

Many Department of Defense (DoD) organizations are attempting to provide new system capabilities through the net-centric integration of existing software-intensive systems into a new system often referred to as a Software-Intensive System of Systems (SISOS). The goal of this approach is to build on existing capabilities to produce new capabilities not provided by the existing systems in a timely and cost-effective manner. Many of these new SISOS efforts, such as the Future Combat Systems, are of a size and complexity unlike their predecessor systems, and cost estimation tools such as the Constructive Cost Model (COCOMO) suite are undergoing significant enhancements to address these challenges. This article describes the unique challenges of SISOS cost estimation and how current tools are changing to support these challenges, as well as ongoing efforts to further support SISOS cost estimation needs. It summarizes key information in DACS Report Number 347336.

Added January 21st, 2009


USC-CSSE-2008-825

Jo Ann Lane, Judith Dahmann, "Process Evolution to Support System of Systems Engineering," Ultra-Large Scale Software-Intensive Systems (ULSSIS) Workshop held in conjunction with the International Conference in Software Engineering, Leipzig, Germany, 2008 (pdf)

One of the current research areas at the University of Southern California (USC) Center for Systems and Software Engineering (CSSE) is the development of large scale, software intensive systems of systems (SoS). These SoS are a type of Ultra-Large-Scale Software-Intensive System (ULSSIS) that have become increasingly prevalent in both government and commercial sectors. Research activities have focused on cost estimation and risk assessment associated with the development and evolution of these systems. As part of this research, USC CSSE has teamed with others to learn how SoS engineering processes are evolving to support the development of these systems. This paper highlights the findings of these research activities.

Added January 21st, 2009


USC-CSSE-2008-824

Judith Dahmann, George Rebovich Jr., Jo Ann Lane, "Systems Engineering for Capabilities," CrossTalk Journal, Volume 21, Number 11, 2008, pp. 4-9 (pdf)

With the increased emphasis on capabilities and networking, the DoD is recognizing the criticality of effective end-to-end performance of systems of systems (SoS) to meet user needs. While acquisition continues to focus on systems, system requirements are increasingly based on assessments of gaps in user capabilities, and in priority areas there is an increasing focus on integration across systems to enable capabilities. Thus, the role of systems engineering (SE) is expanding to the engineering of SoS that provide user capabilities. This article discusses the shape of SoS in the DoD today. It outlines a recent initiative to provide guidance on the application of SE processes to the definition and evolution of SoS.

Added January 21st, 2009


USC-CSSE-2008-823

Jo Ann Lane, Doncho Petkov, Manuel Mora, "Software Engineering and the Systems Approach: A Conversation with Barry Boehm," International Journal of Information Technologies and Systems Approach (JITSA), Volume 1, Issue 2, 2008, pp. 99-103 (pdf)

IJITSA is honored that this issue presents an interview with probably the most significant figure in the field of software engineering since its inception, and one of its founders, Professor Barry W. Boehm. He has published many seminal books and papers that have shaped the foundations of software engineering. We have included in the references just a small sample of his numerous publications addressing some of the fundamental issues in this field in recent years. They cover diverse topics ranging from a comparison of agile development methods and software engineering (Boehm & Turner, 2004) to reflections on enhancing software engineering education (Boehm, 2006c). A thought-provoking review of the evolution of software engineering and its current challenges is presented in Boehm (2006b), while his thoughts on the need to integrate software and systems engineering more closely are reflected in Boehm (2006a) and Boehm and Lane (2006). The questions we asked Professor Boehm relate to his significant contributions to software engineering and enhancing its links to the systems approach.

Added January 21st, 2009


USC-CSSE-2008-822

Joshua Garcia, Daniel Popescu, George Edwards, Nenad Medvidovic, "Identifying Architectural Bad Smells" (pdf)

Certain design fragments in software architectures can have a negative impact on system maintainability. Examples of such fragments include applying a design solution in an inappropriate context, mixing design fragments that have undesirable emergent behaviors, and applying design abstractions at the wrong level of granularity. In this paper, we introduce the concept of architectural "bad smells," which are frequently recurring software designs that can have non-obvious and significant detrimental effects on system lifecycle properties, such as understandability, testability, extensibility, and reusability. We define architectural smells and differentiate them from related concepts, such as architectural antipatterns and code smells. We also describe in detail a set of four representative architectural smells we encountered in the context of reverse-engineering and re-engineering two large industrial systems and from our search through case studies in research literature. For each of these architectural smells, we provide illustrative examples and demonstrate the impact on system lifecycle properties.

Added December 15th, 2008


USC-CSSE-2008-820

Chris A. Mattmann, Joshua Garcia, Ivo Krka, Daniel Popescu, Nenad Medvidovic, "The Anatomy and Physiology of the Grid Revisited" (pdf)

A domain-specific software architecture (DSSA) represents an effective, generalized, reusable solution to constructing software systems within a given application domain. In this paper, we revisit the widely cited DSSA for the domain of grid computing. We have studied systems in this domain over the past five years. During this time, we have repeatedly observed that, while individual grid systems are widely used and deemed successful, the grid DSSA is actually underspecified to the point where providing a precise answer regarding what makes a software system a grid system is nearly impossible. Moreover, every one of the existing purported grid technologies actually violates the published grid DSSA. In response to this, based on an analysis of the source code, documentation, and usage of eighteen of the most pervasive grid technologies, we have significantly refined the original grid DSSA. We demonstrate that this DSSA much more closely matches the grid technologies studied. Our refinements allow us to more definitively identify a software system as a grid technology, and distinguish it from software libraries, middleware, and frameworks.

Added October 20th, 2008


USC-CSSE-2008-819

Yuriy Brun, Nenad Medvidovic, "Preserving Privacy in Distributed Computation via Self-Assembly" (pdf)

We present the tile style, an architectural style that allows the creation of distributed software systems for solving NP-complete problems on large public networks. The tile style preserves the privacy of the algorithm and data, tolerates faulty and malicious nodes, and scales well to leverage the size of the public network to accelerate the computation. We exploit the known property of NP-complete problems to transform important real-world problems, such as protein folding, image recognition, and resource allocation, into canonical problems, such as 3-SAT, that the tile style solves. We provide a full formal analysis of the tile style that indicates the style preserves data privacy as long as no adversary controls more than half of the public network. We also present an empirical evaluation showing that problems requiring privacy-preservation can be solved on a very large network using the tile style orders of magnitude faster than using existing alternatives.

Added September 8th, 2008


USC-CSSE-2008-818

Ray Madachy, Ricardo Valerdi, "Knowledge-Based Systems Engineering Risk Assessment" (pdf)

A knowledge-based method for systems engineering risk assessment has been automated in an expert system tool.  Expert COSYSMO performs systems engineering risk assessment in conjunction with cost estimation using the Constructive Systems Engineering Cost Model (COSYSMO).  The technique is an extension of COSYSMO that supports project planning by identifying, categorizing, quantifying, and prioritizing system-level risks.  Workshops and surveys with seasoned systems engineering practitioners are used to identify and quantify risks, and the expert assessment has been implemented in an Internet-based tool.  The tool is being refined for sustained usage on projects by providing risk control advice, updating the rule base, and integrating it into a more comprehensive risk management framework.
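
The rule-based core of such a tool can be sketched in a few lines: a risk rule fires when an adverse combination of cost driver ratings occurs. Driver names, rule combinations, and severities below are illustrative, not Expert COSYSMO's actual rule base.

    # A risk rule flags one driver rated high while a mitigating driver is rated low.
    RATING = {"VL": 1, "L": 2, "N": 3, "H": 4, "VH": 5}

    def rule_fires(drivers, high_driver, low_driver):
        return RATING[drivers[high_driver]] >= 4 and RATING[drivers[low_driver]] <= 2

    project = {"technology_risk": "VH", "architecture_understanding": "L",
               "requirements_volatility": "H", "process_capability": "N"}

    risks = []
    if rule_fires(project, "technology_risk", "architecture_understanding"):
        risks.append(("technology risk vs. architecture understanding", "high"))
    if rule_fires(project, "requirements_volatility", "process_capability"):
        risks.append(("requirements volatility vs. process capability", "moderate"))
    print(risks)  # flagged risk situations, to be prioritized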

Added November 7th, 2008


USC-CSSE-2008-817

Ray Madachy, Barry Boehm, "ODC COQUALMO - A Software Defect Introduction and Removal Model using Orthogonal Defect Classification" (pdf)

Software quality processes can be assessed with the Orthogonal Defect Classification COnstructive QUALity MOdel (ODC COQUALMO) that predicts defects introduced and removed, classified by ODC types.  Using parametric cost and defect removal inputs, static and dynamic versions of the model help one determine the impacts of quality strategies on defect profiles, cost and risk.  The dynamic version provides insight into time trends and is suitable for continuous usage on a project.  The models are calibrated with empirical data on defect distributions, introduction and removal rates; and supplemented with Delphi results for detailed ODC defect detection efficiencies.  This work has supported the development of software risk advisory tools for NASA flight projects.  We have demonstrated the integration of ODC COQUALMO with automated risk minimization methods to design higher value quality processes, in shorter time and with fewer resources, to meet stringent quality goals on projects.

Added December 22nd, 2008


USC-CSSE-2008-816

Ray Madachy, Barry Boehm, "Comparative Analysis of COCOMO II, SEER-SEM and True-S Software Cost Models," updated version of USC-CSE-2006-616 from the 21st International Forum on COCOMO and Software Cost Modeling (pdf)

We have been assessing the strengths, limitations, and improvement needs of cost, schedule, quality and risk models for NASA flight projects.  The primary cost models used in this domain for critical flight software are COCOMO II, SEER-SEM and True S.  A comparative survey and analysis of these models against a common database of NASA projects was undertaken.  A major part of this work is defining transformations between the different models by the use of Rosetta Stones that describe the mappings between their cost factors.

With these Rosetta Stones, projects can be represented in all models in a fairly consistent manner and differences in their estimates better understood.  Top-level Rosetta Stones map the factors between the models, and the detailed ones map the individual ratings between the corresponding factors.  Most of the Rosetta Stone mappings between factors are one to one, but some are one to many.

The Rosetta Stones we have developed so far allow one to convert COCOMO II estimate inputs into corresponding SEER-SEM or True S inputs, or vice-versa.  The NASA data came in the COCOMO format and was converted to SEER-SEM and True S factors per the Rosetta Stones.
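
The shape of such a mapping is simple even though building it is not: each input factor of one model maps to one or more factors of another, with the detailed stones additionally remapping the ratings. The fragment below is illustrative only and does not reproduce the published tables.

    # Illustrative top-level Rosetta Stone fragment (mappings are hypothetical).
    COCOMO_TO_SEER = {
        "ACAP": ["Analyst Capabilities"],               # one-to-one
        "PLEX": ["Development System Experience"],      # one-to-one
        "DOCU": ["Specification Level", "Test Level"],  # one-to-many
    }

    def convert(cocomo_inputs):
        seer_inputs = {}
        for factor, rating in cocomo_inputs.items():
            for target in COCOMO_TO_SEER[factor]:
                seer_inputs[target] = rating  # a detailed stone would remap ratings too
        return seer_inputs

    print(convert({"ACAP": "H", "DOCU": "N"}))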

This initial study was largely limited to a COCOMO viewpoint.  The current Rosetta Stones need further review and must deal with incommensurate quantities from model to model.  Despite these drawbacks, the absence of contextual data, and potential flaws in the factor transformations, the cost models performed well when assessed against the NASA data.

The current set of Rosetta Stones has provided a usable framework for analysis, but more should be done, including developing two-way and/or multiple-way Rosetta Stones and partial factor-to-factor mappings.  Factors unique to some models should be addressed, and detailed translations between the size inputs, including COTS and reuse sizing, should be developed.  Remaining work also includes elaborating the detailed Rosetta Stone for the new True S model and rigorously reviewing all the top-level and detailed Rosetta Stones.

Conclusions for existing model usage and new model development are provided.  In practice no one model should be preferred over all others, and it is best to use a variety of methods.  Future work involves repeating the analysis with the refined Rosetta Stones, updated calibrations, improved models and new data.

Added September 19th, 2008


USC-CSSE-2008-815

Di Wu, Da Yang, Supannika Koolmanojwong, Barry Boehm, "Experimental Evaluation of Wiki Technology and the Shaper Role in Collaborative Requirements Negotiation" (pdf)

Wikis have been used successfully for a variety of collaboration tasks in corporate contexts. Studies of corporate wiki users identified the role of shaping as a critical success factor. However, little is known about how wikis and the shaper role can benefit software requirements negotiation activities. Following our initial development of a wiki-based requirements negotiation support tool, WikiWinWin, we experimented with the use of the wiki and the shaper role to engage stakeholders in requirements negotiations in the graduate software engineering course at the University of Southern California (USC). Our initial experience with 20 real-client projects shows promising results with room for improvement.

Added August 8th, 2008


USC-CSSE-2008-814

Barry Boehm, Dan Ingold, Raymond Madachy, "The Macro Risk Model: An Early Warning Tool for Software-Intensive Systems Projects," INCOSE 2008 (pdf)

The Macro Risk Model is a tool for early detection and quantification of project risks associated with Software-Intensive Systems (SIS). It has been calibrated and validated against detailed case studies. Structured questions guide a user in rating evidence and risk impacts with respect to critical project success factors. It provides color-coded risk exposures, and allows for rationale and supporting artifacts. This paper describes the model and the basis of its risk evaluation methods, reviews case studies of NASA exploration mission failures with respect to the model, discusses a Delphi consensus-seeking process that was used to analyze factors affecting model accuracy and to calibrate the model, and compares the model to other risk approaches.
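
The quantification behind such color-coded displays is conventionally the risk exposure product

    RE = P(\text{loss}) \times S(\text{loss}),

the probability of an unsatisfactory outcome times the size of the loss if it occurs, with threshold bands on RE mapped to green, yellow, and red; the Macro Risk Model's exact scoring scheme may differ from this standard form.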

Added May 28th, 2008


USC-CSSE-2008-813

David Woollard, "Supporting in silico Experimentation Via Software Architecture," (pdf)

The “in silico” process is a scientific methodology which relies on computer processing and simulation as the primary means of experimentation. Scientific workflows are an area of computer science research poised to greatly impact the way scientists conduct “in silico” research. They are not widely applied, though, because current workflow systems require scientists to manage engineering aspects of workflow design as well as the science being conducted. A domain specific software architecture for scientific software in workflow systems can abstract engineering aspects of workflow design from the scientist while giving design guidance for the composition of algorithms, reducing time-to-integration.

Added May 30th, 2008


USC-CSSE-2008-812

Gan Wang, Ricardo Valerdi, Aaron Ankrum, Cort Millar, Garry J. Roedler, "COSYSMO Reuse Extension," INCOSE 2008 (pdf)

Reuse in systems engineering is a frequent, but poorly understood phenomenon. Nevertheless, it has a significant impact on estimating the appropriate amount of systems engineering effort with models like the Constructive Systems Engineering Cost Model. Practical experience showed that the initial version of COSYSMO, a model based on a “build from scratch” philosophy, needed to be refined in order to incorporate reuse considerations that fit today’s industry environment. The notion of reuse recognizes the effect of legacy system definition in engineering a system and introduces multiple reuse categories for classifying each of the four COSYSMO size drivers – requirements, interfaces, algorithms, and operational scenarios. It fundamentally modifies the counting rules for the COSYSMO size drivers and updates the definition of system size in COSYSMO.

In this paper, we present (1) the definition of the COSYSMO reuse extension and the approach employed to define this extension; (2) the updated COSYSMO size driver definitions that are consistent with the reuse model; (3) the method applied to defining the reuse weights used in the modified parametric relationship; (4) a practical implementation example that instantiates the reuse model by an industry organization and the empirical data that provided practical validation of the extended COSYSMO model; and (5) recommendations for organizational implementation and deployment of this extension.
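
To illustrate how reuse categories enter the size computation, the sketch below weights each size driver count by difficulty and by a reuse-category factor before summing; the category names and weight values are illustrative, not the calibrated extension.

    # Hypothetical COSYSMO-style size with reuse weighting.
    DIFFICULTY_WEIGHT = {"easy": 0.5, "nominal": 1.0, "difficult": 5.0}
    REUSE_WEIGHT = {"new": 1.0, "modified": 0.6, "adopted": 0.4, "managed": 0.2}

    def equivalent_size(items):
        """items: (count, difficulty, reuse_category) triples for requirements,
        interfaces, algorithms, and operational scenarios."""
        return sum(n * DIFFICULTY_WEIGHT[d] * REUSE_WEIGHT[r] for n, d, r in items)

    print(equivalent_size([(100, "nominal", "new"), (40, "difficult", "modified"),
                           (25, "easy", "adopted")]))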

Added May 12th, 2008


USC-CSSE-2008-811

Ed Colbert, Barry Boehm, "Cost Estimation for Secure Software & Systems," ISPA / SCEA 2008 Joint International Conference (pdf)

The Center for Systems and Software Engineering (CSSE) at the University of Southern California (USC) is extending the widely used Constructive Cost Model version 2 (COCOMO II) [4] to account for developing secure software.  CSSE is also developing a model for estimating the cost to acquire secure systems (emphasizing space systems), and is evaluating the effect of security goals on other models in the COCOMO family.  We will present the work to date.

Added May 12th, 2008


USC-CSSE-2008-810

Gan Wang, Ricardo Valerdi, Barry Boehm, Alex Shernoff, "Proposed Modification to COSYSMO Estimating Relationship," INCOSE Symposium, June 2008 (pdf)

This paper proposes a modification to the Academic COSYSMO estimating relationship to remedy a critical limitation in its current implementation of the cost drivers. The effort multipliers defined for these drivers have a disproportionate impact on the nominal effort, which unrealistically amplifies or compresses the effort estimate. This problem severely limits the model's practical application. The newly proposed parametric relationship is inspired by the COCOMO II modeling approach and based on considerations of the life cycle impact of the cost drivers. Two additional cost drivers are also introduced. The feasibility of the new model definition is examined with a boundary analysis and validated by an analysis of historical data.

In this paper, we present (1) an analysis of the problem with the current Academic COSYSMO model definition; (2) proposed addition of two new cost drivers to the model; (3) an analysis of life cycle impact of the cost drivers and an organization of the drivers based on the impact; (4) the modified COSYSMO parametric relationship; (5) validation of the modified relationship through an analysis of historical data; and (6) conclusion and suggestion of future work. This work is based on the practical implementation of COSYSMO at BAE Systems.

Added April 25th, 2008


USC-CSSE-2008-809

Ricardo Valerdi, Elliot Axelband, Thomas Baehren, Barry Boehm, Dave Dorenbos, Scott Jackson, Azad Madni, Gerald Nadler, Paul Robitaille, Stan Settles, "A Research Agenda for Systems of Systems Architecting," International Journal of System of Systems Engineering, 2008 (pdf)

This paper documents the activity of a workshop on defining a research agenda for Systems of Systems (SoS) Architecting, held at USC in October 2006. After two days of invited talks on critical success factors for SoS engineering, the authors of this paper convened for one day to brainstorm topics for the purpose of shaping the near-term research agenda of the newly convened USC Center for Systems and Software Engineering (CSSE). The output from the workshop is a list of ten high-impact items with corresponding research challenges in the context of SoS Architecting. Each item includes a description of the research challenges, its link to contemporary academic or industrial problems, and reasons for advocacy of that area. The items were assessed in terms of value and difficulty to determine a prioritisation both for the CSSE’s future research agenda and for others in the field.

Added April 21st, 2008


USC-CSSE-2008-808

Barry Boehm, Ricardo Valerdi, Eric Honour, "The ROI of Systems Engineering: Some Quantitative Results for Software-Intensive Systems," Systems Engineering, Volume 11, Issue 3, April 2008, pp. 221-234 (pdf)

This paper presents quantitative results on the return on investment of systems engineering (SE-ROI) from an analysis of the 161 software projects in the COCOMO II database. The analysis shows that, after normalizing for the effects of other cost drivers, the cost difference between projects doing a minimal job of software systems engineering, as measured by the thoroughness of their architecture definition and risk resolution, and projects doing a very thorough job was 18% for small projects and 92% for very large software projects, as measured in lines of code. The paper also presents applications of these results to project experience in determining how much up-front systems engineering is enough for baseline versions of smaller and larger software projects, for both ROI-driven internal projects and schedule-driven outsourced systems of systems projects.
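
The small-versus-large spread is a direct consequence of COCOMO II's scale factors. Taking the published COCOMO II.2000 range for the architecture and risk resolution (RESL) factor, roughly \Delta SF \approx 7.07 between the least and most thorough ratings, the added-cost factor is

    \mathrm{Size}_{\mathrm{KSLOC}}^{\,0.01 \times 7.07}: \qquad
    10^{0.0707} \approx 1.18, \qquad 10000^{0.0707} \approx 1.92,

which reproduces the 18% and 92% figures for 10 KSLOC and 10,000 KSLOC projects.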

Added April 16th, 2008


USC-CSSE-2008-807

Barry Boehm, "System Development Process: The Incremental Commitment Model," chapter 2 of Human-System Integration in the System Development Process: A New Look, 2007, with updated 2008 terminology and charts (pdf)

The ultimate goal of system development is to deliver a system that satisfies the needs of its operational stakeholders—users, operators, administrators, maintainers, interoperators, the general public—within satisfactory levels of the resources of its development stakeholders—funders, acquirers, developers, suppliers, others.  From the human-system integration perspective, satisfying operational stakeholders’ needs can be broadly construed to mean a system that is usable and dependable; permits few or no human errors; and leads to high productivity and adaptability.  Developing and delivering systems that simultaneously satisfy all of these success-critical stakeholders usually requires managing a complex set of risks such as usage uncertainties, schedule uncertainties, supply issues, requirements changes, and uncertainties associated with technology maturity and technical design.  Each of these areas poses a risk to the delivery of an acceptable operational system within the available budget and schedule.  End-state operational system risks can be categorized as uncertainties in achieving a system mission, carrying out the work processes, operating within various constraints such as cost or personnel, satisfying operational stakeholders, or achieving an acceptable operational return on investment.

This chapter summarizes the study’s analysis of candidate system design, development, and evolution processes with respect to a set of study-derived critical success factor principles for support of human-intensive system development.  It presents the results of synthesizing the contributions of these models along with key human factors processes into an Incremental Commitment Model that is used as a process framework for application of the study’s recommended processes, methods, and tools, and for illustrating their successful application in several human-system design case studies (see Chapter 5).

Added April 7th, 2008


USC-CSSE-2008-806

Vu Nguyen, Bert Steece, Barry Boehm, "A Constrained Regression Technique for COCOMO Calibration," Empirical Software Engineering and Measurement Conference, 2008 (pdf)

Building cost estimation models is often considered a search problem in which the solver should return an optimal solution satisfying an objective function. Moreover, the obtained solution also needs to meet certain constraints. In the COCOMO model, for example, increases in the estimated effort require increases in the model parameters. In this research, we introduce a constrained regression technique that uses different objective functions and constraints for calibrating the COCOMO model parameters. To assess the performance of the proposed technique, we run a cross-validation procedure and compare the prediction accuracy of different approaches such as least squares, stepwise, Lasso, and Ridge regression. Our results suggest that the regression model that minimizes the mean of relative error and imposes non-negative coefficients is a favorable technique for calibrating the COCOMO model parameters.
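
A minimal sketch of one such constrained formulation, minimizing the mean magnitude of relative error subject to non-negative coefficients on synthetic data (the paper's exact setup may differ):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 40
    X = np.column_stack([np.ones(n),                 # intercept (log A)
                         rng.uniform(1.0, 5.0, n),   # log size
                         rng.normal(0.0, 0.3, n)])   # one log effort multiplier
    actual = np.exp(X @ np.array([1.0, 1.1, 0.8]) + rng.normal(0.0, 0.1, n))

    def mmre(beta):
        predicted = np.exp(X @ beta)
        return np.mean(np.abs(actual - predicted) / actual)

    # Non-negativity bounds keep the calibrated COCOMO-style coefficients sensible.
    result = minimize(mmre, x0=np.ones(3), bounds=[(0.0, None)] * 3, method="L-BFGS-B")
    print(result.x, mmre(result.x))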

Added April 2nd, 2008


USC-CSSE-2008-805

Da Yang, Di Wu, Supannika Koolmanojwong, A. Winsor Brown, Barry Boehm, "WikiWinWin: A Wiki Based System for Collaborative Requirements Negotiation," Proceedings of the 41st Annual Hawaii International Conference on System Sciences, 2008, p. 24 (pdf)

Defining requirements is one of the most critical activities in the development of software-intensive systems.

The EasyWinWin system has been very good at capturing initial requirements involving heterogeneous stakeholders in over 150 client-developer requirements negotiations. However, it has been less easy to use in updating requirements and related information as a project proceeds and in adapting to the evolving nature of the requirements.

Because our clients are finding that wikis are easier to learn and use, and can organize information in a flexible and updatable manner, we have developed an initial version of a WikiWinWin system as a potential successor to EasyWinWin. We have conducted a case study of WikiWinWin, and the results show that the initial WikiWinWin is effective at facilitating stakeholder collaborative negotiation and learning, but has some limitations that we are now addressing.

Added February 22nd, 2008


USC-CSSE-2008-804

LiGuo Huang, Barry Boehm, Hao Hu, Jidong Ge, Jian Lü, Cheng Qian, "Applying Value-Based Software Processes: An ERP Example," International Journal of Software and Informatics, Volume 2, Issue 1, 2008, pp. 1-15 (pdf)

Commercial organizations increasingly need software processes sensitive to business value, quick to apply, supportive of multi-stakeholder collaboration, and capable of early analysis for subprocess consistency and compatibility. This paper presents experience in applying a lightweight synthesis of a Value-Based Software Quality Achievement process and an Object-Petri-Net-based process model to achieve a stakeholder win-win outcome for software quality achievement in an on-going ERP software project in China. The application results confirmed that 1) the application of value-based approaches was inherently better than value-neutral approaches adopted by most ERP software projects; 2) the Object-Petri-Net-based process model provided project managers with a synchronization and stabilization framework for process activities, success-critical stakeholders and their value propositions; 3) process visualization and simulation tools significantly increased management visibility and controllability for the success of the software project.

Added February 22nd, 2008


USC-CSSE-2008-803

Jo Ann Lane, Barry Boehm, "System of Systems Lead System Integrators:  Where Do They Spend Their Time and What Makes Them More or Less Efficient?" Systems Engineering, Volume 11, Issue 1, February 2008, pp. 81-91 (pdf)

As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know "how much effort" and "how long".  In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations.  This paper provides preliminary results of research conducted to determine types of SoS Lead System Integrator (LSI) activities and how these differ from the more traditional system engineering activities described in EIA 632 (Processes for Engineering a System).  It also looks at concepts in organizational theory, complex adaptive systems, and chaos theory and how these might be applied to SoS LSI activities to improve success rates and efficiency in the development of these “very large” complex systems.

Added February 22nd, 2008


USC-CSSE-2008-802

Yuriy Brun, Dustin Reishus, "Path Finding in the Tile Assembly Model," A subsequent version has been published in Theoretical Computer Science, Volume 410, Issue 15, April 2009, pp. 1461-1472 (pdf)

Swarm robotics, active self-assembly, and amorphous computing are fields that focus on designing systems of large numbers of small, simple components that can cooperate to complete complex tasks.  Many of these systems are inspired by biological systems, and all attempt to use the simplest components and environments possible, while still being capable of achieving their goals.  The canonical problems for such biologically-inspired systems are shape assembly and path finding.  We will demonstrate path finding in the well-studied tile assembly model, a model of molecular self-assembly that is strictly simpler than other biologically inspired models.  As in related work, our systems function in the presence of obstacles and can be made fault-tolerant.  The path-finding systems use O(1) distinct components and find minimal-length paths in time linear in the length of the path.

Added February 19th, 2008


USC-CSSE-2008-801

Yuriy Brun, "Solving Satisfiability in the Tile Assembly Model with a Constant-Size Tileset," A subsequent version has been published in Journal of Algorithms, Volume 63, Issue 4, October 2008, pp. 151-166 (pdf)

Biological systems are far more complex and robust than systems we can engineer today.  One way to increase the complexity and robustness of our engineered systems is to study how biological systems function.  The tile assembly model is a highly distributed parallel model of nature's self-assembly.  Previously, I defined deterministic and nondeterministic computation in the tile assembly model and showed how to add, multiply, factor, and solve SubsetSum.  Here, I present a system that decides satisfiability, a well known NP-complete problem.  The computation is nondeterministic and each parallel assembly executes in time linear in the input.  The system requires only a constant number of different tile types: 64, an improvement over the previously best known system, which uses O(n^2) tile types.  I describe mechanisms for finding the successful solutions among the many parallel assemblies, explore bounds on the probability of such a nondeterministic system succeeding, and prove that this probability can be made arbitrarily close to 1.

Added February 19th, 2008


USC-CSSE-2008-800

Barry Boehm, "Making a Difference in the Software Century," Computer magazine, March 2008, pp. 32-38 (pdf)

I feel very lucky to have been born in the US in the 1930s and to have had a chance to participate in the formation of a whole new discipline of software engineering. I think those of you just entering the software engineering field have an even more exciting prospect ahead of you. I believe that at least the next few decades will make the 21st century the Software Century. Software will be the main element that drives our necessary capabilities and quality of life, and people who know how best to develop software-intensive systems will have the greatest opportunity to make a difference in the results.

This will be very satisfying for software engineers, but it will impose large responsibilities to provide excellence in the software-intensive systems that are developed and in the services that they provide. Here are the main challenges that I believe 21st-century software engineers will need to address: increasingly rapid change, uncertainty and emergence, dependability, diversity, and interdependence.

Added February 25th, 2008

