Barry Boehm, "Experiences in Software Cost Modeling," Principles of Software Cost Estimating, Capers Jones, McGraw-Hill, 1998 (pdf)
My first exposure to software economics came on my first day in the software business, in June 1955 at General Dynamics in San Diego. My supervisor took me on a walking tour through the computer, an ERA 1103 which occupied most of a large room. His most memorable comment was, "Now listen. We're paying this computer six hundred dollars an hour, and we're paying you two dollars an hour, and I want you to act accordingly." This instilled some good habits in me, such as careful desk checking, test planning, and analyzing before coding. But it also instilled some bad habits, such as a preoccupation with saving microseconds and patching object code, which were hard to unlearn when the balance of hardware and software costs began to tip the other way.
Barry Boehm, Bradford Clark, Sunita Chulani, "Calibration Results of COCOMOII.1997," presented at the SEPG-98 (pdf)
COCOMO II is an effort to update software cost estimation models, such as the 1981 COnstructive COst MOdel and its 1987 Ada COCOMO update. These and other 1980s cost models have experienced difficulties in estimating software projects of the 1990s due to new practices such as non-sequential and rapid-development process models; reuse-driven approaches involving commercial-off-the-shelf (COTS) packages, reengineering, applications composition, and application generation capabilities; object-oriented approaches supported by distributed middleware; software process maturity effects; and process-driven quality estimation. The COCOMO II research effort has developed new functional forms reflecting these practices, and is concentrated on developing a model well suited for the 1990s and then annually updating it for the forthcoming years of the 21st Century.
The current COCOMO II.1997 has been calibrated to a dataset of 83 projects from a mix of Commercial, Aerospace, Government, and FFRDC organizations. The estimates of the 1997 calibrated model are within 30% of the actuals 52% of the time before stratification by organization, and within 30% of the actuals 64% of the time after stratification by organization.
The 1997 calibration results indicated that the following changes from COCOMO '81 to COCOMO II successfully explained sources of variation in the project data: replacing the COCOMO '81 Development Modes with the five exponent drivers Precedentedness, Development Flexibility, Architecture/Risk Resolution, Team Cohesiveness, and CMM-based Process Maturity; and adding multiplicative cost drivers for Amount of Documentation and Multisite Development.
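The changes above slot into the COCOMO II Post-Architecture effort form, in which the five exponent drivers act as scale factors on the size exponent and the cost drivers act as multiplicative effort multipliers. The sketch below is illustrative only: the constants `a` and `b` are placeholders in the published ballpark, not the 1997 calibration values.

```python
# Hedged sketch of the COCOMO II Post-Architecture effort form: five scale
# factors set the size exponent, and multiplicative cost drivers scale the
# result. Constants a and b are illustrative, not the 1997 calibration.

def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers,
                     a=2.94, b=0.91):
    """Estimate effort in person-months.

    ksloc              -- size in thousands of source lines of code
    scale_factors      -- ratings for the 5 exponent drivers (PREC, FLEX,
                          RESL, TEAM, PMAT)
    effort_multipliers -- multiplicative cost driver ratings (nominal = 1.0)
    """
    e = b + 0.01 * sum(scale_factors)   # exponent built from scale factors
    em = 1.0
    for m in effort_multipliers:        # product of cost drivers
        em *= m
    return a * (ksloc ** e) * em

# Example: 100 KSLOC with mid-range scale factors and nominal drivers
pm = cocomo_ii_effort(100, [3, 3, 3, 3, 3], [1.0, 1.0])
```

Higher scale-factor ratings raise the exponent, capturing the diseconomy of scale the exponent drivers are meant to model.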
Barry Boehm, Alexander Egyed, "WinWin Requirements Negotiation Processes: A Multi-Project Analysis," Proceedings, ICSP'98, pp. 125-136 (pdf)
Fifteen 6-member teams were involved in negotiating requirements for multimedia software systems for the Library of the University of Southern California. The requirements negotiation used the Stakeholder WinWin success model and the USC WinWin negotiation model (Win Condition-Issue-Option-Agreement) and groupware system. The negotiated results were integrated into a Life Cycle Objectives (LCO) package for the project, including descriptions of the system's requirements, operational concept, architecture, life cycle plan, and feasibility rationale. These were subsequently elaborated into a Life Cycle Architecture package including a prototype; six of these were then implemented as products.
A number of hypotheses were formulated, tested, and evolved regarding the WinWin negotiation processes and their effectiveness in supporting the development of effective LCO packages, in satisfying Library clients, and in stimulating cooperation among stakeholders. Other hypotheses involved identification of WinWin improvements and relationships between negotiation strategies and LCO package and project outcomes.
Barry Boehm, "An Early Applications Generator and Other Recollections," In the Beginning: Recollections of Software Pioneers, R. Glass, IEEE, 1998, pp. 67-97 (pdf)
In this paper, I'll focus primarily on a set of experiences I had in developing an early application generator in the area of rocket flight mechanics. But I'll begin with some other recollections to indicate what led up to this work. And I'll end with recollections of some of the software experiences I've had since.
Chris Abts, Barry Boehm, "COTS Software Integration Cost Modeling Study," final report under DoD contract F30602-94-C-1095 (pdf)
This study represents a first effort towards the goal of developing a comprehensive COTS integration cost modeling tool. The approach taken was to first examine a wide variety of sources in an attempt to identify the most significant factors driving COTS integration costs, and to develop a mathematical form for such a model. These sources ranged from already existing cost models to information gathered in a preliminary high-level data collection survey. Once the form and candidate drivers had been identified, the next step was to gather project-level COTS integration effort data in a second-round data collection exercise. This project-level data was then used to calibrate and validate the proposed model. Data from both a graduate-level software engineering class and industrial sources were used in calibration attempts. The industrial data proved problematic, however, so for the purposes of this study, the final calibration of the model was based upon the student projects.
The final result was a cost model following the general form of the well-known COCOMO software cost estimation model, but with an alternate set of cost drivers. The scope of the model is also narrow, addressing only initial integration coding costs. The predictive power of the model at this stage is only fair, but it was demonstrated that with appropriate data, the accuracy of the model could be greatly improved.
Finally, the richness of the problem of capturing all significant costs associated with using COTS software offers many worthwhile directions in which to expand the scope of this model.
Added September 24th, 1998
Barry Boehm, Dan Port, Marwan Abi-Antoun, Alexander Egyed, "Guidelines for the Life Cycle Objectives (LCO) and the Life Cycle Architecture (LCA) deliverables for Model-Based Architecting and Software Engineering (MBASE)" (pdf)
Over our three years of developing digital library products for the USC Libraries, we have been evolving an approach called Model-Based (System) Architecting and Software Engineering (MBASE). MBASE involves early reconciliation of a project's success models, product models, process models, and property models. It extends the previous spiral model in two ways: initiating each spiral cycle with a stakeholder win-win stage to determine a mutually satisfactory (win-win) set of objectives, constraints, and alternatives for the system's next elaboration during the cycle; orienting the spiral cycles to synchronize with a set of life cycle anchor points: Life Cycle Objectives (LCO), Life Cycle Architecture (LCA), and Initial Operational Capability (IOC).
The MBASE guidelines present the content and the completion criteria for the LCO and LCA milestones (which correspond to the Inception and Elaboration Phases of the Rational Unified Process) of the following system definition elements: Operational Concept Description (OCD); System and Software Requirements Definition (SSRD); System and Software Architecture Description (SSAD); Life Cycle Plan (LCP); Feasibility Rationale Description (FRD); Risk-driven prototypes.
Added September 24th, 1998
Barry Boehm, Alexander Egyed, Dan Port, Archita Shah, Julie Kwan, Ray Madachy, "Using the WinWin Spiral Model: A Case Study," IEEE Computer, Volume 31, Number 7, July 1998, pp. 33-44 (pdf)
Fifteen teams used the WinWin spiral model to prototype, plan, specify, and build multimedia applications for USC’s Integrated Library System. The authors report lessons learned from this case study and how they extended the model’s utility and cost-effectiveness in a second round of projects.
Added August 18th, 1998
Barry Boehm, Alexander Egyed, Dan Port, Archita Shah, Julie Kwan, Ray Madachy, "A Stakeholder Win-Win Approach to Software Engineering Education," Annals of Software Engineering, 1998, pp. 295-321 (pdf)
We are applying the stakeholder win-win approach to software engineering education. The key stakeholders we are trying to simultaneously satisfy are the students; the industry recipients of our graduates; the software engineering community as parties interested in improved practices; and ourselves as instructors and teaching assistants. In order to satisfy the objectives or win conditions of these stakeholders, we formed a strategic alliance with the University of Southern California Libraries to have software engineering student teams work with Library clients to define, develop, and transition USC digital library applications into operational use. This adds another set of key stakeholders: the Library clients of our class projects.
This paper summarizes our experience in developing, conducting, and iterating the course. It concludes by evaluating the degree to which we have been able to meet the stakeholder-determined course objectives.
Added August 18th, 1998
Barry Boehm, Alexander Egyed, "Improving the Life-Cycle Process in Software Engineering Education," accepted to EUROMICRO '98 - Workshop for Software Process Improvement (pdf)
The success of software projects and the resulting software products is highly dependent on the initial stages of the life-cycle process: the inception and elaboration stages. The most critical success factors in improving the outcome of software projects have often been identified as requirements negotiation and the initial architecting and planning of the software system.
Not surprisingly, this area has thus received strong attention in the research community. It has, however, been hard to validate the effectiveness and feasibility of new or improved concepts because they are often only shown to work in a simplified and hypothesized project environment. Industry, on the other hand, has been cautious in adopting unproven ideas. This has led to a form of deadlock between those parties.
In the last two years, we have had the opportunity to observe dozens of software development teams planning, specifying, and building library-related, real-world applications. This environment provided us with a unique way of introducing, validating, and improving the life cycle process with new principles such as the WinWin approach to software development. This paper summarizes the lessons we have learned.
Added August 17th, 1998
Alexander Egyed, Barry Boehm, "Telecooperation Experience with the WinWin System," Proceedings of the IFIP World Computer Conference, IFIP'98, pp. 37-46 (pdf)
WinWin is a telecooperation system supporting the definition of software-based applications as negotiated stakeholder win conditions. Our experience in using WinWin in defining over 30 digital library applications, including several telecooperation systems, is that it is important to supplement negotiation support systems such as WinWin with such capabilities as prototyping, tradeoff analysis tools, email, and videoconferencing. We also found that WinWin's social orientation around considering other stakeholders' win conditions has enabled stakeholders to achieve high levels of shared vision and mutual trust. Our subsequent experience in implementing the specified digital library systems in a rapidly changing web-based milieu indicated that achieving these social conditions among system stakeholders was more important than achieving precise requirements specifications, due to the need for team adaptability to requirements change. Finally, we found that the WinWin approach provides an effective set of methods of integrating ethical considerations into practical system definition processes via Rawls' stakeholder negotiation-based Theory of Justice.
Sunita Chulani, "Incorporating Bayesian Analysis to Improve the Accuracy of COCOMO II and Its Quality Model Extension," Ph.D. Qualifying Exam Report (pdf)
The three main highlights of this report are:
1. A simple modeling methodology that can be used to formulate software estimation models when abundant software engineering data is lacking.
One of the biggest challenges faced by the software engineering community has been to make good decisions using data that is usually scarce and incomplete. Classical statistical techniques derive conclusions based on available sampling data. But to make the best decisions, especially with software engineering data, it is imperative to incorporate relevant prior information in addition to the available sampling data. The modeling methodology developed helps make use of easily available expert judgment data along with sampling data in the decision-making process.
2. A COCOMO II Bayesian prototype
Using the above methodology, I developed a Bayesian prototype in an attempt to improve the accuracy of the existing COCOMO II model. A formal process of how the Bayesian approach can be used to incorporate prior information obtained by expert-judgment-based Delphi and other sources in software economics to existing software engineering data was demonstrated. In many models, such prior information is informally used to evaluate the "appropriateness" of results. By describing the use of prior information along with sampling data, I have shown that it is possible to formally combine both these sources of information. An important aspect of formalizing the use of prior information is that others know what prior production functions are being used and can repeat the calibration calculations (or can incorporate different prior information in a similar way).
3. Quality model extension to COCOMO II
Using the modeling methodology, a quality model extension to the existing COCOMO model is being developed. This model facilitates cost/schedule/quality tradeoffs and provides insight into determining ship time. It enables 'what-if' analyses that demonstrate the effects of personnel, project, product, and platform characteristics on software quality.
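The Bayesian step described in item 2 can be sketched as a precision-weighted combination of a normal expert-judgment prior with a normal sampling estimate for a model parameter. The numeric values below are hypothetical illustrations, not values from the actual COCOMO II calibration.

```python
# Minimal sketch of a normal-normal Bayesian combination: an expert-judgment
# prior for a model parameter is merged with a regression-based sample
# estimate, each weighted by its precision (inverse variance). The numbers
# are made up for illustration.

def bayesian_combine(prior_mean, prior_var, sample_mean, sample_var):
    """Posterior mean/variance when both sources are treated as normal."""
    prior_prec = 1.0 / prior_var       # precision = inverse variance
    sample_prec = 1.0 / sample_var
    post_var = 1.0 / (prior_prec + sample_prec)
    post_mean = post_var * (prior_prec * prior_mean +
                            sample_prec * sample_mean)
    return post_mean, post_var

# A noisy sample estimate pulls the result only part-way from the prior
mean, var = bayesian_combine(prior_mean=1.10, prior_var=0.01,
                             sample_mean=1.30, sample_var=0.04)
```

Because the posterior mean is a precision-weighted average, the noisier source (here the sample) contributes less, which is the formal counterpart of "trust the experts more when the data is scarce."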
Cristina Gacek, Barry Boehm, "Composing Components: How Does One Detect Potential Architectural Mismatches?" to appear in Proceedings of the OMG-DARPA-MCC Workshop on Compositional Software Architectures, January 1998 (pdf)
Nowadays, in order to be competitive, a developer's usage of Commercial off the Shelf (COTS), or Government off the Shelf (GOTS), packages has become a sine qua non, at times being an explicit requirement from the customer. The idea of simply plugging together various COTS packages and/or other existing parts results from the megaprogramming principles [Boehm and Scherlis 1992]. What people tend to trivialize is the side effects resulting from the plugging or composition of these subsystems. Some COTS vendors tend to preach that because their tool follows a specific standard, say CORBA, all composition problems disappear. Well, it actually is not that simple. Side effects resulting from the composition of subsystems are not just the result of different assumptions about communication methods by various subsystems, but also the result of differences in various other sorts of assumptions, such as the number of threads that are to execute concurrently, or even the load imposed on certain resources. This problem is referred to as architectural mismatch [Garlan et al. 1995] [Abd-Allah 1996]. Some but not all of these architectural mismatches can be detected via domain architecture characteristics, such as mismatches in additional domain interface types (units, coordinate systems, frequencies), going beyond the general interface types in standards such as CORBA.
Other researchers have successfully approached reuse at the architectural level by limiting their assets not by domain, but rather by dealing with a specific architectural style. That is, they support reuse based on limitations on the architectural characteristics of the various parts and resulting systems [Medvidovic et al. 1997] [Magee and Kramer 1996] [Allan and Garlan 1996]. This approach can be successful because it simply avoids the occurrence of architectural mismatches.
Our work addresses the importance of underlying architectural features in determining potential architectural mismatches while composing arbitrary components. We have devised a set of those features, which we call conceptual features [Abd-Allah 1996][Gacek 1997], and are building a model that uses them for detecting potential architectural mismatches. This underlying model has been built using Z [Spivey 1992].
Sunita Chulani, Bradford Clark, Barry Boehm, Bert Steece, "Calibration Approach and Results of the COCOMO II Post-Architecture Model," Proceedings, ISPA '98 (pdf)
This paper describes our experience and results of the first calibration of the Post-Architecture model. The model determination process began with an expert Delphi process to determine a priori values for the Post-Architecture model parameters. A dataset of 83 projects was used in the multiple regression analysis. Projects with missing data or unexplainable anomalies were dropped. Model parameters that exhibited high correlation were consolidated. Multiple regression analysis was used to produce coefficients. These coefficients were used to adjust the previously assigned expert-determined model values. Stratification was used to improve model accuracy.
The resulting model produced estimates within 30% of the actuals 52% of the time for effort. Stratification by organization resulted in a model that produced estimates within 30% of the actuals 64% of the time for effort. It is therefore recommended that organizations using the model calibrate it using their own data. This increases model accuracy and produces a local optimum estimate for similar types of projects.
The next calibration of COCOMO II will be done using Bayesian techniques to incorporate prior knowledge and sampling data in determining the a posteriori model.
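The "within 30% of the actuals" accuracy figures quoted above are the standard PRED(.30) measure: the fraction of projects whose estimate falls within 30% of the actual effort. A minimal sketch, with made-up data values for illustration:

```python
# Sketch of the PRED(L) accuracy measure: the fraction of projects whose
# estimate falls within L*100% of the actual value. The effort figures
# below are hypothetical, not from the COCOMO II dataset.

def pred(estimates, actuals, level=0.30):
    within = sum(1 for est, act in zip(estimates, actuals)
                 if abs(est - act) <= level * act)
    return within / len(actuals)

actuals   = [100, 250, 80, 400]   # actual effort, person-months
estimates = [115, 160, 95, 390]   # model estimates
accuracy = pred(estimates, actuals)   # 3 of 4 within 30% -> 0.75
```

A PRED(.30) of 0.52 thus means the model landed within 30% of the actual effort on 52% of the 83 projects.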
Sunita Chulani, Bradford Clark, Barry Boehm, "Calibrating the COCOMO II Post-Architecture Model," Proceedings, ICSE 20, April 1998, pp. 477-480 (pdf)
The COCOMO II model was created to meet the need for a cost model that accounted for future software development practices. This resulted in the formulation of three submodels for cost estimation: one for composing applications, one for early lifecycle estimation, and one for detailed estimation when the architecture of the product is understood. This paper describes the calibration procedures for the last model, the Post-Architecture COCOMO II model, from eighty-three observations. The results of the multiple regression analysis and their implications are discussed. Future work includes further analysis of the Post-Architecture model, calibration of the other models, derivation of maintenance parameters, and refining the effort distribution for the model output.
Barry Boehm, Alexander Egyed, "Software Requirements Negotiation: Some Lessons Learned," Proceedings, ICSE 20, April 1998, pp. 503-506 (pdf)
Negotiating requirements is one of the first steps in any software system life cycle, but its results have probably the most significant impact on the system's value. However, the processes of requirements negotiation are not well understood. We have had the opportunity to capture and analyze requirements negotiation behavior for groups of projects developing library multimedia archive systems, using an instrumented version of the USC WinWin groupware system for requirements negotiation. Some of the more illuminating results were:
Most stakeholder Win Conditions were non-controversial (were not involved in Issues).
Negotiation activity varied by stakeholder role.
LCO package quality (measured by grading criteria) could be predicted by negotiation attributes.
WinWin increased cooperativeness, reduced friction, and helped focus on key issues.
Alexander Egyed, Barry Boehm, "A Comparison Study in Software Requirements Negotiation," Proceedings, INCOSE '98 (pdf)
In a period of two years, two rather independent experiments were conducted at the University of Southern California. In 1995, 23 three-person teams negotiated the requirements for a hypothetical library system. Then in 1996, 14 six-person teams negotiated the requirements for real multimedia-related library systems.
A number of hypotheses were created to test how real software projects differ from hypothetical ones. Other hypotheses address differences in uniformity and repeatability.
The results indicate that repeatability in 1996 was even harder to achieve than in 1995 (Egyed-Boehm, 1996). Nevertheless, this paper presents some surprising commonalities between both years that indicate some areas of uniformity.
In both years, the same overall development process (spiral model) was followed, the same negotiation tools (WinWin System) were used, and the same people were doing the analysis of the findings. Thus, the comparison is less blurred by fundamental differences like terminology, process, etc.
Copyright 2008 The University of Southern California. The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California is always clearly indicated as the source of the material, text, graphics, or software, and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics, or software. Permission to use the material, text, graphics, or software on these pages does not include the right to repackage the material, text, graphics, or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.