University of Southern California
    
Center for Systems and Software Engineering



Technical Reports

USC-CSE-2005-528

Jo Ann Lane, Ricardo Valerdi, "Synthesizing SoS Concepts for Use in Cost Estimation," Proceedings of IEEE 2005 International Conference on Systems, Man, and Cybernetics (pdf)

Today’s need for more complex, capable systems in a short timeframe is leading many organizations towards the integration of existing systems into network-centric, knowledge-based system-of-systems (SoS). Software and system cost model tools to date have focused on the software and system development activities of a single system. When viewing the new SoS architectures, one finds that the effort associated with the design and integration of these SoSs is not handled well, if at all, in current cost models. This paper includes (1) a comparison of various SoS definitions and concepts with respect to cost models, (2) a classification of these definitions in terms of product, process, and personnel focus, and (3) the definition of a set of discriminators for defining model boundaries and potential drivers for an SoS cost estimation model. Eleven SoS definitions are synthesized to provide reasonable coverage of the different properties of SoS, and the results are illustrated with two examples.

Added September 19th, 2006


USC-CSE-2005-527

Jo Ann Lane, "Factors Influencing System-of-Systems Architecting and Integration Costs," Proceedings of Conference on Systems Engineering Research, 2005 (pdf)

Today’s need for more complex, more capable systems in a short timeframe is leading more organizations towards the integration of existing systems into network-centric, knowledge-based system-of-systems (SoS). Software and system cost model tools to date have focused on the software and system development activities of a single software system. As we view the new SoS architectures, we find that the effort associated with the integration of these SoSs is not handled well, if at all, in current cost models. USC’s Center for Software Engineering (CSE) began work on an SoS cost model, the Constructive SoS Integration Model (COSOSIMO), in late 2003. This model has evolved using feedback obtained from USC CSE affiliates and other experts in industry and academia.

This paper presents an overview of the COSOSIMO cost model, descriptions of the size drivers and cost factors currently in the model, a summary of survey feedback received from USC CSE affiliates and other interested experts from industry, and the impact of survey findings on the current COSOSIMO cost model. It concludes with future plans for the COSOSIMO model.
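
The calibrated COSOSIMO drivers are described in the paper itself; purely to illustrate the general shape such parametric models take, the sketch below computes effort as a calibration constant times a size measure raised to an exponent, scaled by a product of effort multipliers. All constants, driver names, and rating values here are invented placeholders, not COSOSIMO's actual parameters.

    # Illustrative sketch of a COSOSIMO-style parametric cost model.
    # Constants, size drivers, and cost-factor values are hypothetical
    # placeholders, not the calibrated COSOSIMO parameters.

    def sos_integration_effort(size_drivers, cost_factors, A=2.5, B=1.10):
        """Estimate SoS integration effort in person-months.

        size_drivers: additive size counts (assumed already weighted
                      into a common unit).
        cost_factors: multiplicative effort multipliers, where 1.0 is
                      nominal and values above 1.0 increase effort.
        """
        size = sum(size_drivers.values())
        multiplier = 1.0
        for em in cost_factors.values():
            multiplier *= em
        # Parametric form: PM = A * Size^B * product(EM_i)
        return A * (size ** B) * multiplier

    # Hypothetical example: two size drivers, two cost factors.
    pm = sos_integration_effort(
        size_drivers={"interfaces": 40, "operational_scenarios": 12},
        cost_factors={"integration_team_capability": 0.85,
                      "component_maturity": 1.20},
    )
    print(f"Estimated SoS integration effort: {pm:.0f} person-months")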

Added September 19th, 2006


USC-CSE-2005-526

Sam Malek, Marija Mikic-Rakic, Nenad Medvidovic, "A Decentralized Redeployment Algorithm for Improving the Availability of Distributed Systems," Lecture Notes in Computer Science, Component Deployment, Springer Berlin / Heidelberg, Volume 3798/2005, pp. 99-114 (pdf)

In distributed and mobile environments, the connections among the hosts on which a software system is running are often unstable. As a result of connectivity losses, the overall availability of the system decreases. The distribution of software components onto hardware nodes (i.e., the system’s deployment architecture) may be ill-suited for the given target hardware environment and may need to be altered to improve the software system’s availability. Determining a software system’s deployment that will maximize its availability is an exponentially complex problem. Although several polynomial-time approximative techniques have been developed recently, these techniques rely on the assumption that the system’s deployment architecture and its properties are accessible from a central location. For these reasons, the existing techniques are not applicable to an emerging class of decentralized systems marked by limited system-wide knowledge and lack of centralized control. In this paper we present an approximative solution for the redeployment problem that is suitable for decentralized systems and assess its performance.
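
As a rough illustration of the redeployment idea itself (not the paper's decentralized algorithm), the sketch below hill-climbs over single-component moves to improve a frequency-weighted availability measure. The names, data structures, and availability definition are simplifying assumptions, and real formulations add constraints (e.g., host memory) omitted here.

    # Centralized greedy sketch of availability-driven redeployment.
    # Hypothetical structures: `deployment` maps component -> host,
    # `freq` maps component pairs -> interaction frequency, and `rel`
    # maps host pairs (frozensets) -> link reliability in [0, 1].
    import itertools

    def availability(deployment, freq, rel):
        """Frequency-weighted availability of a deployment: each
        interaction counts in proportion to the reliability of the
        link between its hosts (1.0 when colocated)."""
        num = total = 0.0
        for (c1, c2), f in freq.items():
            h1, h2 = deployment[c1], deployment[c2]
            link = 1.0 if h1 == h2 else rel[frozenset((h1, h2))]
            num += f * link
            total += f
        return num / total if total else 1.0

    def greedy_redeploy(deployment, hosts, freq, rel):
        """Apply the best single-component move until no move helps
        (polynomial-time hill climbing; no optimality guarantee)."""
        improved = True
        while improved:
            improved = False
            best = availability(deployment, freq, rel)
            for comp, host in itertools.product(list(deployment), hosts):
                if deployment[comp] == host:
                    continue
                previous = deployment[comp]
                deployment[comp] = host
                score = availability(deployment, freq, rel)
                if score > best:
                    best, improved = score, True
                else:
                    deployment[comp] = previous
        return deployment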

Added March 8th, 2004


USC-CSE-2005-525

Marija Mikic-Rakic, Sam Malek, Nenad Medvidovic, "A Style-Aware Architectural Middleware for Resource-Constrained, Distributed Systems," IEEE Transactions on Software Engineering, Volume 31, Issue 3, March 2005, pp. 256-272 (pdf)

A recent emergence of small, resource-constrained, and highly-mobile computing platforms presents numerous new challenges for software developers. We refer to development in this new setting as programming-in-the-small-and-many (Prism). This paper provides a description and evaluation of Prism-MW, a middleware platform intended to support software architecture-based development in the Prism setting. Prism-MW provides highly efficient and scalable implementation-level support for the key aspects of Prism application architectures, including their architectural styles. Additionally, Prism-MW is easily extensible to support different application requirements suitable for the Prism setting. Prism-MW has been applied in a number of applications and used as an educational tool in graduate-level software architecture and embedded systems courses. Recently, Prism-MW has been successfully evaluated by a major industrial organization for use in one of their key distributed embedded systems. Our experience with the middleware indicates that the principles of architecture-based software development can be successfully, and flexibly, applied in the Prism setting.
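
To make "architecture-based development" concrete, here is a minimal sketch of the programming model such a middleware supports: components that exchange events only through explicit connectors, so the architectural style remains visible in the implementation. The class and method names are illustrative and do not reflect the actual Prism-MW API.

    # Toy event-based component/connector model (not the Prism-MW API).

    class Event:
        def __init__(self, name, **payload):
            self.name, self.payload = name, payload

    class Connector:
        """Routes events among attached components (simple broadcast)."""
        def __init__(self):
            self.components = []
        def attach(self, component):
            self.components.append(component)
            component.connector = self
        def broadcast(self, event, sender):
            for c in self.components:
                if c is not sender:
                    c.handle(event)

    class Component:
        connector = None
        def send(self, event):
            if self.connector:
                self.connector.broadcast(event, sender=self)
        def handle(self, event):
            pass  # overridden by concrete components

    class Sensor(Component):
        def read(self, value):
            self.send(Event("reading", value=value))

    class Logger(Component):
        def handle(self, event):
            print(f"logged {event.name}: {event.payload}")

    bus = Connector()
    sensor, logger = Sensor(), Logger()
    bus.attach(sensor)
    bus.attach(logger)
    sensor.read(42)  # -> logged reading: {'value': 42}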

Added June 24th, 2004


USC-CSE-2005-524

Chris A. Mattmann, Sam Malek, Nels Beckman, Marija Mikic-Rakic, Nenad Medvidovic, Dan Crichton, "GLIDE: A Grid-based Lightweight Infrastructure for Data-intensive Environments," Proceedings of the European Grid Conference (EGC2005), Amsterdam, the Netherlands, February 14th-16th, 2005, pp. 68-77 (pdf)

The promise of the grid is that it will enable public access and sharing of immense amounts of computational and data resources among a large number of individuals and institutions. However, the current grid solutions make several limiting assumptions that curtail their widespread adoption in the emerging decentralized, resource constrained, embedded, autonomic, and mobile (DREAM) environments: they are designed primarily for highly complex scientific problems, and therefore require powerful hardware and reliable network connectivity; additionally, they provide no application design support to grid users (e.g., scientists). To address these limitations, we present GLIDE, a prototype light-weight, data-intensive middleware infrastructure that enables access to the robust data and computational power of the grid on DREAM platforms. GLIDE embodies a number of features of an existing data grid solution within the framework of an existing DREAM middleware solution with extensive application design capabilities. We illustrate GLIDE on an example mp3 file sharing application. We discuss our early experience with GLIDE and present a set of open research questions.

Added August 2nd, 2004


USC-CSE-2005-523

Chris A. Mattmann, Nenad Medvidovic, Paul Ramirez, Vladimir Jakobac, "Unlocking the Grid," Proceedings of the 8th ACM SIGSOFT International Symposium on Component-based Software Engineering (CBSE8), St. Louis, Missouri, May 14-15, 2005, pp. 322-336 (pdf)

The grid has emerged as a novel paradigm that supports seamless cooperation of distributed, heterogeneous computing resources in addressing highly complex computing and data management tasks. A number of software technologies have emerged to enable “grid computing”. However, their exact nature, underlying principles, requirements, and architecture are still not fully understood and remain under-specified. In this paper, we present the results of a study whose goal was to try to identify the key underlying requirements and shared architectural traits of grid technologies. We then used these requirements and architecture in assessing five existing, representative grid technologies. Our studies show a fair amount of deviation by the individual technologies from the widely cited baseline grid architecture. Our studies also suggest a core set of critical requirements that must be satisfied by grid technologies, and highlight a key distinction between “computational” and “data” grids in terms of the identified requirements.

Added December 11th, 2004


USC-CSE-2005-522

Marija Mikic-Rakic, Sam Malek, Nenad Medvidovic, "Improving Availability in Large, Distributed, Component-Based Systems via Redeployment," Lecture Notes in Computer Science, Component Deployment, Springer Berlin / Heidelberg, Volume 3798/2005, pp. 83-98 (pdf)

In distributed and mobile environments, the connections among the hosts on which a software system is running are often unstable. As a result of connectivity losses, the overall availability of the system decreases. The distribution of software components onto hardware nodes (i.e., the system’s deployment architecture) may be ill-suited for the given target hardware environment and may need to be altered to improve the software system’s availability. The critical difficulty in achieving this task lies in the fact that determining a software system’s deployment that will maximize its availability is an exponentially complex problem. In this paper, we present a fast approximative solution for this problem, and assess its performance.
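
One common way to state the optimization problem behind this abstract (our notation; not necessarily the paper's exact formulation) is to choose the deployment d that maximizes frequency-weighted availability subject to capacity constraints:

    \max_{d\,:\,C \to H} \; A(d) =
      \frac{\sum_{i<j} f(c_i, c_j)\, r\bigl(d(c_i), d(c_j)\bigr)}
           {\sum_{i<j} f(c_i, c_j)}
    \qquad \text{s.t.} \quad
    \sum_{c\,:\,d(c)=h} m(c) \le M(h) \quad \forall h \in H

Here d assigns each component in C to a host in H, f is the interaction frequency between two components, r is the reliability of the link between two hosts (taken as 1 when the components share a host), and m and M are component memory footprints and host capacities. Exhaustive search over the |H|^|C| candidate deployments is what makes the exact problem exponentially complex.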

Added December 15th, 2003


USC-CSE-2005-521

Barry Boehm, "Vic Basili's Experience Base Papers," Foundations of Empirical Software Engineering, Boehm, Barry; Rombach, Hans Dieter; Zelkowitz, Marvin V. (Eds.), Springer-Verlag, 2005 (pdf)

It's impossible to discuss a piece of Vic's work without relating it to the other pieces. Everything fits together within an overall strategy of applying the empirical scientific method to the challenge of continuously improving an organization's software processes and products.

The Goal-Question-Metric approach recognizes that "improvement" requires metrics, but that every organization has its own set of goals and environmental influences. This means that "improvement" metrics may be anything from meaningless to dysfunctional if they aren't related to the organization's goals and to questions about the organization's current state and evolving environment. The Quality Improvement Paradigm recognizes that continuous process and product improvement needs to fit within a framework involving the scientific method of hypothesis formulation, test, and closed-loop feedback control. The Experience Factory recognizes that continuous improvement, as with any other investment to achieve results, should have a business plan, management commitment to the plan, and an infrastructure of policies, processes, procedures, facilities, tools, management information systems, staffing, training, and incentives to get best results. The Software Engineering Laboratory (SEL) has been a marvelous example of successfully applying, evaluating, learning about, and evolving all of these concepts and capabilities in the area of software development and evolution. It justly deserved being the first recipient of the IEEE Software Process Achievement Award.
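
As a toy illustration of the Goal-Question-Metric structure, where every metric exists only because it answers a question tied to an organizational goal (the goal, questions, and metrics below are invented examples, not drawn from Basili's work):

    # Hypothetical GQM tree: goal -> questions -> metrics.
    gqm = {
        "goal": "Reduce defects delivered to customers",
        "questions": [
            {"question": "Where are defects introduced?",
             "metrics": ["defects by injection phase",
                         "defects by module"]},
            {"question": "How effective are our reviews?",
             "metrics": ["defects found per review hour",
                         "defect escape rate past review"]},
        ],
    }

    for q in gqm["questions"]:
        print(f"{gqm['goal']} <- {q['question']} <- {', '.join(q['metrics'])}")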

In this context, a narrow definition of an Experience Base is that it serves as the management information system for the continuous process and product improvement enterprise. This includes the data definitions, database organization and content, database management capabilities, and analysis tools for formulating, testing, and evolving hypotheses about improving the organization's processes and products. But I think a broader definition is more appropriate: the entire infrastructure of product, process, data, and personnel assets that evolve to enable the organization to most rapidly and cost-effectively improve its capabilities to adapt to its changing goals and environment.

Added July 18th, 2008


USC-CSE-2005-520

Ye Yang, Zhihao Chen, Ricardo Valerdi, Barry Boehm, "Effect of Schedule Compression on Project Effort," 27th Conference of the International Society of Parametric Analysts, Denver, CO, June 2005 (pdf)

Schedule pressure is often faced by project managers and software developers who want to quickly deploy information systems. Typical strategies to compress project time scales might include adding more staff/personnel, investing in development tools, improving hardware, or improving development methods. The tradeoff between cost, schedule, and performance is one of the most important analyses performed during the planning stages of software development projects. In order to adequately compare the effects of these three constraints on the project it is essential to understand their individual influence on the project’s outcome.

In this paper, we present an investigation into the effect of schedule compression on software project development effort and cost, and show that people are generally optimistic when estimating the amount of schedule compression. This paper is divided into three sections. First, we apply Ideal Effort Multiplier (IEM) analysis to the SCED cost driver of the COCOMO II model. Second, we compare the actual schedule compression ratios exhibited by 161 industry projects with the ratios represented by the SCED cost driver. Finally, based on this analysis, we introduce a newly proposed set of SCED driver ratings for COCOMO II that improves the model’s estimation accuracy by 6%.
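
To illustrate how the SCED driver models compression, the sketch below applies a COCOMO II-style effort equation. The SCED multipliers are the commonly cited COCOMO II.2000 ratings; the project size and remaining constants are illustrative assumptions, not values from this paper's 161-project dataset.

    # Worked example: the SCED driver's effort penalty for compressing
    # the schedule below nominal in a COCOMO II-style equation.
    SCED_MULTIPLIER = {   # % of nominal schedule -> effort multiplier
        75: 1.43,   # Very Low (compress to 75% of nominal)
        85: 1.14,   # Low
        100: 1.00,  # Nominal
    }

    def effort_pm(ksloc, sced_pct, A=2.94, E=1.10, other_ems=1.0):
        """Person-months = A * Size^E * EM_SCED * (other multipliers)."""
        return A * ksloc ** E * SCED_MULTIPLIER[sced_pct] * other_ems

    nominal = effort_pm(100, 100)       # hypothetical 100-KSLOC project
    compressed = effort_pm(100, 75)
    print(f"nominal: {nominal:.0f} PM, at 75% schedule: {compressed:.0f} PM"
          f" (+{100 * (compressed / nominal - 1):.0f}% effort)")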

Added December 11th, 2005


USC-CSE-2005-519

Vladimir Jakobac, "Recovering Architectural Views of OO Systems" (pdf)

In this paper, we present an iterative, user-guided approach to recovering architectural information of OO legacy systems based on a framework for analyzing and visualizing software systems. The framework is built around a pluggable and extensible set of clues and rules about a given problem domain, execution environment, and/or programming language. Architecture-relevant information is recovered by providing several architectural views. While the purpose view captures the high-level functionality of the system elements, the usage view identifies the regions of related elements. These views are then used as input to a subsequent phase that produces high-level structure and interaction views of the system. We have developed an implementation prototype of our framework targeted at Java systems. The tool is integrated with IBM Rational Rose®.
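
The following toy sketch conveys the flavor of such pluggable clues and rules: simple predicates vote on each class's purpose category, and new rules can be added without touching the classifier. The rules and categories are hypothetical, not the framework's actual rule set.

    # Hypothetical clue-and-rule classification of classes by purpose.
    RULES = [
        # (predicate over a class summary, purpose label)
        (lambda c: c["name"].endswith(("Dialog", "View", "Panel")),
         "user interface"),
        (lambda c: "java.sql" in c["imports"], "data access"),
        (lambda c: c["name"].endswith(("Util", "Helper")), "utility"),
    ]

    def classify(cls):
        """Return every purpose label whose rule fires for this class."""
        votes = [label for predicate, label in RULES if predicate(cls)]
        return votes or ["application logic"]  # default when no clue fires

    print(classify({"name": "OrderDialog", "imports": ["javax.swing"]}))
    # -> ['user interface']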

Added December 12th, 2005


USC-CSE-2005-517

Zhihao Chen, Tim Menzies, Dan Port, Barry Boehm, "Finding the Right Data for Software Cost Modeling," IEEE Software, Volume 22, Issue 6, November-December 2005, pp. 38-46 (pdf)

Strange to say, when building a software cost model, sometimes it's useful to ignore much of the available cost data. One way to do this is to perform data-pruning experiments after data collection and before model building. Experiments involving a set of Unix scripts that employ a variable-subtraction algorithm from the WEKA (Waikato Environment for Knowledge Analysis) data-mining toolkit illustrate this approach's effectiveness. This article is part of a special issue on predictor modeling.
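
As a rough sketch of the data-pruning idea (a stand-in for, not a reimplementation of, the WEKA variable-subtraction algorithm the article uses), backward elimination can drop cost-driver columns whenever a cross-validated model does no worse without them:

    # Hypothetical backward elimination over the columns of a pandas
    # DataFrame X of cost drivers, with y the observed effort.
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def prune_columns(X, y, columns):
        """Greedily drop columns while cross-validated error does not
        worsen; returns the surviving column names."""
        def score(cols):
            return cross_val_score(LinearRegression(), X[cols], y,
                                   scoring="neg_mean_squared_error").mean()
        keep = list(columns)
        improved = True
        while improved and len(keep) > 1:
            improved = False
            baseline = score(keep)
            for col in list(keep):
                trial = [c for c in keep if c != col]
                if score(trial) >= baseline:   # no worse without `col`
                    keep, improved = trial, True
                    break
        return keep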

Added November 11th, 2005


USC-CSE-2005-516

Barry Boehm, Rich Turner, "Management Challenges to Implementing Agile Processes in Traditional Development Organizations," IEEE Software, Volume 22, Issue 5, September-October 2005, pp. 30-39 (pdf)

Discussions with traditional developers and managers concerning agile software development practices nearly always contain two somewhat contradictory ideas. They find that on small, stand-alone projects, agile practices are less burdensome and more in tune with the software industry's increasing needs for rapid development and coping with continuous change. Managers face several barriers, real and perceived, when they try to bring agile approaches into traditional organizations. We categorized the barriers either as problems only in terms of scope or scale, or as significant general issues needing resolution. From these two categories, we've identified three areas - development process conflicts, business process conflicts, and people conflicts - that we believe are the critical challenges to software managers of large organizations in bringing agile approaches to bear in their projects.

Added November 11th, 2005


USC-CSE-2005-515

Ye Yang, Jesal Bhuta, Barry Boehm, Dan Port, "Value-Based Processes for COTS-Based Applications," IEEE Software, Volume 22, Issue 4, July-August 2005, pp. 54-62 (pdf)

Economic imperatives are changing the nature of software development processes to reflect both the opportunities and challenges of using COTS products. Processes are increasingly moving away from the time-consuming composition of custom software from lines of code (although these processes still apply for developing the COTS products themselves) toward assessment, tailoring, and integration of COTS or other reusable components. Two factors are driving this change: COTS or other reusable components can provide significant user capabilities within limited costs and development time, and more COTS products are becoming available to provide needed user functions.

Added November 11th, 2005


USC-CSE-2005-513

Barry Boehm, Ricardo Valerdi, "Achievements and Challenges in Software Resources Estimation," submitted to ICSE 2006 (pdf)

This paper summarizes major achievements and challenges of software resource estimation over the last forty years. We address critical issues that enabled major achievements such as the development of good model forms, criteria for evaluating models, methods for integrating expert judgment and statistical data analysis, and processes for developing new models that cover new software development approaches. Future trends in software development and evolution processes are projected, along with their implications and challenges for future software resource estimation capabilities.

Added November 11th, 2005


USC-CSE-2005-512

Keun Lee, Barry Boehm, "Empirical Results from an Experiment on Value-Based Review (VBR) Processes," ISESE 2005 (pdf)

As part of our research on value-based software engineering, we conducted an experiment on the use of value-based review (VBR) processes. We developed a set of VBR checklists with issues ranked by success-criticality, and a set of VBR processes prioritized by issue criticality and stakeholder-negotiated product capability priorities. The experiment involved 28 independent verification and validation (IV&V) subjects (full-time working professionals taking a distance learning course) reviewing specifications produced by 18 real-client, full-time student e-services projects. The IV&V subjects were randomly assigned to use either the VBR approach or our previous value-neutral checklist-based reading (CBR) approach. The difference between groups was not statistically significant for number of issues reported, but was statistically significant for number of issues per review hour, total issue impact, and cost effectiveness in terms of total issue impact per review hour. For the latter, the VBRs were roughly twice as cost-effective as the CBRs.
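
The comparison metrics are simple to state; the sketch below computes them for made-up review data (the numbers are invented for illustration and merely chosen to echo the roughly two-to-one cost-effectiveness ratio reported above):

    # Metrics compared in the experiment, on invented example data.
    def review_metrics(num_issues, impacts, hours):
        return {
            "issues_per_hour": num_issues / hours,
            "total_impact": sum(impacts),
            "impact_per_hour": sum(impacts) / hours,  # cost-effectiveness
        }

    vbr = review_metrics(14, impacts=[3, 3, 2, 2] + [1] * 10, hours=4.0)
    cbr = review_metrics(15, impacts=[2, 2] + [1] * 13, hours=6.0)
    print(f"VBR: {vbr['impact_per_hour']:.1f} impact/hour, "
          f"CBR: {cbr['impact_per_hour']:.1f} impact/hour")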

Added November 11th, 2005


USC-CSE-2005-509

Barry Boehm, Ricardo Valerdi, Jo Ann Lane, A. Winsor Brown, "COCOMO Suite Methodology and Evolution," CrossTalk, April 2005 (pdf)

Over the years, software managers and software engineers have used various cost models such as the Constructive Cost Model (COCOMO) to support their software cost and estimation processes. These models have also helped them to reason about the cost and schedule implications of their development decisions, investment decisions, client negotiations and requested changes, risk management decisions, and process improvement decisions. Since its introduction, COCOMO has cultivated a user community that has contributed to its development and calibration. COCOMO has also evolved to meet user needs as the scope and complexity of software system development has grown. This eventually led to the current version of the model: COCOMO II.2000.3. The growing need for the model to estimate different aspects of software development served as a catalyst for the creation of derivative models and extensions that could better address commercial off-the-shelf software integration, system engineering, and system-of-systems architecting and engineering. This article presents an overview of the models in the COCOMO suite that includes extensions and independent models, and describes the underlying methodologies and the logic behind the models and how they can be used together to support larger software system estimation needs. It concludes with a discussion of the latest University of Southern California Center for Software Engineering effort to unify these various models into a single, comprehensive, user-friendly tool.

Added November 10th, 2005


USC-CSE-2005-508

Jo Ann Lane, "System of Systems Lead System Integrators: Where do They Spend Their Time and What Makes Them More/Less Efficient? - Background for COSOSIMO" (pdf)

As organizations strive to expand system capabilities through the development of system-of-systems (SoS) architectures, they want to know "how much effort" and "how long". In order to answer these questions, it is important to first understand the types of activities performed in SoS architecture development and integration and how these vary across different SoS implementations. This paper provides preliminary results of research conducted to determine types of SoS Lead System Integrator (LSI) activities and how these differ from the more traditional system engineering activities described in EIA 632 (Processes for Engineering a System). It also looks at concepts in organizational theory, complex adaptive systems, and chaos theory and how these might be applied to SoS LSI activities to improve success rates and efficiency in the development of these "very large" complex systems.

Added November 10th, 2005


USC-CSE-2005-507

Barry Boehm, "The Future of Software and Systems Engineering Processes," SSCI Member Forum, 2005 (pdf)

In response to the increasing criticality of software within systems and the increasing demands being put onto software-intensive systems, software and systems engineering processes will evolve significantly over the next two decades. This paper identifies eight relatively surprise-free trends - the increasing interaction of software engineering and systems engineering; increased emphasis on users and end value; increased emphasis on systems and software dependability; increasingly rapid change; increasing global connectivity and need for systems to interoperate; increasingly complex systems of systems; increasing needs for COTS, reuse, and legacy systems and software integration; and computational plenty. It also identifies two "wild card" trends: increasing software autonomy and combinations of biology and computing. It then discusses the likely influences of these trends on software and systems engineering processes between now and 2025, and presents an emerging three-team adaptive process model for coping with the resulting challenges and opportunities of developing 21st century software-intensive systems and systems of systems.

Added June 29th, 2005


USC-CSE-2005-506

Chiyoung Seo, Sam Malek, Nenad Medvidovic, "A Generic Approach for Estimating the Energy Consumption of Component-Based Distributed Systems" (pdf)

In distributed software systems, each software component interacts with other components in order to provide users with various services. Recently, portable devices (e.g., PDAs) with wireless network capabilities have come into wide use in building Java-based distributed software systems. However, these portable devices generally suffer from limited battery power. Since each device has a different battery capacity and each software component consumes a different amount of power, the initial deployment of software components across the devices may become ill-suited, with respect to the duration of services, as the devices’ batteries drain. It is therefore necessary to redeploy software components over the mobile devices at runtime, taking into account each component’s energy consumption and the remaining battery capacity of each device, in order to increase the lifetime of the services provided by the distributed software components. As a step toward this goal, we propose a generic approach for estimating the energy consumption of Java-based software components that can easily be applied to heterogeneous devices. Through extensive experiments, we show that our estimation model is generic and highly accurate when compared against actual energy consumption.
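
As a rough sketch of what such an estimation model involves (the coefficients and structure below are illustrative assumptions, not the paper's actual model), a component's energy draw can be built up from per-operation compute costs plus communication costs, from which a device's battery lifetime follows:

    # Hypothetical per-component energy model and device lifetime.
    JOULES_PER_OP = {"compute": 2e-6, "mem_access": 5e-7}
    JOULES_PER_BYTE_TX = 1e-6   # wireless transmission cost per byte

    def component_energy(op_counts, bytes_sent):
        """Energy (joules) one component spends per workload interval."""
        compute = sum(JOULES_PER_OP[op] * n for op, n in op_counts.items())
        return compute + JOULES_PER_BYTE_TX * bytes_sent

    def device_lifetime_s(battery_joules, components, interval_s=1.0):
        """Seconds until the battery drains, assuming each hosted
        component repeats its per-interval workload at a fixed rate."""
        drain = sum(component_energy(ops, tx) for ops, tx in components)
        return battery_joules / drain * interval_s

    # Hypothetical PDA hosting two components:
    comps = [({"compute": 5_000_000, "mem_access": 1_000_000}, 2_048),
             ({"compute": 1_000_000, "mem_access": 200_000}, 10_240)]
    print(f"estimated lifetime: {device_lifetime_s(18_000, comps):.0f} s")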

Added April 4th, 2005


USC-CSE-2005-502

Keun Lee, Monvorath Phongpaibul, Barry Boehm, "Value-Based Verification and Validation Guidelines" (pdf)

The USC Center for Software Engineering’s Value-Based Software Engineering agenda involves experimentation with value-based reformulations of traditional value-neutral software engineering methods. The experimentation explores conditions under which value-based methods lead to more cost-effective project outcomes, and assesses the degree of impact that value-based methods have on the various dimensions of project outcomes. Examples of areas in which value-based techniques have shown improvements in cost-effectiveness include stakeholder win-win requirements determination, use of value-based anchor point milestones, use of prioritized requirements to support schedule-as-independent-variable development processes, and the use of risk management and business case analysis to support value-based project monitoring and control.

Added March 31st, 2005


USC-CSE-2005-501

Zhihao Chen, "Software Engineering Graduate Project Effort Analysis Report" (pdf)

In the graduate course sequence CSCI 577ab, students apply software engineering methodologies, processes, procedures, and models to manage software development. This report analyzes their activities and effort distribution from fall 2001 to spring 2004. The results should help students arrange their time appropriately, manage their schedules, and plan their projects; they should also be valuable for software engineering research.

Added January 25th, 2005


USC-CSE-2005-500

Barry Boehm, "Software Process Disruptors, Opportunity Areas, and Strategies" (pdf)

The near future (5-10 years) of software processes will be largely driven by disruptive forces that require organizations to change their traditional ways of doing business. This report begins with a discussion of the major current and near-future disruptors in the software process area and how they interact. It then discusses major trends in terms of opportunity areas for dealing with various combinations of disruptors. Based on the opportunity areas, it then identifies some attractive future strategies that appear to have high payoff.

Added January 6th, 2005


Copyright 2008 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software. Permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.