University of Southern California
    
Center for Systems and Software Engineering


Technical Reports

USC-CSE-2004-524

Nenad Medvidovic, Nikunj R. Mehta, "JavaBeans and Software Architecture," The Internet Encyclopedia, John Wiley and Sons (pdf)

Java has emerged as a popular programming language and platform for Web applications. JavaBeans defines the software component model of the Java programming language, used to create reusable, coarse-grained components. Beans are used in numerous complementary technologies, including Enterprise JavaBeans, the Java Abstract Windowing Toolkit (AWT), Java Database Connectivity (JDBC), JavaMail, and Java Management Extensions (JMX), to create software components. Beans provide a hybrid of object-oriented and loosely coupled architectural styles, in which components interact through both events and method calls. This chapter focuses on the support for component technology in Java, specifically in the form of JavaBeans and its technology variants that fulfill the needs of Web application development. The chapter also discusses the role of JavaBeans and its technology variants in architecture-based software development.
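
The hybrid interaction style the abstract mentions can be sketched with a minimal bean. The `TemperatureBean` below is a hypothetical example of ours, not from the chapter: it exposes an ordinary method-call interface (`setTemperature`) alongside a bound property that notifies loosely coupled listeners through events.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// Hypothetical bean illustrating the hybrid style the abstract describes:
// direct method calls plus loosely coupled event notification.
public class TemperatureBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private double temperature;

    public double getTemperature() { return temperature; }

    public void setTemperature(double value) {
        double old = temperature;
        temperature = value;
        // Notifies registered listeners without the bean knowing who they are.
        pcs.firePropertyChange("temperature", old, value);
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public static void main(String[] args) {
        TemperatureBean bean = new TemperatureBean();
        bean.addPropertyChangeListener(
            e -> System.out.println(e.getPropertyName() + " -> " + e.getNewValue()));
        bean.setTemperature(21.5);
    }
}
```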

Added February 9th, 2003


USC-CSE-2004-523

Marija Mikic-Rakic, Nenad Medvidovic, "Software Architectural Support for Disconnected Operation in Highly Distributed Environments," ICDCS 2004 (pdf)

In distributed and mobile environments, the connections among the hosts on which a software system is running are often unstable. As a result of connectivity losses, the overall availability of the system decreases. The distribution of software components onto hardware nodes (i.e., the deployment architecture) may be ill-suited for the given target hardware environment and may need to be altered to improve the software system's availability. The critical difficulty in achieving this task lies in the fact that determining a software system's deployment that will maximize its availability is an exponentially complex problem. In this paper, we present an automated, flexible, software architecture-based solution for disconnected operation that increases the availability of the system during disconnection. We provide a fast approximate solution for the exponentially complex redeployment problem and assess its performance.
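
For illustration only, the redeployment problem the abstract calls exponentially complex can be attacked greedily. The sketch below is our own toy heuristic, not the paper's actual algorithm; the interaction and link-reliability matrices, capacities, and scoring rule are all invented. It places each component on the host that best preserves the reliability of its interactions with already-placed components.

```java
import java.util.Arrays;

// Toy illustration (not the paper's algorithm): greedily assign each
// component to the host that currently yields the highest expected
// availability for its interactions, subject to host capacity.
// Assumes total host capacity >= number of components.
public class GreedyDeploy {
    public static int[] deploy(double[][] interaction, double[][] reliability,
                               int[] capacity) {
        int n = interaction.length, h = capacity.length;
        int[] host = new int[n];
        Arrays.fill(host, -1);
        int[] used = new int[h];
        for (int c = 0; c < n; c++) {
            int best = -1;
            double bestScore = -1;
            for (int k = 0; k < h; k++) {
                if (used[k] >= capacity[k]) continue;
                double score = 0;
                for (int other = 0; other < n; other++) {
                    if (host[other] < 0 || other == c) continue;
                    // Weight each already-placed interaction by link reliability.
                    score += interaction[c][other] * reliability[k][host[other]];
                }
                if (score > bestScore) { bestScore = score; best = k; }
            }
            host[c] = best;
            used[best]++;
        }
        return host;
    }
}
```

A greedy pass like this runs in polynomial time, which is the kind of trade-off an approximate solution to an exponential assignment problem has to make; it can of course miss the globally optimal deployment.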

Added September 17th, 2003


USC-CSE-2004-522

Barry Boehm, Jesal Bhuta, David Garlan, Eric Gradman, LiGuo Huang, Alexander Lam, Ray Madachy, Nenad Medvidovic, Kenneth Meyer, Steven Meyers, Gustavo Perez, Kirk Reinholtz, Roshanak Roshandel, Nicolas Rouquette, "Using Testbeds to Accelerate Technology Maturity and Transition: The SCRover Experience," 2004 ACM-IEEE International Symposium on Empirical Software Engineering, Redondo Beach, CA, August 2004 (pdf)

This paper is an experience report on a first attempt to develop and apply a new form of software: a full-service testbed designed to evaluate alternative software dependability technologies, and to accelerate their maturation and transition into project use. The SCRover testbed includes not only the specifications, code, and hardware of a public safety robot, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments.

The SCRover testbed’s initial operational capability has been recently applied to evaluate two architecture definition languages (ADLs) and toolsets, Mae and AcmeStudio. The testbed evaluation showed (1) that the ADL-based toolsets were complementary and cost effective to apply to mission-critical systems; (2) that the testbed was cost-effective to use by researchers; and (3) that collaboration in testbed use by researchers and the Jet Propulsion Laboratory (JPL) project users resulted in actions to accelerate technology maturity and transition into project use. The evaluation also identified a number of lessons learned for improving the SCRover testbed, and for development and application of future technology evaluation testbeds.

Added September 18th, 2003


USC-CSE-2004-521

Barry Boehm, A. Winsor Brown, LiGuo Huang, Dan Port, "The Schedule as Independent Variable (SAIV) Process for Acquisition of Software-Intensive Systems," INCOSE 2004 International Symposium (pdf)

Many system acquisitions do not achieve on-time delivery because of delays in software development. This paper presents a highly successful approach for on-time delivery of software-intensive systems: the Schedule As Independent Variable (SAIV) process. The SAIV process involves prioritization of desired features; scoping a Core Capability of top-priority features easily achievable within the available schedule; architecting for ease of dropping or adding borderline-priority features; monitoring progress with respect to plans; and adding or dropping borderline-priority features to meet the schedule target. The paper summarizes experiences and discusses critical success factors in applying the SAIV acquisition process across a range from small in-house e-services projects to very large Government systems of systems.
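
The core-capability scoping step described above can be sketched as follows. The `Feature` record, its fields, and the schedule budget are our own illustrative assumptions, not artifacts of the SAIV process definition: features are taken in priority order until the estimated effort fills the available schedule, leaving borderline features to be added or dropped later.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of SAIV-style core-capability scoping (illustrative data model,
// not part of the SAIV process definition itself).
public class SaivScoper {
    record Feature(String name, int priority, double effortWeeks) {}

    static List<Feature> coreCapability(List<Feature> features, double budgetWeeks) {
        List<Feature> sorted = new ArrayList<>(features);
        sorted.sort(Comparator.comparingInt(Feature::priority)); // 1 = highest
        List<Feature> core = new ArrayList<>();
        double used = 0;
        for (Feature f : sorted) {
            if (used + f.effortWeeks > budgetWeeks) break; // borderline features wait
            core.add(f);
            used += f.effortWeeks;
        }
        return core;
    }
}
```

The point of the architecture step in SAIV is precisely that the features left outside `core` can be dropped or added late without destabilizing the system.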

Added November 19th, 2003


USC-CSE-2004-520

Barry Boehm, LiGuo Huang, Apurva Jain, Ray Madachy, "The Nature of Information System Dependability: A Stakeholder/Value Approach" (pdf)

A critical objective of the NASA High Dependability Computing Program is a definition of “dependability” that enables the program to evaluate the contributions of existing and new computing technologies to the improvement of an information-intensive system’s dependability.  Such evaluations require one or more evaluation criteria or metrics that enable quantitative comparisons of candidate technology solutions to be performed.

Ideally, one would like to have a single dependability metric by which the contributions of each technology could be ranked. However, in practice, such a one-size-fits-all metric is unachievable. Different systems have different success-critical stakeholders, and these stakeholders depend on the system in different ways. For example, the mean time between failures (MTBF) for a user depending on acceptable response time will be different from the MTBF for an operator depending only on system liveness (a real stakeholder value conflict on the Earth Observation System Distributed Information System).

Added December 16th, 2009


USC-CSE-2004-519

Barry Boehm, A. Winsor Brown, LiGuo Huang, Dan Port, "The Schedule as Independent Variable (SAIV) Process for Acquisition of Software-Intensive Systems," Proceedings, INCOSE 2004, July 2004 (pdf)

In this article, we show how you can use the MBASE process framework to generate a family of acquisition process models for delivering user-satisfactory systems under schedule, cost, and quality constraints. We present the six major steps of the Schedule/Cost/Schedule-Cost-Quality as Independent Variable (SAIV/CAIV/SCQAIV) process using SAIV and a representative Department of Defense (DoD) Command, Control, and Communications Interoperability application as context. We then summarize our experience in using SAIV on 26 University of Southern California electronic services projects, followed by discussions of SAIV/CAIV/SCQAIV application in the commercial and defense sectors, of model application within the DoD acquisition framework, and of the resulting conclusions.

Added November 14th, 2005


USC-CSE-2004-518

Barry Boehm, Jesal Bhuta, David Garlan, Eric Gradman, LiGuo Huang, Alexander Lam, Ray Madachy, Nenad Medvidovic, Kenneth Meyer, Steven Meyers, Gustavo Perez, Kirk Reinholtz, Roshanak Roshandel, Nicolas Rouquette, "Using Empirical Testbeds to Accelerate Technology Maturity and Transition: The SCRover Experience," Proceedings of the 2004 International Symposium on Empirical Software Engineering, ISESE'04, August 19-20 2004, pp. 117-126 (pdf)

This paper is an experience report on a first attempt to develop and apply a new form of software: a full-service empirical testbed designed to evaluate alternative software dependability technologies, and to accelerate their maturation and transition into project use. The SCRover testbed includes not only the specifications, code, and hardware of a public safety robot, but also the package of instrumentation, scenario drivers, seeded defects, experimentation guidelines, and comparative effort and defect data needed to facilitate technology evaluation experiments.

The SCRover testbed's initial operational capability has been recently applied to empirically evaluate two architecture definition languages (ADLs) and toolsets, Mae and AcmeStudio. The testbed evaluation showed (1) that the ADL-based toolsets were complementary and cost-effective to apply to mission-critical systems; (2) that the testbed was cost-effective to use by researchers; and (3) that collaboration in testbed use by researchers and the Jet Propulsion Laboratory (JPL) project users resulted in actions to accelerate technology maturity and transition into project use. The evaluation also identified a number of lessons learned for improving the SCRover testbed, and for development and application of future technology evaluation testbeds.

Added November 14th, 2005


USC-CSE-2004-517

Barry Boehm, A. Winsor Brown, Ray Madachy, Ye Yang, "A Software Product Line Life Cycle Cost Estimation Model," Proceedings of the 2004 International Symposium on Empirical Software Engineering, ISESE'04, August 19-20 2004, pp. 156-164 (pdf)

Most software product line cost estimation models are calibrated only to local product line data rather than to a broad range of product lines. They also underestimate the return on investment for product lines by focusing only on development rather than life-cycle savings, and by applying writing-for-reuse surcharges to the entire product rather than to the portions of the product being reused. This paper offers some insights based on the exploratory development and collaborative refinement of a software product line life cycle economics model, the Constructive Product Line Investment Model (COPLIMO), that addresses these shortfalls. COPLIMO consists of two components: a product line development cost model and an annualized post-development life cycle extension. It focuses on modeling the portions of the software that involve product-specific newly-built software, fully reused black-box product line components, and product line components that are reused with adaptation. The model is an extension of USC-CSE's well-calibrated, multi-parameter Constructive Cost Model (COCOMO) II, tailored down to cover the essentials of strategic software product line decision issues and the supporting data available from industry.
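
The cost split the abstract describes can be shown schematically. The factors below are invented for illustration, not COPLIMO's calibrated parameters: the first product pays a writing-for-reuse surcharge only on its reusable portion (the very shortfall the paper highlights), while later products reuse that portion cheaply.

```java
// Schematic of a COPLIMO-style product line cost split. All rates and
// factors here are illustrative assumptions, not the model's calibration.
public class CoplimoSketch {
    // First product: unique code at the nominal rate, plus a
    // writing-for-reuse surcharge (rcwr) applied only to the reusable portion.
    static double firstProduct(double uniqueKsloc, double reusableKsloc,
                               double ratePmPerKsloc, double rcwr) {
        return uniqueKsloc * ratePmPerKsloc + reusableKsloc * ratePmPerKsloc * rcwr;
    }

    // Each later product: its unique code plus cheap black-box reuse
    // of the shared components.
    static double laterProduct(double uniqueKsloc, double reusedKsloc,
                               double ratePmPerKsloc, double relCostOfReuse) {
        return uniqueKsloc * ratePmPerKsloc + reusedKsloc * ratePmPerKsloc * relCostOfReuse;
    }

    public static void main(String[] args) {
        double rate = 6.0; // person-months per KSLOC (hypothetical)
        double standalone = (20 + 30) * rate;            // cost with no reuse
        double first = firstProduct(20, 30, rate, 1.5);  // pays the surcharge
        double later = laterProduct(20, 30, rate, 0.1);
        double savings = 4 * (standalone - later) - (first - standalone);
        System.out.printf("net savings over 5 products: %.1f person-months%n", savings);
    }
}
```

Charging the surcharge only on the reusable 30 KSLOC, rather than on the whole 50 KSLOC, is what keeps the investment side of the ledger from being overstated.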

Added November 14th, 2005


USC-CSE-2004-516

Donald J. Reifer, Victor R. Basili, Barry Boehm, Betsy Clark, "COTS-Based Systems - Twelve Lessons Learned about Maintenance," ICCBSS 2004 (pdf)

This paper presents the twelve most significant lessons the CeBASE community has learned across a wide variety of projects, domains, and organizations about COTS-Based Systems (CBS) maintenance. Because many of the lessons identified are not intuitive, the source and implications of each lesson are also discussed within the context of a maintenance model for CBS.

Added November 14th, 2005


USC-CSE-2004-515

Barry Boehm, A. Winsor Brown, Victor R. Basili, Rich Turner, "Spiral Acquisition of Software-Intensive Systems of Systems," CrossTalk, May 2004 (pdf)

The Department of Defense and other organizations are finding that the acquisition and evolution of complex systems of systems is both software-intensive and fraught with old and new sources of risk. This article summarizes both old and new sources of risk encountered in acquiring and developing complex software-intensive systems of systems. It shows how these risks can be addressed via risk analysis, risk management planning and control, and application of the risk-driven Win-Win Spiral Model. It also discusses techniques for handling complicating factors such as compound risks, incremental development, and rapid change, and illustrates the principles and practices with experience in applying the model to the U.S. Army Future Combat Systems program and similar programs.

Added November 14th, 2005


USC-CSE-2004-514

Barry Boehm, LiGuo Huang, Apurva Jain, Ray Madachy, "The ROI of software dependability: The iDAVE model," IEEE Software, Volume 21, Issue 3, May-June 2004, pp. 54-61 (pdf)

In most organizations, proposed investments in software dependability compete for limited resources with proposed investments in software and system functionality, response time, adaptability, speed of development, ease of use, and other system capabilities. The lack of good return-on-investment models for software dependability makes determining the overall business case for dependability investments difficult. So, with a weak business case, investments in software dependability and the resulting system dependability are frequently inadequate.

Because different stakeholders depend on different system capabilities (such as availability, safety, or security) in different situations, the business case for dependability must deal with multiple situation-dependent attribute values. Dependability models will need to support stakeholders in determining their desired levels for each dependability attribute and estimating the cost, value, and ROI for achieving those. At the University of Southern California, researchers have developed software cost- and quality-estimation models and value-based software engineering processes, methods, and tools. We used these models and the value-based approach to develop an Information Dependability Attribute Value Estimation model (iDAVE) for reasoning about software dependability's ROI.
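
The basic arithmetic behind reasoning about dependability's ROI can be sketched as follows. The formula is the generic ROI definition and the dollar figures are hypothetical; this is not iDAVE's calibrated estimation chain, which derives cost and value from the underlying COCOMO II and COQUALMO-style models.

```java
// Schematic ROI calculation in the spirit of iDAVE (illustrative numbers,
// generic ROI formula; not the model's calibrated estimates).
public class DependabilityRoi {
    // ROI = (value delivered - cost of investment) / cost of investment
    static double roi(double avoidedLossValue, double investmentCost) {
        return (avoidedLossValue - investmentCost) / investmentCost;
    }

    public static void main(String[] args) {
        // Hypothetical: $200K invested in fault tolerance avoids $700K of
        // expected downtime losses for an availability-critical stakeholder.
        System.out.printf("ROI = %.2f%n", roi(700_000, 200_000));
    }
}
```

The multi-stakeholder point in the abstract enters here: `avoidedLossValue` differs by stakeholder and by dependability attribute, so the same investment can show a strong ROI for one stakeholder and a weak one for another.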

Added November 14th, 2005


USC-CSE-2004-513

Vladimir Jakobac, Alexander Egyed, Nenad Medvidovic, "ARTISAn: An Approach and Tool for Improving Software System Understanding via Interactive, Tailorable Source Code Analysis" (pdf)

In situations in which developers are not familiar with a system or its documentation is inadequate, the system's source code becomes the only reliable source of information. Unfortunately, source code has much more detail than is needed to understand the system, and it disperses or obscures the high-level constructs that would ease the system's understanding. Automated tools can aid system understanding by identifying recurring program features, classifying the system modules based on their purpose and usage patterns, and analyzing dependencies across the modules. This paper presents an iterative, user-guided approach to program understanding based on a framework for analyzing and visualizing software systems. The framework is built around a pluggable and extensible set of clues about a given problem domain, execution environment, and/or programming language. We evaluate our approach by analyzing the results our tool produced in several case studies.

Added December 11th, 2004


USC-CSE-2004-511

Yue Chen, Barry Boehm, Ray Madachy, Ricardo Valerdi, "An Empirical Study of eServices Product UML Sizing Metrics," ACM-IEEE International Symposium on Empirical Software Engineering (ISESE), August, 2004 (pdf)

Size is one of the most fundamental measurements of software. For the past two decades, the source line of code (SLOC) and function point (FP) metrics have dominated software sizing approaches. However, both approaches have significant defects: for example, SLOC can only be counted once software construction is complete, while FP counting is time consuming, expensive, and subjective. Since the late 1990s, researchers have been exploring faster, cheaper, and more effective sizing methods, such as Unified Modeling Language (UML) based software sizing. In this paper we present an empirical 14-project study of three different sizing metrics that cover different software life-cycle activities: requirement metrics (requirements), UML metrics (architecture), and SLOC metrics (implementation). Our results show that software size in terms of SLOC was moderately well correlated with the number of external use cases and the number of classes. We also demonstrate that the number of sequence diagram steps per external use case is a possible complexity indicator of software size. However, we conclude that, at least for this 14-project sample of eServices applications, the UML-based metrics were insufficiently well defined and codified to serve as precise sizing metrics.
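
The kind of correlation the study reports, between SLOC and a UML metric such as the number of classes, can be computed with the sample Pearson coefficient. The sketch below is the generic statistic, not the paper's specific analysis pipeline or its 14-project dataset.

```java
// Sample Pearson correlation coefficient, as one would use to relate
// SLOC to a UML size metric (generic statistic; illustrative only).
public class SizeCorrelation {
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0;
        for (int i = 0; i < n; i++) { sx += x[i]; sy += y[i]; }
        double mx = sx / n, my = sy / n;
        double cov = 0, vx = 0, vy = 0;
        for (int i = 0; i < n; i++) {
            cov += (x[i] - mx) * (y[i] - my);  // co-deviation from the means
            vx  += (x[i] - mx) * (x[i] - mx);
            vy  += (y[i] - my) * (y[i] - my);
        }
        return cov / Math.sqrt(vx * vy);       // in [-1, 1]
    }
}
```

"Moderately well correlated" in the abstract corresponds to coefficient magnitudes well short of 1, which is why the authors stop short of endorsing the UML metrics as precise sizing inputs.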

Added November 15th, 2004


USC-CSE-2004-510

Vishal Sankhla, "SMART: A Small World based Reputation System for MANETs," A Thesis In Partial Fulfillment of the Requirements for the Master of Science Degree in Electrical Engineering, USC (pdf)

We propose SMART, a novel reputation system that aggregates and distributes the current reputation of nodes in the network. Trust is evaluated in terms of the number of successful interactions an entity performs with other entities; once initial trust is established, an entity can trust its neighbors with a certain degree of confidence. These neighbors help establish a short chain of mutually trusted entities in the network, known as "trusted contacts." Each node must be consistent in providing good service or it is penalized. We aim to provide an inherent incentive for nodes to cooperate: nodes are rewarded in terms of packet forwarding and receive quality service from this trusted community. A node can evaluate trust based on its own direct observation of peer nodes' behavior or based on neighbor observations.
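
The trust bookkeeping the abstract describes might look like the following toy sketch. This is our own illustration, not SMART's actual protocol: the success-ratio definition of trust, the 0.5 default for unknown peers, and the blending weight between direct and neighbor observations are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Toy reputation table in the spirit of the abstract (not SMART itself):
// trust grows with successful interactions, failures penalize the node,
// and direct observation is blended with neighbor observations.
public class ReputationTable {
    private final Map<String, int[]> history = new HashMap<>(); // {successes, total}

    public void record(String peer, boolean success) {
        int[] h = history.computeIfAbsent(peer, k -> new int[2]);
        if (success) h[0]++;
        h[1]++;  // a failure lowers the ratio, i.e. inconsistent service is penalized
    }

    public double directTrust(String peer) {
        int[] h = history.get(peer);
        return (h == null || h[1] == 0) ? 0.5 : (double) h[0] / h[1]; // 0.5 = unknown
    }

    // Blend our own observation with an aggregated neighbor opinion.
    public double trust(String peer, double neighborOpinion, double selfWeight) {
        return selfWeight * directTrust(peer) + (1 - selfWeight) * neighborOpinion;
    }
}
```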

Added November 6th, 2004


USC-CSE-2004-507

Chris A. Mattmann, Paul Ramirez, "A Comparison and Evaluation of Architecture Recovery in Data-Intensive Systems Using Focus" (pdf)

Architecture recovery is an emerging practice in which the architecture of a software system is extracted from relevant available information, including source code, documentation, and observations of the system's runtime behavior. Typically, architecture recovery has been performed on statically linked software systems that have well-defined architectural configurations (arrangements of software components and software connectors). Recently, dynamically linked software applications that rely on middleware implementation facilities have become critical for enabling source code reuse, dynamic binding, and system heterogeneity. Data-intensive software systems rely heavily on middleware implementation facilities for the aforementioned properties. These systems are becoming an important research area as data volumes approach the near-petabyte scale and little or no reuse of architecture, design, or code exists at this point in time. Our study centers on the architectural recovery of two such data-intensive software systems, OODT and the Globus Toolkit, using the Focus architecture recovery approach. We present our recovered architectures, provide insight with regard to architecture recovery of middleware systems, and conclude by evaluating the architecture recovery method itself. Further, our work can be used to drive architecture recovery in the data-intensive system domain.

Added May 13th, 2004


USC-CSE-2004-505

Nikunj R. Mehta, Ramakrishna Soma, Nenad Medvidovic, "Style-Based Software Architectural Compositions as Domain-Specific Models," Proceedings of the Workshop on Directions in Software Engineering Environments (WoDiSEE 2004), Edinburgh, UK, May 25, 2004 (pdf)

Architectural styles represent composition patterns and constraints at the software architectural level and are targeted at families of systems with shared characteristics. While both style-specific and style-neutral modeling environments for software architectures exist, creation of such environments is expensive and frequently involves reinventing the wheel. This paper describes the rapid design of a style-neutral architectural modeling environment, ViSAC. ViSAC is a domain-specific modeling environment obtained by configuring Vanderbilt University’s Generic Modeling Environment (GME) for Alfa, a framework for constructing style-based software architectures from architectural primitives. Users can define their own styles in ViSAC and, in turn, use them to design software architectures. Moreover, ViSAC supports the hierarchical design of heterogeneous software architectures, i.e., using multiple styles. The rich user interface of GME and support for domain-specific semantics enable interactive design of well-formed styles and architectures.

Added March 9th, 2004


USC-CSE-2004-504

Nikunj R. Mehta, Nenad Medvidovic, "Checking Style Conformance of Software Architectural Compositions"

Added February 24th, 2004


USC-CSE-2004-503

Nikunj R. Mehta, Nenad Medvidovic, "Composition of Style-Based Software Architectures from Architectural Primitives" (pdf)

The codification of software architectural decisions made to address recurring software development challenges results in architectural styles. The Alfa framework provides a small set of architectural primitives for systematically specifying styles and style-based architectures for network-based systems. In this paper, we formalize Alfa's primitives in a compositional theory of styles and software architectures. Formalization of this theory has helped us discover one missing primitive in Alfa. Moreover, this theory establishes a refinement relation between styles and architectures along five dimensions: structure, behavior, interaction, data, and topology. Finally, this approach supports heterogeneous architectural composition, i.e., using multiple styles in a single architecture. We illustrate our approach using the software architecture of a network-based system that employs three different styles: pipeline, event-based integration, and client/server.

Added February 24th, 2004


USC-CSE-2004-502

Ye Yang, Barry Boehm, "Guidelines for Producing COTS Assessment Background, Process, and Report Documents" (pdf)

The Guidelines for Producing COTS Assessment Background, Process, and Report Documents (CAB, CAP, and CAR) apply to projects that need to assess the relative merits of commercial-off-the-shelf (COTS), nondevelopmental-item (NDI), and/or other pre-existing software products/components for use in a software system.

Basically, COTS/NDI assessment activity takes place in the following two primary situations: As part of the software process for a COTS-based development project that follows general guidelines such as Dynamic Systems Development Method (DSDM), Feature Driven Development (FDD), Model-Based Architecting and Software Engineering (MBASE), Rational Unified Process (RUP), or Team Software Process (TSP); As a standalone COTS/NDI assessment activity to serve as the basis for future project decisions.

The guidelines cover the above two primary situations and were developed by integrating COTS-based development lessons learned and proven engineering practices to help teams prepare, plan, manage, track, and improve their COTS assessment activity.

Added February 24th, 2004


USC-CSE-2004-501

Marija Mikic-Rakic, Sam Malek, Nels Beckman, Nenad Medvidovic, "A Tailorable Environment for Assessing the Quality of Deployment Architectures in Highly Distributed Settings," Proceedings of the 2nd International Working Conference on Component Deployment (CD 2004), Edinburgh, UK, May 20-21, 2004 (pdf)

A distributed software system’s deployment architecture can have a significant impact on the system’s properties. These properties will depend on various system parameters, such as network bandwidth, frequencies of software component interactions, and so on. Existing tools for representing system deployment lack support for specifying, visualizing, and analyzing different factors that influence the quality of a deployment, e.g., the deployment’s impact on the system’s availability. In this paper, we present an environment that supports flexible and tailorable specification, manipulation, visualization, and (re)estimation of deployment architectures for large-scale, highly distributed systems. The environment has been successfully used to explore large numbers of postulated deployment architectures. It has also been integrated with a middleware platform to support the exploration of deployment architectures of actual distributed systems.

Added February 19th, 2004


USC-CSE-2004-500

Nikunj R. Mehta, Nenad Medvidovic, "Toward Composition Of Style-Conformant Software Architectures" (pdf)

The codification of software architectural decisions made to address recurring software development challenges results in architectural styles. The primary benefit of architectural styles is that properties demonstrated at the level of a style are carried over into the software system architectures constructed using that style. However, in the absence of comprehensive techniques for checking conformance of a software architecture to its style(s), the expected stylistic properties are not always present in the architecture. This paper argues for a need to look beyond the existing formalizations of styles and architectures to construct style-conformant software architectures. The paper proposes a compositional formalization of styles and style-based architectures aimed at ensuring an architecture's conformance to its style(s).

Added January 10th, 2004


Copyright 2008 The University of Southern California

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.