COCOTS

Last updated 8/01/2001


Model Description

Introduction
Intended Users
Definitions
Scope and Lifecycle
Problem Context
Cost Sources
The Four Submodels
Assessment (post-architecture), (early design)
Tailoring (post-architecture), (early design)
Glue Code (post-architecture), (early design)
Volatility (post-architecture), (early design)
Overall Cost Estimation Guidelines
Outstanding Modeling Issues
Prospective Modeling Follow-ons


Model Description

Introduction:

Following in the naming tradition established by its parent model COCOMO, the name COCOTS is an abbreviated contraction of the longer phrase Constructive COTS Model. (For true consistency with the naming convention now being applied to the entire suite of COCOMO derived models currently under development at the Center, renaming the model COCOTSMO was briefly considered--and thankfully rejected as being just plain too goofy!) The word "constructive" in the name refers to the fact that the model helps an estimator better understand the complexities of a given software job to be done, and by its openness permits the estimator to know exactly why the model gives the estimate it does. "COTS" of course is shorthand for commercial-off-the-shelf, signifying the kind of pre-built software components which is the focus of the model.

COCOTS is actually an amalgam of four related sub-models, each addressing individually what we have identified as the four primary sources of COTS software integration costs. These are costs due to the effort needed to perform (1) candidate COTS component assessment, (2) COTS component tailoring, (3) the development and testing of any integration or "glue" code needed to plug a COTS component into a larger system, and (4) increased system level programming due to volatility in incorporated COTS components.

Assessment is the process by which COTS components are selected for use in the larger system being developed. Tailoring refers to those activities that would have to be performed to prepare a particular COTS program for use, regardless of the system into which it is being incorporated, or even if operating as a stand-alone item. These are things such as initializing parameter values, specifying I/O screens or report formats, setting up security protocols, etc. Glue code development and testing refers to the new code external to the COTS component itself that must be written in order to plug the component into the larger system. This code by nature is unique to the particular context in which the COTS component is being used, and must not be confused with tailoring activity as defined above. Volatility in this context refers to the frequency with which new versions or updates of the COTS software being used in a larger system are released by the vendors over the course of the system's development and subsequent deployment.

A fifth cost source was actually identified early in our research. This was the cost related to the increased IV&V effort usually required when using COTS components. It was concluded, however, that this extra effort could be captured within the glue code model, as opposed to requiring a separate sub-model unto itself. (Recent feedback from industry experts has led us to believe we may need to refine our Tailoring submodel further to more completely capture all COTS related system integration costs. Modeling this cost within the Glue Code submodel alone is likely not sufficient.)

Finally, what may be the most important aspect of COCOTS is that the model is completely open. Regardless of whatever estimates it provides, the descriptions of the elements that have gone into the model help highlight the most important factors that should be of concern to managers and developers of software systems using COTS software components. When fully formulated and validated, COCOTS will be in the public domain, available for use by anyone who cares to use it to help meet their software cost estimation needs.
Intended Users:

COCOTS in its full and validated incarnation is expected to be useful to a variety of individuals: early in a system procurement process by strategic planners doing "what if" analyses; later in the process by individuals refining cost estimates for contract bidding purposes; later still by project managers making resource allocation and scheduling plans. To facilitate these intended uses, we are formulating the model at two levels of fidelity, paralleling the Early Design and Post-architecture versions of COCOMO II. Our Early Design COCOTS model will have fewer parameters, and will be useful early in a COTS system procurement when less information is likely to be available. Our Post-architecture COCOTS model will have more parameters, and is intended for use later in a procurement process when more information is likely to be available.
Definitions:

Application Volatility - the difficulty in benchmarking stable system configurations that results from the use of COTS products, which may experience multiple or frequent product releases or upgrades during system development.

Attribute - a characteristic of a COTS package or its associated products and services that is evaluated and used in comparing alternative products and as input to a buy/no-buy decision.

COTS Assessment - the activity of determining the appropriateness or feasibility of using specific COTS products to fulfill required system functions.

COTS software - "commercial-off-the-shelf" software commercially available as stand-alone products and which offer specific functionality needed by a larger system into which they might be incorporated. Generally there is no access to source code for COTS products, which are treated as black boxes with application program interfaces. (In some cases, however, some access to COTS source code is available, in which case these products have been described as "gray" or "white" box COTS. In this case, we treat these COTS items as NDI or Reuse components.)

COTS Tailoring - the activity associated with setting or defining shell parameters or configuration options available for a COTS product, but which do not require modification of COTS source code, including defining I/O report formats, screens, etc.

NDI software - "non-developmental item" software available from some source other than the organization developing the system into which the NDI component is to be integrated. The source can be commercial, private, or public sector, just so long as the procuring organization expended no resources on the NDI component's initial development. Source code is usually available for an NDI component, which may or may not be able to function as a stand-alone item.

Integration or "glue" code - software developed in-house and composed of 1) code needed to facilitate data or information exchange between the COTS/NDI component and the system or other COTS/NDI component into which it is being integrated, 2) coded needed to connect or "hook" the COTS/NDI component into the system or other COTS/NDI component but does not necessarily enable data exchange, and 3) code needed to provide required functionality missing in the COTS/NDI component AND which depends upon or must interact with the COTS/NDI component.
Scope and Lifecycle Presently Addressed:

At the moment, COCOTS addresses only the cost of software COTS components. The cost associated with hardware COTS elements used in a system will be addressed in future refinements of the model. A further consequence of this current limitation is that firmware components are also presently excluded. That is, the cost of using any COTS software that must run on a particular piece of COTS hardware is not captured independently in the current version of the model.

Also, the current incarnation of the model estimates needed effort only. Schedule estimation is yet to be incorporated in the model, though this is definitely high on the list of future model enhancements.

As for lifecycle, the current version of COCOTS addresses only development costs associated with using COTS components. Long-term operation and maintenance costs associated with using COTS components again are intended to be captured in future model versions.

In terms of a waterfall development process, the specific project phases currently covered by the model are these:

  • Requirements Definition*
  • Preliminary Code Design
  • Detailed Code Design
  • Code and Unit Test
  • Integration and Test

* This phase traditionally has NOT been covered by COCOMO estimates. The fact that assessment and requirements definition must be done in tandem when using COTS components necessitates the inclusion of the requirements definition phase in COCOTS estimates.

In terms of a spiral development process, we have overlaid a set of software process anchor points onto language introduced by Rational Software Corp., thereby identifying the major iterative developmental milestones or "objectory management checkpoints" currently covered by the model as:

  • LCO (Lifecycle Objectives)
  • LCA (Lifecycle Architecture)
  • IOC (Initial Operational Capability)

 

In more general terms, a COTS software usage lifecycle typically follows this pattern:
 

1) Qualify COTS Products
2) Establish System Requirements
3) Administer COTS Software Acquisition
4) Prototype the System including COTS Software
5) Fully Integrate COTS Software and Interface/Glue Code
6) Test Completed Prototype including COTS Software
7) Delivery
8) Operation & Maintenance (including refresh of updated COTS product releases)

Aside from keeping in mind that items 3 and 8 are not yet covered by the COCOTS model, the most important point to take from this list is that to get the full benefit from using COTS components, items 1 and 2 need to be worked very closely together and iterated against each other. In other words, there must be a certain flexibility on the part of the system users or acquirers to bend their requirements toward the available capabilities of COTS products already on the market. Another implication of this lifecycle is that while using COTS components may indeed reduce the effort allocated to code development, the tradeoff is usually a need for increased effort dedicated to testing. This is because testing is no longer limited to the activity of component integration, but must now also be performed at some level during initial component qualification or assessment, as well as possibly to address a need for mission critical or safety critical assurance certification.
Problem Context:

Perhaps the most difficult aspect of modeling the integration, development, and long-term maintenance costs associated with using COTS components is reaching a consensus on just what exactly a COTS component is. Illustrating this controversy is a list of "off-the-shelf" terms which were offered at a recent GSAW conference:

These terms showcase the idea that all OTS or "off-the-shelf" components are definitely not created equal. Some are true commercial components. Some come from non-commercial sources. Some are mature products with a solid history of performance. Others have barely been on the market and have no visible track record. And though pitched as "off-the-shelf," some components later prove to require a great deal of effort to get them integrated and functioning as desired.

For the purposes of modeling, we have converged on the following definition of a COTS product:

* Even this point is controversial. Studies conducted by the SEI suggest that up to 30% of components acquired as "COTS" components require changes to internal code. For our purposes, however, we are choosing to treat components for which there is any source code supplied by the vendor as "reuse" or "NDI (non-developmental item)" components. Integration of these kinds of components can be modeled using the REUSE model of COCOMO II.

As a consequence of this definition, the following COTS phenomena obtain:

This in turn leads to some basic risks inherent in using COTS components, risks which can be reduced or eliminated using appropriate mitigation strategies. More specifically, to avoid the pitfalls associated with each of the COTS phenomena noted above, we recommend adopting the strategies outlined in the following table:
1. You have no control over a COTS product's functionality or performance.
Pitfalls to Avoid
  • Using the waterfall model on a COTS integration project.
  • Using evolutionary development with the assumption that every undesired feature can be changed to fit your needs.
  • Believing that advertised COTS capabilities are real.
Recommended Practices to Adopt
  • Use risk management and risk-driven spiral-type process models.
  • Perform the equivalent of a "receiving inspection" upon initial COTS receipt.
  • Keep requirements negotiable until the system's architecture and COTS choices stabilize.
  • Involve all key stakeholders in critical COTS decisions.
2. Most COTS products are not designed to interoperate with each other.
Pitfalls to Avoid
  • Premature commitment to incompatible combinations of COTS products.
  • Trying to integrate too many incompatible COTS products.
  • Deferring COTS integration till the end of the development cycle.
  • Committing to a tightly-coupled subset of COTS products with closed, proprietary interfaces.
Recommended Practices to Adopt
  • Use the Life Cycle Architecture milestone as a process anchor point.
  • Use the Architecture Review Board (ARB) best commercial practice at the Life Cycle Architecture milestone.
  • Go for open architectures and COTS substitutability.
3. You have no control over a COTS product's evolution.
Pitfalls to Avoid
  • "Snapshot" requirements specs and corresponding point-solution architectures.
  • Understaffing for software maintenance.
  • Tightly coupled, independently evolving COTS products.
  • Assuming that uncontrollable COTS evolution is just a maintenance problem.
Recommended Practices to Adopt
  • Stick with dominant commercial standards.
  • Use likely future system and product line needs as well as current needs as COTS selection criteria.
  • Use flexible architectures facilitating adaptation to change.
  • Carefully evaluate COTS vendors' track records with respect to predictability of product evolution.
  • Establish a pro-active system release strategy, synchronizing COTS upgrades with system releases.
4. COTS vendor behavior varies widely.
Pitfalls to Avoid
  • Uncritically accepting COTS vendors' statements about product capabilities and support.
  • Lack of fallbacks or contingency plans.
  • Assuming that an initial vendor support honeymoon will last forever.
Recommended Practices to Adopt
  • Perform extensive evaluation and reference-checking of a COTS vendor's advertised capabilities and support track record.
  • Establish strategic partnerships or other incentives for COTS vendors to provide support.
  • Negotiate and document critical vendor support agreements.

Another phenomenon we have observed relevant to COTS risk mitigation is that the quality of vendor support is somewhat a function of the maturity or size of the vendor. As indicated in the chart below, the best support is likely to come from a middle-sized or moderately mature vendor, one sophisticated enough to be able to offer the level of support you need, yet still "hungry" enough to also really want your business.
 

In more general terms, the following two tables provide a comparison of the advantages and disadvantages of developing custom software versus the advantages and disadvantages of using COTS components in the development of new systems:

Custom Software: Advantages vs. Disadvantages
Advantages 
  • Complete freedom
  • Smaller, often simpler
  • Often better performance
  • Control of development/enhancement
  • Control of reliability tradeoffs

Disadvantages 
  • Development expensive/unpredictable
  • Available data unpredictable
  • Maintenance expensive
  • Portability often expensive
  • Drains expert resources


 
COTS Software: Advantages vs. Disadvantages
Advantages 
  • Predictable license costs
  • Broadly used, mature technology
  • Available now
  • Dedicated support organization
  • Hardware/software independence
  • Rich in functionality
  • Frequent upgrades

Disadvantages 
  • Up front license fees
  • Recurring maintenance fees
  • Dependency on vendor
  • Efficiency sacrifices
  • Functionality constraints
  • Integration not always trivial
  • No control over upgrades/maintenance
  • Unnecessary features consume extra resources
  • Reliability often unknown/inadequate
  • Scale difficult to change
  • Incompatibilities among vendors
  • Licensing and intellectual property issues


Intelligently and systematically weighing the tradeoffs implied by these two tables is the key to making the best choice in any given circumstance between developing a new software system from scratch or doing so using COTS software components.

At the executive level, other issues must be considered regarding the use of COTS software in developing new systems:

Other executive level issues (with thanks to Larry Bernstein) to be considered regarding the use of COTS software include these:

Finally, COTS products generally fall into one of three usage categories: as tools, as infrastructure, or as elements of the application. The chart below illustrates our sense that for COTS products used as tools or infrastructure, the cost of using such components post-assessment is already accounted for adequately within COCOMO II via the COCOMO Effort Adjustment Factors LTEX (language and tool experience) and TOOL (the use of tools) for COTS software used as tools, and PVOL (platform volatility) and PEXP (platform experience) for COTS software used as infrastructure. It is the newer problem of accounting for the cost of COTS products used as elements of the application that is addressed by the Tailoring, Glue Code, and Volatility submodels of COCOTS.
 

Cost Sources:

Our research to date has identified five primary sources of cost/effort associated with the use of COTS products that are different from those costs traditionally associated with developing software systems from scratch:

In terms of level of effort involved, the big three tend to be Tailoring, Glue Code development, and managing the System Volatility. Added System V & V and COTS product Assessment are significant but tend to be of lesser impact. As will be discussed in the next section, the first four items listed above are addressed explicitly in separate submodels. The last item, System V & V, is captured across and within the Tailoring, Glue Code, and Volatility submodels.

The two charts below illustrate how the modeling of these costs in COCOTS is related to costs modeled by COCOMO II. The chart on the left represents the total effort to build a software system entirely of new code as estimated by COCOMO II. The chart on the right represents the total effort to build a software system out of a mix of new code and COTS components as estimated by a combination of COCOMO II and COCOTS. The red blocks in the two charts indicate COCOMO II estimates. The additional yellow blocks in the right-most chart indicate COCOTS estimates. The relative size of the yellow and red blocks in this second chart is a function of the number of COTS components relative to the amount of new code in the system, and of the nature of the COTS component integrations themselves. The more complex the tailoring and/or glue code writing efforts, the larger these blocks will be relative to the assessment block. Also, note that addressing the system wide volatility due to volatility in the COTS components is an effort that will obtain throughout the entire system development cycle, as indicated by the large yellow block running along the bottom of the chart.
 
 

Finally, keep in mind that when doing a trade-off analysis between building a new system totally from scratch and building it from a mix of new code and COTS components, if the total combined area of the red and yellow blocks in the right-hand chart (representing the total effort required for building a system from both new and COTS components) is not less than the total area of the single red block in the left-hand chart (representing the total effort required for building a new system from scratch), then, all other things being equal, using COTS components has not bought you anything, at least during the development phase.
The Four Submodels:

The heart of COCOTS is its four separate submodels, each one designed to capture a different component of the total cost of using COTS software components in building new software systems--keeping in mind that the fifth cost component of additional System V & V noted above is to be captured within the Tailoring, Glue Code, and Volatility submodels. Also, recall that we are developing the models at two levels of fidelity, a Post-architecture and an Early Design formulation. The Post-architecture version is the more detailed of the two, and is intended for use later in a COTS-based system acquisition, after the basic architecture of the system has been determined. Just as its name implies, the Early Design incarnation is intended for use earlier in the system acquisition, at the stage when basic "what if" design questions are still being addressed.

It should be noted, however, that while the Assessment submodel lends itself easily to use very early in the project planning stages, the Tailoring, Glue Code, and Volatility submodels by the very nature of the costs they are trying to address are more problematic if used before the specific COTS products that will actually be integrated into a new system have been identified. The reason is that the costs covered by these models are extremely dependent on the unique characteristics of any given set of COTS products and their vendors.

Also be aware that these submodels are to be applied at the system level. That is, the various input parameters such as effort adjustment factor ratings, complexity ratings, glue code sizing, etc., should be determined as aggregate ratings and sizings for all COTS components in the system considered as a whole. The flip side to this is that effort estimates provided by the submodels are also aggregate estimates, giving, for example, the total effort required to assess COTS components or to create COTS glue code over the entire system development. This is a change from the way the original model detailed in the June 1997 USAF ESC/Rome Laboratory study was applied. That model was applied at the individual component level. Raising the application of COCOTS to the system level achieves two things: 1) it eases the data gathering effort, as most organizations do not track COTS component integration effort at the component level, and 2) it provides output estimates at the level which appears to have the most utility for software cost estimators.

We also want to make it clear that for the time being at least, the formulations, parameter definitions, and rating criteria presented below are not etched in stone. We invite comment from anyone who might be able to suggest ways to improve any aspect of the model as currently defined.

Note: the submodels below use the same definition of standard person-month as is used in COCOMO:

one person-month = 152 person-hours = 19 person-days = 1/12 person-years





Post-architecture Model Definition
 
Assessment:

Assessment is the activity whereby COTS software products are vetted and selected as viable components for integration into a larger system. In general terms, viable COTS candidates are determined based on the following:

The key thing to remember is that COTS assessment and requirements definition must be done in concert. Final system requirements must be shaped based upon the capabilities of COTS products on the market if one is to reap the full benefit of taking the COTS approach to system development.

In the specific terms of our model, assessment is done in two passes. When selecting candidate COTS components, there is usually a quick and dirty first effort intended to very rapidly cull products patently unsuitable in the current context. The remaining products are then examined more carefully to determine the final set of COTS components to be included in the system being developed. (We use the terms initial filtering and final selection, both under the banner of "assessment." Others use a slightly different terminology, referring to what we call initial filtering as "assessment," and to what we call final selection as "evaluation." No matter the semantics, however, the concept of a need for a two pass approach is the same.)

The chart below shows the basic formulation of the assessment submodel, illustrating the two pass approach. The intent is that the specific parameter values required for these formulas will eventually be specified by project domain. (How quickly that happens is a function of how long it takes to gather enough calibration data points to allow the formal specification of unique domains.)

The initial filtering formula presumes a parameterized value for a rough average filtering effort can be determined for a given domain. The total initial filtering effort for a project then becomes a simple function of the number of candidate COTS products that are being filtered for that project and the average filtering effort per candidate specified for that project's domain.

The final selection formula is more complex. It posits that final selection of COTS products will be based on an assessment of each product in light of certain product attributes. Depending upon the project domain, more effort will be expended assessing a COTS product in terms of some attributes as opposed to others. For example, within one domain, price and ease of use may be critical, whereas in a different domain, security and portability may be paramount. Thus it follows that in the first domain, more effort will be expended assessing candidate COTS products according to their purchase price and their user friendliness, while less effort (if any) will be expended assessing those same products according to the security features and portability they offer. In the second domain, the relative effort expended assessing each candidate product in terms of these four attributes should be just the reverse. Based on this idea, the final selection filtering formula then presumes parameterized values for a rough average assessment effort per attribute can be determined for a given domain. This is carried even further within the formula by refining the parameterization of each attribute according to a standardized rating of its relative importance on a scale from extra low to extra high, again by domain. The total final selection effort for a project then becomes a function of the number of candidate COTS products that are being finally assessed for that project and the average assessment effort per rated attribute per candidate as specified for that project's domain and summed over all attributes.

Whether or not you actually know the number of candidate COTS products that are to go through the more detailed final selection assessment needed for this formula is a function of when you are doing your estimate. If your estimation of the final assessment effort is being performed literally after the initial filtering has been performed, then this is a non-issue since the number of products to go through final assessment will be a known quantity. If, however, you are doing your final assessment effort estimation while the project is still in the planning stages, you will have to use some rule of thumb to estimate a typical percentage of COTS products that make it through initial filtering to final selection. As part of our modeling effort we will attempt to provide a useful initial rule of thumb percentage for this quantity, but experience with these kinds of models has shown that the sooner you start using a rule of thumb determined from your own organization's past experience, the more accurate your overall assessment effort estimate will be.

(If your assessment efforts are in fact being done very early in the planning process, a similar rule of thumb might be required for estimating a typical number of candidate COTS products that will go through the initial filtering effort as well. This would seem to be such a project dependent or at least organization dependent quantity, however, that it seems unlikely that we will be able to provide a useful initial generic quantity for you along these lines. We recommend that if you do indeed find yourself having to estimate this quantity, your best approach is to interview engineers in your organization who have experience assessing COTS products for the kinds of functionality that are being proposed in your current project and who are up to date on the COTS products currently on the market that claim to provide that functionality. These individuals are the ones most likely able to provide at least a ballpark estimate of how many COTS products might potentially go through initial filtering on the project for which you are currently doing estimates.)

The total effort expended doing COTS product assessment for the current project is the sum of the initial filtering effort plus the final selection effort.
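To make the two formulas concrete, here is a minimal sketch of the assessment calculation in Python. The function names and the use of plain dictionaries for the domain-calibrated parameters are our own illustrative choices, not part of the model definition; the calibrated values themselves must come from the model's domain data.

    # Sketch of the COCOTS assessment submodel (post-architecture).
    # Domain-calibrated parameters are assumed to be supplied by the user.

    def initial_filtering_effort(num_candidates, avg_filter_effort_per_candidate):
        # Initial filtering effort (person-months) =
        #   (number of candidate COTS products) x (domain average filtering effort per candidate)
        return num_candidates * avg_filter_effort_per_candidate

    def final_selection_effort(num_finalists, attribute_ratings, effort_per_rated_attribute):
        # Final selection effort (person-months) = sum over the 17 assessment attributes of
        #   (number of finalist products) x (domain average effort for that attribute at its rating).
        # attribute_ratings: dict, attribute -> rating ("Extra Low" .. "Extra High")
        # effort_per_rated_attribute: dict, (attribute, rating) -> person-months per candidate
        return sum(num_finalists * effort_per_rated_attribute[(attr, rating)]
                   for attr, rating in attribute_ratings.items())

    def total_assessment_effort(filtering_effort, selection_effort):
        # Total assessment effort = initial filtering effort + final selection effort.
        return filtering_effort + selection_effort

Applied to the worked example later in this section (185 initial candidates at .03 person-months each, 37 finalists, and the attribute ratings and calibrated efforts given there), these three functions reproduce, to the rounding shown, the 5.6, 992, and 998 person-month figures.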
 

The table below lists the set of seventeen attributes we have defined as the most likely to be of concern when doing a final selection assessment of COTS software components. The actual parameters are the items at the first level (i.e., Correctness, Availability/Robustness, Security, etc.). The indented items at the second level show the underlying factors that are intended to be captured by a given parameter at the first level.
 

COTS Software Assessment Attributes
  • Correctness
      accuracy
      correctness
  • Availability/Robustness
      availability
      fail safe
      fail soft
      fault tolerance
      input error tolerance
      redundancy
      reliability
      robustness
      safety
  • Security
      access related
      sabotage related
  • Product Performance
      execution performance
      information/data capacity
      precision
      memory performance
      response time
      throughput
  • Understandability
      documentation quality
      simplicity
      testability
  • Ease of Use
      usability/human factors
  • Version Compatibility
      downward compatibility
      upward compatibility
  • Inter-component Compatibility
      compatibility
      interoperability
  • Flexibility
      extendability
      flexibility
  • Installation/Upgrade Ease
      installation ease
      upgrade/refresh ease
  • Portability
      portability
  • Functionality
      functionality
  • Price
      initial purchase/lease
      recurring costs
  • Maturity
      product maturity
      vendor maturity
  • Vendor Support
      response time for critical problems
      support
      warranty
  • User Training
      user training
  • Vendor Concessions
      willingness to escrow source code
      willingness to make modifications

 

This table below illustrates the rating scale applied to each assessment attribute.
 

Assessment Attribute Rating Scale
  • Extra Low: irrelevant
  • Very Low: unnecessary
  • Low: somewhat useful
  • Nominal: useful
  • High: desirable
  • Very High: important
  • Extra High: mandatory

 

Assessment Effort Estimation Example:

You have a new project for which you are going to do a COTS assessment effort estimation. First you must determine what standardized domain your project falls under, based upon given descriptive criteria. Let's say you decide the project falls under Domain A. Next you have been told by your designers that some 185 COTS products are initial potential candidates for inclusion in the system to handle a variety of functions, but that these will quickly be pared down to a more manageable set for serious consideration. For Domain A, the parameterized calibrated value for average filtering effort per candidate is say .03 person-months/candidate. Thus your total initial filtering effort in this case is (185 candidates) X (.03 p-m/candidate) = 5.6 person-months.

Next you need to do the estimate for final selection effort. You are doing your estimate early in the project planning process, so you need to use a rule of thumb to estimate how many COTS products will go through final selection assessment. You decide to use the generic rule of thumb which comes with the model. Let's say that value is 20%. Thus the number of COTS products expected to go through final assessment is (185 candidates) X 20% = 37 candidates. After that you will rate each of the seventeen assessment attributes shown in the table above on the seven point scale from irrelevant to mandatory according to the requirements of the current project. Let's say you arrive at the following ratings:
 

Attribute: Rating
  • Correctness: Very High
  • Availability/Robustness: High
  • Security: Very Low
  • Product Performance: Nominal
  • Understandability: High
  • Ease of Use: High
  • Version Compatibility: Extra Low
  • Inter-component Compatibility: Extra High
  • Flexibility: Very Low
  • Installation/Upgrade Ease: Low
  • Portability: Nominal
  • Functionality: Extra High
  • Price: High
  • Maturity: High
  • Vendor Support: Extra High
  • User Training: Nominal
  • Vendor Concessions: Extra Low

For Domain A, the parameterized calibrated values for average assessment effort for attribute i at rating j per candidate let's say are as given below:
 

Rated Attribute: Average Assessment Effort (person-months)
  • Correctness@Very High: 3
  • Availability/Robustness@High: 1
  • Security@Very Low: .007
  • Product Performance@Nominal: .25
  • Understandability@High: 1
  • Ease of Use@High: 1
  • Version Compatibility@Extra Low: 0
  • Inter-component Compatibility@Extra High: 6
  • Flexibility@Very Low: .007
  • Installation/Upgrade Ease@Low: .05
  • Portability@Nominal: .25
  • Functionality@Extra High: 6
  • Price@High: 1
  • Maturity@High: 1
  • Vendor Support@Extra High: 6
  • User Training@Nominal: .25
  • Vendor Concessions@Extra Low: 0

With these parameter values your total final selection effort in this case is (37 candidates) X (3 + 1 + .007 + .25 + 1 + 1 + 0 + 6 + .007 + .05 + .25 + 6 + 1 + 1 + 6 + .25 + 0 p-m/candidate) = (37 candidates) X (26.8 p-m/candidate) = 992 person-months.

The total COTS product assessment effort for this project is then (5.6 person-months) + (992 person-months) = 998 person-months.
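As a quick check, the arithmetic of this example can be reproduced in a few lines of Python (values taken directly from the tables above; the small differences are rounding):

    filtering = 185 * 0.03          # 5.55, quoted above as 5.6 person-months
    per_finalist = (3 + 1 + .007 + .25 + 1 + 1 + 0 + 6 + .007 + .05
                    + .25 + 6 + 1 + 1 + 6 + .25 + 0)   # 26.814 p-m per finalist candidate
    selection = 37 * per_finalist   # 992.1, quoted above as 992 person-months
    total = filtering + selection   # 997.7, quoted above as 998 person-months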

The final table below in this section summarizes the key Assessment submodel elements in terms of inputs, outputs, and required calibrated parameters.
 

Summary of Assessment Submodel Elements (Post-architecture)

Initial Filtering formula
  User Inputs:
  • Total number of COTS products being initially filtered
  Submodel Outputs:
  • Total effort spent filtering COTS components for the project
  Calibrated Submodel Parameters (one parameter for each standardized project domain):
  • Average filtering effort per candidate

Final Selection formula
  User Inputs:
  • Total number of COTS products being assessed for final selection
  • Ratings for each of the 17 assessment attributes
  Submodel Outputs:
  • Total effort spent doing final selection of COTS components for the project
  Calibrated Submodel Parameters (one set of 17 parameters, each with 7 subparameters, for each standardized project domain):
  • Average final assessment effort per rated attribute per candidate

Aggregate Assessment Submodel Output
  • Total effort spent doing COTS product assessment = initial filtering effort + final selection effort
Tailoring:

Tailoring is the activity whereby COTS software products are configured for use in a specific context. These are the normal things that would have to be done to a product no matter what system into which it is being integrated. These are things like parameter initialization, input/GUI screen and output report layout, security protocols set-up, etc. Specifically excluded from this definition is anything that could be considered a unique or atypical modification or expansion of the functionality of the COTS product as delivered by the vendor. (These activities would be covered under the Glue Code model.)

The basic approach taken to capturing this effort in our model is illustrated in the chart below. The formulation of the tailoring submodel presumes that the difficulty or complexity of the tailoring work that is going to be required to get a given COTS product integrated into a system can be anticipated and even characterized by standardized rating criteria. The submodel then presumes that a parameterized value for a rough average tailoring effort per complexity rating can be determined for a given domain. The total tailoring effort for a project then becomes a function of the number of COTS products whose needed tailoring is estimated to be at a given rated level of complexity and the average tailoring effort at the given complexity rating per candidate, again as specified by the project's domain and summed over all tailoring complexity rating levels.
 

We have defined five overall tailoring effort complexity levels or ratings, ranging from very low to very high. To arrive at a particular complexity rating for a given tailoring job, however, requires examining the individual tailoring activities which affect the aggregate complexity of the job. The listing below, "Dimensions of Tailoring Activity Difficulty," illustrates this concept. The first four items identify the major activities that fall under our definition of COTS Tailoring, while the last item refers to automated tools which may mitigate the difficulty of doing a given COTS tailoring job. For each item you will find specific criteria for rating the complexity of that activity, or the utility of any available tailoring tools in the case of the last item, on a scale from Very Low to Very High. You record the point value associated with the rating given to each individual item, and these individual point values are then summed to provide a total point score. This total score is then used to characterize the overall complexity of the COTS tailoring effort required for COTS components in the current project by using the smaller table below titled "Final Tailoring Activity Complexity Rating Scale": you determine where the point total falls on that scale and from that identify the COTS tailoring effort complexity rating associated with that point total for all COTS components being used in the current project.

To apply our tailoring formula, recall that you must determine how many COTS components in the current project are being tailored at each level of complexity, e.g., five at Very Low, 27 at Nominal, 0 at Very High, etc. (A short computational sketch of the scoring scheme and the effort formula follows the rating scale tables below.)
 

Dimensions of Tailoring Activity Difficulty
(individual activity and tool aid complexity ratings; point values: Very Low = 1, Low = 2, Nominal = 3, High = 4, Very High = 5)

Parameter Specification
  • Very Low: 0 to 50 parms to be initialized.
  • Low: 51 to 100 parms to be initialized.
  • Nominal: 101 to 500 parms to be initialized.
  • High: 501 to 1000 parms to be initialized.
  • Very High: 1001 or more parms to be initialized.

Script Writing
  • Very Low: menu driven; 1 to 5 line scripts; 1 to 5 scripts needed.
  • Low: menu driven; 6 to 10 line scripts; 6 to 15 scripts needed.
  • Nominal: handwritten; 11 to 25 line scripts; 16 to 30 scripts needed.
  • High: handwritten; 26 to 50 line scripts; 31 to 50 scripts needed.
  • Very High: handwritten; 51 or more line scripts; 51 or more scripts needed.

I/O Report & GUI Screen Specification & Layout
  • Very Low: automated or standard templates used; 1 to 5 reports/screens needed.
  • Low: automated or standard templates used; 6 to 15 reports/screens needed.
  • Nominal: automated or standard templates used; 16 to 25 reports/screens needed.
  • High: handwritten or custom designed; 26 to 50 reports/screens needed.
  • Very High: handwritten or custom designed; 51 or more reports/screens needed.

Security/Access Protocol Initialization & Set-up
  • Very Low: 1 security level; 1 to 20 user profiles; 1 input screen/user.
  • Low: 2 security levels; 21 to 50 user profiles; 2 input screens/user.
  • Nominal: 3 security levels; 51 to 75 user profiles; 3 input screens/user.
  • High: 4 security levels; 76 to 100 user profiles; 4 input screens/user.
  • Very High: 5 security levels; 101 or more user profiles; 5 or more input screens/user.

Availability of COTS Tailoring Tools
  • Very Low: excellent tools available.
  • Low: good tools available.
  • Nominal: adequate tools available.
  • High: marginal tools available.
  • Very High: no tools available.

Total Point Score = _________ (sum of the corresponding points for the five items above)

Final Tailoring Activity Complexity Rating Scale
  • 5 to 7 points: Very Low
  • 8 to 12 points: Low
  • 13 to 17 points: Nominal
  • 18 to 22 points: High
  • 23 to 25 points: Very High
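A minimal sketch of this scoring scheme and of the tailoring effort formula, in Python. The point ranges are taken from the two tables above; the dictionary-based interface for the domain-calibrated average efforts is our own illustrative choice.

    # Map a total tailoring-difficulty point score (5..25) to the overall complexity
    # rating, per the "Final Tailoring Activity Complexity Rating Scale" above.
    def tailoring_complexity_rating(total_points):
        if  5 <= total_points <= 7:  return "Very Low"
        if  8 <= total_points <= 12: return "Low"
        if 13 <= total_points <= 17: return "Nominal"
        if 18 <= total_points <= 22: return "High"
        if 23 <= total_points <= 25: return "Very High"
        raise ValueError("total point score must lie between 5 and 25")

    # Total tailoring effort (person-months) = sum over the five complexity ratings of
    #   (number of COTS products tailored at that rating) x (domain average effort at that rating).
    def total_tailoring_effort(counts_by_rating, avg_effort_by_rating):
        ratings = ("Very Low", "Low", "Nominal", "High", "Very High")
        return sum(counts_by_rating.get(r, 0) * avg_effort_by_rating[r] for r in ratings)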

 

Tailoring Effort Estimation Example:

Continuing the example from the preceding section, you now want to estimate the total COTS tailoring effort for your project. Recall that when estimating the COTS assessment effort, you already determined that your project fell under standardized Domain A. Now you have to determine at what level of complexity each of the COTS components to be used in your system will be tailored. Let's say that of those 37 products that went through final selection assessment, 11 have been selected for integration. So now let's examine COTS component #1. Let's say that COTS #1 will require the following: the initialization of about 735 data parameters, the writing of maybe 8 ten-line scripts, about 40 custom designed I/O screens, and 3 levels of security. You also have no automated COTS tailoring tools available to you. Now you go to the "Dimensions of Tailoring Activity Difficulty" listing. 735 parameters implies a Parameter Specification difficulty rating of High for COTS #1, so you record 4 points for that item. For 8 ten-line scripts, Script Writing for COTS #1 gets a difficulty rating of Low, corresponding to 2 points. For 40 custom input/output screens, I/O Report & GUI Specification gets a High rating with 4 points. For 3 security levels, Security/Access Protocol Set-up gets a difficulty rating of Nominal, worth 3 points. And no COTS Tailoring Tools gives that item a difficulty rating of Very High, worth 5 points. The overall tailoring difficulty point score for COTS #1 is then 4 + 2 + 4 + 3 + 5 = 18. Going to the "Final Tailoring Activity Complexity Rating Scale," a total point score of 18 indicates an overall tailoring effort complexity rating for COTS #1 of High.

You do something similar for the remaining ten COTS components. Let's say that after this activity, you end up with the following ratings for each COTS product:

COTS Component: Tailoring Effort Complexity Rating
  • #1: High
  • #2: Low
  • #3: High
  • #4: Low
  • #5: Very High
  • #6: Low
  • #7: Nominal
  • #8: High
  • #9: Very High
  • #10: Low
  • #11: Nominal

Thus you have 0 COTS components with tailoring efforts rated at Very Low, 4 rated at Low, 2 rated at Nominal, 3 rated at High, and 2 rated at Very High.

For Domain A, the parameterized calibrated values for average tailoring effort at complexity level i per candidate let's say are as given below:
 

Complexity Level: Average Tailoring Effort (person-months)
  • Very Low: .5
  • Low: 5
  • Nominal: 25
  • High: 100
  • Very High: 600

The total COTS product tailoring effort for this project is then [(0 candidates) X (.5 p-m/candidate) + (4 candidates) X (5 p-m/candidate) + (2 candidates) X (25 p-m/candidate) + (3 candidates) X (100 p-m/candidate) + (2 candidates) X (600 p-m/candidate)] = 1,570 person-months.
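A quick check of this example using the tailoring formula, with the counts and the Domain A calibrated efforts as given above:

    counts   = {"Very Low": 0, "Low": 4, "Nominal": 2, "High": 3, "Very High": 2}
    efforts  = {"Very Low": .5, "Low": 5, "Nominal": 25, "High": 100, "Very High": 600}  # p-m per candidate, Domain A
    total_pm = sum(counts[r] * efforts[r] for r in counts)   # 0 + 20 + 50 + 300 + 1200 = 1,570 person-months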

The final table below in this section summarizes the key Tailoring submodel elements in terms of inputs, outputs, and required calibrated parameters.
 

Summary of Tailoring Submodel Elements (Post-architecture)

Tailoring formula
  User Inputs (5 inputs, one for each complexity rating level Very Low through Very High for the 1st item; number of inputs dependent on number of products for the 2nd item):
  • Total number of COTS products being tailored at the given level of complexity
  • Tailoring effort complexity rating per COTS product
  Submodel Outputs:
  • Total effort spent tailoring COTS components for the project
  Calibrated Submodel Parameters (one set of 5 parameters, Very Low through Very High, for each standardized project domain):
  • Average tailoring effort at the given level of complexity per candidate
Glue Code:

Glue code (sometimes called "glueware" or "binding" code) is the new code needed to get a COTS product integrated into a larger system. It can be code needed to connect a COTS component either to higher level system code, or to other COTS or NDI components also being used in the system. Reaching consensus on just what exactly constitutes glue code has not always been easy. For the purposes of our model, we finally decided on the following three part definition: glue code is software developed in-house and composed of 1) code needed to facilitate data or information exchange between the COTS component and the system or some other COTS/NDI component into which it is being integrated or to which it is being connected, 2) code needed to connect or "hook" the COTS component into the system or some other COTS/NDI component but which does not necessarily enable data exchange between the COTS component and those other elements, and 3) code needed to provide required functionality missing in the COTS component and which depends upon or must interact with the COTS component.
 

The first two parts of our definition are straightforward and have not caused any controversy. The last part of our definition, however, regarding new functionality, still causes some debate. It arose out of the fact that functionality originally expected to be provided by a COTS component itself is often found to be unavailable in that component. Sometimes this deficiency is known before a COTS component is actually selected for integration, but the component is selected anyway for other reasons. In this case, it is known ahead of time that this needed functionality is going to have to be created. Often, however (unfortunately, probably too often, which may speak to weaknesses in some organizations' COTS assessment processes), deficiencies in COTS functionality are not discovered until the COTS integration activity is well under way. The choice then is either to go back and select a different COTS product, or, as is more typical, to go ahead and create the required functionality with original code--and more often than not the person responsible for creating that original code is the individual tasked to integrate the COTS component in the first place. So from a practical point of view, creating that new functionality becomes part and parcel of integrating the COTS component itself. Thus counting that effort as part of the overall integration code writing effort seems reasonable to us. The real issue, however, is to avoid double counting of code writing effort. If you know before COTS integration has begun that some functionality will have to be created, you may choose to treat its creation either as part of the glue code or as simply more lines of new code that must be written for the overall system (and thus capturable under a cost model like COCOMO); as long as you are consistent, you should be okay.

(The last sentence in the preceding paragraph actually goes to the heart of another sometimes controversial subject, one which in truth qualifies the claim above that "as long as you are consistent, you should be okay." The controversy is whether or not the programming effort required to create "glue" code is in fact qualitatively different from that required to create any other kind of original code. After all, as some people say, "code is code is code, right?"--maybe, in some people's minds, but the majority of software professionals we have interviewed during the course of our research do not hold this view. If "code is code is code," then the COCOMO II cost drivers should work just fine when trying to estimate the effort required to create glue code. But the experience of these professionals has been that this is not the case. Their explanation is that the structure and design of glue code is highly constrained by both the design of the COTS component itself and the behavior of its vendor, factors that do not apply to "normal" new code. Thus they feel that a different set of cost drivers from those in COCOMO II often does obtain. This provides the rationale for the set of COTS Glue Code Effort Adjustment Factors presented below.)

The basic approach taken to modeling COTS glue code writing effort is shown in the chart below. The formulation of the Glue Code submodel uses the same general form as does COCOMO. The model presumes that the total amount of glue code to be written for a project can be predicted and quantified (in either source lines of code or function points), including the amount of reworking or "breakage" of that code that will likely occur due to changes in requirements or new releases of COTS components by the vendors during the integration period; also presumed to be predictable are the broad conditions that will obtain while that glue code is being written--in terms of personnel, product, system, and architectural issues--and that these too can be characterized by standardized rating criteria. The model then presumes that a parameterized value for a linear scaling constant (in either units of person-months per source lines of code or person-months per function points) can be determined for a given domain (this is the constant A in the formula below). Other parameterized values are also presumed to be determinable by domain: a nonlinear scale factor that accounts for diseconomies of scale that can occur depending on how thoroughly the architecting of the overall system into which the COTS component is being integrated was conducted; and some thirteen Effort Adjustment Factors that linearly inflate or deflate the estimated size of the glue code writing effort based upon a rating from very low to very high of specific project conditions.

The total glue code writing effort for a project then becomes a function of the amount (or size) of glue code to be written, its estimated percentage breakage, the linear constant A, the rated nonlinear architectural scale factor, and the individual rated effort adjustment factors.
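The chart with the actual glue code formula is not reproduced on this page, but based on the description above the calculation has the familiar COCOMO-like shape sketched below in Python. Treat the exact placement of the breakage term and the AAREN-derived exponent as our reading of the prose rather than the model's published formulation, and the parameter values as placeholders awaiting calibration.

    from math import prod   # Python 3.8+

    def glue_code_effort(glue_size, breakage_fraction, A, aaren_exponent, effort_multipliers):
        # Rough sketch: effort (person-months) =
        #   A * (glue code size inflated by expected breakage) ** (AAREN-derived scale exponent)
        #     * product of the thirteen linear Effort Adjustment Factors.
        # glue_size is in SLOC or function points; breakage_fraction is the expected rework fraction;
        # A is the domain-calibrated linear constant; effort_multipliers are the rated EAF values.
        effective_size = glue_size * (1 + breakage_fraction)
        return A * effective_size ** aaren_exponent * prod(effort_multipliers)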
 

The table below summarizes the thirteen linear effort multipliers and one nonlinear scale factor.
 

COTS Glue Code Effort Adjustment Factors (EAFs)
Personnel Drivers
  • 1) ACIEP - COTS Integrator Experience with Product
  • 2) ACIPC - COTS Integrator Personnel Capability
  • 3) AXCIP - Integrator Experience with COTS Integration Processes
  • 4) APCON - Integrator Personnel Continuity
COTS Component Drivers
  • 5) ACPMT - COTS Product Maturity
  • 6) ACSEW - COTS Supplier Product Extension Willingness
  • 7) APCPX - COTS Product Interface Complexity
  • 8) ACPPS - COTS Supplier Product Support
  • 9) ACPTD - COTS Supplier Provided Training and Documentation
Application/System Drivers
  • 10) ACREL - Constraints on Application System/Subsystem Reliability
  • 11) AACPX - Application Interface Complexity
  • 12) ACPER - Constraints on COTS Technical Performance
  • 13) ASPRT - Application System Portability
Nonlinear Scale Factor
  • AAREN - Application Architectural Engineering

 

The next table below provides the criteria used to individually rate each cost driver indicated in the table above.
 

Rating Scales Criteria
Linear Effort Adjustment Factors
Personnel Drivers
1) ACIEP - COTS Integrator Experience with Product
How much experience did/does the development staff have with running, integrating, and maintaining the COTS products? 

Metric: months/years of experience with product.

  • Very Low: Staff on average has no experience with the products.
  • Low: Staff on average has less than 6 months' experience with the products.
  • Nominal: Staff on average has between 6 months' and 1 year's experience with the products.
  • High: Staff on average has between 1 and 2 years' experience with the products.
  • Very High: Staff on average has more than 2 years' experience with the products.
2) ACIPC - COTS Integrator Personnel Capability
What were/are the overall software development skills and abilities which your team as a whole on average brought/bring to the product integration tasks as well as experience with the specific tools, languages, platforms, and operating systems used/being used in the integration tasks? 

Metric: months/years of experience.

  • Very Low: Staff on average has no development experience or experience with the specific environmental items listed.
  • Low: Staff on average has less than 6 months' development experience or experience with the specific environmental items listed.
  • Nominal: Staff on average has between 6 months' and 1 year's development experience or experience with the specific environmental items listed.
  • High: Staff on average has between 1 and 2 years' development experience or experience with the specific environmental items listed.
  • Very High: Staff on average has more than 2 years' development experience or experience with the specific environmental items listed.
3) AXCIP - Integrator Experience with COTS Integration Processes
Does a formal and validated COTS integration process exist within your organization and how experienced was/is the development staff in that formal process? 

Metric: a mix of conditions including SEI CMM level, ISO 9001 certification, and number of times integration team as a whole on average has used the defined COTS integration process.

  • Very Low: Not Applicable
  • Low: CMM level = 1 OR there is no formally defined COTS integration process.
  • Nominal: [CMM level = 2 OR ISO 9001 certified] AND there is a formally defined COTS integration process AND the integration team has never used the process before.
  • High: [CMM level = 3 OR ISO 9001 certified] AND there is a formally defined COTS integration process AND the integration team has used the process 1 or 2 times before.
  • Very High: [CMM level > 3 OR ISO 9001 certified] AND there is a formally defined COTS integration process AND the integration team has used the process 3 or more times before.
4) APCON - Integrator Personnel Continuity
How stable was/is your integration team? Are the same people staying around for the duration of the tasks, or must you keep bringing in new people and familiarizing them with the particulars of the project because experienced personnel leave? 

Metric: annual integration personnel turnover rate (a high personnel turnover rate implies a low personnel continuity).

  • Very Low: 48% or more per year.
  • Low: Between 24% and 47% per year.
  • Nominal: Between 12% and 23% per year.
  • High: Between 6% and 11% per year.
  • Very High: 5% or less per year.
COTS Component Drivers
5) ACPMT - COTS Product Maturity
How many copies have been sold or used previously of the major versions (as opposed to releases of those versions) of the COTS components you integrated or intend to integrate? How long have the versions been on the market or available for use? How large are the versions' market shares or installed user bases? How thoroughly have the versions been used by others in the manner you used or intend to use them? 

Metric: time on market.

  • Very Low: Versions in pre-release beta test.
  • Low: Versions on market/available less than 6 months.
  • Nominal: Versions on market/available between 6 months and 1 year.
  • High: Versions on market/available between 1 and 2 years.
  • Very High: Versions on market/available more than 2 years.
6) ACSEW - COTS Supplier Product Extension Willingness
How willing were/are the suppliers of the COTS products to modify the design of their software to meet your specific needs, either by adding or removing functionality or by changing the way it operates? This refers to changes that would appear in market releases of the product. This does NOT include specialty changes in the COTS component that would appear in your copy only. 

Metric: number and nature of changes supplier will make.

  • Very Low: Not Applicable
  • Low: Suppliers will not make any changes.
  • Nominal: Suppliers will make a few minor changes.
  • High: Suppliers will make one major change and several minor ones.
  • Very High: Suppliers will make two or more major changes along with any minor changes needed.
7) APCPX - COTS Product Interface Complexity
What is the nature of the interfaces between the COTS components and the glue code connecting them to the main application? Are there difficult synchronization issues? Must the interfaces balance conflicting criteria (e.g., security, safety, accuracy, ease of use, speed)? 

Metric: the scale for this driver uses a subjective average of the three equally weighted facets of interface complexity described in the Interface Complexity Criteria table below. To rate this driver, first rate the three items (interface conventions, control aspects, data) in the Interface Complexity Criteria table individually according to the criteria given in the table. Next, sum the total point score as described in the table for the combination of ratings you selected, and determine which gross category that score corresponds to on the scale below.

Example: individual ratings of Low for Interface Conventions, Low for Control Aspects, and Very High for Data would result in a point total of 9, indicating a gross combined rating of Nominal.

Very Low: Not Applicable
Low: Point total is between 5 and 7.
Nominal: Point total is between 8 and 10.
High: Point total is between 11 and 13.
Very High: Point total is between 14 and 15.
8) ACPPS - COTS Supplier Product Support
What is the nature of the technical support for the COTS components that was/is available and procured for the integration team during the development, either directly from the component suppliers or through third parties? 

Metric: the level of support available and procured.

Very Low: Not Applicable
Low: Products are unsupported.
Nominal: Help desk support.
High: Trained technical support.
Very High: Formal consulting help.
9) ACPTD - COTS Supplier Provided Training and Documentation
How much training and/or documentation for the COTS components was/is available and procured for the integration team during the development, either directly from the component suppliers or through third parties? 

Metric: the amount of training and/or documentation available and procured.

Very Low: No training and very little documentation procured.
Low: Roughly 1/4 of the needed training and/or documentation procured.
Nominal: Roughly 1/2 of the needed training and/or documentation procured.
High: Roughly 3/4 of the needed training and/or documentation procured.
Very High: As much training and/or documentation procured as needed.
Application/System Drivers
10) ACREL - Constraints on Application System/Subsystem Reliability
How severe are the overall reliability constraints on the system or subsystem into which the COTS components were/are being integrated? What are the potential consequences if the components fail to perform as required in any given time frame? (Note that availability is considered an issue distinct from reliability and is NOT addressed in this cost driver.) 

Metric: the potential threat if the component fails to perform as expected.

Very Low: Not Applicable
Low: Threat is low; if a failure occurs, losses are easily recoverable (e.g., document publishing).
Nominal: Threat is moderate; if a failure occurs, losses are fairly easily recoverable (e.g., support systems).
High: Threat is high; if a failure occurs, the risk is to mission critical requirements.
Very High: Threat is very high; if a failure occurs, the risk is to safety critical requirements.
11) AACPX - Application Interface Complexity
What is the nature of the interfaces between the main application system or subsystem and the glue code used to connect the system to the COTS components? Are there difficult synchronization issues? Must the interfaces balance conflicting criteria (e.g., security, safety, accuracy, ease of use, speed)? 

Metric: the scale for this driver uses a subjective average of the three equally weighted facets of interface complexity described in the Interface Complexity Criteria table below. To rate this driver, first rate the three items (interface conventions, control aspects, data) in the Interface Complexity Criteria table individually according to the criteria given in the table. Next, sum the total point score as described in the table for the combination of ratings you selected, and determine which gross category that score corresponds to on the scale below.

Example: individual ratings of Low for Interface Conventions, Low for Control Aspects, and Very High for Data would result in a point total of 9, indicating a gross combined rating of Nominal.

Very Low: Not Applicable
Low: Point total is between 5 and 7.
Nominal: Point total is between 8 and 10.
High: Point total is between 11 and 13.
Very High: Point total is between 14 and 15.
12) ACPER - Constraints on COTS Technical Performance
How severe were/are the technical performance constraints (e.g., storage, memory, reserve, flow through capacity, etc.) on the application system or subsystem that the COTS components needed to/must meet? 

Metric: the presence or absence of constraints.

Very Low: Not Applicable
Low: Not Applicable
Nominal: There are no technical constraints or real time processing needs.
High: Real time processing must be performed OR other technical constraints exist.
Very High: Real time processing must be performed AND other technical constraints exist.
13) ASPRT - Application System Portability
What were/are the overall system or subsystem portability requirements that the COTS component needed to/must meet? 

Metric: the nature of portability requirements.

Very Low Low Nominal High Very High
Not Applicable Not Applicable There are no portability requirements at the system/subsystem level. System must be portable across platforms within the same family (e.g., across different versions of UNIX). System must be portable across divergent platforms (e.g., from UNIX to VMS).
Nonlinear Scale Factor
AAREN - Application Architectural Engineering
How adequate/sophisticated were the techniques used to define and validate the overall systems architecture? 

Metric: architecture validation techniques.

Very Low: No architecture validation done.
Low: Paper analysis performed.
Nominal: Peer reviews of architectural design (including interface definitions).
High: Prototyping/demos of the architecture performed.
Very High: Simulations of the architecture created.

 

Use the table below in evaluating interface complexity drivers #7 (APCPX) and #11 (AACPX): use it once for APCPX, then again for AACPX. Rate each complexity element described in the table individually, recording the point value associated with your rating in the far right column. Then sum all three point values to arrive at a total point score (the minimum possible score is 5, the maximum is 15). Finally, apply that total point score to the scales provided for each of the two cost drivers as indicated in their descriptions above.
 

Interface Complexity Criteria
(point values: Very Low = 1, Low = 2, Nominal = 3, High = 4, Very High = 5)

Interface Conventions (e.g., naming, relevant usage scenarios, service signature, service order)
Very Low: Not Applicable
Low: Nearly all API conventions are clear and consistent.
Nominal: Most API conventions are clear and consistent.
High: Few API conventions are clear and consistent.
Very High: API conventions are nonexistent.
Corresponding Points: _______

Control Aspects (e.g., consistent and clear error handling/recovery)
Very Low: Not Applicable
Low: Nearly all control aspects are well defined and consistently applied.
Nominal: Most control aspects are well defined and consistently applied.
High: Few control aspects are well defined and consistently applied.
Very High: No control aspects are well defined and consistently applied.
Corresponding Points: _______

Data (e.g., conversion, number/range typing)
Very Low: No data conversion required.
Low: Little data conversion required and standard data types used.
Nominal: Some data conversion required and standard data types used.
High: Significant data conversion required and/or non-standard data types used.
Very High: Extensive data conversion required and/or non-standard data types used.
Corresponding Points: _______

Total Point Score = _________ 
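
To make the scoring procedure concrete, here is a minimal sketch (illustrative only, not part of the official COCOTS definition) that sums the three element point values and maps the total onto the APCPX/AACPX driver scale given above; the function and variable names are ours, not the model's.

    # Illustrative sketch: scoring interface complexity drivers #7 (APCPX) and #11 (AACPX)
    # from the three Interface Complexity Criteria elements rated in the table above.
    POINTS = {"Very Low": 1, "Low": 2, "Nominal": 3, "High": 4, "Very High": 5}

    def interface_complexity_rating(interface_conventions, control_aspects, data):
        """Sum the three element point values and map the total onto the driver scale."""
        total = POINTS[interface_conventions] + POINTS[control_aspects] + POINTS[data]
        if total <= 4:
            rating = "Very Low (Not Applicable)"  # in practice the minimum achievable total is 5
        elif total <= 7:
            rating = "Low"
        elif total <= 10:
            rating = "Nominal"
        elif total <= 13:
            rating = "High"
        else:
            rating = "Very High"  # point total of 14 or 15
        return total, rating

    # The example from the driver descriptions: Low + Low + Very High = 2 + 2 + 5 = 9 -> Nominal
    print(interface_complexity_rating("Low", "Low", "Very High"))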

 
 

Glue Code Effort Estimation Example:

The final table below in this section summarizes the key Glue Code submodel elements in terms of inputs, outputs, and required calibrated parameters.
 

Summary of Glue Code Submodel Elements (Post-architecture)

Glue Code formula
User Inputs:
  • Total size of all glue code in the project, either in SLOC or Function Points
  • Ratings for the 13 EAFs and one scale factor
  • Estimate of % glue code breakage
Submodel Outputs:
  • Total effort spent creating COTS glue code for the project
Calibrated Submodel Parameters:
  • One linear scaling constant; one set of 13 EAF parameters plus the nonlinear scale factor (each with 5 subparameters) for each standardized project domain
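
The calibrated constants themselves are not reproduced in this summary, but as a rough illustration of how the inputs above combine, the sketch below assumes a COCOMO-style form (a linear constant times effective glue code size, raised to an exponent adjusted by the nonlinear scale factor, times the product of the 13 EAF multipliers). The constant, exponent contribution, and multiplier values shown are placeholders, not calibrated COCOTS parameters.

    from math import prod

    def glue_code_effort(ksloc_glue, breakage_pct, eaf_multipliers, scale_exponent, a=3.0):
        """Hedged sketch of a COCOMO-style glue code effort calculation (person-months).

        ksloc_glue      -- total glue code size in KSLOC (or converted from function points)
        breakage_pct    -- estimated % glue code breakage
        eaf_multipliers -- multipliers corresponding to the 13 EAF ratings
        scale_exponent  -- contribution of the nonlinear scale factor to the size exponent
        a               -- linear scaling constant (placeholder value, not the calibrated one)
        """
        effective_size = ksloc_glue * (1.0 + breakage_pct / 100.0)  # size grown by breakage
        return a * effective_size ** (1.0 + scale_exponent) * prod(eaf_multipliers)

    # e.g., 2 KSLOC of glue code, 20% breakage, all 13 drivers rated Nominal (multiplier 1.0):
    print(glue_code_effort(2.0, 20.0, [1.0] * 13, scale_exponent=0.04))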
System Volatility:
Total Cost:
Early Design Model Definition
Assessment:
Tailoring:
Glue Code:
System Volatility:
Total Cost:
Overall Cost Estimation Guidelines:

COTS component integration cost estimation guidelines using COCOTS and COCOMO II are as follows:

*Summing the assessment effort determined by COCOTS with the overall new-system development effort determined by COCOMO II more completely captures the true cost of using COTS products as infrastructure and tools, but in this circumstance the assessment activity is still not reflected in the schedule estimate provided by COCOMO II.
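
For illustration only, the arithmetic implied by this guideline is simple addition of effort totals; every figure below is a placeholder, not a calibrated estimate, and the variable names are ours.

    # Placeholder figures illustrating the guideline above: COCOTS submodel efforts are
    # added to the COCOMO II development effort to approximate total system effort.
    cocomo2_development_pm = 120.0  # new-code development effort from COCOMO II (person-months)
    cots_assessment_pm = 6.0        # COCOTS Assessment submodel output
    cots_tailoring_pm = 4.0         # COCOTS Tailoring submodel output
    cots_glue_code_pm = 15.0        # COCOTS Glue Code submodel output
    cots_volatility_pm = 5.0        # COCOTS Volatility submodel output

    total_effort_pm = (cocomo2_development_pm + cots_assessment_pm + cots_tailoring_pm
                       + cots_glue_code_pm + cots_volatility_pm)
    print(f"Total estimated effort: {total_effort_pm} person-months")
    # Note: per the guideline, the assessment activity is still not reflected in the
    # COCOMO II schedule estimate; only effort totals are combined here.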
Significant Outstanding Modeling Issues:

Listed below are issues recognized as arguable weaknesses in the COCOTS model as it is currently formulated; as such, they are high-priority items for resolution:

* In the meantime, the key to sizing GLUE CODE is to keep in mind the three glue code bounding criteria we have adopted: 1) any code required to facilitate information or data exchange between the COTS component and the application; 2) any code needed to "hook" the COTS component into the application, even though it may not necessarily facilitate data exchange; and 3) any code needed to provide functionality that was originally intended to be provided by the COTS component, AND which must interact with that COTS component. Once the glue code has been bounded by these criteria, sizing can be done either 1) as a standard source-lines-of-code count (again, where SLOC refers to logical SLOC), or 2) in function points using recognized IFPUG counting rules. The third bounding criterion in particular means that, if sizing is done in function points, any or all of the standard functional types (External Inputs, External Outputs, External Inquiries, Internal Logical Files, and External Interface Files) may be applied as needed.
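
As a hedged illustration of how these bounding criteria might be applied in practice, the sketch below tags each piece of new code with the criterion (if any) that qualifies it as glue code and sums logical SLOC; the module names, roles, and counts are invented for the example.

    # Illustrative only: applying the three glue code bounding criteria to decide
    # which new modules count toward glue code size.
    GLUE_ROLES = {
        "data_exchange",   # 1) facilitates information/data exchange with the COTS component
        "hook",            # 2) hooks the COTS component into the application
        "replacement",     # 3) supplies functionality originally expected of the COTS component
                           #    AND interacts with that component
    }

    new_modules = [
        # (module name, logical SLOC, role) -- all values invented
        ("csv_to_cots_adapter", 450, "data_exchange"),
        ("cots_startup_wrapper", 120, "hook"),
        ("report_formatter", 800, "application"),  # ordinary new application code, not glue
        ("fallback_query_shim", 300, "replacement"),
    ]

    glue_sloc = sum(sloc for _name, sloc, role in new_modules if role in GLUE_ROLES)
    print(f"Glue code size: {glue_sloc} logical SLOC")  # 870 in this invented example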

** Again in the meantime, the various BREAKAGE figures are best estimated from knowledge of two things: 1) your COTS vendors' past histories regarding releases of the COTS products in question, or of similar products that the vendors market; and 2) your customers' past histories of demanding changes in requirements after system development and COTS product integration have begun.
Prospective Modeling Follow-ons:

Aside from the issues noted above, other areas ripe for exploration as part of future efforts to improve the fidelity of COCOTS include the following:

We have a basic skeleton in mind for addressing some of these other COTS-related costs, taking into account the dynamic nature of costs over time and the time-value of money. Highlights of the approach are captured in the following three charts, which illustrate a high-level formula for these costs:
 

These formulas provide an orderly conceptual framework that we will use to systematically explore the best approaches to expanding the scope of COCOTS so that it addresses the modeling issues identified above as desirable follow-ons, in the hope of further enhancing the utility of COCOTS to the professional software cost estimation community.





Copyright 1995, 1996, 1997, 1998 The University of Southern California
 

The written material, text, graphics, and software available on this page and all related pages may be copied, used, and distributed freely as long as the University of Southern California as the source of the material, text, graphics or software is always clearly indicated and such acknowledgement always accompanies any reuse or redistribution of the material, text, graphics or software; also permission to use the material, text, graphics or software on these pages does not include the right to repackage the material, text, graphics or software in any form or manner and then claim exclusive proprietary ownership of it as part of a commercial offering of services or as part of a commercially offered product.