Question
1. Use the COCOMO II cost modeling technique to write the formula (you are not required to calculate an arithmetic result) for estimating the effort required to complete a project. Assume that you have an initial architectural design for the system.
• You have the following scale factors: precedentedness (4), development flexibility (1), architecture/risk resolution (5), team cohesion (3), and process maturity (3).
• Your project has 8,000 source lines of code.
• The organization-dependent constant factor is 2.9 (initial calibration).
• All of your project's cost drivers are Nominal, i.e. each effort multiplier is 1.0.
Explanation / Answer
The initial definition of COCOMO II and its rationale are described in this paper. The definition will be refined as additional data are collected and analyzed. The primary objectives of the COCOMO II effort are:
• To develop a software cost and schedule estimation model tuned to the life cycle practices of the 1990's and 2000's.
• To develop software cost database and tool support capabilities for continuous model improvement.
• To provide a quantitative analytic framework, and set of tools and techniques for evaluating the effects of software technology improvements on software life cycle costs and schedules.
3.2 Scaling Drivers
Equation 12 defines the exponent, B, used in Equation 1. Table 21 provides the rating levels for the COCOMO® II scale drivers. The selection of scale drivers is based on the rationale that they are a significant source of exponential variation on a project's effort or productivity variation. Each scale driver has a range of rating levels, from Very Low to Extra High. Each rating level has a weight, W, and the specific value of the weight is called a scale factor. A project's scale factors, Wi, are summed across all of the factors, and used to determine a scale exponent, B, via the following formula:
EQ 12.    B = 1.01 + 0.01 × ΣWi
For example, if scale factors with an Extra High rating are each assigned a weight of 0, then a 100 KSLOC project with Extra High ratings for all factors will have ΣWi = 0, B = 1.01, and a relative effort E = 100^1.01 ≈ 105 PM. If scale factors with a Very Low rating are each assigned a weight of 5, then a project with Very Low ratings for all factors will have ΣWi = 25, B = 1.26, and a relative effort E = 100^1.26 ≈ 331 PM. This represents a large variation, but the increase involved in a one-unit change in one of the factors is only about 4.7%.
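Applying this relationship to the values given in the question (a sketch of the set-up only, since the question does not require an arithmetic result; a locally calibrated model may use different constants): the five scale factor weights sum to ΣWi = 4 + 1 + 5 + 3 + 3 = 16, so B = 1.01 + 0.01 × 16 = 1.17. With the organization-dependent constant A = 2.9, a size of 8,000 SLOC (8 KSLOC), and all 17 effort multipliers at their Nominal value of 1.0, the effort estimate takes the form Effort (PM) = A × (Size)^B × Π EMi = 2.9 × (8)^1.17 × 1.0.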
Table 6: Scale Factors for COCOMO® II Early Design and Post-Architecture Models
Scale Factors (Wi), rated from Very Low to Extra High:
• PREC: thoroughly unprecedented / largely unprecedented / somewhat unprecedented / generally familiar / largely familiar / thoroughly familiar
• FLEX: rigorous / occasional relaxation / some relaxation / general conformity / some conformity / general goals
• RESL(a): little (20%) / some (40%) / often (60%) / generally (75%) / mostly (90%) / full (100%)
• TEAM: very difficult interactions / some difficult interactions / basically cooperative interactions / largely cooperative / highly cooperative / seamless interactions
• PMAT: weighted average of "Yes" answers to CMM Maturity Questionnaire
(a) % significant module interfaces specified, % significant risks eliminated.
3.2.1 Precedentedness (PREC) and Development Flexibility (FLEX)
These two scale factors largely capture the differences between the Organic, Semidetached and Embedded modes of the original COCOMO® model [Boehm 1981]. Table 7 reorganizes [Boehm 1981, Table 6.3] to map its project features onto the Precedentedness and Development Flexibility scales. This table can be used as a more in-depth explanation for the PREC and FLEX rating scales given in Table 21.
Table 7: Scale Factors Related to COCOMO® Development Modes
3.2.2 Architecture / Risk Resolution (RESL)
This factor combines two of the scale factors in Ada COCOMO®, "Design Thoroughness by Product Design Review (PDR)" and "Risk Elimination by PDR" [Boehm and Royce 1989; Figures 4 and 5]. Table 8 consolidates the Ada COCOMO® ratings to form a more comprehensive definition for the COCOMO® II RESL rating levels. The RESL rating is the subjective weighted average of the listed characteristics. (Explain the Ada COCOMO® ratings)
3.2.3 Team Cohesion (TEAM)
The Team Cohesion scale factor accounts for the sources of project turbulence and entropy due to difficulties in synchronizing the project's stakeholders: users, customers, developers, maintainers, interfacers, and others. These difficulties may arise from differences in stakeholder objectives and cultures, difficulties in reconciling objectives, and the stakeholders' lack of experience and familiarity in operating as a team. Table 9 provides a detailed definition for the overall TEAM rating levels. The final rating is the subjective weighted average of the listed characteristics.
Table 8: RESL Rating Components
Table 9: TEAM Rating Components
3.2.4 Process Maturity (PMAT)
The procedure for determining PMAT is organized around the Software Engineering Institute's Capability Maturity Model (CMM). The time period for rating Process Maturity is the time the project starts. There are two ways of rating Process Maturity. The first captures the result of an organized evaluation based on the CMM.
Overall Maturity Level
• CMM Level 1 (lower half)
• CMM Level 1 (upper half)
• CMM Level 2
• CMM Level 3
• CMM Level 4
• CMM Level 5
Key Process Areas
The second is organized around the 18 Key Process Areas (KPAs) in the SEI Capability Maturity Model [Paulk et al. 1993, 1993a]. The procedure for determining PMAT is to decide the percentage of compliance for each of the KPAs. If the project has undergone a recent CMM Assessment then the percentage compliance for the overall KPA (based on KPA Key Practice compliance assessment data) is used. If an assessment has not been done then the levels of compliance to the KPA's goals are used (with the Likert scale below) to set the level of compliance. The goal-based level of compliance is determined by a judgement-based averaging across the goals for each Key Process Area. If more information is needed on the KPA goals, they are listed in Appendix B of this document.
* Check Almost Always when the goals are consistently achieved and are well established in standard operating procedures (over 90% of the time).
* Check Frequently when the goals are achieved relatively often, but sometimes are omitted under difficult circumstances (about 60 to 90% of the time).
* Check About Half when the goals are achieved about half of the time (about 40 to 60% of the time).
* Check Occasionally when the goals are sometimes achieved, but less often (about 10 to 40% of the time).
* Check Rarely If Ever when the goals are rarely if ever achieved (less than 10% of the time).
* Check Does Not Apply when you have the required knowledge about your project or organization and the KPA, but you feel the KPA does not apply to your circumstances.
* Check Don't Know when you are uncertain about how to respond for the KPA.
After the level of KPA compliance is determined, each compliance level is weighted and a PMAT factor is calculated, as in Equation 13. Initially, all KPAs will be equally weighted.
EQ 13.    PMAT = 5 − Σ (KPA%i / 100) × (5/18), with the sum taken over the 18 Key Process Areas
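As a minimal sketch of this procedure, assuming the equally weighted form of Equation 13 above (the function name and the percentage values in the example are illustrative, not from the source):

```python
def pmat_factor(kpa_compliance_percent):
    """Equivalent PMAT scale-factor weight from per-KPA compliance percentages.

    kpa_compliance_percent: percent compliance (0-100) for each rated KPA;
    KPAs marked "Does Not Apply" or "Don't Know" should be left out.
    """
    n = len(kpa_compliance_percent)
    if n == 0:
        raise ValueError("need at least one rated KPA")
    # Equal weighting across the rated KPAs: full compliance everywhere
    # yields 0 (most mature), no compliance yields 5 (least mature).
    return 5.0 - sum(p / 100.0 for p in kpa_compliance_percent) * (5.0 / n)

# Example: all 18 KPAs achieved "Frequently" (about 60% of the time).
print(round(pmat_factor([60] * 18), 2))   # -> 2.0
```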
6.3 Cost Drivers
These are the 17 effort multipliers used in the COCOMO® II Post-Architecture model to adjust the nominal effort, in Person Months, to reflect the software product under development. They are grouped into four categories: product, platform, personnel, and project. Table 21 lists the different cost drivers with their rating criteria (found at the end of this section). Whenever an assessment of a cost driver is between the rating levels, always round to the Nominal rating, e.g. if a cost driver rating is between High and Very High, then select High. The counterpart 7 effort multipliers for the Early Design model are discussed in the chapter explaining that model.
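As a minimal sketch of how these pieces fit together (the function name is illustrative; the exponent uses the Equation 12 form given earlier, and A = 2.9 with all-Nominal multipliers comes from the question rather than from a calibration of this model):

```python
from math import prod  # Python 3.8+

def cocomo2_effort(ksloc, scale_factor_weights, effort_multipliers, a=2.9):
    """Illustrative Post-Architecture effort estimate in person-months.

    ksloc                : size in thousands of source lines of code
    scale_factor_weights : the five weights Wi (PREC, FLEX, RESL, TEAM, PMAT)
    effort_multipliers   : the 17 cost-driver ratings EMi (1.0 = Nominal)
    a                    : organization-dependent calibration constant
    """
    b = 1.01 + 0.01 * sum(scale_factor_weights)   # Equation 12
    return a * ksloc ** b * prod(effort_multipliers)

# Values from the question: 8 KSLOC, Wi = (4, 1, 5, 3, 3), all drivers Nominal.
print(round(cocomo2_effort(8, [4, 1, 5, 3, 3], [1.0] * 17), 1))  # about 33.0 PM
```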
6.3.1 Product Factors
Required Software Reliability (RELY)
This is the measure of the extent to which the software must perform its intended function over a period of time. If the effect of a software failure is only slight inconvenience then RELY is low. If a failure would risk human life then RELY is very high.
Data Base Size (DATA)
This measure attempts to capture the effect that large data requirements have on product development. The rating is determined by calculating D/P. The reason the size of the database is important to consider is the effort required to generate the test data that will be used to exercise the program.
EQ 16.    D/P = (database size in bytes) / (program size in SLOC)
DATA is rated as low if D/P is less than 10 and it is very high if it is greater than 1000.
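As a hypothetical illustration of the D/P calculation: a program of 8,000 SLOC with 400,000 bytes of test data would have D/P = 400,000 / 8,000 = 50, which lies between the Low and Very High thresholds quoted above.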
Product Complexity (CPLX)
Table 20 (found at the end of this section) provides the new COCOMO® II CPLX rating scale. Complexity is divided into five areas: control operations, computational operations, device-dependent operations, data management operations, and user interface management operations. Select the area or combination of areas that characterize the product or a sub-system of the product. The complexity rating is the subjective weighted average of these areas.
Required Reusability (RUSE)
This cost driver accounts for the additional effort needed to construct components intended for reuse on the current or future projects. This effort goes into creating a more generic software design, more elaborate documentation, and more extensive testing to ensure components are ready for use in other applications.
Documentation match to life-cycle needs (DOCU)
Several software cost models have a cost driver for the level of required documentation. In COCOMO® II, the rating scale for the DOCU cost driver is evaluated in terms of the suitability of the project's documentation to its life-cycle needs. The rating scale goes from Very Low (many life-cycle needs uncovered) to Very High (very excessive for life-cycle needs).
6.3.2 Platform Factors
The platform refers to the target-machine complex of hardware and infrastructure software (previously called the virtual machine). The factors have been revised to reflect this as described in this section. Some additional platform factors were considered, such as distribution, parallelism, embeddedness, and real-time operations. These considerations have been accommodated by the expansion of the Module Complexity ratings in Table 20.
Execution Time Constraint (TIME)
This is a measure of the execution time constraint imposed upon a software system. The rating is expressed in terms of the percentage of available execution time expected to be used by the system or subsystem consuming the execution time resource. The rating ranges from nominal, where less than 50% of the execution time resource is used, to extra high, where 95% of the execution time resource is consumed.
Main Storage Constraint (STOR)
This rating represents the degree of main storage constraint imposed on a software system or subsystem. Given the remarkable increase in available processor execution time and main storage, one can question whether these constraint variables are still relevant. However, many applications continue to expand to consume whatever resources are available, making these cost drivers still relevant. The rating ranges from nominal, less than 50%, to extra high, 95%.
Platform Volatility (PVOL)
"Platform" is used here to mean the complex of hardware and software (OS, DBMS, etc.) the software product calls on to perform its tasks. If the software to be developed is an operating system then the platform is the computer hardware. If a database management system is to be developed then the platform is the hardware and the operating system. If a network text browser is to be developed then the platform is the network, computer hardware, the operating system, and the distributed information repositories. The platform includes any compilers or assemblers supporting the development of the software system. This rating ranges from low, where there is a major change every 12 months, to very high, where there is a major change every two weeks.
6.3.3 Personnel Factors
Analyst Capability (ACAP)
Analysts are personnel who work on requirements, high-level design and detailed design. The major attributes that should be considered in this rating are analysis and design ability, efficiency and thoroughness, and the ability to communicate and cooperate. The rating should not consider the level of experience of the analyst; that is rated with AEXP. Analysts that fall in the 15th percentile are rated Very Low and those that fall in the 95th percentile are rated Very High.
Programmer Capability (PCAP)
Current trends continue to emphasize the importance of highly capable analysts. However, the increasing role of complex COTS packages, and the significant productivity leverage associated with programmers' ability to deal with these COTS packages, indicate a trend toward higher importance of programmer capability as well.
Evaluation should be based on the capability of the programmers as a team rather than as individuals. Major factors which should be considered in the rating are ability, efficiency and thoroughness, and the ability to communicate and cooperate. The experience of the programmer should not be considered here; it is rated with AEXP. A very low rated programmer team is in the 15th percentile and a very high rated programmer team is in the 95th percentile.
Applications Experience (AEXP)
This rating is dependent on the level of applications experience of the project team developing the software system or subsystem. The ratings are defined in terms of the project team's equivalent level of experience with this type of application. A very low rating is for application experience of less than 2 months. A very high rating is for experience of 6 years or more.
Platform Experience (PEXP)
The Post-Architecture model broadens the productivity influence of PEXP, recognizing the importance of understanding the use of more powerful platforms, including more graphic user interface, database, networking, and distributed middleware capabilities.
Language and Tool Experience (LTEX)
This is a measure of the level of programming language and software tool experience of the project team developing the software system or subsystem. Software development includes the use of tools that perform requirements and design representation and analysis, configuration management, document extraction, library management, program style and formatting, consistency checking, etc. In addition to experience in programming with a specific language, the supporting tool set also affects development time. A low rating is given for experience of less than 2 months. A very high rating is given for experience of 6 or more years.
Personnel Continuity (PCON)
The rating scale for PCON is in terms of the project's annual personnel turnover: from 3% per year (Very High) to 48% per year (Very Low).
6.3.4 Project Factors
Use of Software Tools (TOOL)
Software tools have improved significantly since the 1970s-era projects that were used to calibrate COCOMO®. The tool rating ranges from simple edit and code (Very Low) to integrated lifecycle management tools (Very High).
Multisite Development (SITE)
Given the increasing frequency of multisite developments, and indications that multisite development effects are significant, the SITE cost driver has been added in COCOMO® II. Determining its cost driver rating involves the assessment and averaging of two factors: site collocation (from fully collocated to international distribution) and communication support (from surface mail and some phone access to full interactive multimedia).
Required Development Schedule (SCED)
This rating measures the schedule constraint imposed on the project team developing the software. The ratings are defined in terms of the percentage of schedule stretch-out or acceleration with respect to a nominal schedule for a project requiring a given amount of effort. Accelerated schedules tend to produce more effort in the later phases of development because more issues are left to be determined due to lack of time to resolve them earlier. A schedule compression to 74% of nominal is rated Very Low. A stretch-out of a schedule produces more effort in the earlier phases of development, where there is more time for thorough planning, specification and validation. A stretch-out to 160% of nominal is rated Very High.
Table 20: Module Complexity Ratings versus Type of Module
Table 21: Post-Architecture Cost Driver Rating Level Summary
Scale Factors (Wi), rated from Very Low to Extra High:
• PREC: thoroughly unprecedented / largely unprecedented / somewhat unprecedented / generally familiar / largely familiar / thoroughly familiar
• FLEX: rigorous / occasional relaxation / some relaxation / general conformity / some conformity / general goals
• RESL: little (20%) / some (40%) / often (60%) / generally (75%) / mostly (90%) / full (100%)
• TEAM: very difficult interactions / some difficult interactions / basically cooperative interactions / largely cooperative / highly cooperative / seamless interactions
• PMAT: weighted average of "Yes" answers to CMM Maturity Questionnaire