Software Productivity Research

What Are Function Points?

The standard economic definition of productivity is "goods or services produced per unit of labor and expense." Until 1979, when A.J. Albrecht of IBM published his Function Point metric, there was no definition for software of exactly what "goods or services" constituted the outputs of a software project.

The previous metric for software was "cost per line of source code," which unfortunately does not correlate at all with the economic definition of productivity. All manufacturing managers understand that if a manufacturing process involves a substantial percentage of fixed costs, and there is a decline in the number of units manufactured, then the cost per unit must go up.

Software, as it turns out, involves a substantial percentage of fixed or inelastic costs that are not associated with coding. When more powerful programming languages are used, the result is to reduce the number of "units" that must be produced for a given program or system. However, the requirements, specifications, user documents, and many other cost elements tend to behave like fixed costs, and hence cause metrics such as "cost per line of source code" to move paradoxically upwards instead of downwards.

Table 1 provides an example showing two versions of a software project. Case A is written in a primitive Assembler language and Case B is written in the more powerful FORTRAN language. Observe the paradox: the FORTRAN version of the application cost $50,000 less than the Assembler language version, for a savings of 40 percent in terms of real economic cost. Yet the cost per source line metric favors the Assembler language version by 2 to 1.

In Case A, some 40 percent of the costs were for coding, while in Case B only 20 percent went to coding. The non-coding costs tend to behave inelastically and act as fixed costs, hence invalidating source code metrics as economic indicators.

Table 1. The Paradox of Source Code Metrics

Activity                  Case A                Case B                Difference
                          Assembler Version     FORTRAN Version
                          (10,000 Lines)        (3,000 Lines)
Requirements              2 Months              2 Months                0
Design                    3 Months              3 Months                0
Coding                    10 Months             3 Months               -7
Integration/Test          5 Months              3 Months               -2
User Documentation        2 Months              2 Months                0
Management/Support        3 Months              2 Months               -1
Total                     25 Months             15 Months             -10
Total Costs               $125,000              $75,000               ($50,000)
Cost Per Source Line      $12.50                $25.00               +$12.50
Lines Per Person Month    400                   200                  -200
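The arithmetic behind the paradox can be checked directly. The figures come from Table 1; the rate of $5,000 per person-month is an assumption implied by the table's totals ($125,000 for 25 months):

```python
# Figures from Table 1; $5,000 per person-month is implied by the totals.
RATE = 5_000  # dollars per person-month (assumed)

def metrics(lines, months):
    """Return total cost, cost per line, and lines per person-month."""
    cost = months * RATE
    return cost, cost / lines, lines / months

cost_a, per_line_a, prod_a = metrics(10_000, 25)  # Case A: Assembler
cost_b, per_line_b, prod_b = metrics(3_000, 15)   # Case B: FORTRAN

# Case B is $50,000 cheaper in real terms, yet it looks twice as
# expensive per line and half as productive in lines per person-month.
print(cost_a, per_line_a, prod_a)  # 125000 12.5 400.0
print(cost_b, per_line_b, prod_b)  # 75000 25.0 200.0
```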

In the late 1970s, A.J. Albrecht of IBM took the position that the economic output unit of software projects should be valid for all languages, and should represent topics of concern to the users of the software. In short, he wished to measure the functionality of software.

Albrecht considered that the visible external aspects of software that could be enumerated accurately consisted of five items: the inputs to the application, the outputs from it, inquiries by users, the data files that would be updated by the application, and the interfaces to other applications.

After trial and error, empirical weighting factors were developed for the five items, as was a complexity adjustment. The number of inputs was weighted by 4, outputs by 5, inquiries by 4, data file updates by 10, and interfaces by 7. These weights represent the approximate difficulty of implementing each of the five factors.

In October of 1979, Albrecht first presented the results of this new software measurement technique, termed "Function Points" at a joint SHARE/GUIDE/IBM conference in Monterey, California. This marked the first time in the history of the computing era that economic software productivity could actually be measured.

Table 2 provides an example of Albrecht's Function Point technique used to measure either Case A or Case B. Since the same functionality is provided, the Function Point count is also identical.

Table 2. Sample Function Point Calculations

Raw Data          Weights       Function Points
1 Input           x 4      =        4
1 Output          x 5      =        5
1 Inquiry         x 4      =        4
1 Data File       x 10     =       10
1 Interface       x 7      =        7
                                 ----
Unadjusted Total                   30
Complexity Adjustment            None
Adjusted Function Points           30
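The weighted count in Table 2 can be sketched in code. The weights are Albrecht's original averages as given in the text; this is a minimal illustration of the weighted sum, not the full complexity-adjusted counting procedure:

```python
# Albrecht's original average weights for the five function types.
WEIGHTS = {"input": 4, "output": 5, "inquiry": 4, "data_file": 10, "interface": 7}

def unadjusted_function_points(counts):
    """Weighted sum over the five externally visible function types."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# The sample application of Table 2: one of each function type.
counts = {"input": 1, "output": 1, "inquiry": 1, "data_file": 1, "interface": 1}
print(unadjusted_function_points(counts))  # 4 + 5 + 4 + 10 + 7 = 30
```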

Table 3. The Economic Validity of Function Point Metrics

Activity                  Case A                Case B                Difference
                          Assembler Version     FORTRAN Version
                          (30 F.P.)             (30 F.P.)
Requirements              2 Months              2 Months                0
Design                    3 Months              3 Months                0
Coding                    10 Months             3 Months               -7
Integration/Test          5 Months              3 Months               -2
User Documentation        2 Months              2 Months                0
Management/Support        3 Months              2 Months               -1
Total                     25 Months             15 Months             -10
Total Costs               $125,000              $75,000               ($50,000)
Cost Per F.P.             $4,166.67             $2,500.00             ($1,666.67)
F.P. Per Person Month     1.2                   2.0                   +0.8

The Function Point metrics are far superior to the source line metrics for expressing normalized productivity data. As real costs decline, cost per Function Point also declines. As real productivity goes up, Function Points per person month also goes up.

In 1986, the non-profit International Function Point Users Group (IFPUG) was formed to collect and share data and information about the metric. In 1987, the British government adopted a modified form of Function Points as its standard software productivity metric. In 1990, IFPUG published Release 3.0 of the Function Point Counting Practices Manual, which represented a consensus view of the rules for Function Point counting. Readers should refer to this manual for current counting guidelines.

Function Points give software engineering researchers a way of sizing software through the analysis of the implemented functionality of a system from the user's point of view. They provide a way to predict the number of source code statements that must be written for a program or system. Languages have varying, but characteristic, levels. The level is the average number of statements required to implement one Function Point.

This form of sizing is new and in rapid evolution. For some languages (such as PL/I), the data is very closely grouped. Sizing by extrapolation from Function Points is quite accurate. For other languages (such as COBOL), the range of variation exceeds plus or minus 50 percent. With COBOL, the Function Point sizing method is less accurate.

The level of languages is of considerable interest for the following reasons:
•  The ability to size a project, or predict the number of source code statements that will be required, as early as the requirements or design phase;
•  The ability to retrofit Function Points to existing software without laborious hand counting of Function Points;
•  The ability to convert the size of an application in any language (in lines of source code) to the equivalent size if the application were written in some other language; and
•  The ability to measure the productivity of projects that are written in multiple languages.
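The uses listed above can be sketched in code. The level values below are illustrative assumptions for the sake of the example, not figures from a published languages table:

```python
# Hypothetical language levels: average source statements per Function Point.
# These values are illustrative assumptions, not published figures.
LEVELS = {"assembler": 320, "fortran": 105, "cobol": 105, "pl/i": 80}

def size_in_statements(function_points, language):
    """Predict the source statements needed to implement the Function Points."""
    return function_points * LEVELS[language]

def convert_size(lines, from_lang, to_lang):
    """Convert a line count in one language to its equivalent in another."""
    fp = lines / LEVELS[from_lang]   # retrofit: lines back to Function Points
    return fp * LEVELS[to_lang]      # resize in the target language

print(size_in_statements(30, "assembler"))          # 9600 statements
print(convert_size(9_600, "assembler", "fortran"))  # 3150.0
```

The same retrofit step supports the other two uses: dividing an existing application's line count by its language level yields its Function Point size without hand counting, and summing the Function Points of each language component gives a single size for a multi-language project.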