Software Crisis Intervention

By Marlene A. Jordan

If there is indeed a chronic crisis in software production, as W. Wayt Gibbs suggests in "Software's Chronic Crisis," then what is needed is crisis intervention: an analysis of what we are doing, how we are doing it, and how both can be improved.
Gibbs' article, which appeared in Scientific American in September 1994, opens with a description of the catastrophic consequences of poor software production processes at Denver International Airport. Because of software problems in the automated baggage-handling system, the airport's planners lost enormous amounts of time and money. The Denver scenario is indicative of the need for programming discipline.
Systems have become so large that no one person can comprehend them in their entirety, and errors in real-time systems are hard to find because they surface only under particular combinations of the system's many possible states. Distributed systems pose a further challenge. Gibbs cites several examples of huge financial losses resulting from attempted systems integration. So what type of intervention is needed?
According to Gibbs, "Disaster will become an increasingly common and disruptive part of software development unless programming takes on more of the characteristics of an engineering discipline rooted firmly in science and mathematics." In 1968, the NATO Science Committee gave a name to a possible solution to the crisis: software engineering, "the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software." Barry Boehm, chair of this year's International Conference on Software Engineering, describes software engineering as the practical application of scientific knowledge to the design and construction of computer programs and the associated documentation.
In 1991, the Software Engineering Institute (SEI), founded by the Department of Defense, put forth the capability maturity model (CMM) initiative, described as "a related group of strategies for improving the software process, irrespective of the actual life-cycle model used." "The CMM for software, the SW-CMM, was first put forward in 1986 by Watts Humphrey…The strategy of the SW-CMM is to improve the management of the software process in the belief that improvements in technology will be a natural consequence."

With the CMM, changes are introduced incrementally, and five levels of maturity are defined:
1. Initial level -- Ad hoc process
2. Repeatable level -- Basic project management
3. Defined level -- Process definition
4. Managed level -- Process measurement
5. Optimizing level -- Process control

Gibbs states in his article that "The CMM has at last persuaded many programmers to concentrate on measuring the process by which they produce software, a prerequisite for any industrial engineering discipline." Companies can now see that "quantitative quality control can pay off in the long run." However, according to Bill Curtis, a consultant to the SEI, of the 261 organizations rated to date, about 75% are at level 1, 24% are at levels 2 and 3, and only two groups have reached level 5: Motorola's Indian programming team in Bangalore and Loral's on-board space shuttle project.
The most serious programming errors are usually those made in the initial design. Rapid prototyping can be used to determine user requirements and clear up any misunderstandings between customer and developer early in the process, before mistakes become costly to fix. Gibbs states that another way to combat the problem of initial design errors is to "recognize that the problem a system is supposed to solve always changes as the system is being built." The customer may change his mind. Gibbs also speaks of "rethinking software as something to be grown rather than built."
Edwin B. Dean of NASA (now retired) states that "an appropriate software engineering process is necessary for the delivered software to be competitive. It must be customer focused." According to Dean, the ability to produce competitive software is related to reaching level 5 of the CMM.
"My research indicates that probably the fastest and most direct known approach to attain both goals simultaneously is through the appropriate application of quality function deployment (QFD) and the seven new tools. QFD is a system engineering process which transforms the desires of the customer/user into the language required, at all project levels, to implement a product. It also provides the glue necessary, at all project levels, to tie it all together and to manage it. Finally, it is an excellent method for assuring that the customer obtains high value from your product."

The concept of QFD was introduced in Japan by Yoji Akao in 1966. By 1972, the power of the approach had been demonstrated; it has been used by Toyota since 1977. The first book on QFD was published in Japanese in 1978 and translated into English in 1994. "Japan has consciously designed new technology for high value which means high quality and low cost. They have learned to listen to their customer and to provide a product which has high value to the customer…"
Besides satisfying the customer's requirements, software designs must be able to hold up in the real world. Engineers use mathematical analysis to predict how well their designs will work. Computer programs, however, are governed by a different set of mathematical rules: discrete mathematics. In his article, Gibbs discusses the use of "formal methods" to analyze specifications and programs. As defined by NASA Langley Research Center's Formal Methods Program, formal methods refers to "the variety of mathematical modeling techniques that are applicable to computer system (software and hardware) design…the applied mathematics of computer system engineering…The mathematics of formal methods includes:
 Predicate calculus (1st order logic)
 Recursive function theory
 Lambda calculus
 Programming language semantics
 Discrete mathematics -- number theory, abstract algebra, etc. "
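To give the flavor of this predicate-calculus style (a hypothetical illustration, not an example drawn from Gibbs' article), a linear-search routine that returns an index i for a key k in an array a of length n might be formally specified as:

```latex
\text{Precondition: } n \ge 0
\qquad
\text{Postcondition: } \bigl(0 \le i < n \,\wedge\, a[i] = k\bigr)
\;\vee\;
\Bigl(i = -1 \,\wedge\, \forall j\,\bigl(0 \le j < n \rightarrow a[j] \ne k\bigr)\Bigr)
```

A verification tool can then check mechanically that an implementation establishes the postcondition whenever the precondition holds, rather than relying on testing alone.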


David A. Fisher of the National Institute of Standards and Technology (NIST) is "skeptical that Americans are sufficiently disciplined to apply formal methods in any broad fashion."
Gibbs also refers to experimentation with the "clean-room approach" to programming. Using this process, programmers try to assign a probability to every execution path, correct or incorrect, that users can take. They then derive test cases from those statistical data, so that the most common paths are tested most thoroughly.
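The statistical side of the clean-room idea can be sketched in a few lines. In this minimal illustration (the usage profile, operation names, and probabilities below are all invented for the example, not taken from Gibbs' article), test sequences are drawn at random according to estimated usage probabilities, so the most common paths dominate the generated test suite:

```python
import random

# Hypothetical usage profile: each user-visible operation and the
# estimated probability that a user invokes it next. Note that an
# incorrect path (a bad barcode) is profiled alongside correct ones.
usage_profile = {
    "check_in_bag": 0.55,
    "reroute_bag": 0.25,
    "report_lost_bag": 0.15,
    "scan_invalid_barcode": 0.05,
}

def draw_test_sequence(profile, length, rng):
    """Draw one test case: a sequence of operations sampled according
    to the usage profile, so frequent paths appear most often."""
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return [rng.choices(ops, weights=weights)[0] for _ in range(length)]

# Generate a suite of 1000 five-step test cases with a fixed seed.
rng = random.Random(42)
suite = [draw_test_sequence(usage_profile, 5, rng) for _ in range(1000)]

# Count how often each operation appears: the most probable
# operations receive the most test coverage.
counts = {op: 0 for op in usage_profile}
for case in suite:
    for op in case:
        counts[op] += 1
```

The design point is that testing effort is allocated in proportion to expected use, so reliability estimates derived from the test results reflect what users will actually experience.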
Problems with productivity in software development are also addressed. "There has been improvement, but everyone uses different definitions of productivity." (Gibbs) One suggestion for increasing productivity is to take advantage of the fact that software parts can be reused if they are properly standardized. Libraries of subroutines already exist, but they cannot easily be moved to a different programming language, platform, or operating environment. Current research investigates several approaches: a common language to describe the components, programs that reshape components to match any environment, and components with optional features that can be turned on or off.
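The last of those approaches, a reusable component with optional features that can be switched on or off, can be illustrated with a toy sketch (the component and its feature names are invented for this example, not drawn from the research Gibbs describes). Instead of maintaining many near-duplicate copies of a routine, one component exposes toggles:

```python
class SortComponent:
    """Toy reusable component: optional behaviors are toggled at
    construction time instead of being baked into separate variants."""

    def __init__(self, descending=False, unique=False):
        self.descending = descending  # optional feature: reverse order
        self.unique = unique          # optional feature: drop duplicates

    def run(self, items):
        result = sorted(items, reverse=self.descending)
        if self.unique:
            seen, deduped = set(), []
            for x in result:
                if x not in seen:
                    seen.add(x)
                    deduped.append(x)
            result = deduped
        return result

# The same component serves different environments via its toggles.
basic = SortComponent().run([3, 1, 3, 2])                               # [1, 2, 3, 3]
custom = SortComponent(descending=True, unique=True).run([3, 1, 3, 2])  # [3, 2, 1]
```

One component with configurable features replaces a family of single-purpose variants, which is exactly the kind of standardization the reuse argument calls for.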
One final point of concern is this: who is qualified to build important systems? Barry W. Boehm, director of the Center for Software Engineering at the University of Southern California, believes that software engineers should be certified and, of course, properly trained to begin with. A crisis does exist, and the intervention of trained and capable software engineers is needed.