Fundamentals of the Capability Maturity Model

Published in TDAN.com July 2004

The information systems community has developed a long-standing reputation for poor quality in its product development and delivery. As evidenced by the need for multiple “patches”,
incremental “releases” and supplemental “upgrades” to both packaged software and custom-designed applications, many organizations need to improve the quality of the software
they produce.

Several programs have attempted to address the definition and improvement of quality in manufacturing processes (ISO 9000, Continuous Quality Improvement and Total Quality
Management, among others). These programs were designed primarily for measuring quality in generic manufacturing processes, and they may be less useful in the realm of software engineering, with its
amorphous development phases, its need for input from user communities and its reliance on increasingly advanced information technology.

In 1984, the Software Engineering Institute (SEI) was founded at Carnegie Mellon University, as a federally funded, non-profit organization, responding to the need for a software-oriented process
improvement technique. “The purpose of the SEI was to establish protocols and methodologies in the field of software development that would assist the United States to maintain a competitive
edge in technological endeavors, especially in the improvement of the practice of software engineering.”[1]

One of the early results of the SEI’s efforts was the development of a “Capability Maturity Model” (CMM), based on the work of Watts Humphrey at IBM. This model strives to assist
organizations in improving the quality of their software development through implementation of processes that are “mature”, meaning that these processes have a high predictability of
results and low risk of encountering unknown variables or situations. The model takes an organization through five levels, from Level 1 (Initial) through Level 5 (Optimizing), and measures the
operation’s effectiveness at each level based on assessment of certain areas.

Many organizations raced to embrace the CMM, recognizing the high costs of poor quality software development and implementation. However, their enthusiasm in adopting the CMM was not accompanied by
an understanding of the foundations of quality improvement and the CMM’s role in assisting organizations to recognize and implement consistent quality improvement methods. Before implementing
the CMM, software development operations must fully understand the history of quality and process improvement methodologies, and the place the CMM holds in this environment.

Customer satisfaction has become the motto of many organizations attempting to survive and thrive in a competitive world. However, there is a growing perception that software quality is lacking
in many organizations, as demonstrated by the number of professional and academic articles and research efforts on this topic (Arthur, 1993; Paulk et al., 1995; Ahern et al., 2001). This perception has
persisted over many years, and its impact has become more critical as software pervades most aspects of our daily lives.

To understand the issues of quality in software development and implementation, to determine the optimal ways to assess the state of an organization’s software
development practices, and to recommend courses of action for improvement, it is necessary to study the origins of “quality” and “process improvement”. With an
understanding of these foundations, it becomes possible to pursue the study of software process improvement and to assess an organization’s relative effectiveness in developing and
maintaining quality software.

The “quality movement” is generally thought to have started with W. Edwards Deming and Joseph M. Juran, “The Pioneers of Quality”, as portrayed in a recent documentary.
Juran’s work as an engineer on the foundations of industrial quality began with a focus on process improvement in the 1920s and continued through the Depression, World War II and
the growth of industrialization and automation in the 1950s. Joined by W. E. Deming, another engineer “fascinated by the vicissitudes of the manufacturing process”, these men
developed the theory that “quality” practices could lead to sustainable and competitive results, thus uniting the terms “quality” and “process improvement”.
Initially, quality was understood to mean “conformance to a standard or specification”, and the emphasis was on inspection as the practice for quality assessment, as is referenced in
the literature on the history of the quality movement. Quality can also be defined as “a measure of excellence, an intelligible feature by which a thing may be identified”
(Webster’s Collegiate Dictionary, 10th Edition, Merriam-Webster), and it is this definition that forms the basis for subsequent discussion and analysis of quality in software process
measurement.

Juran and Deming posited that organizations whose products were not performing as expected could trace their problems to a lack of focus on “quality” (“measures of
excellence”) – both in the tools, techniques and methods employed on the manufacturing floor and in the attitudes of the management that oversaw those people, tools and techniques. This
notion, revolutionary at the time, was tested in the rebuilding of Japan’s industrial base at the end of World War II, and the results of Juran’s and Deming’s approach are with us
today in the products produced by Japanese companies for sale in the United States, where Japanese products consistently exhibit fewer defects than comparable products made by American
companies, according to the U.S. Department of Commerce annual report on quality production, 1999. Many organizations studied Juran’s three basic principles of quality management and combined
them with Deming’s 14 points for management, developing an approach that combines “excellence in measurement” with “conformance to standards”. In
fact, the IEEE standard incorporates Juran’s and Deming’s work in its definition of quality: “the totality of features and characteristics of a product or service that bears on
its ability to satisfy given needs”.

After the widespread publication of Juran’s and Deming’s work, several other engineers and management consultants adopted a “quality approach”, including Peter F. Drucker
and Philip Crosby. Drucker’s view of quality, found in his numerous works on organizational management, includes the idea that “quality is management-based, management-driven,
internally focused”, and that “processes will improve when management improves its internal practices”. Crosby, building on the works of Juran, Deming and Drucker, adopted the
“customer is essential” focus, believing that “the customer defines quality through the purchase of a product or service”, as he states in “Quality is Free: The Art of
Making Quality Certain”. Some additional leaders in management thought have developed a more unified approach to pinpointing the source of the quality definition: it is both internal AND
external. Such thought leaders include Watts S. Humphrey, whose work in bringing the concepts of quality and quality-oriented practices to the process of software development and software
implementation led to the adoption of the Capability Maturity Model and other instruments for measuring the relative quality and effectiveness of an organization’s software development
methods.

Development of the Capability Maturity Model

As stated above, many organizations have discovered that their survival in heavy competition depends on their use of software for information management, process control and, possibly,
competitive advantage. However, the intricacy of the problems addressed by software, the complexity of the processes automated by software and the need for faster and stronger competitive advantage
result in the development, delivery and implementation of less-than-reliable software. According to numerous studies, including the annual “Assessment of IS Practices” performed by the
Gartner Group of Stamford, CT, almost 70% of software projects are considered unsuccessful as measured by three criteria: reliability, usability and timeliness. This annual survey is conducted at
organizations across the United States, in both the public and private sectors, and the results have been consistent since the study was first performed in 1983. The results demonstrate the overall poor
quality of the software development function, regardless of the size or type of company. The U.S. government has mandated that all organizations that desire to develop software for governmental use
must adhere to certain quality management requirements.

The search for solutions to the challenges noted in these studies has continued for many years, and has resulted in the adoption of versions of the various quality improvement methodologies (ISO
9000, Continuous Quality Improvement, Total Quality Management, etc.). Although these methodologies have proven to be effective in many manufacturing operations, their promises for productivity
gains and quality improvement have not been realized in the software development arena, according to Humphrey, Paulk and Weber of the Software Engineering Institute at Carnegie Mellon University.
From the research performed at SEI, analysts realized that “the fundamental problem in achieving software quality is the inability to manage the software development and implementation
process”.

Founded in 1984 and building on the work of Watts S. Humphrey at both IBM and Carnegie Mellon University, the SEI has a mission that “is to provide leadership in advancing the state of the practice
of software engineering to improve the quality of systems that depend on software”. SEI is charged with assisting and leading organizations in both the private and public sectors to develop
and continuously improve their capability to identify, adapt and use sound management and technical practices in the software process. It is believed that the use of these practices will result in
“disciplined, well-defined, and effectively managed software engineering processes for the purpose of delivering quality software that meets cost and schedule commitments”.

In November 1984, the SEI, led by Humphrey, began to develop a process maturity framework that would help organizations improve their software practices. Drawing on the work of leaders of the
“quality movement” such as Deming, Juran, Drucker and Crosby, this effort created a set of guidelines for developing high-quality, high-performance software processes. Humphrey and
his colleagues determined that the quality of an application is related directly to the quality of the process used to develop it. To improve the application development process, Humphrey wanted to
implement the Deming continuous-improvement cycle (plan-do-check-act), but he wanted to include the behaviors of both the development organization and organization management in this process,
since previous improvement efforts that focused only on the development area had not been successful. Humphrey’s analysis determined that organizations had to remove managerial and process
barriers to improvement in a specific order if the efforts were to succeed. As a result, the “process maturity framework” (precursor to the CMM, released in 1987) laid out an
evolutionary path to help management and developers increase the capability of their application development processes in five clearly defined stages.

The Capability Maturity Model, whose first version was released in 1991, is “based on actual industry practices, reflects the best practices of the industry and reflects the needs of
individuals performing software process improvement and process appraisals (measurement and benchmarking)”. The CMM is intended to be a cohesive, coherent, ordered set of incremental
improvements, all relating to experienced success in the field, and packaged into a framework that demonstrates how effective practices can be built on one another into a logical and repeatable
progression. Far from a “quick fix”, successful use of the CMM requires attention to detail, support and participation from senior management and a rational approach to all aspects of
software development and implementation.

As a model, the CMM is at once rigid, flexible and conceptual. It is rigid in its representation and description; its flexibility lies in its interpretation and implementation; and it is conceptual
and generalized in that it represents the major aspects of software development and implementation at organizations of many industries and sizes. These paradoxical qualities
can make the CMM difficult to understand and implement for novices in quality initiatives or for those not acquainted with the framework/model approach to analyzing a situation.

There are many misconceptions about the CMM, according to several sources. Contrary to popular opinion, the CMM will not quickly improve organizational productivity, process quality or financial
performance; most implementing organizations wait one to three years before seeing measurable improvement in the quality and productivity of their software development or integration functions. The
CMM does not address every issue that makes a software project successful; it is concerned with organizational process capabilities and does not pass judgment on the performance or capabilities of
the practitioners or the hardware used. As a result, organizations have to learn how to read the CMM, study and customize it, and implement it in their specific environments. In essence, this is the
heart of the CMM debate: how to personalize the framework for an individual organization while maintaining the ability to use the CMM to assess the quality of the organization’s software
process and to benchmark that organization against others engaged in the same set of activities.

The Capability Maturity Model is a conceptual framework that represents best practices in software process management, according to Raynus and others. The CMM is based on five stages, called
“maturity levels”, and an organization’s practices can be identified and assessed at any of these levels. Arranging the model into these stages prioritizes the improvement actions
appropriate to each level throughout the organization.

Taken from the Capability Maturity Model for Software, version 1.1 document, the levels are defined as:

  1. Initial: The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends upon individual heroic efforts.
  2. Repeatable: Basic project management processes are established to track cost, schedule and functionality. The necessary process discipline is in place to repeat earlier successes on projects
    with similar applications.
  3. Defined: The software process for both management and engineering (development) activities is documented, standardized and integrated into a common software process set for the organization.
    All projects use an approved, tailored version of the organization’s standard software process for developing and maintaining software.
  4. Managed: Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.
  5. Optimizing: Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies, incorporating them into the standard process
    set when applicable.

These five levels reflect the fact that the CMM is a model for improving an organization’s capability to consistently manage and direct its software development activities. Solutions appropriate
to a troubled project not using the CMM may have limited value outside that project, because other projects may have different problems or may be unable to implement the solutions in the
same manner. As a result, the CMM focuses on processes that are of value to the entire organization in software development and implementation.

The CMM has achieved widespread adoption because it is an incremental and evolutionary approach to improvement, in which each level serves as the foundation for the next. According to
the developers of the CMM, each maturity level is characterized by the implementation and institutionalization of several clusters of practices (called “process areas”) that contribute
to the development of capabilities for that level. Many organizations have chosen the Capability Maturity Model as their process improvement methodology to comply with the government mandate for
instituting a process for software development quality management (Whitten et al).

Since its introduction in 1991, the CMM has spread from the initial implementations in the defense and aerospace industries and into commercial information technology organizations. Commercial
information systems departments are using the CMM to drive customer-requirements alignment, manage external service providers or increase productive output in software development and enhancement
and to measure the effectiveness of practices against internal and external metrics (Curtis, 2001). Development of the CMM has been enhanced by many academic and industry-driven research
activities, allowing the SEI to refine its practices and create newer versions of the CMM that take advantage of best practices discovered by these researchers and practitioners. Some notable
examples of academic research on the initial version of the CMM produced features that were incorporated into later releases. These studies eventually resulted in the creation of a set of related
maturity models for software development, software acquisition, network development and security, and people and project management, all founded on the underlying principles of the original CMM but
all independent of one another. The ensuing conflicts among these models caused organizations to incur additional training costs when moving from one model to another, and
resulted in confusion on the part of practitioners as to which model applied to their specific needs (Ahern et al., 2001).

Starting in 1998, SEI began to develop an “integrated CMM”, to eliminate the conflicts caused by the creation of several similar but independent models. The CMMI effort is designed to
“codify the tenets of model-based process improvement engineering practices in organizations that span several disciplines. By integrating the tools and techniques used to improve individual
engineering disciplines, both the quality and the efficiency of the organizational process improvement are enhanced” (Ahern et al, 2001).

The CMMI, released in 2001, provides users with a choice of areas and practices upon which to focus their process improvement and measurement efforts. The integration allows the organization to
determine when and how to introduce additional areas, disciplines and product suites to minimize the incompatibility or redundant activities that would be required if the organization were using
separate CMM models.

Whether an organization uses the integrated approach or adopts a single CMM because it needs process improvement in only one area, it will benefit from the extensive academic and industry
research that forms the foundation of the CMM (and CMMI), as well as from the knowledge and experience of the organizations that have adopted a CMM methodology. Use of the CMM (or CMMI, as
appropriate) will allow an organization to assess its relative maturity in software development; examine, define, implement and institutionalize a philosophy and methodology for software
development; and evaluate its performance against other organizations within and across levels of the CMM.

[1] Persse, J., Implementing the Capability Maturity Model, John Wiley and Sons, 2000

Anne Marie Smith, Ph.D.

Anne Marie Smith, Ph.D., is an acclaimed data management professional, consultant, author and speaker in the fields of enterprise information management, data stewardship and governance, data warehousing, data modeling, project management, business requirements management, IS strategic planning and metadata management. She holds a doctorate in Management Information Systems, and is a certified data management professional (CDMP), a certified business intelligence professional (CBIP), and holds several insurance certifications.

Anne Marie has served on the board of directors of DAMA International and on the board of the Insurance Data Management Association.  She is a member of the MIS faculty of Northcentral University and has taught at several universities. As a thought leader, Anne Marie writes frequently for data / information management publications on a variety of data-oriented topics.  She can be reached through her website at http://www.alabamayankeesystems.com and through her LinkedIn profile at http://www.linkedin.com/in/annemariesmith.
