Published in TDAN.com July 2000
- Secret #1 – There are no secrets. If you can accept this reality, you are well on your way to understanding the true “secrets.”
- Secret #2 – Maintenance is an inherently bottom-up process, primarily because maintenance is a game of details, and managers are not hired to “do” details.
- Secret #3 – Good maintenance, like good development, is just the consistent application of sound life-cycle practices (requirements analysis, configuration control, independent verification and validation, etc.) that have been well known and understood, if spottily applied, for decades.
- Secret #4 – Development and maintenance are merely two sides of the same coin. To believe that one is better or worse than the other entirely misses the point. They are a continuum very much like the mathematical oddities of a Möbius strip or a Klein bottle.
- Secret #5 – The administrative infrastructure that supports the development/maintenance effort must become self-sustaining. Otherwise, the effort will be seen as busywork (read “costly, paper-pushing bureaucratic waste of time”); and it will die as soon as the original sponsors shift their focus or, worse, mutate to take on an expensive life of its own that is unrelated to productive work.
- Secret #6 – The job of management is to put into place the practices which enable the work of fixing, extending, and enhancing systems that support the business.
- Secret #7 – Organizations that are more intent on affixing blame for software problems than on ensuring that the conditions which caused the problem are removed/resolved will always regard maintenance as a headache to be endured until the mythical silver bullet is found.
- Secret #8 – There are no spectacular winning-touchdown-in-the-last-3-seconds events in this game and certainly no silver bullets. Success is simply the steady, incremental application of the basics one small tactical step at a time.
In the December 1993 issue of American Programmer, Ed Yourdon reported his impressions from a short visit to American Subsidiary, Inc., (ASI) in southern California. Yourdon’s article touched on ASI’s remarkable long-range focus, their corporate culture, and the highly successful application of data dictionary technology. This article will address how ASI uses their central data dictionary to support maintenance efforts and show how sound maintenance can give an organization increased systems flexibility to respond to ever-changing market conditions. Bear in mind, there are no secrets.
ASI is a wholly owned subsidiary of Global Conglomerate, Ltd., (GCL) of Japan. At $8.8 billion in 1992 sales, GCL is listed as #166 among the Global Fortune 500. ASI’s 1992 sales of $507 million place it just below the U.S. Fortune 500, which begins at $585 million. ASI employs 400 people, with 50 in MIS, who are split evenly between support and maintenance/development.
Although primarily known for motorcycles, ASI has picked up three other product distribution lines (water jet skis, all-terrain vehicles, and light utility vehicles) as a result of the 67% decline in motorcycle sales experienced by the U.S. market over the past 20 years.
BUSINESS SYSTEMS PLAN
In 1977 Rose Twohey (now executive vice president) joined ASI and quickly hired Tom Antel (now MIS director). Twohey describes the development/maintenance process at ASI at the time as “out of control.” Her charter was to (1) restore service levels and (2) implement an effective planning process.
In short order ASI brought in John Zachman (of current Zachman Information Architecture fame) from IBM to do a Business Systems Plan (BSP). Twohey and Antel had attended an IBM presentation that spoke of “managing data as a fundamental corporate resource.” For reasons that are now lost in the mists of time, this vision of developing and maintaining “data as valuable corporate asset” has stayed firmly in their sights ever since. The journey, however, has not been without some potholes along the way.
The year-long BSP drill entailed taking a high-level view of the organization’s process and data requirements with objectives and constraints superimposed. When done well, the matrices resulting from a BSP expose top management to the complexity and interrelatedness of their systems.
While working on the BSP, ASI also purchased a data dictionary with two specific objectives. First, the dictionary would support the database administration function for their then new IMS database applications. Second, it would be used to automate the process of defining its data element inventory. This second effort was driven by the requirement to communicate concisely and on an ongoing basis the informational content and meaning of the data being sent to their Japanese parent.
Fired by the revelations of the BSP experience, but not understanding that the BSP was not an end in itself, a concerted attempt was made to implement the BSP matrices in the new dictionary. While technically correct and robust, this two-year effort proved to have no lasting value because it was unwieldy and, from the perspective of Twohey’s MIS department, did not add value to the primary business of satisfying user requests on a day-to-day basis. Antel, as the senior manager most directly involved in both the BSP effort and the ultimate solution, describes the top-down versus bottom-up conundrum as follows: “We, as managers, are inherently top-down thinkers. We conceptualize beautiful solutions from 100,000 feet. However, what sticks in the long run is the implementation of nitty-gritty details that directly support the daily routines of the troops in the trenches.”
WHAT WORKED: THE DICTIONARY AS CHANGE-LOG
At ASI, use of the data dictionary required something extra beyond picking the right tool. Many other companies have purchased a dictionary (or repository, to use the 1990s au courant term), expended lavish resources on it, and produced no lasting benefit for the enterprise. What did ASI do right? Automating their existing manual change-log process is what worked for ASI. It is an inescapable fact that all software efforts have a development life-cycle, whether or not the organization chooses to recognize it.
ASI chose to weave their change-log process into the dictionary to such an extent that today programs cannot be put into production without going through a series of clearly defined, and always obeyed, steps that are directly controlled from the dictionary and automated change control procedures.
Although ASI’s change-log does not sound as grand as a “life-cycle methodology,” in fact what they have done is implement an automated life-cycle methodology in their dictionary. At ASI libraries have three formal states: test/development, acceptance, and production.
Promotion between these libraries is entirely controlled by the automated change-log/dictionary process. When a project or fix is initiated, it is given an ID number in the dictionary. All subsequent work is tracked to this ID number. The physical components (copybooks, programs, job-steps, datasets, etc.) impacted by the project are tracked to the project ID number. Prior to initiating a project or fix, the extent of the project’s impact is researched in the dictionary. With a dictionary that now contains more than a decade of “dynamic artifacts” about virtually all previous development work, ASI has found that researching prior, similar projects is a reliable method to avoid reinventing the wheel.
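The tracking scheme described above can be sketched in miniature. This is a hypothetical illustration in Python, not ASI's actual tooling; the class and method names (`ChangeLog`, `open_project`, `promote`, `impact`) are invented for the example, and the three library states come straight from the article.

```python
from dataclasses import dataclass, field

# The three formal library states, promoted in order.
LIBRARY_STATES = ["test/development", "acceptance", "production"]

@dataclass
class Project:
    """A project or fix, identified by its dictionary ID number."""
    project_id: str
    # Physical components impacted: copybooks, programs, job-steps, datasets.
    components: set = field(default_factory=set)
    state: str = LIBRARY_STATES[0]

class ChangeLog:
    """Sketch of a dictionary-driven change-log (illustrative only)."""

    def __init__(self):
        self.projects = {}

    def open_project(self, project_id):
        self.projects[project_id] = Project(project_id)

    def track(self, project_id, component):
        # All subsequent work is tracked to the project ID number.
        self.projects[project_id].components.add(component)

    def promote(self, project_id):
        # Promotion between libraries moves one state forward, never skips.
        p = self.projects[project_id]
        i = LIBRARY_STATES.index(p.state)
        if i == len(LIBRARY_STATES) - 1:
            raise ValueError("already in production")
        p.state = LIBRARY_STATES[i + 1]

    def impact(self, component):
        # The "dynamic artifacts" query: which prior projects touched this?
        return [pid for pid, p in self.projects.items()
                if component in p.components]
```

With a structure like this, researching a prior, similar project before starting a new one reduces to an `impact` query over a decade of accumulated project records.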
There are three primary control steps in ASI’s automated change-log process: First, all data elements are formally defined in the dictionary. Second, all copybooks are defined in the dictionary and can contain only defined data elements. Third, all programs and their mid-level components (e.g., copybooks, subroutine calls, file or database assignments) are registered in the dictionary.
Without going into too much detail, a critical process control step happens when a project is OK’d by the authorities and a program is promoted from test to acceptance. When the program is recompiled for the acceptance library, copybook definitions must now come directly from the dictionary, not the programmer’s private test library. If a programmer has used undefined copybooks or data elements, the program simply will not compile. A programmer has to experience only once the embarrassment of a supposedly working program not compiling to be convinced that the rules must be obeyed.
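The promote-time check can be sketched as a simple validation pass. This is a hedged illustration, assuming the dictionary is modeled as plain sets of registered copybook and data element names; ASI's real mechanism was the compiler itself refusing undefined copybooks, which the function below only approximates.

```python
def check_promotion(program_copybooks, dictionary_copybooks, dictionary_elements):
    """Return a list of violations; an empty list means the program
    may be recompiled for the acceptance library.

    program_copybooks: mapping of copybook name -> data elements it uses
    dictionary_copybooks: set of copybook names defined in the dictionary
    dictionary_elements: set of formally defined data element names
    """
    violations = []
    for copybook, elements in program_copybooks.items():
        # Copybook definitions must come from the dictionary,
        # not the programmer's private test library.
        if copybook not in dictionary_copybooks:
            violations.append(f"undefined copybook: {copybook}")
            continue
        for element in elements:
            # Copybooks may contain only formally defined data elements.
            if element not in dictionary_elements:
                violations.append(
                    f"undefined data element: {element} in {copybook}")
    return violations
```

A program referencing an unregistered copybook would surface a violation here and fail promotion, mirroring the compile failure described above.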
This promotion process is easy to describe and hard to implement. Only management can oversee putting such procedures into place. Only management can ensure that these procedures are always followed. Project managers, programmers, and end users all eventually want to bypass these control steps “just this once for an emergency fix.” If management capitulates to the inevitable political pressure (“I’ve got Mr. Big’s authorization signature to bypass standard procedures!”) and ignores these checkpoints, the dictionary inexorably becomes inaccurate and not worth using as a reliable corporate memory and impact analysis tool.
For the promotion process to happen seamlessly, a great deal of attention was given to automating the dictionary update process. In fact, it took ASI a long time to get the dictionary into self-sustaining mode. The primary technical architect and implementer, John Shipley, states that “…anytime within the first four years the whole effort could have fizzled and totally disappeared.” His key to success was being able to weave into the change-log process a series of scanning programs that automatically kept the dictionary 100% in sync with the production systems. Shipley realistically assumed that (1) programmers would not do additional dictionary documentation work requested of them and (2) what work they did do would be of questionable quality. Therefore, the approach taken was to require as little additional manual intervention as possible and to automate as many of the documentation steps as technically feasible.
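A scanner of the kind Shipley built can be approximated in a few lines. The sketch below assumes COBOL source with standard COPY statements and models the dictionary as a plain mapping; the regular expression, program name, and function name are illustrative only, not ASI's actual scanners.

```python
import re

# Match a COBOL COPY statement: e.g. "COPY CUSTREC."
COPY_STMT = re.compile(r"^\s*COPY\s+([A-Z0-9-]+)\s*\.",
                       re.IGNORECASE | re.MULTILINE)

def scan_program(program_name, source, dictionary):
    """Record in the dictionary the copybooks a program actually uses,
    requiring no manual documentation effort from the programmer."""
    copybooks = {name.upper() for name in COPY_STMT.findall(source)}
    dictionary[program_name] = copybooks
    return copybooks
```

Run behind the scenes against every production compile, a pass like this keeps the recorded program-to-copybook relationships accurate regardless of how diligently programmers document their work.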
Shipley, as data administrator, and Richard Herder, as database administrator, were uncompromising in their objective of assuring maximum accuracy via automatic scanning. Antel, as MIS director, provided unfailing backup because he understood what was at stake. They all recognized that unless the scanners worked successfully behind the scenes, the dictionary, with its ability to do ad hoc impact analysis queries, would be seen as just more make-work documentation to be trivialized and eventually ignored by their programmers.
When the dictionary implementation began in the late 1970s, ASI already had an existing legacy portfolio and a work force in which some programmers used copybooks and some did not. Rather than futilely attempting to decree “Thou shalt henceforth use copybooks!” to veteran programmers, Shipley’s approach was to control and document programs where existing copybooks were already used. Over time, as resources permitted, in-line data structures were converted to copybooks on a project-by-project basis. There was no massive, frontal assault to do the job all at once. Eventually, both project managers and programmers came to see and appreciate that when copybooks and data elements were more completely documented in the dictionary, their daily routine was easier because accurate and complete analysis information was now available.
The impact of these efforts is best seen by comparing industry norms with the ASI result. Dr. Howard Rubin, in his “Black Hole” metrics study, discovered that management can articulate the scope and extent of their software portfolios at fewer than 20% of the 2,000 sites he studied. In sharp contrast, Herder can state authoritatively that ASI’s systems portfolio currently consists of approximately 20 applications; 5,300 programs; 3,200 records; 16,000 datasets; and 5,600 data elements. Additionally, ASI has very precise definitions of what their data elements mean.
Consequently, ASI is able to connect data according to its central business meanings, despite the fact that the data elements have many different technical names and representations across the functional applications. The benefit of this capability is that ASI knows in detail the components of its systems and how these components are interrelated.
The players at ASI emphatically insist that they have not measured the financial value that the central change-log/dictionary infrastructure produces, nor do they intend to. Although there are no documented direct benefits, the indirect results are clearly visible. Perhaps the best indicator of the value of ASI’s efforts unfolded as this article was being written.
In early December 1993, the parent GCL decided that as of January 1994 ASI would be responsible for distributing a line of hydraulic engines in the U.S. Without the “dynamic artifacts” record in the dictionary, ASI would have been faced with three increasingly less attractive and more expensive choices: (1) take a wild guess and inform GCL the systems expansion effort would take at least six months, praying their guesstimate was reasonably accurate, (2) support the new product manually, or (3) outsource the distribution effort.
Because the dictionary contains accurate historical information of how the systems are put together, ASI was able to look at a similar product-line expansion project from three years earlier. Being able to examine a detailed log of the impact and extent of the prior project enabled ASI to state with confidence that they would be able to extend the existing core systems to support the new product line in less than one month. This ability to respond in a timely manner to a significant addition to their market is considered benefit enough by ASI management. Although this incident is large and visible, ASI recognizes that their maintenance efforts provide small benefits on a day-in-and-day-out basis.
An additional indirect benefit of ASI’s careful maintenance of its core systems is that the systems are integrated, not cloned. In other organizations a classic mode of coping with unexpected demands and short deadlines is to clone (copy en masse) a core system that is “pretty close” to the new need and then apply radical modifications. After doing this a few times, organizations find themselves stuck with a series of systems or pieces of systems that appear to have a common ancestry but now work in subtly different ways. Although this classic quick-and-dirty approach does get a “new” system in place rapidly, the longer-term maintenance costs and increasing lack of flexibility become significant burdens.
A further problem with the clone approach, which ASI has consciously avoided, is that over time technical personnel become increasingly dedicated to a narrow band of functional knowledge. They know how their system works but cannot be moved easily to other seemingly similar applications. ASI’s tightly knit team approach demands flexibility and high productivity. Individuals who are content to write 4GL report programs until they formally retire do not fit into ASI’s work ethic.
ASI’s success with their data dictionary chronicles how one organization put into place the structure to facilitate, and indeed require, good maintenance practices. Their achievement has not come overnight; they have been working on this solution for 23 years. They have not had the luxury of lavish resources. Although it is certain that the dedication and long-range focus of the individual participants contributed in no small way to their success, the principles they followed, the basics of sound life-cycle management, are universally applicable to all organizations that must build and maintain software systems, large and small.
Many thanks to the now-disguised individuals, and to John Zachman, for their valuable time in helping me with this article.