Information Systems Plan

Published in TDAN.com September 1999


Rationale for an Information Systems Plan

Every year, corporations with $300-700 million in revenue spend about 5% of their gross income on information systems and their support. That's roughly $15,000,000 to $35,000,000! A significant part of
those funds supports enterprise databases, a philosophy of database system applications that enables corporations to research the past, control the present, and plan for the future.

Even though an information system costs from $1,000,000 to $10,000,000, and even though most chief information officers (CIOs) can specify exactly how much money is being spent on hardware,
software, and staff, they cannot state with any degree of certainty why one system is being built this year versus next, why it is being done ahead of another, or, finally, why it is being
done at all.

Many enterprises do not have model-based information systems development environments that allow system designers to see the benefits of rearranging an information systems development schedule.
Consequently, the questions that cannot be answered include:

  • What effect will there be on the overall schedule if an information system is purchased versus developed?
  • At what point does it pay to hire an unusually large contingent of contract staff to advance a schedule?
  • What is the long-term benefit of a 4GL versus a 3GL?
  • Is it better to generate 3GL code than to generate or use a 4GL?
  • What are the real costs of distributed software development over centralized development?

If these questions were transformed and applied to any other component of a business (e.g., accounting, manufacturing, distribution and marketing), and remained unanswered, that unit’s manager
would surely be fired!

We not only need answers to these questions now, we need them quickly, cost-effectively, and in a form that can be modeled and changed in response to unfolding realities. This paper
provides a brief review of a successful 10-step strategy that answers these questions.

Too many half-billion-dollar organizations have only a vague notion of the names and interactions of their existing and under-development information systems. Whenever they need to know, a meeting is
held among the critical few, an inventory is taken, interactions are confirmed, and accomplishment schedules are updated.

This ad hoc information systems plan was possible only because all design and development was centralized, the only computer was a mainframe, and the past was an acceptable prologue because budgets
were ever increasing, schedules were always slipping, and information was not yet part of the corporation's critical edge.

Well, today is different, really different! Budgets are decreasing, and slipped schedules are cited as preventing the pursuit of business alternatives. Confounding the computing environment are different
operating systems, DBMSs, development tools, telecommunications (LAN, WAN, intranet, Internet, and extranet), and distributed hardware and software.

Rather than having centralized, long-range planning and management activities that address these problems, today's business units are using readily available tools to design and build ad hoc,
stop-gap solutions. These ad hoc systems do not interconnect, support common semantics, or provide synchronized views of critical corporate policy; worse, they will soon form the nearly
incomprehensible tangle of systems and data from which systems order and semantic harmony must somehow spring.

Not only has the computing landscape become profoundly different and more difficult to comprehend, but the need for just the right, and correct, information at just the right time is also escalating.
Late or wrong information is worse than no information.

Information systems managers need a model of their information systems environment, and that model must be malleable. As new requirements are discovered, budgets are modified, and new hardware and
software are introduced, the model must be able to reconstitute the information systems plan in a timely and efficient manner.


Characteristics of a Quality ISP

A quality ISP must exhibit five distinct characteristics before it is useful. These five are presented in the list that follows.

  • Timely: The ISP must be timely. An ISP that is created long after it is needed is useless. In almost all cases, it makes no sense to take longer to plan work than to perform the work planned.
  • Useable: The ISP must be useable, for the suite of projects as a whole as well as for each individual project. The ISP should exist in sections that, once adopted, can be parceled out to project managers and immediately started.
  • Maintainable: The ISP must be maintainable. New business opportunities, new computers, business mergers, etc. all affect the ISP. The ISP must support quick changes to the estimates, the technologies employed, and possibly even the fundamental project sequences. Once these changes are accomplished, the new ISP should be just a few computer program executions away (see the sketch following this list).
  • Quality: While the ISP must be a quality product, no ISP is ever perfect on the first try. As the ISP is executed, the metrics employed to derive the individual project estimates become refined as a consequence of new hardware technologies, code generators, techniques, or faster-working staff. As these changes occur, their effects should be installable into the data that supports ISP computation. In short, the ISP is a living document. It should be updated with every technology event, and certainly no less often than quarterly.
  • Reproducible: The ISP must be reproducible. That is, when its development activities are performed by any other staff, the resulting ISP should be essentially the same. The ISP should not vary significantly with the staff assigned.
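
To make the "few computer program executions" notion concrete, here is a minimal sketch, with invented project names and an invented estimating rule, of how plan estimates might be re-derived when a metric changes. It illustrates the idea only; it is not part of any ISP product or method described here.

    # Illustrative only: "a few computer program executions away" might amount to a
    # small driver that re-derives project estimates from the current metrics.
    # Project names and the estimating rule are invented for this example.
    def recompute_estimates(work_breakdown, metrics):
        """Return estimated hours per project from task counts and an hours-per-task metric."""
        return {project: tasks * metrics["hours_per_task"]
                for project, tasks in work_breakdown.items()}

    work_breakdown = {"order entry rewrite": 120, "billing interfaces": 45}   # tasks per project
    before = recompute_estimates(work_breakdown, {"hours_per_task": 16})
    after = recompute_estimates(work_breakdown, {"hours_per_task": 12})       # e.g., a code generator is adopted
    print(before)   # {'order entry rewrite': 1920, 'billing interfaces': 720}
    print(after)    # {'order entry rewrite': 1440, 'billing interfaces': 540}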

Whenever a proposal for the development of an ISP is created, it must be assessed against these five characteristics. If any characteristic fails or is not addressed in an optimal way, the entire
investment in the development of the ISP is at risk.


ISP Within the Context of the Meta data Environment

The information systems plan is the plan by which the databases and information systems of the enterprise are accomplished in a timely manner. A key facility through which the ISP obtains its "data" is
the meta data repository. The domain of the meta data repository is set forth in Figure 1. As Figure 1 shows, persons, through their roles within an organization, perform functions in the
accomplishment of enterprise missions, and in doing so they have information needs. These information needs reflect the states of certain enterprise resources, such as finance, people, and products,
that are known to the enterprise. These states are created through business information systems and databases.

The majority of the meta data employed to develop the ISP resides in the meta entities supporting the enterprise's resource life cycles (see TDAN issue #7, December 1998, Resource Life Cycle
Analysis), the databases and information systems, and project management. All these meta entities are depicted within the meta data repository meta model in Figure 2.
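
As an illustration of the kind of structure such a repository implies, the sketch below shows one plausible way a few of these meta entities (resources, resource life cycle nodes, information systems, and their databases) might be represented. The entity names and fields are assumptions for the example, not the Figure 2 meta model itself.

    # Illustrative sketch only: a plausible shape for a few of the meta entities
    # discussed above. Names and fields are assumptions, not the Figure 2 meta model.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InformationSystem:
        name: str
        databases: List[str] = field(default_factory=list)             # databases the system maintains

    @dataclass
    class RLCNode:
        name: str                                                       # one stage of a resource's life cycle
        predecessors: List[str] = field(default_factory=list)           # precedence vectors (see step 4 below)
        systems: List[InformationSystem] = field(default_factory=list)  # "as-is"/"to-be" attachments (step 5)

    @dataclass
    class Resource:
        name: str                                                       # e.g., finance, people, products
        life_cycle: List[RLCNode] = field(default_factory=list)

    # Example: a "products" resource with two life cycle nodes.
    products = Resource(
        name="products",
        life_cycle=[
            RLCNode(name="product definition"),
            RLCNode(name="product sales",
                    predecessors=["product definition"],
                    systems=[InformationSystem("order entry", databases=["orders"])]),
        ],
    )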


The ISP Steps

The information systems plan project determines the sequence for implementing specific information systems. The goal of the strategy is to deliver the most valuable business information at the
earliest time possible in the most cost-effective manner.

The end product of the information systems plan project is an information systems plan (ISP). Once the ISP is deployed, the information systems department can implement it with confidence that it is
doing the correct information systems projects at the right time and in the right sequence. The focus of the ISP is not one information system but the entire suite of information systems for the
enterprise. Once the plan is developed, each identified information system is seen in context with all other information systems within the enterprise.

Information Systems Plan Development Steps

1. Create the mission model: The mission model, generally shorter than 30 pages, presents end-result characterizations of the essential raison d'être of the enterprise. Missions are strategic, long range, and apolitical because they are stripped of the "who" and the "how."
2. Develop a high-level data model: The high-level data model is an entity-relationship diagram created to meet the data needs of the mission descriptions. No attributes or keys are created.
3. Create the resource life cycles (RLC) and their nodes: Resources are drawn from both the mission descriptions and the high-level data model. Resources and their life cycles are the names, descriptions, and life cycles of the critical assets of the enterprise, which, when exercised, achieve one or more aspects of the missions. Each enterprise resource "lives" through its resource life cycle.
4. Allocate precedence vectors among RLC nodes: Tied together into an enablement network, the resulting resource life cycle network forms a framework of the enterprise's assets that represents an ordering and a set of inter-resource relationships. The enterprise "lives" through its resource life cycle network.
5. Allocate existing information systems and databases to the RLC nodes: The resource life cycle network presents a "lattice-work" onto which the "as-is" business information systems and databases can be "attached." See, for example, the meta model in Figure 2. The "to-be" databases and information systems are similarly attached. "Difference projects" between the "as-is" and the "to-be" are then formulated. Achievement of all the difference projects is the achievement of the information systems plan.
6. Allocate standard work breakdown structures (WBS) to each RLC node: Detailed planning of the "difference projects" entails allocating the appropriate canned work breakdown structures and metrics. Employing WBS and metrics from a comprehensive methodology supports project management standardization, repeatability, and self-learning.
7. Load resources into each WBS node: Once the resources are determined, they are loaded into the project management meta entities of the meta data repository, that is, metrics, project, work plan, and deliverables. The meta entities are those inferred by Figure 2.
8. Schedule the RLC nodes through a project management package's facilities: The entire suite of projects is then scheduled on an enterprise-wide basis. The PERT chart used by project management is the "PERT" chart represented by the resource life cycle enablement network (see the sketch following this list).
9. Produce and review the ISP: The scheduled result is predictable: too long, too costly, and too ambitious. At that point, the real work starts: paring down the suite of projects to a realistic set within time and budget. Because of the meta data environment (see Figure 1), the integrated project management meta data (see Figure 2), and because all projects are configured against fundamental, business-rationale-based designs, the results of the inevitable trade-offs can be set against business basics. Although the process is painful, the results can be justified and rationalized.
10. Execute and adjust the ISP through time: As the ISP is set into execution, technology changes occur that affect resource loadings. In this case, only steps 6 through 9 need to be repeated. As work progresses, the underlying meta data built or used in steps 1 through 5 will also change. Because a quality ISP is "automated," recasting the ISP should take only a week or less.
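
As a small illustration of steps 4 and 8, the sketch below records the precedence vectors of a handful of hypothetical RLC nodes and derives an enterprise-wide ordering from them. A simple topological sort stands in for the PERT-style scheduling that a project management package would actually perform; the node names are invented for the example.

    # Hypothetical RLC nodes mapped to their predecessors (the precedence vectors
    # of step 4). A topological sort stands in for the PERT-style, enterprise-wide
    # scheduling of step 8.
    from graphlib import TopologicalSorter   # Python 3.9+

    precedence = {
        "customer acquisition": [],
        "order entry":          ["customer acquisition"],
        "inventory management": ["order entry"],
        "billing":              ["order entry"],
        "general ledger":       ["inventory management", "billing"],
    }

    schedule_order = list(TopologicalSorter(precedence).static_order())
    print(schedule_order)
    # e.g. ['customer acquisition', 'order entry', 'inventory management', 'billing', 'general ledger']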

Collectively, the first nine steps take about 5,000 staff hours, or about $500,000. Compared to an IS budget of $15-35 million, that's only about 3.3% to 1.4%.
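
For readers who want to check the arithmetic, the figures above imply a fully loaded rate of about $100 per staff hour; a quick calculation under that assumption looks like this:

    # Back-of-the-envelope check of the figures above. The $100/hour loaded rate is
    # an assumption implied by 5,000 staff hours costing about $500,000.
    isp_cost = 5000 * 100
    for budget in (15_000_000, 35_000_000):
        print(f"${isp_cost:,} is {isp_cost / budget:.1%} of a ${budget:,} IS budget")
    # $500,000 is 3.3% of a $15,000,000 IS budget
    # $500,000 is 1.4% of a $35,000,000 IS budget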

If the pundits are to be believed, that is, that the right information at the right time is the competitive edge, then paying for an information systems plan that is accurate, repeatable, and
reliable is a small price indeed.


Executing and Adjusting the ISP Through Time

IT projects are accomplished within distinct development environments. The two most common are the discrete project environment and the release environment. The discrete project environment is
typified by completely encapsulated projects accomplished through a waterfall methodology.

In release environments, there are a number of different projects underway by different organizations and staff of varying skill levels. Once a large number of projects are underway, the ability of
the enterprise to know about and manage all the different projects degrades rapidly. That is because the project management environment has been transformed from discrete encapsulated projects into
a continuous flow process of product or functionality improvements that are released on a set time schedule. Figure 3 illustrates the continuous flow process environment that supports releases. The
continuous flow process environment is characterized by:

  • Multiple, concurrent, but differently scheduled projects against the same enterprise resource
  • Single projects that affect multiple enterprise resources
  • Projects that develop completely new capabilities, or changes to existing capabilities within enterprise resources

It is precisely because enterprises have transformed themselves from a project to a release environment that information systems plans that can be created, evolved, and maintained on an
enterprise-wide basis are essential.

There are four major sets of activities within the continuous flow process environment. The user/client is represented at the top in the small rectangular box. Each of the ellipses represents an
activity targeted to a specific need. The four basic needs are:

  • Need Identification
  • Need Assessment
  • Design
  • Deployment

The box in the center is the meta data repository. Specification and impact analysis are represented by the two processes on the left; implementation design and accomplishment are represented by the
two processes on the right. Two key characteristics should be immediately apparent. First, unlike the waterfall approach, the activities do not flow one into the other. They are disjoint. In fact,
they may be done by different teams, on different time schedules, and involve different quantities of products under management. In short, these four activities are independent of one another. Their
only interdependence is through the meta data repository.

The second characteristic flows from the first. Because these four activities are independent of one another, the enterprise evolves by means of releases rather than through whole systems. If
it evolved through whole systems, the four activities would be connected in either a waterfall or a spiral approach, and the enterprise would evolve through major upgrades to encapsulated
functionality within specific business resources. In contrast, the release approach causes coordinated sets of changes to multiple business resources to be placed into production. This causes
simultaneous, enterprise-wide capability upgrades across multiple business resources.

This continuous flow process has several distinctive features:

  • All four processes are concurrently executing.
  • Changes to enterprise resources occur in unison, periodically, and in a very controlled manner.
  • The meta data repository always contains all the enterprise resource specifications, current or planned. Simply put, if an enterprise resource semantic is not within the meta data repository, it is not enterprise policy.
  • All changes are planned, scheduled, measured, and subject to auditing, accounting, and traceability.
  • All documentation of all types is generated from the meta data repository.
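
The coordination pattern described above, four activities that never invoke one another and cooperate only through what they read from and write to the meta data repository, can be sketched as follows. The data structures and function names are illustrative only.

    # Minimal sketch of the coordination pattern described above: the four activities
    # never call one another; each only reads from and writes to the shared meta data
    # repository. All names and structures are illustrative.
    repository = {"needs": [], "assessments": [], "designs": [], "releases": []}

    def identify_need(description):                 # Need Identification
        repository["needs"].append({"name": description, "approved": False})

    def assess_needs():                             # Need Assessment
        for need in repository["needs"]:
            need["approved"] = True                 # placeholder approval rule
        repository["assessments"] = [n for n in repository["needs"] if n["approved"]]

    def design():                                   # Design
        repository["designs"] = [{"for": n["name"]} for n in repository["assessments"]]

    def deploy():                                   # Deployment
        repository["releases"].append([d["for"] for d in repository["designs"]])

    identify_need("synchronized customer view")
    assess_needs(); design(); deploy()
    print(repository["releases"])                   # [['synchronized customer view']]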


ISP Summary

In summary, any technique employed to achieve an ISP must be accomplishable with about 3% or less of the IT budget. Additionally, it must be timely, useable, maintainable, able to be iterated into a
quality product, and reproducible. IT organizations, once they have completed their initial set of databases and business information systems, will find themselves transformed from a project
environment to a release environment.

The continuous flow environment then becomes the only viable alternative for moving the enterprise forward. It is precisely because of the release environment that enterprise-wide information
systems plans that can be created, evolved, and maintained are essential.

Michael Gorman

Michael, the President of Whitemarsh Information Systems Corporation, has been involved in database and DBMS for more than 40 years. Michael has been the Secretary of the ANSI Database Languages Committee for more than 30 years. This committee standardizes SQL. A full list of Whitemarsh's clients and products can be found on the website. Whitemarsh has developed a very comprehensive Metadata CASE/Repository tool, Metabase, that supports enterprise architectures, information systems planning, comprehensive data model creation and management, and interfaces with the finest code generator on the market, Clarion ( www.SoftVelocity.com). The Whitemarsh website makes available data management books, courses, workshops, methodologies, software, and metrics. Whitemarsh prices are very reasonable and are designed for the individual, the information technology organization and professional training organizations. Whitemarsh provides free use of its materials for universities/colleges. Please contact Whitemarsh for assistance in data modeling, data architecture, enterprise architecture, metadata management, and for on-site delivery of data management workshops, courses, and seminars. Our phone number is (301) 249-1142. Our email address is: mmgorman@wiscorp.com.
