The Role of XML in Business Re-Engineering

Published in TDAN.com July 2001


1. ABSTRACT

The Internet and corporate Intranets present opportunities to re-engineer business processes for direct access between customers and suppliers. Re-engineering for this environment requires
close integration of business plans, business processes and business information, to ensure that the systems built are closely aligned with strategic directions. A new generation of I-CASE
tools is also emerging that can automatically analyse data models to identify cross-functional processes. These present re-engineering opportunities that benefit from the open architecture
environment of the Internet and Intranets. The emergence of the Extensible Markup Language (XML) as a recommendation by the World Wide Web Consortium (W3C) provides a technology that offers
significant opportunities for business re-engineering.

Note: This is an update of an earlier paper, “Business Re-Engineering and the Internet: Transforming Business for a Connected World”, which discussed conceptually what could be achieved
with the Internet. This paper provides the technology foundation to achieve the full potential of re-engineering using XML and the Internet and Intranets.


2. THE PROBLEMS OF CHANGE

The Internet and its corporate counterpart, the Intranet, are transforming the competitive landscape with a rapidity and in ways never thought possible [Finkelstein, 1996]. Organizations are faced
with unprecedented competition on a global basis. To compete effectively, they must change: those that fail to see the need for change will not survive. This change, in most cases, involves
re-engineering the business. This paper shows how a focus on business re-engineering, and development of information systems that directly support strategic business plans, allows managers to take
control and turn tomorrow’s chaotic environment to their competitive advantage by using the Internet and Intranets.

To succeed in today’s environment an organization must focus first on customers and markets, rather than on products and production. To compete effectively in today’s market
environment, flexibility is key: time to market has to go to virtual zero [Zachman, 1992]. This suggests a strategy of Assemble-to-Order: products custom-built from standard components and
tailored to each customer’s specific needs.

An assemble-to-order strategy applies not only to manufacturing and assembly, but to most service industries as well: such as custom-tailored banking or insurance products, or government services.
It also applies to systems development. The solutions are well known. They involve the integration of business and IT: integration on the business side using strategic business planning and
business re-engineering; and integration for IT using client-server systems and object-oriented development. The Internet and Intranets also assist. The emergence of the Extensible Markup Language
(XML) as a recommendation by the World Wide Web Consortium (W3C) offers a powerful technology to achieve integration of dissimilar systems. It introduces significant re-engineering opportunities.

We will first discuss well-known problems associated with manual and automated systems. We will then see how data models can help us identify re-engineering opportunities for business processes. We
will discuss how open architecture environments established by Internet and Intranet technology can be utilised to re-engineer processes in ways that were difficult to achieve before.


3. MANUAL PROCESSES AND SYSTEMS

Consider a typical business that accepts orders for products or services from the Sales Dept, placed by customers. These orders are processed first by the Order Entry Dept, and then by the Credit
Dept. Each department needs details of the customer, such as name and address, and account balance, and details of the order. These are saved in customer files and order files, held and maintained
by each department as shown in Figure 1.

In satisfying these orders, items used from inventory must eventually be replaced. These are ordered by the Purchasing Dept from suppliers, and are then later paid by the Accounts Payable section
of the Accounts Dept. Details of name and address, and the account balance due to the supplier, are also saved in supplier files and creditor files. These are held redundantly and are maintained
separately by each department as also shown in Figure 1.

 



Figure 1: The same data often exists redundantly in an organization. Each redundant data version must be maintained current and up-to-date. This invariably leads to the evolution of
redundant processes.

To keep these redundant data versions current and consistent, any change to one version must also be made to all other
affected versions. For example, a customer notifies the Sales Dept of a change of address. That address change must be made not only in the Sales Dept customer file, but also in the customer file
maintained by the Order Entry Dept. The same customer details are held and maintained redundantly by the Credit Dept in their client file, and by the Invoicing Section of the Accounts Dept in their
debtor file.

And if the customer also does business with the organization as a supplier, that change of address must be made to the supplier file maintained by Purchasing and to the creditor file maintained by
Accounts Payable in the Accounts Dept. The address change must be made to every redundant data version, so that all information about the customer is correct and is consistent across the
organization.

This is not a computer problem: it is an organization problem! But its resolution over the years has defined the manual processes and the organizational structures adopted by countless
organizations. In our earlier example, processes are defined to capture the details of customers and make changes to those details when needed later, as in the change of customer address. Many of
these processes are also redundant, but are necessary to maintain the data versions all up-to-date. Redundant data thus leads to redundant processes.

And of course, people have been allocated to carry out these redundant processes in each department. This staffing is unproductive and adds nothing to the bottom line: in fact, it reduces
profitability. It introduces unnecessary delays in serving the customer, so reducing customer service – leading often to competitive disadvantage.


4. AUTOMATED PROCESSES AND SYSTEMS

In the 1980s office automation was introduced to solve the problem. But this focused on the symptom: the time that the Address Change Form took to reach each affected department. It did not address
the true problem, the redundant data – it only sped up the paper! To resolve the problem, common data (name and address in our example) should be stored once only, so all areas of the business
that are authorized to access it can share the same data version. This is illustrated in Figure 2.

 



Figure 2: Common data can be shared as one data version, and made available to all authorized to use it. Specific data is then managed by each area that needs it.

While name and address is common to many areas, there is also data needed only by certain areas. An organization may have more than one role in its business dealings, shown by Organization Role in
Figure 2: one role may be as a customer and another role as a supplier. For example, the Credit Dept must ensure that the value of the current order, when added to the outstanding debtor balance
still to be paid (maintained by the Accounting Dept) does not exceed the customer’s credit limit. And Accounting must be aware of the creditor balances that are due to be paid to
organizations that we deal with as suppliers. While this data is specific to each of these areas, an organization’s name and address is common – regardless of its role as a customer, or
as a supplier – and so is shared by all, as shown in Figure 2.

It is in the identification of common data that the greatest impact can be made on the efficiency and responsiveness of organizations to change. Once applied, changes made to common data are
available immediately to all areas that need the data to be current. And because common data is stored only once, redundant processes that maintained redundant data are no longer needed. Business
processes that evolved over many years to ensure all areas had access to needed data (by each department maintaining its own data version) now share common data and are immediately aware of any
changes.


5. THE FIRST ERA OF THE INFORMATION AGE

The way in which an organization structures itself today, when common data is readily available and can be easily shared, is quite different from the way it had to be organized when that data was
difficult to obtain. New business processes emerge that often cross previous functional boundaries. This leads to Business Re-Engineering (BRE). The strong interest in Business Process Reengineering
(BPR) has addressed only a subset of this broader subject. Organizations that approach BPR without recognizing and first correcting the redundancy problems of
organizational evolution discussed above are inviting trouble. This is discussed further shortly.

We are now seeing the emergence of the “New Corporation”, defined by [Tapscott and Caston, 1992] as “the fundamental restructuring and transformation of enterprises, which in turn cascades
across enterprises to create the extended enterprise and the recasting of relationships between suppliers, customers, affinity groups and competitors.” In their book [Tapscott et al, 1993] they
argue that organizations are now moving from the First Era of the Information Age to the Second Era.

They characterize the First Era by traditional system islands built for established organization structures, using computers for the management and control of physical assets and facilities, for
the management and support of human resources, and for financial management and control systems. Computers have automated the existing processes used by an organization, replicating redundant data
and manual processes with redundant files or data bases, and redundant computer processing. But these automated systems are far more resistant and difficult to change than the manual systems they
replace. Organizations have buried themselves in computer systems that have now set like concrete: the computer systems introduced to improve organizational responsiveness now inhibit the very
business changes needed to survive!

One of the original roles of middle managers was to implement internal procedures and systems that reflected directions and controls set by senior management. Their other role was to extract
specific information, needed by senior management for decision-making, from underlying operational data at all levels of an organization. Organizations that downsized by only eliminating middle
managers are now vulnerable. But those that downsized also by eliminating redundant data as well as redundant processes have enjoyed dramatic cost reductions – while also ensuring that
accurate information is made available for management decision-making.

It is true that with accurate, up-to-date information available electronically and instantly, many layers of management in the First Era are no longer needed. But where this accurate information is
not available, organizations invite disaster by removing middle managers before implementing effective systems to provide that information. Only when the earlier problems of
redundant data and redundant processes are resolved can downsizing truly be effective. Then decision-making can be faster, with access to accurate, up-to-date information available
corporate-wide.


6. THE SECOND ERA OF THE INFORMATION AGE

Tapscott and Caston next described the Second Era of the Information Age as that which supports open, networked enterprises. These new corporations move beyond constraints of organizational
hierarchy. In this Second Era, re-engineered processes and systems in the new corporation not only cross previous functional boundaries. They move beyond the organizational hierarchy –
utilising computer systems that can directly link suppliers and customers. For example, insurance companies link directly with agents who sell their products: insurance policies that are uniquely
tailored to satisfy their customers’ exact insurance needs. With the Internet today, we are now moving into the Second Era.

Similarly airlines link with travel agents and convention organizers. The services they offer are not only booked airline flights, but also accommodation, car rental, and business meetings or
holidays tailored uniquely to each customer’s needs. In addition: banks provide direct online account access for their customers; manufacturers link to terminals in the trucks of their
suppliers for just-in-time delivery; and governments provide information kiosks for public access, so that people can obtain the specific information they need. These are all examples of business
re-engineering, in the spirit so memorably encapsulated in the title of a landmark paper: “Reengineering Work: Don’t Automate, Obliterate” [Hammer, 1990].

The important point to recognize with Business Re-Engineering, and its subset of Business Process Reengineering, is their vital link with the development of computer application systems. But
systems development has seen dramatic change. Integrated Computer-Aided Software Engineering (I-CASE) tools and methodologies are available that automate most manual, error-prone tasks of
traditional systems development. These result in the rapid development of high quality systems that can be changed easily, and fast, resolving many of the problems discussed earlier.

An understanding of computer technology was a prerequisite of the traditional systems development methods of the First Era, and it was hard for business managers to participate effectively. The new
generation of I-CASE tools for the Second Era achieves dramatic success by harnessing the knowledge of business experts. This knowledge is held by business managers and their staff, not computer
analysts. When business experts use business-driven Information Engineering and Enterprise Engineering (IE and EE) in a design partnership with computer experts, redundant data and processes are
readily identified.

Using the knowledge of business experts across different areas that share common data, integrated data bases that eliminate redundant data and redundant processes are defined with business-driven
I-CASE tools. Automatic analysis of common data by these tools identifies new business processes that cross functional boundaries – and, where appropriate, enterprise boundaries, as
discussed later. These cross-functional processes suggest common, shared processes that can be implemented across an organization for improved efficiency and effectiveness. Re-engineering
opportunities thus emerge.

The data bases and systems so designed are of high business quality, and can be implemented by computer experts using the most appropriate computer technology for the business: decentralized
in a client-server environment, or centralized. Data bases can be automatically generated for any SQL-dialect RDBMS, such as IBM DB2, Oracle, CA/Open-Ingres, Sybase, Microsoft SQL Server
and other RDBMS products. These systems can be built using a wide variety of computer languages or development tools, as object-oriented systems that share common data and common logic. This
enables systems to be built rapidly and changed easily. The I-CASE tools discussed earlier automatically derive object-oriented logic from integrated data identified by the business experts.

The result is a dramatic gain in systems development and maintenance productivity and quality. Systems can now be built rapidly to meet the changing business requirements imposed by the intense
competitive environment of today. These systems become competitive weapons that achieve success: not just by eliminating redundant data and processes, and duplicated staffing – so leading to huge
cost savings. They also provide management with accurate, up-to-date, consistent information that was difficult, if not impossible, to obtain with confidence before. In this way, IT achieves its
true potential, as corporations move to the Second Era of the Information Age.

The rest of this paper illustrates how this is achieved, using the Internet and Intranet for deployment.


7. BUSINESS RE-ENGINEERING FOR THE SECOND ERA

As discussed above, we need to consider not only redundant data versions, but also the redundant processes which have evolved to keep redundant data versions up-to-date. Integrated data models not
only eliminate redundant data versions, but also redundant processes. Data and processes represent Business Information and Business Processes, both of which must support Business Plans
defined by management, as shown in Figure 3.

 



Figure 3: Business Re-Engineering improves all three areas essential to business effectiveness: Business Plans, Business Processes and Business Information.


7.1. Business Re-Engineering from Business Plans

Business plans represent the ideal starting point for Business Re-Engineering, as they apply at all management levels. When defined explicitly by senior managers they are called strategic business
plans; when defined by middle managers they are called tactical business plans. At lower management levels they are operational business plans. We will use the generic term Business Plan to refer
to all of these.

Business plans define the directions set by management for each business area or organization unit. They indicate the mission of the area, and its policies, critical success factors, goals,
objectives, strategies and tactics. They are catalysts for the definition of business processes, business events and business information, as follows.

Policies are qualitative guidelines that define boundaries of responsibility. In a data model, policies are represented by related groups of data entities. The data entities defined within a
business area thus together represent the policies that apply to the management and operation of that area. A policy also establishes boundaries for business processes that are carried out by one
business area, and processes that relate to other business areas. An example of a policy is:

Employment Policy: We only hire qualified employees.

Critical Success Factors (CSFs) – also called Key Performance Indicators (KPIs) or Key Result Areas (KRAs) – define a focus or emphasis that achieves a desired result. They lead to the
definition of goals and objectives. Goals and objectives are quantitative targets for achievement. Goals are long-term; objectives are short-term. They indicate WHAT is to be achieved, and have
quantitative characteristics of measure, level and time. The measure is represented by data attribute(s) within entities. The level is the value to be achieved within an indicated time for the goal
or objective. These attributes represent management information, and are generally derived from detailed attributes by processes at the operational level. They provide information that managers
need for decision-making. An example of a measurable objective is:

Hiring Objective: To be eligible, an applicant must exceed a score of 70% on our Skills Assessment Test at the first attempt.

There are many alternative strategies that managers may use to obtain the information they need. Strategies may contain more detailed tactics. Strategies detail WHAT has to be done. Tactics define
how the information is provided and how it is derived. Together they lead to the definition of business processes. Examples of a strategy and business process are:

Assessment Strategy: Interview each applicant and match to position criteria, then administer the relevant Skills Assessment Test.

Evaluation Process:

1. Review the completed Application Form to ensure all questions have been satisfactorily completed.
2. Check that required references have been provided.
3. Select and administer relevant Skills Assessment Test.
4. Note total time taken to complete all questions.
5. Mark responses and calculate overall score.
6. Write score and completion time on Application Form.

A strategy is initiated by a Business Event, which in turn invokes a business process. Without a business event, nothing happens: business plans, policies, goals, objectives, strategies and
processes are merely passive statements. A business event initiates a business activity or activities – i.e. business process(es). A business event example is:

Interview Event: Schedule an interview date with the applicant.

Documented planning statements of mission, critical success factors, policies, goals, objectives, strategies, tactics and business events are allocated to the business area(s) or organization
unit(s) involved in, or responsible for, those statements: in a Statement – Business Area Matrix or Statement – Organization Unit Matrix. This enables the subset of planning statements for each
area or unit to be clearly identified.

Strategic plans that define future directions to be taken by an organization represent the most effective starting point for Business Re-Engineering. But in many organizations today the plans
are obsolete, incomplete, or worse, non-existent. In these cases, another apex of the triangle in Figure 3 can be used as the starting point: either business processes, or business information.


7.2. Business Re-Engineering from Business Processes

Existing business processes, reviewed and documented by narrative description and/or Data Flow Diagrams (DFDs), show how each process is carried out today. Business areas or organization units that
are involved in, or responsible for, a process are identified and documented in a Process – Business Area Matrix or a Process – Organization Unit Matrix.

A business event is the essential link between a business plan and a business process. In the plan, an event is defined as a narrative statement. It can be a physical transaction that invokes a
business process. Or it may represent a change of state. The strategy or tactic that is initiated by an event is documented in an Event – Strategy Matrix. This link must be clearly shown. The
process invoked by each event should also be clearly indicated: documented in an Event – Process Matrix.

Without links to the plan, the business reason(s) why the process exists is not clear. It may be carried out only because “we have always done it that way.” If the process cannot be seen to
support or implement part of the plan, or provide information needed for decision-making, then it has no reason to remain. As the past fades into history, it too can be discarded as a relic of
another time. To re-engineer these processes without first determining whether they are relevant also for the future is an exercise in futility. Worse still, the danger is that management feel they
have done the right thing … when they may have only moved the deckchairs on their own “Titanic”.

If the process is essential, then the strategies implemented by the process must be clearly defined. Associated goals or objectives must be quantified for those strategies. Relevant policies that
define boundaries of responsibility for the process and its planning statements must be clarified. Missing components of the plan are thus completed, with clear management direction for the
process.

The third apex of Figure 3 is an alternative starting point for Business Re-Engineering. In fact, business information is a far more powerful catalyst for re-engineering than business processes, as
we will see.


7.3. Business Re-Engineering from Business Information

Data models developed for business areas or organization units should ideally be based on directions set by management for the future. These are defined in business plans. Where business plans are
not available, or are out-of-date, or the reasons why business processes exist today are lost in the dark recesses of history, data models of business information provide clear insight into future
needs.

Data models can be developed from any statement, whether it be a narrative description of a process, or a statement of a policy, goal, objective or strategy. The redundant data versions that have
evolved over time (see Figure 1) are represented as data models, consolidated into integrated data models. Data versions from different business areas are integrated so that any common data can be
shared by all areas that need access to it. Regardless of which area updates the common data, that updated data is then available to all other areas that are authorized to see it.

With this integration, redundant business processes – earlier needed so that redundant data versions are maintained up-to-date – are no longer required. Instead, new processes are
needed. As common data is integrated across parts of the business, data that previously flowed to keep the redundant data versions up-to-date no longer flows. With integrated data models,
implemented as integrated data bases, data still flows to and from the outside world – but little data flows inside the organization. The original processes that assumed data existed
redundantly may no longer work in an integrated environment. New, integrated, cross-functional processes are required.

But how can cross-functional processes be identified? Data Flow Diagrams provide little guidance in this situation, and Affinity Analysis provides little help either. Affinity Analysis is still
used by many CASE tools to group related entities into business areas or subject areas, but it is highly subjective: it depends on the data modeler’s knowledge of the business and of data
modeling, and on the thresholds that are set. As a technique, it is not repeatable. It is potentially dangerous, as indicated next.

Where allowance is made for its subjectivity, affinity analysis can still be useful. But when its results are accepted blindly, without question, the end result can be disaster. It lacks rigor
and objectivity. It can lead to the grouping of more data in a subject area than is needed to deliver priority systems. This requires more resources to develop those systems; they take
longer and cost more. This is merely embarrassing, as in: “the IT department has overrun its budget yet again.”

But the real danger is that essential, related data may not be included in the subject area. This related data may indicate inter-dependent processes that are needed for the priority processes to
function. The end result? When delivered, the systems may not support all business rules that are essential for correct functioning of the business process. The systems may be useless: developed at
great cost, but unable to be used. This situation is not embarrassing: it is disastrous! At best, it represents wasted development time and cost. At worst, it can affect an organization’s
ability to change rapidly, to survive in today’s competitive climate.

Related data that indicates existence of inter-dependent processes leads to the definition of cross-functional processes, derived from data models. These suggest re-engineered business processes.


7.4. Identifying Business Processes from Data Models

Business processes can be identified from the analysis of data models, based on the concepts of Entity Dependency. This is an objective technique, described in [Finkelstein, 1992]. Its importance
for Data Administrators was acknowledged in [McClure, 1993]. Entity dependency is rigorous and repeatable: it can be applied manually, or can be fully automated. When used to analyse a specific
data model, the same result will always be obtained – regardless of whether the analysis is done by man or machine.

Entity dependency automatically identifies all of the data entities that a specific process is dependent upon, for referential integrity or data integrity reasons. It will automatically identify
inter-dependent processes and indicate cross-functional processes. It uncovers and provides insight into business re-engineering opportunities.

Consider the following example, based on the analysis of a data model developed for the Sales and Distribution business processes discussed earlier in this paper. Figure 4 shows an
integrated data model that consolidates the previously separate functions of Order Entry, Purchasing, Product Development and Marketing. This data model represents business processes in each of
these business areas, stated as follows:

  • Order Processing:
“A customer may have many orders. Each order must comprise at least one ordered product. A product may be requested by many orders.”
  • Purchase Order Processing:
“Every product has at least one supplier. A supplier will provide us with many products.”
  • Product Development:
“We only develop products that address at least one need that we are in business to satisfy.”
  • Marketing:
“We must know at least one, and possibly many, needs of each of our customers.”

Figure 4 will be used to illustrate an important principle of entity dependency, used to identify process re-engineering opportunities from a data model. This principle is stated as:

Intersecting entities in a data model represent functions, processes and/or systems.

This leads to identification of cross-functional processes that arise from integration of the Order Entry, Purchasing, Product Development and Marketing functions as we shall soon see.

 



Key to Association Symbols and Meaning

 



Figure 4: Sales and Distribution data map showing the integration of Order Entry, Purchasing, Product Development and Marketing functions.

In Figure 4, ORDER PRODUCT is an intersecting (“associative”) entity formed by decomposing the many-to-many association between ORDER and PRODUCT. It represents the Order Entry Process used in
the Order Entry business area. (When implemented it will be the Order Entry System; but we will focus on identifying processes at this stage.) Similarly, PRODUCT SUPPLIER is an intersecting entity
that represents the Product Supply Process used in Purchasing. PRODUCT NEED is the Product Development Process used in the Product Development area. Finally, CUSTOMER NEED is the Customer Needs
Analysis Process used in Marketing.

The data model in Figure 4 is common to many organizations and industries. I will use it to illustrate the principles of reengineering opportunity analysis. For example, by inspection we can
already see re-engineering opportunities to integrate some functions based on our understanding of the business. But what of other business areas where mandatory rules have been defined that we are
not aware of? How can we ensure that these mandatory rules are correctly applied in our area of interest? The complexity of even this simple data model demands automated entity dependency analysis.

Reengineering analysis of the data model in Figure 4 was carried out by an I-CASE tool that fully automates entity dependency analysis for Business Re-Engineering [Visible Advantage]. The results
are shown in Figure 5, an extract from the Cluster Report produced by entity dependency analysis of the data model in Figure 4.

 



Figure 5: Entity dependency analysis of the integrated Sales and Distribution data map from Figure 4, showing the prerequisite processes for the Order Entry Process.

Each potential function, process or system represented by an intersecting entity (as discussed above) is called a Cluster. Each cluster is numbered and named, and contains all data and processes
required for its correct operation. A cluster is thus self-contained: it requires no mandatory reference to data or processes outside it. Common, inter-dependent or mandatory data
and processes are automatically included within it.

Figure 5 shows Cluster 2, representing the Order Entry Process. It has been automatically derived from the data model by Visible Advantage. Each of these clusters addresses a business process and
is a potential sub-project; common data and processes appear in all clusters that depend on the data or process. The intersecting entity that is the focus of a cluster appears on the last line.

An intersecting entity indicates a process; the name of that process is shown in brackets after the entity name. The intersecting entity on the last line of the cluster (ORDER PRODUCT in Figure 5)
is called the “cluster end-point”. It is directly dependent on all entities listed above it that are in bold, and inter-dependent on those above it that are not in bold; these represent
prerequisite processes of the cluster end-point process. Inter-dependent entities represent common data and processes that are also shared by many other clusters. Thus we can see in Cluster 2 that
the Order Entry Process (the cluster end-point) depends on the prerequisite processes: Product Supply Process, Product Development Process and Customer Needs Analysis Process.

 



Figure 6: Further Entity dependency analysis of Figure 4, showing that the Product Development Process and Product Supply Process are both inter-dependent.

Figure 6 next shows, in Clusters 3 and 4, that the first two of these processes are fully inter-dependent: a product supplier cannot be selected without knowing the needs addressed by the product
(as each supplier names its products differently from other suppliers).

Notice that each entity in Figures 5 and 6 is preceded by a right-bracketed number: this is the project phase number of the relevant entity in the process. Shown in outline form, with
higher-numbered phases further indented to the right for each cluster, it represents a conceptual Gantt Chart – the Project Plan for implementation of the process. This Project Plan is
automatically derived by Visible Advantage, also by using entity dependency analysis.

A cluster in outline form can be used to display a data map automatically. For example, vertically aligning each entity by phase, from left to right, displays the data map in Pert Chart format as
illustrated in Figure 7. Alternatively, entities can be displayed horizontally by phase, from top to bottom, in Organization Chart format as shown in Figure 8. An entity name is displayed in an
entity box in Figures 7 and 8; the attribute names may optionally also be displayed in the entity box. And because the data map is generated automatically, it can be displayed using different data
modeling conventions: such as IE notation, or IDEF1X.

This ability to automatically generate data maps in different formats is a characteristic of the latest generation I-CASE tools: data maps can be displayed from clusters. They do not have to be
manually drawn; they can be automatically generated. When new entities are added, or associations changed, affected data maps are not changed manually: they are automatically regenerated.
Similarly, process maps can be generated from data models. For example, data access processes (Create, Read, Update, Delete) that operate against entities as reusable methods can be automatically
generated as object-oriented process maps by such I-CASE tools.

 



Figure 7: Pert Chart data map format, with entities vertically aligned by phase from left to right

 



Figure 8: Organization Chart data map format, with entities horizontally aligned by phase from top to bottom. Higher-numbered phases, representing operational-level entities, are
automatically displayed lower in the organization chart hierarchy.

So why have all processes been included in Cluster 2 of Figure 5 for the Order Entry Process? Because I-CASE tools such as Visible Advantage also provide direct assistance for Business
Re-Engineering, as we discuss next.


7.5. Identifying Cross-Functional Business Processes from Data Models

We saw in Figure 4 that a product must have at least one supplier. Figure 5 thus includes the Product Supply Process to ensure that we are aware of alternative suppliers for each product. But where
did the Product Development Process and Customer Needs Analysis Process come from?

The data map in Figure 4 shows the business rule that each product must address at least one need relating to our core business. Similarly the data map follows the Marketing rule that each CUSTOMER
must have at least one core business need. The Product Supply Process, Product Development Process and Customer Needs Analysis Process have therefore all been automatically included as
prerequisite, inter-dependent processes in Figure 5.

The sequence for execution of these processes is shown in Figure 9. This shows each cluster as a named box, for the process represented by that cluster. Each of these process boxes is therefore a
sub-project for implementation. This diagram is called a Project Map as it suggests the development sequence for each sub-project that implements each relevant process.

We can now see some of the power of entity dependency analysis: it automatically applies business rules across the entire enterprise. It becomes a business expert: aware of all relevant business
facts. It determines if other business areas should be aware of relevant business rules, data and processes. It derives a Project Map for clear project management of each sub-project needed to
implement those processes as potential computer systems. This is illustrated in Figure 9.

 



Figure 9: The Order Entry Process depends on prerequisite, inter-dependent processes to its left. These suggest re-engineering opportunities for Order Entry.

So what do these prerequisite, inter-dependent processes suggest? How do they help us to identify re-engineering opportunities? And how can we use the Internet?


7.6. Re-Engineering Opportunity Analysis

The Project Map in Figure 9 is also used for re-engineering opportunity analysis.

Figure 9 shows that the prerequisite processes for the Order Entry Process are cross-functional; these separate processes can be integrated. Consider the following scenario for the Order Entry
Process – before Business Re-Engineering:

Customer: “Customer 165 here. I would like to order 36 units of Product X.”
Order Clerk: “Certainly … Oh, I see we are out of Product X at the moment. I’ll check with the Warehouse. I will call you back within the hour to let you know when we
can expect more of Product X into stock.”
Customer: “No, don’t bother; I need to know now. Please cancel the order.”

Clearly, this example shows that the Order Clerk has no access to the Inventory Control System in the Warehouse. There is no way to determine when outstanding purchase orders for out-of-stock
products will be delivered. It requires a phone call to the Warehouse staff to get that information. A call-back in an hour is no longer responsive for today’s customers. The sale was
therefore lost.

Now consider the same scenario – after Business Re-Engineering:

Customer: “Customer 165 here. I would like to order 36 units of Product X.”
Order Clerk: “Certainly … Oh, I see we are out of Product X at the moment. One moment while I check with our suppliers. … Yes, we can deliver 36 units of Product X to you
on Wednesday.”

What has happened in this scenario? Product X was out of stock, so the Product Supply Process then automatically displayed all suppliers of Product X. The Purchasing function had been re-engineered
so the Order Clerk can link directly into each supplier’s inventory system to check the availability and cost of Product X for each alternative source of supply. For the selected supplier,
the Clerk placed a purchase order for immediate shipment and so could confirm the Wednesday delivery date with the customer.

But there are problems with this approach, due to incompatibilities between the supplier’s Inventory Control System and the Order Entry System. There may be incompatibilities between the
Operating Systems, Data Base Management Systems, LANs, WANs and EDI data formats used by both organizations. We will discuss these problems and their resolution using the Internet, shortly.

The re-engineered Product Supply Process discussed above seems revolutionary, but industries that already take orders online consider this inter-enterprise approach to Order Entry the norm.

Consider the Travel Industry. We phone a travel agent to book a flight to Los Angeles (say) because we have business there. We need to fly there on Wednesday evening, for business on Thursday and
Friday. But we also decide to take the family and plan to stay for the weekend, returning Sunday evening. The travel agent uses an Airline Reservation terminal to book seats on suitable flights.
These are ordered from an inventory of available seats offered by relevant suppliers: the Airlines.

Let us now return to the customer on the phone – still talking to the Order Clerk, who says:

Order Clerk: “By the way, do you know about Product Y? It allows you to use Product X in half the time. I can send you 36 units of Y as well for only 20% more than your
original order. If you agree, we can deliver both to you on Wednesday.”
Customer: “OK. Thanks for that suggestion, and Wednesday is fine. I look forward to receiving 36 units each of Products X and Y on that day.”

The Product Development Process displayed related products that met the same needs as Product X. This suggested that Product Y may be of interest. An order for Y, based on the current order for X,
was automatically prepared and priced – and Y was in stock. This extension to the order only needed the customer’s approval for its inclusion in the delivery.

What happened here? We see that the Product Development Process has also been re-engineered. The ordered Product X satisfies certain needs (see PRODUCT NEED in Figure 4). Other products may also
satisfy those needs, as indicated by Product Y above.

Once again, this is commonplace in the Travel Industry. The travel agent knows the customer will be in Los Angeles over several nights and so asks whether any hotel accommodation is needed. If so,
a booking is made at a suitable hotel using another supplier’s system: Hotel Reservations.

The customer continues with the Order Clerk, who now says:

Order Clerk: “We find that customers using Product X also enjoy Product Z. Have you used this? It has the characteristics of … … and costs only … … Can I include 36
units of Product Z as well in our Wednesday delivery?”
Customer: “Yes. Thanks again. I confirm that my order is now for 36 units each of Products X, Y and Z – all to be delivered on Wednesday.”

Finally, the Customer Needs Analysis Process knew that customers in the same market as Customer 165, who also used Products X and Y, had other needs that were addressed by Product Z. A further
extension to include Z in the order was automatically prepared and priced. Z was also in stock and was able to be included in the delivery, if agreed.

This is analogous to the Travel Agent asking if a rental car and tour bookings were also needed: quite likely if a family is in Los Angeles, and thus near the many LA tourist locations for a
weekend.

Instead of waiting for stock availability from the Warehouse in the first scenario based on separate, non-integrated processes for each function, the re-engineered scenario let the Clerk place a
purchase order directly with a selected supplier so that the customer’s order could be satisfied. And the Product Development and Customer Needs Analysis processes then suggested
cross-selling opportunities based first on related products, and then on related needs in the customer’s market.

Re-engineered, cross-functional processes identified using entity dependency analysis can suggest reorganization opportunities. For example, inter-dependent processes may all be brought together in
a new organizational unit. Or they may remain in their present organizational structure, but integrated automatically by the computer only when needed – as in the re-engineered scenario
discussed above.

But what about the incompatibilities we discussed earlier with inter-enterprise access to suppliers’ Inventory Systems? The Internet and XML offer us dramatic new ways to address these
otherwise insurmountable incompatibilities.


8. THE STATUS OF INTERNET AND INTRANET TECHNOLOGIES

The Internet has emerged since 1994 as a movement that will see all businesses inter-connected in the near future, with electronic commerce as the norm. Let us review the status of Internet and
Intranet technologies today, as summarized in Box 1. This indicates that most DBMS and Client/Server tools will interface directly and transparently with the Internet and Intranet. Web browsers,
Java, HTML, XML, the Internet and Intranet will all provide an open-architecture interface for most operating system platforms. Previous incompatibilities between operating systems, DBMS products,
client/server products, LANs, WANs and EDI disappear – replaced by an open architecture environment of HTML, XML and Java.

The open-architecture environment enjoyed by the audio industry – where any CD or tape will run on any player, which can be connected to any amplifier and speakers – has long been the holy grail of
the IT industry. Once the industry has made the transition over the next few years to the open-architecture environment brought about by Internet and Intranet technologies, we will finally be
close to achieving that holy grail.


Box 1: A Status Update of Internet and Intranet Technologies

  • Web browsers are now available for all platforms and operating systems, based on an open architecture
    interface using HyperText Markup Language (HTML). A key factor influencing future computing technologies will be this open architecture environment. The Web browser market will be largely shared
    between Microsoft and Netscape. The strategy adopted by Microsoft has seen it rapidly gain market share at the expense of Netscape: it has used its desktop ownership to embed its browser technology
    (Internet Explorer) as an integral component of Windows NT and Windows 98.
  • The Internet is based on TCP/IP communications protocol and Domain Naming System (DNS). Microsoft, Novell and other network vendors recognise that TCP/IP and DNS are the network standards for
    the Internet and Intranets. This open architecture network environment benefits all end-users.
  • The battle to become THE Internet language – between Java (from Sun) and ActiveX (from Microsoft) – will likely be won by neither. Browsers will support both languages, and will automatically
    download code in either language (as “applets”) from Web servers, as needed, for execution. Instead, the winners of this battle will again be the end-users, who will benefit from the open
    architecture execution environment.
  • XML will be the successor to HTML for the Internet, Intranets, and for secure Extranets between customers, suppliers and business partners. XML incorporates metadata in any document, to define
    the content and structure of that document and any associated (or linked) resources. It has the potential to transform the integration of structured data (such as in relational databases or legacy
    files) with unstructured data (such as in text reports, graphics, images, audio and video). Data Base Management System (DBMS) vendors (those that plan to survive) will support dynamic generation
    of HTML using XML, with transparent access to the Internet and Intranets by applications using XML tools. They will accept HTML input direct from Web forms, process the relevant queries using XML
    for integration of dissimilar databases and generate dynamic XML and HTML Web pages to present the requested output.
  • Client/Server vendors (again, those that plan to survive) will also provide dynamic generation of HTML for browsers used as clients, with transparent access to the Internet and
    Intranets for XML applications built with those tools. Client code written in either ActiveX or Java will be supported and downloaded as needed for execution, and for generation of dynamic
    HTML output to display transaction results.
  • Data Warehouse and Data Mining products will provide a similar capability: accepting XML input and generating HTML output if they are to be used effectively via the Intranet and Internet.
    Screen Scraper tools that provide GUI interfaces for Legacy Systems will also become Internet-aware: accepting 3270 data streams and dynamically translating them to, or from, HTML for display on
    the screen. XML will provide an integration capability for easy migration of Legacy Systems to the Internet and Intranets.

The client software will be the web browser, operating as a “fat” client by automatically downloading Java or ActiveX code when needed. Client/server tools will typically offer two options, each
able to be executed by any terminal that can run browsers or XML-aware code:

  1. Transaction processing using client input via web forms, with dynamic XML web pages presenting output results in a standard web browser format, or
  2. Transaction processing using client input via client/server screens, with designed application-specific output screens built by client/server development tools. This optional client
    environment will recognise XML, dynamically translating and presenting that output using designed application-specific screens.

These client/server development tools will provide transparent access to data base servers using HTML-access requests, whether accessing operational data or Data Warehouses. In turn the data base
servers will process these requests – transparently using conventional languages, Java or ActiveX to access new or legacy data bases as relevant. These may be separate servers, or instead may be
mainframes executing legacy systems.

Web servers will then operate as application servers, executing Java, ActiveX or conventional code as part of the middle-tier of three-tier client/server logic distribution, with data base servers
also executing Java, ActiveX or conventional code as the third logic tier.


8.1 Beyond HTML

Tim Berners-Lee at CERN, the originator of the World Wide Web (WWW) in 1990, developed Hypertext Markup Language (HTML) as a subset of the Standard Generalized Markup Language (SGML). A standard for
the semantic tagging of documents, SGML evolved out of work done by IBM in the 1970s. It is used in Defense and other industries that deal with large amounts of structured data. SGML is powerful,
but it is also very complex and expensive.

HTML was defined as a subset of SGML – specifically intended as an open architecture language for the definition of WWW text files transmitted using the Hypertext Transfer Protocol (HTTP) across
the Internet. HTML defines the layout of a web page to a web browser running as an open architecture client. Microsoft Internet Explorer and Netscape Communicator share over 90% of the web browser
market; both are now available free.

An HTML page contains text as the content of a web page, as well as tags that define headings, images, links, lists, tables and forms to display on that page. These HTML tags also contain
attributes that define further details associated with a tag. An example of such attributes is the location of an image to be displayed on the page, its width, height and border characteristics,
and alternate text to be displayed while the image is being transmitted to the web browser.
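
For example, an image tag of this kind might look like the following sketch (the file name and dimensions are illustrative only):

   <img src="logo.gif" width="120" height="60" border="1"
        alt="XYZ Corporation logo">

Here src locates the image; width, height and border control its layout; and alt supplies the alternate text displayed while the image is transmitted.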

Because of this focus on layout, HTML is recognized as having some significant problems:

No effective way to identify content of page: HTML tags describe the layout of the page. Web browsers use the tags for presentation purposes, but the actual text content
has no specific meaning associated with it. To a browser, text is only a series of words to be presented on a web page for display purposes.

Problems locating content with search engines: Because of a lack of meaning associated with the text in a web page, there is no automatic way that search engines can
determine meaning — except by indexing relevant words, or by relying on manual definition of keywords.
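
For instance, to a search engine the first fragment below is simply styled text, while the equivalent XML fragment states what the text means (the tag name is illustrative only):

   <b>XYZ Corporation</b>                              (HTML: layout only)

   <customer-name>XYZ Corporation</customer-name>     (XML: meaning)

A search for documents about the customer “XYZ Corporation” can match the XML tag directly; with HTML it can only match words.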

Problems accessing databases: Standard HTML pages are static. But when a web form provides access to online databases, the retrieved data needs to be displayed
dynamically on the web page. Called “Dynamic HTML” (DHTML), this capability enables dynamic content from a database to be incorporated “on-the-fly” into an appropriate area on the web page.

Complexity of dynamic programming: DHTML requires complex programming to incorporate dynamic content into a web page. This may be written as CGI, Perl, ActiveX, JavaScript
or Java logic, executed in the client, the web server, the database server, or all three.

Problems interfacing with back-end systems: This is a common problem that has been with us since the beginning of the Information Age. Systems written in one programming
language for a specific hardware platform, operating system and DBMS may not be able to be migrated to a different environment without significant change or a complete rewrite. Even though it is an
open architecture, HTML also is affected by our inability to move these legacy systems to new environments.

Recognizing these limitations of HTML, the W3C SGML working group (now called the XML working group) was established in mid-1996 [W3C]. The purpose of this group was to define a way to provide the
power of SGML, while also retaining the simplicity of HTML. The XML specifications were born out of this activity [XML Info].

XML retains much of the power and extensibility of SGML, while also being simple to use and inexpensive to implement. It allows tags to be defined for special purposes, with metadata definitions
embedded internally in a web document – or stored separately as a Document Type Definition (DTD) script. A DTD is analogous to the Data Definition Language script (DDL) used to define a
database, but it has a different syntax.
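
As a sketch of this analogy, a DTD for the simple customer document used later in Figure 10 might be written as follows (the element names are assumptions for illustration):

   <!ELEMENT customer (name, address)>
   <!ELEMENT name     (#PCDATA)>
   <!ELEMENT address  (street, city, state, zip)>
   <!ELEMENT street   (#PCDATA)>
   <!ELEMENT city     (#PCDATA)>
   <!ELEMENT state    (#PCDATA)>
   <!ELEMENT zip      (#PCDATA)>

Just as a DDL script declares the tables and columns of a database, this DTD declares the elements a conforming document may contain and how they nest.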

As we discussed earlier, data modeling and metadata are key enablers in the use and application of XML. The Internet and Intranets allow us to communicate easily with other computers. Java allows
us to write program logic once, to be executed in many different environments. But these technologies are useless if we cannot easily communicate with and use existing legacy systems and databases.

Consider the telephone. We can now make a phone call, instantly, anywhere in the world. The telephone networks of every country are interconnected. When we dial a phone number, a telephone assigned
to that number will ring in Russia, or China, or Outer Mongolia, or elsewhere. It will be answered, but we may not understand the language used by the person at the other end.

So it is also with legacy systems. We need more than the simple communication between computers afforded by the Internet. True, we could rewrite the computer programs at each end in Java, C, C++,
or some other common language. But that alone would not enable effective and automatic communication between those programs. Each program must know the metadata used by the other program and its
databases so that they can communicate with each other.

Considerable work has been carried out to address this problem. Much effort has gone into definition and implementation of Electronic Data Interchange (EDI) standards. EDI has now been widely used
for business-to-business commerce for many years. It works well, but it is complex and expensive. As a result, it is cost-justifiable generally only for larger corporations.

XML now also provides this capability. It allows the metadata used by each program and database to be published as the language to be used for this intercommunication. But unlike EDI, XML is
simple to use and inexpensive to implement. Because of this simplicity, I like to think of XML as:

“XML is EDI for the Rest of Us”

XML will become a major part of the application development mainstream. It provides a bridge between structured databases and unstructured text: content delivered as XML can be converted to HTML
for display in web browsers during a transition period. Web sites will evolve over time to use XML, XSL and XLL natively, providing the capability and functionality presently offered by HTML but
with far greater power and flexibility.
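
As a minimal sketch of this transition approach, assuming the customer document structure used in Figure 10 of the next section, an XSLT stylesheet can generate an HTML page from XML data on the
server or in the browser:

   <?xml version="1.0"?>
   <xsl:stylesheet version="1.0"
                   xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Generate a simple HTML page from a <customer> document -->
      <xsl:template match="/customer">
         <html>
            <body>
               <h1><xsl:value-of select="name"/></h1>
               <p><xsl:value-of select="address/city"/>,
                  <xsl:value-of select="address/state"/></p>
            </body>
         </html>
      </xsl:template>
   </xsl:stylesheet>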


8.2 Extensible Markup Language (XML)

A customer example using XML is shown as Figure 10. This illustrates some basic XML concepts. It shows customer data (in italics), such as entered from an online web form or accessed from a
customer database. It shows metadata “tags” (surrounded by angle brackets) — such as <customer>. A metadata tag must start with a letter or underscore. It must comprise one word and so cannot
contain spaces.

<customer>
   <name>XYZ Corporation</name>
   <address>
      <street>123 First Street</street>
      <city>Any Town</city>
      <state>WA</state>
      <zip>12345</zip>
   </address>
</customer>

Figure 10: A Simple XML Example

The tag <name> is a start tag; the text following it is the actual content of the customer name: XYZ Corporation. It is terminated by an end tag: the same tag-name, but now preceded by “/” — such
as </name>. Other fields define <street>, <city>, <state> and <zip>. Each of these tags is also terminated by an end tag, such as </street>, </city>, </state> and </zip>. The example concludes
with </address> and </customer> end tags.

From this simple example of XML metadata, we can see how the meaning of the text between start and end tags is clearly defined. We can also see that search engines can use these definitions for
more accuracy in identifying information to satisfy a specific query.
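
For example (a hypothetical query, using the XPath notation defined alongside XSL), a search can be restricted to the content of a specific tag:

//CUSTOMER[STATE='WA']/NAME

This selects the names of only those customers whose STATE element contains “WA”; a plain-text search for “WA” could also match unrelated occurrences, such as a street name or a product code.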

Even more effective applications become possible. For example, XYZ can use XML to define the unique metadata used by its suppliers’ inventory systems. This will enable XYZ to place orders via the
Internet directly with those suppliers’ systems, for automatic fulfillment of product orders to satisfy its customers’ orders. We will see an example of this use of XML shortly. XML is an
enabling technology for integrating unstructured text and structured databases in next-generation electronic commerce and EDI applications.

Development will be easier: many of the incompatibilities we previously had to deal with will be a thing of the past. Open architecture development using the technologies of the Internet will also
be part of the Intranet, able to use any PC and any hardware, operating system, DBMS, network, client/server tool or Data Warehouse. This is the direction the IT industry will take for the
foreseeable future.


9. BUSINESS RE-ENGINEERING AND XML

We now see that the rush to implement systems based on Internet and Intranet technologies will resolve the incompatibility problems we discussed earlier. XML will become the standard interface
between the Order Entry system and the Suppliers’ systems.

Suppliers are providing a capability for the world to order products via the Internet, using Web forms to make these order requests. These Web form transactions are sent from the customer’s
browser to the URL (Uniform Resource Locator) of the supplier’s web server, with XML metadata tags defining the field names and contents entered by the customer on the Web form. The input
transaction is processed by the supplier’s Inventory system and ordered products are shipped to the nominated delivery address.
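
A sketch of what such a transaction might contain follows; the tag names and product code here are hypothetical, not drawn from any particular supplier’s system:

<ORDER>
   <PRODUCT-CODE>M8-WN</PRODUCT-CODE>
   <QUANTITY>2000</QUANTITY>
   <STREET>123 First Street</STREET>
   <CITY>Any Town</CITY>
   <STATE>WA</STATE>
   <ZIP>12345</ZIP>
</ORDER>

Because the tags travel with the data, the supplier’s system can interpret each field without prior agreement on record layouts.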

An example of an XML application that integrates multiple suppliers’ inventory control systems is discussed next. A full description of this application can be viewed online from the XML
section of the Microsoft web site [XML Scenarios].

The particular Microsoft scenario discusses an unnamed company that needs to order parts from multiple suppliers, to be used for the manufacture of products. Purchase orders are placed depending on
the suppliers’ current prices and available inventory. Each supplier’s inventory control system uses different terminology to identify its parts, their availability and price. Because
each system has different metadata, it is difficult to integrate those systems with the company’s own inventory control system.

With XML and each supplier’s URL, the latest quotes can be automatically obtained from the relevant supplier’s parts catalog, or via a database query to the supplier’s back-end
inventory control system. The Microsoft [XML Scenarios] web site provides an example of the XML quote format, reproduced here as Figure 11.

<PRODUCT>M8 metric wing nut, steel, zinc</PRODUCT>
<PRICE>$7</PRICE>
<QUANTITY>2000</QUANTITY>

Figure 11: Hourly supplier quote format, using XML and ColdFusion

The application uses the Allaire ColdFusion Application Server. The server automatically makes hourly requests to each supplier for the latest quotes. Figure 11 illustrates the XML-structured data
snippet that is received, with the product name, price and available quantity. The supplier’s name and a date/time stamp are then added to each quote, which is integrated with the quotes from the
other suppliers. The integrated quotes XML document from the Microsoft [XML Scenarios] web site is repeated here as Figure 12.

<!-- create whole file -->
<QUOTES>
   <QUOTE>
      <SUPPLIER>House of Hardware</SUPPLIER>
      <DATETIME>May 6, 1998, 9:00</DATETIME>
      <PRODUCT>M8 metric wing nut, steel, zinc</PRODUCT>
      <PRICE>$7</PRICE>
      <QUANTITY>2000</QUANTITY>
   </QUOTE>
   <QUOTE>
      <SUPPLIER>Nutz ‘N Such</SUPPLIER>
      <DATETIME>May 6, 1998, 9:02</DATETIME>
      <PRODUCT>M8 metric nut, steel, zinc</PRODUCT>
      <PRICE>$6</PRICE>
      <QUANTITY>1500</QUANTITY>
   </QUOTE>
   <QUOTE>
      <SUPPLIER>Hardware Haven</SUPPLIER>
      <DATETIME>May 6, 1998, 9:04</DATETIME>
      <PRODUCT>M8 metric nut, steel, zinc</PRODUCT>
      <PRICE>$4.50</PRICE>
      <QUANTITY>500</QUANTITY>
   </QUOTE>
</QUOTES>

Figure 12: ColdFusion adds Supplier’s Name and Date/Time around XML Quote snippet from Figure 11, to create an integrated XML Quote document.

Figure 12 shows the ColdFusion conversion of each supplier’s quote and inventory data format into an XML document of all supplier quotes. The supplier metadata and the data from these
dissimilar supplier back-end systems can now be integrated with the company’s own inventory control system.

All supplier data from Figure 12 is presented as an HTML table that can be sorted by column. When the company’s Purchasing Officer accepts a specific supplier’s quote, the selected
supplier’s Purchase Order (PO) Form is automatically displayed. When the PO Form has been completed, it is transmitted directly to the supplier for processing and delivery of the ordered
parts. The supplier’s PO details are then inserted via SQL into the company’s PO-Tracking database.

The re-engineered XYZ Order Entry System from the earlier customer and order clerk dialog can use a similar XML-based solution. A computer-generated XML transaction can be sent to the web site
of each supplier of an out-of-stock product. Based on the product availability at each supplier, a Purchase Order is then issued to the selected supplier, who can now deliver directly to
the customer.

A number of other XML applications are also included in the Microsoft [XML Scenarios] web site. Those available online as at March 1999 are listed below; the supplier application discussed above is
listed as example 8.

  1. Integrated Maintenance and Ordering using XML
  2. Automating Customer Service with XML
  3. Creating Multi-Supplier System with XML
  4. Improving Online Shopping with XML
  5. Personal Investment Management Using XML
  6. Interactive Frequent-Flyer Web Site using XML
  7. Consumer Product Ratings Online with XML
  8. Purchasing and Supplier Data Integration with XML
  9. Time and Attendance: Integrating Data with XML

Microsoft presents all XML applications on the [XML Scenarios] web site using a common format: a statement of the business problem; a discussion of the role of XML; an application description; a
graphic illustrating the application; and an example of the XML solution. This illustrates some of the many applications being re-engineered through the use of metadata and XML.

Thus the XYZ Order Entry System discussed earlier can construct an XML document as if it were a transaction entered from a Web form. This computer-generated transaction can then be sent to the URL
of each supplier of an out-of-stock product. A Purchase Order is generated for the selected supplier, who can now deliver directly to the customer. The earlier incompatibility problem in
automatically accessing suppliers’ systems thus disappears.
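
Drawing on the integrated quotes of Figure 12, such a computer-generated transaction might resemble the following sketch, again with hypothetical tag names:

<PURCHASE-ORDER>
   <SUPPLIER>Hardware Haven</SUPPLIER>
   <PRODUCT>M8 metric nut, steel, zinc</PRODUCT>
   <PRICE>$4.50</PRICE>
   <QUANTITY>500</QUANTITY>
</PURCHASE-ORDER>

The Order Entry system selects the supplier whose price and available quantity best satisfy the customer’s order, then transmits the Purchase Order to that supplier’s URL.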



10. CONCLUSION

To re-engineer only by improving processes using Business Process Re-engineering (BPR) is like closing the barn door after the horse has bolted! Existing processes must be related back to business
plans. Only those processes that support plans relevant to the future should be considered for re-engineering. If a process is important and no plans exist today to guide it, then plans must be
defined that will provide the needed guidance for the future. If this is not done, BPR will have the same long-term impact on an organization’s competitiveness as rearranging the deckchairs had
on the survival of the Titanic.

Business plans include policies, goals, objectives, strategies and tactics. They may need information for decision-making that is not presently available in the enterprise. This information may be
derived from data that does not exist today, so no processes presently exist to provide that information or to keep the underlying data up to date. By looking only at existing processes, BPR may
never see the need for this new information, data and processes.

However, the business plans provide a catalyst for the definition of data and information in data models at each relevant management level. These data models are analysed automatically, using
entity dependency, to determine inter-dependent processes. These in turn suggest cross-functional re-engineered processes. Entity dependency analysis automatically derives the project plans needed
to implement the databases and systems required by these re-engineered processes.

Only when all three apexes of Figure 3 are addressed can Business Re-Engineering fully consider the needs of the business for the future. These are the three steps to success in BRE. Only then can
re-engineered organizations be built that are effective, efficient, best-of-breed, and able to compete aggressively in the future.

New re-engineering opportunities emerge from immediate access to customers and suppliers via the Internet. But this also means that the chaos of redundant data that exists in most enterprises
… will now be visible to the world! If this redundant data problem is not resolved and new re-engineered processes are not implemented, as discussed in this paper, the chaos will be apparent
from the front window of each organization’s web site: revealed not by what each organization can do, but by what it cannot do when compared with its competitors. Customers will then leave with
the click of a mouse, and go to those competitors that can and will offer them the service they demand.


REFERENCES AND GLOSSARY

IE and EE:

Information Engineering (IE) – first developed in Australia in the late 1970s – is a dominant systems development methodology used world-wide. Enterprise Engineering (EE) is a further
business-driven extension of IE.

Finkelstein, 1981a:

Clive Finkelstein, “Information Engineering”, Series of six InDepth articles published by Computerworld, Framingham: MA (May – June 1981).

Finkelstein et al, 1981b:

Clive Finkelstein and James Martin, “Information Engineering”, Savant Institute, Carnforth, Lancs: UK (Nov 1981).

Finkelstein, 1989:

Clive Finkelstein, “An Introduction to Information Engineering”, Addison-Wesley, Reading: MA (1989) [ISBN 0-201-41654-9].

Finkelstein, 1992:

Clive Finkelstein, “Information Engineering: Strategic Systems Development”, Addison-Wesley, Reading: MA (1992) [ISBN 0-201-50988-1].

Finkelstein, 1996:

Clive Finkelstein, “The Competitive Armageddon: Survival and Prosperity in the Connected World of the Internet”, Information Engineering Services, Melbourne: Australia (1996).

Hammer, 1990:

Michael Hammer, “Reengineering Work: Don’t Automate, Obliterate”, Harvard Business Review, Cambridge: MA (Jul-Aug 1990).

McClure, 1993:

Stephen McClure, “Information Engineering for Client/Server Architectures”, Data Base Newsletter, Boston: MA (Jul-Aug 1993).

Tapscott et al, 1992:

Don Tapscott and Art Caston, “Paradigm Shift: Interview with Tapscott and Caston”, Information Week (Oct 5, 1992).

Tapscott et al, 1993:

Don Tapscott and Art Caston, “Paradigm Shift: The New Promise of Information Technology”, McGraw-Hill, New York: NY (1993).

Visible Advantage:

Visible Advantage is an I-CASE tool for Windows 95, 98 and Windows NT. It automatically uses entity dependency to analyse data models, identify cross-functional process opportunities for business
re-engineering, and derive project plans from data models.

W3C WWW Consortium:

XML, XSL and XLL specifications – http://www.w3.org/

XML Info:

James Tauber’s XMLINFO Web Site – http://www.xmlinfo.com/

XML Scenarios:

Microsoft XML Scenarios Web Site – http://microsoft.com/xml/scenario/intro.asp

Zachman, 1992:

John Zachman, “Concepts of the Framework for Enterprise Architecture”, Zachman International, Los Angeles: CA (1997).

Clive Finkelstein

Clive is acknowledged worldwide as the "father" of information engineering, and is Managing Director of Information Engineering Services Pty Ltd in Australia. He has more than 45 years of experience in the computer industry. Author of many books and papers, his latest book, Enterprise Architecture for Integration: Rapid Delivery Methods and Technologies, brings together the methods and technologies for rapid delivery of enterprise architecture in 3-month increments. Read the book review at http://www.ies.aust.com/ten/ten32.htm. Project references, project steps and descriptions are available from http://www.ies.aust.com. Click on the Projects link from any page. Clive may be contacted at cfink@ies.aust.com.
