Support Your Local Metamodeler!

Published in TDAN.com April 2005


Overview

For years the lowly metamodeler has been reduced to gathering “data about data” to publish overlooked data dictionaries, misunderstood data lineage and underutilized data integration mappings. With the advent of new regulatory requirements and the wider acceptance of Enterprise Architecture, the metamodeling skill set is in increasing demand. No longer greeted with “Prove your ROI or get a real job!” by IT management, the metamodeling team has become an accepted part of the enterprise data architecture landscape. In a rapidly changing view of the job, metamodelers are now charged with capturing a thorough set of meta information covering operational, technical and administrative metadata. Taken as a whole, meta information now covers data retention, compliance, governance, standards and quality. This new contextual view of metadata delivers significant value and usability to virtually all business units, acting as a one-stop shop for information meaning and value. The old focus on “data about data” is a bit too narrow; current metamodeling activities help set the context around the data as well.
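
To make this contextual view a bit more concrete, the sketch below shows one way a single metamodel entry might carry retention, governance and quality information alongside the traditional descriptive attributes. The structure, field names and sample values are purely illustrative assumptions, not a published standard or any particular tool’s schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Hypothetical sketch of one metamodel entry that adds contextual metadata
# to the traditional "data about data" attributes. Field names and structure
# are illustrative assumptions, not a standard.

@dataclass
class DataElementMetadata:
    # Traditional descriptive metadata
    name: str                          # standardized element name
    definition: str                    # business definition
    data_type: str                     # e.g., "DATE" or "DECIMAL(12,2)"
    source_system: str                 # system of record

    # Contextual / administrative metadata
    steward: str                                                   # accountable business owner
    governing_documents: List[str] = field(default_factory=list)   # regulations, policies
    retention_period_years: Optional[int] = None                   # retention requirement
    quality_metrics: dict = field(default_factory=dict)            # e.g., {"completeness": 0.98}
    last_reviewed: Optional[date] = None


# Example entry combining operational, technical and administrative views.
customer_birth_date = DataElementMetadata(
    name="Customer Birth Date",
    definition="The date of birth of an individual customer.",
    data_type="DATE",
    source_system="CRM",
    steward="Customer Data Governance Board",
    governing_documents=["Privacy Policy v3", "Records Retention Schedule"],
    retention_period_years=7,
    quality_metrics={"completeness": 0.98, "validity": 0.995},
    last_reviewed=date(2005, 1, 15),
)
```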


An historical perspective

Metadata development and management has been a thankless job for many years. Difficulties in defining a metadata process, along with a heavy dependence on tools that seemed to provide only a minimal advantage over manual updates, often relegated metadata management to minimal resources. Attempts to establish metamodel and metadata standards have met with limited success. The picture is slowly changing as commercial interests are dragged kicking and screaming into new data management and propagation techniques that will require formal metadata development processes; and the more formal the better. The ability to quantitatively measure and capture data requirements at the metadata and metamodel level creates a concrete foundation for data development.

For the novice, metadata has been a place of inconsistencies and confusion. For the experienced manager, it is a work area with a message, and that message is “Run away…”. I wish I had a nickel for every time I have been asked to “Explain the value of metadata for the team…”. Getting the appropriate level of visibility and support for the metadata management design and development groups has been at best a dicey proposition. But for the brave souls who have ventured into metadata in the past, recent trends indicate a far brighter future, particularly in larger organizations.

The new mantra in larger organizations is to simplify data management (finally) through the use of “Enterprise Models”. As with hybrid cars, video on demand and corporate accountability, it is finally a concept whose time has come. Enterprise modeling initiatives are not new, just finally successful. In the late ’80s and early ’90s a number of federal and corporate initiatives attacked application integration through data standardization, enterprise modeling and centralized data management. The ’90s saw the rise of client/server applications and, unfortunately, the continued building of application-specific data models, resulting in the typical silo application. We somewhat compensated for this with the advent of the data warehouse as a data standards repository. Unfortunately, most organizations cleaned up the data but simply created multiple data warehouses: a good tactical approach for providing data to the knowledge worker, but a poor one for developing the reusable data assets the organization needs.

It’s not like we don’t get it. By the late ’90s everyone wanted a ‘single view of customer’. Having grown through acquisition, everyone suffered through the data integration crunch. While the Mergers and Acquisitions departments looked for the next company to acquire, the IT group feverishly looked for resources to integrate their information. ETL became the lifeline. We couldn’t analyze data until we integrated the data. We still really can’t. Somehow we need to ensure we integrate the apples with apples and oranges with oranges, with the occasional tangerine.

Meanwhile, those organizations that had invested in their metadata strategy were rapidly uncoupling data from applications, creating information assets in their data warehouses and driving data
quality significantly higher. Their data models changed only when new business services were added or existing services were changed. Their analytics used integrated views across the organization.
They reduced or reassigned resources because the operational data environment was stable. They had captured their meta information and propagated it as a resource through other initiatives.

So here we are, almost halfway through the first decade of the 21st century. What is different from the roaring ’90s? We have come a long way but still prefer the cheap fix to the long-term strategy. Relatively recent changes in the way public companies must collect and manage information over the long term have created new opportunities. New catchphrases like “Governance” and “Information accountability” have forced Information Management departments to look inward and shore up data management practices. The complexity of data can be simplified not so much through its management as through its documentation. Much of the accountability now required by regulation can be achieved through an improved metamodel and richer metadata content.

In examining Department of Defense regulations from the early ’90s (DOD-8320 in particular), you can find very rich metadata requirements and data management practices that the corporate world is only now (15 years later!) beginning to catch up to. Concepts such as regulatory and governing documents, retention periods and data quality metrics were included as standard metamodel components.

This rich metadata content also served as the backbone of the data standardization initiative. Naming conventions and naming structures led to clear and precise data development practices that turned data points into long-term information assets. The data standards also permitted the rapid sharing of information assets as needed, particularly in the areas of financial management, logistics, human resources and provisioning, enabling organizational integration and efficiencies. These efficiencies resulted in significant operational savings without a reduction in services.
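
As a small illustration of how naming conventions become enforceable practice, the sketch below checks candidate data element names against a hypothetical “prime word … class word” pattern. The pattern and the approved class word list are assumptions made for illustration; they are not the DOD-8320 convention itself.

```python
import re

# Hypothetical naming convention: upper-case words separated by underscores,
# ending in an approved class word (e.g., CUSTOMER_BIRTH_DATE). The approved
# word list below is an illustrative assumption, not a published standard.

APPROVED_CLASS_WORDS = {"NAME", "DATE", "CODE", "AMOUNT", "QUANTITY", "IDENTIFIER", "TEXT"}
NAME_PATTERN = re.compile(r"^[A-Z]+(_[A-Z]+)*$")


def check_element_name(name: str) -> list:
    """Return a list of convention violations for a candidate data element name."""
    violations = []
    if not NAME_PATTERN.match(name):
        violations.append("Name must be upper-case words separated by underscores.")
    else:
        class_word = name.rsplit("_", 1)[-1]
        if class_word not in APPROVED_CLASS_WORDS:
            violations.append(f"'{class_word}' is not an approved class word.")
    return violations


if __name__ == "__main__":
    for candidate in ("CUSTOMER_BIRTH_DATE", "custBD", "CUSTOMER_INFO"):
        problems = check_element_name(candidate)
        print(candidate, "->", problems or "OK")
```

Checks like this are cheap to automate and help keep a data standards registry consistent as new elements are proposed.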

On the commercial side, those organizations mature enough to adopt these data standards practices reaped significant benefits through the ability to rapidly integrate acquired organizations as well as extend their own capabilities. With highly stable operational systems, resources could be shifted to improving data quality and analytical services.


So what happened?

In the recent rush to serve internal clients, little things like conceptual and logical data models were tossed onto the scrap heap of “unnecessary” project artifacts. Subsequently, each initiative created the same data components over and over again with the idea that they could be “reconciled at the Data Warehouse”. With the increasing popularity of use case models, the overlaps became apparent, as did the time required to reconcile the various data attributes. Additional resources for ETL and EAI services mushroomed. Rather than spending the time to gain concurrence on data once, up front, organizations spend increasing amounts of time reconciling it downstream through ETL and EAI initiatives.


Where are we headed?

Organizations are now seeing the advantages of dismantling stovepiped information assets through the integration of siloed data warehouses and applications. Sharing data through EAI initiatives and integrated applications is reducing the need for isolated applications.

Siloed (I think that’s a word…) data can be integrated into common subject data, creating reusable data components for operational as well as analytical purposes. Additionally, data volumes can be reduced dramatically through the reuse of shared data components. This reduction in data sources results in increased data stability, accessibility and usage.

All of this can only be achieved through the increased and standardized use of enhanced metamodeling.


Service Oriented Architecture

The development direction of the week, Service Oriented Architecture (SOA), brings together the concepts of integration and reuse quite successfully. The addition of a Data Service Architecture (DSA) is a branch still under development and will hopefully yield an approach that generates a rich semantic layer by integrating various data sources. Though based on sound principles for the development community, the DSA approach has to ensure accessibility by other groups and applications, such as third-party products. As with all things of value, there is a lot of work still to be done. The shifting processes will hopefully settle on some new, measurable architectural standards with which we can educate the up-and-comers. Perhaps the blitz of design processes will settle down enough to create some artifacts of value.
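
As a rough illustration of the kind of semantic layer a DSA might expose, the sketch below shows a minimal data service that returns a single integrated customer record assembled from several siloed sources. The interface, class names and the simple “first source wins” rule are illustrative assumptions only, not a description of any specific SOA or DSA product.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

# Minimal sketch of a data service exposing an integrated "single view"
# over several underlying sources. All names here are illustrative
# assumptions about what a DSA-style semantic layer might look like.

class CustomerDataService(ABC):
    @abstractmethod
    def get_customer(self, customer_id: str) -> Dict:
        """Return one integrated customer record, regardless of source system."""


class SimpleCustomerDataService(CustomerDataService):
    def __init__(self, sources: List[Dict[str, Dict]]):
        # Each source is a mapping of customer_id -> partial record
        # (e.g., one dict per silo: CRM, billing, support).
        self.sources = sources

    def get_customer(self, customer_id: str) -> Dict:
        integrated: Dict = {"customer_id": customer_id}
        for source in self.sources:
            # Later sources only fill in attributes still missing; a real
            # service would apply survivorship and quality rules here.
            for key, value in source.get(customer_id, {}).items():
                integrated.setdefault(key, value)
        return integrated


if __name__ == "__main__":
    crm = {"C1": {"name": "Acme Corp", "segment": "Enterprise"}}
    billing = {"C1": {"name": "ACME Corporation", "balance": 1250.00}}
    service = SimpleCustomerDataService([crm, billing])
    print(service.get_customer("C1"))
    # -> {'customer_id': 'C1', 'name': 'Acme Corp', 'segment': 'Enterprise', 'balance': 1250.0}
```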


Conclusion

Several years ago I was working with a very large utility company in the Midwest. The gentleman I was working with was preparing to retire within a matter of months, and I was asked to prepare a transitional strategy to ensure his knowledge base would be available to his replacement. No small feat. We were reviewing several data models as part of a re-hosting initiative. He had developed the models related to provisioning. I rather glibly asked him what revision this was. The gentleman looked at me very modestly and said, “Oh… We really haven’t changed the business much since I started. I think it’s still the way I made it in ’68. We had a lot of time to think about things back then.” How many developers or modelers are willing to bet that some of their work will be around for 30-plus years?


John Murphy

John is a 1975 graduate of Bridgewater State College, Bridgewater, Massachusetts. Following a brief career as a public and private school teacher, John went to work for Core Laboratories as a geo-technician operating early computerized well-logging units on the Gulf Coast for companies such as Gulf, Exxon and BP. John then joined the R&D staff of Teleco Oilfield Services, a subsidiary of Southern Natural Gas, forming their first data integration and analysis department while building early relational analytical data models that integrated drilling, formation and production data. In the late ’80s John worked as a consultant to the Department of the Army in building the Department of the Army Data Dictionary and the Department of Defense Data Repository System, two early metadata repositories. John also worked with the Defense Information Systems Agency (DISA) on data standardization, data modeling and enterprise data development practices.

John became an independent consultant in 1992. From there, John applied his knowledge of metadata, data architecture and data standardization to developing enterprise data design and management practices at companies such as Qwest Communications, Jeppeson Sanders Flight Information Systems, Interactive Video Enterprises, the Federal Aviation Administration, Cigna Health Care, Safeco Insurance, Marriott International and Ford Motor Corporation. Mr. Murphy provided design and architectural support for several large-scale initiatives including the Canadian ISPR migration, the Mexican National Retirement Systems (Processar) and early internet marketing ventures with Pacific Telesis. John has developed several e-marketing models for view/visit and navigational analysis, along with wireless call switch analysis; both forms of analysis focus on data clean-up and reduction. John also developed several data visualization and analytical processes for the rapid identification and analysis of data anomalies.

Through the remainder of the ’90s and to the present, John has continued consulting in the areas of data warehousing, database architecture, data standardization, data modeling and data migration for companies such as AT&T Broadband, USBank, Marconi Communication, Cigna Health Care and SUN Computers. John’s recent work has focused on data cleansing and standardization based on detailed metadata modeling.
