This is Part One of a Two-Part Series.
Many government agencies and corporations are currently evaluating the meta data tools on the market to decide which of them, if any, meet the requirements of their meta data management
solutions. Often these same organizations want to know what functionality and features they should be looking for in this tool category. Unfortunately, this question becomes very
complicated because each tool vendor puts its own “marketing spin” on which functions and features are really the most advantageous. This leaves the consumer with a very
difficult task indeed, especially when none of the vendors’ tools seem to fully fit the requirements of your meta data management solution. At EWSolutions we have several clients
with these exact concerns about the tools on the market.
Although I have no plans to start a software company, I would like to take this opportunity to play software designer and present my optimal meta data tool’s key functionality.
One of the challenges with this exercise is that meta data functionality has a great deal of depth and breadth. Therefore, to categorize our tool’s functionality properly, I will use
the six major components of a managed meta data environment (MME):
- Meta Data Sourcing & Meta Data Integration Layers
- Meta Data Repository
- Meta Data Management Layer
- Meta Data Marts
- Meta Data Delivery Layer
I will now walk through each of these MME components and describe the key functionality that my optimal meta data tool would contain.
Meta Data Sourcing & Meta Data Integration Layers
For simplicity’s sake I will discuss this “dream” tool’s functionality for the meta data sourcing and meta data integration layers together. The goal of these two
layers is to extract the meta data from its sources, integrate it where necessary, and bring it into the Meta Data Repository.
It is important for the meta data sourcing technology to be able to work against mainframe applications, distributed systems and sources (databases, files, spreadsheets, etc.) on a network. These
functions would have to run in each of these environments so that the meta data could be brought into the repository. I did not include AS/400 environments in my list of platforms
because of their fairly sparse use; however, if your information technology (IT) shop’s preferred application platform is the AS/400, clearly your optimal meta data tool would need to work on that platform as well.
Many of the current meta data integration tools come with a series of prebuilt meta data integration bridges. The optimal meta data tool would also have these prebuilt bridges. Where our optimal
tool would differ from the vendor tools is that it would have bridges to all of the major relational database management systems (e.g., Oracle, DB2, SQL Server, Informix, Sybase and
Teradata), the most common vendor packages (e.g., Siebel, SAP, PeopleSoft, Oracle, etc.), several code parsers (COBOL, JCL, C++, SQL, XML, etc.), the key data modeling tools (ERwin, Designer, Rational
Rose, etc.), the top ETL (extraction, transformation and load) tools (e.g., Informatica, Ascential) and the major front-end tools (e.g., Business Objects, Cognos, Hyperion, etc.).
As much as possible I would want my meta data tool to utilize XML (extensible markup language) as the transport mechanism for the meta data. While XML cannot directly interface with every meta
data source, it would cover a great number of them.
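As a sketch of what XML as a transport mechanism might look like, the fragment below parses a bridge’s XML payload into records ready for repository loading, using Python’s standard library. The element and attribute names are my own invention for illustration, not any standard interchange format.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload a sourcing bridge might emit for one table's
# meta data; element and attribute names are illustrative only.
payload = """
<metadata source="sales_db">
  <table name="CUSTOMER">
    <column name="CUST_ID" type="INTEGER" nullable="false"/>
    <column name="CUST_NAME" type="VARCHAR(100)" nullable="true"/>
  </table>
</metadata>
"""

def parse_metadata(xml_text):
    """Turn the XML transport payload into plain records for repository load."""
    root = ET.fromstring(xml_text)
    records = []
    for table in root.findall("table"):
        for col in table.findall("column"):
            records.append({
                "source": root.get("source"),
                "table": table.get("name"),
                "column": col.get("name"),
                "type": col.get("type"),
                "nullable": col.get("nullable") == "true",
            })
    return records

rows = parse_metadata(payload)
```

Because the payload is plain XML, the same parsing step works whether the meta data came off a mainframe extract, a network file or a database catalog.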
These meta data bridges would not just bring meta data from its source and load it into the repository. They would be bi-directional, allowing meta data to be extracted from the meta data
repository and brought back into the source tool.
Lastly, these meta data bridges wouldn’t just be extraction processes; they would also have the ability to act as “pointers” to where the meta data is located. This distributed meta data
capability is very important for a repository to have.
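One way to picture a bridge that supports both materialized loading and the “pointer” (distributed) style, plus extraction back out of the repository, is the following Python sketch. All names are hypothetical, and a plain dictionary stands in for the repository.

```python
class MetaDataBridge:
    """Illustrative bi-directional bridge: it can materialize meta data into
    the repository, or register only a pointer to where the meta data lives."""

    def __init__(self, repository):
        self.repository = repository  # plain dict standing in for the repository

    def load(self, key, metadata):
        # Materialized mode: the meta data itself is copied into the repository.
        self.repository[key] = {"mode": "materialized", "value": metadata}

    def register_pointer(self, key, location):
        # Distributed mode: only a pointer to the source location is stored.
        self.repository[key] = {"mode": "pointer", "value": location}

    def extract(self, key):
        # Bi-directional: meta data can be read back out of the repository.
        entry = self.repository[key]
        if entry["mode"] == "pointer":
            return "resolve at " + entry["value"]  # caller follows the pointer
        return entry["value"]

repo = {}
bridge = MetaDataBridge(repo)
bridge.load("sales_db.CUSTOMER", {"columns": ["CUST_ID", "CUST_NAME"]})
bridge.register_pointer("hr_db.EMPLOYEE", "ldap://hr-server/metadata")
```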
Error Checking & Restart
Any high-quality meta data tool would have extensive error checking built into the sourcing and integration layers. Meta data in an MME, like data in a data warehouse, must be of high
quality or it will have little value. This error checking facility would examine the meta data as it is read and capture statistics on any errors that the process encounters
(meta meta data). In addition, the tool would support configurable error levels, giving the tool administrator the ability to choose the action taken for each type of error.
For example, the meta data could be 1) flagged with an informational message and loaded; 2) flagged as an error and not loaded into the repository; or
3) flagged as a critical error, stopping the entire meta data integration process.
This process would also have “check points” that allow the tool administrator to restart it. These check points would be placed at the proper locations to ensure that the
process could be restarted with the least impact on the meta data itself and on its sourcing locations.
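A minimal sketch of how configurable error levels and check points might fit together is shown below. The function names, error codes and checkpoint structure are all hypothetical; the point is only that the administrator configures the action per error type, and the loop can resume from the last check point.

```python
# Hypothetical administrator-configurable actions for each error type.
INFO, REJECT, ABORT = "info", "reject", "abort"

class MetaDataIntegrationError(Exception):
    pass

def integrate(records, validate, error_actions, checkpoint):
    """Load records starting at the last check point; on a validation error,
    act according to the administrator-configured severity."""
    loaded, stats = [], {"info": 0, "rejected": 0}
    for i, rec in enumerate(records):
        if i < checkpoint["position"]:       # restart: skip work already done
            continue
        error = validate(rec)                # returns None or an error code
        if error is not None:
            action = error_actions.get(error, INFO)
            if action == ABORT:              # critical: stop the whole process
                raise MetaDataIntegrationError(
                    "critical error %r at record %d" % (error, i))
            if action == REJECT:             # flagged as error, not loaded
                stats["rejected"] += 1
                checkpoint["position"] = i + 1
                continue
            stats["info"] += 1               # informational: load anyway
        loaded.append(rec)
        checkpoint["position"] = i + 1       # check point after each record
    return loaded, stats

records = [{"name": "CUSTOMER"}, {"name": ""}, {"name": "ORDER"}]
checkpoint = {"position": 0}
loaded, stats = integrate(
    records,
    lambda r: "missing_name" if not r["name"] else None,
    {"missing_name": REJECT},
    checkpoint,
)
```

Because the checkpoint is advanced after every record, a rerun with the same checkpoint value skips everything already processed, which is the “least impact” restart behavior described above.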
Meta Data Repository
The meta data repository component is the physical database that persistently catalogs and stores the actual meta data. The repository and its corresponding meta model form the
backbone of the MME. Therefore, in listing the optimal meta data tool’s functionality I will pay special attention to the design and implementation of the meta model.
Meta Model Design
A meta model is a physical database schema for meta data. Whenever an MME is implemented, integration processes need to be custom built to bring meta data into the
repository. Therefore, a good meta model needs to be understandable to the repository developers working with it. As a result, the meta model should not be designed in a highly abstracted,
object-oriented manner; when such a cryptic (abstracted) object-oriented design is used to construct the meta model, it becomes unwieldy and difficult for IT developers to work with.
Instead, mixing classic relational modeling with structured object-oriented design is the preferable approach to designing a meta model.
The possible exception to this guideline would be an abstracted object-oriented model with relational views built on top of it that allow for read/write/update capabilities. These views
must be understandable and fully extensible.
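To illustrate this exception, here is a minimal sketch using SQLite through Python’s sqlite3 module: a deliberately abstracted object/property meta model with a plain relational view layered on top. The schema is illustrative only, and note that write/update access through such a view would additionally require mechanisms like INSTEAD OF triggers in most databases; only the readable view is shown.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Highly abstracted meta model: everything is an object with named properties.
CREATE TABLE meta_object   (obj_id INTEGER PRIMARY KEY, obj_type TEXT);
CREATE TABLE meta_property (obj_id INTEGER, prop_name TEXT, prop_value TEXT);

-- Relational view exposing column meta data in plain, understandable terms.
CREATE VIEW column_metadata AS
SELECT o.obj_id,
       MAX(CASE WHEN p.prop_name = 'table'  THEN p.prop_value END) AS table_name,
       MAX(CASE WHEN p.prop_name = 'column' THEN p.prop_value END) AS column_name,
       MAX(CASE WHEN p.prop_name = 'type'   THEN p.prop_value END) AS data_type
FROM meta_object o
JOIN meta_property p ON p.obj_id = o.obj_id
WHERE o.obj_type = 'COLUMN'
GROUP BY o.obj_id;
""")

con.execute("INSERT INTO meta_object VALUES (1, 'COLUMN')")
con.executemany("INSERT INTO meta_property VALUES (?, ?, ?)",
                [(1, "table", "CUSTOMER"),
                 (1, "column", "CUST_ID"),
                 (1, "type", "INTEGER")])

row = con.execute(
    "SELECT table_name, column_name, data_type FROM column_metadata").fetchone()
```

A repository developer querying `column_metadata` sees ordinary named columns and never has to untangle the generic object/property tables underneath, which is exactly what makes the abstracted design workable.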
Meta Model Implementation
The meta data repository must not be housed in a proprietary database management system. Instead, it should be storable on any of the major open relational database platforms (e.g., SQL Server, Oracle,
DB2, Informix, Teradata, Sybase) so that standard SQL can be used with the repository.
The IT departments of many government agencies and large corporations are looking to define an enterprise-level classification/definition scheme for their data. This semantic taxonomy would give
these organizations the ability to classify their data in order to identify data and process redundancies in their IT environment. Therefore, the optimal meta data tool would provide the
capabilities to capture, maintain and publish a semantic taxonomy for the meta data in the repository.
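A minimal sketch of how such a semantic taxonomy might be captured and used to surface redundancies follows. The taxonomy nodes and the physical element names are invented for illustration; the idea is simply that classifying elements against shared taxonomy nodes makes duplicated data visible.

```python
from collections import defaultdict

# Illustrative enterprise taxonomy: a set of classification paths.
taxonomy = {
    "Party/Customer/Identifier",
    "Party/Customer/Name",
    "Finance/Revenue/Amount",
}

def classify(assignments):
    """Map each physical data element to a taxonomy node and report
    potential redundancies (several elements classified under one node)."""
    by_node = defaultdict(list)
    for element, node in assignments.items():
        if node not in taxonomy:
            raise ValueError("unknown taxonomy node: " + node)
        by_node[node].append(element)
    return {node: elems for node, elems in by_node.items() if len(elems) > 1}

redundant = classify({
    "CRM.CUSTOMER.CUST_ID":      "Party/Customer/Identifier",
    "BILLING.CLIENT.CLIENT_NO":  "Party/Customer/Identifier",
    "GL.REVENUE.AMT":            "Finance/Revenue/Amount",
})
```

Here two physically distinct columns resolve to the same semantic node, flagging a likely data redundancy across the CRM and billing systems.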
In part two I will conclude designing our optimal meta data management tool by presenting its key functionality in the Meta Data Management and Meta Data Delivery layers of a managed meta data environment.
© 2004 Enterprise Warehousing Solutions, Inc. All Rights Reserved