Designing the Optimal Meta Data Tool - Part One of Two
By David Marco
Oftentimes organizations want to know what types of functionality and features they should look for in this tool category. Unfortunately, this question becomes very complicated, as each tool vendor has its own personalized “marketing spin” as to which functions and features are really the most advantageous. This leaves the consumer with a very difficult task indeed, especially when none of the vendors’ tools seems to fully fit your meta data management solution’s requirements. At EWSolutions we have several clients with these exact concerns about the tools on the market.
Although I have no plans to start a software company, I would like to take this opportunity to play software designer and present my optimal meta data tool’s key functionality.
One of the challenges with this exercise is that meta data management functionality has a great deal of depth and breadth. Therefore, in order to categorize our tool’s functionality properly, I will use the six major components of a managed meta data environment (MME):
- Meta Data Sourcing
- Meta Data Integration Layers
- Meta Data Repository
- Meta Data Management Layer
- Meta Data Marts
- Meta Data Delivery Layer
I will now walk through each of these MME components and describe the key functionality that my optimal meta data management tool would contain.
Meta Data Sourcing & Meta Data Integration Layers
For simplicity’s sake, I will discuss this “dream” tool’s functionality for the meta data sourcing and meta data integration layers together. The goal of these two layers is to extract meta data from its source, integrate it where necessary, and bring it into the meta data repository.
It is important for the meta data sourcing technology to be able to work on mainframe applications, on distributed systems, and on files (databases, flat files, spreadsheets, etc.) accessed over a network. These functions would have to run in each of these environments so that the meta data could be brought into the repository. I did not include AS/400 environments in my list of platforms because of their fairly sparse use; however, if your information technology (IT) shop’s preferred application platform is the AS/400, your optimal meta data tool would clearly need to work on that platform as well.
Many of the current meta data management tools come with a series of prebuilt meta data integration bridges. The optimal meta data tool would also have these prebuilt bridges. Where our optimal tool would differ from the vendor tools is that it would have bridges to all of the following:
- The major relational database management systems (e.g. Oracle, DB2, SQL Server, Informix, Sybase and Teradata)
- The most common vendor packages (e.g. Siebel, SAP, PeopleSoft, Oracle, etc.)
- Several code parsers (COBOL, JCL, C++, SQL, XML, etc.)
- Key data modeling tools (ERwin, Rational Rose, etc.)
- Top ETL (extraction, transformation and load) tools (e.g. Informatica, Ascential (IBM))
- The major front-end tools (e.g. Business Objects, Cognos, etc.)
As much as possible, I would want my meta data tool to utilize XML (Extensible Markup Language) as the transport mechanism for the meta data. While XML cannot directly interface with every meta data source, it can interface with a great number of them.
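To make the idea of XML as a meta data transport concrete, here is a minimal sketch using only Python’s standard library. The element and attribute names (`table`, `column`, `nullable`) are invented for illustration and are not drawn from any real interchange standard:

```python
# Illustrative sketch: serializing column-level meta data for one table
# into an XML document suitable for transport between tools.
import xml.etree.ElementTree as ET

def columns_to_xml(table_name, columns):
    """Serialize a table's column meta data into an XML string."""
    root = ET.Element("table", name=table_name)
    for col in columns:
        ET.SubElement(root, "column",
                      name=col["name"],
                      type=col["type"],
                      nullable=str(col["nullable"]).lower())
    return ET.tostring(root, encoding="unicode")

xml_doc = columns_to_xml("CUSTOMER", [
    {"name": "CUST_ID", "type": "INTEGER", "nullable": False},
    {"name": "CUST_NAME", "type": "VARCHAR(100)", "nullable": True},
])
print(xml_doc)
```

In practice, a real tool would follow a published interchange format rather than an ad hoc schema, but the principle is the same: a self-describing text document that any bridge on any platform can parse.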
These meta data bridges would not just pull meta data from its source and load it into the repository. They would be bi-directional, allowing meta data to be extracted from the meta data repository and sent back to the source tool.
Error Checking & Restart
Any high-quality meta data management tool would have extensive error checking built into the sourcing and integration layers. Meta data in an MME, like data in a data warehouse, must be of high quality or it will have little value. This error checking facility would validate the meta data as it is read and capture statistics on any errors the process encounters (meta meta data). In addition, the tool would support configurable error levels, giving the tool administrator the ability to define the action taken when an error occurs. For example, the meta data could be 1) flagged with an informational message and loaded anyway; 2) flagged as an error and not loaded into the repository; or 3) flagged as a critical error, stopping the entire meta data integration process.
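The three error levels above can be sketched as follows. The level names, the `classify` callback, and the handler interface are all hypothetical, invented purely to illustrate the behavior an administrator would configure:

```python
# Sketch of administrator-configurable error levels in a meta data load.
INFO, ERROR, CRITICAL = "info", "error", "critical"

class CriticalLoadError(Exception):
    """Raised to halt the entire meta data integration process."""

def process_records(records, classify):
    """Load records, applying the configured error levels.

    classify(record) returns None for a clean record, or one of the levels.
    Returns (loaded, stats); stats counts errors by level (meta meta data).
    """
    loaded, stats = [], {INFO: 0, ERROR: 0, CRITICAL: 0}
    for rec in records:
        level = classify(rec)
        if level is None:
            loaded.append(rec)
        elif level == INFO:
            stats[INFO] += 1
            loaded.append(rec)            # flagged, but still loaded
        elif level == ERROR:
            stats[ERROR] += 1             # flagged and skipped
        elif level == CRITICAL:
            stats[CRITICAL] += 1
            raise CriticalLoadError(f"critical error in {rec!r}")
    return loaded, stats

loaded, stats = process_records(
    ["good", "odd", "bad"],
    lambda r: {"odd": INFO, "bad": ERROR}.get(r))
```

The key design point is that the same error becomes informational, record-skipping, or process-stopping depending on configuration, not code changes.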
This process would also have “check points” that allow the tool administrator to restart it. These check points would be placed at the proper locations to ensure that the process could be restarted with the least impact on the meta data itself and on its sourcing locations.
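A minimal sketch of check-point-based restart, assuming a simple file-backed marker of completed steps (a real tool would record this state in the repository itself):

```python
# Sketch: run named integration steps in order, skipping any step that
# completed before a prior failure, so a restart resumes where it stopped.
import json
import os
import tempfile

def run_with_checkpoints(steps, state_path):
    """steps is a list of (name, callable) pairs executed in order."""
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = set(json.load(f))
    for name, fn in steps:
        if name in done:
            continue                      # completed on an earlier run
        fn()
        done.add(name)
        with open(state_path, "w") as f:  # record the check point
            json.dump(sorted(done), f)

# Simulate a transient failure in the second step, then a restart.
calls = []
state_path = os.path.join(tempfile.mkdtemp(), "checkpoints.json")
fail_once = {"flag": True}

def extract(): calls.append("extract")
def load():
    if fail_once["flag"]:
        fail_once["flag"] = False
        raise RuntimeError("transient failure")
    calls.append("load")

steps = [("extract", extract), ("load", load)]
try:
    run_with_checkpoints(steps, state_path)
except RuntimeError:
    pass
run_with_checkpoints(steps, state_path)   # restart: "extract" is not re-run
```

Because the completed-step marker is persisted after each step, the restart re-reads source systems only for the work that actually failed.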
Meta Data Repository
The meta data repository component is the physical database that persistently catalogs and stores the actual meta data. The repository and its corresponding meta model form the backbone of the MME. Therefore, in listing the optimal meta data tool’s functionality, I will pay special attention to the design and implementation of the meta model.
A meta model is a physical database schema for meta data. Any time an MME is implemented, there are integration processes that need to be custom built in order to bring meta data into the repository. Therefore, a good meta model needs to be understandable to the repository developers working with it. As a result, the meta model should not be designed in a highly abstracted, object-oriented manner; when highly abstracted object-oriented design is used to construct the meta model, it becomes unwieldy and difficult for IT developers to work with. Instead, mixing classic relational modeling with structured object-oriented design is the preferable approach.
The possible exception to this guideline would be an abstracted object-oriented model with relational views built on top of it that allow for read/write/update access. These views must be understandable and fully extensible.
The meta data repository must not be housed in a proprietary database management system. Instead it should be stored on any of the major open relational database platforms (e.g. SQL Server, Oracle, DB2, Informix, Teradata, Sybase) so that standard SQL can be used with the repository.
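As a sketch of what "standard SQL against an open relational platform" means in practice, here is a tiny, invented fragment of a relational-style meta model, created and queried with plain SQL (run here against SQLite for convenience; a production meta model would be far larger and live on one of the platforms listed above):

```python
# Illustrative two-table meta model fragment: tables and their columns.
# Table and column names are hypothetical, for illustration only.
import sqlite3

DDL = """
CREATE TABLE md_table (
    table_id    INTEGER PRIMARY KEY,
    table_name  TEXT NOT NULL,
    source_sys  TEXT NOT NULL
);
CREATE TABLE md_column (
    column_id   INTEGER PRIMARY KEY,
    table_id    INTEGER NOT NULL REFERENCES md_table(table_id),
    column_name TEXT NOT NULL,
    data_type   TEXT NOT NULL,
    definition  TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO md_table VALUES (1, 'CUSTOMER', 'CRM')")
conn.execute(
    "INSERT INTO md_column VALUES "
    "(1, 1, 'CUST_ID', 'INTEGER', 'Customer surrogate key')")

# Any developer who knows SQL can answer meta data questions directly.
row = conn.execute(
    "SELECT t.table_name, c.column_name "
    "FROM md_table t JOIN md_column c ON c.table_id = t.table_id").fetchone()
```

Because the schema is plain relational tables, no proprietary query language or API stands between the developers and their meta data.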
Many government agencies and large corporate IT departments are looking to define an enterprise-level classification/definition scheme for their data. This semantic taxonomy would provide these organizations with the ability to classify their data in order to identify data and process redundancies in their IT environment. Therefore, the optimal meta data tool would provide the capability to capture, maintain and publish a semantic taxonomy for the meta data in the repository.
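A semantic taxonomy is, at its simplest, a named hierarchy that data elements can be classified against. The sketch below shows one possible in-memory shape; the node names and path syntax are hypothetical:

```python
# Sketch of a hierarchical semantic taxonomy with path-based lookup.
class TaxonomyNode:
    def __init__(self, name):
        self.name = name
        self.children = {}

    def add(self, path):
        """Add a classification path such as 'Party/Customer/Retail'."""
        node = self
        for part in path.split("/"):
            node = node.children.setdefault(part, TaxonomyNode(part))
        return node

    def find(self, path):
        """Return the node at the given path, or None if unclassified."""
        node = self
        for part in path.split("/"):
            node = node.children.get(part)
            if node is None:
                return None
        return node

root = TaxonomyNode("enterprise")
root.add("Party/Customer/Retail")
root.add("Party/Supplier")
```

Once every data element carries a path in a shared taxonomy like this, two systems that both classify an attribute under the same node become immediately visible as a candidate redundancy.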
Next month I will continue designing our optimal meta data management tool by presenting its key functionality in the Meta Data Management, Meta Data Marts and Meta Data Delivery layers of a managed meta data environment (MME).
About the Author
Mr. Marco is an internationally recognized expert in the fields of enterprise information management, data warehousing and business intelligence, and is the world’s foremost authority on meta data management. He is the author of several widely acclaimed books, including “Universal Meta Data Models” and “Building and Managing the Meta Data Repository: A Full Life-Cycle Guide”. Mr. Marco has taught at the University of Chicago and DePaul University; in 2004 he was named to the prestigious Crain’s Chicago Business “Top 40 Under 40”, and he is the chairman of the Enterprise Information Management Institute (www.EIMInstitute.org). He is the founder and President of EWSolutions, a Chicago-headquartered, GSA-schedule strategic partner and systems integrator dedicated to providing companies and large government agencies with best-in-class solutions using data warehousing, enterprise architecture, data governance and managed meta data environment technologies (www.EWSolutions.com). He may be reached directly via email at DMarco@EWSolutions.com.