Meta Data History

By David Marco

Computer-aided software engineering (CASE) tools, introduced in the 1970s, were among the first commercial tools to offer meta data services. CASE tools greatly aid the process of designing databases and software applications; they also store data about the data they manage.

It did not take long before users started asking their CASE tool vendors to build interfaces to link the meta data from various CASE tools together. These vendors were reluctant to build such interfaces because they believed that their own tool’s repository could provide all of the necessary functionality and, understandably, they did not want companies to be able to easily migrate from their tool to a competitor’s tool. Nevertheless, some interfaces were built, either using vendor tools or dedicated interface tools.

In 1987, the need for CASE tool integration prompted the Electronic Industries Alliance (EIA) to begin working on a CASE data interchange format (CDIF), which attempted to tackle the problem by defining meta models for specific CASE tool subject areas by means of an object-oriented entity-relationship modeling technique. In many ways, the CDIF standards came too late for the CASE tool industry.

During the 1980s, several companies, including IBM, announced mainframe-based meta data repository tools. These efforts were the first meta data initiatives, but their scope was limited to technical meta data and almost completely ignored business meta data. Most of these early meta data repositories were just glamorized data dictionaries, intended, like the earlier data dictionaries, for use by DBAs and data modelers. In addition, the companies that created these repositories did little to educate their users about the benefits of these tools. As a result, few companies saw much value in these early repository applications.

It was not until the 1990s, as newer tools expanded the scope of meta data addressed to include business meta data, that some business managers finally began to recognize the value of meta data repositories. Some of the potential benefits of business meta data identified in the industry during this period included:

  • Provide the semantic layer between a company’s systems (operational and business intelligence) and their business users
  • Reduce training costs
  • Make strategic information (e.g., data warehousing, CRM, SCM) much more valuable by helping analysts make more profitable decisions
  • Create actionable information
  • Limit incorrect decisions

The meta data repositories of the 1990s operated in a client-server environment rather than on the traditional mainframe platform that had previously been the norm. The introduction of decision support tools requiring access to meta data reawakened the slumbering repository market. Vendors such as Rochade, RELTECH Group, and BrownStone Solutions were quick to jump into the fray with exciting new repository products. Many older, established computing companies recognized the market potential and attempted, sometimes successfully, to buy their way in by acquiring these pioneer repository vendors. For example, Platinum Technology purchased RELTECH, BrownStone, and Logic Works, and was then itself acquired by Computer Associates in 1999.

With the growing focus on the World Wide Web, data warehousing, and the looming year 2000 (Y2K) deadline, the mid-to-late 1990s saw meta data become more relevant to corporations struggling to understand their information resources. Efforts began to standardize meta data definition and exchange between applications across the enterprise. Examples include the EIA's CDIF standards, whose final parts were published in the mid-1990s, and the Dublin Core Metadata Element Set, which originated at a 1995 workshop in Dublin, Ohio and is maintained by the Dublin Core Metadata Initiative (DCMI). The first parts of the ISO/IEC 11179 standard for the Specification and Standardization of Data Elements were published between 1994 and 1999. Microsoft and Oracle battled over development of a meta data standard throughout this period: the Object Management Group (OMG), supported by Oracle, developed the Common Warehouse Metamodel (CWM) beginning in 1998, while rival Microsoft backed the Meta Data Coalition's (MDC) Open Information Model (OIM). By 2000, the two standards had been merged into CWM, and many meta data repositories began promising adoption of the CWM standard.

In the early years of the 21st century, existing meta data repositories were updated for deployment on the web, and some level of CWM support was introduced in the products. During this period, many data integration vendors began offering meta data management as an additional product. Examples include Ascential Software's MetaStage (which came to Ascential by way of Informix), Informatica's SuperGlue (now Metadata Manager), and Ab Initio's Enterprise Meta>Environment (EME).

However, relatively few companies actually purchased or developed meta data repositories, let alone achieved the ideal of implementing an effective enterprise-wide Managed Meta Data Environment as defined in “Universal Meta Data Models”. There are a number of reasons for this, including:

  • The scarcity of people with real world skills
  • The difficulty of the effort
  • The less-than-stellar success of some of the initial efforts at some companies
  • The relative stagnation of the tool market after the initial burst of interest in the late 1990s
  • The still less-than-universal understanding of the business benefits
  • The overly heavy emphasis many in the industry placed on legacy applications and technical meta data

As the decade proceeds, companies are beginning to focus more on the need for and importance of meta data. Attention is also expanding beyond the traditional structured sources to include unstructured sources.

Some of the factors driving this renewed interest in meta data management are:

  • Recent entry into the market of larger players like IBM
  • The challenges that some companies are facing in trying to address regulatory requirements like Sarbanes-Oxley and privacy requirements with unsophisticated tools
  • The emergence of enterprise-wide initiatives like information governance, compliance, enterprise architecture and automated software reuse
  • Improvements to the existing meta data standards, such as the RFP release for the new OMG standard, the Information Management Metamodel (IMM, aka CWM 2.0), which will replace CWM
  • A recognition at the highest levels of the organization by some of the most sophisticated companies and organizations that information is an asset (for some companies the most critical asset), and must be actively and effectively managed

Special thanks to Steve Hyatt

About the Author

Mr. Marco is an internationally recognized expert in the fields of enterprise information management, data warehousing and business intelligence, and is the world’s foremost authority on meta data management. He is the author of several widely acclaimed books, including “Universal Meta Data Models” and “Building and Managing the Meta Data Repository: A Full Life-Cycle Guide”. Mr. Marco has taught at the University of Chicago and DePaul University; in 2004 he was selected for Crain’s Chicago Business’s prestigious “Top 40 Under 40”, and he is the chairman of the Enterprise Information Management Institute (www.EIMInstitute.org). He is the founder and President of EWSolutions, a GSA-schedule, Chicago-headquartered strategic partner and systems integrator dedicated to providing companies and large government agencies with best-in-class solutions using data warehousing, enterprise architecture, data governance and managed meta data environment technologies (www.EWSolutions.com). He may be reached directly via email at DMarco@EWSolutions.com.

 