Eyes on Data: The Value of Data Products and Innovation for AI Optimization

Artificial intelligence has entered a phase that feels both inevitable and strangely precarious. The hype is unmistakable. So is the sense that we are still early — too early — for the broader economic transformation enthusiasts promise. If history is a guide, the current wave of investment and experimentation may look like a bubble that will eventually burst. But if the dot-com era taught us anything, it’s that bubbles clarify markets rather than invalidate them. The internet didn’t vanish after the tech bubble collapsed in 2000. Instead, the shakeout separated winners from losers while the web became the “killer app” that redefined how value is created, delivered, and scaled. Two decades later, the largest companies in the world were built upon that foundation. 

AI is on a similar trajectory. The message for data leaders and business executives is simple: Stay the course. Pursue innovation and proceed with investment. Organizations that do not adopt and expand their use of AI will eventually find themselves uncompetitive. Survival in an AI-mediated economy will depend on it. 

Garbage In, Garbage Out — Revisited 

Early adopters of modern AI have discovered a reality that computer scientists have understood for decades: Value depends on quality. The value of AI does not emerge solely from the models selected or the use cases targeted. It depends as much, if not more, on the data those models consume.

Since generative AI entered the public sphere, non-technical users have marveled at its fluency while simultaneously being disturbed by its occasional “hallucinations” — confident, authoritative answers that are incorrect or misleading. These model behaviors reveal a dependency that often remains underappreciated: AI learns only from what we give it. This has catalyzed renewed attention to data quality at scale. Organizations are rediscovering something the data management community has long asserted: Garbage in, garbage out is not a slogan — it is a business outcome. 

Unsurprisingly, those who see AI’s potential are turning its automation power on data quality itself. Cleaning, linking, contextualizing, and governing data at scale is becoming a first-order requirement for AI performance. But data quality alone is insufficient for the next horizon of AI optimization.

Data Do Not Interpret Themselves: Enter Ontology 

At its core, information is communication. Communication requires shared meaning. And without shared meaning, even the most elegant systems amplify confusion rather than insight. For decades, data management leaders have evangelized the importance of shared meaning — through standards, governance, semantics, ontologies, and controlled vocabularies. Efficiency and effectiveness hinge on a collective understanding of what our data represents. 

Ontologies have long existed in the background of knowledge-intensive domains such as life sciences, defense, and finance. But generative AI has propelled ontology into mainstream vocabulary. Simply put, an ontology formally represents shared meaning. It defines the things that matter — entities, relationships, constraints, and context — in a way that both humans and machines can understand. 
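
To make this concrete, the sketch below builds a tiny ontology fragment with Python’s open-source rdflib library. The ex: namespace and the Trade and Counterparty terms are hypothetical illustrations, not drawn from any published ontology; the point is simply that entities, relationships, constraints, and context can be stated formally enough for both humans and machines to read.

```python
# Minimal illustrative ontology fragment (pip install rdflib).
# All ex: class and property names are hypothetical examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/ontology/")
g = Graph()
g.bind("ex", EX)

# Entities: the classes of things that matter in the domain.
g.add((EX.Trade, RDF.type, OWL.Class))
g.add((EX.Counterparty, RDF.type, OWL.Class))

# Relationships: a property linking trades to counterparties,
# with constraints on what it may connect (domain and range).
g.add((EX.hasCounterparty, RDF.type, OWL.ObjectProperty))
g.add((EX.hasCounterparty, RDFS.domain, EX.Trade))
g.add((EX.hasCounterparty, RDFS.range, EX.Counterparty))

# Context: human-readable labels so people and machines share one meaning.
g.add((EX.Counterparty, RDFS.label, Literal("Counterparty", lang="en")))

# Serialize as Turtle: readable by a person, loadable by any RDF tool.
print(g.serialize(format="turtle"))
```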

Ontology provides the connective tissue between information, insight, and action. It encapsulates the semantics necessary to scale decision-making across large, distributed environments with trust and confidence. In an AI-fueled world, ontology is becoming the mechanism for human-in-the-loop oversight at design time, rather than intervention after errors occur.

AI Optimization Through Knowledge Graphs: “Gold In, Gold Out”

Knowledge graphs put ontologies to work. They are data structures optimized for machine processing of meaning and context. Unlike most traditional datasets, knowledge graphs are inherently intelligible — a human can read them, inspect them, and validate their structure and logic. This is human-in-the-loop from beginning to end. 

Ontology-driven knowledge graphs enable interoperability within and across organizations by aligning systems on shared meaning. They invert the old paradigm. Instead of battling “garbage in, garbage out,” knowledge graphs allow us to pursue gold in, gold out. Research now shows that ontology-driven knowledge graphs improve AI performance so substantially that the conversation is shifting from error avoidance to value amplification. Why leave outputs to probabilistic guesses when the domain knowledge already exists? 
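
As a rough sketch of what “gold in, gold out” looks like in practice, the example below (again using rdflib, with hypothetical entities and facts) shows the property that matters: every answer a knowledge graph returns is an asserted, inspectable statement that a human can trace and validate, not a probabilistic guess.

```python
# A tiny knowledge graph a human can read, inspect, and query.
# All entities and facts are hypothetical illustrations.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/kg/")
g = Graph()
g.bind("ex", EX)

# Curated facts, each one an explicit, auditable statement.
g.add((EX.AcmeCorp, RDF.type, EX.Counterparty))
g.add((EX.trade42, RDF.type, EX.Trade))
g.add((EX.trade42, EX.hasCounterparty, EX.AcmeCorp))

# A SPARQL query returns only what the graph actually asserts,
# which is what makes it suitable for grounding AI outputs.
results = g.query("""
    PREFIX ex: <http://example.org/kg/>
    SELECT ?trade ?counterparty WHERE {
        ?trade ex:hasCounterparty ?counterparty .
    }
""")
for trade, counterparty in results:
    print(trade, "->", counterparty)
```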

Ontology-driven knowledge graphs enrich generative AI with expert meaning — and meaning is the currency of reasoning. 

From Data Mesh to Data Products 

While ontology and knowledge graphs address semantics and interoperability, the operational packaging of data matters as well. Data Mesh helped popularize the concept of data products — data managed as productized assets with clear ownership, context, and value propositions. 

In 2025, the DPROD standard formalized what constitutes a data product optimized for reuse, intelligibility, and control. Standards matter because they accelerate scale. They make discovery easier, reduce friction, enable automation, and allow marketplaces to emerge. Data products serve as the bridge between data producers and data consumers, whether those consumers are humans, applications, or AI models. 
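
As an illustrative sketch of what such a standard enables, a data product can be described as machine-readable metadata that tools, catalogs, and marketplaces can discover automatically. The example below uses rdflib with the standard DCAT vocabulary; the dprod: terms shown (DataProduct, dataProductOwner, outputPort) are recalled from the public DPROD specification and should be verified against the published standard before use.

```python
# Hedged sketch: a data product described as discoverable metadata.
# Verify dprod: term IRIs against the published DPROD specification;
# the product and team names here are hypothetical.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCAT, DCTERMS, RDF

DPROD = Namespace("https://ekgf.github.io/dprod/")
EX = Namespace("http://example.org/products/")

g = Graph()
g.bind("dprod", DPROD)
g.bind("ex", EX)

# The data product itself: a productized asset with clear ownership.
g.add((EX.counterpartyReference, RDF.type, DPROD.DataProduct))
g.add((EX.counterpartyReference, DCTERMS.title,
       Literal("Counterparty Reference Data Product")))
g.add((EX.counterpartyReference, DPROD.dataProductOwner, EX.referenceDataTeam))

# An output port: the contract that consumers, human or machine, rely on.
g.add((EX.counterpartyPort, RDF.type, DCAT.DataService))
g.add((EX.counterpartyReference, DPROD.outputPort, EX.counterpartyPort))

print(g.serialize(format="turtle"))
```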

The Data Economy Begins With Data Products 

When data products are enriched with ontologies and expressed as knowledge graphs, they become more than operational assets — they become economic assets. These ontology-driven data products improve AI performance while enabling new forms of exchange, licensing, and reuse. They are the foundation for data marketplaces and, by extension, a new data economy. 

A marketplace cannot function without shared meaning, trust, and interoperability. Data products provide the mechanism for all three. In such an ecosystem, a rising tide truly can lift all boats. Organizations benefit not only from improved internal AI outcomes, but from the positive externalities generated by shared standards, shared semantics, and shared innovation. 

The Work Ahead 

To fully realize AI optimization, business applications must leverage ontology-driven data products as a foundational ingredient. Beginning this year, the EDM Association will be partnering with our technology members to develop and adopt ontology-driven knowledge graphs as data products for their solutions. Members who adopt these solutions will not only enhance their AI performance, but also realize meaningful gains in system interoperability. 

We will begin with an official DCAM data product (DCAM is our data management framework, the Data Management Capability Assessment Model), followed by releases of FIBO (Financial Industry Business Ontology) data products informed by use cases defined through our communities of practice. This collaborative approach matches the reality that AI enablement is no longer linear — it is ecosystemic. Value emerges through alignment. 

Mission and Call to Action 

At the EDM Association, our mission is to elevate the practice of data management as a crucial function for all businesses, organizations, and the public good. But we cannot fulfill this mission without the support, participation, and leadership of our community.

Our members are our mechanism for innovation. They shape our standards. They define our use cases. They drive the marketplace forward and extend the value of our collective capabilities. To them, we offer our gratitude. To the broader industry, we offer an invitation: Join us, adopt our frameworks, contribute to our communities of practice, and help build the infrastructure that will enable the data economy to flourish. 

The journey to AI optimization has begun. It will not be linear. It will not be effortless. But the stakes — and the rewards — could not be higher. 

Learn More  

  • Please note: DCAM is available exclusively to EDM Association member organizations. Not yet an EDM Association member? Learn about the benefits of membership or contact the team. 
  • Read our previous article on TDAN, and contact us to learn more and get involved, whether you are a data-rich company or a corporate service provider.

This quarter’s column contributed by:    

Jim Halcomb, Chief Research & Development Officer, EDM Association   

Jim Halcomb is a strategy, data management, and cybersecurity executive with 30 years of international business experience. Jim leads EDM Association’s Communities of Practice, Best Practices Frameworks (DCAM & CDMC) and Training & Certification programs.   

EDM Association

EDM Association (formerly EDM Council) is the leading global community of data experts and practitioners, setting data best practices and standards across sectors, collaborating to produce practical innovations, and offering hands-on help to its member organizations so they achieve excellence in data management and drive success. EDM Association provides industry-leading data maturity frameworks and benchmarks, conducts training to embed best practices, connects members facing similar problems through its community, enables access to the expertise and innovative R&D to solve them, and ensures data excellence is recognized. Develop a data-driven culture in your organization and reap real-world rewards. For more, explore edmcouncil.org  and follow us on LinkedIn and Twitter.
