The Book Look: Data Interoperability

Dave Wells is a thought leader and influencer in the world of data. Early in his role as TDWI education director — in the wild frontier days of data warehousing — he selected data movers and shakers, such as Bill Inmon and Ralph Kimball, to present at world conferences. Dave has played other roles for me, too, over the past 20-plus years, including mentor and friend. But most recently, and most importantly for this column, I worked with Dave in his role as author: his book, “Data Interoperability,” was released just last month.

One of Dave’s skills, on full display in this book, is his ability to take complex subjects and not only make them easy for others to understand but also offer templates, roadmaps, and approaches for applying them within our organizations. He did this with data warehousing, and in this book he does the same with data interoperability.

Interoperability is about shared understanding, requiring our software systems to exchange data with shared meaning and purpose. It ensures that when one system says “customer,” another doesn’t confuse it with “client” or “consumer.” It’s about aligning context and meaning so that data can move fluidly across domains, tools, and technologies without constant cleanup. 

Chapter 1 frames interoperability as the next evolution of enterprise data architecture. For decades, we’ve relied on copy-and-transform methods leading to data warehouses and data lakes. These solutions worked, but they introduced complexity, higher costs, and fragility. Interoperability is the way forward — a means of moving from data duplication toward a system of shared semantics, where operational and analytical systems work well together.

Chapter 2 focuses on operational systems, such as CRM, ERP, IoT, billing, and logistics, which are the “workhorses” of business. They generate and use vast amounts of data but are notoriously difficult to align. They introduce the problems of sprawl, silos, disparities, and friction. Systems use different models, naming conventions, and assumptions, which makes connecting them a constant challenge. Without interoperability, operational data is fragmented and hard to trust. 

Chapter 3 explores analytical systems, such as warehouses and lakes, which take the strain off operational systems to allow analysts to build and run reports and dashboards. But this approach has limits. Analytical silos, metric disparities, and legacy baggage weigh them down. And in a world of real-time decision-making, waiting for batch extract, transform, and load (ETL) isn’t practical anymore. Copying the data is not ideal for a number of other reasons discussed in this chapter. 

Chapter 4 is about architecture. Operational data architecture has been neglected, while analytical architecture has received most of the attention, leading to a shaky foundation. This chapter advocates for a new, unified data architecture that strikes a balance between operational and analytical needs, eliminates redundancy, and fosters a future-proof, resilient system. Design interoperability from the start, instead of patching it in later. 

Chapter 5 explains the differences between interoperability and integration, as well as how integration is needed for interoperability. Integration copies and standardizes, while interoperability works with meaning in place. The benefits are speed, agility, trust, and less redundancy. This chapter also discusses barriers, such as technical debt, legacy systems, and organizational inertia, and how to overcome them through cultural and technical change.
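
To make that contrast concrete, here is a minimal sketch of my own (not taken from the book), using hypothetical in-memory systems and field names: one path copies and standardizes records into a separate store, while the other resolves a shared term against each system’s local naming and reads the data where it lives.

```python
# Illustrative sketch only; the systems and field names are hypothetical.

# Two operational systems that describe the same thing differently.
crm = [{"client_name": "Acme", "client_tier": "gold"}]
billing = [{"customer": "Acme", "segment": "gold"}]

# --- Integration: copy and standardize into a separate analytical store. ---
warehouse = []

def integrate_copy():
    """ETL-style: extract, transform to a standard shape, load a duplicate."""
    for row in crm:
        warehouse.append({"customer_name": row["client_name"],
                          "customer_tier": row["client_tier"]})
    for row in billing:
        warehouse.append({"customer_name": row["customer"],
                          "customer_tier": row["segment"]})

# --- Interoperability: resolve shared meaning and read data in place. ---
SEMANTIC_MAP = {
    "customer_name": {"crm": "client_name", "billing": "customer"},
    "customer_tier": {"crm": "client_tier", "billing": "segment"},
}

def read_in_place(term):
    """Answer a question using the shared term, without copying any rows."""
    values = []
    for system, rows in (("crm", crm), ("billing", billing)):
        local_field = SEMANTIC_MAP[term][system]
        values.extend(row[local_field] for row in rows)
    return values

integrate_copy()
print(warehouse)                       # duplicated, standardized copies
print(read_in_place("customer_name"))  # same answer, data stays at the source
```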

Chapter 6 is all about the importance of semantics, the backbone of interoperability. By defining not just the structure of data but its meaning, semantics allow systems to communicate without ambiguity. This chapter explains that a shared semantic layer can ensure that data products and services exchange information consistently. Interoperability succeeds only when meaning is standardized across the enterprise.

Chapter 7 explains why semantic data modeling is so important and how to model using different approaches. Knowledge graphs, property graphs, and semantic modeling processes all provide ways to capture and manage meaning. By linking data elements to well-defined concepts, semantic models help bridge differences between systems, enabling them to interpret data in consistent and predictable ways. 
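As a small illustration of the idea (my own sketch, not the book’s modeling process), the triples below link two system-specific terms to a single, well-defined enterprise concept, which is the essence of how a knowledge graph captures shared meaning. All terms and prefixes are hypothetical.

```python
# Illustrative triples; the prefixes and concept names are made up for this sketch.
triples = [
    # Local terms from two systems both point to one well-defined concept.
    ("crm:client",       "sameConceptAs", "enterprise:Customer"),
    ("billing:customer", "sameConceptAs", "enterprise:Customer"),
    # The shared concept carries an agreed definition.
    ("enterprise:Customer", "definition",
     "A party that has purchased or contracted for goods or services."),
]

def concept_for(local_term):
    """Resolve a system-specific term to the shared enterprise concept."""
    for subject, predicate, obj in triples:
        if subject == local_term and predicate == "sameConceptAs":
            return obj
    return local_term  # no mapping known; fall back to the local term

print(concept_for("crm:client"))        # enterprise:Customer
print(concept_for("billing:customer"))  # enterprise:Customer
```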

Chapter 8 explores the enterprise semantic layer, including APIs, data products, contracts, virtualization, mapping, translation, and linking. This layer makes sure that data from different domains and platforms can be consumed, reused, and trusted without requiring endless custom integration. This layer turns semantics into an actionable and scalable capability. 
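For a flavor of one piece of that layer, here is a minimal sketch of a data contract, assuming a hypothetical “customer” data product; the fields and semantic references are my own illustrations, not the book’s specification. The contract states the shape and meaning a data product promises its consumers, and records can be checked against it before they are exchanged.

```python
# Illustrative data contract; the product, fields, and references are hypothetical.
from dataclasses import dataclass

@dataclass
class Field:
    name: str     # field name the data product promises to publish
    dtype: type   # expected type of the value
    meaning: str  # pointer back to the shared semantic definition

CUSTOMER_CONTRACT = [
    Field("customer_id",    str,   "enterprise:CustomerIdentifier"),
    Field("customer_name",  str,   "enterprise:CustomerLegalName"),
    Field("lifetime_value", float, "enterprise:CustomerLifetimeValue"),
]

def conforms(record, contract):
    """True if the record supplies every contracted field with the right type."""
    return all(
        f.name in record and isinstance(record[f.name], f.dtype)
        for f in contract
    )

good = {"customer_id": "C-001", "customer_name": "Acme", "lifetime_value": 125000.0}
bad = {"client_name": "Acme"}  # local naming that breaks the contract

print(conforms(good, CUSTOMER_CONTRACT))  # True
print(conforms(bad, CUSTOMER_CONTRACT))   # False
```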

Chapter 9 covers the tools and technologies that support interoperability. It contrasts “before” and “after” pictures of enterprise architecture, showing how interoperability simplifies connections and reduces fragility. The chapter closes with a call to embed interoperability into the core of enterprise data management, making it a deliberate, planned capability rather than an afterthought. 

In summary, “Data Interoperability” is concise and contains many important messages about getting a holistic data picture. The book makes a strong case that the future of data management isn’t about copying more data into more silos but about creating shared meaning. By elevating semantics, restoring attention to operational data architecture, and building an enterprise semantic layer, organizations can move from chaos to clarity. Interoperability isn’t just a buzzword; it’s the key to making data trustworthy, useful, and ready for whatever’s next.

Steve Hoberman

Steve Hoberman has trained more than 10,000 people in data modeling since 1992. Steve is known for his entertaining and interactive teaching style (watch out for flying candy!), and organizations around the globe have brought Steve in to teach his Data Modeling Master Class, which is recognized as the most comprehensive data modeling course in the industry. Steve is the author of nine books on data modeling, including the bestseller Data Modeling Made Simple. Steve is also the author of the bestseller, Blockchainopoly. One of Steve’s frequent data modeling consulting assignments is to review data models using his Data Model Scorecard® technique. He is the founder of the Design Challenges group, Conference Chair of the Data Modeling Zone conferences, director of Technics Publications, and recipient of the Data Administration Management Association (DAMA) International Professional Achievement Award. He can be reached at me@stevehoberman.com.
