Crossing the Data Divide: A Case for Data Leaders Embracing Process Modeling

In the early days of enterprise data and business systems, process modeling and data modeling went hand-in-hand. It was standard practice to design processes and data structures simultaneously, ensuring a seamless alignment between how work was done and how information was captured, stored, and used.  

Tools like flowcharts, data flow diagrams, and entity-relationship diagrams were staples in every IT and business project. In fact, during this period, one of my former employers, LBMS, went public on the back of its leading process modeling tool, Process Engineer.   

But somewhere along the way, we stopped doing this. Process modeling fell out of favor and was sidelined. As data leaders in large enterprises, it’s time to rethink this separation and recognize the value of bringing these practices back together. 

A Brief History of Process Modeling’s Golden Era 

Process modeling became a cornerstone of enterprise system development in the 1980s and 1990s. It gained particular prominence during the 1990s, fueled by the business process reengineering (BPR) movement popularized by Michael Hammer and James Champy’s seminal book “Reengineering the Corporation.”

This era saw organizations embracing techniques like flowcharts, IDEF (Integration Definition for Function Modeling), and, later, Business Process Model and Notation (BPMN) to visualize and optimize their workflows. These models were often paired with data models, which provided the structural foundation for managing the information required by those processes.

This dual approach was essential at the time. Businesses built large-scale systems, automated workflows, and integrated disparate functions. Process modeling ensured that workflows were efficient and aligned with organizational goals, while data modeling guaranteed that information was accessible, accurate, and fit for purpose. The synergy between these disciplines resulted in robust, scalable systems, even if they took a long time to build. 

Why Process Modeling Fell Out of Favor 

Interestingly, business process reengineering did not remain coupled with IT system design. Instead, it spawned its own business-led optimization efforts, and process modeling for system requirement validation and design began to decline in the late 1990s. Several factors drove this decline: 

Shift to ERP/Packages: Until the mid-1990s, large enterprises built their systems in-house, which required a thorough understanding of the business processes and roles the system had to support. ERP systems changed that by providing “off the shelf” standard modules for typical business processes. This left IT primarily responsible for integrating and structuring data for reporting and analytics.

Agile and Iterative Development: For the system and application work that remained, everyone wanted to move faster. Agile methodologies shifted the focus from upfront design and methodical modeling to iterative development, and teams prioritized speed and working software over comprehensive validation and documentation.

Self-Service Business Intelligence: New self-service BI tools emerged that empowered business users to explore and analyze data independently of IT. This shifted IT data management from providing business solutions to acting as a data service provider. Even now, the focus remains on technical data architecture, data assets, quality, lineage, and governance, with diminished attention to aligning data with processes.

The Negative Impacts of Process Modeling’s Decline 

Some might say, “What’s the big deal?” We no longer build big mainframe or client-server systems. We all use packaged applications, on-premises and in the cloud, to support our business processes. Also, enterprises are trying to move toward the decentralized creation of data products by business function. They understand their processes, so why model them?

The problem is that a narrow, system-centric IT perspective fails to consider the broader impact on the enterprise. Ponder this general question for a few moments: What is the business impact of any process in which the participants are not aligned on what their responsibilities are — what they are expected to produce, deadlines, quality standards, etc.?   

The impact is higher costs and, less obviously, opportunity costs. In other words, low quality, confusion, and rework make it take longer for everyone to do their jobs, blocking innovation and optimization. 

What are the symptoms of these process inefficiencies related to our work with data? 

Report Sprawl 

Defining report requirements is not difficult when the business process is well understood. Reports need to present key metrics in three filterable views: a historical look back, the present compared to that history, and a forecast of the future.
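
To make the pattern concrete, here is a minimal Python sketch (using pandas) of those three views for a single monthly metric. The numbers and the metric itself are invented, and the naive trend forecast stands in for whatever vetted forecasting method an organization actually uses:

```python
import pandas as pd

# Hypothetical monthly revenue for a single metric; in practice this
# would come from a governed source rather than being hard-coded.
revenue = pd.Series(
    [110.0, 120.0, 125.0, 130.0, 128.0, 140.0],
    index=pd.period_range("2024-01", periods=6, freq="M"),
    name="revenue",
)

# View 1: the historical look back.
history = revenue

# View 2: the present compared to history (change vs. prior month).
comparison = revenue.diff().rename("change_vs_prior_month")

# View 3: a forecast of the future (a naive linear trend for illustration).
trend = revenue.diff().mean()
forecast = pd.Series(
    [revenue.iloc[-1] + trend * i for i in range(1, 4)],
    index=pd.period_range(revenue.index[-1] + 1, periods=3, freq="M"),
    name="forecast",
)

print(pd.concat([history, comparison], axis=1))
print(forecast)
```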

When an organization has thousands and thousands of reports and little understanding of who created them, when, and why, it’s a clear sign of a lack of process clarity and ownership.

Term Silos and Lack of Precise Language 

Organizations that lack clearly defined business processes struggle with the meaning and context of terms. For example, a seemingly innocuous term such as “customer” is often interpreted and used differently by different business functions. We’ve all seen the never-ending cycle in which each group explains its interpretation so the others can try to translate how it impacts them.

Metric Confusion 

Let’s be honest: Almost every core business process, such as order-to-cash, purchase-to-pay, demand-to-supply, and issue-to-resolution, is not new, and its key metrics are not new either.   

Any time competing metrics or unauthorized derived metrics appear in yet another spreadsheet, it clearly shows a lack of process definition, ownership, and governance.

Low Cross-functional Data Quality 

In the past, the primary cause of poor data quality was the lack of validation rules at data entry. Fortunately, those days are mostly over. Today, the primary cause is data passing between systems in which the same fields are used differently and hold different values.

To data management people, those “handshakes” are integrations that require data mapping and coding to resolve. That is true, but from a business perspective, they are boundaries with inputs and outputs that are part of a larger process. Data quality suffers when the business process is poorly understood and there are no clear rules for these “handshakes,” as the sketch below illustrates.
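
Here is a minimal Python sketch of writing one of those “handshake” rules down explicitly instead of burying it in integration code. The systems, field names, and codes are all hypothetical:

```python
# Two systems encode customer status differently. The mapping is an
# explicit, reviewable rule owned by the process, not hidden logic.
CRM_TO_BILLING_STATUS = {
    "active": "A",
    "churned": "C",
    "prospect": None,  # prospects do not exist in the billing system
}

def map_status(crm_status: str) -> str | None:
    """Translate a CRM status to the billing system's code."""
    if crm_status not in CRM_TO_BILLING_STATUS:
        # Fail loudly: silently passing unknown values through is exactly
        # how mismatched field usage becomes a data quality problem.
        raise ValueError(f"Unmapped CRM status: {crm_status!r}")
    return CRM_TO_BILLING_STATUS[crm_status]

print(map_status("active"))  # -> A
try:
    map_status("trial")
except ValueError as exc:
    print(exc)  # the missing rule is surfaced, not guessed at
```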

Compliance and Risk Exposure 

I’ve seen organizations try to tackle data exposure and security risks by creating a list of applications. The problem with this approach is that it lacks business context: who uses each application, and when, how, why, and where. Without that, it’s tough to identify key risks, define controls, and monitor them.

The backbone of risk identification starts with understanding the business process and then associating the applications, integrations, people, etc., with each step. 
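
As an illustration, here is a hedged Python sketch of what that backbone might look like as a simple data structure. The process, applications, roles, and data elements are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    """One step of a business process with its associated context."""
    name: str
    applications: list[str] = field(default_factory=list)
    roles: list[str] = field(default_factory=list)
    data_elements: list[str] = field(default_factory=list)

# A hypothetical slice of an order-to-cash process.
order_to_cash = [
    ProcessStep("Capture order", ["OrderApp"], ["Sales rep"],
                ["customer_id", "card_number"]),
    ProcessStep("Invoice customer", ["BillingApp"], ["Billing clerk"],
                ["customer_id", "invoice_amount"]),
    ProcessStep("Collect payment", ["BillingApp", "PaymentGateway"],
                ["Billing clerk"], ["card_number"]),
]

# With the process as the backbone, a risk question such as "which steps,
# applications, and roles touch card data?" becomes a simple query.
for step in order_to_cash:
    if "card_number" in step.data_elements:
        print(step.name, "->", step.applications, step.roles)
```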

Choices for Data Leaders  

No “one size fits all” exists for how data leaders should respond to business process modeling gaps. Here are some potential strategies that are worthy of consideration: 

Limit Responsibilities to Data Architecture, Quality, and Analytics 

This is the “do almost nothing new” option. The action is to train and require your teams to use process modeling techniques to help validate requirements within a project’s defined scope. 

It will improve communication and clarity within projects, but it stops there. There is no commitment to helping the enterprise model its processes.

Expand to Business Process Engineering Responsibilities 

A data leader could attempt to persuade leadership of the value of a continuous process improvement program and establish a small team to map, measure, and suggest improvements for all business processes. 

The pros are significant business cost reductions and productivity increases. Mature approaches such as Six Sigma and Lean can be adopted, so nothing needs to be invented.

The cons are significant. The concept is difficult to sell internally, and the data team does not typically take on this role. Also, an operations leader may already see this as part of their responsibility in large organizations, so it may be better to partner rather than duplicate. 

Expand Compliance Process Mapping Responsibilities 

Data organizations are responsible for data governance and data quality. What I am suggesting as a strategic option goes beyond that. 

The suggestion is to partner with whoever leads risk management to help them create a business process map and overlay data inputs, outputs, and usage.

This strategy has several potential wins. First, it can significantly improve the risk compliance effort. Second, process mapping can be leveraged for all other data-related projects and reporting. Third, it introduces process mapping to the organization while providing immediate benefits. It may also be a bridge to broader process optimization efforts in the future. 

Extend Stewardship Responsibilities 

Assuming that some stewardship process is already in place, the suggestion is to broaden it to include stewardship of business process models.

I intentionally did not use the word governance. I doubt it would be helpful to start by positioning the data team as the ones governing how business functions operate. 

It would be more straightforward and valuable for the business if stewards on the data team acted as internal consultants, helping business functions document and maintain their processes.

This is also a natural extension for organizations using a data product-oriented strategy for data sharing and consumption. 

Practical Next Steps for Data Leaders 

Here are some practical next steps to help you and your team decide on a strategy. 

  1. Process Modeling Education: If it has been a while or you are new to process modeling, some education would be good. Coursera, Udemy, and BPMInstitute.org offer a variety of courses.
  2. Investigate Internally: Large organizations often have some experience with process modeling. It can be helpful to investigate what is happening, who the experts are, and its impact. The challenge can be discovering the owner. Candidates likely include operations management, the project management office, or quality assurance. 
  3. Connect Dots to the Data World: Most of what you learn and find internally will not directly apply to a data organization. You will have to consider how you want to connect the dots to your team’s current scope of responsibilities and roles.   
  4. Run a Pilot or Two: Gather a small team and ask them to build on your thoughts by defining a pilot project that introduces process modeling techniques and tests the value proposition. 

Final Thoughts 

Process modeling is a missing link. Without it, data and analytics risk being a rudderless sailboat, and the enterprise will remain burdened with inefficiency. 

It may not currently be considered a part of their responsibility, but data leaders can decide to solve the problem. And why not? Once upon a time, not so long ago, process and data modeling were a traditional part of the job. 

John Wills

John Wills is the Founder & Principal of Prentice Gate Advisors and is focused on advisory consulting, writing, and speaking about data architecture, data culture, data governance, and cataloging. He is the former Field CTO at Alation, where he focused on research & development, catalog solution design, and strategic customer adoption. He also started Alation’s Professional Services organization and is the author of Alation’s Book of Knowledge, implementation methodology, data catalog value index, bot pattern, and numerous implementation best practices. Prior to Alation, he was VP of Customer Success at Collibra, where he was responsible for building and managing all post-sales business functions. In addition, he authored Collibra’s first implementation methodology. John has 30+ years of experience in data management with a number of startups and service providers. He has expertise in data warehousing, BI, data integration, metadata, data governance, data quality, data modeling, application integration, data profiling, and master data management. He is a graduate of Kent State University and holds numerous architecture certifications, including ones from IBM, HP, and SAP.