Best of 2013 – The Evolving Data Modeler Skillset

Next week is our second annual Data Modeling Zone (DMZ) conference in Baltimore, MD, and I am really looking forward to this event, not just for the sessions but also for the chance to catch up with data modeling buddies. Looking back on planning the first DMZ in 2012 and then this year's event, I have made two observations that span both conferences, and I think they reflect what is happening in our industry as well. Both have to do with the data modeler skillset: not replacing our skills, but adding to them.

I've been data modeling for 25 years, and have modeled through the CASE (computer-aided software engineering) craze of the 1980s, the data warehouse/dimensional modeling craze of the 1990s, the agile craze of the 2000s, and now the big data/cloud computing craze of the 2010s. The core skills of the data modeler (and the data architect, for that matter) are timeless and remain the same through the decades. There are, however, two shifts I see happening at our conference and in the industry: the first is a stronger emphasis on softer skills, and the second is a retooling of technical skills.

The first observation is that more data modelers are taking a greater interest in the softer side of data modeling. I remember reading Chapter 13 of my book Data Modeling Made Simple – Graeme Simsion contributed this chapter, and it is about how the data modeler should work with other members of the team. I think that when my book was first published, Graeme's thoughts were ahead of their time. Today, however, I am seeing much more interest in communicating and negotiating with business sponsors, business analysts, and developers.

At DMZ 2012 we had many sessions focusing on core data modeling skills, such as normalization, dimensional modeling, modeling tools, and subtyping. These remain essential skills for a data modeler; however, this year, in addition to sessions covering these core skills, we have a number of sessions on the softer skills. For example, we have a half-day session on negotiating and conflict management for the data modeler, and another half day on facilitation skills for the data modeler. There are also sessions on how the data modeler should work with process modelers, business analysts, data scientists, and developers. Many of these sessions are case studies, so the presenters are people who have actually learned something from a project at work and want to share what they learned.

The second observation is a retooling of technical skills. There is a greater willingness on the part of the modeler to experiment with new techniques. Instead of applying only our core skills of relational and dimensional modeling, we are searching for new techniques to apply in our big data, agile world, such as columnar databases, Anchor Modeling, Data Vault, and UML – all techniques at DMZ this year. For example, Anchor Modeling and Data Vault modeling help preserve the logical data model (and keep it timeless) while leading to a very efficient physical design that handles historical information, an important factor to consider in our big data world. UML offers a number of models that work well with agile developers and business analysts, and the list goes on.
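To make the historization idea concrete, here is a minimal Data Vault-style sketch in Python using SQLite. The table and column names are my own illustration, not a prescribed standard: a hub holds the immutable business key, and a satellite historizes descriptive attributes by load date, so change is captured as inserts rather than updates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hub: one row per business key, insert-only
    CREATE TABLE hub_customer (
        customer_hk  INTEGER PRIMARY KEY,
        customer_id  TEXT NOT NULL UNIQUE,   -- the business key
        load_date    TEXT NOT NULL
    );
    -- Satellite: descriptive attributes, historized by load date
    CREATE TABLE sat_customer (
        customer_hk  INTEGER NOT NULL REFERENCES hub_customer,
        load_date    TEXT NOT NULL,
        name         TEXT,
        city         TEXT,
        PRIMARY KEY (customer_hk, load_date)
    );
""")

conn.execute("INSERT INTO hub_customer VALUES (1, 'C-1001', '2013-01-01')")
# Two satellite rows for the same customer: the move from Baltimore to
# Chicago is recorded as a new row, and the old row is never touched.
conn.execute("INSERT INTO sat_customer VALUES (1, '2013-01-01', 'Ann', 'Baltimore')")
conn.execute("INSERT INTO sat_customer VALUES (1, '2013-06-01', 'Ann', 'Chicago')")

# The current state is simply the satellite row with the latest load date
row = conn.execute("""
    SELECT city FROM sat_customer
    WHERE customer_hk = 1
    ORDER BY load_date DESC LIMIT 1
""").fetchone()
print(row[0])  # Chicago
```

The point of the split is that the hub (and thus the logical model) stays stable over time, while all volatility and history live in insert-only satellites.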

Personally, I am experimenting with Anchor and Data Vault modeling, and also with FCO-IM for conceptual modeling. I have enrolled in a class on MongoDB and built my first MongoDB "database." The skills of the data modeler are timeless; we just need to adapt them for each new technology wave. Do you have similar observations? Please share your ideas!
