#datavault #agile #methodology #edw and business value

There seems to be an interest (again) in having a short and simple “business value” definition of the Data Vault Model & Methodology. That said, I will do my best to list it as succinctly as possible. However, I would love to hear your thoughts on what you think is business value, and whether or not it’s a differentiator for the DV.

More specifically, the differentiators that make the Data Vault Model and Methodology a unique value proposition.

Issue #1: People confuse and combine the Methodology with the Model. Please don’t make that mistake. The DV Model is just that: an architectural design pattern for encapsulating and organizing data sets. A DATA MODEL! The methodology is just that: a method for implementation, combining repeatability, scalability, flexibility, performance, ease of use, ease of replication, consistency, and IT implementation processes.

Ok, that said: there are two categories of value-add propositions:

Category #1: the Data Vault Model

The model has the following business values (as I see it) – feel free to add more, or ask me to change this list.

  • Flexibility – ease of addition of new table structures without disturbing historical data stores, and without the need for re-engineering other parts of the model.
  • Scalability – as it relates to traditional Relational Database Management Systems (RDBMS) data stores. The ability to scale your model, due to the horizontal and vertical partitioning that naturally takes place with the Hubs, Links, and Satellites, ensures that you can take it to petabyte levels in an MPP environment without changes to the standard model structures. The DV MODEL is based on set logic and the frequency of data changes, allowing you to split and merge data sets without losing history AND without impacting any other parts of the EDW model.
  • Consistency / Repeatability – Hubs are Hubs are Hubs, Links are Links are Links, and Satellites are Satellites are Satellites. It doesn’t matter how big the model gets, or how many tables there are. The design of the model never changes; it remains a consistent, repeatable, pattern-based approach for your entire EDW lifecycle (a minimal DDL sketch follows this list).
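
To make these three points concrete, here is a minimal DDL sketch of the core structures, using a hypothetical Customer / Order example. The table names, hash keys, and audit columns below are illustrative assumptions on my part, not prescribed standards:

    -- Hub: the unique list of business keys for one core business concept
    CREATE TABLE hub_customer (
        hub_customer_key  CHAR(32)    NOT NULL PRIMARY KEY,  -- surrogate / hash key
        customer_number   VARCHAR(30) NOT NULL UNIQUE,       -- the business key
        load_date         TIMESTAMP   NOT NULL,
        record_source     VARCHAR(50) NOT NULL
    );

    -- Link: a relationship (unit of work) between business keys
    CREATE TABLE link_customer_order (
        link_customer_order_key CHAR(32)    NOT NULL PRIMARY KEY,
        hub_customer_key        CHAR(32)    NOT NULL REFERENCES hub_customer (hub_customer_key),
        hub_order_key           CHAR(32)    NOT NULL,        -- would reference hub_order (not shown)
        load_date               TIMESTAMP   NOT NULL,
        record_source           VARCHAR(50) NOT NULL
    );

    -- Satellite: descriptive attributes, historized by load date
    CREATE TABLE sat_customer_details (
        hub_customer_key  CHAR(32)    NOT NULL REFERENCES hub_customer (hub_customer_key),
        load_date         TIMESTAMP   NOT NULL,
        customer_name     VARCHAR(100),
        customer_segment  VARCHAR(30),
        record_source     VARCHAR(50) NOT NULL,
        PRIMARY KEY (hub_customer_key, load_date)
    );

    -- Flexibility in practice: a new attribute set from a new source becomes a
    -- new satellite, added alongside the existing tables without altering,
    -- reloading, or re-engineering anything already in place.
    CREATE TABLE sat_customer_crm_details (
        hub_customer_key  CHAR(32)    NOT NULL REFERENCES hub_customer (hub_customer_key),
        load_date         TIMESTAMP   NOT NULL,
        loyalty_tier      VARCHAR(20),
        record_source     VARCHAR(50) NOT NULL,
        PRIMARY KEY (hub_customer_key, load_date)
    );

Every Hub, Link, and Satellite in the model follows the same shape as the structures above – that sameness is exactly what makes the design consistent, repeatable, and easy to partition.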

Now: BECAUSE certain folks insist on this point (but are unwilling, it seems, to ask for comparisons to other data modeling methods themselves), I will try to elaborate here:

3rd normal form AS AN EDW exhibits these problems:

  • Flexibility – 3NF as an EDW (with time-series or temporal adaptations) causes versioning problems in the data sets. In transactional terms, this basically means “transactional consistency might be compromised,” because of the way parents embed their keys directly into child records (sketched just after this list).
  • Scalability – 3NF as an EDW has trouble scaling to any size. This includes the number of tables, as well as the overall performance of the model, and the amount of data that it can store in a parent-child relationship is “finitely limited,” as described by discrete mathematics. In other words, once again the parent-child enforcement of relationships eventually limits the ability of the 3NF-as-an-EDW data modeling paradigm to scale.
  • Consistency / Repeatability – Ok, in 3NF as an EDW, everything is supposed to follow 3rd normal form rules. BUT because of the temporal aspects of the data, it cannot. It follows 2nd and sometimes 1st normal form rules. In order to overcome these limitations, people adapt or change the data model – resulting in a serious loss of fidelity. In other words, they introduce “exceptions” to the model itself, to the architecture. As the model grows, it becomes less & less documented, and less & less understood. This is neither consistent nor repeatable in its design patterns.
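
Here is a small, hypothetical sketch of that embedding problem – the table and column names are mine, purely for illustration:

    -- 3NF-as-an-EDW with a temporal adaptation: the parent key (and its version)
    -- is embedded directly in the child row.
    CREATE TABLE dw_customer (
        customer_id     INTEGER      NOT NULL,
        effective_date  DATE         NOT NULL,   -- the temporal adaptation
        customer_name   VARCHAR(100),
        sales_region    VARCHAR(30),
        PRIMARY KEY (customer_id, effective_date)
    );

    CREATE TABLE dw_order (
        order_id           INTEGER       NOT NULL PRIMARY KEY,
        customer_id        INTEGER       NOT NULL,
        customer_eff_date  DATE          NOT NULL,   -- which *version* of the parent?
        order_total        DECIMAL(12,2),
        FOREIGN KEY (customer_id, customer_eff_date)
            REFERENCES dw_customer (customer_id, effective_date)
    );

    -- When a new customer version arrives (say, sales_region changes), every child
    -- row that should point at the current version must be revisited, or every query
    -- must resolve "which version was current at order time" on the fly. Either way,
    -- the parent's temporal change cascades into the children.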

Dimensional Model form AS AN EDW exhibits these problems:

  • Flexibility – Dimensional Model Form as an EDW has trouble being flexible, particularly if it is not modeled at the granular level of the source data set. Why? Because when it’s modeled at a super-set level (or an aggregate level) it generally introduces summary data, along with hierarchies of data. The flexibility fails if a CHANGE IS MADE TO THE UNDERLYING DATA THAT CONTROLS THE HIERARCHY – resulting in forced re-engineering of the Data Model (generally this occurs at the dimensional levels; a small sketch follows this list). IF the Dimensional model IS at the pure granular level, then it is less likely to suffer problems – until the hierarchy of the data set changes! You see, the Dimensional Model becomes “brittle over time” because it is a data-driven design rather than a pattern-driven design. It represents sets of tuples and relationships combined into single dimensions. Therein lies the restriction of finite mathematics: as complexity rises (and change is what drives dimension complexity), the cost to maintain that complexity will eventually overrun the budget, or the ability to keep the model alive – thus forcing a re-design, or re-engineering, to occur.
  • Scalability – Ok, a LOT has been done at the infrastructure level over the past 10 years to support dimensional models. It’s where “star join optimizers” came from, and so on. So yes, today – with the right hardware, and with the right budget – you can scale a star schema effectively. Now, scalability of the data set is a different issue. When dimensions become too wide and they blow the rows-per-block ratio out of the water, I/Os will rise beyond the benefits seen from denormalization in the dimension (see my past post on joins, normalization, and denormalization). So what happens is the dimension can no longer perform effectively. So what does the customer do? Buy bigger hardware, or turn to an appliance like Netezza for “big table handling.” The Data Vault Model will scale further in raw data form than a dimensional model before you ever need to look at moving your EDW to another platform just to get it to return results. However: if you put both models on an RDBMS with compression on, the dimensional model will “most likely” outperform the Data Vault until a certain size is reached – the point where the I/Os for table scans and joins outweigh the cost of distributed computing power and parallel compute abilities. This can be seen in today’s “trend” to go to the ultimate 6th normal form key=value store: Hadoop, with its ability to handle terabytes of information, where document stores simply haven’t gotten there yet. The same applies when comparing the Data Vault Model to a denormalized Dimensional Model. It’s all in the mathematics here, folks.
  • Consistency / Repeatability: well, once again – Dimensions are Dimensions, Facts are Facts. Except when? Except when Dimensions contain hierarchies, or Dimensions are Snow-Flaked Hierarchies, or when Dimensions are Degenerate Dimensions. And Facts? Well, when Facts are NON-FACT FACT Tables! What? Oh yes, Factless Facts. Or when Factless Facts are Helper Tables, or when Factless Facts are Dimensional Link Tables…. Ok, do I make myself clear? Consistency of this kind of model, with all kinds of exceptions to the modeling rules, does not bode well for a scalable enterprise model. At the “core,” or first glance, of Dimensional Models, yes – facts & dimensions are simple and easy to understand. But then get into the real world, and what do you get? Mass confusion as the model grows to scale the enterprise (I’m talking about the number of tables here, not the size of data). There’s one more problem (mentioned above in flexibility): dimension-to-dimension grain will (generally) differ. Why? Because the business rules sit upstream of the Dimension, causing the aggregates to change when the business rules change – which, again, causes cascading change impacts to the “child” tables linked to that dimension. And if you are not changing your facts when you are changing the grain of your dimensions, then your facts will not add up properly for the business – and you will end up tearing the whole thing down and re-engineering a new model to meet the business needs.
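
As one small, hypothetical illustration of the hierarchy problem (the tables and columns are mine, purely for the example):

    -- Star-schema dimension with the product hierarchy flattened into it
    CREATE TABLE dim_product (
        product_key      INTEGER     NOT NULL PRIMARY KEY,  -- surrogate key
        product_code     VARCHAR(30) NOT NULL,
        product_name     VARCHAR(100),
        category_name    VARCHAR(50),   -- level 2 of the hierarchy
        department_name  VARCHAR(50)    -- level 1 of the hierarchy
    );

    CREATE TABLE fact_sales (
        product_key   INTEGER       NOT NULL REFERENCES dim_product (product_key),
        date_key      INTEGER       NOT NULL,
        sales_amount  DECIMAL(12,2) NOT NULL
    );

    -- If the business inserts a new level (say, a sub-category) or moves products
    -- between departments, the dimension's columns and grain change, the aggregates
    -- built on category/department must be rebuilt, and facts keyed to the old grain
    -- no longer add up: re-engineering, not extension.
    -- ALTER TABLE dim_product ADD sub_category_name VARCHAR(50);  -- plus a full reload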

Category #2: The Data Vault Methodology

Ok – so those are the fundamental data model comparisons – and by the way, this isn’t magic sauce, folks, it’s basic mathematics (discrete math, linear math, finite math, set logic math, and the math of complexity). Please, please, please: do not confuse the model with the methodology. They are two separate and distinct things!!!

The methodology is just that: a method for implementing your Enterprise Data Warehouse in a timely (agile) fashion, with the least amount of risk, pre-assigned patterns, and optimized IT (business) processes to lead you to a consistent goal.

There are hundreds of methodologies in the world today for all kinds of things, and no one methodology is king.  You should adapt, apply, use, and learn from all methods and approaches that suit your particular needs.

Ok, now for a bit of Q&A:

Q: Is the DV Methodology really that different from Waterfall or Spiral?
A: Yes, in one way: it’s a hybrid, just like the model is a hybrid.

Q: Can I use the DV methodology without the DV Model?
A: Yes, but you increase the risk of failure of your project, because the methodology has been tuned to work with the Data Vault Model.

Q: Can I adapt parts of the DV methodology into my own project management methods?
A: Yes, of course!  The DV methodology is created in a modular fashion for just that reason.

Q: What makes the DV Methodology special when compared to Waterfall, Spiral, etc.?
A: The DV methodology is specifically tuned and targeted at the business practice of creating an Enterprise Data Warehouse.  Waterfall and Spiral are project management practices which are included in the DV Methodology.

Q: Well, wait a minute, why then is DV Methodology tied to “Agile”?
A: I’ll let my good friend Kent Graziano handle this one, but here’s my two cents: because of its consistent pattern-based design and project approach, it can be rapidly built, easily adapted, and even more: automatically generated (these days).  But in the end, what makes a project agile is truly nothing but the people who build it and how fast they can respond.  The DV Methodology is a quick and easy ramp-up for all those who follow it.  It has dedicated and prescribed standards and rules that guide the entire process (a sketch of one such repeatable load pattern follows).
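
For example, every Hub load follows one template-able shape. Here is a minimal sketch, assuming a hypothetical staging table (stg_customer) that already carries a hashed business key – none of these names are prescribed standards:

    -- Load only the business keys not yet recorded in the hub
    INSERT INTO hub_customer (hub_customer_key, customer_number, load_date, record_source)
    SELECT DISTINCT
           stg.customer_hash_key,     -- hash of the business key, computed in staging
           stg.customer_number,       -- the business key itself
           CURRENT_TIMESTAMP,
           'CRM_SYSTEM'
    FROM   stg_customer stg
    WHERE  NOT EXISTS (
           SELECT 1
           FROM   hub_customer h
           WHERE  h.customer_number = stg.customer_number
    );

Because every Hub (and, with small variations, every Link and Satellite) load looks like this with only the table and business key swapped, the statements can be generated from metadata rather than hand-written.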

Well, that’s all I have time for now. If you have further questions, or perhaps even insights – I am sure I missed a bunch of things here – feel free to comment.

Thanks,
Dan Linstedt


One Response to “#datavault #agile #methodology #edw and business value”

  1. Kent Graziano 2012/06/07 at 8:23 am

    Thanks for the shout out Dan!

    So DV and Agile – pretty much what Dan said about everything being pattern based and generate-able is what I found to be the factors that allow me to take an “agile” approach with a data vault project.

    Currently I am on a project doing two-week sprints and we can easily build out a few (sometimes more than a few) hubs, links and sats (then facts and dimensions on top) in each sprint. Because of the structure of the model, we have had no issues hooking the data from one sprint to data uncovered in the next sprint. We generally work from a business process decomposition and state change diagrams – which map very well to data vault structures.

    Because of the repeatability of the model patterns, I have SQL templates for developing the load routines. With these I can whip out first-cut insert statements in minutes, then hand them off for “polishing” to the ETL programmer. In fact, based on those templates, she has now developed PL/SQL programs that generate the load procedures dynamically! With that it becomes very easy to refactor the loads each sprint if the model changes.

    As Dan stated, it is not that DV is an agile method itself but rather the nature of the architecture and the patterns lends itself to applying agile techniques (like sprints). I actually stated this and was doing it (with a data vault model) almost 10 years ago at Denver Public Schools.

    You can do agile data warehousing without data vault, but why would you want to?
