#datavault – Another Tool In Your ToolBox

I’ve been reflecting on recent events and having some great discussions with many of you in the community.  For that, I thank you.  I wanted to take a minute to return to the roots of the Data Vault, and why it was created in the first place. I will also do my best to provide an unbiased (even though you may think otherwise) brief comparison of the Kimball / Data Vault statements going around the industry.

The Data Vault is made up of three basic parts:

  • Architecture (systems architecture, i.e., the three-tier architecture)
  • Methodology (rules around how, and why)
  • Model (standardized data model with strictly defined entities)

The Data Vault is nothing new; it is a hybrid design made up of the “best of breed” from multiple normalized forms (1st, 2nd, 3rd, 4th, etc.) and from Dimensional Modeling (Dimensions and Facts).   As such it is not a be-all, end-all solution, nor should everyone implement a Data Vault.  There are times when a Data Vault is not necessary, nor even warranted.  Those times might include some of the following reasons:

  1. You are building a single-business-unit focused answer set
  2. You do not need an enterprise view
  3. You do not need an auditable historical data store (as you have all the data in a single source system, backed up forever)
  4. You do not have a number of external systems to integrate (3 or more?)
  5. You are running a columnar data store, Key=Value store, NoSQL store, or denormalized store (like Netezza)

I’ve always maintained (and still do) that I am not here to sell you on Data Vault. If you want to use Data Vault, and you would like knowledge or training around it, then that is one of the services I can provide (among others in the industry who also provide excellent services).  I am not here to tell you that using Data Vault will somehow be the answer to all your data warehousing problems; it won’t.  The Data Vault Model (being a hybrid data model design) will solve some of the problems you might see in enterprise data warehousing, but plenty of other issues will still exist.

The Data Vault Systems Architecture is based on a three-tier architecture.  This is a classic architectural design, whereas the Kimball Bus Architecture is based on a two-tier architecture.  The debate rages on – do you need three tiers? Are two tiers sufficient to isolate the business and the structures from the impact of changes?  This is all for you to decide.  Some would argue today that a one-tier data warehouse architecture is enough, and I would have to say that in certain situations (like 100% real-time data feeds with 100% real-time analytics and alerts) this statement may in fact be true.

Why does the DV recommend a three-tier architecture?  Well – in a batch-oriented situation (even mini- or micro-batch) it is still (today) helpful to replicate data from the source to some form of staging area in preparation for loading the enterprise data warehouse.  The middle tier (the Data Vault Model layer) is introduced to isolate the business presentation layer from the impact of adding source systems, adding new data, and changing the structures of historical storage, and to isolate the historical storage layer from the impact of changing the presentation layers delivered to the business.
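
To make the three tiers concrete, here is a minimal sketch of that batch flow in Python.  The routines and table names are hypothetical stand-ins (not any prescribed implementation); the point is only which layer is allowed to touch which.

```python
def land_in_staging(extract: str) -> str:
    """Tier 1: copy the source extract into a staging table, unchanged."""
    return f"stg_{extract}"

def load_data_vault(staging_table: str) -> None:
    """Tier 2: load hubs, links, and satellites raw from staging (auditable history)."""
    print(f"vault <- {staging_table}")

def refresh_marts() -> None:
    """Tier 3: apply business rules on the way out to the presentation layer."""
    print("marts refreshed")

def run_batch_cycle(extracts: list[str]) -> None:
    # Sources only touch staging; only the Data Vault feeds the marts.
    for table in (land_in_staging(e) for e in extracts):
        load_data_vault(table)
    refresh_marts()

run_batch_cycle(["crm_customers", "erp_orders"])
```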

The DV Architecture does, however, revert to a two-tier architecture as you approach a near-real-time solution.  The staging area “disappears,” with the data sets now arriving on a message bus or message queue and delivered passively to the Data Vault Model – in other words, the DV-based EDW becomes a listener on the message queue.  You can then decide what to do with each transaction further downstream, on the analytical and release sides of the house.
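
A minimal sketch of that two-tier, near-real-time shape: there is no staging tier, and the Data Vault load is simply a listener on a queue, landing each transaction raw as it arrives.  The in-process queue and the field/table names here are stand-ins for a real message bus, not any particular product.

```python
import json
import queue

# In-memory stand-ins for the Data Vault tables the listener writes to.
hub_customer = {}   # business key -> record source that first supplied it
link_sale = []      # customer/order pairs as they arrive

def on_message(raw: bytes) -> None:
    """Land one transaction raw in the Data Vault -- no staging, no business rules."""
    txn = json.loads(raw)
    hub_customer.setdefault(txn["customer_number"], txn["record_source"])
    link_sale.append((txn["customer_number"], txn["order_number"]))

bus = queue.Queue()
bus.put(json.dumps({"customer_number": "C-42", "order_number": "O-7",
                    "record_source": "orders_feed"}).encode())

while not bus.empty():          # the DV-based EDW is just a listener on the queue
    on_message(bus.get())

print(hub_customer, link_sale)  # downstream marts/analytics pick it up from here
```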

At the end of the day, you need to decide whether you want a two-tier architecture (batch oriented), a two-tier architecture (100% real-time oriented), or a three-tier architecture (batch and real-time oriented).  This is where your decision lies.

The DV Methodology is nothing new either – it is based on tried-and-true best practices for data warehousing, project management, ETL management, risk analysis, risk reduction, risk mitigation, cost measurement and containment, and so on.  All of these things usually go into creating software – in other words, we are software engineers to a degree when we build ETL in and around data warehouses, hence the ties to the best practices of software engineering: being agile, implementing parts of Scrum, using JAD/RAD sessions in conjunction with use cases to gather requirements.  All of this needs to be applied from the methodology side of the house.  The only thing interesting about the methodology of the Data Vault is that most of the artifacts can (and arguably should) be auto-generated.  Why? Because the data model is pattern based with strict rule sets.
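
As an illustration of what “pattern based with strict rule sets” buys you, here is a small sketch that stamps out Hub DDL from a template.  The naming conventions and column types are illustrative assumptions, not a prescribed standard.

```python
def generate_hub_ddl(entity: str, business_key: str) -> str:
    """Every Hub has the same shape, so its DDL can be generated from metadata."""
    return (
        f"CREATE TABLE hub_{entity} (\n"
        f"    {entity}_sqn   INTEGER     NOT NULL PRIMARY KEY,  -- surrogate key\n"
        f"    {business_key} VARCHAR(50) NOT NULL UNIQUE,       -- the business key itself\n"
        f"    load_dts       TIMESTAMP   NOT NULL,              -- when the key was first seen\n"
        f"    record_source  VARCHAR(50) NOT NULL               -- which system supplied it\n"
        f");"
    )

print(generate_hub_ddl("customer", "customer_number"))
print(generate_hub_ddl("product", "product_code"))
```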

Does the Dimensional Model have patterns? Can it be generated too?  Sure.  The simple patterns are Dimension and Fact.  But where it gets hairy or difficult to auto-generate is where business rules are involved in loading the conformed entities.  That’s where the Data Vault has a minor advantage – why? Because the business rules have been moved downstream, to the step that moves data from the DV Model into the dimensional models.  Now, can you auto-generate a raw-data-based dimensional model?  Sure, but you might lose the ability to conform data (maybe, maybe not).

Can you auto-generate the ETL for both of these architectures? Sure, you can baseline it, and IF you are loading raw data to your dimensional warehouse, then voila – you are set.  Can you do this with Anchor Models? Key=Value pair models? Hadoop and NoSQL models?  Yes, yes, and yes – for every model that has a pattern, and in which you opt to store RAW data, the ETL/loading processes can be generated.
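
The same metadata that drives the DDL can drive the loading side.  A sketch of a generated raw Hub load follows, assuming a hypothetical staging table that already carries the business key and assuming the Hub’s surrogate key is filled by a sequence or identity default.

```python
def generate_hub_load_sql(entity: str, business_key: str, staging_table: str) -> str:
    """Generate an idempotent raw load: insert only business keys not yet in the Hub."""
    return (
        f"INSERT INTO hub_{entity} ({business_key}, load_dts, record_source)\n"
        f"SELECT DISTINCT s.{business_key}, CURRENT_TIMESTAMP, '{staging_table}'\n"
        f"FROM {staging_table} s\n"
        f"WHERE NOT EXISTS (\n"
        f"    SELECT 1 FROM hub_{entity} h WHERE h.{business_key} = s.{business_key}\n"
        f");"
    )

print(generate_hub_load_sql("customer", "customer_number", "stg_crm_customers"))
```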

Ok, down to brass tacks.  What are some of the issues or sticking points for the Data Vault Model?  Let me see if I can list a few here:

  • There are more joins – because all relationships, events and transactions are modeled as many-to-many tables.  You can think of these as factless facts; they would be defined in exactly the same way, so if you are creating factless facts in a dimensional model at the lowest level of grain, then you are building almost as many tables in your dimensional model as you would build in the Data Vault. (There is a small sketch of this after the list.)
  • The tables are narrower – packing more rows per block and requiring more physical CPU power and parallelism (parallel query/parallel ETL) from the infrastructure components.  But if you partition the tables in a data mart vertically (splitting the dimensions into narrow dimensions, for instance), you are doing exactly the same thing the Data Vault model proposes.
  • A main focus of this model is finding business keys, attempting to tie the physical model to the places in the business processes where the business keys are passed.  This requires the modeler to better understand the business.  But after all – isn’t that supposed to be a part of data warehousing in the first place?
  • More ETL processes – due to standards, rules and best practices we (in Data Vault Land) end up with far more processes than ETL teams are used to.  This can cause a ruckus, until the team understands that each of the processes runs at optimal speed (generally) and is much simpler because there is no business logic to contend with – no “conforming” of the data sets.
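
To make the first two bullets concrete, here is a small sketch against an in-memory SQLite database: the relationship lives in its own narrow many-to-many Link table (much like a factless fact), so even a trivial “who bought what, when” question is a multi-way join.  The table and column names are illustrative only.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE hub_customer (customer_sqn INTEGER PRIMARY KEY, customer_number TEXT);
    CREATE TABLE hub_product  (product_sqn  INTEGER PRIMARY KEY, product_code    TEXT);
    -- the relationship itself: a narrow many-to-many table
    CREATE TABLE link_sale    (sale_sqn INTEGER PRIMARY KEY,
                               customer_sqn INTEGER, product_sqn INTEGER, load_dts TEXT);

    INSERT INTO hub_customer VALUES (1, 'C-42');
    INSERT INTO hub_product  VALUES (1, 'P-7');
    INSERT INTO link_sale    VALUES (1, 1, 1, '2012-07-13');
""")

# Two joins just to answer "which customer bought which product, and when".
rows = con.execute("""
    SELECT c.customer_number, p.product_code, l.load_dts
    FROM link_sale l
    JOIN hub_customer c ON c.customer_sqn = l.customer_sqn
    JOIN hub_product  p ON p.product_sqn  = l.product_sqn
""").fetchall()
print(rows)
```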

Ok, so here’s the point: if you look at columnar databases, you get the same thing – joins (i.e., vertical partitioning) covered by specialized hardware for extreme parallelism.  If you look at a raw data dimensional model, you get similar statements or drawbacks – especially if you do not conform the data sets (resulting in vertical partitioning, or splitting, of the dimension tables).

At the end of the day, there is really only one big difference between Data Vault Models, alternative data modeling methods (2nd, 3rd, 4th, 5th normal forms), and dimensional models.  That difference is:

* the way parent-child relationships are handled

Yep – that’s it. That’s the whole enchilada.  That’s the “big secret,” if there really is one (and I personally don’t think it’s a secret). But anyway… if you look at it like this (with a small sketch after the two lists):

Dimensional Models:
* Dimension to Fact = Parent to Child
* Dimension to Fact = Fact stores Relationships, transactions, events for a point in time
* Dimension Table = Stores descriptors, hierarchies, temporality, business keys, and sometimes history (depending on dimensional type)
* Master Key Table = stores key lookups, key hierarchies, key resolution
* Helper Table = stores Dimension to Dimension joins
* Junk Table = stores “undefined / uncategorized” information
* Snowflaked Dimension Table = relates hierarchies in a fixed one-to-many architectural definition

Data Vault Models:
* Hub To Link = Parent to child
* Link Table = stores relationships, transactions, and events for a point in time (good); Links also store hierarchies, key lookups, key hierarchies, and key resolution; the Link table stores business-key-to-business-key joins (covering what a helper table covers) and covers the relationships between Hubs (what the snowflaked dimension referential integrity would hold)
* Hub Table = stores business keys and business keys only
* Satellite Table = stores descriptors, temporality, history, can store undefined or uncategorized information – but generally forces it to be attached to a business key of some sort
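
One way to see that parent-child difference in the flesh: in a star schema the relationship is a foreign key held inside the child (fact) row, while in a Data Vault it is split out into its own Link structure.  The column names below are illustrative only, sketched as plain Python dataclasses.

```python
from dataclasses import dataclass

# Dimensional: the parent-child relationship lives inside the child (fact) row.
@dataclass
class FactSale:
    customer_dim_key: int   # foreign key to dim_customer (parent held in the child)
    product_dim_key: int    # foreign key to dim_product
    sale_amount: float

# Data Vault: the relationship is its own structure, separate from keys and descriptors.
@dataclass
class HubCustomer:
    customer_number: str    # the business key, and only the business key

@dataclass
class LinkSale:
    customer_sqn: int       # the parent-child association lives here, on its own
    product_sqn: int

@dataclass
class SatSaleDetail:
    sale_link_sqn: int
    sale_amount: float      # descriptors and history hang off the Link/Hub
```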

At the end of the day, it comes down to preference: what you like, and what you are comfortable with.

If you like the parent-child relationships split out into “their own structure,” then you will like the Data Vault model.  If you would rather represent these parent-child relationships in a fixed architectural design (meaning enforced by the data model), then you will (mostly) prefer the Dimensional Model.

In closing I just want to say:

There are pros and cons to adding more tables and adding more parallel resource requirements to the infrastructure.  It’s not so much the data model that needs to be weighed here as it is where you want to place your business rules, the choice of two-tier or three-tier architectures, and the requirements for parallelism, or the ability to divide and conquer.

* It is my personal belief that a) you can do all these things with a raw-data-based dimensional model, but b) IF it is a raw-data-based dimensional model, THEN you can no longer conform the data sets.  Because to have raw data means: no coalescing, no changing of the data sets, no standardization, no conforming at all.  At that point, you end up with roughly the same number of tables in a raw dimensional model as you would end up with in a Data Vault Model.

Anyhow, I hope this helps clear the air. Data Vault is not for everyone, nor will everyone like the Data Vault.  If you end up wanting to try it, please feel free to do so.  There are plenty of knowledgeable people out there willing to help.

All the best,
Dan Linstedt


2 Responses to “#datavault – Another Tool In Your ToolBox”

  1. Kent Graziano 2012/07/13 at 7:45 am

    Great post Dan! Good summary and comparison.

  2. Jonathan 2012/07/13 at 9:24 am

    I agree with Kent as well. Thanks for the transparent perspective and review of the modeling techniques by laying all the “cards out on the table”. It is not a silver bullet that will make all the issues disappear, but I have had great success using the DV to deliver value to my customers.
