Data Vault Versus Dimensional – Part 1

I just received an email with a question about Data Vault versus Dimensional.  There have been many questions like this along the way from all parts of the world.  In this entry, I’ll post the question – and I invite you to comment with your thoughts about how you might answer.  I too will chime in and give my opinion on the matter.

And now, for the Questions:

I’m working as a project manager/maintenance leader for the DW of a large retailer. With this mail I hope to get your recommendations/thoughts on a project we are going to implement in the near future.

We have a DW where (new) parts are built on the Data Vault principle (with an ETL engine) and others on the dimensional DM principle. We have just finished a pilot where we aimed to rebuild one flow from the “old school” (dimensional and stored procs) to the Data Vault (and ETL). The source data seemed rather simple, and almost no users or applications used the data.

The findings were:
– The “new school” gave a lot more tables, which meant the user applications had to be rebuilt as well (more risk).
– The rebuild itself was A LOT more complex than first estimated. And yes, that is not the Data Vault principle’s fault, but the approach still seems more time-consuming than the dimensional model as a first investment (yes, the gains will probably come later).

And now it seems to me that we should keep the old-school principle but rebuild it in ETL, to give the solution better monitoring and alarm functionality and to avoid a single point of failure (since I do not really see the business benefit of rebuilding and changing principles). But what will happen to the DW if we have some parts that are built on Data Vault and others on dimensional principles? Is that a future problem?

Let’s address the findings:

– The “new school” (Data Vault) gave a lot more tables, which meant the user applications had to be rebuilt as well…

#1: The Data Vault model is for use with a Data Warehouse.  The data warehouse should be a back-end system of historical storage and passive integration of business keys.  The Data Vault model is *NOT* geared to be used by operational applications directly, unless a message bus (EAI or queuing mechanisms) is used to pass messages back out in operational format to the applications.

#2: The Data Vault model should *NOT* be accessed directly by business intelligence applications.  The Data Vault model is built to be a back-end data warehouse, and should serve as the enterprise memory store that provides you and your team the ability to quickly build new data marts (Star Schemas).  The BI Applications should still source Star Schemas, and should not source the Data Vault.

#3: The Data Vault is built for flexibility.  Because ALL relationships are separated out into Link tables, you are bound to end up with “more tables” than with Star Schema modeling.  This, however, is one of the flexibility components of the modeling technique.  More joins aren’t necessarily a bad thing, especially if you focus on the nature of MPP: parallelism, massively parallel loads, and massively parallel queries, which give rise to scalability.
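To make that concrete, here is a minimal sketch (the table and column names are purely illustrative, not taken from the reader’s system) of how a single “customer places order” relationship is laid out in a Data Vault.  What a star schema would hold in one fact table plus dimensions becomes two Hubs, a Link, and a Satellite:

    -- Illustrative Data Vault layout for one business relationship.
    -- Hubs hold nothing but the business keys.
    CREATE TABLE hub_customer (
        customer_hkey  CHAR(32)    NOT NULL PRIMARY KEY,  -- surrogate/hash of the business key
        customer_bk    VARCHAR(50) NOT NULL,              -- business key from the source system
        load_dts       DATETIME    NOT NULL,
        record_source  VARCHAR(50) NOT NULL
    );

    CREATE TABLE hub_order (
        order_hkey     CHAR(32)    NOT NULL PRIMARY KEY,
        order_bk       VARCHAR(50) NOT NULL,
        load_dts       DATETIME    NOT NULL,
        record_source  VARCHAR(50) NOT NULL
    );

    -- The relationship between the keys lives in its own Link table.
    CREATE TABLE link_customer_order (
        customer_order_hkey CHAR(32)    NOT NULL PRIMARY KEY,
        customer_hkey       CHAR(32)    NOT NULL REFERENCES hub_customer (customer_hkey),
        order_hkey          CHAR(32)    NOT NULL REFERENCES hub_order (order_hkey),
        load_dts            DATETIME    NOT NULL,
        record_source       VARCHAR(50) NOT NULL
    );

    -- Descriptive attributes are historized in Satellites.
    CREATE TABLE sat_order_detail (
        order_hkey     CHAR(32)      NOT NULL REFERENCES hub_order (order_hkey),
        load_dts       DATETIME      NOT NULL,
        order_status   VARCHAR(20)   NULL,
        order_amount   DECIMAL(12,2) NULL,
        record_source  VARCHAR(50)   NOT NULL,
        PRIMARY KEY (order_hkey, load_dts)
    );

Four tables instead of one or two is exactly the “more tables” effect, but each table has a single, predictable job, which is what keeps the loading and querying patterns repeatable.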

I address these concepts in depth in my classes that I teach, along with the coaching area that I have on-line.  You can take direct on-line classes from me at: http://LearnDataVault.com

– The rebuild itself was A LOT more complex than first estimated.

#4: More tables (in the case of the Data Vault) should make things less complicated and much easier, faster, and simpler to build – both in the loading cycle and in the querying cycle.  It’s a division of work.  In fact, if the standards have been followed correctly for the Data Vault model, then you should be in a position to generate 90% of your ETL routines that load the Data Vault, as well as those that load the staging area.  You should also be able to generate the baseline loads into raw star schemas and, from there, modify only the routines that need extra business logic.  This should make for easier ETL, NOT more complex ETL.
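To illustrate why so much of that ETL can be generated, every Hub load follows the same insert-only, key-deduplicating shape.  The staging table and column names below are hypothetical, continuing the sketch above:

    -- Illustrative hub-load pattern: insert only business keys not already in the hub.
    -- The same template applies to every Hub, which is why these loads can be generated.
    -- (Assumes staging already holds one row per business key.)
    INSERT INTO hub_customer (customer_hkey, customer_bk, load_dts, record_source)
    SELECT DISTINCT
           stg.customer_hkey,
           stg.customer_bk,
           stg.load_dts,
           stg.record_source
    FROM   stg_customer AS stg
    WHERE  NOT EXISTS (
               SELECT 1
               FROM   hub_customer AS hub
               WHERE  hub.customer_bk = stg.customer_bk
           );

Link and Satellite loads follow equally repeatable templates (insert the relationship if it is new, insert a satellite row if the attributes have changed), which is where the bulk of the generated routines comes from.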

If you have a more complex system, then something has gone wrong – either the standards are not implemented properly, or the model isn’t built correctly, or the Data Vault is exposed to the Business Intelligence applications.

In terms of generating ETL, there are several tools on the market today that do this for the Data Vault; all you need to do is search the web.

And now it seems to me that we should keep the old-school principle

Fair enough, if that’s truly what you want to do…  But I will suggest to you that you might run into the following problems:

  1. Scalability issues.  Star Schemas don’t scale very well into the hundred-plus-terabyte range; if you have massive amounts of data for a Star Schema, you might want to think about Netezza, a columnar DB, or a NoSQL/Hadoop back-end.  The Data Vault, however, allows you to scale (due to the architecture) into the petabyte range if so desired.
  2. Flexibility issues.  The larger the Star becomes, the more conformed you try to make it – which results in HUGELY COMPLEX ETL and extremely slow loading times.  It also results in incredibly long turnaround cycles for your business users.  I’ve seen it over and over again: eventually the weight of this turnaround time (for building in new requirements) will force your business to stop and restart the entire project, build it themselves, or worse yet, outsource the whole thing or kill the project entirely.
  3. Design-time problems.  Adding new systems and new source data becomes a huge nightmare, especially the longer the “old-style” EDW is in play.  Continual conformance of data drives the complexity SKY HIGH; when the system becomes too complex, it becomes too costly.  When it’s too costly, all of those dreadful things I just mentioned in #2 happen, eventually leading to re-engineering and a full and complete re-architecture.

The Data Vault model solves these problems once and for all!  Again, the Data Vault is meant to be a back-end system, with zero direct access by business users – and certainly none by your Business Intelligence engines.

But what will happen to the DW if we have some parts that are built on Data Vault and others on dimensional principles? Is that a future problem?

No, this isn’t a problem – as long as the Data Vault is used appropriately, and there are Data Marts placed between the BI applications and the true Data Warehouse (Data Vault).  The Data Vault only causes problems (as you’ve seen) when it is directly available to end-users, queries, and reporting tools.
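As a rough sketch of what “a Data Mart between the BI applications and the Data Vault” can look like (again with hypothetical names, including a sat_customer_detail satellite not shown earlier), a simple dimension can often be projected straight off a Hub and the latest rows of its Satellite:

    -- Illustrative mart-layer object: a customer dimension derived from the Data Vault.
    -- BI tools query dim_customer; they never touch the Hubs, Links, or Satellites directly.
    CREATE VIEW dim_customer AS
    SELECT hub.customer_hkey  AS customer_key,
           hub.customer_bk    AS customer_number,
           sat.customer_name,
           sat.customer_segment
    FROM   hub_customer AS hub
    JOIN   sat_customer_detail AS sat
           ON  sat.customer_hkey = hub.customer_hkey
           AND sat.load_dts =
               (SELECT MAX(s2.load_dts)          -- most recent satellite row per customer
                FROM   sat_customer_detail AS s2
                WHERE  s2.customer_hkey = hub.customer_hkey);

Whether that mart layer is views or physical tables, the point is the same: the dimensional and Data Vault parts of the warehouse coexist because the marts insulate the users from the Vault.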

The Data Vault model was not built for user accessibility; that’s the job of Star Schemas – and as you’ve discovered, the Star Schemas do a better job of it.  No, the Data Vault was built for scalability, flexibility, and accountability on the back-end, for the life of the Data Warehouse.

If and when you have the proper Data Vault architecture in place, you can get very good at building Star Schemas.  I once managed a team of 3 people, and in 1997 we built new star schemas within 45 minutes of receiving the 2-page requirements document, and we DIDN’T EVEN HAVE an ETL application in place!  We had to code our business rules into SQL for Sybase and run stored procedures around it.  Bottom line: our turnaround time was so fast that many of the business units would come to us for answers; they refused to work with their own business units, who took weeks and sometimes months to get them results!

We turned our IT department into a profit center for our lead business user.  Without rapid turnaround time like this (which you won’t see with Star Schemas as a data warehouse), you’re bound to end up getting the project shut down at some future point.

I hope this answers your questions.  I would love to hear more about what’s happening with your project, and perhaps I can assist you through the coaching area.

Thank-you kindly,
Dan Linstedt
