#datavault standards how & why

In recent times there have been several discussions around the standards of the Data Vault Modeling components.  In fact, this isn't the first time (nor will it be the last) that the standards are challenged.  That said, I feel it necessary to discuss just what I put the standards through before publishing them, in the hope that if you feel the urge to suggest changes, you will apply the same rigor I do before telling the user group that "I'm wrong, the standards don't work, and here's a change".  This post covers the reasoning behind, the insight into, and the rigor applied to the standards (ALL standards) for the Data Vault 1.0 Model and the Data Vault 2.0 Model.

Scope of Post

This post is highly focused on the process for building and accepting standards around the Data Vault Model.  It does not cover the standards processes for Methodology, Implementation, or Architecture – those are separate.

Introduction

The standards of the Data Vault Model (both 1.0 and 2.0) are tested in rigorous situations.  Many questions are asked, and many challenges are put forward against the structures, the design, the attributes, and their definitions – all in the name of producing a standard that achieves business value AND IT value.  The hope is that the standards will play a pivotal role in getting Data Vault models built consistently, appropriately, and repeatably (following pattern-based design), and will steer projects successfully around the potholes that so often cause re-design and re-engineering in the Data Warehousing marketplace.

The Background of the Current Standards

Before launching into a short discussion of HOW the standards are constructed, I want to share with you why the current standards are the way they are today.

I spent 10 years, from 1990 to 2000 (long before even publishing in 2001), researching, testing, designing, and re-designing the base standards around the model, the methodology, the architecture, and the implementation.  However, since the focus of this post is the Data Vault Model, I will stick to those portions of the discussion.

In those 10 years, I tested what was then "big data" (volume, velocity, and variety): internal and external feeds, web-service-provided data sets, FTP, COBOL and mainframe extracts, relational extracts, and yes, even document extracts.  By tested, I mean: I put these types of data into the design and tried intentionally to BREAK the system: break the ETL, break the ELT, break the bulk load, break the database, break the staging area, break the data model, break the relationships, break the keys, break the user-defined data set, break the temporality (i.e., load dates), and so on.

You see, I spent part of my previous life (prior to data warehousing) in Quality Assurance, in a lead QA role for a compiler company.  If I failed at my testing job, or the tests I wrote didn't provide enough coverage, then chances were the compiler would fail in the field – NOT a pretty situation, especially since compilers by nature are meant to have a very low tolerance for failure.  In other words, not only was I trained in how to write test cases, I was also responsible, as Lead in the QA department, for sign-off before the compiler would be approved for release to end customers.  All this to say, I had learned a thing or two about regression, white-box, black-box, and clear-box testing.

Now, the Data Vault Model needed testing, as did the load routines and everything else I mentioned a minute ago.  Unfortunately, I would have to write a full book to explain ALL the tests I put the model through, and I don't have time for that.  So, in short, I will attempt to explain the base-level questions that the components and the attributes must satisfy in order to qualify to be PART of the modeling paradigm.

If you can put your own "suggestions for changes to the standards of Data Vault Modeling" through these questions, test appropriately, and prove the results, then I would be happy to examine those results, and quite possibly a change in the standards would be warranted.  That is how I arrived at the Data Vault 2.0 Modeling changes in the first place.

Questions To Ask of “Proposed” Standards Changes

Soon, I will write a white paper detailing JUST the load date, the load end date, their purpose, and their necessity.  From that perspective, I will dive into the details of the specific element known as the Load Date – why it is, what it is, what it is not, and why it should never, ever be changed.  But for now, if you have a desire or a need to suggest changes to the entire user community around Data Vault Modeling, then PLEASE be aware of the following questions, as sooner or later I will ask you for the results of these questions – OR I will challenge your assertions as you challenge mine.

I expect nothing less of those of you wanting to change the standards.  At the end of the day, if the "new proposal" you make to the entire user group fails any one of these challenges, then you are putting the future of the entire Data Vault Model at risk of failure (i.e., re-design or re-engineering), and putting the entire effort of those who listen to your suggestion at risk of failure due to a break in the consistency and applicability of your suggested change.

Here are the questions you will want to test and ask (all of the following should work without IF rules):

  1. Does the new proposed standard work with external data?
  2. Does the new proposed standard work with real-time data?
  3. Will the new proposed standard work with unstructured data?
  4. Will the new proposed standard work with structured, multi-structured, or semi-structured data?
  5. Will the new proposed standard cause cascading change impacts in the model?
  6. Does the new proposed standard introduce temporal (single or multi) confusion?
  7. Will the new proposed standard introduce more foreign keys?
  8. Does the new proposed standard fit in the Business Vault or the Raw Data Vault?
  9. Is the benefit of the new proposed standard business-driven or technically driven (or both)?
  10. Will the new standard work at 1 row and at 100 billion rows (in the same table structure) without forcing re-design?
  11. Does the new proposed standard inhibit high-speed loading?
  12. Does the new proposed standard introduce dependencies in the loading process, or remove them?
  13. Is the new proposed standard a physical improvement for performance, a logical change for business value, or does it fit in both physical and logical designs?
  14. Does the new proposed standard improve query speed over 100 million rows (in a single table), or does it cause performance problems in this case?
  15. Does the new proposed standard work for ANY feed from ANY source system at ANY time, or does it *require* certain technology to be in place upstream of the Data Vault in order to work?
  16. Does the new proposed standard introduce Cartesian products in queries? If so, how and why?

In reality, there are about 100 more testing questions that I introduce, questions that get you into the weeds.  Very laborious indeed.  These are just some of the fundamental questions that I put the modeling techniques through before announcing them as part of the standard.  It's also why Data Vault 2.0 "stayed in the labs" for at least 2 years before being fully released to the public.  Testing, testing, and more testing.

Remember: ANY STANDARD YOU SUGGEST TO THE WORLD WIDE USER GROUP CAN CAUSE GRAVE DAMAGE IF IT DOESN’T WORK AT VOLUME, VELOCITY, VARIETY – OR REQUIRES CERTAIN SOFTWARE OR HARDWARE IN ORDER TO BE EXECUTED PROPERLY.

Conclusions

I put the original Data Vault 1.0 Model through these and many more tests, in practice and in production as well. I DON'T JUST COME AT DATA VAULT WITH A THEORETICAL APPROACH.  Everything I teach, everything I espouse, everything I build, everything I talk about, write about, or standardize, I have put into practice in production systems (usually at large scale, with lots of active data and big teams).

I encourage you, if you feel the need to "tell users to change the defined standard", to PROVE to the user group that you've answered these questions: explain where, why, and how you've run these tests, and what the test results truly are.  Because if you don't, I will challenge you to bring answers – you run the risk of suggesting BREAKS or "workarounds" that ultimately fail in the consumers' eyes.

Now, that said, I am constantly challenged by new customers and new cases.  I am very happy to say that DV2.0 marks the first time in 14 years of "production" that the modeling standards have changed – due to a performance break at high volume, and dependencies that are not sustainable at EXTREME volume across multi-platform loads.  The break?  Sequence numbers.  Yep, the very suggestion that "improves query performance" at smaller volumes has a re-design consequence on MASSIVE-volume systems (something we could not foresee, and could not test until recently, with the advent of Hadoop).

Hence the ONE and only change to the Data Vault 2.0 model: the replacement of Sequence numbers with Hash Keys.  Mind you, there is a specific set of rules and standards around HOW this works, and I've written an 8-page white paper that all my students receive when they go through my courses (in person or online).
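
For a sense of the mechanics (the full rules live in that white paper), here is a minimal sketch in Python.  The trim/upper-case/delimiter treatment and the use of MD5 are illustrative assumptions for this sketch, not the published standard itself:

```python
import hashlib

def hash_key(*business_key_parts: str, delimiter: str = "||") -> str:
    """Derive a deterministic key from one or more business key parts.

    Illustrative assumptions: each part is trimmed and upper-cased, parts
    are joined with a delimiter, and MD5 is the hash function.
    """
    normalized = delimiter.join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same business key always produces the same key, so hubs, links, and
# satellites can be loaded in parallel, on different platforms, without
# waiting on a sequence generator or a key lookup.
print(hash_key("CUST-001"))            # a hub key
print(hash_key("CUST-001", "ORD-42"))  # a link key built from two business keys
```

Because the key is derived from the business key itself rather than assigned by a central sequence generator, the loading dependency that broke at extreme volume across multiple platforms simply goes away.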

So, in closing, all I ask is that IF you are going to suggest changes to the modeling paradigm and standards, you please test your suggestions before making blanket statements in public about how the standards I propose are wrong or don't work.

And remember – Load Dates will be covered in a white-paper, due out soon.  You will get an in-depth look at how to test a SINGLE element for use within the standards of the Data Vault Modeling paradigm.

With best regards,
Dan Linstedt
DanLinstedt@gmail.com

http://LearnDataVault.com


2 Responses to “#datavault standards how & why”

  1. Lakshman 2015/06/22 at 3:29 am #

    Hi Dan, we have successfully implemented Data Vault for our client. At the moment we have a few scenarios, and we would like to know if the following is allowed per the Data Vault standard.

    We have a hub and satellites to store organisation-level details (code, name, address, parent, etc.). These are loaded once a quarter into the organisation hub and satellites. The scenario we have is that new organisation details (code and names only) will now come in on a monthly/weekly basis from a different source. I think we should use the existing hub, populate the new organisation code in the hub, and create an empty record in the satellite. When the actual quarterly organisation data comes in, we would update the corresponding empty record with the actual values. Is this correct per the Data Vault standard, or are no updates allowed at all?
    Or, should we load only the hub and leave the satellite without populating any empty records, and then populate the satellite when the actual quarterly data comes in?

    Although these solutions are technically feasible, we would like to know whether we are adhering to the Data Vault standard.

    Your advice will be of great help.
    Thanks,
    Lakshman

  2. Dan Linstedt 2015/06/23 at 3:44 pm #

    Hi Lakshman,

    Thank you for taking the time to contact me. I typically offer responses to these types of questions in a paid-consulting arena. I will provide the short answer here, but please note – I cover all of this and more in my Data Vault 2.0 Boot Camp & Private Certification course.

    1) Use the existing hub.
    2) Separate satellites by source system (as documented in my Super Charge Your Data Warehouse book).
    3) NEVER update in the Data Vault, which means you never have to insert empty records.
    4) Correlate or coalesce the data from the TWO satellites on the way OUT of the warehouse, downstream to the data marts.

    In other words: one Hub, one Quarterly Sat, one Weekly/Monthly Sat, and away you go. Never, ever insert EMPTY records and then issue updates – that is an improper use of the Satellites.
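
    For illustration only (the attribute names below are hypothetical, and this is a sketch rather than a prescribed implementation), coalescing the current rows from the two Sats on the way out to the marts might look like this:

    ```python
    from typing import Optional

    def coalesce_org(quarterly: Optional[dict], weekly: Optional[dict]) -> dict:
        """Merge the current quarterly and weekly/monthly satellite rows for
        one hub key on the way out to the data marts.

        Hypothetical attribute names; precedence favors the richer quarterly
        feed and falls back to the weekly/monthly feed.
        """
        quarterly = quarterly or {}
        weekly = weekly or {}
        attrs = ["org_code", "org_name", "address", "parent_org"]
        return {a: quarterly.get(a) if quarterly.get(a) is not None else weekly.get(a)
                for a in attrs}

    # Example: only code and name have arrived (weekly feed), no quarterly row yet.
    print(coalesce_org(None, {"org_code": "ORG-7", "org_name": "Acme"}))
    ```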

    Now, all that said: IF they are supposed to carry all the same attributes, and the two systems are SUPPOSED to be in sync, then insert into a single Sat when the data arrives, regardless of source. Just make sure to run the delta process prior to the insert.
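
    And here is a minimal sketch of that delta check prior to insert (again illustrative, assuming a hash-diff comparison over the descriptive attributes):

    ```python
    import hashlib
    from typing import Optional

    def hash_diff(record: dict, attributes: list) -> str:
        """Hash the descriptive attributes so a change can be detected cheaply."""
        payload = "||".join(str(record.get(a, "")).strip().upper() for a in attributes)
        return hashlib.md5(payload.encode("utf-8")).hexdigest()

    def should_insert(incoming: dict, current_row: Optional[dict], attributes: list) -> bool:
        """Insert a new satellite row only when the descriptive data has changed."""
        if current_row is None:
            return True  # first row seen for this hub key
        return hash_diff(incoming, attributes) != hash_diff(current_row, attributes)
    ```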

    Any further information would require booking my time to review your solution. I am available for consulting.

    Thank you kindly,
    Dan Linstedt
