For many years, I have built, authored, and maintained the #Datavault standards. This includes Data Vault 1.0 and Data Vault 2.0. There are others in the community who believe that “these standards should evolve and be changed by consensus of the general public”.
I have a number of issues with this approach. In this article I will describe what it takes to author a standard for the Data Vault 2.0 System of Business Intelligence. You are certainly more than welcome to contribute to the standards body of knowledge around Data Vault; I simply want contributions to be held to the highest level of integrity.
Why people insist on “breaking the rules and standards” I set forth is beyond me. Would you trust a heart surgeon who has never been to school for proper training (standard methods and procedures) to operate on your heart? How about a brain surgeon? Of course, it goes without saying that when your life depends on it (whatever it is, from a car functioning properly in a crash to an airplane flying according to the laws of physics), all of a sudden good standards matter.
With all sets of standards there are the purist standards (those that I document) and the pragmatic standards (those that contain minor alterations or deviations). The bigger the gap between the purist standards and the pragmatic standards, the more likely the project / process / design will fail under stress.
The issue isn’t necessarily the alteration itself; it’s the lack of rigor applied to testing the proposed pragmatic approach and alteration that eventually results in failure.
There are some cases, on specific projects, where I have vetted and approved minor alterations for a pragmatic approach to implementing Data Vault. One such case is Teradata. Because of the way its relational engine works, a Hash Key is not necessary, and by the way, neither is a surrogate sequence identifier! Teradata can and does hash its primary key / business key under the covers. This is an optimization NOT made by most other platforms (except SAP HANA).
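For readers who want to see what a Hash Key actually is in practice, here is a minimal sketch in Python. It assumes MD5 over a normalized business key; the trimming, casing, and delimiter rules shown here are illustrative choices for the example, not a statement of the standard itself.

```python
import hashlib

def hash_key(*business_key_parts: str) -> str:
    """Derive a deterministic hash key from one or more business key parts.

    The parts are trimmed, upper-cased, and joined with a delimiter before
    hashing, so the same business key always yields the same key on any
    platform, with no central sequence generator involved.
    """
    normalized = ";".join(part.strip().upper() for part in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest().upper()

# Example: a customer key derived from a single (hypothetical) business key
print(hash_key("CUST-00042"))  # always the same 32-character value
```

The point of the sketch is determinism: because the key is a pure function of the business key, it can be computed independently and in parallel on any platform, which is exactly what a sequence generator cannot do.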
Most of the time, however, the standards as I have defined them must stay in place – or some part of the architecture, methodology, model, or implementation will suffer (in some cases, multiple parts will suffer). Then come my competitors, whom I originally taught Data Vault 1.0. They make claims as they see fit. I’ve made a list of some of their false claims below:
Poor Judgement Claims Made in the Market:
- A Link can be a Hub
- A Hub can have a foreign key to another Hub
- A Satellite can have its own Sequence ID / Primary Key identifier
- Sequences are FINE to continue utilizing, you don’t need Hash Keys
- Standards for Data Vault should be managed by consensus, and by the community at large
- Satellites can have more than one foreign key to more than one parent structure
- You don’t need Change Data Capture
- Data Vault 2.0 is nothing more than a change from Sequences to Hash Keys in the Modeling Level
And more! Some are far too outlandish to list here; they would simply provide a good laugh.
Want to Suggest a Change to the Standards?
I am not saying you cannot suggest changes. I have always kept my door open (and continue to do so). I welcome suggestions and thoughts around how the standards can evolve to better suit the needs of the marketplace, automation, big data, and so on. In fact, I collaborated with a team of individuals to innovate Data Vault 2.0 in the first place. This team included: Kent Graziano, Michael Olschimke, Sanjay Pande, Bill Inmon, Gabor Gollnhofer, and a few others…
I didn’t make sweeping changes by myself, or just because I thought it would be a good idea, no – I tested (and tested and tested), and vetted the ideas with my colleagues before announcing (about 2.5 years later) the Data Vault 2.0 system of business intelligence.
I am more than happy to have you suggest changes, or to hear your ideas. Standards do need to evolve, change, and adapt (hopefully without causing re-engineering efforts). That said, I expect you to apply proper rigor before making suggestions. Below is a list of conditions I expect you to run your changes through, and to bring documented results of, before I can consider the change to the greater standard.
- Test against Large Volumes of data (these days it must be > 500 TB of data). This number will continue to increase as systems become capable of handling larger data sets.
- Test against Real-Time feeds (burst rates of up to 400k transactions per second). This number, too, will continue to increase as systems become capable of handling larger data sets.
- Test against Change Data Capture and Restartability
- Test against multiple platforms, including (but not limited to) Oracle, SQL Server, DB2, Teradata, MySQL, Hadoop (HDFS, Hive, and Spark), Cloudera, MapR, HortonWorks, and SnowflakeDB.
- Test in multiple coding languages: Python, Ruby, Rails, Java, C, C#, C++, Perl, SQL, PHP (to name a few)
- Test in Recovery situations: backup and restore
Below is a sample list of questions I typically ask of the change (I track and record metrics around these items):
- Does it negatively impact the agility or productivity of the team?
- Can it be automated for 98% or better of all cases put forward?
- Is it repeatable?
- Is it consistent?
- Is it restartable without massive impact? (when it comes to workflow processes)
- Is it cross-platform? Does it work regardless of platform implementation?
- Can it be defined ONCE and used many times? (goes back to repeatability; see the sketch after this list)
- Is it easy to understand and document? (if not, it will never be maintainable, repeatable, or even automatable)
- Does it scale without re-engineering? (for example: can the same pattern work for 10 records, as well as 100 billion records without change?)
- Does it handle alterations / iterations with little to no re-engineering?
- Can this “model” be found in nature? (the model might be a process, data, a design, a method, or otherwise; nature means reality, beyond the digital realm)
- Is it partitionable? Shardable?
- Does it adhere to MPP mathematics and data distribution?
- Does it adhere to Set Logic Mathematics?
- Can it be measured by KPIs?
- Is the process / data / method auditable? If not, what’s required to make it auditable?
- Does it promote / provide a basis for parallel independent teams?
- Can it be deployed globally?
- Can it work on hybrid platforms seamlessly?
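To make a few of these questions concrete (repeatability, being defined once and used many times, and restartability), here is a minimal sketch in Python of a metadata-driven loading template. The table and column names are hypothetical; this illustrates pattern-based generation in general, not the Data Vault 2.0 implementation standard itself.

```python
# A hypothetical, metadata-driven template for loading a Hub-style table.
# The pattern is written once and reused for every table; the generated
# statement is insert-only, so re-running it after a failure does not
# create duplicates (one simple way a process can be restartable).

HUB_LOAD_TEMPLATE = """
INSERT INTO {hub} ({hash_key}, {business_key}, load_date, record_source)
SELECT DISTINCT stg.{hash_key}, stg.{business_key}, stg.load_date, stg.record_source
FROM {staging} AS stg
WHERE NOT EXISTS (
    SELECT 1 FROM {hub} AS h WHERE h.{hash_key} = stg.{hash_key}
)
"""

def build_hub_load(hub: str, staging: str, hash_key: str, business_key: str) -> str:
    """Render the single pattern for any Hub, given only metadata."""
    return HUB_LOAD_TEMPLATE.format(
        hub=hub, staging=staging, hash_key=hash_key, business_key=business_key
    )

# Same pattern, different metadata – no re-engineering per table.
print(build_hub_load("hub_customer", "stg_customer", "hk_customer", "customer_id"))
```

The same pattern, fed different metadata, produces every load of this type; that is the sense in which a pattern can be defined once and used many times.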
And there are quite a few more questions besides. There are those out there who say volume and velocity don’t matter… Well, I beg to differ. Volume and velocity (data moving within a fixed time window from point to point) cause architectures, models, and processes to fail and to require re-engineering at the end of the day. Unless you’ve had this level of exposure (today at the 400 TB+ level), you would never have this experience.
If volume and velocity did not matter, we never would have seen the creation of Hadoop and NoSQL in the first place.
I welcome suggestions for changing the standards – all I ask is that you put the proper rigor and testing behind the changes first. One-off cases or one-time changes do not work and will never be accepted as changes to the core standards. Just a refresher: I put in 30,000 test cases between 1990 and 2001, and another 10,000 test cases since then, in order to build common standards that everyone can use to create successes in their organizations.
With the advent of Data Vault 2.0 I have (finally) included the necessary documentation for the Methodology, Architecture, and Implementation. I’ve enhanced the Modeling components to meet the needs of Big Data, Hybrid Solutions, Geographically Split Solutions, and privacy and country regulations. The changes to the data modeling paradigm (while subtle) are important.
I did not build these standards by myself in a closet somewhere. I had a team of 5 people at Lockheed Martin every step of the way, and no – my current competitors were NOT part of that team. In fact, they didn’t even know Data Vault existed at that time, because it was still under development between 1990 and 2001. That team consisted of: myself, Jack, Arlen, Jackie, and John, all of whom worked for Lockheed Martin. I have withheld their last names to protect their privacy.
Please note: I have just released the new Data Vault Data Modeling Standard v2.0.1 FREE for you. You can get a copy by registering at http://DataVaultAlliance.com
Coming soon: Data Vault Implementation Standard v2.0.0, and a few more!!
Have something negative or positive to say?
Post a comment below; I’m happy to hear from you directly.
(C) Copyright Dan Linstedt, 2018 all rights reserved.