What is Data Vault 1.0?
The current iteration of the Data Vault was created over 20 years ago and published over 10 years ago. Systems and technologies evolve. So do architectures and methodologies.
Up until now, the Data Vault model and methodology have worked very well with most systems, especially relational databases.
There have always been issues surrounding columnar databases, key-value stores, triple stores, cloud databases, and unstructured data. I’ve even written about the issues with them.
New technologies like Hadoop are becoming more and more popular, and I’ve had a lot of pressure from both students and clients to implement a Data Vault on big data systems and/or within environments that include them. It’s also a reality: these new systems are emerging as both sources and targets, and the existing Data Vault standards simply do not fit.
Although the Data Vault scaled to multi-petabyte capabilities as far back as 8 years ago, today’s systems can handle volumes many times that and scale linearly. They are not the same, however: applying the same thinking we use for relational databases to these systems just won’t work.
There are also substantial performance improvements in the new model, thanks to tweaks to the Data Vault Modeling standard.
“Everything out there today about the Data Vault model and methodology that has been referred to, or is being referred to, as the Data Vault is now deemed Data Vault 1.0.”
It has stayed unmodified for over a decade, which is not a bad run at all. Any use of the term “Data Vault” in reference to the Data Vault model and methodology, and the Data Vault architecture for building data warehouses, can automatically be labelled Data Vault version 1.0.
Solving problems only to introduce new ones has never been the goal of the Data Vault.
Therefore, the evolution of systems and tools has led to the new and improved …
Data Vault 2.0
Data Vault 2.0 subsumes and supersedes Data Vault 1.0. This is an automatic extension: 2.0 has enough new material and changes to the standard to warrant versioning, both to protect the investment companies and individuals have already made in the Data Vault and to allow the natural evolution of the standards. (This is not unlike the evolution of star schemas into the Kimball BUS architecture, or Bill Inmon redefining the CIF with DW 2.0 to address today’s needs.)
Data Vault 2.0 has a bigger focus on implementation than Data Vault 1.0.
Data Vault 2.0 addresses many issues, including performance concerns in the data models, along with big data, real-time, and unstructured data components.
Data Vault 2.0 addresses issues with big data systems like Hadoop and HPCC compared to relational database systems like SQL Server, Oracle, MySQL, PostgreSQL, DB2 and others.
Data Vault 2.0 addresses MPP databases like Teradata, Greenplum, etc.
Data Vault 2.0 addresses issues surrounding other NoSQL databases like MongoDB, Riak, CouchDB, Cassandra, HBase, etc.
These are realities that need to be addressed, and customers are already asking for this. Some customers have come forward and are willing to be beta testers for the implementation of a Data Vault directly on Hadoop using Hive (and there’s a lot of exciting news on that front).
While Data Vault 2.0 is still in its nascent stages, we have already run performance tests of the new modeling techniques on relational databases, with substantial improvements. Hadoop is in the lab stage.
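This post doesn’t spell out what the new modeling techniques are, but one change widely associated with Data Vault 2.0 is replacing sequence-generated surrogate keys with hash-based keys derived from the business key. Treat the sketch below as an illustration of that idea under my own assumptions (the function name, `||` delimiter, and normalization rules are illustrative, not part of any published standard): a hash key can be computed independently on any node, with no central sequence generator, which is what makes it attractive on MPP and Hadoop-style platforms.

```python
import hashlib

def hub_hash_key(*business_key_parts: str) -> str:
    """Derive a deterministic surrogate key by hashing the business key.

    Unlike a sequence, this requires no coordination between loaders:
    any process on any node computes the same key for the same input.
    """
    # Normalize (trim, uppercase) and join parts with a delimiter so that
    # ("A", "BC") and ("AB", "C") do not produce the same hash input.
    normalized = "||".join(part.strip().upper() for part in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Two loaders on different systems derive the same key with no handshake,
# despite cosmetic differences in the source value.
key_a = hub_hash_key("CUST-1001 ")
key_b = hub_hash_key("cust-1001")
```

Because the key is a pure function of the business key, hubs, links, and satellites can be loaded in parallel, which is one plausible source of the performance gains mentioned above.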
Data Vault 2.0 is a new specification, and the certification will be introduced in 2013. So far, the only way to get this certification is directly through me or through people who have signed an authorization agreement.
The licensing for DV 2.0 is still being worked out. It will probably follow similar lines as 1.0, with the modeling being “open source” while I retain the copyright. We’re still debating this; it may end up trademarked like DW 2.0, or it may follow a flexible open-source license.