
#DataVault Issues Resolved

I wrote a post a long time ago (2010) about the pitfalls and issues of Data Vault modeling.  In this entry I will dispel some of the pitfalls and issues I originally discussed, as technology and platforms have come a long way since then.  Also, Data Vault 2.0 has been published and solves many of these issues.

Let’s take this bit by bit….

In my original article I first stated:

“Thus, the data in the Data Vault is not for end-user access (direct access), it is for power-user access and data mining or discovery operations.”

This is not entirely true.  The Data Vault model comes in two forms: the Raw Data Vault and the Business Data Vault.  In the business layer we construct point-in-time (PIT) and bridge tables, along with business-rule-driven information sitting in Hubs, Links, and Satellites, constructed for business access.  The Business Data Vault is not a full copy of the raw vault model; it is a sparse creation.

The other piece here is that we utilize views (virtual dimensions, virtual facts, virtual flat-wide tables) directly on top of the PIT and bridge tables, joining directly to the raw Data Vault model.  These views (virtual marts) are directly accessible by business users.  With this level of control, we can be flexible, dynamic, and tune queries for high speed.
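
To make the idea of a virtual mart concrete, here is a minimal sketch (Python, purely illustrative) that assembles the DDL for a virtual dimension view sitting on a PIT table and equi-joining back to the raw Data Vault.  Every object name here (pit_customer, hub_customer, the satellites, the column names) is a hypothetical stand-in, not a prescription for any particular model or platform.

```python
# A minimal, illustrative sketch: generate the DDL for a "virtual dimension"
# view that sits on top of a point-in-time (PIT) table and equi-joins back
# to the raw Data Vault.  All object names here are hypothetical.

def virtual_dimension_ddl(view_name, pit, hub, sats):
    """Build a CREATE VIEW statement joining a PIT table to its hub and satellites."""
    select_cols = [f"{pit}.snapshot_date", f"{hub}.customer_bk"]  # business key from the hub
    joins = [f"JOIN {hub} ON {hub}.hub_customer_hk = {pit}.hub_customer_hk"]
    for sat in sats:
        select_cols.append(f"{sat}.*")
        # The PIT row carries the satellite load date effective for the snapshot,
        # so the join is a simple equi-join (no BETWEEN or correlated subquery).
        joins.append(
            f"LEFT JOIN {sat} ON {sat}.hub_customer_hk = {pit}.hub_customer_hk "
            f"AND {sat}.load_date = {pit}.{sat}_load_date"
        )
    return (
        f"CREATE VIEW {view_name} AS\n"
        "SELECT "
        + ",\n       ".join(select_cols)
        + f"\nFROM {pit}\n"
        + "\n".join(joins)
        + ";"
    )


if __name__ == "__main__":
    print(virtual_dimension_ddl(
        view_name="dim_customer",
        pit="pit_customer",
        hub="hub_customer",
        sats=["sat_customer_details", "sat_customer_address"],
    ))
```

The same pattern can generate virtual facts and flat-wide views; only the column lists and joins change, which is what keeps the delivery layer easy to tune and re-tune without replicating data.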

That said: I have a customer with 360 to 500 billion records in their raw Data Vault tables, using row-level security joins in the virtual dimensions and virtual facts, and getting sub-second query response times….

Turn Our Data Into Information.

This has always been the mantra of properly built BI solutions, and Data Vault is no exception.

“How can real-time be realized if the data has to stop in different places?”

Real-time?  NO PROBLEM

Real-time is accommodated easily by the Data Vault modeling and loading patterns; there really isn’t any issue here at all.  We buffer the output in accordance with the Service Level Agreements (SLAs) by utilizing the point-in-time and bridge tables.  These snapshot structures perform many tasks, including allowing high-performance querying against the raw Data Vault directly, without moving or replicating data downstream to physical marts!
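
As a rough illustration of the real-time loading pattern, the following Python sketch pulls messages off a simulated queue, derives a deterministic hash key from the business key (Data Vault 2.0 style), and inserts only previously unseen keys into an in-memory stand-in for a hub.  The message shape, the SHA-256 choice, and the table name are assumptions for the example, not a prescription.

```python
# A simplified sketch of a real-time hub load: each message carries a business
# key; we hash it (Data Vault 2.0 style) and insert only keys we have not seen
# before.  Queue, message shape, and the SHA-256 choice are assumptions.

import hashlib
from datetime import datetime, timezone


def hash_key(business_key):
    """Derive a deterministic hash key from a normalized business key."""
    normalized = business_key.strip().upper()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def load_hub(messages, hub_rows):
    """Insert previously unseen business keys into an in-memory stand-in for hub_customer."""
    for msg in messages:
        hk = hash_key(msg["customer_bk"])
        if hk not in hub_rows:  # a hub holds each business key exactly once
            hub_rows[hk] = {
                "customer_bk": msg["customer_bk"],
                "load_date": datetime.now(timezone.utc),
                "record_source": msg.get("source", "mq"),
            }


if __name__ == "__main__":
    hub_customer = {}
    stream = [
        {"customer_bk": "C-1001", "source": "crm"},
        {"customer_bk": "c-1001 ", "source": "web"},  # same key, different casing
        {"customer_bk": "C-2002", "source": "crm"},
    ]
    load_hub(stream, hub_customer)
    print(len(hub_customer), "hub rows loaded")  # 2: the duplicate collapses to one key
```

Satellite rows would be handled the same way, with a hash-diff comparison deciding whether the descriptive payload actually changed (a sketch of that appears later in this post).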

Business Issues

In my previous article I opened up a list of business issues; I will now address each issue as it stands in today’s world.  My answer appears directly below each issue.

  • Old Issue: Data in the Data Vault is non-user accessible.
  • Answer: Not true.  Data in the Data Vault is user accessible through virtual dimensions, virtual facts, and virtual flat-wide tables that all utilize point-in-time and bridge structures underneath.  This way, we do not need to replicate the data to yet another target.  When the business rules change, the team can respond with incredible speed and agility by keeping those views virtual.
  • Old Issue: Data in the DV is not “cleansed or quality checked”
  • Answer: True.  Data in the Data Vault is not cleansed or quality checked.  That said, if you want “quality checked data” and want to turn it into information, that is done through the business rules on the way to the mart layer.  This can happen inside the Business Data Vault, in the virtual mart layers, or in the PIT and bridge table loads (see the illustrative sketch after this list).
  • Old Issue: Benefits of the DV are indirect, but very real.
  • Answer: Not all benefits are indirect.  90% of the benefits are direct, especially with Data Vault 2.0 (not necessarily with Data Vault 1.0 or the Data Vault model alone).  In Data Vault 2.0, the direct benefits are incredible agility, big data adaptation, inclusion of NoSQL / Hadoop platforms, ROI calculations on gap analysis in the business layers, query performance, and so on.  There are direct, attributable, and calculable benefits that can be seen as a result of a properly built Data Vault 2.0!!  What does DIRECT mean?  Lower costs to build, to maintain, and to enhance.  Faster delivery, easier automation, easier to scale, easier to run in parallel, AND the ability to calculate the monetary value of EACH individual element.
  • Old Issue: More up-front work is required for long-term payoff.
  • Answer: NOT entirely true.  This all depends on scope.  Properly scoped sprints can result in one-day build-and-release cycles; again, this is dictated by Data Vault 2.0 and is not available in Data Vault 1.0.  Up-front work can be limited to a single set of output requirements and a decisive list of business keys.  These business keys can be resolved in a 20-to-40-minute kick-off meeting with the right individuals involved.  Long-term payoff is seen right out of the gate, especially when implementing “day-long sprints”.
  • Old Issue: Business Users believe (in the beginning) that they don’t need an “extra copy of the data”
  • Answer: True!!  In the beginning (due to slower hardware in 2010, when the original article was written), it was necessary to physicalize the information marts downstream.  With today’s technology (as of Q3 2017), this is no longer the case.  With platforms like SnowflakeDB, Oracle Exadata, Teradata, SQL Server Big Data Edition, Kudu, Impala, Hive, and more, it is no longer necessary to physicalize the marts.  We can (and often do) virtualize the downstream mart results without copying the data out of the Raw Data Vault EDW.
  • Old Issue: Elegant Architecture is secondary to business churn.
  • Answer: True.  This will always hold true, but what proves out time and time again in Data Vault 2.0 is that the elegant architecture, as designed, requires little to no re-engineering over time!  This, in addition to virtual marts, cuts down on delivery time and allows rapid, agile deliveries to occur, sometimes in a matter of 20 to 40 minutes (specifically in day-long sprints).  Iterations of these outputs can be made within a single hour (including unit testing).  This is NOT specific to Data Vault; it occurs in ANY properly built EDW.
  • Old Issue: Using a DV forces examination of source data processes, and source business processes, some business users don’t want to be accountable, and will fight this notion.
  • Answer: True.  They still fight this notion sometimes, but more and more business users actually have a strong desire to fix their data and their source systems.  Remember, this list was created in 2010 (a LONG time ago!!).  These days most business users want to find and fix their issues; they want to be transparent and auditable.  So this is no longer a problem.  This is NOT a Data Vault problem; it is any PROPERLY BUILT DATA WAREHOUSE that raises these questions.
  • Old Issue: Businesses believe their existing operational reports are “right”, the DV architecture proves this is not always the case.
  • Answer: True, this still happens.  It is part of the gap analysis that occurs during normal, properly auditable data warehousing.  This actually is not an issue; it’s a source of benefits for the business.  This HAPPENS IN EVERY GOOD DATA WAREHOUSE that is built from ANY raw data storage.
  • Old Issue: Business Users from different units MUST agree on the elements (scope) they need in the Data Vault before parts of it can be built.
  • Answer: This is no longer true.  With proper scope, focused on day-long sprints, we can achieve wonderful output in extremely rapid turnaround times.  We do not need to wait until the business users agree to get output from the Data Vault.  In fact, we can deliver information even when they don’t agree.
  • Old Issue: Currently there is only one source of information exchange, there are no books on the Data Vault (yet).
  • Answer: No longer true.  Data Vault 2.0 has been published in Building a Scalable Data Warehouse with Data Vault 2.0 (available on Amazon.com worldwide): a huge book with 15 chapters and 698 pages.  Chapters 4 through 15 are hands-on exercises complete with real-world data sets, covering getting data in, getting data out, cubes, and even Master Data Services.
  • Old Issue: Some businesses fight the idea of implementing a new architecture, they claim it is yet unproven.
  • Answer: This is also no longer true.  Data Vault 2.0 has been proven time and time again, from the Department of Defense to the NSA, Lockheed Martin, Microsoft, Commonwealth Bank (Australia), Pepper Financial (Australia), and more.  It’s been proven all over the world, in many different organizations and many different situations.  We have reference clients you can speak with if you like.
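
To illustrate the point above about quality rules being applied on the way out to the marts, here is a small, hypothetical Python sketch of a “soft” business rule applied while projecting raw vault rows into mart-ready information.  The raw rows are never modified, so the warehouse stays auditable; the column names, rule, and data are invented for the example.

```python
# Illustrative only: a "soft" business rule applied on the way from the raw
# vault to an information mart.  The raw rows are never modified; the rule is
# applied at delivery time, so it can change without reloading history.

RAW_SAT_ROWS = [
    {"customer_hk": "a1", "country": "US", "revenue": "1200.50"},
    {"customer_hk": "b2", "country": "usa", "revenue": None},  # dirty source data
]

COUNTRY_STANDARD = {"US": "USA", "USA": "USA", "UNITED STATES": "USA"}


def to_mart_row(raw):
    """Apply business rules: standardize the country code, default missing revenue to 0."""
    return {
        "customer_hk": raw["customer_hk"],
        "country": COUNTRY_STANDARD.get((raw["country"] or "").strip().upper(), "UNKNOWN"),
        "revenue": float(raw["revenue"]) if raw["revenue"] is not None else 0.0,
    }


if __name__ == "__main__":
    for row in RAW_SAT_ROWS:
        print(to_mart_row(row))
```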

Now that I have addressed the business issues, let’s address the technical issues with Data Vault 2.0 in mind….

Technical Issues:

  • Old Issue: Modelers struggle to grasp the reasons behind “not enforcing relationships” on the data model level.
  • Answer: No longer true.  This is now viewed as the only method for achieving little to no re-engineering of the data model; leveraging a many-to-many relationship is the only way to load history and current data, and to future-proof the warehouse model against business-level change impacts.
  • Old Issue: Data Vault model introduces many many joins
  • Answer: True, but this has been resolved.  MPP also introduces many joins.  We built a set of point-in-time and bridge table structures that overcome this issue, along with handling data co-location, partitioning, snapshots, equi-joins, and high performance for virtualization (an illustrative PIT sketch follows this list).  The join issue is no longer a problem.  Platforms have gotten bigger and stronger as well.  Anyone struggling with joins either doesn’t understand PIT and bridge tables properly, or hasn’t had the right Data Vault 2.0 training.
  • Old Issue: Data Vault model is based on MPP computing, not SMP computing, and is not necessarily a clustered architecture.
  • Answer: The Data Vault model IS NOT MPP dependent!  Just because it is based on MPP mathematics does not mean it doesn’t run exceedingly well on SMP computing; in fact, it runs exceptionally well on SMP machines.  It does NOT require MPP in order to be performant or successful.  All this means is that it is enabled to run on MPP IF the platform is available.  We have plenty of Data Vault solutions built on SMP databases around the world.  Again, the answer for query performance lies in point-in-time and bridge table usage for getting data out, managing table and row elimination, star-join optimization, buffering output, and sub-second query response times…
  • Old Issue: Data Vault contains all deltas, only houses deletes and updates as status flags on the data itself.
  • Answer: This is true; ANY good data warehouse or analytical solution should store deltas only.  This is no different from a type 2 dimension, which also delivers deltas.  In fact, solutions that are not delta driven often lead to double counting and double loading (especially on failures and restarts of loading jobs).  This actually is a function of ANY properly built EDW (Data Vault, 3rd normal form, OR star schema).
  • Old Issue: Data must be made into information BEFORE delivering to the business.
  • Answer: Half true; again, this is important for any properly built EDW and is NOT specific to Data Vault.  Even if you have “just a data dump / data junkyard”, it’s just data until it is turned into information and made usable by the business.  That said: some data has value in its raw format, and can be delivered directly to the business (again through virtual dimensions and virtual facts) queried against point-in-time and bridge tables.
  • Old Issue: Modelers must accept that there is no “snowflaking” in the Data Vault.
  • Answer: True.  Snowflaking hasn’t been recommended in dimensional models for years either.  The risks are too great, and it causes nested sub-queries which do not perform well under heavy volumes (billions of records in a single table).  Snowflaking is poor, lazy design, whether it’s in Data Vault, Dimensional, or 3rd normal form modeling.  It also breaks the agility of the design, causing problems with the “changes” that happen naturally to hierarchies over time and resulting in full re-engineering of old-style dimensions.
  • Old Issue: Stand-alone tables for calendar, geography, and sometimes codes and descriptions are acceptable.
  • Answer: True, although the proper term is Reference Data.  We teach this in Data Vault 2.0 Certification.  This is no different from what we do in dimensional modeling with reference data, and no different from 3rd normal form.  Again, this is the responsibility of a properly built EDW.
  • Old Issue: 60% to 80% of source data typically is not tracked by change, forcing a re-load and delta comparison on the way into the DV.
  • Answer: Not necessarily true.  Today’s EDWs are usually driven by audit trails and change data capture mechanisms; whether Dimensional or Data Vault, delta tracking is required (see the note on the previous issue).  These days, most customers with Data Vaults are loading transactions off message queues in real time.  This way, ONLY the changes / deltas are actually sent to the target for ingestion.  Again, this is not specific to Data Vault, but rather applies to all properly built enterprise data warehouses (an illustrative hash-diff sketch follows this list).
  • Old Issue: Tracking queries becomes paramount to charging different user groups for “data utilization rates” and funding new projects.
  • Answer: Well, it is not necessary for success.  90% of the EDWs today still don’t track queries or utilization, and only a very few RDBMS technologies actually do this for the environment.  This literally has zero impact on the success of a Data Vault, a Dimensional Model, or a 3rd normal form data warehouse.  That said: IF tracking is turned on, the results and resulting analytics can be fed to a deep learning / machine learning algorithm to make dynamic changes to a Data Vault model (this is something the other modeling paradigms will not support).  Yes, dynamic Data Vault models can be created, maintained, and generated with neural nets, machine learning, and metadata algorithms.  I’ve done it with government agencies as far back as 2003.
  • Old Issue: Businesses must define the metadata on a column based level in order to make sense of the Data Vault storage paradigm.
  • Answer: Not true.  The only business definition required in the Data Vault is understanding the business keys.  That said: if a dimensional model is built properly, it too requires the thought process around business key identification!  Otherwise, how is a type 2 dimension truly defined?  Through disparate and unconnected metadata.  Without that identification, the dimensional model will fall down the same as a Data Vault model would.  In fact, if you read Kimball’s Data Warehouse Lifecycle Toolkit carefully, you’ll see that the definition of a dimension calls for business key identification!  After all, the Data Vault model is built from a hybridized set of best practices across dimensional modeling and normalized-form data modeling.
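
To make the PIT point from the join discussion above concrete, here is a hedged Python sketch of how point-in-time rows might be assembled: for a given snapshot date, record the newest load date of each satellite per hub key, so downstream views can use simple equi-joins.  Table shapes and names are invented for illustration.

```python
# Illustrative sketch of building point-in-time (PIT) rows: for each hub key
# and snapshot date, capture the newest satellite load date at or before the
# snapshot.  Downstream views then equi-join on (hash key, load date).

from datetime import date


def build_pit(hub_keys, sat_rows, snapshot):
    """sat_rows: {satellite_name: [(hash_key, load_date), ...]}"""
    pit = []
    for hk in hub_keys:
        row = {"hub_customer_hk": hk, "snapshot_date": snapshot}
        for sat_name, rows in sat_rows.items():
            eligible = [ld for (k, ld) in rows if k == hk and ld <= snapshot]
            # A real implementation would point at a "ghost record" instead of None.
            row[f"{sat_name}_load_date"] = max(eligible) if eligible else None
        pit.append(row)
    return pit


if __name__ == "__main__":
    sats = {
        "sat_customer_details": [("a1", date(2017, 1, 5)), ("a1", date(2017, 3, 1))],
        "sat_customer_address": [("a1", date(2017, 2, 10))],
    }
    for row in build_pit(["a1"], sats, snapshot=date(2017, 3, 31)):
        print(row)
```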
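
And on the delta-tracking point above, here is a minimal sketch of hash-diff style change detection: a row is loaded as a satellite delta only when the hash of its descriptive attributes differs from the last one recorded for that key.  The column list, key names, and hashing choice are assumptions for the example.

```python
# Minimal sketch of hash-diff delta detection for a satellite load: a row is a
# delta only if the hash of its descriptive attributes differs from the last
# hash stored for its key.  Column list and hashing choice are illustrative.

import hashlib


def hashdiff(row, columns):
    payload = "||".join(str(row.get(c, "")).strip().upper() for c in columns)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def detect_deltas(incoming, latest_hashdiffs, columns):
    """Yield only the rows that actually changed since the last recorded hash."""
    for row in incoming:
        hd = hashdiff(row, columns)
        if latest_hashdiffs.get(row["customer_hk"]) != hd:
            latest_hashdiffs[row["customer_hk"]] = hd
            yield {**row, "hash_diff": hd}


if __name__ == "__main__":
    current = {}  # hash key -> last known hash diff
    batch = [
        {"customer_hk": "a1", "name": "Acme", "city": "Denver"},
        {"customer_hk": "a1", "name": "Acme", "city": "Denver"},   # unchanged: skipped
        {"customer_hk": "a1", "name": "Acme", "city": "Boulder"},  # changed: loaded
    ]
    deltas = list(detect_deltas(batch, current, ["name", "city"]))
    print(len(deltas), "delta rows")  # 2
```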

In Conclusion

I hope you enjoyed this journey down memory lane.  It’s fun to revisit and correct old notions, and to remind ourselves of the reality of true enterprise data warehousing. The Data Vault is, was, and will continue to be a serious driving force in the success rates of true enterprise data warehouses going forward.

Data Vault 2.0 brings with it methodology, architecture, modeling, and implementation: best practices, standards, automation, and more.  The ability to encompass and leverage Disciplined Agile Delivery, SEI/CMMI, Six Sigma, Lean initiatives, cycle time reduction, and proper build practices leads us to one-day sprint cycles.

By the way, if you can’t get something delivered (in development only) through sourcing, staging, the Data Vault, and all the way to a data mart in a single day, then you are not exercising agility properly… and most likely need to brush up on your Data Vault 2.0 skill set.  These are all things that Data Vault 1.0 (CDVDM) does not and will not teach; it cannot teach these concepts because it does not include them.  Data Vault 1.0 is relegated specifically to Data Vault modeling (less than 15% of the overall value when separated from the whole DV2 System of Business Intelligence).

However, when the model is utilized in conjunction with the other components, true enterprise value can be achieved in one-day sprint cycles.  That discussion will be saved for another day.

As always, comments and questions are welcome; please post them below.

(C) 2017, Dan Linstedt, ALL rights reserved


