Data Vault has Many Joins: Part 1

Here is a copy of a posting on the Data Vault Forums.  In this blog post, I will get into the join specifics, the mathematics behind the joins, what the joins do, how to optimize them, and so on.
____________________________________________

The next technical issue on the docket is:
The Data Vault model introduces many, many joins

This is true, very true – the DV model remains extremely flexible but produces forced joins across the model in order to get the information out.

BUSINESS REASON?
Because the join represents a relationship between two business processes, or a transaction with specific elements existing as attributes. When the business changes, the model needs to change without disturbing the history; the model needs to represent the business as it stood in the past, before the change. The business changes frequently, and the model should not impede the business from making those changes.

Why? Because all relationships are driven through Link tables (many-to-many tables). This is a part of the architecture that cannot and should not collapse into one-to-many or many-to-one – if the relationships collapse, we lose traceability, auditability, and flexibility moving forward.
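
To make the join structure concrete, here is a minimal sketch of the pattern in generic SQL (the table and column names are hypothetical, not taken from any particular model): one Hub per business key, a Link (many-to-many) per relationship, and a Satellite for the descriptive history. Even a simple "customers and their orders" question already touches four tables.

    -- Hubs: one row per unique business key
    CREATE TABLE hub_customer (
      customer_sqn   INTEGER     NOT NULL PRIMARY KEY,  -- surrogate sequence
      customer_bk    VARCHAR(30) NOT NULL,              -- business key
      load_dts       TIMESTAMP   NOT NULL,
      record_source  VARCHAR(50) NOT NULL
    );

    CREATE TABLE hub_order (
      order_sqn      INTEGER     NOT NULL PRIMARY KEY,
      order_bk       VARCHAR(30) NOT NULL,
      load_dts       TIMESTAMP   NOT NULL,
      record_source  VARCHAR(50) NOT NULL
    );

    -- Link: the relationship stays many-to-many, so it survives business change
    CREATE TABLE lnk_customer_order (
      customer_order_sqn INTEGER     NOT NULL PRIMARY KEY,
      customer_sqn       INTEGER     NOT NULL REFERENCES hub_customer (customer_sqn),
      order_sqn          INTEGER     NOT NULL REFERENCES hub_order (order_sqn),
      load_dts           TIMESTAMP   NOT NULL,
      record_source      VARCHAR(50) NOT NULL
    );

    -- Satellite: descriptive attributes, historized by load date
    CREATE TABLE sat_order_detail (
      order_sqn      INTEGER     NOT NULL REFERENCES hub_order (order_sqn),
      load_dts       TIMESTAMP   NOT NULL,
      order_total    DECIMAL(12,2),
      currency_code  VARCHAR(3),
      record_source  VARCHAR(50) NOT NULL,
      PRIMARY KEY (order_sqn, load_dts)
    );

    -- "customers and their orders" is already a four-way join
    SELECT hc.customer_bk, ho.order_bk, so.order_total, so.currency_code
    FROM   hub_customer hc
    JOIN   lnk_customer_order l ON l.customer_sqn = hc.customer_sqn
    JOIN   hub_order ho         ON ho.order_sqn   = l.order_sqn
    JOIN   sat_order_detail so  ON so.order_sqn   = ho.order_sqn;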

The joins exist; that’s a fact of life. What can we do to get the queries to go faster? I’m covering this right now in my Data Vault book, but here are some brief thoughts:

1. Install an MPP shared-nothing RDBMS engine that is highly scalable and extremely fast; the DV has a propensity for working extremely well in an MPP environment at volume.
2. Build Data Marts to get the data out: physically instantiate tables, and run processes to load those tables as you go.
3. Load the data marts at the same time you load the Data Vault, then run a background “check” process to make sure they synch up (BEWARE, THIS METHOD HAS MANY FLAWS, and can cost a lot in terms of architecture and horsepower utilization).
4. SPLIT the Data Vault into a copy of history, versus current – I don’t advocate this approach, but if it must be done, so be it. This approach REQUIRES separate data release areas like data marts and flat-wide tables for users that contain a full view of the data.
5. Use BRIDGE and PIT tables to assist with the queries (the most common solution – see the PIT sketch just after this list).
6. FOLD relationships where possible – experiment with higher-level grain, relationship FOLDING, and SYSTEM-generated record sources.
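
On item 5: the post doesn’t show what a PIT structure looks like, so here is one possible sketch (hypothetical names, building on the hub/satellite example earlier, and assuming a second satellite, sat_order_status, exists). The Point-in-Time table records, per hub key and as-of date, the load date of the satellite row that was current at that time, so the query becomes a set of simple equi-joins instead of "latest row" subqueries.

    -- Point-in-Time table: one row per hub key per snapshot date
    CREATE TABLE pit_order (
      order_sqn        INTEGER   NOT NULL,
      snapshot_dts     TIMESTAMP NOT NULL,  -- the as-of date
      sat_detail_ldts  TIMESTAMP,           -- load_dts of the current sat_order_detail row
      sat_status_ldts  TIMESTAMP,           -- load_dts of the current sat_order_status row
      PRIMARY KEY (order_sqn, snapshot_dts)
    );

    -- "orders as of 2010-03-31" using equi-joins only
    SELECT ho.order_bk, d.order_total, s.order_status
    FROM   hub_order ho
    JOIN   pit_order p           ON p.order_sqn    = ho.order_sqn
                                AND p.snapshot_dts = TIMESTAMP '2010-03-31 00:00:00'
    LEFT JOIN sat_order_detail d ON d.order_sqn    = p.order_sqn
                                AND d.load_dts     = p.sat_detail_ldts
    LEFT JOIN sat_order_status s ON s.order_sqn    = p.order_sqn
                                AND s.load_dts     = p.sat_status_ldts;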

We will go into query tuning techniques in the book (which will be done soon), and that should help you with these notions.

_____________________________________

So what exactly does it mean to the business?
Why does it matter?  The Data Vault should not be accessible by business users (except those who are data mining), and those who are data mining have been taught how to run queries against the Data Vault.  All other access against the Data Vault is scheduled, process-based access; and if it’s process-based access, it can be TUNED!  It is not an ad-hoc query environment unless you have a really powerful MPP engine (like Teradata, Netezza, Vertica, or IBM DB2 MPP).  In MPP environments, parallelism is KING: having the right hardware eliminates any worries about joins across the tables.  Most MPP platforms will “flatten” queries before executing them, then run many bits and pieces in parallel until results are achieved (most optimizers try to place the joins where the cost is lowest).

If the access is NOT ad-hoc, then tuning the queries and the access paths is easily “set and forgotten” until the model changes.  Also, you can add indexes, horizontal and physical partitioning, Index Organized Tables, In-Memory Tables, Hash Join Indexes, Point-in-Time Tables, and Bridge Tables to speed queries along.  All of this is entirely possible without destroying the flexibility of the Data Vault Model.
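
None of those techniques are spelled out in this post, but as a single illustration (Oracle-style syntax, repeating the hypothetical satellite from the earlier sketch, now range-partitioned by load date): when the access is scheduled and known, you can partition for the date ranges the processes actually read and add an index that covers exactly the columns they select.

    -- horizontal (range) partitioning by load date, Oracle-style syntax
    CREATE TABLE sat_order_detail (
      order_sqn      INTEGER      NOT NULL,
      load_dts       TIMESTAMP    NOT NULL,
      order_total    NUMBER(12,2),
      currency_code  VARCHAR2(3),
      record_source  VARCHAR2(50) NOT NULL,
      PRIMARY KEY (order_sqn, load_dts)
    )
    PARTITION BY RANGE (load_dts) (
      PARTITION p_2009 VALUES LESS THAN (TIMESTAMP '2010-01-01 00:00:00'),
      PARTITION p_2010 VALUES LESS THAN (TIMESTAMP '2011-01-01 00:00:00'),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );

    -- a covering index for one known, scheduled query path
    CREATE INDEX ix_sat_order_detail_cov
      ON sat_order_detail (order_sqn, load_dts, order_total);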

Now, back to the business part of this:
Yes, the Data Vault has many joins – you can’t get away from this and still have a really flexible model.  From a business perspective, what this allows is fast IT reaction time when the business changes its requirements, adds a new system to the EDW, or decides it wants to represent relationships in a new way.

From a technical perspective:
The joins in the Data Vault are a moot point.  Why?  BECAUSE OF THE WAY MPP WORKS!  If you aren’t familiar with MPP (massively parallel processing), or how to set up your RDBMS engine for it, then I might suggest that you find out how to implement it.  Sorry for sounding brash, but in this day and age, this skill is CRITICAL for any DBA on the job.  Here is a real-life case:  in Teradata, we had a Data Vault and performed a 15-way join across the model with 5 terabytes of information in test – returning 1.2M records in under 4 seconds.  Teradata didn’t even break a sweat.  There are ways to make IBM DB2 MPP perform as well – but it relies on the right architecture and the right operating system.  IBM DB2 MPP performs REALLY well on 64-bit Linux (after it’s been tuned properly, and when there is enough memory on the boxes to handle the caching/synchronization).  I’ve recently heard of a Data Vault on Netezza doing really well (as in not needing physical Data Marts/physical Star Schemas to get the data out).  In this case, they used VIEWS to represent the dimensions and facts, and were able to pull data back very quickly (I am trying to get the performance numbers).
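
The Netezza views mentioned above aren’t shown in the post; a rough sketch of that pattern (hypothetical names, reusing the earlier example tables) looks like this.  The MPP engine flattens the joins behind the views at query time, so the “dimensions” and “facts” never have to be physically instantiated.

    -- a "dimension" view: hub plus its most recent satellite row
    CREATE VIEW dim_order AS
    SELECT ho.order_sqn   AS order_key,
           ho.order_bk    AS order_number,
           d.order_total,
           d.currency_code
    FROM   hub_order ho
    JOIN   sat_order_detail d
      ON   d.order_sqn = ho.order_sqn
     AND   d.load_dts  = (SELECT MAX(d2.load_dts)
                          FROM   sat_order_detail d2
                          WHERE  d2.order_sqn = ho.order_sqn);

    -- a "fact" view driven by the link table
    CREATE VIEW fact_customer_order AS
    SELECT l.customer_sqn AS customer_key,
           l.order_sqn    AS order_key,
           l.load_dts     AS relationship_load_dts
    FROM   lnk_customer_order l;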

On Vertica, the case is different – Vertica and ParAccel operate in a different manner.  We have no test cases yet (that I’ve heard of) to provide information about the Data Vault on these platforms, but I’m sure it would do just fine, even at scale.

Now, why shouldn’t the number of joins matter?
Well, take a look at the definition of MPP:

(Massively Parallel Processing or Massively Parallel Processor) A multiprocessing architecture that uses up to thousands of processors. Some might contend that a computer system with 64 or more CPUs is a massively parallel processor. However, the number of CPUs is not as much the issue as the architecture. MPP systems use a different programming paradigm than the more common symmetric multiprocessing (SMP) systems used as servers.

In an MPP system, each CPU contains its own memory and copy of the operating system and application. Each subsystem communicates with the others via a high-speed interconnect. In order to use MPP effectively, an information processing problem must be breakable into pieces that can all be solved simultaneously. In scientific environments, certain simulations and mathematical problems can be split apart and each part processed at the same time. In the business world, a parallel data query (PDQ) divides a large database into pieces. For example, 26 CPUs could be used to perform a sequential search, each one searching one letter of the alphabet.

To take advantage of more CPUs in an MPP system means that the specific problem has to be broken down further into more parallel groups. However, adding CPUs in an SMP system increases performance in a more general manner. Applications that support parallel operations (multithreading) immediately take advantage of SMP, but performance gains are available to all applications, simply because there are more processors. For example, four CPUs can be running four different applications. See SMP.
http://www.pcmag.com/encyclopedia_term/0,2542,t=MPP&i=47310,00.asp

As indicated, MPP divides tasks into parallel groups.  Yes, it requires more and faster compute power – but in the end, dividing the work and spreading it across that computing power is what it is all about (for speed and performance).  Just look at the rise of cloud computing, which is essentially MPP achieved by making shared-nothing resources available on demand.

The Data Vault breaks the data up into “vertical partitions” so that it can be accessed in high-speed parallel processes.  If you are not familiar with horizontal and vertical partitions, please take a look here…  http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.8455&rep=rep1&type=pdf

or here: http://download.oracle.com/docs/cd/A87861_01/NT817EE/server.817/a76959/dt_conc_.htm

There are many architectural discussions from different database vendors that provide information on these concepts.  The point is: vertical partitioning is not commonly “done” automatically by relational/traditional database vendors.  It IS done by appliance vendors (DatAllegro (now Microsoft), ParAccel, Vertica), i.e., column-based databases.

If you have an architecture that vertically partitions columns at the right spots, you can have indexes that provide maximum coverage; more importantly, you can provide the engine with low-cost parallel query options that it didn’t have before.  Also, by squeezing the unique values into shorter rows (Hubs & Links are shorter rows), you get maximum packing of data into disk blocks, making the reads much more efficient – and giving the traditional B+ tree indexes binary search ability.  When the data is replicated, your engine will go after the Satellites – this is where the largest difference happens.  Satellites not referenced in the query can be dropped from the join without cost, and Satellites that are used are already split apart by type of data OR rate of change.  This makes the patterns more repeatable, which in turn (when you turn on compression) makes compression ratios multiply (much, much higher).  It also increases the performance of the query dramatically.
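
As an illustration of that split (hypothetical tables again): attributes that rarely change and attributes that change constantly go into separate Satellites, so a query that only needs the slow-changing data never reads the fast-changing rows, and each table compresses against its own repeating pattern.

    -- slowly changing attributes: few rows per customer
    CREATE TABLE sat_customer_profile (
      customer_sqn   INTEGER   NOT NULL,
      load_dts       TIMESTAMP NOT NULL,
      customer_name  VARCHAR(100),
      birth_date     DATE,
      PRIMARY KEY (customer_sqn, load_dts)
    );

    -- fast-changing attributes: many rows per customer
    CREATE TABLE sat_customer_contact (
      customer_sqn   INTEGER   NOT NULL,
      load_dts       TIMESTAMP NOT NULL,
      email          VARCHAR(100),
      phone          VARCHAR(30),
      PRIMARY KEY (customer_sqn, load_dts)
    );

    -- a name-only report never touches sat_customer_contact at all
    SELECT hc.customer_bk, p.customer_name
    FROM   hub_customer hc
    JOIN   sat_customer_profile p ON p.customer_sqn = hc.customer_sqn;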

Never mind the fact that you have more tables; again, if you have parallel query available, turned on, and tuned properly, the database shouldn’t even break a sweat until it hits 100 terabytes or so.  Even then, add enough compute power to handle the parallelism and you’re right back to 4-second response times – no matter HOW BIG the data set gets.

You see, the Data Vault modeling techniques are based on the mathematics of MPP architecture and parallel processing.  Basically, the enabler for all of this is vertical partitioning in the right places (business keys for Hubs, associations for Links, and descriptive data for Satellites).  No other modeling technique (save one: Anchor Modeling) brings you this type of parallelism and query ability.  The old saying is: if you have a large problem to solve, divide it and conquer it.  Why shouldn’t we apply this logic to our ever-growing data sets?

If you want to understand the mathematics behind the Data Vault, study MPP, study vertical and horizontal partitioning, and study parallel processing versus symmetric multiprocessing.  These concepts are built into the architecture of the Data Vault Model, and when the rules of the Data Vault are followed, and the hardware and RDBMS are tuned properly, success is sure to follow.

Now, let me say this: WHEN would a 3NF model out-perform a Data Vault?
Well, in situations where there is no “repetitive data” within the tables, where the indexing coverage is good, and where 99% of the data set is “current” or operational in nature.

WHEN would a star-schema out-perform a Data Vault?
In situations where the RDBMS engine is not tuned properly for MPP, where clustered hardware or NUMA architectures are used, where the RDBMS does not or cannot execute parallel queries, and sometimes when the RDBMS offers “star-join” core engine optimizations…. But as your data grows, it WILL outgrow these architectures, and it WILL be forced into the MPP arena.  Then the Data Vault will out-shine other data modeling techniques, because it is designed for MPP.

Does this mean I have to have a large data set to use the Data Vault?
NO, absolutely not.  There are a number of benefits to using Data Vault modeling in smaller situations as well – and yes, they require physically moving data from the Data Vault to Star Schemas to get the query performance needed in a business analysis situation.
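
That physical movement is essentially an insert-select from the Data Vault structures into the mart.  A rough sketch, reusing the hypothetical tables above and assuming a dim_customer table already exists in the star schema:

    -- refresh a simple type-1 dimension in the mart from the Data Vault
    INSERT INTO dim_customer (customer_key, customer_number, customer_name)
    SELECT hc.customer_sqn,
           hc.customer_bk,
           p.customer_name
    FROM   hub_customer hc
    JOIN   sat_customer_profile p
      ON   p.customer_sqn = hc.customer_sqn
     AND   p.load_dts = (SELECT MAX(p2.load_dts)
                         FROM   sat_customer_profile p2
                         WHERE  p2.customer_sqn = hc.customer_sqn);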

Anyhow, I’m getting beyond this post into another blog post… watch this space, and as always – if you have questions PLEASE ASK!


