Data Vault Model & MPP Architecture

MPP – Massively Parallel Processing systems: a beautiful sight to behold.  There used to be a contingent (maybe there still is) that believed scaling SMP would outperform scaling MPP (scaling up versus scaling out).  The HP SuperDome is one heck of a server, and a prime example of SMP (symmetric multi-processing).  The HP SuperDome I knew about had the capacity for 64 CPUs on one motherboard, with availability for 256 GB of RAM.  Although the specs have changed (increased), I still don’t see how it can out-scale the MPP systems.

Now, if you take two, three, or four SuperDomes, scale each one up to maximum, then tether them together as a shared-nothing MPP architecture – THEN you have something akin to cloud resources.  Of course, you have to add the software to manage all this hardware as a cloud and to provide virtualized availability.  But the good news is: whatever you put on this configuration will be lightning fast (if it’s tuned properly).

Enough talk, what parts of MPP drive the Data Vault Model?

Ok already, this is what I promised.  MPP is basically a divide-and-conquer architecture.  It is usually a shared-nothing set of components with a “director” which doles out the work (splits it up, sends it to each machine, then collects the results).  MPP offers the ability to truly run all processes in parallel across all machines.  Homogeneous MPP machines provide the exact same configuration/hardware and software to the director.  Heterogeneous MPP includes all kinds of different machines (potentially even in different geographical locations).  For instance, a homogeneous MPP architecture would house four or five of the same HP SuperDomes that I started the discussion with.  A heterogeneous MPP system (or a cloud system) may include some Windows servers, Linux servers, UNIX servers, and maybe even a mainframe or two.

MPP’s divide and conquer is really a strategy for splitting up WORK to be done.  Rather than doing the work in serial, the work gets done in PARALLEL (all the time).  The work could be a database query, database sorting, loading the database, partitioning the data, re-organizing the disk, defragmenting the disk, and so on.  Usually parallel processing is just that: what was a “single process request” becomes a task split across the resources.  If the “answer” requested by the user is a single answer, the director waits until all parts finish their tasks and provide a result.  At that point the results are collated back into a single stream and passed back to the requestor (or user).
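
To make the divide and conquer concrete, here’s a rough sketch of how a director might decompose a simple aggregate query.  All table names here are made up for illustration – real engines do this internally in their query plans, not in visible SQL:

    -- The query the user submits to the director:
    SELECT region, COUNT(*) AS order_count
    FROM orders
    GROUP BY region;

    -- Step 1: the director ships the query to every node; each node
    -- runs it against only its own local slice of the "orders" data:
    SELECT region, COUNT(*) AS partial_count
    FROM orders_local_slice
    GROUP BY region;

    -- Step 2: the director collates the partial results from all the
    -- nodes back into the single answer the user asked for:
    SELECT region, SUM(partial_count) AS order_count
    FROM all_collected_partials
    GROUP BY region;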

So how does this affect Data Modeling, especially the Data Vault Model? 

The Data Vault Model uses Hubs, Links, and Satellites to VERTICALLY PARTITION the data (divide the data set by specific column organization).  In other words, Hubs, being lists of pure/unique business keys, have the following attributes:

  • Shortest rows in the Data Vault, allowing for the most rows per block, and therefore the fastest possible disk-scan access
  • Potentially the most unique rows in the database, making compression a moot point – BUT enabling horizontal partitioning by time or by range, dividing the data set up further so that a query request can be split into parallel accesses that hit only the parts of the disk where the data is needed
  • The best coverage of whatever indexing system the database provides – and the better the coverage of the index, the faster and more efficient the indexing system is

When related to MPP, Hubs allow you to SPLIT the data sets across physical hardware without too much effort.  In other words, all data belonging to “customers” can be assigned to a specific machine in the MPP stack, and all the customer data stays together – because the Hub and its Satellites go together onto that MPP node.
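
Here’s a minimal sketch of what that looks like in DDL.  The names are hypothetical, and I’m using Greenplum-style DISTRIBUTED BY syntax to show the hash assignment (Teradata would express the same idea with PRIMARY INDEX):

    -- Hypothetical Hub: short rows, one per unique business key.
    CREATE TABLE hub_customer (
        customer_sqn   BIGINT       NOT NULL,  -- surrogate sequence key
        customer_num   VARCHAR(30)  NOT NULL,  -- the business key
        load_dts       TIMESTAMP    NOT NULL,
        record_source  VARCHAR(20)  NOT NULL,
        PRIMARY KEY (customer_sqn)
    )
    DISTRIBUTED BY (customer_sqn);  -- hash-assigns each key to a node

    -- The Satellite hangs off the Hub and is distributed by the SAME
    -- key, so each customer's descriptive rows land on the same node
    -- as its Hub row – no network traffic to reassemble a customer.
    CREATE TABLE sat_customer_detail (
        customer_sqn   BIGINT       NOT NULL,
        load_dts       TIMESTAMP    NOT NULL,
        customer_name  VARCHAR(100),
        customer_addr  VARCHAR(200),
        record_source  VARCHAR(20)  NOT NULL,
        PRIMARY KEY (customer_sqn, load_dts)
    )
    DISTRIBUTED BY (customer_sqn);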

Links have a different functionality, and act a lot like something called a JOIN INDEX (provided by Teradata).  I believe IBM DB2 MPP EDW has similar functionality, but I can’t remember what it’s called.  In database-engine architecture circles it’s generally called a “collocated index.”  In other words, you might have two different Hubs with their Satellites: one Hub (Customers) + Satellites resides on System A1, and one Hub (Employees) + Satellites resides on System B2.  When you want to join Customers to Employees, neither one of the MPP nodes “knows” what data to retrieve without some sort of “hint” (that is, passing KEY ROWS across the MPP backbone to the A1 and B2 systems so that the join can correctly identify which rows to retrieve).
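
For reference, here is roughly what the Teradata feature looks like – a sketch from memory, so check the docs.  It assumes a hypothetical Link table, link_cust_emp, holding the Customer/Employee key pairs (sketched out a bit further down).  A single-table join index keeps a second copy of those pairs, hashed by a different column:

    -- Teradata-style single-table join index (sketch; names are
    -- hypothetical).  It materializes the Link's key pairs hashed by
    -- a DIFFERENT column, so the engine can pick whichever copy is
    -- already collocated with the side of the join it needs.
    CREATE JOIN INDEX ji_link_by_employee AS
    SELECT customer_sqn, employee_sqn
    FROM link_cust_emp
    PRIMARY INDEX (employee_sqn);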

Suppose now that Customers is 45 million records, and Employees is 10,000 records (quite small, but it will work for this example).  There is a Link structure between the two.  The physical placement of the Link table will make all the difference in the world for performance.  In this case (as with most cases in the MPP world), it is highly suggested that you place (collocate) the Link structure with the Employees table on the proper node – that is, with the smaller of the two tables.  This allows the join engine to first “match” all the customers that have employee records by using the small-to-large join table, i.e. the Link table on the B2 system.

After that, a SMALL NUMBER of matching key rows crosses over (is passed across the network) to the A1 system where the large table lives, and the join finishes executing against the Customers table.
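
Here’s the placement sketch, with the same hypothetical names and Greenplum-style syntax as before (assume a hub_employee shaped just like hub_customer): the Link is distributed by the employee key so it physically lives with the small Employees Hub on B2, and the join resolves small-to-large:

    -- The Link between Customers and Employees, distributed by the
    -- employee key so its rows collocate with the small Employees Hub.
    CREATE TABLE link_cust_emp (
        customer_sqn  BIGINT    NOT NULL,  -- points into the 45M-row Hub
        employee_sqn  BIGINT    NOT NULL,  -- points into the 10K-row Hub
        load_dts      TIMESTAMP NOT NULL,
        PRIMARY KEY (employee_sqn, customer_sqn)
    )
    DISTRIBUTED BY (employee_sqn);

    -- The join starts on the small side: the Link identifies the few
    -- matching customer keys locally on B2, and only those key rows
    -- cross the network to A1, where the big Customers table lives.
    SELECT c.customer_num, e.employee_num
    FROM hub_employee e
    JOIN link_cust_emp l ON l.employee_sqn = e.employee_sqn
    JOIN hub_customer c  ON c.customer_sqn = l.customer_sqn;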

WAIT A MINUTE, THIS ISN’T HOW THIS WORKS!

I know, I know!  What I just described is a pure MPP system with no further partitioning.  You see, most MPP systems do one better than pure vertical partitioning: they want to spread the work across the machines to leverage the computing resources.  (This is one reason it is best to have a homogeneous MPP system – the director can only retrieve results as fast as the slowest machine in the architecture.)  So, one better – what do they do next to achieve an even distribution of data across machines?  They HORIZONTALLY partition it, generally by proprietary hashing algorithms.  In other words, there are some Customer rows on System A1 and some Customer rows on System B2, and some Employee rows on System A1 and some Employee rows on System B2 – and they usually are not collocated unless the set of fields used to compute the hash key is the same in both tables.
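
To picture the hash assignment, here’s a toy illustration (the MOD here stands in for a vendor’s proprietary hashing algorithm, and hub_customer is the hypothetical Hub from earlier):

    -- Toy hash distribution over a 4-node stack: each row's node is a
    -- function of its own distribution key only.  Customers hashed on
    -- customer_sqn and Employees hashed on employee_sqn therefore have
    -- no reason to land on the same node.
    SELECT customer_sqn,
           MOD(customer_sqn, 4) AS assigned_node
    FROM hub_customer;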

Hashing Customers the same way as Employees would often optimize the one single query – Customers joined to Employees – but would slow down ALL the remaining queries against the database.  SO: it is not common, not recommended, and not even practical to hash all tables by the same sets of keys; the goal is the most even distribution across the most common sets of queries.  THIS is where the Join Index really helps.  In fact, multiple join indexes carrying hash-key pointers and indexed join-key values are really incredible technology.

Here is where it begins to go against conventional wisdom: it is recommended (because of the multiple levels of partitioning going on) that you actually HASH the join index according to the “larger” of the two tables, so that the large data set never leaves the MPP node it lives on unless it has a matching row on the other side.  (Please correct me if I’m wrong – I’ve been wrong before, and this wouldn’t be the first time nor the last for my mistakes…)
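
In DDL terms (same hypothetical tables as before; I’ve used an _alt name so it doesn’t collide with the earlier sketch), that inverted placement looks like this:

    -- Counter-intuitive placement: distribute the Link by the key of
    -- the LARGER table (Customers), so the 45-million-row side never
    -- leaves its node; only the small Employee key rows get re-shipped.
    CREATE TABLE link_cust_emp_alt (
        customer_sqn  BIGINT    NOT NULL,
        employee_sqn  BIGINT    NOT NULL,
        load_dts      TIMESTAMP NOT NULL,
        PRIMARY KEY (customer_sqn, employee_sqn)
    )
    DISTRIBUTED BY (customer_sqn);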

Again, what does this have to do with the Data Vault?

Well – quite literally nothing (joking), but wait!  There’s more!  Actually, it has everything to do with the Data Vault, because the Data Vault Model is based on this principle: it enables horizontal and vertical partitioning of the data to occur on systems that are NOT MPP based – allowing MPP simulation to happen at the lowest levels of database parallelism.  Did you miss that?  The mathematics of scale that back the MPP architecture, and the principles and applied math that back the performance of MPP scalability, are inherited by the Data Vault architecture even on non-MPP systems.  By using the Hubs to isolate and hash data locations, and by using the Links as a join index to hash and collocate the data sets large and small, you can cross into an MPP design while using an SMP system.

In other words, you can use MySQL, SQL Server, or other database engines to make this completely possible, fast, efficient, and scalable.  I’ll cover more on this in another post.
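
As a small taste of that future post, here’s a minimal MySQL sketch (hypothetical names again).  Hash partitioning simulates the “nodes” as partitions on a single SMP box, so a lookup on the key gets pruned to one slice of the data:

    -- MySQL: a Hub hash-partitioned on one SMP machine.  Each partition
    -- plays the role of an MPP node's data slice; a lookup by key is
    -- pruned to a single partition instead of scanning the whole table.
    CREATE TABLE hub_customer (
        customer_sqn   BIGINT      NOT NULL,
        customer_num   VARCHAR(30) NOT NULL,
        load_dts       DATETIME    NOT NULL,
        record_source  VARCHAR(20) NOT NULL,
        PRIMARY KEY (customer_sqn)
    )
    PARTITION BY HASH (customer_sqn)
    PARTITIONS 4;  -- "4 nodes" simulated as 4 partitions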

What about the Satellites?  Where do they fit in?

Anyone who’s studied indexing and query performance knows that non-sparse indexes (maximum coverage of the data) are what you want.  Repetitive data sets cause all kinds of performance problems for MOST database engines.  It’s the repeating data, along with the wide row sets, that causes I/Os to rise, table scans to rise, and performance to go down the tubes.  This is one of the reasons why denormalization of tables (reducing the number of joins down to single table scans) has an UPPER LIMIT.  In other words, once you denormalize too far (give your dimension too many columns), performance begins to take a nose-dive.

So again, the Data Vault, how does this apply and why MPP?

Well, let me put it this way: the first thing you should be doing in any good data warehouse (Data Vault or NOT) is turning on column, page, or row compression – especially for Satellites in the Data Vault.  This greatly reduces repeated values and increases index coverage dramatically, which improves query performance over ever-growing data sets.  The catch?  It slows down load processing times – nothing we can do about that today; that has to be solved by the hardware manufacturers using RAM caches and smart algorithms.  Anyhow, the second thing you can do with a Satellite brings us back to partitioning principles – and thus to MPP principle #1: divide and conquer.  Basically, SPLIT the Satellites by either rate of change (of correlated data sets) or by type/classification of data (one class might be by field data type, another by “function” or business definition), and so on.
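
Before we get to the trade-offs of splitting, here’s how small the first step (compression) can be – a SQL Server example, using the hypothetical Satellite name from earlier (other engines have their own equivalents):

    -- SQL Server: rebuild a Satellite with page compression, which
    -- de-duplicates the repeated values within each page.
    ALTER TABLE sat_customer_detail
    REBUILD WITH (DATA_COMPRESSION = PAGE);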

By partitioning or splitting the Satellites you do introduce more joins, but at the same time you also increase the potential that parallel queries can be constructed to do the work.  Then it’s a matter of tuning the hardware, the I/O channels, and the data placement underneath.
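
Here’s a sketch of the rate-of-change split, with hypothetical columns: the fast movers go in one narrow Satellite, the slow movers in another, so a query that only needs names never has to scan the fast-changing history.

    -- Fast-changing attributes: many rows per key, kept narrow.
    CREATE TABLE sat_customer_fast (
        customer_sqn  BIGINT    NOT NULL,
        load_dts      TIMESTAMP NOT NULL,
        order_status  VARCHAR(20),
        last_contact  TIMESTAMP,
        PRIMARY KEY (customer_sqn, load_dts)
    );

    -- Slow-changing attributes: few rows per key, compresses well.
    CREATE TABLE sat_customer_slow (
        customer_sqn  BIGINT    NOT NULL,
        load_dts      TIMESTAMP NOT NULL,
        customer_name VARCHAR(100),
        birth_date    DATE,
        PRIMARY KEY (customer_sqn, load_dts)
    );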

What do you think?  Does this help you?  I’ve spent 10 years in research and design of the Data Vault to address scalability and performance… please let me know if I’ve forgotten something.

Oh yeah, shameless plug: I do offer MPP systems design and architectural consulting – along with optimization of your current environment (even if it isn’t MPP).

Thanks,
Dan Linstedt
DanL@DanLinstedt.com
