ETL and the Data Vault

A sneak peek at implementation components (something I’m starting a 3rd book on – yes, I know, I have to get the 2nd book about the modeling published – it’s coming, I promise!).   Nearly every ETL or ELT tool works well with the Data Vault.  Why is this?  It is because ETL/ELT tools run well on repeatable patterns.  The Data Vault model is based on patterns of design (Hub, Link, Satellite) along with very specific rules for loading.  These rules + patterns make it very easy to construct ETL/ELT load routines.

Idea #1: Patterns make for good optimization

Patterns keep the variance in load design to a minimum for each table type.  For instance, there is really only 1 way you should load a Hub – there may be 3 possible mechanisms, but only 1 most efficient way – and which one that is may vary by tool, especially if you have push-down (or ELT – in-database SQL).  In light of this pattern, it is easy to spot flaws in the architecture early on, correct them, and offer solid solutions for high performance through parallelism.  The same goes for Satellite and Link loads.
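To make the Hub pattern concrete, here is a minimal sketch of a set-based Hub load.  The table and column names (stg_customer, hub_customer, customer_bk) are hypothetical, and sqlite3 simply stands in for whatever database your tool targets – the point is that one shape of statement covers every Hub.

```python
# Minimal sketch of the Hub load pattern (hypothetical names throughout).
# One set-based statement: insert only the business keys that are not
# already present in the Hub.
import sqlite3
from datetime import datetime, timezone

HUB_LOAD_SQL = """
INSERT INTO hub_customer (customer_bk, load_dts, record_source)
SELECT DISTINCT s.customer_bk, :load_dts, :record_source
FROM stg_customer s
WHERE NOT EXISTS (
    SELECT 1 FROM hub_customer h WHERE h.customer_bk = s.customer_bk
);
"""

def load_hub(conn: sqlite3.Connection, record_source: str) -> None:
    # The same function shape works for every Hub in the model.
    conn.execute(HUB_LOAD_SQL, {
        "load_dts": datetime.now(timezone.utc).isoformat(),
        "record_source": record_source,
    })
    conn.commit()
```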

Idea #2: Patterns make for good automation

Patterns provide a great foundation for automating the workflow, particularly when each “server” running the ETL/ELT engine has different configuration specifications.  The workflow must remain flexible while the loading routines remain strong and balanced.  Because of Idea #1 and the base rules and patterns of the specific structures, it is _very_ easy to automate the loading processes into the staging area and into the Data Vault.  Automation is simple, and yet dynamic.  Increase the server’s hardware, and voila – you can easily change the automated loading behavior without touching the underlying code.
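As a rough illustration of that idea, here is a small sketch of a config-driven workflow.  The config file layout (max_parallel_loads) and the job list are assumptions on my part; the point is that the parallelism lives in per-server configuration, not in the load code.

```python
# Minimal sketch of a config-driven loading workflow (all names hypothetical).
# The degree of parallelism comes from a per-server config file, so a hardware
# upgrade means editing the config -- the load routines themselves never change.
import json
from concurrent.futures import ThreadPoolExecutor

def run_workflow(config_path: str, load_jobs: list) -> None:
    # load_jobs: zero-argument callables, e.g. functools.partial(load_hub, conn, "CRM")
    with open(config_path) as f:
        config = json.load(f)                      # e.g. {"max_parallel_loads": 8}
    workers = config.get("max_parallel_loads", 4)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(job) for job in load_jobs]
        for future in futures:
            future.result()                        # surface any load failures
```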

Idea #3: Patterns drive repeatability and good documentation

With the patterns established, there are only so many ways you can/should load a Hub, load a Link, and load a Satellite.  Because these are design patterns, building the loading cycles becomes repetitive and mundane.  At the same time, the load processes should follow a SINGLE document: every Hub load follows one “process diagram” and is documented one time, every Link load follows a single process diagram and is documented one time, and nearly every Satellite load follows a single process diagram and is documented one time.  This lets you focus your effort on tweaking and customizing the remaining 20% of the loading processes (it should be closer to 10%).  It means that 80% or 90% of your loading processes are fully documented, easily understood, and easily optimized because they share a repeatable design pattern.  It also means ramp-up time for a new resource is VERY, VERY short.  They say people learn by example; well, with 100 tables or 1,000 tables in the Data Vault, you have only 3 loading patterns to teach, and hundreds of examples for them to play with.
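For example, here is what the single Link pattern might look like (the surrogate key and table names here are hypothetical).  Every Link load in the model is just another instance of this one statement, which is why documenting it once is enough.

```python
# Minimal sketch of the single Link load pattern (hypothetical names).
# Insert only the key combinations that are not already present in the Link.
LINK_LOAD_SQL = """
INSERT INTO lnk_customer_order (customer_sqn, order_sqn, load_dts, record_source)
SELECT DISTINCT s.customer_sqn, s.order_sqn, :load_dts, :record_source
FROM stg_customer_order s
WHERE NOT EXISTS (
    SELECT 1 FROM lnk_customer_order l
    WHERE l.customer_sqn = s.customer_sqn
      AND l.order_sqn    = s.order_sqn
);
"""
```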

Idea #4: Repeatable design patterns are easily made restartable

Restartability is built in to the loading patterns (one of the concepts we teach in class): once a problem is fixed, it is a simple matter of restarting the loading process from where it left off.  All loading processes going into the Data Vault are designed in such a way that they *do not* load duplicates, can pick up where they left off, and *do not* need code changes to make them restartable.
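As a sketch of what that means in practice, here is an idempotent Satellite load (column names are hypothetical): it only inserts rows that differ from the most recent Satellite record for the key, so re-running the same batch after a failure loads nothing twice and needs no code change to “restart.”

```python
# Minimal sketch of an idempotent Satellite load (hypothetical names).
# A row is inserted only when its attributes differ from the most recent
# Satellite record for that key, so re-running the batch loads no duplicates.
SAT_LOAD_SQL = """
INSERT INTO sat_customer (customer_sqn, load_dts, customer_name, customer_addr)
SELECT s.customer_sqn, :load_dts, s.customer_name, s.customer_addr
FROM stg_customer s
WHERE NOT EXISTS (
    SELECT 1
    FROM sat_customer t
    WHERE t.customer_sqn = s.customer_sqn
      AND t.load_dts = (
          SELECT MAX(x.load_dts) FROM sat_customer x
          WHERE x.customer_sqn = t.customer_sqn
      )
      AND t.customer_name = s.customer_name
      AND t.customer_addr = s.customer_addr
);
"""
```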

Idea #5: Design patterns drive high performance

The design patterns used in loading the Data Vault have already been tuned to provide a maximally performing architecture.  Sure, you have to add physical components like partitioning, parallelism, cache management, and database tuning – but those always need to happen.  In this case, the pattern is already built for *optimal* performance, so that when you partition, you don’t have to change the code to get it to perform.  When you add parallelism or database tuning, the “data mapping” automatically inherits the performance benefits.  Too often, when we think about Data Warehouse performance and tuning in the ETL world, we end up “changing” the code in the ETL layers, and what happens is: we gain performance for one load, but when the data changes, we lose performance later.  With the design patterns of the Data Vault, we get optimized performance out of the gate.

Idea #6: Design patterns make for easy “generation” of loading processes

As a part of SEI/CMMI Level 5, automation and optimization bubble to the top.  Why should ETL be any different?  Why should we spend (waste) our time hand-coding 100% customized loading processes all the time?  We shouldn’t!  With the Data Vault design patterns WE DON’T HAVE TO!   We can actually construct a load-process generator for our current ETL tool and generate 80% to 90% of the loading processes – as long as we have a data lineage mapping from the staging area to the Data Vault, which is what makes this work.  I have many customers generating loading code for OWB, SSIS, Informatica, Pentaho, Talend, and so on.  This leaves us time to do the real work and real customization where we have the most impact, and focus on the business rules (moving data from the Data Vault to the star schemas) – which, by the way, can also be baseline-generated so we have a starting point.
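As a rough sketch of the idea (the metadata layout here is made up, and a real generator would typically emit tool-specific mappings rather than raw SQL), a few lines of Python can turn a staging-to-Data-Vault lineage mapping into Hub load statements:

```python
# Minimal sketch of a load-process generator (metadata layout is made up).
# Given a staging-to-Data-Vault lineage mapping, emit one Hub load per entry;
# the same idea extends to Links and Satellites, or to emitting tool-specific
# mappings (OWB, SSIS, Informatica, ...) instead of raw SQL.
HUB_TEMPLATE = """\
INSERT INTO {hub} ({bk}, load_dts, record_source)
SELECT DISTINCT s.{bk}, :load_dts, :record_source
FROM {stage} s
WHERE NOT EXISTS (SELECT 1 FROM {hub} h WHERE h.{bk} = s.{bk});
"""

LINEAGE = [
    {"stage": "stg_customer", "hub": "hub_customer", "bk": "customer_bk"},
    {"stage": "stg_product",  "hub": "hub_product",  "bk": "product_bk"},
]

def generate_hub_loads(lineage):
    return [HUB_TEMPLATE.format(**m) for m in lineage]

for statement in generate_hub_loads(LINEAGE):
    print(statement)
```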

There are more, but I hope this helps get you started.  Please let me know what you want to hear about.

Cheers,
Dan L
DanL@DanLinstedt.com
