Data Vault And Staging Area

I’m often asked about the Data Vault and the Staging Area – when to use it, why to use it, how to use it – and what the best practices are around using it.  Those of you who’ve been through my training understand that there is a LOT of ground to cover, and I cover all of this (and more) with examples inside my one-on-one coaching area.  That said, I will answer some of the above questions here in this brief post.  This post is focused on batch processing and micro-batch processing; it does not answer the real-time feed questions.

What is a STAGING AREA?

A staging area is used in batch load situations.  A staging area (typically) lives on the same server as the data warehouse, in order to eliminate network traffic in the ETL between the staging area and the data warehouse.  Notice that I didn’t restrict the definition of a staging area to a database only…  This is absolutely CRITICAL.  Staging areas are sometimes also called landing zones for flat files, XML files, COBOL files and the like.

What is in the architecture of the staging area?

The architecture is: INDEPENDENT TABLES / FILES that “arrive” when the data is ready on the source.  The whole point of staging batch data is: Data Ready –> Logon to source –> Get data in a parallel process as fast as possible –> Put data in the staging area as fast as possible –> Logout of source.  By keeping the tables independent (no foreign keys, no referential integrity, no lookups, no matches, no checks of any kind), our IT team can be agile and fast when new systems are due to be added to the EDW, and our processes are optimized to scale at high speed (whether or not we have BIG DATA to deal with).  Truncate-and-reload makes the processes completely restartable.
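
To make the “independent tables, truncate-and-reload, in parallel” idea concrete, here is a minimal Python sketch.  It assumes a relational staging database reachable through a DB-API connection (SQLite is used purely as a stand-in), and the feed names and fetch_* helpers are hypothetical placeholders for “logon to source, get data, logout.”

    import sqlite3
    from concurrent.futures import ThreadPoolExecutor

    STAGING_DB = "staging.db"  # stand-in for the staging schema on the EDW server

    def load_feed(feed_name, fetch_rows):
        """Truncate-and-reload one independent staging table as soon as its
        source data is ready: no keys, no lookups, no checks of any kind."""
        conn = sqlite3.connect(STAGING_DB)
        try:
            conn.execute(f"DROP TABLE IF EXISTS stg_{feed_name}")
            # Structure mimics the source feed; everything lands as text here for
            # simplicity -- datatype alignment happens later, stage-to-vault.
            conn.execute(f"CREATE TABLE stg_{feed_name} (payload TEXT)")
            conn.executemany(f"INSERT INTO stg_{feed_name} (payload) VALUES (?)",
                             ((row,) for row in fetch_rows()))
            conn.commit()
        finally:
            conn.close()

    def fetch_orders():     # hypothetical: logon to source, pull rows, logout
        return ["order-1", "order-2"]

    def fetch_customers():  # hypothetical second source system
        return ["cust-1"]

    feeds = {"orders": fetch_orders, "customers": fetch_customers}

    # Each feed loads independently and in parallel -- no cross-feed dependencies.
    with ThreadPoolExecutor(max_workers=len(feeds)) as pool:
        futures = [pool.submit(load_feed, n, f) for n, f in feeds.items()]
        for future in futures:
            future.result()  # surface a per-feed failure; the others still finish

Because each staging table is truncated and reloaded in isolation, a failed feed can simply be re-run on its own without touching any of the others.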

What are the reasons for using a staging area?

  • Scalability – able to partition the structures as needed, able to add new parallel processes as needed
  • Flexibility – able to absorb new feeds, new columns, new systems QUICKLY
  • Dynamic – able to load *optional* feeds that have undetermined arrival dates/times (inconsistent feeds, or on-demand feeds)
  • Restartable – able to restart the PART of the load that failed, after the problem is fixed – WITHOUT affecting all the other feeds
  • Schedule – able to schedule the stage load WHENEVER the data on the source is ready; this removes serious timing dependencies across the system (which hinder scalability and flexibility)
  • Backup and Restore – After the entire staging area is “loaded” for a load cycle, I like to back it up, zip it, and timestamp it – then check it in to a version control system.  This provides me with auditability along the way, as well as full restoration capabilities.  (A minimal sketch of this step follows the list.)
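
Here is a minimal sketch of that backup step, assuming each staging table has already been exported to flat files under a per-cycle directory; the paths and the choice of git as the version control system are my illustrative assumptions, not a prescription.

    import shutil
    import subprocess
    from datetime import datetime, timezone

    cycle_dir = "staging_export"  # hypothetical flat-file dump of all staged feeds

    # Zip and timestamp the whole load cycle in one archive...
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    archive = shutil.make_archive(f"staging_{stamp}", "zip", cycle_dir)

    # ...then check it in for auditability and full restoration later.
    subprocess.run(["git", "add", archive], check=True)
    subprocess.run(["git", "commit", "-m", f"Staging load cycle {stamp}"], check=True)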

Why split the staging area from the Data Vault?

The Data Vault is meant to have data warehousing abilities.  The structures contain indexes, primary keys, and foreign keys.  The data being loaded to the Data Vault generally has a specific order (in batch only) – in order to tell the story of which system is actually delivering the data at what date/time.  When you add these structural components to the data model, you push “processing sequences” up-stream.  If you were to use a Data Vault (empty or truncated every load cycle) as the staging area, you would lose all the performance, flexibility, and scheduling benefits from the list above.

Restartability, backup, and restore are still built in to the Data Vault.  Don’t get me wrong… performance benefits are in the Data Vault model too – they are just viewed slightly differently.

Using a Data Vault model as a staging area forces you to “worry” about the availability and timing of the data sets from the multiple source systems.  *** ALWAYS TRY TO THINK IN TERMS OF MULTIPLE SOURCE SYSTEMS ***  Data warehousing folks sometimes make the mistake of thinking of only ONE source system when architecting their solutions.

What about ETL Performance and Ease of Load?

Well, I’ve blogged on this many, many times, in many places around the web.  I’m currently building best practices – explaining the why, the how, and the mathematics behind it all – inside the coaching area.  But I’ll give you some information here that might make sense.

Despite what you might think (i.e., that it’s easier to go from source straight to the Data Vault…), this is not true – especially in systems of any real scale.  Source to Data Vault means you push all the multi-system dependencies and data-availability issues *up-stream*, back to the base loading cycle.  The problem is: you have one source feed ready at 10pm the previous night, and another source feed ready at 4am the following morning.  Well, if they both load the same Satellite, or Link, or Hub – they have to be synchronized, especially if the 4am feed is the MASTER system.  Now you end up “waiting” to run the 10pm load cycle until after the 4am (next morning) feed is done.

Well, what happens if the source system for the first feed is only available from 10pm to 12am, and then the window closes?

Cross-feed dependencies (based on timing/availability) are one of the major reasons why EDW/BI projects become inflexible and non-agile as time goes on.  These cross-feed dependencies (as described) are based on the timing and availability of the data – and the system admins of those sources constantly want to change the schedules around and re-prioritize the jobs.  This, by itself, usually leads to HUGE re-engineering efforts, SUPER HIGH maintenance costs, and the beginning of the downfall of the EDW/BI system!!  This is where the problems begin for the business users funding the BI effort.

It’s eventually what forces a business to shut down and rebuild the entire data warehouse from the ground up.  It’s the root cause of the problem.

I’m sorry to be so adamant about this, but I’ve seen it in hundreds of business intelligence projects.  I’ve also helped some of these companies avoid this pain in the future by moving to the Data Vault and following the standards and best practices that I’ve set up and defined.

ETL Performance:  ETL, EL, or ELT (it doesn’t matter) is 4x to 10x slower when loading a table with primary/foreign keys and indexes ON than when loading an empty, truncated table with no primary/foreign keys and no indexes.  These factors matter, especially when the data set grows from 40 million to 400 million to 1 billion rows to load PER PROCESS.

ETL Performance to the Data Vault is fast, IF the staging area is in the database to begin with.  There are ways to get the database to “bypass logging”, shut down indexes, delay foreign key checks, etc. – if the operations are executed IN DATABASE.
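
As an illustration of those in-database tricks, here is a hedged Python sketch of one stage-to-vault load step.  It assumes an open DB-API connection (conn) to a SQL Server-style engine; the index, table, and column names (dv.sat_customer, stage.stg_customer, and so on) are made up for the example, the statement syntax differs per platform, and the satellite load is simplified (no delta check).

    def bulk_load_in_database(conn):
        cur = conn.cursor()
        # 1. Shut down a non-clustered index and defer FK checking on the target
        #    (SQL Server-style statements; other engines have their own equivalents).
        cur.execute("ALTER INDEX ix_sat_customer_load_dts ON dv.sat_customer DISABLE")
        cur.execute("ALTER TABLE dv.sat_customer NOCHECK CONSTRAINT ALL")
        # 2. One set-based INSERT..SELECT straight from the staging table -- no
        #    remote connections, no row-by-row traffic, everything stays in-database.
        cur.execute("""
            INSERT INTO dv.sat_customer (hub_customer_seq, load_dts, rec_src, name)
            SELECT h.hub_customer_seq, s.load_dts, s.rec_src, s.name
            FROM   stage.stg_customer s
            JOIN   dv.hub_customer    h ON h.customer_bk = s.customer_bk
        """)
        # 3. Rebuild the indexes and re-validate the constraints once, after the load.
        cur.execute("ALTER INDEX ALL ON dv.sat_customer REBUILD")
        cur.execute("ALTER TABLE dv.sat_customer WITH CHECK CHECK CONSTRAINT ALL")
        conn.commit()

With the staging table already inside the database, the whole step is one round trip to the engine rather than millions of rows pulled over a remote connection.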

ETL Performance slows down IF you go from source to Data Vault because of the timing issues, as well as the remote connection and IP transfer restrictions.  NOTE: THIS IS *NOT* A PROBLEM OF REAL-TIME or BURST DATA ARRIVAL.

Ease of load:  Well, to me it’s easier to load a STANDARD staging table that mimics the source system than it is to try to normalize the data on the fly (which is what going from the source directly to the Data Vault requires).  Once all the data is SQL-accessible, has been aligned (datatypes only!!), and duplicates have been removed – then it becomes easy to load from stage to vault.
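
For example, once the staging table is type-aligned and de-duplicated, a hub load can be a single set-based statement.  This is only a sketch under the same hypothetical table names as above, and it assumes the hub’s surrogate sequence is generated by the database (an identity/sequence column).

    def load_hub_customer(conn):
        cur = conn.cursor()
        # Insert only the business keys the hub has not seen yet; datatypes and
        # duplicates were already settled in the staging area.
        cur.execute("""
            INSERT INTO dv.hub_customer (customer_bk, load_dts, rec_src)
            SELECT s.customer_bk, MIN(s.load_dts), MIN(s.rec_src)
            FROM   stage.stg_customer s
            WHERE  NOT EXISTS (SELECT 1
                               FROM   dv.hub_customer h
                               WHERE  h.customer_bk = s.customer_bk)
            GROUP  BY s.customer_bk
        """)
        conn.commit()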

If you try to combine data type alignment, accessibility (direct to source), timing/availability, duplicate removal, source-system ordering of load processes, and normalization into a SINGLE ETL process, you dramatically increase the complexity of your source-to-warehouse load.  When you increase the complexity, your ability to remain agile as a solutions team begins to drop.  Your ability to maintain the system in an efficient and timely manner begins to drop.  Your ability to add new systems FAST begins to drop.

Too many benefits (including scalability and flexibility) are dropped when the Data Vault model is used as a staging area.  But this is just my opinion.

SEQUENCE NUMBERS….

I posted a blog entry on sequences recently, but here’s the gist: sequence numbers in the staging area should stay in the staging area.  They are good for ONE THING ONLY in the staging area: identification and removal of true duplicate rows.  They should always restart at 1 for each new batch load cycle.  They always need to be unique, but they don’t need to be in order.  This keeps the approach standard across ANY database system (columnar, appliance, MPP, SMP, clustered… whatever).  Sequences in the Data Vault are there to stay, and are to be used as fast-join placeholders in the vault – they represent a 1:1 relationship with the business keys.
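
A minimal sketch of that “one thing only”: use the staging sequence to keep a single survivor per identical row image and delete the rest.  The table and column names are hypothetical, stage_seq is the per-cycle staging sequence, and the self-referencing DELETE syntax varies slightly by platform.

    def remove_true_duplicates(conn):
        cur = conn.cursor()
        # The sequence is unique within the load cycle, so it can pick one arbitrary
        # survivor (the lowest value) out of each group of 100% duplicate rows.
        cur.execute("""
            DELETE FROM stage.stg_customer
            WHERE  stage_seq NOT IN (SELECT MIN(stage_seq)
                                     FROM   stage.stg_customer
                                     GROUP  BY customer_bk, name, load_dts, rec_src)
        """)
        conn.commit()
        # The sequence never leaves the staging area; the Data Vault keeps its own.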

Do any of you have a different opinion?  Do you agree with me?  What do you think about this subject?  Pros and Cons?  Register for the blog FOR FREE, then add your comment.

Cheers,
Dan L
PS: Don’t forget – there is a wide array of lessons about agility, flexibility, scalability, and architecture inside my on-line lessons at: http://LearnDataVault.com

2 Responses to “Data Vault And Staging Area”

  1. Marius 2010/08/22 at 11:19 pm

    Hi Dan,

    How do you use sequence numbers in the staging area to remove duplicate records?

    Kind regards
    Marius

  2. Stefan Verzel 2010/08/26 at 1:40 pm

    Marius,

    I think what Dan means is that you *need* sequence keys in order to delete duplicate records. The alternative is to identify both (or all) rows and delete them in full without keeping one of them.

    I reckon the alternative is to aggregate the data into a second table, but I suspect that might break a Data Vault law. 🙂 Strictly speaking, it’s not manipulation if the records are 100% duplicates and you mash them into one row. You may still record the event in any case.

    Kind regards,
    Stefan
