in a recent post, i described loading patterns for data vault and how they might work going forward. a question came up about comparing the loading patterns for source -> staging -> data vault per system versus for all systems simultaneously. within the coaching area i go through these topics in great detail: the pros and cons, the different options, and how to make each one a success. in this blog entry, i will highlight some of the issues, and provide a few hints and techniques for making each one work.
what do i need to know?
first, you need to know that both patterns accomplish the same goal. second, each one can produce a different result when querying the data (depending on design choices). finally, you should be aware that the closer you get to real-time information processing, the more likely you are to be forced to load data in parallel all the time (regardless of system).
so, then. tell me about the per-system style.
when you load on a per-system basis, you are making a conscious choice to dictate that the most recent system loaded within a given batch cycle is, in fact, the master system. this is especially true if you are loading multiple systems to a single satellite rather than splitting each satellite by source system. *note: this is discussed at length in the technical modeling book.
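a minimal in-memory sketch of why the last-loaded system becomes the de facto master when multiple systems feed one shared satellite. the table and attribute names here are hypothetical, and a python list stands in for the satellite table:

```python
from datetime import datetime

# hypothetical in-memory satellite shared by multiple source systems
satellite = []

def load_satellite(hub_key, source_system, attributes, load_dts):
    """append a new row; data vault satellites are insert-only."""
    satellite.append({
        "hub_key": hub_key,
        "record_source": source_system,
        "load_dts": load_dts,
        **attributes,
    })

def current_record(hub_key):
    """the 'current' view simply picks the row with the latest load_dts."""
    rows = [r for r in satellite if r["hub_key"] == hub_key]
    return max(rows, key=lambda r: r["load_dts"])

# system A loads first, system B loads later in the same batch cycle
load_satellite(1, "crm", {"name": "Acme Corp"}, datetime(2023, 1, 1, 2, 0))
load_satellite(1, "erp", {"name": "ACME CORPORATION"}, datetime(2023, 1, 1, 3, 0))

# whichever system loaded last now dictates the 'current' attributes,
# purely by load order -- an implicit, accidental master-system choice
print(current_record(1)["record_source"])
```

splitting the satellite by source system removes this ambiguity, because each system's history lives in its own table and the "master" choice moves downstream to query time.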
per-system loads also indicate that the data from the entire system is 100% ready to be sourced at the same time. this can be difficult, and in some cases can cause the loads to wait far too long to even get started. now, you could split the load into components: load all of a single system to staging when its data is ready, and once all of staging is loaded for that system, continue loading downstream from staging to the data vault, and so on. but this may lead to all kinds of interesting problems in the link tables – where associations are “expected” to be there but aren't, or don't arrive until later when another part of the source is loaded.
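the split described above can be sketched as a small orchestration loop. the system names, readiness flags, and loader functions are all hypothetical placeholders for real extract jobs:

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical readiness flags; a real orchestrator would poll the sources
systems = {"crm": True, "erp": True, "billing": False}  # billing not ready yet

def load_to_stage(system):
    # placeholder: extract the whole source system into its staging tables
    return f"stage.{system} loaded"

def stage_to_vault(system):
    # placeholder: push that system's staged data into hubs, links, satellites
    return f"vault loaded from stage.{system}"

def load_system(system):
    """per-system pipeline: stage first, then downstream to the data vault."""
    load_to_stage(system)
    return stage_to_vault(system)

# each ready system runs its own pipeline; late systems (billing here)
# simply haven't arrived yet -- which is exactly where "expected but
# missing" link associations come from
with ThreadPoolExecutor() as pool:
    results = list(pool.map(load_system, (s for s, ready in systems.items() if ready)))
print(results)
```

note that nothing in this loop waits for billing: any link rows whose other half lives in the billing feed will be absent until its pipeline eventually runs.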
another statement you are making when you load per-system is that all data is self-contained – that none of the incoming data depends on data from other systems during the loading cycle. this is ok in most cases, as the link tables are late-binding and data-driven anyhow.
the only time this “model” would cause a problem is if the loading cycle took the data set from the data vault and moved it to data marts downstream without waiting for a complete view of the data to be loaded to the warehouse. which brings up an interesting point for real-time feeds: when is it “safe” to move data downstream, regardless of the load style? (i'll answer that in another blog post later.)
ok – there are several issues with this style of loading and managing components. the best practice, or recommended rule, is not to follow this approach, but rather to follow a different one (provided further down the post).
loading all systems simultaneously.
this method is the end-game. this is the point we arrive at, especially when loading the data vault in real-time. real-time loading is different from batch loading, though today the lines are blurred, especially when discussing micro-batch or mini-batch. i put real-time loading at arrival latencies of 3 seconds or less. in general, if you are following this particular method (batch or not), you quickly realize that collisions happen in the hubs and links, and possibly in the satellites. the best practice is to split satellites by source system (i explain why in the technical book, and in the coaching area).
ultimately the collisions in the hubs and links are insert-only collisions, and if they can't be managed by a read-only not-exists query, then they must be managed by the database deadlock mechanism and two-phase commit. the ultimate rule there is: whoever gets the lock wins the insert; the other is rejected as a duplicate business key. this is one of the many reasons i also prefer database engines where the identity or sequence column is built into the table, and isn't a separate object.
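a minimal sketch of the not-exists guard for a hub load, using sqlite as a stand-in for a warehouse engine. the table and column names (`hub_customer`, `customer_bk`) are hypothetical; the unique constraint on the business key is the backstop that plays the "whoever gets the lock wins" role under real concurrency:

```python
import sqlite3

# sqlite stands in for the warehouse engine; schema names are made up
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE hub_customer (
        hub_customer_key INTEGER PRIMARY KEY,  -- sequence built into the table
        customer_bk      TEXT NOT NULL UNIQUE, -- business key: one row only
        record_source    TEXT NOT NULL
    )
""")

def insert_hub(business_key, source):
    """insert-only, collision-safe hub load: the not-exists guard makes a
    second loader insert zero rows instead of raising a duplicate error."""
    cur = con.execute(
        """INSERT INTO hub_customer (customer_bk, record_source)
           SELECT ?, ?
           WHERE NOT EXISTS (
               SELECT 1 FROM hub_customer WHERE customer_bk = ?
           )""",
        (business_key, source, business_key),
    )
    con.commit()
    return cur.rowcount  # 1 = this loader won the key, 0 = someone beat it

print(insert_hub("CUST-1001", "crm"))  # first loader wins the insert
print(insert_hub("CUST-1001", "erp"))  # second loader is a silent no-op
```

under true parallel loads the not-exists check alone isn't atomic across sessions, which is why the unique business key (and the engine's locking) remains the final arbiter, exactly as described above.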
conclusions on this topic:
when thinking about batch loading, especially in big numbers, i prefer to:
a) create a batch load cycle
b) load all systems as soon as they are ready in massive parallelism to the staging area
c) once all the data has reached the staging area, begin loading all hubs in parallel
d) then load all the links in parallel
e) then load all the satellites in parallel.
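the steps above can be sketched as ordered "waves", where everything inside a wave runs in parallel and the next wave starts only when the previous one has fully finished. the table names are hypothetical examples:

```python
from concurrent.futures import ThreadPoolExecutor

# hypothetical table lists; tables within one wave share no dependencies
waves = [
    ("stage",      ["stg_crm", "stg_erp", "stg_billing"]),
    ("hubs",       ["hub_customer", "hub_product", "hub_order"]),
    ("links",      ["lnk_order_customer", "lnk_order_product"]),
    ("satellites", ["sat_customer_crm", "sat_customer_erp", "sat_order"]),
]

def load_table(table):
    # placeholder for the real loader (ELT job, stored procedure, etc.)
    return f"{table} loaded"

log = []
for wave_name, tables in waves:
    # everything inside a wave runs in massive parallelism...
    with ThreadPoolExecutor() as pool:
        log.extend(pool.map(load_table, tables))
    # ...but the next wave only begins once this one has fully completed

print(log)
```

the same loop extends naturally with two more waves (all dimensions, then all facts) for the downstream mart loads mentioned below.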
for me this drives out a natural master-system selection per source system, as one hub's master is another's child. this allows different master systems to dictate origination points for different keys. i guess you could say i use a bit of both techniques.
after all the data is loaded to the data vault, i then proceed to load all dimensions in parallel, followed by all fact tables. this achieves the highest parallelism possible for large-scale batch loads. real-time loads compete for resources, and ultimately use the database's two-phase commit and locking system to decide who wins out.
again, how and why are covered in detail in the coaching area.