There is a lot of talk about Data Vault automation. In this post I discuss the easiest method I know of to automate the loading of a Data Vault.
When you build ETL/ELT routines to load a Data Vault, you generally end up with hundreds of processes: at least one per source field (or set of source fields) tracked to each target. So for a single source staging table, say a FIRM table,
you end up with a minimum of the following load objects:
- Load Hub FIRM
- Load Sat FIRM
- Update Sat FIRM end date
So: a minimum of three ETL objects (data flow designs) for one single source table. With 25 staging tables, you end up with a minimum of 75 ETL objects.
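To make the problem concrete, here is a minimal sketch (Python with SQLite, purely for illustration) of what those three hard-coded load objects might look like for a FIRM table; all table and column names are my own assumptions, not from any specific tool:

```python
import sqlite3

# Illustrative staging, hub, and satellite structures for one FIRM table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stg_firm (firm_key TEXT, firm_name TEXT, load_dts TEXT);
CREATE TABLE hub_firm (firm_sqn INTEGER PRIMARY KEY, firm_key TEXT UNIQUE, load_dts TEXT);
CREATE TABLE sat_firm (firm_sqn INTEGER, firm_name TEXT, load_dts TEXT, end_dts TEXT);
INSERT INTO stg_firm VALUES ('F001', 'Acme Ltd', '2013-01-01');
""")

# 1. Load Hub FIRM: insert only business keys not yet in the hub.
conn.execute("""
    INSERT INTO hub_firm (firm_key, load_dts)
    SELECT DISTINCT s.firm_key, s.load_dts FROM stg_firm s
    WHERE NOT EXISTS (SELECT 1 FROM hub_firm h WHERE h.firm_key = s.firm_key)
""")

# 2. Load Sat FIRM: descriptive attributes keyed by the hub sequence.
conn.execute("""
    INSERT INTO sat_firm (firm_sqn, firm_name, load_dts)
    SELECT h.firm_sqn, s.firm_name, s.load_dts
    FROM stg_firm s JOIN hub_firm h ON h.firm_key = s.firm_key
""")

# 3. Update Sat FIRM end date: close out rows superseded by a newer load.
conn.execute("""
    UPDATE sat_firm SET end_dts = '2013-01-01'
    WHERE end_dts IS NULL
      AND load_dts < (SELECT MAX(load_dts) FROM sat_firm s2
                      WHERE s2.firm_sqn = sat_firm.firm_sqn)
""")
conn.commit()
```

Every new staging table needs its own copy of all three steps, which is exactly where the object count explodes.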
So how can we reduce the number of ETL objects and still maintain the loading cycle?
It's really quite simple if you think about it. The solution: parameter-driven ETL routines.
You end up with the following patterns (requiring only one load map per pattern):
- Load Hub
- Load Link
- Load Satellite
- Update Satellite end date
- Update Hub last-seen date
- Update Link last-seen date
Now, by parameters I mean very simple parameters. The following are necessary:
- The SQL statement that selects the columns from the source staging table
- The cross-join parent lookup SQL statement (to find the parent sequence number)
- The insert statement, or a dynamic target name, so that the target table can be changed on the fly
- In the case of updates, the dynamic update statement
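Putting those parameters to work, here is a minimal sketch of one generic "Load Hub" template, again in Python with SQLite purely for illustration; the parameter key names and table names are my assumptions, not part of any specific tool:

```python
import sqlite3

def run_pattern(conn, params):
    """One generic load template: the SQL lives in the parameters,
    so the same code serves every target table of this pattern."""
    rows = conn.execute(params["source_sql"]).fetchall()
    conn.executemany(params["insert_sql"], rows)  # dynamic target
    conn.commit()
    return len(rows)

# Illustrative staging and hub tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_firm (firm_key TEXT, load_dts TEXT, rec_src TEXT)")
conn.execute("CREATE TABLE hub_firm (firm_key TEXT PRIMARY KEY, load_dts TEXT, rec_src TEXT)")
conn.executemany("INSERT INTO stg_firm VALUES (?, ?, ?)",
                 [("F001", "2013-01-01", "CRM"), ("F002", "2013-01-01", "CRM")])

# One parameter set per source-to-target mapping.
hub_firm_params = {
    "source_sql": "SELECT DISTINCT firm_key, load_dts, rec_src FROM stg_firm",
    "insert_sql": "INSERT OR IGNORE INTO hub_firm VALUES (?, ?, ?)",
}
loaded = run_pattern(conn, hub_firm_params)
```

Loading a second hub needs no new code at all, only a second parameter set.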
There are a few more pieces needed to get this going properly, but it can be done in Informatica, Kettle, SSIS, and SQL procedures, and I've been told it should work in DataStage too.
At the end of this, I end up with maybe 20 or 25 ETL "template" objects, and as many parameter files as I need to move the data from source to target. Each parameter file may contain up to four SQL-based statements.
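As a sketch of how such parameter files might drive the templates (the JSON layout and key names here are my assumptions for illustration):

```python
import glob, json, os, tempfile

# One example parameter "file": up to four SQL-based statements plus a
# pattern name that selects which template should run it.
pdir = tempfile.mkdtemp()
with open(os.path.join(pdir, "sat_firm.json"), "w") as f:
    json.dump({
        "pattern":    "load_satellite",
        "source_sql": "SELECT firm_key, firm_name, load_dts FROM stg_firm",
        "lookup_sql": "SELECT firm_sqn FROM hub_firm WHERE firm_key = ?",
        "insert_sql": "INSERT INTO sat_firm (firm_sqn, firm_name, load_dts) VALUES (?, ?, ?)",
        "update_sql": "UPDATE sat_firm SET end_dts = ? WHERE firm_sqn = ? AND end_dts IS NULL",
    }, f)

# The driver: one pass over the parameter files, handing each set to
# the matching template (stubbed here to just record the dispatch).
dispatched = []
for path in sorted(glob.glob(os.path.join(pdir, "*.json"))):
    with open(path) as f:
        params = json.load(f)
    dispatched.append(params["pattern"])
```

Adding a new source table then means writing one more small file like this, not designing three more ETL objects.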
If you are interested in downloading a working example, reply here and let me know. My newest online class over at http://datavaultalliance.com/training/ will cover this in detail.
PS: The Data Vault 2.0 specifications help with performance and consistency, and they make these techniques work much better. I hope to release them in January 2013.