
Data Vault Loading Specification v1.2

Published on 2010/05/13 in DV Standards
1.0 Goals and Objectives
The following elements are part of the goals and objectives for loading the Data Vault. The data warehouse should truly represent the source systems in their original formats; the only things that should be done to the data are applying basic defaults and error reporting.

1.1 Consistent Process Architecture
Each process must be consistent. Loading every Hub, Link, and Satellite should follow exactly the same process template. There are 3 process templates today, and each can be represented in SQL as the “T” in ELT, or represented as the mapping design in ETL. The EAI process has the same process set as well. The queries must also follow a similar process template. Getting data IN and OUT should follow consistent formats – to make the Data Vault fully maintainable, and to provide the possibility of automating the SQL build-out. (See the Data Vault Wizards on this site for further information.) A process should only break the template WHEN the end-user has signed off on a special requirement, allowing documentation to be created (see 1.2 below).

1.2 Restartable Process Without Requiring “change” to the process itself.
The processes that are built should contain CDC components so they can recognize what they’ve already loaded (should the process break in the middle). The fully restartable process means: when it does break, the only change is re-loading or re-setting the source/stage data set. The process itself DOES NOT CHANGE, and is simply restarted to reload the new data. You may (with sign-off) actually delete data from the Data Vault due to a processing error, but this should occur only in the Alpha and Beta release phases of the warehouse project. The restartable and automation requirements go hand-in-hand with Six-Sigma and S.E.I. compliance.

1.3 99.999999% Up-Time of the process – failing only on hard-errors.
A process should not fail because of DATA errors. A process should handle all possibilities of data issues through default values and inserts to error reporting/error logging tables. A process should only fail when: it runs out of disk space, it dead-locks, the network goes down, the server goes down, the process has no privileges, the passwords are changed, etc. The end-user owns ALL (100%) of the data set (see spec 1.9). We should ONLY be getting up at 2:00am to handle a busted process, not to handle data breaks or soft-errors. Those should be architected out of the processing layers.

1.4 SEI Level 5 Compliant process, with documented metrics and repeatability.

Each process is rigidly defined by templates to contain restartability, and some layer of fault-tolerance. Each process should be tracked in run-time (start to finish), row movement, run-cycle inclusion, and number of rows inserted, delta processed, and number of data errors. To be SEI Level 5 compliant, these metrics must be available on a run-by-run basis, and each of the processes should be fully restartable and fault-tolerant, as well as re-generated from an automated mechanism based on changes.

1.5 Sarbanes-Oxley and Basel II compliant processing.
Based on government requirements, SOX, and Basel II, our processes must now track what data changed, when it was changed, what it was before the change, and what it changed to. This also means that NO MORE MANUAL FIXES TO THE DATA SETS AT 2:00am ARE ALLOWED BY THE I.T. TEAM JUST TO GET THE DATA LOADED. All processes must be fault-tolerant, completely restartable, and the data (if changed) must be traceable. It also means that any data in the warehouse can be audited by (your) government or international accounting agencies looking for fraud. Even if we “default” values going into the warehouse, we must track the “before changed” value somewhere in order to be compliant. It is also suggested that we track dual-date entry on the data (load date and extract date, plus modified and applied date if possible). This allows aggregations to be created that roll by modified date (to see all the adjustments) or by applied date (to see totals for a specific month), and have both calculations of information pulled from the warehouse.

1.6 Maintain full Traceability of Data
As discussed above, SOX, Basel II, and Basel III, along with government restrictions, now require full traceability of the data: when it changed, what it changed from/to, and who changed it.

1.7 Data Driven Design
(See the fully restartable designation above). This means that the process template does not change, that the loading mechanisms are fault-tolerant and handle ALL and ANY kind of data that hits them, and that NONE of the data “drops out” of the load to the data warehouse (Data Vault). ALL data makes it in to the Data Vault no matter how bad it is. Yes – it may be recorded in an error mart, but it still makes it to the warehouse (see the 100% rule below). It also means that changing things like the load-date is data driven, and that the queries/views that pull data from the warehouse are also user-date driven without re-coding.

1.8 Real-Time Provisions
All Data Vault loading templates should be applicable to EAI or real-time template use as well. The real-time load stream should handle data as it arrives in a mini-batch, or burst. It should be grouped as a burst with a burst-datetime. It should be checked (delta’d) against what exists, and default values should be applied. Errors in the data set should be recorded, and the data should be loaded. Obviously, once a table is being loaded in real-time, it cannot be loaded by batch AT THE SAME TIME (due to DB restrictions and deadlocks). This means that if batch has to load the table too, then real-time needs to re-direct its load temporarily to a staging table (even though the data may decay). The only tables that may run into contention are the Hub and the Link. The Satellites should be unique to batch or real-time, and the data should be separated by RATE-OF-CHANGE (see the Data Vault Specification).

1.9 100% of the Data Loaded to the Data Vault 100% of the time.
Finally, ALL data arriving at the warehouse MUST be loaded into the Data Vault ALL the time. Just because data formats or values are bad doesn’t mean the data should fall out of the process to an error flat file, nor does it mean it should not be loaded to the warehouse. Processing errors upstream and source system data errors must all be recorded in the warehouse as facts: this happened, and this is what the Data Vault saw. We are in the business of recording EVERYTHING as a fact – establishing a System-of-Record is of utmost importance. In order to get the data in, apply default values (to correct data type mismatches) and zero keys (to correct NULL key values), and record which records in which tables were changed (in an error logging mechanism).

1.10 Large Data Scalability.
The loading processes should be linearly scalable in volume and timing. They should be able to scale into the terabyte/petabyte and beyond levels. Because of the architectural rules and the load rules about grain, we have effectively removed all the barriers normally seen in the data load cycle: dependencies on other loads, on other data, or even on transformed data. Because of this, the loads should be designed to be highly efficient and highly restartable at both large volumes and low arrival latencies.

2.0 Batch – or Strategic Prerequisites
The prerequisites for batch loading are: use a set of staging tables; load data (truncate and re-load) to refresh; do *NOT* keep history in these staging tables – if you do, then your staging area is acting as a data warehouse (be warned of this danger). In batch loads, all processes must be consistent and repeatable – highly parallel and scalable. The objective is “basic” data manipulation, such as datatype cleanliness and other issues (listed below). Make use of partitioning (as needed by RDBMS platform).

2.1 Batch – Staging Definitions
The staging definitions should populate required fields upon load. The definitions should follow the rules below in order to sufficiently populate the Data Vault. It is possible to have multiple levels of staging tables to assist with cross-table integration, and additional data set heuristics/statistical analysis.

It is suggested that these multiple layers assist with the integration of data, and not with the alteration (unless the Data Vault will be storing BOTH the before and after pictures).

Please be aware that with the advent of 64-bit systems, staging may actually be done in RAM. Also, if the source systems have CDC installed upstream, staging may not be necessary at all; it may be possible to load directly to the Data Vault structures. Lower storage requirements = less maintenance and less room for error.

2.1.1 Required fields (structure):
These fields should be populated on the way in to the staging tables; keep transformation to a minimum during load of the stages in order to make the loads as parallel and independent (from other stage loads) as possible.
The structure should be as CLOSE to the source system as possible, so that the data isn’t completely altered going into the Data Vault staging area (a DDL sketch of these system fields follows the list below).
a: Staging Sequence Number
b: Load Date
c: Record Source
d: Extract Date (if one is available)
e: OPTIONAL: Delta Flag/New Row Flag
f: OPTIONAL: Target Table Sequence Number
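
A minimal DDL sketch of a staging table carrying these system fields (illustrative only – the table name, column names, and datatypes below are assumptions, not part of the specification; the source columns are simply copied as-is):

    -- Hypothetical staging table for a "customer" feed; only the system columns are added,
    -- the source columns keep a structure as close to the source system as possible.
    CREATE TABLE stg_customer (
        stg_sequence      BIGINT       NOT NULL,   -- a: staging sequence number
        load_date         TIMESTAMP    NOT NULL,   -- b: load date (same value for the whole batch cycle)
        record_source     VARCHAR(48)  NOT NULL,   -- c: record source (or a 4-character code, see 4.5)
        extract_date      TIMESTAMP    NULL,       -- d: extract date, if the source provides one
        delta_flag        CHAR(1)      NULL,       -- e: OPTIONAL delta/new-row flag
        target_sequence   BIGINT       NULL,       -- f: OPTIONAL target table sequence number
        customer_number   VARCHAR(30)  NULL,       -- source columns follow, unaltered
        customer_name     VARCHAR(100) NULL,
        customer_address  VARCHAR(200) NULL
    );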

2.1.2 Goals for Stage Table Loads
The following goals and objectives should be met when loading the staging tables:
a: Parallel – all stage loads are completely independent of any other stage load, sometimes relying on “small” lookups to add descriptions for codes.
b: Run when the data is ready. Whenever the data is ready on the source, that’s when the process should run, it shouldn’t have to wait until other data is ready.
c: Big network bandwidth, lots of memory, lots of disk space – there should be a lot of available resources for the parallel batch loads so they can utilize as much of the machine as possible (loading data as quickly as possible).

2.1.3 Datatype Changes going into staging tables
The following data type changes are suggested when loading the staging area:
a: Character/string to Datetime
b: character/string to numeric
c: split of over-loaded fields (if rules permit) into respective parts. There are cases where no amount of rules can be established to PARSE the field content, in that case, the fields should be loaded as-is.
d: Character to var-character (varchar/varchar2) – trim all spaces (leading/trailing).
e: OPTIONAL: UPPER CASE all strings (it is suggested that either upper or lower case be signed off on by the user) – this makes processing much faster when going into the satellites, and when comparing keys in hubs. Otherwise, the UPPER function must be executed across all satellite data when running compares (unless the user cares about tracking case-changes).

IF characters cannot be converted to Date/Time then it’s recommended that they be defaulted, preferably to 1970 (this way, no future-dating problems occur). It is required to get sign-off on any data that is to be changed/altered/defaulted.

IF characters cannot be converted to numeric, it is recommended that they be defaulted to NULL. In this way, the numerics will not affect sums, counts, averages, and other mathematical algorithms. Again, sign-off is required for default values.

The best practice is: if the disk space is available, store both the original value in character mode (prior to conversion), and store the new value in the “domain aligned field”. If the disk space is not available, then it is recommended to backup the staging area POST-LOAD, and zip it, and name it by date/time – then save it away. Domain conformity is usually expected due to the way RDBMS handles values within queries. When the disk is not available to store both values and the data is being aligned to the domain (datatype) of the field, then please refer to the DEFAULT VALUES section for further information on how to resolve values outside the domains.

There are new devices appearing on the market which “claim” to load the data while profiling it. This is still under investigation as to how helpful this can be.

2.1.4 Default Values
The suggested default values are as follows: Pick characters that are readable/recognizable as a Non-Value by the end-user. Remember that NULL’s in the Data Vault can always be changed on the way into a data mart. ALWAYS have the end-users/business sign off on any default values you choose, and BE CONSISTENT with the values you use.
a: Defaults for Datetime: 1/1/1970 00:00:00.0000
b: Defaults for Numeric: NULL
c: Defaults for Character: (1) ? (2+) ? or (3+) ?
Just suggestions here, you can use whatever values you want, so long as they are signed off on.
Alternately you can choose to leave non-critical date times NULL. You can also choose to change date-times to UTC across the board for consistency.
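
As an illustration of the conversion-plus-default rules in 2.1.3 and 2.1.4, a staging-load expression might look like the sketch below. This is an assumption-laden example: it relies on an engine that offers TRY_CAST and an ANSI-style TIMESTAMP type (for example Snowflake; SQL Server users would substitute DATETIME2, other platforms their own “safe cast” construct), and every table and column name is hypothetical.

    -- Hypothetical source-to-staging conversion: keep the raw value, align the domain,
    -- and apply the signed-off defaults when a conversion fails.
    SELECT
        src.order_date_raw,                                       -- original character value, kept for traceability
        COALESCE(TRY_CAST(src.order_date_raw AS TIMESTAMP),
                 CAST('1970-01-01 00:00:00' AS TIMESTAMP))
                                             AS order_date,       -- unconvertible dates default to 1970
        TRY_CAST(src.order_amount_raw AS DECIMAL(18,2))
                                             AS order_amount,     -- unconvertible numerics default to NULL
        UPPER(TRIM(src.customer_number))     AS customer_number   -- trim spaces, optionally upper-case
    FROM src_order AS src;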

2.2 Batch – Hub Loading
Watch for duplicates across staging tables if you do this in one SQL statement; otherwise, as separate load processes, duplicates will be naturally eliminated. Just make sure the case chosen is consistent (upper/lower, etc.).
a: gather a unique union of ALL DISTINCT keys in the staging tables that are targeted to that hub. Make SURE the selected “master” record source system is run first, either process-wise or in the UNION SQL statement.
b: check to see if they exist in the target table
c: if they do exist, filter them from the load.
d: if they don’t exist, assign the next sequence number and insert.
During the HUB LOAD process, it is possible to store the “business key to Sequence ID matchup” in a staging table area. This staging table is a second target from the load process (the primary or first target is the hub itself). These staging tables are truncated and re-loaded; they serve as a fast-join for inserted and updated rows when going to load the LINKS.
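
A minimal sketch of this hub-load template in set-based SQL (all names are illustrative; the hub surrogate is assumed to be an identity/sequence column, so its generation is left to the platform):

    -- Hypothetical hub load: insert only business keys that are not yet in the hub.
    INSERT INTO hub_customer (customer_number, load_date, record_source)
    SELECT k.customer_number, k.load_date, k.record_source
    FROM (
        SELECT customer_number, load_date, record_source,
               ROW_NUMBER() OVER (PARTITION BY customer_number
                                  ORDER BY source_priority) AS rn    -- the "master" record source wins
        FROM (
            SELECT customer_number, load_date, record_source, 1 AS source_priority
            FROM stg_crm_customer                                    -- a: master source system first
            UNION ALL
            SELECT customer_number, load_date, record_source, 2 AS source_priority
            FROM stg_billing_customer
        ) AS all_keys
    ) AS k
    WHERE k.rn = 1
      AND NOT EXISTS (                                               -- b/c: filter keys already in the hub
          SELECT 1 FROM hub_customer h
          WHERE h.customer_number = k.customer_number
      );
    -- d: hub_customer_sqn is assumed to be an identity/sequence column populated on insert.

The same run can also write the resolved key/sequence matchups to the second-level staging table described above, either with a follow-on insert or with the platform’s OUTPUT/RETURNING facility where one exists.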

2.3 Batch – Link Loading
The goal of loading the links is to have NO DUPLICATES across the business key structures (sequence ID’s). Each representation of an intersection or relationship must be unique. Links represent any and all relationships that the source systems bring in. Unique sequences for the links are generated when a particular relationship is seen. If the above “second level staging” tables have been built to hold the keys from the newly updated/inserted hub rows, then these are used to speed up the link generation process.
a. Gather the business keys
b. Default the “null” business keys to an Unknown value or “Zero record” in the hub.
c. Go get the hub sequence ID for that individual business key.
d. Repeat “c” for all hub keys.
e. Check to see if this collection of sequences exists in the link already; if so, filter it out of the feed.
f. If it doesn’t exist, generate a new sequence for the link and insert it.
It is also possible (just like the hubs second level stage) to insert these results to a second level link stage, this makes attaching satellite rows to link keys much easier.
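
A minimal sketch of the basic link-load steps above (illustrative names; the hub lookups could equally be replaced by joins to the second-level staging tables):

    -- Hypothetical link load: one row per distinct relationship not yet recorded in the link.
    -- Assumes a "zero record" with surrogate 0 exists in each hub for missing business keys (step b).
    INSERT INTO lnk_customer_order (hub_customer_sqn, hub_order_sqn, load_date, record_source)
    SELECT DISTINCT
        COALESCE(hc.hub_customer_sqn, 0) AS hub_customer_sqn,          -- b/c: resolve or zero-key
        COALESCE(ho.hub_order_sqn, 0)    AS hub_order_sqn,             -- d: repeat for every hub key
        s.load_date,
        s.record_source
    FROM stg_order s
    LEFT JOIN hub_customer hc ON hc.customer_number = s.customer_number
    LEFT JOIN hub_order    ho ON ho.order_number    = s.order_number
    WHERE NOT EXISTS (                                                  -- e: filter existing relationships
        SELECT 1 FROM lnk_customer_order l
        WHERE l.hub_customer_sqn = COALESCE(hc.hub_customer_sqn, 0)
          AND l.hub_order_sqn    = COALESCE(ho.hub_order_sqn, 0)
    );
    -- f: the link surrogate key is assumed to be an identity/sequence column populated on insert.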

Link Loading can take a different tack as of late. There are two new styles for link loading to save time and complexity of the ETL / ELT components.

The first is: embed copies of the hubs’/links’ (parent) business keys in the link – duplication – but remember this only works if you don’t have “dueling source systems” with the same key and different record sources. This works in very specific cases. It allows the load routine to bypass the parents altogether and go directly after the link table. A variant of this is to use a computed hash key based on the business key, such as CRC32, CRC64, or MD5, as the physical key – this allows computational “location” of the link record based on the business value in-stream. However, be careful: CRC32 collisions can be expected roughly once in every 10M rows. MD5 and CRC64 are much more unique than that, but also twice as large to store.
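
For example, a hash-key variant might compute the link key directly from the business values. A sketch only: md5() and the '||' concatenation operator as written here are PostgreSQL-style, other engines use different function and operator names, and all table/column names are hypothetical.

    -- Hypothetical computed hash key: the link key is derived from the business keys themselves,
    -- so the parent hub lookup can be skipped entirely.
    SELECT md5(UPPER(TRIM(s.customer_number)) || '|' || UPPER(TRIM(s.order_number))) AS lnk_customer_order_key,
           s.customer_number,
           s.order_number
    FROM stg_order s;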

The second, and more probable and more scalable, is to “record” the hub keys that were just looked up or inserted (based on the in-stream flow that builds the hubs) – insert the surrogate-key-to-business-key matches into a truncated and re-loaded staging table. This provides a “reference table” that can be joined directly back to the incoming source on a 1-for-1 match; in fact, if the table looks like the link table as far as keys are concerned, it can then become a candidate as the “source” for the link table itself. It also keeps the size of the data set down to the size of the incoming data, so the link processing (and link satellites) no longer have to spread across the hubs to get the surrogate key matches.
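
A sketch of that second-level “key match” staging table, truncated and re-loaded each cycle (hypothetical names throughout):

    -- Hypothetical second-level staging table: one row per incoming staged record,
    -- carrying the resolved hub surrogates so the link load can join 1-for-1 to it.
    TRUNCATE TABLE stg2_order_keys;

    INSERT INTO stg2_order_keys (stg_sequence, hub_customer_sqn, hub_order_sqn, load_date, record_source)
    SELECT s.stg_sequence,
           hc.hub_customer_sqn,
           ho.hub_order_sqn,
           s.load_date,
           s.record_source
    FROM stg_order s
    JOIN hub_customer hc ON hc.customer_number = s.customer_number
    JOIN hub_order    ho ON ho.order_number    = s.order_number;

Because this table already carries one row per relationship with all surrogates resolved, it can act as the direct source for the link (and link-satellite) loads.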

By far, the second routine is easier to maintain, and doesn’t change the base-architecture, remember: anytime the base architecture is altered, flexibility may be lost…

2.4 Batch – Satellite Loading
The goal of Satellite loading is two-fold: 1) load only delta changes – those rows that actually have a change from the most recent load – even if the most recent load broke in the middle and this is a re-start (the load should pick up where it left off); 2) split the work by type of data and rate of change.

Sometimes Satellites require Link PKs (link sequence numbers), and unfortunately, the fine-grained links are all dependent on many different hub key joins. To overcome this problem, run nested selects against the hubs/links when pulling the data out, or use the PIT table, or maintain a second-level staging table which “bridges” natural keys to surrogate keys for all the data currently in the stage (it may be another sequence to pull and load the 2nd level stage, but it will save a tremendous amount of time going forward).

The process for batch loading on a Satellite looks like this:
a. Gather the link or hub surrogate key for the dependent satellite
b. join to a “current load-date” table, and compare the “most recent” data in the staging area to the Satellite (AS OF the date-driven/data driven timestamp).
c. Select only those rows that have changed
d. Place the new load-date into the PK of the Satellite
e. Track the rows to be inserted, to make it easy to end-date old rows in the next pass, as well as easy to update the PIT and Bridge Tables.
f. Insert the new Satellite records.
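
A minimal sketch of steps a–f for a hub-based satellite (illustrative names; NULL-safe column comparison and the end-dating of old rows are left out for brevity):

    -- Hypothetical satellite delta load: insert only rows whose descriptive data changed.
    INSERT INTO sat_customer_detail (hub_customer_sqn, load_date, record_source,
                                     customer_name, customer_address)
    SELECT
        hc.hub_customer_sqn,                      -- a: resolve the parent hub surrogate
        s.load_date,                              -- d: the new load-date becomes part of the satellite PK
        s.record_source,
        s.customer_name,
        s.customer_address
    FROM stg_customer s
    JOIN hub_customer hc
      ON hc.customer_number = s.customer_number
    LEFT JOIN sat_customer_detail cur             -- b: compare against the most recent satellite row
      ON  cur.hub_customer_sqn = hc.hub_customer_sqn
      AND cur.load_end_date IS NULL               -- assumes the current row carries a NULL load-end-date
    WHERE cur.hub_customer_sqn IS NULL            -- c: brand-new key, or...
       OR cur.customer_name    <> s.customer_name -- ...a changed attribute (NULL-safe compare omitted)
       OR cur.customer_address <> s.customer_address;
    -- e: the inserted keys can additionally be written to a tracking table to drive
    --    end-dating of the old rows and PIT/Bridge maintenance in the next pass.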

3.0 Real-Time – or Tactical Prerequisites
The prerequisites are that:
a: you have the proper technology to get data in, in rapid succession,
b: your data is available in near-real time – whatever the business defines to be the “right-time” feeds, be it 5 minute refreshes, or 5 millisecond refreshes.
c: the business has guaranteed the money to solve the business problem, and can justify the exponential cost increase to getting data fed at lower intervals of latency.
d: your tool to load is MOST LIKELY NOT an ETL tool set, it is either ELT, or EII, or EAI (messaging/queueing).

Please keep in mind that batch ETL tools (even though they sometimes claim they can) cannot do a good job (today – October, 2005) of real-time loading under 1 to 3 minute intervals, moving at least 10,000 rows per cycle. Also keep in mind that moving 1 row per minute is NOT the same as moving 100,000 rows per minute, which is NOT the same as moving 100,000 rows per second or milli-second.
Technical prerequisites:
a: The data model must support whatever latency requirements defined by the business.
b: Snapshot tables refreshed at “regular intervals” must be defined to govern the results of queries, if data must be examined below the regular interval then it must be delivered to the end-user via alerting and thresholding.
c: data arrival must NOT be impeded by data retrieval (table locking, index locking, mart refreshes, snapshot table updates, etc.).
d: Throughput is VITAL for scalability, meaning the LEAST amount of complexity going IN to the vault as possible.
e: Data “decays” or gets old when it’s staged, or landed, in a near-real-time system. Data is considered “DEAD” (from a tactical standpoint) to some degree if it has decayed past the next point of refresh. From a strategic standpoint, it’s still very much alive.
f: True real-time data feeds usually feed a “live” data mining engine in parallel with feeding the Data Vault. The resulting Knowledge is then pumped back into the operational systems for immediate feedback to the end-user. (both f & g are true at the same time).
g: True real-time/right-time systems use a combination of bit flags, and triggers to execute code. The data is “regulated” or applied to a series of pre-built thresholds to see if it breaks the “standards” – then it is promoted to mining to judge the impact.
h: data (usually) CANNOT be staged; it cannot be “set up” to wait or be dependent on another system’s feeds in order to make it into the warehouse / Data Vault. See #e on data decay rates; dead data is useless.

3.1 RT – Staging Definitions (if used)
Staging definitions and staging tables are USUALLY NOT used within a real-time system, unless the refresh time limit is at or above 3 minutes. Staging tables cause data to decay. The longer it “sits” in the staging buckets, the less applicable to the current situation it is. The current situation is the state of the current operational system. IF staging tables are elected for loading, time-limits must be set before another automated routine kicks in and deletes “old” or decayed data that has landed.

In most situations this amounts to a truncate and re-load of the staging tables, a complete wipe of data. In some situations real-time data is allowed to decay for up to 2 days before it is deemed unusable, or dead. BUT just because it’s sitting in a staging table DOESN’T mean it hasn’t been “graded” or put through alternate processing downstream. It may mean that it has been seen, reviewed, and considered a non-impact therefore it is dropped from the insertion stream.
This is very dangerous and doesn’t play well with compliance unless records are kept as to what was deleted, how long it decayed before it was deleted, and when it was deleted. At that point, the record should be able to be used to “reload” an image of that data if necessary.

3.2 RT – Hub Loading
Hub loading – in a real-time system the business keys nearly ALWAYS arrive with the transactions, otherwise the source systems wouldn’t know how to attach the data to their own business processes. If you have a system in real-time that isn’t providing the business keys, that’s a cause for SERIOUS red flags, in which case the sourcing business process needs to be changed or altered before proceeding. In the real-time case, it’s exactly the same cycle as the batch case for hub loading: insert the key if it doesn’t exist.

Please note that architectures that handle millisecond feeds use a millisecond time-stamp hub as their key, and sometimes we multiply the hubs out to avoid contention across the feeds. The millisecond timestamps capture just a time-stamp, from which a “computed surrogate” based on the millisecond can be generated – which means NO lookups for existence, and NO lookups for finding the hub key when inserting the satellites. Computational logic is executed to “create” the surrogate for the satellites. The linking logic in millisecond time-frames can only be done by “lazy processes” that kick up every second to score/grade and associate data by context decisions – usually a data mining algorithm of sorts.
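
As an illustration of the computed-surrogate idea (PostgreSQL-style epoch extraction; the exact function differs per engine, and all names here are hypothetical):

    -- Hypothetical millisecond-feed hub key: the surrogate is computed from the arrival
    -- timestamp itself, so no existence or key lookup is needed when inserting the satellites.
    SELECT CAST(EXTRACT(EPOCH FROM f.arrival_ts) * 1000 AS BIGINT) AS hub_event_sqn,
           f.arrival_ts,
           f.record_source
    FROM rt_feed_buffer f;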

3.3 RT – Link Loading
Link loading in real-time is quite easy if the batch-load LINK option from above is utilized (where the new keys/matched surrogates for the incoming data are kept in a stage level 2 table). In other words, as hubs match keys to business keys, the matches are built into a new staging surrogate table – this will reduce the time it takes to load links. Remember, the LINK usually IS the transaction, which forces a HUB insert (or several), and usually becomes the date-time stamped satellite off the link immediately.

Millisecond feed transactions are kept differently, again with a millisecond hub stamp (or several), and are joined across links via a “lazy” process that runs once a second, or once every few seconds to build the associations – in this case all we’re concerned with is: how fast can we roll data in, and when the data is “dead”, where can we back it up to?
So you might ask the question: The same data architecture, cool – but why two different data loading architectures? Well, if we don’t load the data differently, we can’t match the speed of arrival of information. Sometimes the information arrives so quickly that we have to change the manner in which it’s recorded. In other words, introduce parallelism to the processes, and move some of the complexity out of the direct loading path. Hence the multiple copies of the same hub, and hence the lazy process for linking things together downstream.

Furthermore, in these situations – triggering based on mathematical formulas is quite common, which means the math algorithms compute specific weighting factors and measurements by which the program can immediately determine: is this “fact” out of synch with the rest? Is it a cause for an alert? And then raise the level of the fact as a potential issue, from there a data mining engine can further analyze it across the board and either downgrade it again, or raise it up to the operational system. All of this (usually) happens within 5 minutes of the arrival of the information.

3.4 RT – Satellite Loading
Loading satellites in real-time is a bit of a challenge, unless the “most current” data is easily found and marked. The last thing you want technically is to have to surf through the entire satellite looking for the current row. This becomes a problem in some real-time systems. But remember: satellites are split by type of data and rate of change, which gives us the ability to actually create a “current only” satellite, and again move the data from current to historical with a “slightly lazy process”. At most, the current satellite will have only X records (usually 2 or fewer) for any given key. Again, this is if the system is loading records every second or less.

If the system is loading at one-second or longer intervals, then the data must be indexed properly to detect the “current record” quickly, from there comparing columns and inserting deltas. Keep in mind that the issue of comparing columns in the satellite disappears as the arrival latency gets shorter. In other words, there are some systems in which the data arrives TOO FAST to spend time comparing columns to the previous picture. In these cases, the data is loaded directly to the satellite. A lazy “cleanup” process is then instituted to detect duplicates and remove them within the satellite. This process may run every 5 minutes or less, depending on the context and the impact to the resources.

Finally, a lazy delete or roll-off / backup process is instituted which also helps keep the table short, and running smoothly.

Keep in mind that not all data will need real-time, up-to-the second loads, 80% or more of the system will be consistently above 10 minute refreshes, usually only 5% or less need second by second or millisecond refreshes – this allows us to focus our architecture and design to answer specific needs in specific parts of the model. If you have a full system that requires 1 second or less refreshes, then you’ve got a need to purchase enough horsepower to run all the necessary processing to keep things in synch through parallelism.

3.5 Real-Time and System of Record
There’s a new movement afoot, and the Data Vault is providing the mechanisms for building Operational Data Warehouses (we’ve finally stated these as published facts here on the forum). Any data that arrives in a real-time data stream MUST be treated as System Of Record within the data warehouse. Why? Because we don’t know which system (if any) upstream is actually storing the data itself as an Operational system. Normally SOR is the Operational System upstream, but in the case of real-time feeds – it flips to become a responsibility of the Data Vault.
Real-time data is fed directly in to the Data Vault. If a backup of the data is absolutely necessary, then make it an incremental backup at a snapshot level while using Point-In-Time tables to query and retrieve it. The other option is to “copy it” to a rolling staging table that never truncates; back up the staging table using QUERY logic, then clear the table of the rows which have been backed up. The problem with this? The staging table is in constant flux, and if the data is arriving at millisecond timing rates, the staging table can get very large very quickly – which means the backup gets further and further behind.

Within subsecond loads in a real-time environment, it is best to funnel the transaction direct to the Data Vault – leave your backups within the framework of the Data Vault, but they should be at the DISK BLOCK REPLICATION level. It’s the ONLY way to keep up. This is a DW2.0(tm) specification for on-line, and near-line storage. It also has to do with measuring the temperature of the data sets.

4.0 Tracking and Traceability – System Defined Field Updates
(load dates, load-end-dates, last seen dates, tracking satellites, record sources)

4.1 Loading Load Dates: Every record in a BATCH cycle must contain exactly the same load-date and time UNLESS each record is tied by a load cycle ID. The function is to “tie” the records together as a group, in case something goes wrong with that load, and isn’t discovered until later. This way, the loaded group can be backed out (if necessary) and re-applied – this maintains compliance of the load cycle.
In an RT environment, the load-dates require the time stamp of the individual transaction as it’s loaded to the warehouse. There is no “mechanism” to recover a batch of information for a single period of time, unless the transactions are loaded to the Data Vault and a staging area at the same time. This would mean a regularly scheduled “clean-up” process to decay/delete/roll off old records in the staging area, and a backup process to go with it.

4.2 Load-End Dates: Load End Dates should be 1 second or 1 millisecond BEFORE the next active record, depending on the grain of the RDBMS engine. This provides a temporal view against the data, allows the data set to be “aged” appropriately, and allows queries to execute spatial-temporal analysis (see my postings on spatio-temporal analysis, or the posts regarding temporal indexing). Load End Dates ARE NOT the same as an “end-date” item fed to the warehouse by a source system; please don’t confuse the two. Load-End-Dates are MECHANIZED and computed by the loading paradigm.
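
A sketch of such an end-dating pass (Oracle/PostgreSQL-style UPDATE with a table alias and ANSI-style interval arithmetic; the one-second grain and all names are illustrative, and many teams fold this step into the satellite load itself):

    -- Hypothetical end-dating pass: close each still-open satellite row one second
    -- before the load date of the row that superseded it.
    UPDATE sat_customer_detail old_row
    SET load_end_date = (
        SELECT MIN(new_row.load_date) - INTERVAL '1' SECOND
        FROM sat_customer_detail new_row
        WHERE new_row.hub_customer_sqn = old_row.hub_customer_sqn
          AND new_row.load_date > old_row.load_date
    )
    WHERE old_row.load_end_date IS NULL
      AND EXISTS (
          SELECT 1 FROM sat_customer_detail newer
          WHERE newer.hub_customer_sqn = old_row.hub_customer_sqn
            AND newer.load_date > old_row.load_date
      );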

4.3 Last Seen Dates: Last Seen dates are another type of tracking mechanism, they are generated (again) by the load process, RT or batch. They can be defined two ways, either as the last time we saw the data for the satellite, or the last time we saw the hub key on the feed. Context determines the meaning. If they are IN a Satellite with other attributes – they CAN be “updated” (in the CURRENTLY ACTIVE RECORD) without breaking compliance. Why? They are a SYSTEM managed field, not user modifiable, therefore they are not auditable.
If they are tracking a HUB, they can be in a HUB table – it is preferable, however, to track a history of HUB KEY arrivals by putting this one attribute into its own satellite. So maybe a Satellite with: LAST_SEEN_HUBKEY, and LAST_SEEN_SAT1..SATN data (depending on the satellites you want to track).
This can also be a part of the “tracking Satellite” defined below:

4.4 Tracking Satellite: The tracking satellite is a SYSTEM MANAGED,
SYSTEM GENERATED Satellite. It contains fields that are generated during the load cycles, and can be updated (if no history is desired), or preferably, inserted to track history. It tracks the HISTORY OF ARRIVAL and UPDATES to the other tables like HUB, LINK, and SATELLITES. It provides additional metadata about the processing cycles that the data set actually goes through. It MUST be attached to the HUB or LINK key set in order to be effective.

4.5 Record Sources: I’ve defined record sources many times, the definition just keeps getting better. Record sources are system generated/managed elements, they are metadata about WHERE the data in that record came from. They contain the source system, source application, and sometimes even the source function (if that can be provided). The more granular or specific the record sources, the more we can learn about the operations in our source system, and if they are meeting the business requirements. Alas, this data is not auditable either.
Record Sources may now be a CODE. In codifying record sources we can save space, and use a reference table to retrieve them. This further improves performance of the Data Vault by shrinking the row size. I would recommend an alpha-numeric of 4 characters, this allows many different combinations and high levels of customizations. Place the code/description in a REFERENCE area of the data vault, i.e. in a reference table.
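
A sketch of such a reference table (the 4-character code idea comes from the text above; the column names and types are assumptions):

    -- Hypothetical record-source reference table: the 4-character code keeps Data Vault
    -- rows narrow, while the descriptive detail stays queryable via a simple join.
    CREATE TABLE ref_record_source (
        record_source_code  CHAR(4)      NOT NULL PRIMARY KEY,   -- e.g. 'CRM1'
        source_system       VARCHAR(50)  NOT NULL,
        source_application  VARCHAR(50)  NULL,
        source_function     VARCHAR(50)  NULL,                   -- if the source can provide it
        description         VARCHAR(200) NULL
    );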

5.0 Business Rule Implementations
(Default values, acceptable transformations of data, traceability, compliance issues, aggregations, etc.)

5.1 Default Values

Before applying default values, pick a set of values that will work, and have the end-users sign-off on an SLA (service level agreement) – so that you have a record, and so that when the users see these values they can relate to what they see has been defaulted.

5.2 Generalities and business rules.
There are HARD and SOFT business rules. HARD business rules are those that have a very low propensity to change over time. These rules are the “lines in the sand” that provide consistency of integration on the way in to the hubs themselves. These HARD business rules are assumption based. That said, we are in fact applying a “base-color lens filter” on the way in to the Data Vault. Why? Because we have to start somewhere, and we have to drive the data together at very specific and defined points of interest.

These hard business rules are executed during the load cycle, but because they don’t change very often (again we’re talking HUB integration here) they do not appear to impact the agility of I.T. very much. SOFT business rules on the other hand are all those with a much higher propensity to change. Such as the interpretation of the data set. These will be discussed in the new standard we are defining for QUERYING the data vault.

 

4 Responses

  1. gaborg

    Hi Dan,

    I have a question regarding satellite loading (section 2.4).

    Am I right that I’ll have to create an empty satellite record
    when there is a new HUB record but there are no connecting data in the satellite’s source?
    (based on Modeling Specification Section 8.1 (Avoiding Outer Joins))

    It seems that this is missing from the satellite loading steps.

    If I compare the current satellite to its most current source in the staging
    this “missing” row won’t show up as new or changed.

    I think that in this case I’ll have to do an additional step to insert all missing HUB_ID-s into the satellite without descriptive attributes, too.

    Is it correct or I missed something?

    TIA,

    Gabor

  2. dlinstedt

    Hi Gabor,

    The standards section is a guideline. I try to publish best practice rules for people to follow. You are correct in one manner of speaking: If you want to avoid outer-joins, and you don’t want to introduce EXTRA tables, then yes, the only way is to “go back” and insert 1 empty Satellite row in each Satellite for every Hub key that doesn’t have any Sat rows. This only has to happen 1 time for each key.

    On the other hand, if you don’t mind adding a new table structure, you can read about Point-In-Time tables to help with this.

    You are correct, you have not missed anything. Otherwise, you are stuck with outer-joins, OR splitting the loading process across inserts and updates, which means EVERYTHING is caught perfectly.

    Cheers,
    Dan L

  3. [...] Some acceptable minor changes and default values (article 2.1.3 and 2.1.4 of the Data Vault loading specification) [...]

  4. rob mol

    Hello Dan,
    We are working for a client and discussing how to process the deletes from a batch with change data capture.
    Setting the load end date for a deleted record has the disadvantage that we do the same in the process of updating. So we cannot recognize from the record itself whether a record is deleted or is an old version of an updated record.
    This is especially a burden when propagating the updates and deletes to the next layer (to the bDv and the datamarts).
    What is your advice?
    Greetz, Rob
