DataVaultAlliance Key Article

DV2 Sequences, Hash Keys, Business Keys – Candid Look

Primary Key Options for Data Vault 2.0

This entry is a candid look (a technical, unbiased view) at the three alternative primary key options in a Data Vault 2.0 Model.  There are pros and cons to each selection.  I hope you enjoy this factual entry.

(C) Copyright 2018 Dan Linstedt all rights reserved, NO reprints allowed without express written permission from Dan Linstedt

There are three main alternatives for selecting primary key values in a Data Vault 2.0 model:

  • Sequence Numbers
  • Hash Keys
  • Business Keys

Sequence Numbers

Sequence numbers have been around since the beginning of machines.  They are system-generated, unique numeric values that are incremental (sequential) in nature.  Sequence numbers have the following issues:

  • Upper limit (the size of the numeric field for non-decimal values)
  • Introduce a process issue when utilizing Sequences during load, because they require any child entity to look up its corresponding parent record to inherit the parent value
  • Hold no business meaning

The most critical of the issues above is the negative performance impact associated with lookup or join processes, particularly in heterogeneous environments or in environments where data is legally not allowed to “live” or be replicated onto other environments (geographically split, or an on-premise and in-cloud mix).   This process issue is exacerbated during high-speed IoT or real-time feeds.  Consider what happens in an IoT or real-time feed when data flows quickly to billions of child records, and each record must then wait on a sequence “lookup” (one record at a time); the real-time stream may back up.

Lookups also cause “pre-caching” problems under volume loads.  For example, suppose the parent table is Invoice and the child table is Order.  If the Invoice table has 500 million records, and the Order table has 5 billion records, and each order has at least one matching parent row (most likely more) – then each record that flows into Order must “lookup” at least one invoice.  This lookup process will happen 5 billion times, once for each child record.
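
As an illustration of this row-by-row dependency, here is a minimal sketch in Python using the standard-library sqlite3 module; the table names, column names, and sample values are hypothetical, not taken from any specific implementation.

    import sqlite3

    # Illustrative parent (Invoice) and child (Order) tables keyed by a surrogate sequence.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE invoice (invoice_seq INTEGER PRIMARY KEY, invoice_number TEXT UNIQUE)")
    con.execute("CREATE TABLE orders (order_seq INTEGER PRIMARY KEY, invoice_seq INTEGER, order_number TEXT)")
    con.execute("INSERT INTO invoice (invoice_number) VALUES ('INV-1001'), ('INV-1002')")

    staged_orders = [("ORD-1", "INV-1001"), ("ORD-2", "INV-1001"), ("ORD-3", "INV-1002")]

    for order_number, invoice_number in staged_orders:
        # Each child row must first look up its parent's sequence value --
        # the per-row dependency that serializes the load.
        (invoice_seq,) = con.execute(
            "SELECT invoice_seq FROM invoice WHERE invoice_number = ?", (invoice_number,)
        ).fetchone()
        con.execute(
            "INSERT INTO orders (invoice_seq, order_number) VALUES (?, ?)",
            (invoice_seq, order_number),
        )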

It doesn’t matter if the technology is an ETL engine, real-time process engine, or SQL data management enabled engine.  This process must happen to avoid any potential orphan records.  If referential integrity is shut off, the load process can run in parallel for both tables.  However, to populate the “parent sequence”, it must still be “searched / looked up” on a row-by-row basis.  Adding parallelism and partitioning will help with the performance, but eventually it will hit an upper-limit bottleneck.

In an MPP environment (MPP storage), the data will be redistributed to allow the join to occur, and it is not just the sequence that has to be shipped – it’s the sequence PLUS the entire business key that it is tied to.  In an MPP engine with non-MPP storage (like Snowflake DB), the data doesn’t have to be shipped, but the lookup process still must happen.

This act of a single-strung, one-record-at-a-time lookup can tremendously (and negatively) impact load performance.  In large-scale solutions (think of 1,000 “tables” or data sets, each with 1 Billion records or more), this performance problem is dramatically increased (load times are dramatically increased).

What if there is more than one child table?  What if the data model design has parent->child->child->child tables, or relationships that are multiple levels deep?  Then the problem escalates, as the length of the load cycles escalates exponentially.

To be fair, let’s now address some of the positive notions of utilizing Sequence Numbers.  Sequence numbers have the following positive impacts once established:

  • Small byte size (generally less than NUMBER(38), i.e. 38 “9”s, roughly 10^38)
  • Process Benefit: Joins across tables can leverage small byte size comparisons
  • Process Benefit: Joins can leverage numeric comparisons (faster than character or binary comparisons)
  • Always unique for each new record inserted
  • Some engines can further partition (group) the numerical sequences in ascending order and leverage sub-partition (micro-partition) pruning through range selection during the join process (in parallel).

Hash Keys

What is a hash key?  A hash key is a business key (possibly made of composite fields) run through a computational function called a hash, then assigned as the primary key of the table.  Hash functions are deterministic: given the same input X, the hash function will produce the same output Y every single time.  Definitions of hash functions, what they are and how they work, can be found on Wikipedia.
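
As a concrete (hypothetical) illustration, the sketch below computes a deterministic hash key from a business key in Python with hashlib; the delimiter and the trim/upper-case normalization are assumptions for the example, not a prescribed standard.

    import hashlib

    def hash_key(*business_key_parts: str, delimiter: str = "||") -> str:
        # Normalize (trim + upper-case) so the same logical key always hashes identically.
        normalized = delimiter.join(p.strip().upper() for p in business_key_parts)
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    # Deterministic: the same input always produces the same output.
    assert hash_key("inv-1001 ") == hash_key("INV-1001")
    print(hash_key("INV-1001"))  # 64-character hexadecimal string (SHA-256)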

Hash Key benefits to any data model:

  • 100% parallel independent load processes (as long as Referential Integrity is shut off) even if these load processes are split on multiple platforms or multiple locations
  • Lazy Joins – that is, the ability to join across multiple platforms utilizing technology like Drill (or something similar) – even without Referential Integrity. Note: lazy joins on sequences can’t be accomplished across heterogeneous platform environments. Sequences aren’t even supported in some NoSQL engines.
  • Single field primary key attribute (same benefit here as the sequence numbering solution)
  • Deterministic – it can even be pre-computed on the source systems or at the edge for IOT devices / edge computing.
  • Can represent unstructured and multi-structured data sets – based on specific input, hash keys can be calculated again and again (in parallel). In other words, a hash key can be constructed as a business key for audio, images, video and documents. This is something sequences cannot do in a deterministic fashion.
  • If there is a desire to build a smart hash function then meaning can be assigned to bits of the hash (similar to Teradata – and what it computes for the underlying storage and data access).

Hash keys are important to Data Vault 2.0 because of the efforts to connect heterogeneous data environments such as Hadoop and Oracle.  Hash keys are also important because they remove dependencies when “loading” the Data Vault 2.0 structures.  A hash key can be computed value by value.  The “Parent” key can also be computed, and the computation can be repeated for as many parent keys as there are values.  There is no lookup dependency and no need to pre-cache, use the temp area, or anything else to calculate each parent value during load processing.
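
A minimal sketch of that point: the parent Hub key and the Link key are simply re-computed from the staged business keys, with no lookup against the Hub.  The hash_key() helper, row layout, and column names are hypothetical.

    import hashlib

    def hash_key(*parts: str) -> str:
        # Hypothetical helper: deterministic hash of a (possibly composite) business key.
        return hashlib.sha256("||".join(p.strip().upper() for p in parts).encode("utf-8")).hexdigest()

    staged_order_rows = [
        {"order_number": "ORD-1", "invoice_number": "INV-1001"},
        {"order_number": "ORD-2", "invoice_number": "INV-1002"},
    ]

    link_rows = [
        {
            "hub_invoice_hash_key": hash_key(row["invoice_number"]),  # parent key: computed, not looked up
            "hub_order_hash_key": hash_key(row["order_number"]),
            "link_hash_key": hash_key(row["invoice_number"], row["order_number"]),
        }
        for row in staged_order_rows
    ]
    # Every row is independent, so the work can be split across processes or platforms.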

Big Data system loads are nearly impossible to scale properly with sequence numbering dependencies in place.  Sequences (whether they are in DV1 or dimensional models or any other data model) force the parent to be loaded, then the child structures.  These dependencies on “parent first – then lookup parent value” cause a sequential row by row operation during the load cycles, thereby inhibiting the scale out possibilities that parallelism offers.

This type of dependency not only slows the loading process down, but also kills any potential for parallelism – even with referential integrity shut off.  Furthermore, it places a dependency into the loading stream in heterogeneous environments.  For instance, when loading Satellite data into Hadoop (perhaps a JSON document), the loading stream requires a look up for the sequence number from the Hub that may exist in a relational database.  This dependency alone defeats the entire purpose of having a system like Hadoop in the first place.

Hash keys do have their issues:

  • Length of the resulting computed value – the storage required for a hash is greater than that of a sequence
  • Possible collision (the probability of collision depends on the hashing function chosen for utilization)

The first issue leads to slower SQL joins and slower queries.  This is because it takes longer to “match” or compare longer-length fields than it does to compare numerics.  Hashes (in Oracle and SQLServer) are typically stored in fixed binary form (yes, this works as a primary key).  Hashes in Hive or other Hadoop-based technologies, and in some other relational engines, must be stored as fixed-length character fields.   For example, an MD5 hash result is BINARY(16), which becomes a CHAR(32) fixed-length hexadecimal-encoded string.
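
A quick Python illustration of those storage sizes (SHA-256 added for comparison); the input value is arbitrary.

    import hashlib

    digest = hashlib.md5(b"INV-1001").digest()
    print(len(digest))        # 16 bytes -> BINARY(16)
    print(len(digest.hex()))  # 32 hex characters -> CHAR(32)

    sha = hashlib.sha256(b"INV-1001")
    print(len(sha.digest()), len(sha.hexdigest()))  # 32 bytes, 64 hex characters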

The flip side of using a hash is its unlimited scalability in parallel loading.  All data can be loaded in complete parallel all the time across multiple platforms (even those that are geographically split or split on-premise and in-cloud).  Hash keys (or business keys) are part of the success of Data Vault 2.0 in a Big Data and NoSQL world.  Hashing is optional in DV2.  There are a variety of hashing algorithms available for use, including:

  • MD5 (deprecated circa 2018)
  • SHA 0, 1, 2, 3 – SHA1 (deprecated circa 2018)
  • Perfect Hashes
  • And more…

The Hash is based on the business keys that arrive in the staging areas.  All lookup dependencies are hence removed, and the entire system can load in parallel across heterogeneous environments.  The data set in the model can now be spread across MPP environments by selecting the hash value as the distribution key.  This provides a mostly random, mostly even distribution across the MPP nodes when the Hash Key is the MPP bucket distribution key.

“When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared test.”  https://en.wikipedia.org/wiki/Hash_function  – Note: there are those out there who claim not to see the average random, even distribution; that just means they don’t understand the mathematics of the distribution of hash values or how to apply it.

Luckily the hash functions are already designed, and the designers have taken this bit of distribution mathematics into account.  The hashing function chosen (if hashing is to be utilized) is at the discretion of the design team.  Circa 2018, teams have typically chosen SHA-256.
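
As an illustration of the chi-squared check quoted above, here is a minimal Python sketch that buckets SHA-256 hash values and tests the bucket counts for uniformity.  It assumes SciPy is available; the key values and bucket count are made up.

    import hashlib
    from scipy.stats import chisquare

    num_buckets = 64
    keys = [f"CUST-{i}" for i in range(100_000)]

    counts = [0] * num_buckets
    for key in keys:
        digest = hashlib.sha256(key.encode("utf-8")).digest()
        counts[int.from_bytes(digest[:8], "big") % num_buckets] += 1

    # Under a uniform distribution, the expected count per bucket is len(keys) / num_buckets.
    stat, p_value = chisquare(counts)
    print(stat, p_value)  # a large p-value is consistent with an even spread across buckets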

One of the items discussed is that the longer the hash output (in number of bits), the lower the probability of a potential collision.  This is something to take into consideration, especially if the data sets are large (big data; for example, 1 Billion records on input per load cycle per table).

If a Hash Key is chosen for implementation, then a Hash Collision Strategy must also be designed.  This is the responsibility of the team.  There are several options available for addressing Hash Collisions.  One of the recommended strategies is reverse hash.
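
For sizing that risk, below is a minimal Python sketch of the standard “birthday” approximation for collision probability; the one-billion-record figure mirrors the example above and is purely illustrative.

    import math

    def collision_probability(n_keys: int, hash_bits: int) -> float:
        # Birthday approximation: p ~= 1 - exp(-n^2 / 2^(bits+1)) for n keys and a given hash width.
        return -math.expm1(-(n_keys ** 2) / (2.0 ** (hash_bits + 1)))

    for bits in (128, 160, 256):  # MD5, SHA-1, SHA-256 output widths
        print(bits, collision_probability(1_000_000_000, bits))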

This applies just to the Data Vault 2.0 model, which acts as the enterprise warehouse.  It is still possible (and even advisable) to utilize or leverage sequence numbers in persisted information marts (data marts) downstream to achieve the fastest possible joins within a homogeneous environment.

The largest benefit isn’t from the modeling side of the house; it’s from the loading and querying perspectives.  For loading, it releases the dependencies and allows loads to Hadoop and other NoSQL environments in parallel with loads to RDBMS systems.  For querying, it allows “late-join” or run-time binding of data across JDBC and ODBC connectivity between Hadoop, NoSQL, and RDBMS engines on demand.  It is not suggested that it will be fast, but rather that it can be easily accomplished.

Deeper analysis of this subject is covered in Data Vault 2.0 Boot Camp training courses and in Data Vault 2.0 published materials.  It is beyond the scope of this article to dive deeper in to this subject.

Business Keys

Business keys have been around for as long as there has been data in operational applications.  Business keys should be smart or intelligent keys and should be mapped to business concepts.   That said, most business keys today are source system surrogate IDs, and they exhibit the same problems as the sequences mentioned above.

A smart or intelligent key is generally defined as a sum of components, where digits or pieces of a single field carry meaning to the business.  At Lockheed Martin, for example, a part number consisted of several pieces (it was a super-key of sorts).  The part key included the make, model, revision, year, and so on of the part – much like a VIN (vehicle identification number) found on automobiles today.
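
A minimal Python sketch of such a smart key, loosely modeled on the part-number example; the segment names, order, values, and delimiter are invented purely for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PartKey:
        make: str       # e.g. manufacturer code
        model: str
        revision: str
        year: int

        def to_key(self) -> str:
            # A fixed segment order and delimiter make the key deterministic and parseable.
            return f"{self.make}-{self.model}-{self.revision}-{self.year}"

        @classmethod
        def from_key(cls, key: str) -> "PartKey":
            make, model, revision, year = key.split("-")
            return cls(make, model, revision, int(year))

    key = PartKey("ACM", "MDL7", "B", 2018).to_key()
    print(key)                    # ACM-MDL7-B-2018
    print(PartKey.from_key(key))  # round-trips back into its business-meaningful components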

The benefits of a smart or intelligent key stretch far beyond the simple surrogate or sequence business key.  These business keys usually exhibit the following positive behavior at the business level:

  • They hold the same value for the life of the data set
  • They do not change when the data is transferred between and across business OLTP applications
  • They are not editable by business (most of the time) in the source system application
  • They can be considered Master Data Keys
  • They cross business processes and provide ultimate data traceability
  • Largest benefit: can allow parallel loading (like hashes), and also work as keys for geographically distributed data sets – without needing re-computation or lookups.

They do have the following downfalls:

  • Length – generally smart business keys can be longer than 40 characters
  • Meaning over time – the base definition can change every 5 to 15 years or so (just look at how the VIN has evolved over the last 100 years)
  • Sometimes source applications CAN change the business keys, which wreaks havoc on any of the analytics that need to be done
  • They can be multi-field / multi-attribute
  • They can be “not unique or specific enough” to uniquely identify data

If given the choice between Surrogate Sequences, Hashes or Natural Business Keys – Natural business keys would be the preference.  The original definition (even today) states that a Hub is defined as a unique list of business keys.  The preference is to use natural business keys that have meaning to the business.

One of the functions of a properly built raw data vault 2.0 model is to provide traceability across the lines of business.  To do this, the business keys must be stored in the Hub structures according to a set of design standards.

Most of the business keys in the source system today are surrogate sequence numbers defined by the source application.  The world is full of these “dumb” machine generated numeric values.  Examples include: Customer Number, Account Number, Invoice Number, Order Number, and the list goes on.

Source System Sequence Business Keys

Source system sequence-driven business keys make up 98% of the source data that any data warehouse or analytics system receives.  Even transaction IDs, e-mail IDs, and some of the unstructured data sets, such as document IDs, contain surrogates.  The theory is that these sequences should never change and should represent the same data once established and assigned.

That said, the largest problem that exists in the operational systems is one the analytics solution is always asked to solve.  That is, how to integrate (or master) the data set, to combine it across business processes and make sense of the data that has been assigned multiple sequence business keys throughout the business lifecycle.

An example of this may be customer account.  Customer account in SAP may mean the same thing as customer account in Oracle Financials or some other CRM or ERP solution.  Generally, when the data is passed from SAP to Oracle Financials, the receiving OLTP application assigns a new “business key” or surrogate sequence ID.  It’s still the same customer account; however, the same representative data set now has a new key.

The issue becomes: how do you put the records back together again?  This is a Master Data Management (MDM) question, and with an MDM solution in place (including good governance and good people) it can be solved or approximated with deep learning and neural networks.  Even statistical analysis of “similar attributes” can detect, within a margin of error, the multiple records that “should” be the same but contain different keys.

This business problem perpetuates into the data warehouse and analytics solution typically because no master data management solution has been implemented up-stream of the data warehouse.  Therefore, to put together what appears to be “one version of the customer record” and not double or triple count, algorithms are applied to bridge the keys together.

In the Data Vault landscape, we call this a hierarchical or same-as link: hierarchical if it represents a multi-level hierarchy, and same-as if it is a single-level (parent-to-child) re-map of terms.

Placing these sequence numbers as business keys in Hubs has the following issues:

  • They are meaningless – a human cannot determine what the key stands for (contextually) without examining the details for a moment in time
  • They can change – often they do, even with something as “simple” as a source system upgrade – this results in a serious loss of traceability to the historical artifacts. Without an “old-key” to “new-key” map, there is no definitive traceability.
  • They can collide. Even though conceptually across the business there is one element called “Customer Account”, the same ID sequence may be assigned in different instances for different customer accounts. In this case they should never be combined.  An example of this would be two different implementations of SAP; one in Japan and one in Canada.  Each assigns customer ID #1, however, in Japan’s system, #1 represents “Joe Johnson” whereas in Canada’s system, #1 represents “Margarite Smith”.  The last thing you want in analytics is to “combine” these two records for reporting just because they have the same surrogate ID.

An additional question arises if the choice is made to utilize Data Vault Sequence Numbers for Hubs and the source system business keys are surrogates.  The question is, why “re-key” or “re-number” the original business key?  Why not just use the original business key?  (Which by the way is how the original hub is defined).

To stop the collision (as put forward in the example above) – whether a surrogate sequence, a hash key, or the source business key is chosen for the Hub structure – another element must be added.  This secondary element ensures uniqueness of this surrogate business key.  One of the best practices here is to assign Geography Codes.  For example, JAP for any Customer Account IDs that originate from Japan’s SAP instance, and CAN for any Customer Account IDs that originate from Canada’s SAP instance.
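
A minimal Python sketch of that practice, reusing the hypothetical hash_key() helper from the earlier sketches; the geography codes and the surrogate ID #1 come from the example above, while the column names are assumptions.

    import hashlib

    def hash_key(*parts: str) -> str:
        # Hypothetical helper: deterministic hash of a (possibly composite) business key.
        return hashlib.sha256("||".join(p.strip().upper() for p in parts).encode("utf-8")).hexdigest()

    japan_row = {"geo_code": "JAP", "customer_account_id": "1"}
    canada_row = {"geo_code": "CAN", "customer_account_id": "1"}

    hub_key_japan = hash_key(japan_row["geo_code"], japan_row["customer_account_id"])
    hub_key_canada = hash_key(canada_row["geo_code"], canada_row["customer_account_id"])

    # Same surrogate ID (#1) in both source instances, but the geography code keeps the Hub keys distinct.
    assert hub_key_japan != hub_key_canada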

Multi-Part Source Business Keys

Using a geographic code, as mentioned above, brings up another issue.  If the hub is created based solely on Source System Business key (and not surrogate sequence or hash key), then with the choice above (to add a Geography Code Split) the model must be designed and built with a multi-part business key.

The issue with a multi-part business key is the performance of a join.  There are multiple mathematical tests and quantitative results that show, time and time again, that multi-field join criteria are slower than single-field join criteria.  The slowdown only becomes noticeable in large-volume or big data solutions.  At this point, perhaps, a Hash Key or surrogate sequence in the Data Vault may be faster than a multi-field join because it reduces the join back to a single field value.

Another alternative is to concatenate the multi-field values together thus forming somewhat of an intelligent key, either with or without delimiters.  This would depend on how the business wishes to define a set standard for concatenating the multi-field values (i.e., the rules needed – just like the rules needed to define a smart key).

The last thing to watch when choosing a multi-part business key is the length of the combined or concatenated field.  If the length of the concatenated fields is longer than the length of a hash result or surrogate sequence ID, then the join will execute slower than a join on a shorter field.  As a reminder, these differences in performance usually can only be seen in large data sets (500 M or 1 Billion records or more).  The hardware has advanced and will continue to advance so much so that small data sets exhibit good performance.  There is simply not enough of a difference in a small dataset to make an informed decision about the choice of the “primary key” for the Hubs.

The suggestion ultimately is to re-key the source data solutions, add a smart or intelligent key “up-front” that can carry the data across instances, across business processes, across upgrades, through master data, across hybrid environments, and never change.  Doing this would centralize and ease the pain and cost of “master data” and would lead to easier use of a virtualization engine.  It may not require complex analytics, neural nets, or machine learning algorithms to tie the data sets back together later.

In fact, according to one estimate, it costs the business 7x the money to “fix” this re-keying problem in the warehouse instead of addressing it in the source applications.  Fixing the problem in the data warehouse is one form of technical debt. (Quote and metrics paraphrased from Nols Ebersohn.)

If the source system cannot be re-keyed, or the source system cannot add an “intelligent” or “smart” key (which is a contextual key), the recommendation is to implement master data management upstream.  If MDM cannot be implemented, the next recommendation is to leverage the source system business keys (unless there are composite business keys) – in which case, a Hash is the base-level default recommendation.
