there’s been some discussion, nay, argument of late around whether to replace surrogates in dv2 models with hashes, or to simply use natural keys… ok, perhaps natural keys is too strong a term; business keys may be the softer phrase (as frequently found, source system models these days usually contain surrogates acting as business keys, and natural keys are nowhere to be found). this blog entry (albeit short) will explain the pros and cons of each. i welcome you to add your thoughts to this post (on the linkedin “data vault discussions” group).
let’s take a look at how we define each.
natural keys: data elements or attributes (which may or may not be composite) that, when viewed by the human intellect, appear to contain some form of descriptive metadata, enabling one to decipher the true meaning or representation without “adding” additional context (i.e.: without looking up additional information to figure out what it means). think of a vehicle identification number or an email address.
business keys: any “unique identifier” presented to the business user for the purpose of uniquely finding, locating, and understanding data sets. it *may* be a sequence number, or it *may* be a natural key. frequently these days, business keys (as bad as it sounds) are sequences provided, generated, and maintained by a single source system application.
one part of the data vault 2.0 standard requires changing from surrogate sequences in the data vault model over to hash keys as the primary keys for hubs and links. the question is: why not simply use the business keys or the natural keys? why go to the trouble of hashing them to begin with?
let’s examine this a little deeper:
#1) what is driving the need to move off sequences to begin with?
- sequences cause bottlenecks in big data solutions.
- sequences require dependencies in loading cycles, slowing down real-time feeds, regardless of whether or not referential integrity is turned on in the database physical layers.
the bottlenecks apply to a hybrid system, one that uses hadoop (or nosql) for staging and deep analytics, and they stem from the dependency chain in the second bullet above. the load to “hadoop” or nosql requires a lookup on the hub or link sequence in the relational system before it can “insert” or attach its data. sure, you can copy the file into hadoop, but the “sequence” has to be attached before the data can be joined back to the relational system. this defeats the purpose of having a hadoop or nosql platform to begin with. the simple fact is, the load has to “look up” one or more sequences from the relational database. what a pain!
in a fully relational system, larger loads require “longer” cycles. why? because the hubs must be inserted before the links can use their sequences, and both must be inserted before the satellites can load. why? because the satellites depend on the sequences generated by the parent tables. this happens even with referential integrity shut off at the database level.
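to make that dependency chain concrete, here’s a minimal sketch (python with sqlite; the table and column names are hypothetical, not the dv2 loading templates) of why a satellite insert has to wait on its parent hub’s sequence:

```python
import sqlite3

# hypothetical hub + satellite, keyed by a database-generated sequence
conn = sqlite3.connect(":memory:")
conn.execute("create table hub_customer (seq integer primary key autoincrement, bk text unique)")
conn.execute("create table sat_customer (hub_seq integer, payload text)")

def load_hub(bk: str) -> int:
    # step 1: the hub loads first, because it generates the sequence
    conn.execute("insert or ignore into hub_customer (bk) values (?)", (bk,))
    return conn.execute("select seq from hub_customer where bk = ?", (bk,)).fetchone()[0]

def load_satellite(bk: str, payload: str) -> None:
    # step 2: the satellite must look up the parent sequence before it can
    # insert -- this lookup is what serializes the load, with or without
    # referential integrity enabled
    conn.execute("insert into sat_customer values (?, ?)", (load_hub(bk), payload))

load_satellite("ACME-0042", "name=acme corp")
```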
neither of these situations is sustainable, nor even advisable, in big data or high velocity systems, forcing the data architects to ultimately “re-engineer” the system for growth or arrival speed. oh, and there’s one more issue: mpp data distribution. if data is distributed by “ranges” of sequences, then there is a chance of forcing a hot spot in an mpp environment. that’s nothing anyone wants, and it again defeats the purpose of having mpp in the first place.
hashing is already in use (behind the scenes) by many database engines in the mpp world (both relational and non-relational mpp systems) to distribute data across “buckets”, preferably reaching a roughly equal distribution in order to avoid the hot-spot problem. that, my friends, still leaves us with the bottleneck and dependency issues to deal with…
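as a rough illustration (the bucket count and keys here are made up), hash-based distribution looks something like this: hash the key, take it modulo the bucket count, and even sequential keys scatter evenly instead of piling into one range partition:

```python
import hashlib

def bucket_for(business_key: str, num_buckets: int) -> int:
    # a good hash spreads arbitrary keys evenly across buckets,
    # avoiding the hot spots that range-based distribution can create
    digest = hashlib.md5(business_key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % num_buckets

# adjacent sequence values would all land in the same range partition;
# hashed, they scatter across the buckets
for key in ("1000001", "1000002", "1000003"):
    print(key, "->", bucket_for(key, 16))
```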
enter data vault 2.0
data vault 2.0 modeling (the modeling component isn’t the only thing that has changed for data vault 2.0) provides a standard that states: replace all sequence numbers with the results of a hash function. it then suggests several hash functions to choose from: md5, md6, sha-1, sha-2, etc… the standards document / 5 page white paper i’ve made available (to my students only on datavaultalliance.com) describes the best practice for implementing it properly, cross-platform.
the result of the suggested hash, md5, is 128 bits; in reality, it’s two separate big ints side by side (if you will). unfortunately, in order to handle this kind of data type natively, the operating system and/or the database engine would need to declare and provide a numeric capable of handling 128 bits in length.
because that is simply not feasible today, we convert the result of the hash from its binary representation into a char(32) (the 128 bits rendered as an ascii hex string). again, all of this is covered for the students on datavaultalliance.com, those who buy a class, or buy the super charge book pdf from the site.
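as a rough sketch of that conversion (not the standards document’s template; the delimiter and upper-casing here are my assumptions), computing a char(32) hub hash key might look like:

```python
import hashlib

def hash_key(*business_key_parts: str) -> str:
    # join the (normalized) business key parts with a delimiter, md5 the
    # result; hexdigest() renders the 128-bit binary value as a
    # 32-character ascii hex string -- the char(32) column value
    normalized = ";".join(part.strip().upper() for part in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest().upper()

print(hash_key("ACME-0042"))   # always exactly 32 hex characters
```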
back to the point and the comparison….
hashing instead of sequencing means that we can load all hubs, all links, and all satellites in complete, 100% parallel operations, and enrich all hadoop-based or nosql documents in parallel at the same time (without dependencies), providing of course that the proper hash values have been “attached” to the staging data. it also allows us to join across multiple heterogeneous platforms (from teradata to hadoop to ibm db2 bcu to sql server to couchbase to mysql and more). (there is more to this discussion, about collisions and so on, that is beyond the scope of this article at this time.)
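a minimal sketch of what that buys us (the loader and the staged rows are placeholders, not a real loading template): because every staged row already carries its hash key, no table waits on another’s generated sequence, so all targets can load at once:

```python
from concurrent.futures import ThreadPoolExecutor

def load_table(table: str, rows: list) -> str:
    # placeholder loader: in practice, a bulk insert into the target
    # platform (relational, hadoop, nosql, ...)
    return f"{table}: {len(rows)} rows loaded"

# staged rows already carry their hash keys, so hubs, links, and
# satellites have no ordering dependency between them
staged = {
    "hub_customer":        [{"hk": "6b1f...", "bk": "ACME-0042"}],
    "link_order_customer": [{"hk": "9c2e...", "hub_hks": ["6b1f...", "a3d4..."]}],
    "sat_customer":        [{"hk": "6b1f...", "payload": {"name": "acme corp"}}],
}

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(load_table, t, r) for t, r in staged.items()]
    for f in futures:
        print(f.result())   # surfaces any load errors, order-independent
```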
back to the original question…
so why not simply use “business keys” or “natural keys”? why hash at all?
- not all database engines (relational or non-relational) have the capability or capacity to use natural or business keys for data distribution
- not all database engines (relational or non-relational) have the capability to execute efficient joins on natural or business keys
- and the biggest reason of all: in a data vault model we leverage a many-to-many relationship table called a link. it is made up of multiple keys (from different hubs). joining all of these keys together would mean replicating the business keys into the link, resulting (in most cases) in a variable-length, multi-part, multi-datatype key set, which would ultimately perform slower than a concisely measured, fixed-length field (see the sketch below). for satellites, it means replicating the business keys to each of the satellites as well.
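here’s a small sketch of that last point (the three-part key is hypothetical): the replicated composite varies in width and datatype per row, while the hash is always one fixed char(32):

```python
import hashlib

def link_hash_key(*hub_business_keys: str) -> str:
    # a link row is identified by the combination of its parent hubs'
    # business keys; hashing the delimited combination yields one
    # fixed-width char(32) key in place of the multi-part composite
    combined = ";".join(k.strip().upper() for k in hub_business_keys)
    return hashlib.md5(combined.encode("utf-8")).hexdigest().upper()

# a hypothetical three-hub link: customer + order + date
composite = ("ACME-0042", "ORD-2014-0815", "2014-08-15")
print(sum(len(k.encode("utf-16-le")) for k in composite))  # unicode width varies per row
print(len(link_hash_key(*composite)))                      # always 32
```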
another statistic: 80% of the “business keys” are variable length character strings (most in unicode these days, making them twice as long).
the reality of it?
business keys that average longer than 32 characters will perform slower, over the life of large data sets, than 32 byte hex representations. of course, the 32 byte ascii hash hex strings will perform “slightly” slower than the sequences (which are smaller yet), but they more than make up the query performance difference by resolving the other issues mentioned above.
hashes, like it or not, serve a purpose. natural keys and business keys, as good as they are, cause additional join issues (like case sensitivity, code-set designation, and so on). believe me, solving a heterogeneous join (from sql server to hadoop, for instance) while dealing with two different code pages can cause problems that jdbc won’t solve.
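this is why normalizing before hashing matters; a small sketch (the trim / upper-case / utf-8 rules here are assumptions, the point is that every platform must apply identical ones):

```python
import hashlib

def normalize(key: str) -> bytes:
    # same trimming, same casing, same encoding on every platform --
    # otherwise 'Acme' and 'ACME' (or latin-1 vs utf-8 bytes) hash
    # to different values and the cross-system join silently fails
    return key.strip().upper().encode("utf-8")

a = hashlib.md5(normalize("  acme corp ")).hexdigest()
b = hashlib.md5(normalize("ACME CORP")).hexdigest()
assert a == b   # identical hash keys, joinable across platforms
```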
when to use natural or business key joins instead of hashes?
when you are embedded on a single system (like teradata or greenplum, for instance) for your entire data warehouse solution, and don’t have “cross-technology” or heterogeneous joins to accomplish.
when to use hashes instead of natural or business keys?
when you need to join heterogeneous environments, or resolve unicode and case sensitivity issues, or when the keys exceed 32 characters (non-unicode) in length.
in the end, it’s your call if you want to “replicate” business keys / natural keys into link structures and do away with hashes altogether. that would certainly solve the problem, but if you go this route, please do not return to using sequences. oh, and be aware of the issues mentioned above.
hope you’ve enjoyed this entry, and i’m looking forward to your feedback.