
Data Vault 2.0 and Hash Keys, Again?

Once again, unfortunately, people in the market have chosen to lambast Data Vault 2.0 for its use of Hash Keys.  I will now attempt (again) to clear the air on this issue.

What is a Hash?

Learn, read, understand – you can search Google for this information; many blogs, mathematicians, and even Wikipedia define it for you.

What is a Hash Key?

Simply put?  A surrogate key – consistently generated and assigned for a given combination of business key field(s), with the expectation that it is mostly unique (a sketch of how one can be generated follows the list below).  In Data Vault 2.0 we replaced sequence numbers with Hash Keys.  Why?

  • Sequence numbers break under volume, forcing re-design and re-engineering of the architecture
  • Sequence numbers don’t allow cross-system joins without “looking up the value and copying it across to other systems”
  • Sequence numbers bottleneck on high-speed and high-volume systems (see point #1 above).
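
To make that definition concrete, here is a minimal sketch of how a Hash Key could be generated.  The MD5 choice, the upper-case/trim normalization rule, and the pipe delimiter are purely illustrative assumptions, not a mandated standard:

```python
import hashlib

def hash_key(*business_key_fields: str, delimiter: str = "|") -> str:
    """Illustrative Hash Key: normalize each business key field, join them
    in a fixed order, then hash the result (MD5 chosen only as an example)."""
    normalized = delimiter.join(f.strip().upper() for f in business_key_fields)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Any system holding the same business key computes the same key, with no
# sequence generator and no lookup.
print(hash_key("CUST-00042"))                  # single-field business key
print(hash_key("CUST-00042", "STORE-EU-01"))   # multi-field business key
```

That deterministic computation is what later removes the parent-child lookup from the load process.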

Wait, I thought Primary Keys should ALWAYS be Unique?

Yes, they should.  However, hashes have mathematical properties that render them only mostly unique when used as primary keys.  This introduces the possibility of something called a collision.  If you don’t understand what a mathematical hash collision is, you can read more about it here.
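
To put a rough number on “mostly unique”, the birthday approximation below estimates the collision probability for an n-bit hash over k keys.  The key counts are illustrative assumptions, not measurements:

```python
import math

def collision_probability(num_keys: int, hash_bits: int) -> float:
    """Birthday approximation: P(collision) ~= 1 - exp(-k^2 / 2^(n+1))."""
    return -math.expm1(-(num_keys ** 2) / 2 ** (hash_bits + 1))

# Illustrative key counts for a single table of hashed keys.
for keys in (1_000_000, 1_000_000_000, 100_000_000_000):
    print(f"{keys:>15,} keys, 128-bit hash (e.g. MD5): "
          f"p(collision) ~ {collision_probability(keys, 128):.2e}")
```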

So why did DV2 choose Hashes as Keys?

Really?  Again?  The list above wasn’t enough for you, eh?  Ok…  Take a read on the different types of hashing functions, including something called a perfect hash.  Different functions perform differently, but all provide mostly unique values.  That said, if you are going to use a hash function, then you must understand that you need to build a collision strategy.  If you refuse to “know or acknowledge” a collision strategy, then what is left for you to use as a primary key?  (More on this in a bit.)

DV2 chose Hash Keys for the following reasons:

  • Loading your Data Vault in parallel over massive volumes (see the sketch after this list)
  • Loading a geographically distributed (or heterogeneous, server-distributed) Data Vault EDW in parallel, allowing lazy joins later
  • Semi-encrypting data sets in order to meet country regulations (where clear-text cannot be copied across country boundaries)
  • AND #1: NOT ALL RELATIONAL DATABASE ENGINES PERFORM WELL ON BUSINESS KEY JOINS!
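
As promised above, here is a minimal sketch of why hash keys allow parallel, distributed loading: two independent loaders derive the same Hub Hash Key from the business key alone, so neither needs to look anything up from the other.  The table and column names are hypothetical, and the hash computation is the illustrative one sketched earlier:

```python
import hashlib

def hash_key(business_key: str) -> str:
    # Same deterministic computation as the earlier sketch (MD5, illustrative).
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()

# Hypothetical loader 1, running in data center A: loads the Hub row.
hub_row = {"hub_customer_hk": hash_key("CUST-00042"), "customer_bk": "CUST-00042"}

# Hypothetical loader 2, running in data center B, in parallel: loads a
# Satellite row for the same customer WITHOUT looking up the Hub first.
sat_row = {"hub_customer_hk": hash_key("CUST-00042"), "address": "10 Main St"}

# Both sides derived the same key independently, so the join resolves later
# ("lazy join"), no matter which load finished first.
assert hub_row["hub_customer_hk"] == sat_row["hub_customer_hk"]
```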

We already know sequences are dead for large-scale systems.  Don’t believe me?  Why then do MPP solutions (like Teradata) HASH your data before executing joins?  SAP HANA does the same thing…   In order to scale, the “lookup the parent record before inserting the child” step NEEDS TO GO.  Otherwise, the system will ultimately reach its maximum processing capability and bottleneck.

So just add more hardware, you say…

Wish it were that simple…  It isn’t.  With 200+ TB of data in your warehouse, lookups simply don’t work.  Oh, and what about multi-system distributed data warehouses?  What do you do then – look up across the ocean from one country to another?  How do YOU process terabytes of data with a cross-country lookup?  Will it sustain performance as your data sets grow?  No, it won’t.

Don’t take my word for it – try it yourself.  Put 360 billion records in one parent table and 200 billion records in the child, split them across the ocean, and try to perform lookups.  Come back and tell me your performance results…

But wait, there’s more – now do this for 20 loads IN PARALLEL, and tell me it performs…  It won’t, it doesn’t.   More hardware hits the law of diminishing returns.  You won’t see scalable gains from scale-up; that’s why MPP is scale-out.

But What About Natural Keys?

Natural Keys are good if you have them AND your platform works with them to join data under the covers AND your platform is housed in a SINGLE data center AND you don’t have to ship clear-text data across country boundaries.

Systems like SAP HANA and Teradata both work with Natural Keys / Original Key Values…. Why??? Because they HASH the values UNDER THE COVERS before the join is actually executed.  That’s why.

The difference from the DV2 Hash Key is the purpose: the DV2 hash aims at uniqueness (one key identifying one business key), where the hashing of business / natural keys on Teradata is BUCKETIZING – the general use of hashing to distribute rows across units of parallelism.
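
A small sketch of that distinction (the bucket count is a hypothetical AMP / partition count, and MD5 stands in for the engine’s internal hash): bucketizing only decides where a row lands, while the DV2 Hash Key identifies the row itself.

```python
import hashlib

business_key = "CUST-00042"
digest = int(hashlib.md5(business_key.encode("utf-8")).hexdigest(), 16)

# DV2-style Hash Key: the full digest stands in for the business key (identity).
dv2_hash_key = f"{digest:032x}"

# Bucketizing (simplified MPP-style distribution): the hash only picks one of
# N units of parallelism, so many different keys share the same bucket.
num_buckets = 64                  # hypothetical AMP / partition count
bucket = digest % num_buckets

print(dv2_hash_key, bucket)
```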

Yes, these two systems are FAST and can work wonders on Natural keys.

Wait, I have Oracle or SQL Server…  What about them?

Well, these two database engines DON’T hash character columns underneath for distribution or join processing.  Here, it comes down to simple science: a join on a natural / business key is FASTER than a join on a Hash Key only when the LENGTH of the natural/business key is shorter than the LENGTH of the hash key; conversely, the Hash Key join wins when the business/natural key join spans 2 or more fields while the Hash Key join is a SINGLE field.

Now, that said: if you are using DV2 Hash Keys, then you *should* be aware of this: storing the Hash Key in a fixed binary format is acceptable on BOTH platforms (saving them in binary cuts the storage roughly in half compared to hex characters).
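
Why binary cuts the storage roughly in half: a hex string spends two characters per digest byte, so MD5 is CHAR(32) as hex but BINARY(16) as raw bytes (SHA-1 would be CHAR(40) versus BINARY(20)).  A quick illustrative check:

```python
import hashlib

digest = hashlib.md5("CUST-00042".encode("utf-8"))

hex_form = digest.hexdigest()   # 32 characters -> e.g. a CHAR(32) column
bin_form = digest.digest()      # 16 bytes      -> e.g. a BINARY(16) column

print(len(hex_form), len(bin_form))   # 32 vs 16
```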

Collisions – what about them?  What if I can’t afford them?

Data loss in a data warehouse that is supposed to be a raw, auditable system of record is unacceptable.  To that end, IF you are going to use a Hash Key, then you should be smart enough to read about hash collision strategies.  Unfortunately, the naysayers out there in the marketplace put too much focus on ASKING the question (what about collisions?) and don’t bother making ANY statements about resolutions or strategies that solve the problem…  Interesting, eh?

Ok, so it turns out I do teach the best possible solution, and one of the only ones that works:  a) you must watch for collisions, and b) the solution is called a reverse hash – hash the reversed original string as well, and store both.
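
Here is a minimal sketch of one way that strategy could be implemented, assuming the “reverse hash” is the hash of the reversed business key string (my reading of the strategy, not a prescribed implementation): the loader watches for forward-hash duplicates on differing business keys, and the chance of two keys colliding on both hashes at once is astronomically small.

```python
import hashlib

def forward_and_reverse_hash(business_key: str) -> tuple:
    """Store both: the hash of the key, and the hash of the reversed key."""
    fwd = hashlib.md5(business_key.encode("utf-8")).hexdigest()
    rev = hashlib.md5(business_key[::-1].encode("utf-8")).hexdigest()
    return fwd, rev

# Watching for collisions: a hypothetical check during a hub load.
seen = {}  # forward hash -> business key already loaded under that hash

def check_for_collision(business_key: str) -> None:
    fwd, _ = forward_and_reverse_hash(business_key)
    if fwd in seen and seen[fwd] != business_key:
        raise ValueError(f"Hash collision: {business_key!r} vs {seen[fwd]!r}")
    seen[fwd] = business_key
```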

Now, I will say this: there are customers with 350+ TB of raw data in a DV2 landscape who have been watching for hash collisions using both MD5 and SHA-1 for the past 6 years, across billions of records, and have found NONE so far.

Also remember: the collision rate is PER HUB (because the business keys are split across hubs), so it is not the full total of all business keys across your business that drives the collision ratio; it is the number of unique keys within each Hub.

In Conclusion…

Hashing is a replacement for sequences.  Hashing should not be used on platforms where natural keys work, as long as joins to other environments also work and perform adequately.  Hashing your business keys removes the dependency on “parent-child lookups”, simplifies the loading processes, and allows them to scale across hundreds of terabytes PER process without forcing re-engineering.

Again: use natural business keys where the platform supports them and heterogeneous platform joins work and perform in accordance with SLAs (i.e., Teradata / HANA).

I’ve said it before, I’ll say it again: SEQUENCES are DEAD in big data!  If they weren’t, we would see them attached to everything Hadoop or Big Data related…  Sorry, Data Vault 1.0 followers – Data Vault 1.0 is a standard published over 16 years ago, long before big data and NoSQL solutions were around.  Data Vault 2.0 is the only way to scale enterprise data warehouses properly.

Want to know more?  I teach all of this and more (including security, privacy, and division of data sets over global enterprise data warehouses) in my class: CDVP2 (Certified Data Vault 2.0 Practitioner).
