#datavault 2.0 Hashing and Sequences Round 2

Once again, the debate rages on…  I’ve been inundated with requests, demands, and issues around Hashing, so I wish to clear the air.  In this entry I’ll dive into Sequences (AGAIN) and the problems they pose in Big Data and HIGH VELOCITY systems.

First, let me get this off my chest:

I NEVER CLAIMED THAT HASHES CAUSE ZERO COLLISIONS.  To my knowledge and understanding, there is currently no such thing as a “PERFECT HASH” that will produce a guaranteed unique identifier for every input.  People in the community at large seem to forget this, or have amnesia when it comes to this statement, and INSIST on continuing to argue that I “somewhere / somehow” made this claim.  LET ME BE PERFECTLY CLEAR:  I NEVER SAID THAT HASHES HAVE ZERO COLLISIONS!!

So PLEASE, don’t bring that up again.  I will not spend any further cycles or thought on that particular issue.

Second: YOU NEED A COLLISION HANDLING STRATEGY baked into your DV2 implementation paradigm.

I am currently working on a few different alternatives that I will announce at next year’s WWDVC conference, and if you haven’t signed up, you should – it’s the place to be if you are thinking of Data Vault, or using Data Vault in your enterprise data warehousing efforts.

Third: In order to “build” DV2 Compliant Models, you MUST use a Hash

No, Sequencing is not compliant with Data Vault 2.0.  Hashing is required.  In fact, replacing sequences with hash results is _required_ for a model to be Data Vault 2.0 compliant.

Fourth: What are my hash function choices?

There is a LIST of Hash Functions here on Wikipedia:  http://en.wikipedia.org/wiki/List_of_hash_functions  It is NOT complete, but it has some decent functions listed.  What I recommend is the use of MD5 – why?  Because it is the most ubiquitously available across platforms, and it has a decently low chance of collision – especially when the data is salted on the way in.  Now in reality, what I am suggesting is the use of an algorithm that produces a 128-bit or larger result.  Why?  Not for cryptographic reasons – no, we are not trying to protect the data from attack; we are merely attempting to compute a known value “in-stream during load”.
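To make the “in-stream” computation concrete, here is a minimal Python sketch of building a 128-bit MD5 hash key from a business key.  The trimming, upper-casing, delimiter, and optional salt shown here are illustrative choices on my part, not mandates; whatever standardization rules you pick, apply them identically in every loading process.

```python
import hashlib

def hash_key(*business_key_parts, salt="", delimiter="||"):
    """Compute a 128-bit MD5 hash key from one or more business key parts.

    The trim / upper-case standardization and the "||" delimiter are
    illustrative choices; what matters is that every loading process
    applies exactly the same rules.
    """
    normalized = delimiter.join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5((salt + normalized).encode("utf-8")).hexdigest().upper()

# The same business key always yields the same hash key,
# no matter which process (or platform) computes it.
print(hash_key("CUST-0042"))              # single-part key (e.g., a hub)
print(hash_key("CUST-0042", "ORD-9917"))  # multi-part key (e.g., a link)
```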

If you don’t like, or don’t approve of, MD5, then choose another function!!  One that you are comfortable with…  Choose SHA-1, MD6, SHA-256, Spectral, Murmur, City, or Spooky.

Just remember: if you choose a function that is NOT currently available in your tool set, you have to write the code / extension yourself (which is fine; most of these functions are downloadable in source form).  But the maintenance is then up to you, not the vendor.
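One way to keep that switch (and the maintenance) cheap is to hide the algorithm choice behind a single helper.  A minimal sketch, using Python’s hashlib algorithm names purely as an illustration:

```python
import hashlib

# Single point of control: change the algorithm name once, then rebuild/reload.
# Note the result widths differ (MD5 = 32 hex chars, SHA-1 = 40, SHA-256 = 64),
# so the hash key columns must be sized for the algorithm you settle on.
HASH_ALGORITHM = "md5"   # e.g., "sha1" or "sha256"

def compute_hash_key(normalized_business_key: str) -> str:
    digest = hashlib.new(HASH_ALGORITHM, normalized_business_key.encode("utf-8"))
    return digest.hexdigest().upper()

print(compute_hash_key("CUST-0042"))
```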

Fifth: Assign the HASH value on the way into the “staging area”

Remember: the staging area MIGHT be a Hadoop or NoSQL system, or it might be a relational database.  You have freedom and time at this point to check for “duplicate collisions”, and handle hashing issues before continuing with the loading cycle.
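As an illustration of what that check could look like, here is a minimal Python sketch that scans a staged batch for any hash key that maps to more than one distinct business key.  The “stop the load and review” reaction is just one possible response; pick whatever handling fits your collision strategy.

```python
from collections import defaultdict

def find_hash_collisions(staged_rows):
    """Return {hash_key: business_keys} for any hash mapping to >1 business key.

    staged_rows: iterable of (hash_key, normalized_business_key) pairs,
    as computed while the rows land in the staging area.
    """
    seen = defaultdict(set)
    for hkey, bkey in staged_rows:
        seen[hkey].add(bkey)
    return {h: bks for h, bks in seen.items() if len(bks) > 1}

# Hypothetical staged batch (hash values shortened for readability).
batch = [("A1F3", "CUST-0042"), ("A1F3", "CUST-0042"), ("9B77", "CUST-0077")]
collisions = find_hash_collisions(batch)
if collisions:
    # One possible reaction: halt the load and route the offending rows for review.
    raise RuntimeError(f"Hash collision(s) detected: {collisions}")
```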

Sixth: Why don’t range-partitioned surrogate keys work?

Well, the jury is still out on this one, but here is the conundrum:  Nearly every “data model”, whether it’s a Data Vault or not, expresses a parent-child relationship.  Even with referential integrity shut off, the process (if using surrogates) still requires the creation of the parent surrogate key before the child row can be inserted.  It also still requires the child row to “look up” / be dependent on the parent value before the insert can take place.

THIS is a problem in cross-platform (heterogeneous) environments, and it is also a problem with large volumes arriving at high velocity.  Anything that introduces dependencies into the process will slow it down (sometimes significantly), and the fact remains that surrogate key lookups introduce parent-child dependencies into the chain, not to mention increased caching, use of temp areas, sorting and clustering needs, increased CPU consumption, etc…

Hashing is currently the only known technique to allow a CHILD “key” to be computed in parallel to the PARENT “key”, and be loaded independently of each other.
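A minimal sketch of that independence (Python, with hypothetical table and column names): both the parent (hub) key and the child (link) key are derived purely from business keys already present in the source row, so neither load has to wait on, or look up, the other.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_key(*parts, delimiter="||"):
    normalized = delimiter.join(str(p).strip().upper() for p in parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest().upper()

source_row = {"customer_bk": "CUST-0042", "order_bk": "ORD-9917"}

def load_customer_hub(row):
    # Parent key computed directly from the business key -- no sequence generator.
    return ("HUB_CUSTOMER", hash_key(row["customer_bk"]))

def load_customer_order_link(row):
    # Child key computed from the SAME source row -- no lookup against the parent.
    return ("LNK_CUSTOMER_ORDER", hash_key(row["customer_bk"], row["order_bk"]))

# Neither load depends on the other's output, so they can run in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda load: load(source_row),
                            [load_customer_hub, load_customer_order_link]))
print(results)
```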

Seventh: Want to argue these points?  Do your homework; look up the mathematics for yourself.

The point is, until the “conundrum” I posed above is solved, surrogates remain (and will continue to remain) a bottleneck in the loading processes of massive data sets arriving at high speed.  If you have a mathematical solution to the conundrum, I gather it would be very valuable as a mathematical proof made available to the world, and I would encourage you to publish it and patent your solution.  Until then, Hashing is the only way to truly achieve near-linear scalability.  By the way, range-partitioned Surrogate Keys (SKs, sequences) work as long as ALL the data lives on a homogeneous platform (a single database environment).

If you are happy with Surrogates, and don’t have performance problems, then perhaps you only need Methodology and other components of DV2 – and not the modeling pieces.

 

Hope this helps,

Dan Linstedt

 


One Response to “#datavault 2.0 Hashing and Sequences Round 2”

  1. Albert Garcia Diaz 2017/01/04 at 12:58 am

    So most relational databases perform better with INT keys because they can find a value using a binary search (check whether the value is in the first half of all the rows; if so, check again inside the half of that half, and so on), or even with TS following a similar procedure.  In the case of hashing, the key is an extremely random string, not sorted at all, so it has to go through them one by one until it finds the one you are looking for.  If instead of hashing we just join and link through the natural key, then at least when that string is sorted you will get to the row much faster, because you avoid all the rows that don’t start with x, then avoid the ones that don’t start with y, and so forth.

    So my question is: if we don’t have a 100+ character BK (which is not very common), is there any gain at all from hashing?  Shouldn’t we use the normal natural key, so the database will find the value much faster, because you can sort it and eliminate the many rows that don’t correspond to the value you want while scanning?

    Thanks!
