
#datavault 2.0, Hashes, one more time

To all: I must apologize for the public display of conflict that has occurred on Twitter.  It was unprofessional of me. That said, I cannot stand by and let someone take my statements out of context.  Let's get down to brass tacks (the facts), shall we?

NOTE: All of these statements have been and are taught in the CDVP2 (Certified Data Vault 2.0 Practitioner) class since the beginning of the course, when hashes were introduced.   You can find out more about the class and where to take it here: http://DataVaultCertification.com

If you want to know more about hashes, please read the links referenced below.  (This is not my personal math; the mathematics of hashes and collisions should be taken from mathematicians.)

Fact 1: The core definition of a Hub is: a unique list of business keys (nothing more, nothing less) – no sequences, no hashes, no load dates, no record sources. (In place since 1995, before general public publication.)

Fact 2: Sequences were added to DV1.0 in 2001, when I first published, to allow effective joins on all platforms EXCEPT Teradata.

Fact 3: Teradata uses hashes to bucket data across nodes, AMPs, modules, disks, segments, sectors, etc.  Read more HERE: https://www.tutorialspoint.com/teradata/teradata_hashing_algorithm.htm

Fact 4: Teradata joins on the internal hashes created from the column(s) selected to be a PI (primary index); Teradata does NOT join on literal values, even if sequences are in place.  Read more HERE: http://www.info.teradata.com/htmlpubs/DB_TTU_14_00/index.html#page/SQL_Reference/B035_1142_111A/ch03.125.092.html

Fact 5: SAP Hana now does the same thing, hashing distribution and bucket columns, and joining internally on the hashes.

Fact 6: Hashes and hash collisions are mathematically defined.  I did not create any of the hash algorithms, nor did I create the chances or probabilities of collisions.  Read more here: https://en.wikipedia.org/wiki/Hash_function

Fact 7: Yes, keys can collide if a hash is used.  IF a hash is chosen, then a collision mitigation strategy must be applied so that no data is lost.  Yes, the mitigation strategy may include a reverse hash of the key columns.  That said, hash collision strategies must be consistent in their resolution approach – otherwise the data will not match on the next load, and queries could pull back unwanted row sets.
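For illustration only, here is a minimal Python sketch of the kind of deterministic, consistent hash key computation this fact implies (the column values, delimiter, and normalization rules are my own illustrative choices, not the standard verbatim): if the trimming, casing, and delimiter rules stay identical from load to load, the same business key always produces the same hash key.

```python
import hashlib

def hash_key(*business_key_parts, delimiter="||"):
    """Derive a deterministic hash key from one or more business key columns."""
    # Normalization (trim + upper-case) and the delimiter must stay identical
    # across every load, or the same key will hash differently next time.
    normalized = delimiter.join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same business key, loaded twice under the same rules, matches exactly.
assert hash_key(" cust-1001 ") == hash_key("CUST-1001")
```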

Fact 8: Sequences bottleneck load processes at certain levels of volume, requiring re-engineering and re-design of your entire solution in order to scale.  Volume levels vary depending on system size, hardware size, and tuning abilities of the DBA(s) and System Administrator(s).

Fact 9: MPP Systems like SAP Hana, Teradata, and Kudu require Hashes to distribute data across the nodes effectively.

Fact 10: The hash algorithms that these engines apply (Teradata, Kudu, SAP Hana) are not available on other platforms; they are not heterogeneous – therefore, if selected or applied, they are subject to internal use on that platform alone.

Fact 11: Sequences were replaced with Hash Keys in DV2 as a way to execute joins in large-scale AND globally distributed systems, as well as cross-platform heterogeneous joins. Please NOTE: hashes are not a replacement for encryption.

Fact 12: You can *always* join on natural or business key columns; this will always work in any database.  It may not perform, but it will work.

Fact 13: Natural / business key joins will be faster than joins on hash keys IF the length of the business key field is smaller than the length of the hash AND it is not a multi-column predicate.

Fact 14: Joins on Sequences will always be faster IF (again) the byte length that holds the sequence is smaller than the byte length holding the hash key.  (See fact 8 above as to why sequences are deprecated in large scale solutions)

Fact 15: In case it was missed: hashes and sequences are not part of the core architecture, and never were – they are true surrogate values (see Fact 1).

Fact 16: Load Dates and Record Sources are system-driven fields that exist solely for the mechanical purpose of tracing data when something goes wrong, because data lineage at the row level is important.

Fact 17: Hashes or Sequences are system-driven fields that hold NO business value and no meaning to the business.  Once generated in the Data Vault, they should NEVER leave the Data Vault or be shown to business users.  Hashes are there to overcome scalability and distributed-data issues (when platforms don’t support “joins on natural / business keys through hashing underneath”).

Sequences were there in Data Vault 1.0 to support SQL SELECT joins (see Fact 11).   Neither was ever part of the core architecture, as joins on natural keys always work (though they may not always perform).  (See Fact 1.)

Fact 18: Hash storage in Oracle and SQLServer can be converted to a Fixed Binary, resulting in faster joins.  Performance gains will vary depending on tuning and hardware of the platform.
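As a rough sketch (assuming SHA-256; the exact column types, e.g. RAW(32) in Oracle or BINARY(32) in SQL Server, are the platform's concern and are not shown here), the storage difference is easy to see in Python: the raw digest is half the size of its hex-string representation.

```python
import hashlib

digest = hashlib.sha256(b"CUST-1001").digest()          # raw bytes
hex_digest = hashlib.sha256(b"CUST-1001").hexdigest()   # hex string

print(len(digest))      # 32 bytes if stored as fixed binary
print(len(hex_digest))  # 64 characters if stored as CHAR(64)
```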

Fact 19: MD5 has been a suggested algorithm because it is well-rounded, well-tested, and widely available across platforms.  That said, MD5 is being deprecated by database vendors due to its lack of security (not because of its use as a hash key).  Therefore, it is suggested that SHA-1 or SHA-2 be applied instead.
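For reference, a quick comparison of the digest sizes involved (standard hashlib calls; the input key is made up):

```python
import hashlib

key = b"CUST-1001"  # illustrative business key
print(hashlib.md5(key).hexdigest())     # 128-bit digest, 32 hex characters
print(hashlib.sha1(key).hexdigest())    # 160-bit digest, 40 hex characters
print(hashlib.sha256(key).hexdigest())  # 256-bit (SHA-2) digest, 64 hex characters
```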

Fact 20: Each Hub is its OWN unique set of business keys. Therefore, each Hub must be treated as its own collision probability if hash keys are chosen.  It is not mathematically correct to consider the entire set of all business keys across the business as a single set (because they are split into separate Hubs in the first place).
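To make the per-hub point concrete, here is a small sketch using the standard birthday-problem approximation (the key counts are made-up examples; 128 bits corresponds to an MD5-sized key space):

```python
import math

def collision_probability(n_keys, hash_bits):
    """Birthday-problem approximation: chance of at least one collision
    among n_keys values drawn uniformly from a 2**hash_bits space."""
    return 1.0 - math.exp(-n_keys * (n_keys - 1) / (2.0 * 2 ** hash_bits))

# Each hub is evaluated on its own key count, not the enterprise-wide total.
for n in (1_000_000, 100_000_000, 10_000_000_000):
    print(f"{n:>14,} keys -> {collision_probability(n, 128):.3e}")
```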

Fact 21: The Data Vault Standard must work on unstructured and multi-structured data.  Elements such as images, video files, audio files, and documents usually do not contain a “business key”.  Documents may be the exception here, and may contain many business keys.

The question then becomes: can an independent, unchanging business key be created and assigned so that deltas can be detected downstream?  Hashes work here, where surrogates do not.  Hashes can be calculated based on specific filtered content (for example: from an image of a face, points of interest can be measured and a hash created that equates to a business key).
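As a hedged sketch of the idea (the file name is hypothetical, and real feature extraction, such as facial points of interest, is outside the scope of this snippet), an unstructured file can be given a stable, delta-detectable key by hashing its content:

```python
import hashlib

def content_key(path, chunk_size=1 << 20):
    """Derive a stable key for an unstructured file by hashing its bytes."""
    # If a feature extractor is available (e.g. facial points of interest),
    # hash the extracted features instead, so re-encoded copies still match.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# print(content_key("portrait.jpg"))  # hypothetical file
```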

The DV2 standard is designed to handle multiple “big data” problems, including unstructured and semi-structured information, and be able to tie them back directly to the relational and structured systems so that ad-hoc query and analysis may take place.

Conclusion / Commentary:

Data Vault 2.0 standards accommodate: identification of unstructured and multi-structured data, “big data” huge-scale solutions, petabyte-scale systems, and globally distributed data computation / identification so that distributed joins can work.

YES, hash algorithms may have collisions, and yes, this requires the architect to design a hash collision mitigation strategy.

In the end, creating a unique list of business keys is easy (except when it comes to unstructured data sets).  It can be done in parallel, it can be distributed, it can be joined on.  That’s not the problem.  The problem is: how to join on natural / business key fields when they are “large” (50+ characters, or stored across 2 or more fields).

What if there ARE NO BUSINESS KEYS or NO KEYS at all?  Without a Hash, the data cannot be identified.

It is a technical platform issue and has zero to do with the business application of the data set.  Because of this, sequences have long been assigned to compensate, when in fact the platform should be providing better join optimizers and better access to data under the covers.  Two platforms (and I’m sure there are others) already do this – SAP Hana and Teradata – by hashing the natural / business keys under the covers for you.

Relatively large byte structures for natural and business keys require string comparisons, which are relatively slow (or at least slower than numeric comparisons) – hence the “default choice” of applying surrogates.  The issue is setting the surrogates in place for the “child records” so that the joins and dependencies work.

This causes a mathematical issue during load – to be more precise, it causes a dependency that bottlenecks at scale.  Of course the “bottleneck” won’t appear until the limits of processing power are overwhelmed by the amount of data being sought on a row-by-row basis (the lookup of the parent key).

The other restriction of sequences is that they lock the process design down to a single instance that generates the “parent key”.  This means that in distributed database systems (like MPP or geographically distributed systems), a lookup has to occur for each and every child record.  This is infeasible under heavy load, where parallel system designs can’t afford the time to exercise this technique.

So how does an architecture or design-time paradigm solve it?  One possible best practice today: hashing.  If there were a better technique for uniquely identifying records that scaled with volume and handled MPP solutions, then great – the standard would change, and that technique would be chosen and applied.
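Here is a minimal sketch of why hashing removes the load-time dependency described above (names and values are illustrative, not a reference implementation): a sequence-based load must look up the parent's generated key for every child row, while a hash-based load computes the child's parent key directly from the business key, with no lookup and no ordering dependency.

```python
import hashlib

def hash_key(business_key):
    return hashlib.sha256(business_key.strip().upper().encode()).hexdigest()

# Sequence-style load: every child row depends on the parent load having run,
# plus a lookup (or a central key generator) to resolve the parent's sequence.
parent_sequences = {"CUST-1001": 42}            # produced by the parent load
child_fk_seq = parent_sequences["CUST-1001"]    # per-row lookup

# Hash-key-style load: the child's foreign key is computed independently,
# so parent and child loads can run in parallel on any node.
child_fk_hash = hash_key("CUST-1001")
```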

Today, this is the best known technique.  Sure, there are “issues”, but there are issues with every technique we apply to systems, just as there are issues with sequence numbers.  Unfortunately, the issues with sequence bottlenecks have no “mitigation strategy” other than to consolidate all data on a single server (which, with country-based regulations, isn’t always possible) and to scale that single server (which runs into a law of diminishing returns on investment).

There is a class of hashes called perfect hashes (not my invention by any means).  More reading on this subject can be done here: https://en.wikipedia.org/wiki/Perfect_hash_function. There are problems with this approach too.  One is that not all vendors even have this function coded or available, which means your EDW / IT team must code and test the function.

Another is the length of its output, and another is the time it takes to compute the hash value (both of which vary depending on the perfect hash algorithm selected and the hardware it runs on).  Perfect hashes are not a feasible approach (in my opinion) to solving unique key identification issues, much less to identifying unstructured data sets.
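To show what makes the approach impractical for a warehouse load, here is a toy brute-force search for a collision-free salted hash over a small, fully known key set (illustrative only; real perfect-hash constructions such as CHM or CHD are far more efficient, but they share the constraint demonstrated here: the complete, static key set must be known up front):

```python
import hashlib

def find_perfect_salt(keys):
    """Brute-force a salt so that hash(salt + key) % len(keys) is collision-free."""
    n = len(keys)
    for salt in range(1_000_000):
        slots = {int.from_bytes(hashlib.sha256(f"{salt}:{k}".encode()).digest(), "big") % n
                 for k in keys}
        if len(slots) == n:  # every key landed in its own slot
            return salt
    raise ValueError("no salt found in search range")

keys = ["CUST-1001", "CUST-1002", "CUST-2044", "CUST-3310"]
print(find_perfect_salt(keys))  # adding a new key later invalidates the result
```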

THE DATA VAULT STANDARD FOR HUBS

Has been, and always will be: a unique list of business keys – hence not requiring surrogates OR hashes or anything else in the Hub.

There are myriad benefits to this approach (which has been in place since 1995, long before I ever published the standards to the public).  One is natural joins; another is an advanced concept called floating Satellites; and there are more.  However, the hardware underneath must support these joins efficiently at scale, as well as support efficient at-scale loading processes.

I hope this finally puts this entire issue to bed.  Again, Data Vault 2.0 Standards are taught in full in my CDVP2 classes by my authorized instructors.  To pull any of this knowledge out of context is to misunderstand the facts in this list.

Thank you for your valuable time,

Dan Linstedt

 
