
Joins, Natural Keys, Hashing MPP and #datavault

I’ve written about all of this on my blog numerous times over the past 6 years.  You can read a collection of those articles here:  http://danlinstedt.com/?s=hash+teradata  That said, it seems there are still confusing statements being made about the reality of how all this works.  In this entry, I will take another look at this space – specifically for MPP systems, and even more specifically for Teradata.  I hope to clear up any misconceptions about how this works, why it works, when it works, and when it doesn’t work in this particular environment.  This will be a highly focused article, specific to the Teradata relational database engine that is available today (as of November 2016).

Before I get going, I need to acknowledge the folks who have helped, inspired, and taught me most of these components along the way.  You see, with Data Vault 2.0 I truly do stand on the shoulders of giants, and have leveraged many best practices that others (who came before me) developed and proved to be successful.

I first want to thank my friends at Teradata for a) supporting me all these years, and b) providing me with training in the early days so that I understand how Teradata works.  I also want to thank all my customers who work with Teradata, and all the wonderful DBAs who have shared their knowledge with me.  I’ve met and chatted with folks who worked for Walmart, Coca-Cola, Aer Lingus, and more – all Teradata DBAs at the time.  I also want to thank folks like Stephen Brobst, for sharing his vast and extensive knowledge with me on big data solutions, MPP environments, and parallel query processing.  I want to thank Sanjay Pande for assisting me with my understanding of Hadoop, as well as my customer Commonwealth Bank for implementing DV2.0 with hashing on Teradata, and utilizing the Cloudera Hadoop platform along the way.  I want to thank my original team members at the Department of Defense, who also helped me understand the nature of true real-time and IoT data (machine-generated data, as it was known back then) in the late 1990’s.

Undoubtedly I have made a few errors in the statements/claims below.  If you wish, and have accurate citations, please feel free to correct me (include the citation), and I’ll be happy to take the correction on board.

Data Vault 2.0 …

  • Data Vault 2.0 states: you must use a hash as the PRIMARY KEY
  • Data Vault 2.0 does not state which hashing function you are required to use
  • Data Vault 2.0 does not tell you how to solve hashing collisions, only that you must decide on a collision strategy that’s best for your environment
  • Data Vault 2.0 states: hashing solves parallel heterogeneous system loads in large scale-out operations, spread over multiple servers and quite possibly across the globe.  The hashing then allows the data to be “late-bound” (i.e., joined) after being inserted, without any inter-dependency on “looking up parent records first.”  (A minimal sketch of this follows the list.)
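
As a point of reference, here is a minimal sketch of that independent, lookup-free key computation.  It assumes MD5 and simple standardization rules (trim, upper-case, a fixed delimiter) – those specifics are illustrative choices, not part of the standard:

```python
import hashlib

def hub_hash_key(*business_key_parts, delimiter=";"):
    """Derive a deterministic key from one or more business key parts.

    The standardization rules (trim + upper-case) and the delimiter are
    illustrative assumptions; what matters is that every loading process,
    on every platform, applies exactly the same rules so the keys match.
    """
    standardized = [str(part).strip().upper() for part in business_key_parts]
    payload = delimiter.join(standardized).encode("utf-8")
    return hashlib.md5(payload).hexdigest().upper()

# Two independent loaders, neither of which looks up a parent record:
customer_hk = hub_hash_key("  cust-1001 ")               # e.g. from a CRM feed
order_customer_lk = hub_hash_key("cust-1001", "ORD-77")  # e.g. from an order feed
```

Because the key is a pure function of the row’s own data, hub, link, and satellite loads can run in any order, on any platform, and the keys will still line up at query time.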

MPP…

  • MPP states: the best way to leverage an MPP-scale platform is to evenly distribute data across all available compute resources.
  • MPP and mathematics state: to make any given query performant over MPP, the architect or designer must properly co-locate data on the shared-nothing nodes.  This, in turn, dictates the following: it is near impossible to achieve maximum performance for 100% of queries 100% of the time without full and complete replication of every “joined object in the query” – meaning copies of that data must be replicated across each of the nodes, laid out in full multiple times, in order to reach 100% query coverage with maximum performance benefit.  (A minimal sketch of hash-based distribution and co-location follows this list.)
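
To make the co-location point concrete, here is a minimal sketch of hash-based distribution over shared-nothing nodes.  The node count and the use of MD5 are assumptions for illustration; real MPP engines use their own row-hash functions and hash maps:

```python
import hashlib

NUM_NODES = 4  # illustrative shared-nothing node count

def node_for(distribution_key: str) -> int:
    """Map a distribution key to a node (a stand-in for an MPP row-hash map)."""
    digest = hashlib.md5(distribution_key.strip().upper().encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES

# Rows sharing a distribution key always land on the same node, so a join
# on that key is resolved locally (co-located), with no data movement:
print(node_for("CUST-1001"), node_for("  cust-1001 "))  # same node

# A join on any other column forces redistribution or broadcast of one side,
# which is why no single layout serves every query at maximum speed.
```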

Teradata…

The Goals of Data Vault 2.0

  • The model’s goals: to be a cross-platform, ubiquitous, vendor-independent design that provides performant, scalable, high-speed queries and agile build cycles, and that works based on proven mathematical formulas for scale-out distributed data systems.
  • In addition, the design’s goals are to be repeatable, consistent, redundant, and fault tolerant.
  • The architecture’s goal is to be de-coupled, so that as pieces of the architecture become obsolete, they can be shut down (such as a staging area no longer needed by a true real-time system).
  • The process goals: to allow high-speed, parallel load and query operations to be constructed that will not need re-engineering as the relative size of the data or the latency of its arrival changes (this is one place where other data warehousing methodologies actually fall down).

That said:

  • In theory, only a perfect hash can actually produce unique hashes without collisions.  However, the “proofs” that these functions are actually “perfect” are still running trials, looking for collisions throughout their data sets (these trials may actually be completed by now).  In any case, perfect hashes are not yet ubiquitously available across all platforms (Oracle, Teradata, MySQL, SQL Server, Aster, Hadoop, HBase, Informatica, DataStage, Ab Initio, Perl, PHP, C++, C#, .NET, assembly, etc.).
  • Can a perfect hash be used in Data Vault 2.0?  Yes – IF you are willing to implement the function on your own according to its respective RFC (some downloadable code works; other code is riddled with bugs).  Yes, even something like SHA-1, which is not a perfect hash, has been shown to exhibit buggy behavior in Oracle 11g (ask Kent Graziano about this one).
  • WHY then MD5 for DV2.0?  Because it is the most ubiquitous, most well-rounded, most tested, and most widely available hashing algorithm in nearly every environment.  Is it the best solution?  No.  This is a compromise – but again, I urge you to re-read the statements at the top: I do not specify WHICH hash function you have to use for DV2.0 compliance, just that you must use a hash as a primary key.  (A short sketch of a configurable hash function and a collision check follows this list.)
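
As a sketch of those two decision points – which function, and what to do on a collision – here is one hedged illustration.  The choice of MD5 and the “fail loudly” collision strategy are assumptions for the example; your environment may demand different ones:

```python
import hashlib

HASH_FUNC = hashlib.md5   # swap for hashlib.sha1, hashlib.sha256, ... if you prefer
_seen = {}                # hash key -> standardized business key (collision check)

def hash_key(business_key: str) -> str:
    standardized = business_key.strip().upper()
    hk = HASH_FUNC(standardized.encode("utf-8")).hexdigest().upper()
    previous = _seen.get(hk)
    if previous is not None and previous != standardized:
        # One possible collision strategy: stop the load and alert a human.
        raise ValueError(f"Hash collision on {hk}: {previous!r} vs {standardized!r}")
    _seen[hk] = standardized
    return hk
```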

The Data Vault 2.0 Claim:

  • The only way to achieve true parallelism across multiple heterogeneous platforms during the load – one that will scale out without a “lookup bottleneck” – is to compute a hash based on the row data available at the time the row is being processed.
  • I welcome any mathematician (or otherwise) to find a proof that shows this is an incorrect assumption, and thereby arrive at a mathematical formula that provides the functionality I speak of without using any sort of hashing.  At that point I will say: the mathematician is right, we have a new standard; let’s test it and vet it, and see if it qualifies to become the new standard for the Data Vault solution.  Any mathematician who can solve this problem can also solve the MPP co-location problem in an algorithmic fashion – and would hold an incredibly valuable piece of intellectual property.  (A contrast between sequence-driven and hash-driven loading is sketched after this list.)
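
For illustration only, here is a minimal sketch of the difference between the two loading styles.  The in-memory dictionaries stand in for database tables, and MD5 plus the standardization rules are assumptions; the point is the dependency, not the storage:

```python
import hashlib

# Sequence-driven load: the child process must wait for, and look up, the
# parent's surrogate key before it can write the relationship.
parent_sequences = {}   # business key -> assigned sequence
_next_sequence = 0

def load_parent_with_sequence(business_key: str) -> int:
    global _next_sequence
    _next_sequence += 1
    parent_sequences[business_key] = _next_sequence
    return _next_sequence

def load_child_with_sequence(parent_business_key: str) -> int:
    # Lookup dependency: fails (or blocks) if the parent has not loaded yet.
    return parent_sequences[parent_business_key]

# Hash-driven load: any process, on any platform, derives the key from data
# it already holds in the row.  No lookup, no ordering dependency.
def load_child_with_hash(parent_business_key: str) -> str:
    standardized = parent_business_key.strip().upper()
    return hashlib.md5(standardized.encode("utf-8")).hexdigest().upper()
```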

More Facts:

  • Sequence numbers and sequences – no matter the platform – require lookups to parent records in order to establish relationships to children.  This very simple fact disqualifies sequences from ever being the proper choice to meet the demands of big-scale systems and/or high-speed real-time solutions that must ingest without bottlenecks.
  • That said: Teradata never said sequences solve these problems.  The magic that seems to be escaping everyone (if there is any magic) is that Teradata hashes a primary index value.  That primary index value might as well be a natural key.  Why?  Because Teradata does not use the natural key value in joins (IF and WHEN it is specified as the primary index) – it hashes the values first, then finds the AMPs, distributes the logic, and goes straight to the disk blocks.  Don’t believe me?  Read the Teradata documentation about how primary indexing, hashing, and natural keys work.  What I specifically mean is: WHEN Teradata goes to disk to get a row to join, it uses the row hash ID – not the physical value.

The Big Question…

Why then does Teradata say: use Sequences or Natural Keys instead of the DV2 suggested Hashing?

  • The answer?  Sequences in Teradata function only as unique row identifiers.  That’s right – they do not represent the primary index, only the primary key: the unique value for a row.

Which leads to the next big question:

Why then do all the Teradata models I’ve seen (from Teradata) have parent-child relationships expressed with sequence numbers as primary keys?

  • Great question – you would have to ask Teradata why they do that.  Doing so does indeed break the fundamental principle of Data Vault 2.0 that I stated earlier: massively parallel, heterogeneous, completely independent load cycles direct to the Data Vault model.  Yes, that’s right, without a staging area.
  • That said, these very same models are currently being sunsetted by the clients I visit, due to their high levels of parent-child dependencies – which end up forcing the organization to incur ever-increasing costs and delayed maintenance cycles due to cascading parent-child impacts.
  • My opinion on this matter is this: Sequences in Teradata work for loads, for several reasons.
    1. Sequences are assigned in blocks to the nodes / AMPs / compute units (a conceptual sketch of this block assignment follows this list)
    2. Sequences in Teradata are only guaranteed to be unique, NOT to be in ORDER, and NOT guaranteed to be fully sequential (they are allowed to have “holes”)
    3. The loads to Teradata (for most BIG data solutions) require FastLoad (sometimes MultiLoad will suffice).  Once the data is IN Teradata, it is assigned sequences as parallel loading to the target tables takes place using Teradata SQL procedural language.  This is typically done in block-style execution.  However: IT ALL HAPPENS WITHIN THE TERADATA ENVIRONMENT.  This is neither cross-platform parallelism nor heterogeneous parallelism (it breaks the design goal of Data Vault 2.0).
  • OR the alternative: why do some Teradata models work perfectly well with natural or business key values?  Again – because the hashing of the value is done internally, and the hash itself is used for the join – NOT the natural key, NOT the value of the data.
  • The list continues from here, but I have already bored you to tears with the technical details of how Teradata works.
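
As a conceptual sketch only (this is not Teradata’s internal implementation), block assignment works roughly like this: each compute unit reserves a range of values up front, so it can hand out unique identifiers without per-row coordination – which is exactly why the results are unique, but neither ordered nor gap-free:

```python
from itertools import count

BLOCK_SIZE = 1000
_block_allocator = count(start=0)   # central allocator hands out ranges, not rows

class SequenceBlock:
    """A compute unit's private block of sequence values (conceptual sketch)."""
    def __init__(self):
        block_number = next(_block_allocator)
        self._next = block_number * BLOCK_SIZE + 1
        self._end = (block_number + 1) * BLOCK_SIZE

    def next_value(self) -> int:
        if self._next > self._end:      # block exhausted: reserve another block
            self.__init__()
        value = self._next
        self._next += 1
        return value

amp_1, amp_2 = SequenceBlock(), SequenceBlock()
print(amp_1.next_value(), amp_2.next_value())   # 1 and 1001: unique, not ordered
# Any block abandoned part-way through leaves "holes" in the sequence.
```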

I’ve spent a good portion of this blog talking about the loading process. Now before I leave this subject, let me just say this:

The last thing you want when loading a Hadoop solution is to a) wait for some relational database to first insert the parent record, b) have to look up that parent key, and c) assign that parent key to a “child document.”  This would defeat the entire purpose of ingesting data into Hadoop in the first place.  A minimal sketch of the hash-based alternative on the document side follows.
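
In this sketch, the “child document” derives the parent’s hash key from the business key it already carries, so nothing waits on a relational insert or a lookup.  The field names, the sample document, and the MD5/standardization rules are assumptions for illustration:

```python
import hashlib
import json

def hash_key(business_key: str) -> str:
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest().upper()

# An incoming machine-generated document, landed straight into Hadoop/NoSQL:
raw_event = '{"customer_id": "cust-1001", "reading": 42.7, "ts": "2016-11-18T12:00:00Z"}'
event = json.loads(raw_event)

# The parent reference is computed from the document itself - no waiting on a
# relational parent insert, no lookup or copy of a sequence number:
event["customer_hash_key"] = hash_key(event["customer_id"])
print(event["customer_hash_key"])
```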

Time to move on, let’s talk about querying, hashing and sequences for a minute.

  • Myth: hashing rows slows down query engines.  Truth/fact: databases (outside of Teradata) simply aren’t properly optimized or configured to handle it; their optimizers don’t understand MPP, Big Data, and data co-location problems (or simply aren’t tuned yet to work with these situations).  Fact: if the actual hashing process and hash data were truly to blame for slow queries, then engines like Teradata (and yes, even Hadoop) would never have utilized such a system in the first place.
  • Myth: sequences are always faster in joins than hashes.  OK – outside Teradata, this is most likely true, because the hash value is longer than a sequence in bits/bytes.  Fact: Teradata never actually joins on the sequence number (see my statements above about how Teradata actually constructs its join logic).  Read here: http://www.dwhpro.com/principal-teradata-join-strategies/ – it covers the join techniques, but always refers back to the primary index.
  • But, but, but… in my <Oracle, SQLServer, DB2 UDB, MySQL> environment, my results actually show a slowdown when using hashes to join…  Yet you say it’s a myth – where’s the disconnect?  The issue, again, is that these database engines are not equipped or tuned for MPP: their optimizers use literal values stored in index trees to perform the join matching, whereas Teradata uses a computed hash value – so of course it will be slower in your environment.  We need to start asking a different question!

What is the different question?

How can I get DV2 to be performance driven with Hashing and Joins on these other platforms?

  • Answer: Point-in-Time (PIT) tables, Link tables, and Bridge tables.  If you do not understand these structures – their purpose, how they work, and why – then you will not have much success creating performant joins and getting data out of the Data Vault in a hash-based model.  These structures provide equi-joins, allow for virtualization, and let the query optimizers in the “traditional platforms” utilize the star-join techniques developed for Star Schema based models.  These structures (when designed properly) offer table elimination, index optimization, and data co-location (similar to Teradata’s idea of a Join Index, but much less restrictive) on MPP environments.  (A minimal PIT-style sketch follows this list.)
  • Bottom line?  Hashes let you join from within a relational database out to a NoSQL engine (as long as the hashes are computed the same way, with the same character sets and the same rules), on-demand or at-will.  Sequences cannot achieve this without “looking up” the sequence and “copying it to” the child record living in the NoSQL engine.
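
Here is a minimal, illustrative sketch of the point-in-time idea (the table and column names are assumptions): for each hub key and snapshot date, the PIT row captures the load date of the most recent satellite record, so the downstream query becomes a simple equi-join on (hash key, load date) instead of a range scan:

```python
from datetime import date

# Illustrative satellite rows keyed by (hub_hash_key, load_date):
sat_customer_name = {
    ("CUST1001HK", date(2016, 1, 5)): {"name": "Acme"},
    ("CUST1001HK", date(2016, 6, 1)): {"name": "Acme Corp"},
}

def build_pit(sat_rows, snapshot_dates):
    """Build point-in-time rows: the latest satellite load date at or before
    each snapshot, per hub key, enabling equi-joins on (hash_key, load_date)."""
    pit = []
    hub_keys = {hash_key for (hash_key, _) in sat_rows}
    for snapshot in snapshot_dates:
        for hk in hub_keys:
            loads = [ld for (k, ld) in sat_rows if k == hk and ld <= snapshot]
            if loads:
                pit.append({"hub_hash_key": hk,
                            "snapshot_date": snapshot,
                            "sat_load_date": max(loads)})
    return pit

print(build_pit(sat_customer_name, [date(2016, 3, 1), date(2016, 12, 31)]))
```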

Wait, Should I use Hashes in Teradata for DV2 or Not?

  • For Teradata, the answer sadly is: it depends.  My question to you is: is a heterogeneous, highly scalable, global solution what you are trying to build – without inter-dependencies?  Do you have a business need to join Teradata data to another platform on-demand / at-will (again, without the dependency of having to copy or move a sequence number to the other environment)?  If the answer to these questions is yes, then you have no choice but to use DV2-style hashing as your Primary Key (NOT necessarily your Primary Index).

If the answer to the above questions is no, then I ask you this: do you have massive amounts of data bottlenecking your batch load process today?  Can you afford to have your loading processes stay this way?  Do your lookups take too long?  If the answer is yes, then you have two possible solutions:

  1. Choose to use sequences, but move to a 100% real-time feed and get rid of all the batch processing entirely.  That said, this solution will only scale for a while – until you have too many parallel real-time feeds loading data to the same targets at the same time, all needing that “dependent lookup.”
  2. Switch to DV2.0 hashing and call it a day.  Increase your hardware if you need added performance, switch platforms to one that deals with this easily, OR push the vendor of said RDBMS to innovate their optimizers and finally “get with MPP properly” (which, I’m happy to say, Microsoft has just accomplished with their latest release of SQL Server Data Warehouse Edition).

So, at the end of the day (and I’ve blogged on this before): can you use sequences in Teradata for Data Vault 2.0 and be OK?  Technically, yes.  Methodologically speaking: no.  It breaks the standards of scalability and heterogeneous parallel systems.  The same goes for “natural keys” in Teradata.  Why only in Teradata?  Why not in Oracle or SQL Server with sequences?  Because neither of those database engines (exception: SQL Server 2016 Data Warehouse MPP edition) actually implements an MPP shared-nothing environment, leading to bottlenecked loading processes over Big Data “in a single cycle” and/or massive amounts of low-latency data arrivals.

Take your pick.  Consider yourself warned of the consequences of sequences in any system.  End of the day?  Today you don’t have Big Data during your load cycle, so you choose sequences.  Tomorrow, when Big Data arrives, you will be forced to re-engineer – it is at that point that sequences WILL FAIL.  Or the alternative: today you don’t have a heterogeneous or hybrid solution to deal with, so you choose sequences; tomorrow – voilà – you have to make the data “connect” to Hadoop, or Hive, or HBase.  Again, at that point sequences bottleneck and FAIL – and you will be forced to re-engineer your solution.

Please remember that most of the customers I help are building global, enterprise BI solutions that handle (on average) 250 terabytes and up to 3 petabytes in size.  I am trying my best to provide you with a road map so that the system you build today will work tomorrow without re-engineering!

Thank you, and I hope this helps,

(C) Copyright 2016, Dan Linstedt, all rights reserved.


2 Responses to “Joins, Natural Keys, Hashing MPP and #datavault”

  1. Kent Graziano 2016/11/18 at 12:02 pm #

    Thanks for the post Dan. I know a lot of folks keep bumping up against this question.

    For the non-Teradata folks I will add that I have used MD5 hashes as PKs for Hubs and Links (and stage tables!) on both Oracle and SQL Server. Yes, the join performance is a bit slower than using numeric sequences (we did actually run tests and timings), BUT the benefit of being able to load multiple objects in parallel, using calculated MD5 PKs, outweighed the join performance hit because of the ability to load faster. Regardless, in both cases we were able to virtualize the reporting layer (mostly facts and dimensions) using simple views to no ill effect. Meaning the reports were plenty fast even though all the joins used CHAR(32) columns. So both systems are positioned to expand, when/if needed, to utilize big data/Hadoop/NoSQL.

    In other words, we future-proofed the architecture by sticking to the DV 2.0 standards.

  2. Dan Linstedt 2016/11/18 at 12:29 pm #

    Hi Kent, thanks for the feedback… Actually, to assist performance further in Oracle and SQL Server, you can now switch from a CHAR(32) to a BINARY(16) in SQL Server and a RAW(16) in Oracle – yes, each platform allows this; yes, it can be made a Primary Key; and it works so much better than a CHAR(32). This is documented in my Hashing Document, which is only available to students in my CDVP2 course.
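
    A quick, hedged illustration of the size difference being described here (Python, purely to show the two representations; the column types themselves are the SQL Server / Oracle ones named above):

    ```python
    import hashlib

    key = "CUST-1001".strip().upper().encode("utf-8")

    hex_form = hashlib.md5(key).hexdigest().upper()  # 32 characters -> CHAR(32) column
    raw_form = hashlib.md5(key).digest()             # 16 bytes      -> BINARY(16) / RAW(16)

    print(len(hex_form), len(raw_form))              # 32, 16
    # Same information in half the storage, which is why the binary column
    # types tend to index and join more efficiently than the hex string.
    ```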
