Notes on Data Vault and Teradata

In this entry I will explore the implementation of Data Vault on Teradata.  However, the entry is applicable to everyone in the Data Vault community, as it covers definitions and descriptions of primary keys, indexing, surrogate keys, natural keys, and performance and tuning.  Please note: if you notice that I have quoted or stated something incorrectly, then by all means leave a correcting comment below.

To be specific: the use of surrogate keys versus natural keys on Teradata.  The last thing I would want to do is misstate the Teradata environment.  This post is in response to an excellent question I received via e-mail.

First things first

Before I get going: this material reflects the kind of training and in-depth experience I bring to the table in both my on-line courses at http://LearnDataVault.com (we go beyond just Data Vault) and my on-site classes.  It is the result of working hands-on with the technologies and attending classes taught by Teradata Certified Instructors.

I would be happy to bring this level of expertise to your environment, and assist your project in successful data warehouse and business intelligence deployments.  Please don’t hesitate to contact me directly with requests.  I have years of experience in building high quality, high volume, real-time systems on a multitude of platforms.  I can customize a course to meet your needs.

Please comment if you like this post or find it of value. I would love to hear your feedback on these topics.

According to Teradata engineering (specifically Todd Walter), the Teradata engine executes everything in parallel – 100% parallel, 100% of the time.  At least, these were the statements he made during a Teradata Influencers meeting that I attended, and they were also the statements made (and demonstrated) during the Teradata Masters Training for consultants, which I was lucky enough to be invited to attend.

Furthermore, they also made the following statements:

  • Teradata primarily relies on hash joins
  • Teradata uses a “primary index” concept – the designation of one or more columns over which a hash value is calculated
  • Teradata spreads data across the MPP environment using a portion of that hash result
  • Teradata also offers a feature called a “join index” – a system-maintained, in effect many-to-many structure whose primary index enables co-location of different tables’ primary indexed fields (a sketch of the hashing mechanics follows this list)
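
As a minimal sketch of those distribution mechanics (table and column names here are hypothetical, not from any particular system): the primary index declaration drives the hash, and the hash determines which AMP stores each row.

    CREATE TABLE hub_customer (
        customer_hkey   BIGINT       NOT NULL,  -- surrogate key
        customer_number VARCHAR(30)  NOT NULL,  -- business key
        load_date       TIMESTAMP(0) NOT NULL,
        record_source   VARCHAR(50)  NOT NULL
    )
    UNIQUE PRIMARY INDEX (customer_hkey);  -- rows are hashed on this column
                                           -- and spread across the AMPs

    -- Teradata's built-in hash functions let you inspect the distribution:
    SELECT customer_hkey,
           HASHROW(customer_hkey)                      AS row_hash,
           HASHAMP(HASHBUCKET(HASHROW(customer_hkey))) AS amp_number
    FROM   hub_customer;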

Before I get any further…

Joe Celko wrote a fantastic article about your choices: when to use and apply natural keys, versus when to select or generate surrogate keys.  Check it out here.

And if you want to really know how to generate surrogate keys on Teradata, then this post, written by Marcio Moura, is worth reading.

Yes, there is always the “maybe not so accurate” but commonly accepted information on how to define a surrogate key: here on Wikipedia.

Tom Kyte (an Oracle guru) weighs in with his thoughts on using surrogate keys; a good short read here.

Finally, last but certainly not least, a short (but really cool) entry regarding the use of surrogate keys and slowly changing dimensions is offered by Raju Bodapati, here.  What I find particularly interesting about this post is the following statement near the end: “One good way to address this could be to implement a surrogate key with natural key for the table changing over the course of time.”

This quote reflects the changeable nature of natural keys / business keys: it is well known that they should not change, yet when they do, they cost the business time and money.

Back to basics, and the Standards of Data Vault Modeling

The foundations / standards, as I’ve defined them, attempt to solve many of the issues with converting non-temporal data into a temporal data storage format, while also addressing query problems, ease of load, accountability, usability, and so on.  One of the very early (unpublished, 1997) definitions of the Data Vault Modeling Standards stated that surrogate keys were not necessary.  However, since that time I’ve modified the standards, which now dictate that surrogate keys are necessary (or are at least defined to be part of the fundamental standard for the physical data model).

The surrogate keys are not necessary for the architecture of the Data Vault to “function properly”.  However, without surrogate keys, you are implicitly stating – and must accept – that the natural keys / business keys are non-volatile.  Not many keys can adhere to that kind of rigor.

Now, here are some requirements for a “data warehouse” (per the non-volatile, time-variant, etc. definition):

  • Business keys may or may not be natural keys
  • Natural keys and business keys may or may not be unique
  • Natural keys and business keys may change – however, as pointed out by some of the articles above, changes to these “user visible / user accessed / user relied upon” keys can be disastrous from both a temporal aspect and an auditing aspect, both of which are required by data warehousing.
  • Surrogate keys used and applied within a data warehouse (any modeling style) are generally there to overcome a) the join issues on many relational database engines, and b) the gap to the temporal world, by allowing the creation of a versioned record.

Let’s suppose the natural key or business key stays consistent over time (the temporal aspect), but something else in the record changes.  IF you are NOT using surrogate or sequence keys, then it becomes nearly impossible to create a “new version” of the record and uniquely identify it in the warehouse.

However, looking at it from a Data Vault perspective, that problem is overcome by the inclusion of a Load Date field, and possibly a sub-sequence field, in the satellite primary key.  ** NOTE: a primary key is not the same thing as a primary index in Teradata. **
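
As an illustrative sketch (hypothetical names, continuing the earlier example): the satellite’s primary key combines the parent’s surrogate key with the Load Date, plus a sub-sequence as a tie-breaker, so every change becomes a new, uniquely identifiable version.  It also shows the primary KEY / primary INDEX distinction in one piece of DDL:

    CREATE TABLE sat_customer_detail (
        customer_hkey BIGINT       NOT NULL,            -- parent hub's surrogate key
        load_date     TIMESTAMP(0) NOT NULL,            -- versioning: each change = new row
        sub_sequence  INTEGER      DEFAULT 0 NOT NULL,  -- tie-breaker within one load
        customer_name VARCHAR(100),
        customer_tier VARCHAR(20),
        record_source VARCHAR(50)  NOT NULL,
        PRIMARY KEY (customer_hkey, load_date, sub_sequence)  -- logical primary KEY
    )
    PRIMARY INDEX (customer_hkey);  -- physical primary INDEX: distributes the
                                    -- satellite on the same hash as its hub,
                                    -- co-locating parent and child rows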

In the courses I teach, I not only explain the why but also provide hands-on examples of how to do this, and demonstrate some of the differences in applied techniques.  Please contact me today, or sign up for on-line training here.

So, does the Data Vault Model require surrogate keys?

The answer is: not necessarily – until you begin to examine the next set of requirements from the standards:

  • Performance, performance, performance –
  • AND cross-platform usability (in other words, heterogeneous support for a multitude of RDBMS, appliance, and newer technologies such as Hadoop + Hive, or Netezza + Oracle, or ParAccel and SQL Server, etc.)

When you try to use multiple servers with multiple underlying technologies, linking the data sets together across these environments makes the most sense with the implementation of a physical many-to-many table (aka join index, aka Link table – regardless of whether it is Teradata or not).

Surrogate keys often take less space, so you can store more data in smaller disk blocks, and fewer I/Os are needed to access the data.  Data access in a data warehouse costs time and money – especially in big data environments – so to counteract this and other ill effects of natural world keys or business keys, the Data Vault standard says: for your physical Data Vault model, surrogates are required.
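
To put rough, purely illustrative numbers on this: an 8-byte BIGINT surrogate versus a 40-byte composite natural key saves 32 bytes per row in every table that carries the key.  Across a one-billion-row Link table, that is roughly 32 GB less key storage (before indexes), which translates directly into fewer disk blocks read on every join.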

In other words, given today’s changing infrastructure, and the goal of storing Data Vault data on multiple heterogeneous platforms, surrogate keys are necessary for transportability and join capability – especially for handling BIG DATA in a Hadoop environment.

Now, on the note of keys, identities, and definitions: the Data Vault Model draws its definitions, and this particular standard, directly from Dr. Codd.

A quote from Dr. Codd: “..Database users may cause the system to generate or delete a surrogate, but they have no control over its value, nor is its value ever displayed to them…” – Codd, E. F. (1979), “Extending the database relational model to capture more meaning”, ACM Transactions on Database Systems, 4(4), pp. 397–434 (quote at pp. 409–410).

And the follow-on from Joe Celko’s post regarding the definition of keys (to which Data Vault also subscribes):
“This means that a surrogate ought to act like an index; created by the user, managed by the system and NEVER seen by a user.  That means never used in queries, DRI or anything else that a user does.”

This is why my Data Vault methodology and standards specifically state:

“Surrogate keys used in, generated for, and applied in the physical Data Vault should never leave the Data Vault, never be shown to the user.”

On to the question at hand…

Should I use natural keys or surrogate keys in Teradata? Which is better for implementing a Data Vault on Teradata?

The answer, sadly, is: it depends.  IF you choose to use surrogate keys in Teradata, then there are the matters of how to implement them: do you leave holes (gaps in the numbering) or not? Is generation done by the loader, by an ETL tool, or by the database?  The answers to these questions can either limit or enhance load performance, and at the same time limit or enhance query performance.  However, IF you choose to use surrogate keys, then most likely you will need a join-index declaration.
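
For illustration only – one database-side option among several (loader- or ETL-generated sequences being the others) – a Teradata identity column can generate the surrogate.  A caveat: identity values are allocated in batches per session/AMP, so gaps (“holes”) in the numbering are normal.  Names below are hypothetical:

    CREATE TABLE hub_order (
        order_hkey    BIGINT GENERATED ALWAYS AS IDENTITY
                        (START WITH 1 INCREMENT BY 1)  -- DB-generated; gaps expected
                        NOT NULL,
        order_number  VARCHAR(30)  NOT NULL,           -- business key
        load_date     TIMESTAMP(0) NOT NULL,
        record_source VARCHAR(50)  NOT NULL
    )
    UNIQUE PRIMARY INDEX (order_hkey);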

There are several performance documents on tuning Teradata; one of them, here, brings to light the following interesting tidbit:

“Keeping up a join index might help, but you cannot multiload to a table which is a part of the join index – loading with ‘tpump’ or pure ‘SQL’ is OK but does not perform as well.  Dropping and re-creating a join index with a big table takes time and space.”
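
For context, a hedged sketch of what such a join index declaration looks like (hypothetical tables, reusing the earlier examples); while it exists, MultiLoad cannot target the underlying base tables:

    -- Pre-joined, system-maintained structure.  Teradata keeps it in sync
    -- automatically, but MultiLoad to hub_customer or sat_customer_detail
    -- is blocked until the join index is dropped.
    CREATE JOIN INDEX ji_customer_detail AS
    SELECT h.customer_hkey, h.customer_number, s.load_date, s.customer_name
    FROM   hub_customer h, sat_customer_detail s
    WHERE  h.customer_hkey = s.customer_hkey
    PRIMARY INDEX (customer_hkey);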

Nothing but the best is offered during my world-class training, including hands-on sessions that help you see, use and work with the results.  To find out more register for an on-line class here, or contact me directly with your questions.

So is there a way around this?

Yes.  It’s called the Link table.  Rather than implementing a join index as Teradata has defined it, you can implement a separate Link table structure and generate the primary index on the same field as the primary index of the larger of the two tables.  This ensures co-location of the data set on the nodes, so the join does not need to cross AMPs/nodes to execute.  However, this optimizes the join in one direction only, so other queries may need other Link tables, or copies of the Link table organized differently.
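
A minimal sketch of that idea (hypothetical names, reusing the hubs above): the Link’s primary index is deliberately set to the same column as the primary index of the larger parent, so joins to that parent stay AMP-local:

    CREATE TABLE lnk_customer_order (
        link_hkey     BIGINT       NOT NULL,  -- the Link's own surrogate key
        customer_hkey BIGINT       NOT NULL,  -- FK to hub_customer
        order_hkey    BIGINT       NOT NULL,  -- FK to hub_order (assumed the larger hub)
        load_date     TIMESTAMP(0) NOT NULL,
        record_source VARCHAR(50)  NOT NULL
    )
    PRIMARY INDEX (order_hkey);  -- same hash column as the largest parent:
                                 -- Link rows land on the same AMP as the matching
                                 -- hub_order rows, so that join never crosses AMPs;
                                 -- joins to hub_customer still redistribute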

It might not be as fast as maintaining an internal join index, but it gives you the flexibility of adding custom attributes to the Link table, along with the ability to co-locate a Satellite.

Furthermore, the Link table is cross-database compatible: the architecture and physical implementation will work on any underlying infrastructure (Hadoop + Hive, Cloudera, Netezza, ParAccel, etc. included!).  The join index is Teradata-only.

In conclusion…

Surrogate keys are an important and integral part of the physical nature of data warehousing, Data Vault model or not!  Natural world keys can and should be used – but sparingly, and only if the natural world key is guaranteed to stay consistently single-valued 99% of the time (which may be a pipe dream).  I believe, after research, that joins are in fact faster (regardless of relational database platform) when using surrogate keys as opposed to natural world or business keys.

A key part of portability and heterogeneous environments is the Link table, instead of the Teradata-driven join index.  The join index may be query-optimized within Teradata, but it will slow down and inhibit the use of the Teradata loaders.  The Link table “appears” as a standard table to Teradata and is simply hashed the same way as ONE of the parent tables (preferably the largest parent table) to achieve maximum co-location.  You can also use any and ALL Teradata loaders for high-speed loading with the Link table – though yes, it may or may not take full advantage of the optimizations a join index offers.

The Link table is a standard structure defined within the Data Vault modeling context.  It is a highly powerful and flexible structure when used and applied appropriately.  You can learn more about the Link table by purchasing my book, “Super Charge Your Data Warehouse” (available on Amazon); it contains all you need to know about the standards and definitions for the Data Vault model.

You can also learn more about implementing the Data Vault by signing up for my on-line courses today.

To recap the key points:

  • Sequences and identity columns are different, as far as definitions go; an identity is a function applied to a column.
  • Primary keys and primary indexes are not the same thing, as far as Teradata is concerned.
  • Teradata uses hashing to distribute the data sets across the nodes as evenly as possible.
  • Hashing on a surrogate key works the same way as hashing on a natural key (or business key).
  • Numeric surrogate keys take up less space and therefore generally require less I/O – meaning they are faster at joins, regardless of the underlying technology.
  • Numeric surrogate keys are meaningless without being tied to a natural world key or business key.
  • Natural keys and business keys may be different, or may be the same (just ask my good friend Kent Graziano about this one).
  • Natural keys / business keys MAY change, requiring multi-key resolution for master data purposes; surrogate keys are good for this when placed in a hierarchical Link structure (see the sketch after this list).
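
As a sketch of that last point (hypothetical structure): a hierarchical / same-as Link maps a surviving surrogate to the superseded one, resolving a changed or duplicated business key without rewriting history:

    CREATE TABLE lnk_customer_sameas (
        link_hkey        BIGINT       NOT NULL,  -- the Link's own surrogate key
        master_cust_hkey BIGINT       NOT NULL,  -- surviving (master) surrogate
        dup_cust_hkey    BIGINT       NOT NULL,  -- superseded / duplicate surrogate
        load_date        TIMESTAMP(0) NOT NULL,
        record_source    VARCHAR(50)  NOT NULL
    )
    PRIMARY INDEX (master_cust_hkey);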

So, is this problem exclusive to Teradata? I think not.  Would it be nice if we could use natural world keys / business keys as our Primary KEY (notice the wording here)?  Yes, it would be wonderful – but given that they change, and are sometimes even missing, it is a utopian pipe dream to think we can apply that in 100% of our EDW use cases; hence it is unenforceable as a common standard.

My deep knowledge and experience in the implementation world allow me to bring this best-of-breed information directly to you.  It is also one of the driving factors in why I maintain, evolve, set up, and own the worldwide Data Vault standards; my experience allows me to continually evolve these standards to suit the needs of the industry while maintaining their integrity.

Having firm Data Vault standards to build on helps ensure your future project success, which is something I care deeply about.  I want to personally make sure that I am providing working knowledge derived from application and implementation in the field.  Contact me today to chat about the possibilities.

I hope to hear from you in the comments below.


3 Responses to “Notes on Data Vault and Teradata”

  1. Ari Hovi 2012/09/25 at 11:35 pm #

    Thank you for this fast and thorough answer to my question! It’s great to get help from the ‘source’.

  2. Dan Linstedt 2014/05/02 at 4:49 am #

    FOLKS… HERE IS AN UPDATE.

    I have since released Data Vault 2.0 – which states that to be a “DV2.0 compliant” model, the model MUST utilize hash keys – i.e., replacing surrogate keys with hashing techniques / hash keys. These have proven to be THE method for “extreme scale” data warehouses (beyond big data – into the petabyte ranges). On Teradata, surrogates are nice, but they cause bottlenecks in processing and lookup performance. They also *require* linear processing – which, in extreme volume cases, simply isn’t affordable or feasible.

    Using hash keys and the hashing techniques found in DV2.0, you can overcome all of these limitations.
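
    A minimal sketch of the idea (hypothetical staging table; this assumes an MD5-style SQL function is available – on Teradata that typically means an installed hash UDF such as hash_md5, not a built-in):

        -- DV2.0-style hash key: a digest of the normalized business key.
        -- Deterministic, so it can be computed fully in parallel, on any
        -- platform, with no sequence generator and no lookup on the target.
        SELECT MD5(UPPER(TRIM(customer_number))) AS customer_hkey,
               customer_number
        FROM   staging_customer;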

    I have a document that students (of my classes) on http://LearnDataVault.com receive which talks about the Hashing standards, and teaches you how to do this properly.

    I also have several customers at this time using Teradata for their DV2.0 / EDW efforts, and they are very happy with the hashing for a variety of reasons.

    Hope this helps,
    Dan Linstedt

