FAQ

Frequently Asked Questions

For Data Vault modeling, methodology, architecture, and implementation. I will do my best to answer questions here as they come up. Feel free to post new questions; please read the category descriptions to keep them in the right place.


Data Vault Implementation

I recently saw a post about SQL Server deprecating older hashing algorithms, requiring the newer SHA2 versions to be used, which would increase the hash size. I tried to reply to that but could not for some reason, so I'll just pitch the question here. We currently get around this by converting the value from HashBytes to a BIGINT. That has the plus of introducing integer-based joins versus character-based hash joins, as well as providing good partition distribution, but we have always wondered if it increases the risk of collisions. We've tested this with all the algorithms and have yet to come across a collision… keeping our fingers crossed. I actually wondered why this was not mentioned in the book as an alternative. Is it because it could increase the chance of collisions, or some other consideration, Dan?

  • Dan Linstedt says:

    No. In most relational databases (except for Java implementations), BIGINT is too small to carry the full converted hash.
    They will automatically truncate bits WITHOUT raising an error. Please NEVER do this, as it will cause far more problems than it is worth.
    BIGINT: https://docs.microsoft.com/en-us/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql
    Size is 8 bytes.
    Result size of the hash is 16 bytes (DOUBLE the size of a BIGINT!).

    You can and should test this by converting the BIGINT BACK to binary and comparing it to the original hash; as you will find, the values will not match.

    Sorry, you are causing serious problems with your hashing by doing this.
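
For illustration, a minimal T-SQL sketch of the round-trip test described in the answer above (the business-key value is made up):

    -- Illustrative only: casting the 16-byte hash to BIGINT silently keeps
    -- only 8 bytes, so the conversion cannot be reversed.
    DECLARE @bk NVARCHAR(100) = N'CUSTOMER|12345';

    DECLARE @full_hash VARBINARY(16) = HASHBYTES('MD5', @bk);      -- 16 bytes
    DECLARE @as_bigint BIGINT        = CAST(@full_hash AS BIGINT); -- truncated, no error raised

    SELECT
        DATALENGTH(@full_hash)        AS hash_bytes,       -- 16
        @full_hash                    AS full_hash,
        @as_bigint                    AS truncated_bigint,
        CAST(@as_bigint AS BINARY(8)) AS converted_back;   -- only 8 bytes, does not match full_hash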

HashBytes and MD5 Deprecated in SQL Server 2016

I found this information in the SQL Server 2016 documentation.

"Beginning with SQL Server 2016, all algorithms other than SHA2_256, and SHA2_512 are deprecated. Older algorithms (not recommended) will continue working, but they will raise a deprecation event."
https://docs.microsoft.com/en-us/sql/t-sql/functions/hashbytes-transact-sql

Would using this have a performance impact? Would you recommend using either of the supported algorithms as a DV standard?
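
For reference, a quick T-SQL sketch of the output sizes HASHBYTES returns for the deprecated and supported algorithms (the input value is just an example); the wider SHA2 output means wider key columns, which is one likely factor in the performance impact discussed in the answer below:

    DECLARE @bk NVARCHAR(100) = N'CUSTOMER|12345';

    SELECT
        DATALENGTH(HASHBYTES('MD5',      @bk)) AS md5_bytes,       -- 16 (deprecated)
        DATALENGTH(HASHBYTES('SHA1',     @bk)) AS sha1_bytes,      -- 20 (deprecated)
        DATALENGTH(HASHBYTES('SHA2_256', @bk)) AS sha2_256_bytes,  -- 32
        DATALENGTH(HASHBYTES('SHA2_512', @bk)) AS sha2_512_bytes;  -- 64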

  • Dan Linstedt says:

    Answer: YES. This particular change to hashes in SQL Server 2016 WILL impact performance in a negative fashion, not just for loading but also for querying.

    In reality, we truly WANT to leverage Business Keys. Sadly, SQL Server does not "hash bucket" the business keys for partitioning under the covers. Teradata, SAP HANA, Kudu, and Hive are all capable of hashing by Business Key. So I will look deeper at this function in an attempt to find a better solution.
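
To make the trade-off concrete, here is a purely illustrative sketch (table, column, and source names are hypothetical, not a prescribed standard) of a hub load keyed on a SHA2_256 hash of the business key in SQL Server; the 32-byte key column is where the extra load and query cost shows up, compared with joining directly on the business key:

    -- Hypothetical hub structure: 32-byte SHA2_256 hash key plus the business key itself.
    CREATE TABLE hub_customer (
        hub_customer_hk BINARY(32)    NOT NULL PRIMARY KEY,  -- SHA2_256 output is 32 bytes
        customer_bk     NVARCHAR(100) NOT NULL,              -- natural business key
        load_dts        DATETIME2     NOT NULL,
        record_source   VARCHAR(50)   NOT NULL
    );

    -- Load only business keys not already present in the hub.
    INSERT INTO hub_customer (hub_customer_hk, customer_bk, load_dts, record_source)
    SELECT
        HASHBYTES('SHA2_256', src.customer_bk),  -- hash key derived from the standardized business key
        src.customer_bk,
        SYSUTCDATETIME(),
        'CRM'                                    -- hypothetical record source
    FROM (
        SELECT DISTINCT UPPER(LTRIM(RTRIM(customer_bk))) AS customer_bk
        FROM staging_customer                    -- hypothetical staging table
    ) AS src
    WHERE NOT EXISTS (
        SELECT 1 FROM hub_customer AS h
        WHERE h.customer_bk = src.customer_bk
    );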
