Does Hyper-Normalization really exist? Or is it just more #datavault?

I’ve heard a lot of rumblings in the market recently about Hyper-Normalization, or at least that’s what some are calling it.  I’ve also heard that there are classes offered at TDWI around hyper-normalization techniques, which are really just normalized data modeling based on Data Vault.  In this entry, I will discuss my viewpoints on hyper-normalization.

Some in the industry are using a term called “Hyper-Normalization”.  Before I go any further, I went looking for a good definition of what this truly means.  Unfortunately, none exists.  So I asked my friend CJ Date, and he said: “No such thing exists.”   In my opinion, if the man who helped establish normalized forms for data modeling says no such thing exists, well then, I tend to believe him.

I agree with CJ Date: I don’t believe there is such a thing as “hyper-normalization”.

To understand where I am coming from, let’s examine the definition of the prefix “hyper-”:

“a prefix appearing in loanwords from Greek, where it meant ‘over,’ usually implying excess or exaggeration (hyperbole); on this model used, especially as opposed to hypo-, in the formation of compound words (hyperthyroid).”  http://dictionary.reference.com/browse/hyper-

Now, this term Hyper-Normalization has been used to describe Data Vault Modeling.  Why?  Data Vault Modeling really sits between 3rd normal form and 4th normal form.  If I were to correctly apply the label “hyper” to Data Vault Modeling, then I would be saying that Data Vault Modeling goes beyond 6th normal form, and clearly it doesn’t even come close to that level of abstraction or exaggeration.

So, is the Data Vault “HYPER-NORMALIZATION”?  No.  Definitely not.  Are forms of the Data Vault Model “hyper-normalized”?  No, definitely not.

Let’s go back to the definitions of normalized data modeling formats:

  • First normal form (1NF) is a property of a relation in a relational database. A relation is in first normal form if the domain of each attribute contains only atomic values, and the value of each attribute contains only a single value from that domain.[1]
  • Second normal form (2NF) is a normal form used in database normalization. 2NF was originally defined by E.F. Codd in 1971. A table that is in first normal form (1NF) must meet additional criteria if it is to qualify for second normal form.
  • The third normal form (3NF) is a normal form used in database normalization. 3NF was originally defined by E.F. Codd in 1971. Codd’s definition states that a table is in 3NF if and only if both of the following conditions hold: the relation R (table) is in second normal form (2NF), and every non-prime attribute of R is non-transitively dependent on every superkey of R.
  • Fourth normal form (4NF) is a normal form used in database normalization. Introduced by Ronald Fagin in 1977, 4NF is the next level of normalization after Boyce–Codd normal form (BCNF).
  • Fifth normal form (5NF), also known as project-join normal form (PJ/NF), is a level of database normalization designed to reduce redundancy in relational databases recording multi-valued facts by isolating semantically related multiple relationships.
  • A relvar R [table] is in sixth normal form (abbreviated 6NF) if and only if it satisfies no nontrivial join dependencies at all — where, as before, a join dependency is trivial if and only if at least one of the projections (possibly U_projections) involved is taken over the set of all attributes of the relvar [table] concerned.[Date et al.][3]  Sixth normal form is intended to decompose relation variables to irreducible components.

** All Definitions provided by Wikipedia **
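
To illustrate the 6NF definition in the last bullet above, here is a minimal Python sketch of decomposing a relation into irreducible components; the table and column names are hypothetical, purely for illustration:

```python
# A 3NF-style relation: one key, several descriptive attributes
# hanging off it in a single structure.
customer = {
    "C001": {"name": "Acme", "city": "Denver"},
    "C002": {"name": "Globex", "city": "Boston"},
}

# Its 6NF decomposition: one irreducible relation per attribute,
# each holding nothing but the key and a single dependent value.
# No nontrivial join dependency remains to decompose further.
customer_name = {"C001": "Acme", "C002": "Globex"}
customer_city = {"C001": "Denver", "C002": "Boston"}

# Joining the irreducible pieces back on the key reconstructs the
# original relation without loss.
rejoined = {
    key: {"name": customer_name[key], "city": customer_city[key]}
    for key in customer_name
}
assert rejoined == customer
```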

Now, the HUB ENTITY:

If we remove all the system fields (Sequence, Load Date, Record Source, Last Seen Date), then the HUB is just a unique list of business keys, which means it qualifies as 6NF (sixth normal form): irreducible components.
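
To make that concrete, here is a minimal sketch in Python; the entity and field names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CustomerHub:
    # The lone business attribute: a unique business key.
    # Strip the system fields below and this is all that remains --
    # an irreducible component, which is the 6NF claim.
    customer_number: str
    # System fields:
    sequence_id: int
    load_date: datetime
    record_source: str
    last_seen_date: datetime
```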

The LINK ENTITY:

If we remove all the system fields (Sequence, Load Date, Record Source, Last Seen Date), then the LINK is just a unique list of relationships, which means it qualifies as 5NF (fifth normal form): multi-valued facts isolated into semantically related relationships.
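
A similar sketch for the LINK, again with hypothetical names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CustomerProductLink:
    # The relationship itself: references to the Hubs it connects.
    # Strip the system fields and what remains is a unique list of
    # relationships -- the isolated multi-valued fact of the 5NF claim.
    customer_hub_id: int
    product_hub_id: int
    # System fields:
    sequence_id: int
    load_date: datetime
    record_source: str
    last_seen_date: datetime
```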

The SATELLITE ENTITY:

We cannot remove the system fields and still maintain data over time; hence, the SATELLITE is descriptive data over time, which means it is 3NF (third normal form), where every non-prime attribute of R is non-transitively dependent on every superkey of R.
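
And a sketch of the SATELLITE, where the system fields must stay because they version the descriptive data (names again hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CustomerSatellite:
    # The parent Hub key plus the load date form the primary key;
    # every descriptive attribute depends on that key directly and
    # non-transitively, which is the 3NF claim.
    customer_hub_id: int
    load_date: datetime
    record_source: str
    # Descriptive attributes, versioned by load_date:
    customer_name: str
    customer_email: str
```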

So, where does that leave us?  If we add the system fields back to the Hub, the Hub moves to 3NF.  If we add the system fields back to the LINK, the Link likewise moves to 3NF, because the system fields are additional non-prime attributes that depend directly (and non-transitively) on the key.

None of these data modeling components are “hyper-normalized”.  Sorry, but that term simply cannot be expressed mathematically, and quite frankly makes no sense mathematically.

The next question is: what exactly is Hyper-Generalization?  Well, I disagree with that term as well.

Hyper-generalization (as defined by some in the industry) is the notion of “super-super-type modeling”.  Again, I think this is just a marketing term that should be thrown away.   I tend to agree here with another well-known friend of mine, David C Hay, who states that the proper term is “universal data model”, which really is the “THING” model.

Therefore, in conclusion:

1) Hyper-Normalization?  No such thing exists.

2) Hyper-Generalization? No such thing exists.

“Industry Conferences” are perpetuating incorrect instruction in the data modeling space by allowing presentations that continue to reinforce or suggest the existence of these terms within the data warehousing industry.

Furthermore, what some individuals are calling “hyper-normalized data warehousing” is in fact just “Data Vault patterns” based on standard normalized forms.

I’d be curious to know what you think.

Hope this helps to clear the air,
Dan Linstedt


2 Responses to “Does Hyper-Normalization really exist? Or is it just more #datavault?”

  1. Stephen Pace 2015/05/03 at 10:52 pm

    I wrote a response on the hyper modeling terms. Bottom line, I think the hyper terms add value because the average data warehouse person that hears the term ‘normalized’ data warehouse is going to think 3NF whereas the average data warehouse person that hears ‘hyper’ modeling is going to either realize something different is going on and do more research or will already have a pretty good idea what is happening. Language evolves, and this is no different.

    http://blog.kalido.com/modeling-hyper-generalization/

  2. Dan Linstedt 2015/05/04 at 2:00 pm

    Hi Stephen,

    While I appreciate your reply, I still tend to agree with people like David C Hay and CJ Date. I still tend to state that the terms introduce messy, unqualified definitions which muddy the waters in the marketplace. I do not subscribe to the terms, because no solid definition exists. Whereas terms like Big Data actually DO have a solid definition offered by the key industry people who apply the term.

    Again, I truly wish that TDWI would come to the founder (i.e., me) for proper training in Data Vault techniques during their conferences. However, that is neither here nor there.

    Thank you for your feedback, and yes, it is always good when concepts are given due time at industry events; however, in this case they seem to be misrepresented, and that, I’m afraid, does not do anyone any good.

    Glad to see that Kalido is finally interested in Data Vault; that is a neat thing to see indeed.

    Good Tidings,
    Dan Linstedt
