#datavault #hadoop #hive #nosql Modeling Breakdown

I’ve begun researching Data Vault models on Hadoop solutions, including HadoopDB and Hive. Recently I came across a number of articles describing Hive and HadoopDB in detail. I had to take a minute to write this article to explain my viewpoint on using the Data Vault model on a Hadoop solution. I also explain where the Data Vault Model fits in the NoSQL or non-relational world, and why it’s still relevant. Furthermore, I touch on the changing nature of certification – why it’s not so relevant anymore.

What’s the beef with Hadoop, HadoopDB and Hive?

Here is a scientific and mathematical article (one of many) which discusses the issues with HadoopDB when compared with Hive. Although it was written quite a while ago, it is still relevant, particularly in pointing out that HadoopDB “replaces” the HDFS back-end storage of Hadoop with its own relational libraries.

http://www.slideshare.net/thkoch/hadoopdb-a-major-step-towards-a-dead-end

Now, regarding DV on Hadoop: I’ve said it before in my blogs, and I’ll say it again: the Data Vault *must* become a logical model, to the point where columnar DBs, KV stores, and triple stores don’t physically implement it. In other words, Hive becomes a great solution, because it turns the data model into a logical-only component.
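
To make that concrete, here is a minimal sketch (the table name, columns, and HDFS path are hypothetical) of what “logical only” means in Hive: an external table is nothing more than metadata in the Hive metastore laid over files in HDFS, so the model can change without the physical storage changing at all.

    -- Minimal sketch, hypothetical names: a Hub defined as a Hive
    -- EXTERNAL table. The definition is schema-on-read metadata only;
    -- the raw files sitting in HDFS are never restructured to fit it.
    CREATE EXTERNAL TABLE hub_customer (
      customer_bk    STRING,   -- business key
      load_dts       STRING,   -- load date/time stamp
      record_source  STRING    -- source system identifier
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION '/data/raw/customer';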

Personally, I am choosing Hive as my implementation paradigm for testing in my Data Vault Labs.

At this point I would highly recommend Hive over HadoopDB, but that is a personal choice.

How to Put a DV on a Non-Relational DBMS

IF you *really* want a physical DV on a system like this, then you *must* follow these rules:

  1. DENORMALIZE – BUT ONLY AT THE PHYSICAL MODEL LEVEL!
    • The Hub must be replicated into each Satellite,
    • Link parent data must be replicated into each Satellite.

It really is this simple (see the sketch below).
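
As a hedged illustration (all table and column names here are hypothetical), this is roughly what that denormalization looks like in Hive DDL: the Hub’s business key travels into each of its Satellites, and a Satellite hanging off a Link carries all of the Link’s parent keys, so neither needs a join to be read.

    -- Sketch only, hypothetical names: a Satellite carrying its Hub's
    -- business key, so it can be queried without joining back to the Hub.
    CREATE TABLE sat_customer_details (
      customer_bk     STRING,   -- Hub business key, replicated in
      load_dts        STRING,   -- load date/time stamp
      record_source   STRING,
      customer_name   STRING,   -- descriptive attributes
      customer_email  STRING
    );

    -- A Satellite on a Link carries ALL of the Link's parent keys.
    CREATE TABLE sat_customer_product (
      customer_bk     STRING,   -- parent Hub key, replicated in
      product_bk      STRING,   -- parent Hub key, replicated in
      load_dts        STRING,
      record_source   STRING,
      purchase_qty    INT       -- descriptive attribute
    );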

This goes for DB2 (AS/400), Netezza, ParAccel, Vertica, Greenplum, Sybase IQ, Hadoop + Hive, and any other KV store, triple store, graph store, or column-based store.

Why should I use the DV model then?

Good question.  There are still business reasons for using the Data Vault model, such as flexibility and adaptability – BUT you should be thinking LOGICAL DATA MODEL and not physical data model.  However, at this point, the real value is in the methodology rather than the data model!

The benefits of the methodology still stand:

  • Accountability
  • Auditability
  • Understanding of the business
  • Tracking / tying the business processes to the data model
  • Ease of use
  • Ease of build out
  • Low Complexity
  • Raw Data Loading
  • Massive Parallelism (from the Job Design perspective – see the sketch after this list)
  • Massive Scalability (from the Job Design perspective)
  • Easy team scale-up (easy to add team resources)
  • Performance and Tuning Simplicity
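
On the parallelism point, here is a quick hedged sketch (the staging and target table names are hypothetical): because every Hub load depends only on staging data, all Hub loads can run concurrently, with Link and Satellite loads following as a second parallel wave.

    -- Sketch only, hypothetical names: these two statements share no
    -- dependency, so the job scheduler can run them at the same time.
    -- Links and Satellites form the next wave once their Hubs are loaded.
    INSERT INTO TABLE hub_customer
    SELECT DISTINCT customer_bk, load_dts, record_source FROM stg_orders;

    INSERT INTO TABLE hub_product
    SELECT DISTINCT product_bk, load_dts, record_source FROM stg_orders;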

So you see, the value of the model shifts when the physical implementation is changed to a Hadoop solution.  The value of the model now becomes one component of the methodology; and then – it’s the value of the methodology that makes a whole lot more sense.

So, that said, can I use ANY data modeling technique I want and still follow the Data Vault Methodology?

Yes.  In fact, the future IS in methodology, and in automating the methodology to the best of our abilities.  THIS is the focus of the future, and by the look of the cards, the future is already here to some degree.

Wait a minute, did you just say don’t use the Data Vault Model?

Nope.  Not what I said at all; go back and re-read the benefits section.  If you believe that the model holds no value now, then you’ve missed my point.

In a non-relational data storage system, the Data Vault Model still holds value as a component in the Methodology, leading to better business understanding, among the other benefits listed above.

What I truly am saying is: the Data Vault Model is fast becoming a logical design choice rather than a physically implemented solution.

For FAR TOO LONG, data models have been driven by taking logical designs and implementing them physically, one-to-one, in relational database engines.  Well, now that we have non-relational database engines, that entire philosophy is changing; and the data model (of any kind) can finally begin to shine as a value-added asset to the business from a logical standpoint.

What’s the end result?

IF you are moving toward any of the non-relational database management systems for storing your data (Hive, Hadoop, NoSQL, columnar, etc.), the end result is this:

  1. Certification in the Data Vault Model begins to lose some of its luster or value to enterprises.  Everything you really need in order to understand the Data Vault model, you can now get from the book on Amazon or Kindle Select: Super Charge Your EDW.
  2. Understanding the Data Vault Methodology, on the other hand, begins to be really important!  As does learning how to automate the build-out of your Data Integration solution.

More will come on the changing value of Data Vault Model certification in future posts.

And from a Non-Relational (or Hadoop) Perspective:

  • Less and less emphasis will be placed (as it should) on the physical data model.
  • More and more emphasis will be placed (as it should) on the logical data model.
  • My prediction: more and more businesses will shift away from traditional RDBMS engines to non-relational engines (like Hadoop + Hive, and others).
  • There will be (in the future) a blended database management system that allows the best of both worlds in a seamless setup.
  • METHODOLOGY & AUTOMATION of your Operational Data Warehouse / Real-Time Data Warehouse will move to the forefront.
  • Your ability, as an IT individual, to understand your business will also move to the forefront.
  • Your ability to connect business processes directly to the logical model of where the data is stored will move to the forefront.

Again, the value of simple data-model-based certification is beginning to wane (across the board, and that’s not just for Data Vault Modeling certification).  The value of becoming certified in a repeatable and scalable methodology is really where the gold is; or at least – the value of understanding how to implement and generate a repeatable, scalable method for your data integration project becomes much more of a pressing need.

Why?  Because the methodology is technology agnostic and carries with you through any underlying technology changes.

I’m looking forward to hearing your comments, please do add them below.

Cheers,
Dan Linstedt
http://LearnDataVault.com

One Response to “#datavault #hadoop #hive #nosql Modeling Breakdown”

  1. Kent Graziano 2012/06/04 at 3:16 pm

    That last bit is pretty much what we are doing on my current project – modeling the data vault based on business process decomposition (and state changes). Currently implementing in Oracle but that may change before we are done.

    Based on the repeatable patterns our programmer is building a tool to generate PL/SQL load procedures to accept strings of information for real-time loads.

    Will keep you posted as we progress. Very exciting stuff!
