#datavault and #hadoop – first looks

I am a newbie to Hadoop – so to all those die-hard Hadoopians, please forgive my ignorance and correct me where I’m wrong.  This post is a first look (opinion only) at Data Vault models in Hadoop environments.

Hadoop way over-simplified:

For those of you unfamiliar with Hadoop, you can think of it in this very oversimplified manner:

A set of code running in parallel on a set of gridded machines, with all the embedded rules for distributed computing handled for you under the covers.  Within Hadoop is HDFS (the Hadoop Distributed File System) – which takes documents, XML, files, fields – essentially any data – and stores it across the nodes.  Also in the Hadoop ecosystem is HBase – a “semi” or quasi-structured environment – storing data in essentially Key=Value pairs.

In other words: Hadoop (the conglomeration of these pieces) is a large distributed data management / storage / retrieval system built on a Key=Value pair architecture.  You can think of it as an open source, columnar-like data management system.
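Just to make that Key=Value idea a bit more concrete, here is a tiny sketch of what storing one record in HBase looks like through the classic Java client API.  The table name, column family, and values below are all made up purely for illustration: the row key is the “Key”, and every column family:qualifier cell underneath it is a “Value”:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyValueSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "customer");   // hypothetical table name

        // The row key is the "Key"; each column family:qualifier cell is a "Value".
        Put row = new Put(Bytes.toBytes("CUST-1001"));
        row.add(Bytes.toBytes("attrs"), Bytes.toBytes("name"), Bytes.toBytes("Acme Corp"));
        row.add(Bytes.toBytes("attrs"), Bytes.toBytes("country"), Bytes.toBytes("US"));
        table.put(row);

        table.close();
    }
}
```

That is really all a Put is: a bundle of key=value cells hung off a single row key.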

However, it’s deeper than that with the Key=Value pairing.  If you want to know more about Hadoop, there are tons and tons of resources available that discuss this subject in detail.

OK, so where does this leave Data Vault Modeling?

Before I go on and discuss the DV modeling aspects, let me say one more thing: today there is no “common data access layer” for Hadoop like there is with SQL and relational database systems.  To work with, in, and around Hadoop, you must write Java code… or invest in a tool that writes Java code for you, or invest in a tool that interfaces with Hadoop (in essence generating run-time Java code), and so on.

SO: the Data Vault Model.  Well, the DV model in a “physical” sense is great on relational systems, allowing the designer all the flexibility, scalability, traceability, and so on that they would want in a data integration project.  The Data Vault model brings loads of goodness to the table in utilizing relational database management systems in an MPP and distributed-data fashion.  A DV model can be, and often is, split by both horizontal and vertical partitioning mechanisms – which, when combined with highly tuned parallel engines and fast infrastructure, allow the RDBMS to operate at maximum performance.

BUT what does this have to do with Hadoop?

Well, I’ve spent the last few months reading a bit of literature, trying to understand Hadoop – and I’m getting to the point now where I’m nearly ready to install an instance and play with a DV model, per se, on that instance.  So, more on that later…

IF you want to use Hadoop – then by all means, go ahead and do so!  If you want to use DV on Hadoop, then there are a few things you should realize before getting started:

1) Your DV model should remain “mostly” logical

2) You should look into generating Hadoop data access code to handle the physical nature.  You will need a Mapper class, a Combiner class, and a Reducer class to handle the logical structure (I sketch this a bit further below).

3) You can translate the Data Vault components (hub, link, satellite, and the applied derivatives) using Java inheritance – possibly writing base routines that apply to ALL hubs, ALL satellites, etc., then overriding each with the different structural mappings (a rough sketch of this follows right after this list).
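To make point 3 a bit more concrete, here is a minimal, hypothetical sketch of that inheritance idea.  The class names, the pipe-delimited layout, and the assumption that the business key sits in the first field are all mine, purely for illustration: a base class carries the behaviour common to ALL hubs, and each concrete hub overrides only the structural mapping that differs.

```java
// Hypothetical base class: behaviour shared by ALL hubs (business-key handling, etc.)
public abstract class HubLoader {

    // Every hub must know how to pull its business key out of a raw source row.
    protected abstract String extractBusinessKey(String rawRow);

    // Shared by all hubs: normalize the business key that Hadoop will shuffle on.
    public String toHubKey(String rawRow) {
        return extractBusinessKey(rawRow).trim().toUpperCase();
    }
}

// One concrete hub overrides only the structural mapping that differs.
class CustomerHubLoader extends HubLoader {
    @Override
    protected String extractBusinessKey(String rawRow) {
        // Assumes a pipe-delimited staging row with the customer number in field 0.
        return rawRow.split("\\|")[0];
    }
}
```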

Hadoop under the covers will then map the routines to the Key=Value pairings (at least, this is what I understand so far) – and as I get further into implementation, I might change my mind!
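And here is a rough sketch of point 2 – where the Mapper and Reducer classes come in, and how those routines end up as Key=Value pairs.  Again, this is only my own illustration under a pile of assumptions (a pipe-delimited staging file, the business key in the first field, and the CustomerHubLoader class from the sketch above), not a reference implementation: the mapper emits (business key, raw row) pairs, and the reducer writes one line per distinct business key – which is essentially what a hub represents.  A Combiner could be slotted in the same way, as long as its output types match the map output.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HubCustomerLoad {

    // Mapper: reads pipe-delimited staging rows and emits (business key, raw row) pairs.
    public static class HubMapper extends Mapper<Object, Text, Text, Text> {
        private final CustomerHubLoader loader = new CustomerHubLoader();

        @Override
        protected void map(Object offset, Text row, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(loader.toHubKey(row.toString())), row);
        }
    }

    // Reducer: one output line per distinct business key, i.e. the hub's unique key list.
    public static class HubReducer extends Reducer<Text, Text, Text, NullWritable> {
        @Override
        protected void reduce(Text businessKey, Iterable<Text> rows, Context context)
                throws IOException, InterruptedException {
            context.write(businessKey, NullWritable.get());
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "hub-customer-load");
        job.setJarByClass(HubCustomerLoad.class);
        job.setMapperClass(HubMapper.class);
        job.setReducerClass(HubReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

You would run something like `hadoop jar dv-sketch.jar HubCustomerLoad /staging/customer /dv/hub_customer` (paths hypothetical) and get back the list of unique customer business keys.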

Anyhow, please tell me what you think of this entry – if you want more, if you want some examples, etc… Leave a comment below!

Thank-you kindly,

Dan Linstedt

PS: you can find out more about Data Vault at: http://LearnDataVault.com/training
