Introduction to Hadoop, Big Data and Data Warehousing

This is an introductory look at data warehousing as an industry and its application to the Hadoop platform.  I cover only some of the fundamental challenges, benefits, and issues we face when we are told to move existing data warehouses onto a Hadoop-based platform.

The buzzword: Hadoop and Big Data

The current “buzz” is all about Big Data, and whenever anyone mentions big data, a Hadoop-based storage system immediately comes to mind or enters the discussion.  It seems as if businesses “all of a sudden” have severe memory loss regarding relational database engines, and the hundreds of millions of dollars in sunk investment in their existing infrastructure.

Some immediately begin to ask the questions:

  • Can I get my data warehouse on Hadoop?
  • Why do I need a data warehouse if I have Hadoop?
  • Can’t Hadoop just “replace” my relational database engines?

And of course, they ask probably a hundred more.  The smart businesses will realize that Hadoop is a complementary platform to your existing infrastructure.  It is not a replacement for all of your existing systems.  By the way, Hadoop is a management framework for large, scalable data sets.  It is not even a full data management engine!

Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.   http://searchcloudcomputing.techtarget.com/definition/Hadoop
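
To make that definition a little more concrete, below is a minimal sketch of the classic “word count” MapReduce job, written against the standard org.apache.hadoop.mapreduce API.  It is an illustration only: the input and output paths arrive as command-line arguments, and the input is assumed to be plain text files already sitting in the distributed file system.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: each map task reads a split of the input files and emits (word, 1).
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts for each word across all of the map tasks.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. /raw/books/
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must be a brand-new directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

The map work runs wherever the file blocks happen to live, and the reduce work gathers the partial results – that distribution of processing out to the data is the whole point of the framework.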

So to really bring the focus home, comparing Hadoop with RDBMS (relational database management systems) is like comparing apples to oranges.

In another post I will dive deeper into a basic overview of Hadoop and the solutions built in, around, and on top of the processing engine (things like Hive, HiveDB, Pig, Cloudera, MapR, HBase, and so on).  For now, we will go back to the topic at hand.

Can I get my Data Warehouse on Hadoop?

Yes, but it’s not easily done, and it may not be the right platform for all EDW processing!  In other words, some things can and probably should be left on a relational database system (like Teradata, Oracle, Sybase, SQL Server, MySQL, DB2 UDB, and so on).

Some of the key points of the Hadoop-based technology include the following.  (Some of these points are paraphrased from this document: http://infolab.stanford.edu/~ragho/hive-icde2010.pdf)

  • The pure file store is a write-once, read-many solution (meaning NO UPDATES and NO ALTERATIONS of existing “structures” / files)
  • It is a pure file store, without any referential integrity (yes, this is what you give up to get performance – all of that management overhead goes out the window in order to achieve speed)
  • Partitioning is wonderful, BUT the “columns” that the data is partitioned by do NOT exist in the data set anymore – they are encoded as part of the directory tree structure.  (I’ll explain why this is interesting in a bit)
  • Partitioning IS truly file splitting into separate physical directories (sometimes separate physical machines)
  • Hadoop is a load-and-go file management system – meaning you copy raw files into the Hadoop platform; there is no such thing as “ETL” for getting files IN to Hadoop.  They are copied in, and THEN the transformation rules must be written in code.
  • Right, EL-T is truly equal to “FILE COPY (EL), followed by INSERT INTO a NEW FILE from the OLD FILE(s), combined with transformation rules” – yes, to transform old data, you MUST create new files.  (A minimal sketch of the copy-in step follows this list.)
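
To make the “EL” half of that concrete, here is a minimal sketch using the standard Hadoop FileSystem API.  Every path and file name in it is a hypothetical example.  Notice that the partition value (the load date) lives only in the directory name – exactly the point made above about partition “columns” disappearing from the data set itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CopyIntoHadoop {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up fs.defaultFS from core-site.xml
        FileSystem fs = FileSystem.get(conf);

        // The "partition column" exists only as part of the directory tree:
        // everything under load_date=2012-11-26 is implicitly tagged with that date.
        Path target = new Path("/landing/orders/load_date=2012-11-26/");
        fs.mkdirs(target);

        // The "EL": a straight copy of the raw source extract - no transformation at all.
        fs.copyFromLocalFile(new Path("file:///data/extracts/orders_20121126.csv"), target);

        // The "T" happens later, in code: a separate job reads these files and writes
        // NEW files somewhere else (see the aggregation sketch after the benefits list).
        fs.close();
      }
    }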

Some Benefits of this approach include:

  • Rapid loading (a basic file copy – how much easier or faster can it get?)
  • Rapid transformation (because there is NO referential integrity, it is easy to create new “aggregations” of data – see the aggregation sketch after this list)
  • Distributed computing with MPP / Shared Nothing algorithms
  • Easy, easy, easy to create new “aggregates” and to create a feeling of “self-service business intelligence” for users who can “write the code” to get what they want
  • Automated compression – in some Hadoop implementations (and some commercial systems) file compression is built in, reducing the amount of storage necessary to house the data
  • NO DELTA PROCESSING NECESSARY!  In fact, because it is a file system, “load and go” is about the only thing that can be implemented.  This can be seen as both a benefit and a drawback.  If you want “delta sets”, you have to copy the new file in and run algorithms for deltas that produce new delta files.
  • SCHEMA-LESS (to a degree) storage.  I say to a degree because you still have to define the columns, and in some cases the base data types, for the file in order for the code to “map” to the elements.  On the other hand, some types of files just work natively (already mapped), like XML, and still other documents (like plain TEXT files) simply work based on internal searching for tags.  Schema-less is both a benefit and a drawback.
  • No SQL – well, you don’t have the limitations of SQL – however, you don’t have the benefits of SQL access either.
  • No “normalization” to speak of – it is not necessary to normalize data sets; in fact, it’s discouraged, in order to avoid “joins” across node sets.
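
As a rough illustration of how one of those “new aggregates” gets built, the sketch below reads hypothetical raw order files (assumed to be comma-delimited as order_id,customer_id,amount) and writes a brand-new file set containing the total amount per customer.  Every path, class name, and field position here is an assumption made for the example, not a prescribed layout.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SalesByCustomer {

      // Mapper: parse each raw line and emit (customer_id, amount).
      public static class ParseMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        private final Text customer = new Text();
        private final DoubleWritable amount = new DoubleWritable();

        @Override
        protected void map(LongWritable key, Text line, Context context)
            throws IOException, InterruptedException {
          String[] fields = line.toString().split(",");
          if (fields.length < 3) return;              // no constraints here: just skip malformed rows
          try {
            amount.set(Double.parseDouble(fields[2]));
          } catch (NumberFormatException e) {
            return;                                   // skip header or bad rows
          }
          customer.set(fields[1]);
          context.write(customer, amount);
        }
      }

      // Reducer: sum the amounts for each customer.
      public static class SumReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        private final DoubleWritable total = new DoubleWritable();

        @Override
        protected void reduce(Text customer, Iterable<DoubleWritable> amounts, Context context)
            throws IOException, InterruptedException {
          double sum = 0.0;
          for (DoubleWritable a : amounts) sum += a.get();
          total.set(sum);
          context.write(customer, total);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "sales by customer");
        job.setJarByClass(SalesByCustomer.class);
        job.setMapperClass(ParseMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        // Old files in, NEW files out - the input is never updated in place.
        FileInputFormat.addInputPath(job, new Path("/landing/orders/load_date=2012-11-26/"));
        FileOutputFormat.setOutputPath(job, new Path("/derived/sales_by_customer/2012-11-26/"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

No keys, no constraints, no model – which is exactly why it is so easy, and exactly why governance matters (more on that below).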

Ok, so fundamentally Hadoop is a managed file store with capabilities for fully distributed file-processing rules.  It sounds almost like Ab Initio (one of the long-time ETL tools in the marketplace – they did the same thing with their Co>Operating System and ETL scripting code).

Some Drawbacks of this approach:

  • No referential integrity
  • Lack of proper partitioning BY THE ARCHITECT/ENGINEER – or even by an end user who simply “loads” a file – can easily cause havoc (a single hot node with all of the activity).  Architecture and file layout become critical, but without governance or management this is lost on the business.  This is something that Teradata does really, really well (it manages the partitioning FOR you according to the data model architecture – the primary index selection).
  • In short: lack of proper partitioning leads to a serious loss of performance
  • Constant “re-partitioning” as data sets are dumped/loaded in MAY be required, which is WHY you see so much written about Hadoop addressing “machine-generated data sets” so well.  Machine-generated data sets can have a single, consistent partition, whereas data warehousing data sets may vary over time – causing the need to re-partition.  WHICH, in the case of Hadoop, means (you guessed it!) MOVING all of the files across the network to put them on the differently distributed machines!
  • Lack of “data modeling architecture” – let’s face it, “users” are simply lazy, and if they don’t have to do data modeling to make a system work, well then, chances are pretty low that they will do it.  This is a drawback from a governance and management side.  With Hadoop, you can easily end up with hundreds of files, duplicated all across the system, with no real idea as to what columns are there, who’s using what, and how it’s being applied / loaded / used.  Remember: I’m talking about files that are “not machine generated”.  In a data warehousing sense, these files are source system files, operational files, external XML files, and so on.  AND if we switch the EDW to self-service, well then – where’s the governance on the business users who CREATE new files from existing files that are already stored?
  • Ok, lack of governance (governance and management are KEYS to the successful use and application of Hadoop).  Look – if you don’t have it in the referential integrity layers, then you MUST have it in the human management layers.  A “DBA for Hadoop systems” is just as important as a DBA role for relational systems.
  • Little to no joins – well, just as this is a positive in some cases, it is also a negative.  It leads to replicated data in multiple file sets, and the more data is replicated, the more it can cause/create disparate BI answers!!

Please don’t misunderstand me.  I am NOT saying that Hadoop is a bad solution – quite the contrary.  My point is as follows: IF you are going to use Hadoop or a Hadoop-based solution, THEN:

  • Set up a DBA role for the engine
  • Set up governance procedures for “loading data into the system” and for accessing data / creating new data sets in the system (process-driven / architecture-driven management)
  • Link the Hadoop platform with your RDBMS engine, using the strengths of each respectively.
  • Attempt to make the Hadoop platform “seamless” to the business, so they don’t know/don’t care where the data is coming from (relational or Hadoop)
  • Augment your existing Warehouse with Hadoop solutions
  • Use Hadoop for Document stores, and XML processing
  • Use a tool like Pentaho Kettle and its BI reporting platform for GUI access to Hadoop
  • Control WHO accesses the Hadoop backend – ensure they are trained properly to write MapReduce code
  • Use a versioning system (for the File definitions, and the code that accesses them).

Why do I need a data warehouse if I have Hadoop?

Strictly speaking, data warehouses are not “on Hadoop” or “on relational systems.”  In fact, this shouldn’t even be a question (but I’ve heard it from customers).  Data warehousing is a concept of storing historical data and then analyzing it for trends and analytics.  Hadoop is a platform for storing distributed files.  So yes, you DO need a data warehouse if you want business intelligence or analytics.  Whether you put the entire system on a Hadoop platform or entirely on a relational platform won’t change that.

In fact, I recommend you leverage a mix of both platforms (Hadoop and RDBMS) for a seamless Business Intelligence environment.

Hadoop in and of itself is not a means to an end – it is not there to “replace” the data warehouse.  Even with a document-only solution stored on Hadoop, you still have all the other issues (temporal access, delta processing, column management, aggregation, transformation, etc.) that must be executed in order to make the data meaningful.

Can’t Hadoop just replace my Relational Database Engine?

No, it can’t – not in its raw form.  In order to “replace” relational database engines, there is one more piece of the puzzle that is necessary: metadata management and structured access.  This is where projects like Hive, HBase, HiveDB, Pig, and so on come in, along with commercial vendors like Cloudera and MapR.

They offer (among other benefits) an additional metadata management layer and, in some cases, SQL-like access: SQL-style statements that are then translated into Hadoop execution code (Java, Perl, Python, Ruby, and so on).

Hadoop (in its base form) again, is just a distributed file management platform.

Now, there are some technologies (the ones I just mentioned) which allow you to move off relational databases and on to the “Hadoop system”, but there are benefits and drawbacks to those as well.  The largest reason (that I’ve read about) for businesses to move is “to get away from the referential integrity restrictions”, which leads to massive performance gains.  But again, they experience some of the drawbacks that are listed above.

Conclusions and Thoughts

In other entries I will dive into some of the different metadata management layers (like Hive and HBase) and discuss the pros and cons of specific data modeling components there.  My thoughts are as follows: Hadoop is an interesting platform; it allows us to easily construct a “persistent staging area” without moving data into a relational system first.  In other words, simply “copy” your “staging file” into Hadoop, partition it by LOAD DATE, and voila – you have a managed set of persistent staging data.

Yes, at that point you have to write code to get your data “out”, but the GUI technology and ETL vendors are working furiously to connect to Hadoop solutions.  In fact, Pentaho Kettle (an ETL engine) does this already, and there are JDBC connectors that give you semi-SQL access to Hadoop file stores (particularly through Hive or HBase, which is another story).
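
As a hedged sketch of what that semi-SQL access can look like, the code below uses the Hive JDBC driver to lay an external table over the persistent staging files described above and then query them.  The host, port, table name, and columns are all hypothetical, and the driver class and URL prefix differ by Hive version (older installs use org.apache.hadoop.hive.jdbc.HiveDriver with a jdbc:hive:// URL), so treat this only as the shape of the approach.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveStagingQuery {
      public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");        // HiveServer2-style driver
        Connection conn = DriverManager.getConnection(
            "jdbc:hive2://hadoop-gw:10000/default", "hive", ""); // credentials depend on your setup
        Statement stmt = conn.createStatement();

        // Lay a schema over the staging files already copied into Hadoop.
        // EXTERNAL means dropping the table never touches the underlying files.
        stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS stg_orders ("
            + " order_id STRING, customer_id STRING, amount DOUBLE)"
            + " PARTITIONED BY (load_date STRING)"
            + " ROW FORMAT DELIMITED FIELDS TERMINATED BY ','"
            + " LOCATION '/landing/orders/'");
        stmt.execute("ALTER TABLE stg_orders ADD IF NOT EXISTS"
            + " PARTITION (load_date='2012-11-26')");

        // SQL-like access: Hive translates this into MapReduce jobs over the files.
        ResultSet rs = stmt.executeQuery(
            "SELECT customer_id, SUM(amount) FROM stg_orders"
            + " WHERE load_date = '2012-11-26' GROUP BY customer_id");
        while (rs.next()) {
          System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
        }
        rs.close();
        stmt.close();
        conn.close();
      }
    }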

At the end of the day, Hadoop IS a key-value store with a single ROW key and no referential integrity.

I don’t recommend you throw out your relational data warehouse, just as I don’t recommend you “simply decide to move everything to Hadoop” – it’s too much to bite off at once.  I would suggest integrating a Hadoop store and making the technology seamless.  However, with the addition of Hadoop technology I also recommend you add governance and management so that you do not end up with a “data junkyard” in a Hadoop storage system.

Thoughts? Comments? Ideas? I’d love to hear from you.

Thanks,
Dan Linstedt
ps: Get free training and video lessons at http://LearnDataVault.com/


One Response to “Introduction to Hadoop, Big Data and Data Warehousing”

  1. sanjaypande 2012/11/26 at 2:00 pm #

    My personal opinion is that even if you want to make a switch to get the cheaper storage and scalability as well as the MapReduce capabilities of Hadoop, the DV is probably one of the best architectures to experiment with, because you can really start very small and organically grow your DW.

    JDBC connectors into Hive and other databases are getting better as well.

    While you can simply persist the staging because storage on systems like Hadoop is cheap, there is still inherent value in the DV model and methodology even on a system like Hadoop with something like Hive on top. It allows you to leverage SQL skills on top of this file system and has a lot of promise.

    The value of business key alignment and separation of satellites cannot be argued.

    Also, just because there’s no referential integrity (which really doesn’t mean much in an insert-only system anyway), you can still join data.

    One advantage is you can use a completely denormalized bigtable implementation for each star schema as a table.

    I see the DV as an important leverage architecture for big data and Hadoop systems, and as always the DV shops should have no trouble migrating once the guidelines for insert-only are set.
