Sky Diving Without a Parachute…

Would you go sky-diving without a parachute?  Well, if you simply “jump” on the NoSQL bandwagon with your data warehouse, that is exactly what you are doing.  This post is further exploration of, and information about, Hadoop, Big Data, NoSQL, and Data Warehousing.  Data Warehousing on NoSQL is “generally” a whole new ball game.

First, some thoughts…

No doubt you’ve heard of Big Data, and no doubt you’ve read something on NoSQL.  Both are very interesting concepts in the world of development.  But what you probably didn’t know (or didn’t hear) is that 90% of the “work” of managing data sets in these environments is writing code.

To be specific, it’s writing (or generating) some form of Map/Reduce code that can be distributed across multiple shared-nothing servers, where the data (hopefully) is partitioned evenly (a minimal code sketch follows the list below).  Beyond that, a lot of things from the relational world are “left behind” in order to gain performance over big data sets.  Some of those things include:

  • no more referential integrity
  • no more ACID compliance (in some cases)
  • no more SQL access
  • no more “indexing” (not as you and I would understand it)
  • no more Ad-Hoc querying ← THIS ONE IS A BIGGIE and often overlooked!
  • no more “data normalization”
  • no more UPDATES (caveat – some NoSQL systems allow this in specific conditions)
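
To make the “writing code” point concrete, here is a minimal sketch – my own illustration, not taken from any particular product or article – of what a one-line SQL aggregate (say, SELECT product_id, SUM(amount) FROM sales GROUP BY product_id) turns into as a Hadoop Map/Reduce job in Java.  The input layout (pipe-delimited, product id in the first field, amount in the third) is purely an assumption for the example:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SalesByProduct {

  // Map: parse each delimited line, emit (product_id, amount).
  public static class SalesMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      String[] fields = line.toString().split("\\|");   // the field layout lives here, in code
      if (fields.length < 3) return;                     // bad rows are our problem now, not the database's
      try {
        context.write(new Text(fields[0]), new DoubleWritable(Double.parseDouble(fields[2])));
      } catch (NumberFormatException ignored) {
        // no constraints at load time means we decide, row by row, what to do with junk
      }
    }
  }

  // Reduce: sum the amounts for each product_id.
  public static class SumReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text productId, Iterable<DoubleWritable> amounts, Context context)
        throws IOException, InterruptedException {
      double total = 0;
      for (DoubleWritable amount : amounts) total += amount.get();
      context.write(productId, new DoubleWritable(total));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "sales-by-product");
    job.setJarByClass(SalesByProduct.class);
    job.setMapperClass(SalesMapper.class);
    job.setCombinerClass(SumReducer.class);   // partial sums on the map side cut shuffle traffic
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Every new business question of this shape means another class like this one – written, compiled, deployed, and scheduled – which is exactly why ad-hoc querying is the first casualty.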

Before I go any further, and before you say “WAIT – NO, you’re wrong!! Hadoop DOESN’T have that restriction!!” – check out the references I used for my research, in case you wish to read some of this information on your own.

And the latest news (of course) is that Amazon and Paraccel have teamed up to smash through the barriers between Data Warehousing and Hadoop.

Because it’s still SO NEW, there are still many questions to be answered – but personally I think this will be a huge runaway success in the Data Warehousing industry.  That is: IF you can get past “hosting” or outsourcing your EDW in a cloud on someone else’s servers.

Old School Knowledge, New School Techniques…

Ok, to way over-simplify: with columnar data stores (for example HBase on Hadoop), defining “a structure” to access structured data is similar to writing a COBOL copybook, complete with redefines and nested table structures.  You can also think of “access” to those data sets as writing a COBOL program (in this case Java code, or something that leverages Map/Reduce code under the covers).
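
To stretch the copybook analogy a little further, here is a minimal, hypothetical sketch of a Hadoop Writable: the record layout – field names, order, and types – is fixed in the code, much the way a copybook fixes a record layout for a COBOL program.  The field names are made up for illustration:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// The "copybook" for this record lives here: change the layout and every job that reads it must change too.
public class OrderRecord implements Writable {
  private long   orderId;
  private String customerId;
  private double amount;

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeLong(orderId);
    out.writeUTF(customerId);
    out.writeDouble(amount);
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    orderId    = in.readLong();
    customerId = in.readUTF();
    amount     = in.readDouble();
  }
}
```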

The storage mechanisms are different, and the simple fact that columns are “name-spaces” that can be dynamically added or deleted is different.

But before you get excited and jump up and down over “denormalization vs normalization” or “data modeling”, remember this:  The LOGICAL data model is hierarchical.  The PHYSICAL data store is key, value, timestamp (triple store) technology.

Which basically says:

  • ALL COLUMN NAMES ARE REPLICATED FOR EVERY DATA ELEMENT IN EVERY ROW IN EVERY TABLE (see the sketch after this list).
  • Nested or hierarchical complex types are allowed (and can change from ROW TO ROW!!)
  • Governance and Referential integrity ARE LOST from the “data model” or “base schema” and pushed down to the Map/Reduce code set (they become the application’s responsibility)
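
Here is a minimal sketch of that first point through the HBase Java client (method names shift a bit between HBase versions, and the table, family, and column names are assumptions for illustration).  Each cell is physically stored as (row key, column family:qualifier, timestamp) → value, so the qualifier text is repeated for every value in every row:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CustomerWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "customer");     // 0.9x-era client; newer versions use Connection/Table

    Put row = new Put(Bytes.toBytes("CUST-10001"));  // row key
    // The qualifier bytes ("name", "country") are written out with every single cell.
    row.add(Bytes.toBytes("d"), Bytes.toBytes("name"),    Bytes.toBytes("Acme Corp"));
    row.add(Bytes.toBytes("d"), Bytes.toBytes("country"), Bytes.toBytes("US"));

    table.put(row);
    table.close();
  }
}
```

Multiply those repeated qualifier bytes by billions of rows, and the storage numbers below start to make sense.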

Yes, MASSIVE storage requirements!  But wait, isn’t there compression?  Yep, but it’s clunky and isn’t always on demand.

So what do you gain from these environments?

Well, let me just say: I am no expert in NoSQL (“Not only SQL”); I am still learning about these systems.  But from many of the in-depth articles I’ve been reading, I’ve come to some interesting conclusions:

  • You can see performance gains over HUGE / MASSIVE data sets (when compared to storing that same data in an RDBMS and trying to query it) – particularly when the data set can be “housed” in a single schema structure and referenced by ROW KEY values.
  • easy dynamic column changes (this is where the “schema-less” idea comes into play, but in reality, we STILL have to define structures and schemas to manage and access data sets)
  • rapid ingestion of machine-generated data sets (higher transaction throughput because of the lack of RDBMS overhead) – this is simply a “file copy”, with Hadoop re-distributing the file according to partitions across the nodes (see the sketch after this list).  HEY WAIT: Re-distribution of HUGE FILES takes time!!  Yes, indeed it does, and no matter WHAT you say, you MUST define a schema.
  • Raw Data Storage, accountable, and auditable
  • MPP on commodity hardware
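
As a minimal sketch of that “file copy” style of ingestion (the paths and file names here are hypothetical): loading raw data really is just copying a file into HDFS, which splits it into blocks and replicates the blocks across data nodes – no parsing, no constraint checking, and no index builds happen at load time.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RawLoad {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // HDFS chops the file into blocks and replicates them across the cluster;
    // nothing inspects the contents -- schema and quality checks come later, in code.
    fs.copyFromLocalFile(new Path("/exports/sensor_feed_2012-12-20.dat"),
                         new Path("/warehouse/raw/sensor_feed/"));
    fs.close();
  }
}
```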

There are a lot of “blog entries” that say Hadoop and NoSQL systems are “schema-less”.  They are only schema-less in CERTAIN situations, where you write MAP code that searches “raw bytes” for values (hence the ability to use binary or purely unstructured data sets) – and produces structured results.  For the “rest” of the world using delimited file sets, fixed-width file sets, or XML file sets, we STILL have to define a schema with column names and table names in order to “use” the data effectively.

So is Data Modeling completely dead?

Exactly the opposite.  It is now more important than ever!  However, the WAY we model logically should follow natural hierarchical definitions (think Ontologies of key business terms rather than normalized data sets).  Physical modeling also matters (at least as a common reference point for “structured or semi-structured data”).  And guess what?  The schema is loosely coupled to the data set (in column-based data stores) – that’s what gives it so much flexibility to handle schema change.

Why is data modeling still important?  You have to know what you’ve got (data elements) in order to begin asking questions about it.

In fact, Data Modeling (and the modeling tools we use) must now undergo a paradigm shift.  I believe that to be “successful” in the new NoSQL world, we should START with Ontology modeling of business terms.  This hierarchy of business terms then needs to be broken into sub-sets (which effectively become TABLES in the NoSQL environment).  From there, we need existing tool sets like PowerDesigner, PowerArchitect, ER-Win, and the like to read in these hierarchies of business terms, and allow us to map them to NoSQL key-value stores (or SerDe definitions) for use with Hadoop.

What are some of the drawbacks of these environments?

Well, there are many different ones – and they vary based on the NoSQL platform that you choose (i.e.: Cloudera vs Hadoop vs HBase vs Hive vs MongoDB vs CouchDB vs Paraccel vs Netezza, and so on).  But here are some generic points for most Hadoop-based systems:

  • Most of the time, you are stuck writing Map/Reduce CODE (not SQL), resulting in increased maintenance costs and increased complexity
  • Data modeling (if not managed) is “tossed out the window” by programmers for expediency’s sake.  In other words: Data Governance becomes exponentially harder to achieve
  • Programmers quickly make “new files” to answer “today’s business question”, and thus end up replicating data sets ALL ACROSS THE PLATFORM.  It can result in a data junkyard really quickly if not managed.
  • If the wrong implementation is chosen (to use Map for logic instead of Reduce or vice versa), performance can suffer dramatically
  • Rows can have different structures within the same table – which can easily result in “broken code” or missed data
  • Most solutions CANNOT SUPPORT AD-HOC QUERYING.  It’s not because of the SQL interface technology, but rather because of the execution speed of Map/Reduce code trawling over hundreds of terabytes of data.  In other words, many “BI analytics customers” are used to sub-second response times to NEW queries; this simply isn’t happening in today’s NoSQL solutions (those with big data, anyhow).
  • LIMITED VERSIONS OF HISTORY.  HBase (for example) only keeps the last 3 versions of any given cell by default (UNLESS part of the row key is a timestamp).  But then your 300 TB warehouse explodes into a 3 PB warehouse nearly overnight.
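
On that version-history point: three versions is the HBase default, and the limit can be raised per column family when the table is defined – but raising it is exactly how the storage explosion described above happens.  A minimal sketch (older-style admin API; class and method names vary a bit by HBase version, and the table/family names are made up):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateCustomerTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);   // newer clients use Connection.getAdmin()

    HColumnDescriptor family = new HColumnDescriptor("d");
    family.setMaxVersions(10);                 // keep 10 versions per cell instead of the default 3

    HTableDescriptor table = new HTableDescriptor("customer");
    table.addFamily(family);
    admin.createTable(table);
    admin.close();
  }
}
```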

In addition to the above, there are other issues that surface at the project level (taken from the Informatica / Hadoop technical PDF):

  • Difficulty finding trained resources with expertise in Map/Reduce programming and management
  • Pre and Post Processing of data in Hadoop
  • Challenges in tracking, and managing diversity of data sets and schemas
  • Lack of transparency, governance, and auditability over development tasks
  • Limited data quality and governance of data sets
  • High cost of maintenance for scripting / code management
  • Challenges in meeting SLAs for mixed-workload requirements and ad-hoc querying

Conclusions:

Ok ok, there’s a lot more to this, and I was just looking at the TIP of the iceberg…  Hopefully this information is helpful to you.  The reason I’m writing these entries is that I can’t discuss Data Vault Modeling on NoSQL stores without giving you some foundational rules around the subject.  In the new year, I will dive into Data Vault Modeling on Hadoop and NoSQL-ish environments.  I have asked Amazon for a login to their Redshift “paraccel” platform for testing and development.  We will see what happens.  What I will say is that the platform looks promising indeed – including SQL access, and some management layers to “resolve or mitigate” some of the problems I’ve listed here.

What I will say about Data Vault modeling is: it’s evolving.  Changes are necessary at the physical level in order to support Hadoop, Big Data, and NoSQL, BUT the value of modeling at the Logical level (splitting business keys, descriptors, and relationships) remains high.  I will also say that the jury is “out” on using surrogate sequence numbers.  In a relational world, they work great.  In a Hadoop / NoSQL environment, they can wreak havoc on the MPP distribution algorithms and cause hot-spots on MPP platforms (except in Teradata, where you can choose a different primary index or hash key for data splitting).
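
To illustrate that hot-spot issue, here is my own sketch of one common workaround (an illustration, not an official Data Vault rule): a monotonically increasing surrogate sequence means every new row lands on the “last” partition or region, so one node takes all the writes.  Deriving the distribution key from a hash of the business key spreads the load instead:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RowKeys {
  // Hypothetical row-key builder: MD5(businessKey) + businessKey.
  // The hash prefix spreads inserts evenly across regions/nodes (no hot-spot),
  // while keeping the readable business key preserves traceability and auditability.
  public static byte[] hubRowKey(String businessKey) throws Exception {
    byte[] bk   = businessKey.getBytes(StandardCharsets.UTF_8);
    byte[] hash = MessageDigest.getInstance("MD5").digest(bk);
    byte[] key  = new byte[hash.length + bk.length];
    System.arraycopy(hash, 0, key, 0, hash.length);
    System.arraycopy(bk,   0, key, hash.length, bk.length);
    return key;
  }
}
```

The trade-off is that range scans over “adjacent” business keys are no longer possible – which is itself a modeling decision.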

I’d love to hear your feedback, thoughts, and comments.  Are you USING Hadoop?  Have you put a Data Vault-like schema structure on a NoSQL box?  If so, what did it look like?  What modeling changes did you have to make?  How is it running for you?

Thanks,
Dan Linstedt
