#NoSQL platforms and #datavault curiosity, #bigdata, #datamodeling

I recently had the pleasure of an e-mail exchange with regard to NoSQL platforms (one in particular) and the use of Data Vault models.  I need to start by saying that this post is published with permission of the author of the email question.  In this post I will dive a little deeper into the work that Sanjay Pande, Michael Olschimke, and I are doing together around Big Data, NoSQL, and Data Vault modeling.  This is all still work in progress.  I am more than happy to entertain corrections, additional thoughts, comments, and questions – just post them at the end of the blog.

Original Question:

Graham Pritchard, EIM, Steria (UK)

As a relative novice to DV models and methodology, I'd be really interested to understand your high-level views on how this methodology relates to the potential offered by NoSQL platforms such as MarkLogic to suck disparate forms of data into a data lake, and then effectively model it retrospectively.

Is there a case for both approaches? Can they complement each other? If so, how?

My first response:

Hi Graham,

Thank you for contacting me.  These are warranted and insightful questions.  Let me just start by saying: there are over 150 different vendors (at least) in the “NoSQL / Big Data” space, and each one has its own view of how to solve certain problems.  Trying to build a “standard” that works across all vendors (at least today) is next to impossible.

That said, I took a look at MarkLogic (online, reading some of the technical documents on their TripleStore engine, etc.), so I must first say that I am not an expert in their technology, nor have I had the chance to work with it on a hands-on project.  I am a bit lacking in experience there, and my comments are tempered accordingly.

In terms of the work Sanjay and I are doing, we have worked hands-on with Cloudera, Apache Hive, and Apache Hadoop, and are beginning to experiment with MapR and HortonWorks.  My customers (today) that are using the Hadoop platform in their EDW environments a) are quite large customers, b) have “big data problems”, both structured and unstructured, and c) are applying the Hadoop platform mostly for two reasons: as a staging area in the BI space, and for deep analytics, where queries can run long and file scans are not a problem.

All that said: when you take a _hard look_ at “NoSQL” or a Hadoop platform, a few things jump out from a modeling perspective:
1) 90% of these systems are really good at ingesting “anything you want to throw at them” without any modeling whatsoever; hence: schema-on-read (a minimal sketch follows this list).
2) In order to “make business value” out of any data, you need to classify, organize, and identify the information in the files (one way or another).
3) Eventually, some sort of “data model” is needed in order to reach the business with result sets that make sense (i.e., turn data into information).
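
To make point #1 concrete, here is a minimal schema-on-read sketch (Python purely for illustration; the file name and field names are hypothetical).  The raw file is landed as-is, and structure is only projected onto it at read/query time:

```python
import json

def read_customers(path):
    """Project a schema onto raw JSON-lines data at read time, not at load time."""
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            # Classification and organization happen here, when we read the data,
            # not when the file was ingested.
            yield {
                "customer_number": record.get("cust_no"),
                "name": record.get("name"),
                "country": record.get("country"),
            }

# Example: pull only the UK customers out of the raw landing file.
uk_customers = [r for r in read_customers("landing/customers.json")
                if r["country"] == "UK"]
```

Nothing about the file forced a structure at load time; the same raw data could be projected differently tomorrow.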

So, at the end of all of this, here is what is happening in the Data Vault 2.0 space and in the customer world:
1) Customers want to leverage their existing relational DBMS investments (not just throw them out, or dump and replace with NoSQL / NewSQL or a Hadoop platform).
2) Customers want to “release” the dependency on loading to the relational EDW (i.e., they don’t want to have to “look up” some sequence number just to allow an insert into a Hadoop platform).
3) Customers want to “tie” the relational data sets in the RDBMS to the NoSQL (relational, multi-structured, or unstructured) data living in the Hadoop platform.

So, to that end, Data Vault 2.0 modeling changes the sequence key, replacing it with a computed hash key.  This answers all three of the needs above.
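
As a rough illustration of what that hash key looks like (a minimal sketch only: Python is used purely for illustration, MD5 is one common choice of hash function, and the function name and sample keys are mine), the point is that any platform holding the same business key can compute the same key independently, with no sequence lookup against the relational EDW:

```python
import hashlib

def hash_key(*business_key_parts, delimiter=";"):
    """Derive a deterministic hash key from one or more business key parts.

    Parts are trimmed and upper-cased so that every platform holding the
    same business key computes the same hash, with no sequence lookup.
    """
    normalized = delimiter.join(str(p).strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest().upper()

# The same customer number hashed on the RDBMS side and on the Hadoop side
# yields the same key, so the two loads can run independently and still join.
print(hash_key("cust-1001"))        # single-part business key
print(hash_key("cust-1001", "UK"))  # composite business key
```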

Now, the next topic: how does DV 2.0 work on / provide value to JUST the Hadoop / NoSQL platforms by themselves?
That depends on what kind of answer you want.
Technically speaking:
1) Separating Hubs and Links basically provides a physical indexing capability without moving the entire “set” or file across all the MPP nodes, making “joins” a bit easier on distributed systems.
2) Attaching hash keys to “JSON documents, video files, audio files, images, documents, or even structured/multi-structured data” allows joins from the relational world to the non-relational world, and again provides better indexing (as described in #1); see the sketch after this list.
3) Satellite “models/tables” in DV2 for data residing on the Hadoop platform are ONLY justified as schema-on-read, OR if you want an optimized access path through HiveQL via internal tables, for instance.  That means the Satellite table structures are logical in nature (when on the Hadoop platform).
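
To illustrate points #1 and #2, here is a minimal sketch of tagging a non-relational payload with the same hash key that the relational Hub carries (again, Python purely for illustration; the field names, path, and business key are hypothetical):

```python
import hashlib
import json

def hash_key(business_key):
    """Same hashing rule as the earlier sketch: trimmed, upper-cased, MD5."""
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest().upper()

# A non-relational payload (a JSON document, or a pointer to a video/audio
# file) is tagged with the hash of its business key before landing on Hadoop.
document = {
    "hub_customer_hash_key": hash_key("cust-1001"),
    "source_file": "landing/call-recordings/rec-42.wav",  # hypothetical path
    "notes": "free-form, multi-structured content lives here",
}

# The Hub row on the relational side carries the same hash key, so a join
# (or a HiveQL query against an external table, filtered on the key) lines
# the two worlds up without ever looking up a sequence number.
hub_customer_row = {
    "hub_customer_hash_key": hash_key("cust-1001"),
    "customer_number": "cust-1001",
}

assert document["hub_customer_hash_key"] == hub_customer_row["hub_customer_hash_key"]
print(json.dumps(document, indent=2))
```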

Business value speaking:
1) Hubs still provide business value as an integration point for a unique list of business keys.
2) Links still provide business value as an integration point for a unique list of relationships / attributes.
3) Satellites – well, again, no real business value except for housing data over time.  They only provide “technical value” (see point #3 above).  The business value is schema-on-read, determined at query run-time.

Unfortunately, I don’t have hands-on experience with MarkLogic, so I am not qualified to state the pros and cons of applying these thoughts directly to their platform, nor am I qualified to discuss whether their platform would or would not benefit from Data Vault 2.0 approaches.  However, it is safe to say that if they have any of the pain points mentioned above, the Data Vault 2.0 modeling components would assist in building business and technical value.

I hope this helps, sorry if it’s not clear enough – we are currently working “in our labs, on our servers” on specific implementations in an attempt to find these answers.

Lest we forget: DV 2.0 is more than just the Data Vault model; it includes Architecture, Methodology, Modeling, and Implementation.  So it brings the entire package together for agility (Scrum), structured and unstructured data, NoSQL, automation / generation, and more.

Please feel free to ask more questions, should you think of them.

Thank you kindly,
Dan Linstedt

Sanjay also replied:

Hi Graham,
As Dan already stated, there’s so much variety out there right now just in solution choices. Even proprietary big data vendors like LexisNexis have now open-sourced their core technology (HPCC) to compete with the “elephant in the room” – a technology which is arguably more mature and advanced than the current iteration of Hadoop (MapR being the only exception, due to their own approach to architecture).

That said, our current research is focused on the Apache Hadoop platform, for which we offer an alternative thought process and an architecture (currently experimental).

There’s a lot of existing learning and a deep knowledge base in the BI world that is getting thrown out with the new technologies, and I’m particularly sad to see this happen time and again, causing unnecessary churn and re-learning of concepts in the future.

MarkLogic is interesting, as is the Disco project by Nokia … and so many others.

The greatest momentum right now is behind Hadoop and the technologies surrounding it or built on top of it. As is customary with IT hype cycles, everyone and their uncle jumps on the bandwagon, and then we techs end up supporting architectures that eventually break.

Vendors never help. The situation looks very similar to the commercial Linux vendor offerings when they appeared, except that Linux wasn’t really solving a pressing problem, whereas Hadoop is, in terms of storage. Linux eventually displaced proprietary Unix. The competition is also a lot more intense than it was 10-15 years ago, with the same organization developing competing solutions in this particular space, which further confuses implementers (for example, Cassandra and HBase are both Apache projects with minor differences; they have been forked as well, and have other non-Apache open-source and proprietary competition).

The progression of technologies produced by the current open-source big data crowd tells me they know a lot about handling large-scale data storage and processing technologically, but without understanding the business problems related to BI. That is why you see things like Apache Drill and Impala working hard to best each other when another solution on HBase, called Phoenix, already beats them fair and square, while Spark/Shark increases the efficiency of HiveQL queries as well.

The separation of the storage and archiving activities, and the pushing of business rules downstream of the data warehouse into the information marts, is what has given the DV (and now DV 2.0) its share of success, by essentially minimizing churn and pushing the “changeable” aspects of the solution as close to the consumer as possible.

The Data Vault (especially 2.0) is also a solution blueprint for business intelligence applications, with its various components, and how to build them, done well and tested in the real world.

Now, while there are many non-BI applications for big data technologies, there are several advantages in leveraging learnings from the DV as and where one can, especially with patterning and code generation for quick build-outs and for reducing maintenance churn.

In our research on these platforms, our use cases are limited to Business Intelligence.

Warm Regards,
Sanjay Pande

Graham’s follow-on thoughts:

Hi Dan (and Sanjay),

You have mirrored many of my thoughts in your response, which hopefully is a positive.

In terms of NoSQL schema-on-read modelling aligned with Semantic Triple Store capabilities – and I might have this entirely the wrong way around – my instinctive view is that, purely from an insight perspective, this potentially removes many of the hard yards traditionally put in up front to develop the model (notwithstanding the methodologies employed). This is where I think my initial question to you was coming from.

Simply put, I am trying to work out in my own tiny mind how to deliver the best value and advice to my customers. I believe that DV has a real role to play in this, but I think that role may have changed since the realization that a NoSQL platform with added search/discovery functionality, MapReduce, and Semantic Triple Store capabilities can consolidate, identify, analyse, and present information in a consistent, coherent, and flexible manner.

In short, in the new world the potential end-to-end process (in simplified form) looks like:

  • Identify relevant internal and external data sources based upon business need
  • Load and index data ‘as is’ into the NoSQL platform, irrespective of format
  • Build triple store indexes into the same platform
  • Use multi-tiered storage and MapReduce analytics where appropriate – HDFS/Hadoop
  • Group related data clusters into document forests for interrogation and subsequent end-user access
  • Present data via the UI of choice: search capabilities, BI front end, monitoring dashboard, ESB

This obviously overlooks data and information governance, data quality loops, etc., but it potentially provides a process that captures all data (auditable), and any subsequent changes, in one platform, and makes all that data available in real time without many of the overheads and challenges that often afflict BI/DW deployments, whilst providing the business with a level of insight within highly responsive timeframes, and with the ability to draw insight from unstructured data sets alongside structured data elements.  This, at its simplest level, is clearly very attractive!

All that said, I personally struggle with not having a data framework (Kimball, DV) to work within from a business information consumption perspective, but my presumption is that this will come post-ingest (would you agree?) into a Data Lake, from which a form of data framework (DV) can be established for the multi-faceted presentation layers.

Not sure if any of this makes sense, but I’m really interested to hear how you see the juxtaposition between a) where we’ve been, b) where we are, and c) where our industry is going.

Regards
Graham

Sanjay’s response to the above:

Hi Graham,
Thanks for the slides. They give me some perspective.

As I said earlier, it’s interesting. HPCC is even more interesting (to me personally) because they’ve had some of these capabilities for the last 10 years.

I have certain issues with:

  1. Data Lake – dumping data without organization is, in my opinion, the worst idea I’ve ever seen in the industry, even though I can see the “sales” appeal of it. It only generates more downstream work. In my view, a DV can even help organize it better.
  2. Schema-on-read – again, this generates more downstream work, despite its advantages.
  3. Semantic triple stores, a.k.a. graph databases, not only suffer from scalability issues (which appear to be solved with this vendor), but also from a non-linear approach to a solution when a linear one is needed (the simplest example being aggregation of sets). They’re fantastic for link analyses or exploratory work, which would be pretty difficult to implement in relational databases (with exceptions), but they’re still not ideal for the majority of BI use cases as we see them today.
  4. SPARQL is a standard, but even graph vendors don’t like it that much. Neo4j, for example, supports it, but prefers Cypher (their own homegrown query language).
  5. It’s great to see ACID compliance in a NoSQL database, and it will certainly help adoption; however, I see the C (Consistency) in the BI space as pretty loosely defined, where “eventual consistency”, as in CAP, is quite acceptable in most use cases.

All that said, I’d be surprised (rather pleasantly) if the downstream, end-user-facing builds turn out to be easy to build and perhaps even automate. How do you automate when you don’t have patterns and anything goes?

The Data Vault is only one potential solution, and it may not even be right for the architecture you need. For most BI use cases, it’s the DW component (it leans more towards an Inmon-style multi-tier approach, but with the capability to logically project marts to the business users, with the flexibility of persisting them if required).

In the relational world, it’s been proven to be well worth doing.

In the hybrid world, it serves as a glue where the NoSQL objects are tied to the relational world via hashed business keys.

In a purely NoSQL world, the jury is still out. It’s one possible solution, and it’s definitely a good way to organize a data lake and make it more like a Data Warehouse, while leveraging an existing knowledge base of BI history, which enables downstream information mart builds as well. There is, of course, more to it, because the technical components are really a small part of the project. Besides modeling, it includes build cycles, automation, project management, and all the other components.

Also, in terms of the juxtaposition question:

a) Where we’ve been hasn’t changed for us, because despite a history of successes and many solved problems, we still hit the wall many times with people who just don’t want to see beyond Kimball, and who still go and implement solutions which are doomed to eventually fail.

b) Where we are is a very interesting place in the history of technology. I like it because we’re finally getting out of the grip of relational thinking, something that should have happened in the 1980s when Lisp Machines were available. It also becomes a very confusing environment, because it’s filled with innovations, with more to pick and choose from than ever before.

c) If history tells us anything, marketing is what will eventually win over technical merit, and the technically better solutions will finally make it into the mainstream after a considerable amount of time, with everyone thinking they are the best thing ever to happen in technology. I’ve seen it and can pinpoint many examples.

Warm Regards,
Sanjay

The e-mail trail continues, but for now, hopefully this is enough to give you a glimpse into what Sanjay, Michael, and I are working on when it comes to NoSQL, Big Data, Hadoop, and Data Vault.

Again, if you have any additional thoughts to add to this conversation, comments or observations, please add them to this blog entry.

Thanks,
Dan Linstedt
