#NoSQL, #bigdata, #datavault and Transaction Speed

Like many of you, I’ve been working in this industry for years.  Like you, I’ve seen and been part of massive real-world implementations where #bigdata has been a big part of the system.  Of course, they’ve done this in a traditional RDBMS and not a NoSQL environment.  There has even been a case where Data Vault was used to process big data with huge success.  In this entry I will discuss the problem of big data, and look at the RDBMS alongside the NoSQL approaches.

First, a little history please…

Why is everyone up in arms about #bigdata?  Why is everyone so taken with the technology hype that they feel the need to “switch to something new” (i.e. a NoSQL environment) just to handle big data?  Have we literally lost our minds?  Do we think that by switching to “magical new technology” all of our problems will suddenly be solved?  If you listen to the industry hype, it appears that way.  The hype goes so far as to suggest that the NoSQL platform is the magic bullet everyone has been waiting for.

Well, I’m here to tell you that it may not be everything the media hype has cracked it up to be; and if you aren’t careful, you may just lose sight of your massive investments in relational database systems.  If you aren’t choosing NoSQL for the right reasons and for the right applications, then you may just be losing your mind!  (Ok, that’s a bit of a stretch, but it had to be said.)

Historically speaking, can a traditional RDBMS do Big Data?  If so, where are the case studies?  Well, let’s chat about this.  Everyone seems to believe (and I concur) that Big Data is made up of volume, velocity, and variety.  Ok, let’s take two particular pieces of the puzzle: volume and velocity.  For now, we won’t discuss variety.  Why?  Because many of the hyped-up stories out there hold up volume and velocity as “good enough” reasons to switch to NoSQL for big data purposes.

What’s one basic argument for “switching to NoSQL” for Big Data?

Historically speaking, let’s focus on data ingestion rates, or better yet, transaction rates (think TPC-C benchmarks).  Many argue that you must have a NoSQL environment (a distributed file store) with referential integrity turned off in order to get massive ingestion rates on big data.  Well, ok – this may be true, but it depends on what you call “big data” and what level of transactions per second your business is truly doing.  If you are splitting atoms and recording 1 Terabyte per second, then yes, I’ll buy that argument – but that also means you are in the top 1% of worldwide cases that actually need this functionality.

Case studies from RDBMS that the media hype wants you to ignore

Any time there is a shiny new widget in technology, it seems everyone rushes to get one – rushes to build one, rushes to supplant their hard-earned and well-designed investment.  Well, I’m here to remind you that it may not be necessary to jump ship…  I tend to argue that if you really think you have a business case for a NoSQL environment, then you should be looking for a cooperative, seamless integration point.

By the way, the Data Vault model offers this solution through architectural layers and good design.  In the future, the RDBMS engines will offer this level of integration, and in fact, some new NoSQL offerings are already “combined” under the covers, offering both relational technology and Hadoop technology seamlessly.

So who’s done what?

  • Teradata: Nearly everyone talks about Big Data and then throws Walmart into the discussion – unfortunately they tend to “forget” to mention that Walmart IS on the Teradata RDBMS engine.  The case study here shows Walmart processing nearly 5,000 items per second and 10 million register transactions over a 4-hour period.  http://news.walmart.com/news-archive/2012/11/23/walmart-us-reports-best-ever-black-friday-events
    Teradata can clearly “do” big data with no problem.  Teradata has had success over the years with Barabas Bank, Qantas Airlines, and a variety of Department of Defense initiatives – all of which were running incredibly high rates (volume) per second (velocity).  I’ve seen Teradata actually beat this number at other customer sites, with 8,000 transactions per second.  Teradata is an MPP-based database.
  • SQL Server 2008: Yes, people forget – SQL Server 2008 R2 is an incredible beast when it comes to processing power.  It actually is a very good contender in the Big Data space and should be considered, given its TPC-C rates.  This study reports about 16,000 transactions per second on SQL Server 2008 R2: http://sqlblog.com/blogs/linchi_shea/archive/2012/01/24/performance-impact-sql2008-r2-audit-and-trace.aspx.  SQL Server 2008 R2 can be configured in clusters of MPP machines.
  • DB2 for z/OS edition: The DB2 engine is very, very good – granted, this is the mainframe edition.  It has matured over the years and is a tremendous database in the world of relational database engines.  They’ve incorporated MPP at various steps as well.  This study stated the following (slide 26): 15,353 transactions per second at a Korean bank.  http://www.slideshare.net/CuneytGoksu/db2-10-for-zos-update
  • DB2 UDB / LDW / RDBMS: I’ve personally been involved in a case study at a U.S. defense contractor where we were ingesting 10,000 transactions per second per node, across 10 nodes, into a Data Vault model on DB2 UDB (the RDBMS rather than the mainframe edition).  This was in 2001 – at the time, this was record-breaking as well.  We were pulling space satellite data (structured transactions), then combining it into a single DB2 node in post-processing.
  • Oracle RDBMS: Oracle’s Exadata V2 machines are fast, and only getting faster.  However, this benchmark ran on Oracle’s 10g product – and since it was published several years ago, I’m sure the numbers have improved since then.  60,000 transactions per second is the claim – very, very good for an RDBMS engine.  http://www.dba-oracle.com/oracle_news/news_world_record_tpcc_hp.htm

I’ve personally been involved in Data Vault cases with each of these databases where the ingestion rates were quite high (between 7,000 and 10,000 transactions per second) – and that’s with referential integrity turned on in these databases.

What about variety?

We can’t talk about big data without talking about variety.  Variety of the data can mean lots of different things.  To be completely fair, unstructured and semi-structured data needs to be included in big data, and admittedly this is one area where relational technology doesn’t do a particularly good job.  The RDBMS engines have introduced full-text indexing to attempt to solve the problem, and they’ve also introduced XML data type stores and XQuery-style query capabilities, but in reality the ingestion rates drop tremendously when dealing with this type of data.
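Just to make the point concrete, here is a minimal sketch of what that bolted-on full-text capability looks like inside a relational engine.  It uses SQLite’s FTS5 extension purely because it is self-contained and easy to run; the table name and sample documents are my own illustrative assumptions, and the big commercial engines expose their own (different) full-text syntax.

```python
# Minimal full-text indexing sketch using SQLite's FTS5 extension (assumes your
# Python build of SQLite was compiled with FTS5, which most modern builds are).
import sqlite3

conn = sqlite3.connect(":memory:")
# A virtual table backed by an inverted index, not a plain heap/B-tree table.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(doc_id, body)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [("d1", "quarterly sales report for the retail division"),
     ("d2", "satellite telemetry ingest log"),
     ("d3", "retail returns and refunds policy")],
)
# MATCH searches the inverted index instead of scanning every row with LIKE.
hits = conn.execute(
    "SELECT doc_id FROM docs WHERE docs MATCH 'retail' ORDER BY doc_id"
).fetchall()
print(hits)   # [('d1',), ('d3',)]
```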

Remember: variety can mean different things.  Variety can also mean multiple structures (as in multiple different structured transactions), and this IS something the RDBMS engines are very, very good at handling.

So where does that leave us?

It’s rather ironic, really: there is so much hype out there about NoSQL and “why RDBMS won’t work for you” that it’s easy to get lost at sea by swimming into an undertow on purpose.  For some reason people conveniently “forget” that RDBMS engines are still the number-one-selling licensed database engine technology out there, and there are very good reasons for that.  People tend to “forget” that the RDBMS engines bring tremendous value when it comes to the integrity of the information store.

Am I saying that NoSQL is “bad”?  No, not at all.  I’m merely stating that it has its purpose, and in my mind, for 98% of the market space it serves the data warehousing community best as a raw storage area for text documents, XML documents, and unstructured data sets.  For the top 2% of the market space (who might be dealing with 1 Terabyte a second of ingestion), a highly scalable solution where the data set can be broken into thousands of smaller files to be ingested might be better suited to a NoSQL file store than to an RDBMS.

We should not lose sight of the value that RDBMS engines have.  We should not lose sight of the vested investment in cost, engineering hours, and reliability that the RDBMS engines bring to the table.  We should not simply throw the baby out with the bath water to justify bringing in the new technology – just because “I want a shiny new object.”

The main point is this: the more reading I do, and the more discoveries I make (in labs, test cases, and build-outs), the more I understand that NoSQL is and should be a complementary platform, sitting alongside your vested EDW / RDBMS engines.  It can be applied as a staging area (if you REALLY feel the need to put structured data on a NoSQL environment), but better than that, it should be used as a raw text document storage area and an XML file storage area, where these objects are then properly keyed and distributed – a simple sketch of that keying idea follows below.
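To illustrate what “properly keyed” might look like in practice, here is a minimal sketch that assigns a deterministic key to each raw document before it lands in the file store, so the structured side of the warehouse can later join back to it.  The directory name, the source-system label, and the MD5-based key are my own illustrative assumptions, not something prescribed by any particular product.

```python
# A minimal sketch: derive a stable key for each raw document before landing it in a
# distributed file store, so the RDBMS-side warehouse can reference it later.
import hashlib
import json
from pathlib import Path

def key_raw_document(path: Path, source_system: str) -> dict:
    """Build one manifest entry for a raw document."""
    body = path.read_bytes()
    # Deterministic key from source system + file name; any stable business key works.
    doc_key = hashlib.md5(f"{source_system}|{path.name}".encode("utf-8")).hexdigest()
    return {
        "doc_key": doc_key,                            # join key usable from the RDBMS side
        "source_system": source_system,
        "file_name": path.name,
        "content_md5": hashlib.md5(body).hexdigest(),  # integrity check on the payload
        "size_bytes": len(body),
    }

if __name__ == "__main__":
    landing = Path("raw_landing")                      # hypothetical landing directory
    manifest = [key_raw_document(p, "crm") for p in sorted(landing.glob("*.xml"))]
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```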

Remember: some of these NoSQL technology vendors are offering seamless hybrid technology – these are the “ones to watch” in my book, and these are the ones that will make it worthwhile for the transition to occur.  BUT the engineering on these hybrid systems is mostly new when compared with RDBMS technology that has been around for a long, long time.  Yes, they will converge, BUT in doing so, they will each absorb the best of breed from each other’s solutions under one roof.

At the end of the day, what are the main points for choosing NoSQL for Big Data?

In my humble opinion, I would chalk it up to these:

  1. If you have to deal with, search, learn from, or interrogate text documents, XML, audio files, images, or binary data
  2. If your ingestion rates are so high that you need a truly elastic compute cloud with highly distributed data sets, and – here’s the kicker – the cost of the RDBMS is prohibitive for scale-out when compared with NoSQL

Hmmm, be careful about cost – cost can mean many things.  Oftentimes the HYPE will have you focus on only ONE aspect of cost: storage and the number of MPP nodes.  They conveniently “forget” to mention the cost of maintenance, cost of upkeep, cost of tuning, cost of coding to make the data usable, cost of accessibility, and so on…

A final point:

Some have argued that graph databases require a NoSQL environment.  Those arguments only hold for folks who have not looked at, or do not know about, Data Vault modeling.  Data Vault models actually enable you to place a “graph-based database” directly in a relational database engine, and allow you to take full advantage of the SQL language for graph-based queries and results.  The Data Vault model allows you to explore a graph-based data set in a relational world.
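For the skeptical, here is a minimal sketch of the idea, assuming a tiny hub-and-link shape: hubs hold the business keys (the graph’s nodes), a link table holds the relationships (the edges), and a recursive SQL query walks the graph.  It uses SQLite only so the example is self-contained; the table names, columns, and sample data are my own illustrative assumptions, not a prescribed Data Vault layout.

```python
# A minimal hub/link-as-graph sketch: nodes live in a hub, edges live in a link,
# and plain (recursive) SQL traverses the graph inside a relational engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hub: one row per business key (a graph node)
    CREATE TABLE hub_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_bk  TEXT NOT NULL UNIQUE      -- the business key
    );
    -- Link: one row per relationship between two hub rows (a graph edge)
    CREATE TABLE link_referral (
        referrer_key INTEGER NOT NULL REFERENCES hub_customer(customer_key),
        referred_key INTEGER NOT NULL REFERENCES hub_customer(customer_key),
        PRIMARY KEY (referrer_key, referred_key)
    );
""")
conn.executemany("INSERT INTO hub_customer VALUES (?, ?)",
                 [(1, "CUST-A"), (2, "CUST-B"), (3, "CUST-C"), (4, "CUST-D")])
conn.executemany("INSERT INTO link_referral VALUES (?, ?)",
                 [(1, 2), (2, 3), (3, 4)])     # referral chain: A -> B -> C -> D

# Graph traversal in SQL: every customer reachable from CUST-A, with its hop count.
rows = conn.execute("""
    WITH RECURSIVE reachable(customer_key, depth) AS (
        SELECT customer_key, 0 FROM hub_customer WHERE customer_bk = 'CUST-A'
        UNION
        SELECT l.referred_key, r.depth + 1
        FROM reachable r
        JOIN link_referral l ON l.referrer_key = r.customer_key
    )
    SELECT h.customer_bk, r.depth
    FROM reachable r JOIN hub_customer h USING (customer_key)
    WHERE r.depth > 0
    ORDER BY r.depth
""").fetchall()
print(rows)   # [('CUST-B', 1), ('CUST-C', 2), ('CUST-D', 3)]
```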

So in closing, don’t throw out your RDBMS engines just yet.  If you have Big Data needs, please try to evaluate the technology on principle and ROI.  Don’t forget that RDBMS engines have been “doing big data for years” in a structured world.

Leave a comment and let me know when you might apply NoSQL for Big Data, and whether or not you see Data Vault in these situations.
