Column-Based Data Vaults #datavault

Column-based data stores have been around for a long time.  This post talks about what you need to do to make the Data Vault successful on a column-based data store.  It’s quite simple really, and in the end some parts of physical data modeling don’t matter in a column-based data store.

If you want a full on-line lesson on this and other topics, you will soon be able to take the class on http://learnDataVault.com – sign up today to be the first to be notified when my on-line classes go live.

What is a column-based data store?

A “database engine” that is vertically partitioned.  In other words, each column is effectively its own table (as a simple definition, anyhow).  You can think of this as “close to” or “relative to” 6th normal form.  An internal surrogate key attached to each column is used for join purposes, and is generally not seen or viewed by the end user.
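
To make that concrete, here is a minimal toy sketch in Python (purely illustrative, not any vendor’s actual engine): every column lives in its own structure, and a hidden row surrogate stitches the columns back into rows when the user asks for them.

```python
# Toy illustration of vertical partitioning: each column lives in its own
# structure, keyed by a hidden row surrogate the end user never sees.
class ToyColumnStore:
    def __init__(self, columns):
        self._next_rid = 0                             # internal surrogate key counter
        self._cols = {name: {} for name in columns}    # one "mini table" per column

    def insert(self, row):
        rid = self._next_rid
        self._next_rid += 1
        for name, value in row.items():
            self._cols[name][rid] = value              # each value stored per column, keyed by rid
        return rid

    def select(self, *names):
        # Only the requested columns are touched; the rid "joins" them back into rows.
        rids = sorted(self._cols[names[0]].keys())
        return [tuple(self._cols[n].get(rid) for n in names) for rid in rids]

store = ToyColumnStore(["customer_id", "name", "region"])
store.insert({"customer_id": 1001, "name": "Acme", "region": "EMEA"})
store.insert({"customer_id": 1002, "name": "Globex", "region": "APAC"})
print(store.select("name", "region"))   # reads only two of the three columns
```
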

Is a Data Model necessary in a column-based data store?

Yes, a logical data model is always necessary – so that you can understand how your data is managed, stored, and tied to the business.  We (humans) require classification systems (ontologies and hierarchies) to understand data sets; this is where data modeling (domain modeling) comes from.  That said, physical data modeling for a column-based data store does not matter beyond the data type and the null/not null specification (perhaps also a default value and/or a range constraint).  What doesn’t matter is the structure and the foreign keys.

Whether a column is or is not in a particular table, and whether or not it has foreign keys, makes no difference to the manner in which the column-based data store stores its information.

Ok – yes, foreign keys are important (implementation-wise, the column-based data store does enforce them), but a column-based data store automatically divides each table vertically into its constituent parts.

What matters in a physical data model on a column-based data store?

The following items matter (a short sketch after the list illustrates them):

  • Data type, length, precision, scale
  • Null or not null specification
  • Default values
  • Constraints
  • Function-based columns
  • Foreign Keys
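
As a hedged illustration (the names and fields below are my own, not part of any vendor catalog or Data Vault standard), a per-column specification might carry exactly these attributes and nothing about which table owns the column – because the engine vertically partitions the table anyway.

```python
# Hypothetical illustration: the physical attributes that still matter per column,
# independent of which table the column happens to sit in.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class ColumnSpec:
    name: str
    data_type: str                                    # data type / length / precision / scale, e.g. "DECIMAL(18,2)"
    nullable: bool = True                             # null or not null specification
    default: Optional[Any] = None                     # default value
    check: Optional[Callable[[Any], bool]] = None     # simple range / check constraint
    derived_from: Optional[str] = None                # function-based (computed) column expression
    references: Optional[str] = None                  # foreign key target, e.g. "hub_customer.customer_hkey"

balance = ColumnSpec(
    name="account_balance",
    data_type="DECIMAL(18,2)",
    nullable=False,
    default=0,
    check=lambda v: v >= 0,
)
print(balance.data_type, balance.nullable, balance.check(100.0))
```
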

What’s this got to do with Data Vault?

The point is this:  whether you use Data Vault, 3rd normal form, 1st normal form, 6th normal form, anchor modeling, key-value pair modeling, or dimensional modeling, ultimately the only thing that matters on the column-based data store is how you logically manage your data sets.  The data modeling method you choose literally doesn’t matter beyond the benefits of flexibility, scalability, and maintainability at the logical level.

If the Data Vault model suits your purpose for managing data, then use it.  Remember this: with a column-based data store underneath, what matters most in the project is the methodology chosen for implementation.  The Data Vault Methodology provides you with a set of guidelines for standards, re-work, scalability, measurement, quality, project planning/tracking/oversight, and automation that you simply won’t get anywhere else.

In the end…

The two BIGGEST problems that my customers have (the reasons why they “move OFF” column-based data stores in the future) are as follows:

  • Lack of understanding / proper organization of data – over time, people have “just added” another column to the database without taking the time to understand what the column represents, why it’s there, how it’s loaded, and whether it’s a duplicate.  This leads to a data junkyard: tables with 1,000 columns or more, containing 150 or more duplicate or slightly different columns, where the business no longer understands its data asset (column-based databases invite laziness in managing the data assets properly).
  • Scalability – when the column-based databases reach 80 terabytes in a single node, they tend to break down.  We’re not talking 80 TB raw here, but 80 TB of maximum capacity: compressed data and everything else.  The customers give up, and have to move back to a traditional RDBMS in order to scale.

So, if your customer can “mitigate” the data-understanding problem by rigorously following data modeling standards, and they won’t reach 80 TB compressed (about 300 to 350 TB raw) of data, then you can choose to use a column-based data appliance.
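
A quick back-of-the-envelope check of those numbers – the roughly 4:1 compression ratio below is simply an assumption read off the 80 TB compressed versus 300 to 350 TB raw figures above, not a vendor specification:

```python
# Back-of-the-envelope check on the figures above (the 4:1 ratio is an
# assumption implied by 80 TB compressed vs. roughly 300-350 TB raw).
compressed_limit_tb = 80            # practical single-node ceiling, compressed
assumed_compression_ratio = 4       # raw bytes per compressed byte
raw_equivalent_tb = compressed_limit_tb * assumed_compression_ratio
print(f"~{raw_equivalent_tb} TB of raw source data before the ceiling is hit")
```
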

But regardless of the technology, the Data Vault Model and Methodology still stand, and still provide benefits to the overall project because of the standardization, automation, and lower maintenance costs that they bring to the table.

What are your thoughts?  Please reply below.
Cheers,
Dan Linstedt

 
