Q&A: Are there Hardware Dependencies for DV?

There are literally NO hardware dependencies for using the Data Vault model, but you might still want to read this entry… I cover why, and expose a bit of the databases and hardware underneath. If you have comments, thoughts, or other experiences, I encourage you to add your COMMENT to the end of this posting.

As you know, I’ve launched one-on-one coaching; this is the kind of knowledge you get within the walls of one-on-one coaching. But for TODAY – I’m giving you the answers FREE to show you the kind of value you get when you sign up for my coaching sessions. Contact me today: coach@danlinstedt.com for more information.

What are the hardware dependencies of this approach?

NONE. ABSOLUTELY NONE. The Data Vault Model is a DATA MODELING TECHNIQUE. Like any “data modeling technique,” the Data Vault favors MPP hardware for scalability – but only when you actually have “BIG DATA NEEDS.” When I talk about BIG DATA, I’m discussing data sets that move 1TB per hour and that already exceed 80TB in their Data Vault…

At the same time, I have a 30GB Data Vault on a SQL Server instance, Windows Vista Home, AMD Turion 64-bit – and it’s just fine. I’ve built Data Vaults on 2-CPU Oracle boxes, Linux hardware, and standard servers – with up to 5 to 8 TB of data, and they run fine… So THERE ARE NO HARDWARE DEPENDENCIES.

By the way, the world’s SMALLEST Data Vault (that still gives you business value) is one hub + one satellite. Talk about scope control.
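To make that concrete, here’s a minimal sketch of a one-hub-plus-one-satellite Data Vault using Python’s built-in sqlite3. The table and column names (hub_customer, sat_customer_details, and so on) are illustrative assumptions on my part, not a prescribed standard:

```python
import sqlite3

# The smallest useful Data Vault: one hub (business keys) plus one
# satellite (descriptive attributes, versioned by load_date).
# All names and columns here are illustrative, not prescriptive.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (
    customer_hkey TEXT PRIMARY KEY,   -- surrogate/hash key
    customer_id   TEXT NOT NULL,      -- business key
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hkey TEXT NOT NULL REFERENCES hub_customer(customer_hkey),
    load_date     TEXT NOT NULL,
    name          TEXT,
    email         TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hkey, load_date)  -- history kept per load
);
""")
conn.execute("INSERT INTO hub_customer VALUES ('h1', 'CUST-001', '2011-03-26', 'crm')")
conn.execute("INSERT INTO sat_customer_details VALUES ('h1', '2011-03-26', 'Ada', 'ada@example.com', 'crm')")

# One join gives you business value already: key + current description.
row = conn.execute("""
    SELECT h.customer_id, s.name
    FROM hub_customer h
    JOIN sat_customer_details s USING (customer_hkey)
""").fetchone()
print(row)  # ('CUST-001', 'Ada')
```

Nothing here needs special hardware – an in-memory SQLite database is enough to demonstrate the pattern.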


2 Responses to “Q&A: Are there Hardware Dependencies for DV?”

  1. Marcel Franke 2011/03/26 at 6:27 pm #

Hm… I guess it depends heavily on the performance requirements. If you need to get data out quickly, and this is very often the case, and if the data model has over 100 tables, you will have lots of joins. To perform quick queries you need a good I/O system, and doing this on an SMP system with 1TB of data in the warehouse can be very frustrating.

  2. dlinstedt 2011/03/27 at 8:15 am #

    Hi Marcel,

Please read through the new technical book; in it I discuss the nature of two query-assist tables called Point-in-Time and Bridge tables. These tables “span the keys” of queries and greatly reduce the number of joins needed to get data out. These types of tables are not necessary on systems like Teradata, where the natural parallelism of the engine and its performance characteristics make such “helpers” unnecessary, even when pulling large volumes across hundreds of terabytes of information.

One more thing about general data warehousing with large data sets: 75% or more of the queries that “go after large data” are deep queries involving relatively few tables. They usually are NOT wide, shallow queries. If you’re doing wide queries for data mining, then you should “take the data from the Data Vault” and prepare it by releasing it to a data mart or exploration mart for just such purposes.

So the argument that the number of joins in the DV architecture is “too many to support large data sets” is really a moot point, unless you are on an underpowered system, or you are trying to “deliver” virtual data marts directly from the Data Vault to the business on underpowered hardware.

    Hope this helps,
    Dan Linstedt
    My new tech book can be found at: http://learnDataVault.com
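To illustrate the Point-in-Time (PIT) idea from the reply above, here is a minimal, hypothetical sketch in Python/SQLite. All table and column names are my own assumptions for the example. The PIT table pre-resolves, per hub key and snapshot date, which satellite rows were current, so the snapshot query becomes plain equi-joins instead of per-satellite “latest load_date” subqueries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_product (product_hkey TEXT PRIMARY KEY, product_id TEXT);
CREATE TABLE sat_price   (product_hkey TEXT, load_date TEXT, price REAL,
                          PRIMARY KEY (product_hkey, load_date));
CREATE TABLE sat_descr   (product_hkey TEXT, load_date TEXT, descr TEXT,
                          PRIMARY KEY (product_hkey, load_date));
-- PIT table: for each hub key and snapshot date, the load_date of the
-- satellite row that was current at that snapshot.
CREATE TABLE pit_product (product_hkey TEXT, snapshot_date TEXT,
                          price_load_date TEXT, descr_load_date TEXT,
                          PRIMARY KEY (product_hkey, snapshot_date));
""")
conn.execute("INSERT INTO hub_product VALUES ('p1', 'PRD-1')")
conn.executemany("INSERT INTO sat_price VALUES (?,?,?)",
                 [("p1", "2011-01-01", 9.99), ("p1", "2011-03-01", 12.99)])
conn.executemany("INSERT INTO sat_descr VALUES (?,?,?)",
                 [("p1", "2011-01-01", "widget")])
# Snapshot for 2011-03-26: the current price row was loaded 2011-03-01,
# the current description row 2011-01-01.
conn.execute("INSERT INTO pit_product VALUES ('p1', '2011-03-26', '2011-03-01', '2011-01-01')")

# With the PIT table, the snapshot query is all equi-joins: no
# MAX(load_date) subquery per satellite, however many satellites exist.
row = conn.execute("""
    SELECT h.product_id, sp.price, sd.descr
    FROM pit_product pit
    JOIN hub_product h  ON h.product_hkey  = pit.product_hkey
    JOIN sat_price   sp ON sp.product_hkey = pit.product_hkey
                       AND sp.load_date    = pit.price_load_date
    JOIN sat_descr   sd ON sd.product_hkey = pit.product_hkey
                       AND sd.load_date    = pit.descr_load_date
    WHERE pit.snapshot_date = '2011-03-26'
""").fetchone()
print(row)  # ('PRD-1', 12.99, 'widget')
```

The joins don’t disappear, but each one becomes a direct key-and-date match, which is exactly the “query-assist” effect described in the reply.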
