in this entry i try to compare reporting performance across a traditional approach and a data vault approach. if you have comments, thoughts, or other experiences, i encourage you to add a comment at the end of this posting. as you know, i've launched one-on-one coaching; this is the kind of knowledge you get within the walls of a one-on-one coaching session. but for today, i'm giving you the answers free to show you the kind of value you get when you sign up for my coaching sessions. contact me today at email@example.com for more information.
how does reporting performance compare to a traditional approach? can it be quantified, or even qualified?
once again, i wonder: how on earth can this be quantified? or even qualified? for this one there really is no direct comparison to make. but i will lend you my thoughts… here goes.
if you have a data vault on teradata, you don't need to construct physical data marts! everything, yes, everything is virtual. that is, of course, if you are on teradata hardware (not just the database software). their engine is so powerful, and so fast, that virtual marts (views) work 100% of the time for your reporting needs, inclusive of business logic. ok, maybe i shouldn't go that far… there is one area that requires heavy-duty processing time, and that's householding. that's just the nature of the game; it requires physical tables to make it work.
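to make the "virtual mart" idea concrete, here is a minimal sketch using python's sqlite3 as a stand-in for the warehouse engine. all table, column, and view names here (hub_customer, sat_customer_details, dim_customer) are hypothetical examples, not a prescribed data vault layout: the point is simply that a view can join a hub to the latest satellite row and apply business logic inline, so reports query the view and no physical mart table ever gets built.

```python
import sqlite3

# in-memory database standing in for the warehouse platform
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# a tiny data vault core: one hub plus one satellite
cur.executescript("""
CREATE TABLE hub_customer (
    customer_hk TEXT PRIMARY KEY,   -- surrogate/hash key
    customer_id TEXT NOT NULL,      -- business key
    load_dts    TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk TEXT NOT NULL,
    load_dts    TEXT NOT NULL,
    name        TEXT,
    region      TEXT,
    PRIMARY KEY (customer_hk, load_dts)
);

-- the "virtual mart": a view that picks the latest satellite row
-- per hub key and applies a bit of business logic (UPPER) inline
CREATE VIEW dim_customer AS
SELECT h.customer_id,
       s.name,
       UPPER(s.region) AS region_code
FROM hub_customer h
JOIN sat_customer_details s
  ON s.customer_hk = h.customer_hk
WHERE s.load_dts = (SELECT MAX(s2.load_dts)
                    FROM sat_customer_details s2
                    WHERE s2.customer_hk = s.customer_hk);
""")

cur.execute("INSERT INTO hub_customer VALUES ('hk1', 'C-100', '2024-01-01')")
cur.executemany(
    "INSERT INTO sat_customer_details VALUES (?, ?, ?, ?)",
    [("hk1", "2024-01-01", "acme ltd", "emea"),
     ("hk1", "2024-02-01", "acme ltd", "apac")],  # later row wins
)

# reports query the view; the join and logic run at query time
rows = cur.execute(
    "SELECT customer_id, name, region_code FROM dim_customer"
).fetchall()
print(rows)  # latest satellite picture, no physical mart table
```

whether this works at scale is exactly the platform question above: on an engine fast enough to resolve the joins at query time, the view is all you need; otherwise the same SELECT becomes the load for a physical table.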
if you have db2 udb eee on mpp hardware, there's a chance you don't need to construct physical data marts… it depends on the volume, the number of hops between the mpp servers, and the performance of the machines.
if you have db2 udb eee in a logical partition on p-series hardware, there's a good chance you don't need to construct physical data marts… (see the statement above)
if you have oracle on big smp iron (64 cpus, dedicated, a single oracle instance, 128gb of ram, etc.), there's a good chance you don't need to construct physical data marts, except when you've reached the 48 terabyte mark…
once you build physical data mart tables and move the data from the data vault into star schemas, reporting performance is the same as any other star-schema reporting performance… it simply doesn't change at that point.
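the materialization step above can be sketched the same way: the physical mart is just the virtual mart's SELECT persisted into a table, after which reports hit that table like any other star schema. again, the names here (hub_product, sat_product, v_dim_product, dim_product) are hypothetical illustrations, and sqlite3 stands in for whatever engine you run.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# a tiny vault plus a virtual mart over it
cur.executescript("""
CREATE TABLE hub_product (product_hk TEXT PRIMARY KEY, product_id TEXT);
CREATE TABLE sat_product (product_hk TEXT, price REAL);
CREATE VIEW v_dim_product AS
SELECT h.product_id, s.price
FROM hub_product h
JOIN sat_product s ON s.product_hk = h.product_hk;
""")
cur.execute("INSERT INTO hub_product VALUES ('hk1', 'P-1')")
cur.execute("INSERT INTO sat_product VALUES ('hk1', 9.99)")

# materialize the virtual mart into a physical star-schema table;
# from here on, queries read dim_product like any other mart table,
# and the data vault behind it no longer affects query speed
cur.execute("CREATE TABLE dim_product AS SELECT * FROM v_dim_product")

rows = cur.execute("SELECT product_id, price FROM dim_product").fetchall()
print(rows)
```

in a real shop this CREATE TABLE AS SELECT would be a scheduled load (with incremental logic), but the performance point stands: once the rows are physical, the reporting layer can't tell they came from a vault.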