More: Data Vault VS Star Schema – for EDW

I received a lengthy comment recently from a gentleman kind enough to provide some great insight and feedback on how to get Star Schema data warehouses to perform.  He politely asked me to respond to his comments, which I will take the time to do here in this post.  This post is more on the subject of using a Data Vault Model (as an EDW) vs a Star Schema as an EDW.  I want to say Thank-you to Frank for providing an honest answer.  As usual, before I get going, I invite everyone to post comments and questions, as we are all learning about the next step in the process of building data warehouses.  Also, as usual, I have said it before and I'll say it again: if what you are doing is working for you, then that's wonderful; don't change or fix what isn't broken.  But if you are having problems or issues with your data warehouse, then it may be time to seek a new alternative approach such as the Data Vault model and methodology.

First a disclaimer:

Frank was very nice and professional in his comments.  I mean no disrespect to him, his knowledge, or the system that Inergy uses; in fact, I truly appreciate the insights.  However, I sometimes get overly excited in making a point, and my writing can be "rough" around the edges.  This roughness is not directed at Frank or anyone else; it just happens to be my writing style (I write like I teach… with passion, I hope).  As always, take the parts you like and leave the rest.  I believe Frank to be a very smart individual with valid opinions and great questions.  Thank-you for taking the time to write a lengthy comment – I enjoy reading comments like this.  I believe in what I am doing, and it comes through in what I write and teach.

His Thoughts:

I just read that you would like to hear from me and our company Inergy, because we experience a lot of success with star schemas in terabyte environments, with multiple sources, including (near-)real-time loading, and you want to know what we've done that works.

Our architecture is used in our "BI in the cloud" solution in the Netherlands. This is a true BIaaS environment with BI-PaaS (technology: Netezza, PowerCenter & MicroStrategy) including development, maintenance, system management and support of the DWH. In other words, the complete 'package' for a data warehouse environment. This environment has been in place for almost 4 years and we serve a lot of customers.

Let me first start to explain where we agree: A good backend system and a good frontend system are required in an enterprise data warehouse. In the DV-architecture the combination of the DSA, raw DV and business DV is the backend system (in my definition) and the combination of data marts is the frontend system. We also agree on the requirements of the backend system and the frontend system:
– backend system: system-of-record, complete history (which leads to an auditable and compliant DWH), the ability for real-time loading, prevention of cascading-update impact, and being easy to load, easy to build, easy to extend, easy to restart and easy to scale.
– frontend system: an easy-to-use environment for the end user, e.g. dimensional model.

We also agree that both the architecture of Inmon and Kimball do NOT meet those requirements.

My Response:

Yes, we agree on a good back-end system needing to be enterprise-wide, scalable, flexible, and housing raw auditable data.  HOWEVER, I have never said the DV architecture is a combination of DSA (data staging area), raw DV, and Business DV.  This is an incorrect assumption.  The term RAW-DV is what I call an Enterprise Data Warehouse; it is composed of business keys and soft-integrated raw data.  This is much different from a "raw-DV" as the term is used by so many people out there.  See my post (with Ronald Damhof) on Ronald's site about Raw-DV definitions.
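For readers who have not seen these structures, here is a minimal sketch of the idea (a rough illustration only: the table names, columns, and fake hash key are mine, not canonical DV DDL).  The Hub carries nothing but the business key plus load metadata, and the Satellite carries the raw, unconformed attributes exactly as received:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative Hub + Satellite: the hub holds only the business key and load
# metadata; the satellite hangs the raw descriptive data off the hub's key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (
    customer_hkey TEXT PRIMARY KEY,       -- surrogate/hash of the business key
    customer_bk   TEXT NOT NULL UNIQUE,   -- the business key itself
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE sat_customer_crm (
    customer_hkey TEXT NOT NULL,
    load_dts      TEXT NOT NULL,
    name          TEXT,                   -- raw attribute, stored as received
    status        TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hkey, load_dts)
);
""")

def load_customer(bk, name, status, source):
    """Insert-only load: a new business key creates a hub row; every change
    lands as a new satellite row, so history is never updated in place."""
    now = datetime.now(timezone.utc).isoformat()
    hkey = "h|" + bk  # stand-in for a real hash key
    conn.execute("INSERT OR IGNORE INTO hub_customer VALUES (?, ?, ?, ?)",
                 (hkey, bk, now, source))
    conn.execute("INSERT INTO sat_customer_crm VALUES (?, ?, ?, ?, ?)",
                 (hkey, now, name, status, source))

load_customer("CUST-1001", "Acme BV", "active", "CRM")
```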

The business DV is NOT a back-end system, nor is it meant to be a back-end system.  The Business DV IS another data mart, just modeled similarly to the DV structures, using Hubs and Links.  The Business DV is a front-end delivery mechanism, so please don't mistake it for a back-end system.  Furthermore, I never advocated the use of a Business Data Vault as a required part of the architecture; in other words, it's optional – you implement a Business DV only when you have pain points / needs / reasons / requirements to do so.

You see, I’m a minimalist, I believe you MUST have business justification for everything that you build, regardless of what is being built.  That makes the Business DV optional.

Front-end system: I do not define a front-end system as limited to a dimensional model.  I state it as follows: the front-end system will be data marts; let your data marts be modeled specifically for the purposes of rapid business retrieval and understanding.  The data marts may be a star-schema dimensional model, a cube, a flat-wide file, or a business DV model – it all depends on what the business users need.  Data marts may also be virtual in nature, and should be "virtualized" (stored as views) until such time as performance dictates otherwise.
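To make "virtualized" concrete, here is a minimal sketch (simplified, illustrative tables; not a production design) of a dimension delivered purely as a view over the Data Vault, so no second copy of the data exists unless performance forces materialization:

```python
import sqlite3

# The "mart" dimension is only a view over the Data Vault tables:
# hub + latest satellite row per key. No data is stored twice.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE hub_customer (customer_hkey TEXT, customer_bk TEXT);
CREATE TABLE sat_customer (customer_hkey TEXT, load_dts TEXT, name TEXT);

CREATE VIEW dim_customer AS        -- the 'data mart' is just a query definition
SELECT h.customer_bk, s.name
FROM hub_customer h
JOIN sat_customer s ON s.customer_hkey = h.customer_hkey
WHERE s.load_dts = (SELECT MAX(load_dts) FROM sat_customer
                    WHERE customer_hkey = h.customer_hkey);

INSERT INTO hub_customer VALUES ('h1', 'CUST-1001');
INSERT INTO sat_customer VALUES ('h1', '2011-01-01', 'Acme');
INSERT INTO sat_customer VALUES ('h1', '2011-06-01', 'Acme BV');  -- newer row wins
""")
print(conn.execute("SELECT * FROM dim_customer").fetchall())
# [('CUST-1001', 'Acme BV')]
```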

A question about your statement:  “We also agree that both the architecture of Inmon and Kimball do NOT meet those requirements.” 

I'm not sure I do agree, because the previous bullet point you proposed (front-end system) was strictly a Kimball architecture (dimensional)…  By the way, Dr. Kimball did NOT invent the dimensional model; he DID, however, make it popular, define it well, and formalize it as a method for data modeling.  Let me change your statement to the following:

We also agree that both the architecture of Inmon and Kimball do NOT meet the requirements of back-end data warehousing.

His Thoughts:

My concern with the DV architecture is the use of a lot of storage layers (DSA, raw DV, business DV, data marts). Let's face it: every layer needs to be developed, maintained and managed, the data has to be processed into every layer, (DV) knowledge is required, and of course the data needs to be stored. This leads to additional costs and a longer time-to-market. I think your argument against this firm statement is that the DV architecture breaks complexity down into bite-sized manageable chunks (divide and conquer). If I compare this to our solution (see below), I really don't recognize this advantage, and if there is a (small) advantage, the disadvantages of the additional storage layers are (in my opinion) much bigger. Bottom line, my concern with the DV indeed is YALS: Yet Another Layer of Storage. In other words, the DV will lead to a good enterprise data warehouse, but it takes (in my opinion) more time than needed.

My Thoughts:

First, I'd like to say this: I have taught this in class in the past…  the DV has no more "storage layers" than a traditional enterprise data warehouse.  There are: 1) the stage, 2) the EDW, 3) the data mart (which, in my opinion, should be virtualized – see my comments above).  Now, in certification class (in person) I teach the following: the future of real-time data warehousing actually makes the STAGING AREA obsolete!  I maintain that in a truly real-time data warehouse (operational data warehouse) there will be no need for a staging area.  Why?  Because the data is arriving in information queues (message queues) and is processed directly into the Data Vault as-is.  At that point, there is no time to stop and process multitudes of business rules to conform and align the data set to business needs.
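A rough sketch of that point follows, with an in-process queue standing in for a real message broker (the names, the queue, and the loader are purely illustrative):

```python
import json
import queue
from datetime import datetime, timezone

# Real-time messages are applied straight to Data Vault structures as-is,
# with no business rules and no staging table on the way in.
inbound: queue.Queue = queue.Queue()  # stand-in for a real message broker

def apply_to_vault(msg: dict) -> None:
    # Insert-only: store the message exactly as it arrived, stamped with
    # load time and record source. No conforming, no cleansing, no staging.
    row = dict(msg, load_dts=datetime.now(timezone.utc).isoformat(),
               record_source="mq.orders")
    print("raw DV insert:", row)

inbound.put(json.dumps({"order_bk": "ORD-42", "status": "shipped"}))
while not inbound.empty():
    apply_to_vault(json.loads(inbound.get()))
```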

Every layer needs to be developed, maintained, managed….  Agreed!  Again, I am a minimalist – I've always taught that you should only build what is absolutely necessary and can be measured as tangible business value.  If it has no business value (including maintenance costs, complexity ratings, ease of use, flexibility, etc…) then don't build it.  It also needs to be justifiable and measurable in order to assign a quantitative value to it.  DV knowledge is required – yes, true, very true…  but it's an evolutionary process to think in the DV fashion.  Take everything you know and combine it (the DV is a hybrid data model built off the best of breed from 3NF and star schema modeling techniques) – there really isn't anything "new" to learn, just different ways of thinking about data architecture.

If I compare this to our solution (see below), I really don't recognize this advantage, and if there is a (small) advantage, the disadvantages of the additional storage layers are (in my opinion) much bigger. Bottom line, my concern with the DV indeed is YALS: Yet Another Layer of Storage. In other words, the DV will lead to a good enterprise data warehouse, but it takes (in my opinion) more time than needed.

I can see your point, and we'll discuss the comparison below.  Regarding the advantages…  if you've never been through certification training, then I agree it would be hard for you to see the advantages and disadvantages of using an enterprise data warehouse architecture.  Keep in mind, it is not a "Data Vault Architecture" – the Data Vault is a modeling technique for your enterprise data warehouse layer and a methodology for how to implement an EDW properly.  Do NOT CONFUSE THE DATA VAULT WITH AN ENTERPRISE DATA WAREHOUSE ARCHITECTURE.  The EDW architecture is a systems architecture; the Data Vault is a modeling paradigm and implementation methodology that fits within your enterprise data warehouse architecture (systems architecture).  When you examine the definition of my proposed EDW systems architecture, I talk about the following:

STAGING AREA -> EDW (Raw Data Vault) -> Data Marts (including star schema, cubes, flat-wide files, business data vault, etc…)

The only difference in "storage areas" that I can see between what you say Inergy has and what I discuss is the staging area – and what we each call a data warehouse.  These definitions are different (we'll get to this).

His Thoughts:

Ok, I think you're now curious about the Inergy architecture. Well, this architecture is quite simple (and not unique): our backend system is a historical(!) data staging area. This historical DSA has the same structure as the source, including start- and end-timestamps. We generate the process from source to DSA 100%, including delta detection, transport and archiving. The frontend system is a (traditional) dimensional DWH storing the data at the lowest grain. Physically it's one database with dozens of conformed star schemas. The business rules are implemented from DSA to DWH.

My Thoughts:

Ok, I get it…  One of the disagreements you and I have is over what we "define" to be a data warehouse.  I maintain the following stipulation (and I define it in my book): a Data Warehouse is defined by non-volatile, time-variant, integrated information.  I also include raw data in that definition, and if you are building a Data Vault, integrated by business key is also part of the definition.

I would argue that your Historical Staging Area is really a raw data warehouse – however, it is not integrated by business key.  The minute you put history in a staging area, it becomes a data warehouse – and requires all the maintenance, overhead, management and everything else that a "data warehouse" requires: performance, up-time, partitioning, parallelism, flexibility, scalability, auditability, etc…  I'll agree that a Historical DSA is a Data Warehouse.  I will also say: a Historical (non-integrated) DSA is NOT an Enterprise Data Warehouse.  Why?  Because it is not integrated by a horizontal view of the business keys across the different lines of business.  The data (as you pointed out) has the same structure as the source.

Some questions for you, that I may be ignorant on the answers due to lack of visibility:

  1. If the Historical DSA is the same structure as the source, how do you handle Cobol feeds?
  2. If the Historical DSA is the same structure as the source, how do you handle XML feeds?
  3. If the Historical DSA is the same structure as the source, how do you integrate “unstructured/semi-structured” data feeds?
  4. If the Historical DSA is the same structure as the source, how do you handle “real-time message inflows”?
  5. What happens when a source system is “retired” and a new one takes its place?  How do you load new data from the new system, what do you do with the old data from the old system in the Historical DSA?
  6. How do you recover from issues/problems if the Historical DSA gets "out of sync" with the source system?
  7. How do you handle 100% restartability / recoverability of your loads to the Historical DSA if they fail?  Do you use database logs from the Historical DSA?
  8. What happens if the constraints in the source are defined, but broken?  How does "broken data" get stored in the Historical DSA?  I guess this assumes that you have all the FKs, PKs and constraints applied in the Historical DSA…
  9. What if you have NULL PK’s in the source, and you need to report all the broken data to the business, can this “busted data” make it in to the “exact replica Historical DSA?”
  10. How do you handle changes to the source systems' structures in the Historical DSA?  Doesn't this cause a cascading impact against the children?  Especially if Time is added as an attribute to the PK?

Don't get me wrong, I'm NOT saying that a Historical DSA is a bad thing…  that's just a concept…  I'm debating the value of modeling the historical DSA according to the source system, and I'm debating the value of storing history in a staging area in the first place.  Why?  Because the next thing you say Inergy does is: "copy all that history, run it through business rules, and store it again" in what I call the data mart layers (your federated star schema).  That's two copies of the data (in full)…  Ouch.

In the Data Vault Methodology, when we load data marts (because the staging area has no history, and the EDW (DV) has 100% of the history in one place), we can put rolling history in the data marts.  We can limit the amount of history in our data marts to just what the business needs and can justify.  This is not something I can see being available given the architecture of Inergy.  By using the DV methodology, our overall storage costs are actually lower than with the architecture you propose (we use less overall storage), because we have no need to store all the history twice.
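A tiny sketch of the rolling-window idea (the 13-month window and the dates are assumed figures, purely for illustration):

```python
from datetime import date, timedelta

# The Data Vault keeps 100% of history; a physicalized mart keeps only the
# window the business justified. 13 months is an assumed requirement.
cutoff = date.today() - timedelta(days=13 * 30)

edw_fact_dates = [date(2007, 5, 1), date(2010, 2, 1), date.today()]  # full history, EDW only
mart_fact_dates = [d for d in edw_fact_dates if d >= cutoff]         # rolling window only

print(f"EDW rows kept: {len(edw_fact_dates)}, mart rows kept: {len(mart_fact_dates)}")
```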

To your credit (or whoever built the system you have):

The Federated Star Schemas were done properly – at the lowest level of grain.  I'm assuming this is the case based on what I'm reading.  I've run into so many "Dimensional Data Warehouses" that are not stored at the lowest level of grain that they are the cause of missed auditability and missed opportunity, and can't provide new requirements/requests to business users on demand.  I will also say that, from the sounds of it, your business never really "changes" too much.  In other words, they don't acquire new companies or split off parts of the company – it also sounds like they aren't replacing systems too often, and that it's a somewhat narrow line of business (easily defined by finite mathematics).  In this case, it is easy to "stabilize" dimensions and not try to overload the dimensions with other components and multiple definitions.

The dimensional data warehouse models I run into are, quite simply put, a mess.  Their dimensions have been overloaded, abused, and continually "added to" until they simply can't hold any more data (due to a number of reasons).  Also, they have at least 3, if not more, definitions per field in the dimension…  i.e.: when the moon is blue, the customer number means X; when the customer name starts with a "*" then the customer number is not a customer but a prospect… etc…  They can't keep it straight, let alone combine new data sets from new source systems.  The maintenance costs become astronomical at this point.

But I will say that it sounds like Inergy's system is tight and well-designed.  My congratulations to you and the team that built it.  It's not this way in the environments I visit.  It usually takes a really good data modeler to build the right dimensionally based data warehouse the first time out of the gate.

His Thoughts:

Don't we have data marts? Well, only when required. They are required only in exceptional situations, e.g.:
– KPI applications: a specific star schema with KPIs is required, derived from the DWH star schemas. But this KPI schema is stored in physically the same database, with dimensions conformed to the DWH, so it isn't a data mart from a user perspective, only from a technical perspective (loading and storing the same data again).
– for data mining purposes: e.g. for marketing analytics, one record per customer with a lot of characteristics is required. Again, this data (mart) can be stored in the same physical database (Netezza supports in-database analytics).

So, data marts are the exception instead of the default, resulting in a minimum of storage layers. Our experience is that 95% of the data is not copied to a data mart.

My Thoughts:

The exact same statements apply to the Data Vault that I propose…  don't build the data marts unless they are justified.  Furthermore, build the data marts as virtual layers (in RAM, or as SQL views) until the business rules are too complex or the performance is not good enough; then decide where to physicalize the tables.  We also only build data marts when required, so on these points we agree and are the same.

His Thoughts:

The fun thing is, this architecture meets the requirements mentioned above:
– backend system: system-of-record, complete history (which leads to an auditable and compliant DWH), the ability for real-time loading, prevention of cascading-update impact, and being easy to load, easy to build, easy to extend, easy to restart and easy to scale.
– frontend system: an easy-to-use environment for the end user, in our architecture a dimensional model.

The big advantage of our architecture: only two required storage layers, one of which is a very simple layer: the historical DSA. This layer is 100% generated and additional knowledge is not needed; it's just a historical copy of the source system.

My thoughts:

Ok – I don't see this as an advantage.  As I explained earlier, the Enterprise Data Warehouse systems architecture contains a Data Vault model and methodology as one component.  The staging area does not store history (it is not a Historical DSA in the world of Data Vault implementation); the DV model (your EDW) does store history – all of it.  The data marts (data delivery layer) should be virtualized, and when physicalized they store only rolling or limited history – not the entire history that is contained in the Data Vault.  So, compared with your two layers (which I read to store ALL history twice), the approach recommended by the Data Vault model and methodology actually requires less disk space overall.  So I'm not sure where your claim to "less storage" for your architecture is coming from; forgive my ignorance.
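Here is the back-of-the-envelope arithmetic behind that claim; every figure below is an assumption for illustration, not a measurement from any real system:

```python
# Storage comparison under assumed sizes (illustrative only).
full_history_gb   = 1000   # one full copy of all history
current_cycle_gb  = 10     # one load cycle (non-historical stage)
rolling_window_gb = 150    # the limited history a physicalized mart keeps

historical_dsa_plus_dwh = full_history_gb + full_history_gb   # history stored twice
stage_plus_dv_plus_mart = current_cycle_gb + full_history_gb + rolling_window_gb

print(f"historical DSA + dimensional DWH : {historical_dsa_plus_dwh} GB")  # 2000 GB
print(f"stage + Data Vault + rolling mart: {stage_plus_dv_plus_mart} GB")  # 1160 GB
```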

His Thoughts:

Using this architecture I really don't see the complexity of the load routine from DSA to dimensional data warehouse. A unit is loading a fact or dimension table. In 95% of cases this can be loaded easily. Only in exceptional situations (< 5%) do we take an additional step (storing the data twice in the DSA), e.g. for matching and deduplicating customer data. So, we use the same architectural principle: only an additional storage layer when required.

Of course, we have the benefit of the power of a data warehouse appliance: no aggregates, no partitioning, no indexes, high-performance loading and querying. In other words, a data warehouse appliance not only boosts your load and query performance but also allows a very straightforward architecture. That's why I recommend that every DWH architect of a medium or large volume data warehouse use an appliance.

My Thoughts:

Ok – we have a similar approach: apply business rules before delivering data to the business user, and "separate" the job of sourcing the data from the source systems (timing, availability, getting the data in) from the job of applying business rules.  In that regard, we match in beliefs.  In the world of Data Vault, we accommodate real-time delivery direct to the Data Warehouse (a term which we define differently) from COBOL, XML, and unstructured data sets.  Along with batch cycles, we load a NON-HISTORICAL DSA that is fully recoverable and fully fault tolerant, and has no constraints and no foreign keys, and therefore allows "the good, the bad and the ugly" data to make it all the way into the data warehouse / Data Vault.  I fear that with the 3NF Historical DSA you are discussing, the questions I asked above shed light on the fragile nature of "loading 100% of the data within scope, 100% of the time," making the approach that Inergy takes less likely to pass a full and complete audit (especially given that the exact source system structures are copied to the historical DSA) – but that's just my opinion.

Appliances, appliances – it doesn't matter what type of data model or systems architecture you have (Data Vault or not…).  I have customers using Netezza and Data Vault just as successfully as dimensional models and Data Vault.  Now, some appliances (like columnar data stores) don't care what model you throw at them.  The physical storage components vertically partition the entire model; there is no physical concept of a "table" in a columnar DB… just a logical definition of a table structure (made up of columns).  Netezza likes flat-wide structures, so… we model the logical Data Vault, and the physical DV model denormalizes some components to make it work better with the box.  Appliances don't matter in this discussion one way or the other.  I can say the same thing about Teradata boxes and Teradata appliances, or Greenplum – no partitioning, no aggregates, no indexes, high-performance loading and querying, etc…  It's not about the appliance; our discussion is about the merits of the systems architecture and the data model design.

His Thoughts:

We don't have to discuss the strengths of a dimensional model for end users, because a dimensional model is also part of the DV architecture. You mention some disadvantages of the dimensional model, but this is from the perspective of the dimensional model being the central (auditable, historical, system-of-facts) data warehouse. In our situation that DWH is the historical DSA. In other words, the DSA has some additional purposes compared to the traditional DSA.

Finally, I want to respond in advance to some possible reactions and concerns:
– "With this architecture you don't have the ability to drop the data mart (= star schema in the data warehouse), because you will lose your business rules." That's correct. But our experience is that we never have to drop data marts – why should we? Dropping a data mart is in my opinion really an exception; the facts are designed around the process, and processes don't disappear often. Besides this, I disagree when people talk easily about dropping data marts, because this is the product the business uses. This is why they pay the bill. Moreover, a lot of reports are based on the data marts; it isn't easy to tell the business: we're going to drop your reports!

[My response: Business changes.  When the business changes, they demand different results; when they demand different results, they want different structures.  When they ask new questions, they want new reports and new data sources.  Sure, there are some core functionality reports, and those marts stay intact (most of the time), but there are times when the business needs a different grain of data, and you can't simply run off and add a new dimensional key to a fact table – this would destroy the credibility of all the existing facts in that table (unless they were re-built using the new grain).  We resolve to build new marts for different business needs.  Sure, there are times when we add columns to existing dimensions and facts, but only when it fits.  If Inergy's business is not changing, then either it is a federally operated business (government owned), or it is a dying business because of a lack of competitive nature and a lack of change in asking new questions.  I once had a case where auditors needed to review data once a year for 3 weeks; when they left, they no longer needed the data – so we dropped the data marts and re-built them the next year (with new requirements).  I've had other, one-off requests: please run this report / answer this question for this month's reports only…]

– "But with the DV you can easily create the dimensional data mart; that's not the case with your architecture." That's correct, but in the DV architecture the hard work is done from DSA to raw DV to business DV. So, the complete data logistics must be compared, not only a part of the data logistics.

[My Response: No.  You have an incorrect assumption.  The hard work, as you put it, is done going from the Data Vault EDW to the data mart layers.  Your statement that it is done going from DSA to raw DV is incorrect; going from raw DV to business DV is only correct IF you build a Business DV – and don't forget, the Business DV is just another data mart.  I do not understand your statement "the complete data logistics must be compared"…  if you mean complete history, then you are incorrect.  Going from the stage to the raw DV deals with delta processing (we do not advocate nor use a Historical DSA); if you mean from the raw DV to the data mart layer, then that is also incorrect.  I have implementation designs which allow incremental build-outs of the data mart layers…  Remember: scalability is a fundamental tenet across everything in the Enterprise Systems Architecture that includes a Data Vault.]
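To illustrate what I mean by delta processing from a non-historical stage, here is a hedged sketch (the data, the compare logic, and the in-memory stand-ins for tables are all illustrative):

```python
from datetime import datetime, timezone

# Only rows whose payload differs from the latest satellite row for that key
# produce a new satellite record; the stage itself keeps no history.
staged = [
    {"bk": "CUST-1001", "name": "Acme BV", "status": "active"},   # unchanged
    {"bk": "CUST-1002", "name": "Globex",  "status": "hold"},     # new key
]
latest_sat = {"CUST-1001": {"name": "Acme BV", "status": "active"}}

def delta_load(rows, latest):
    now = datetime.now(timezone.utc).isoformat()
    for row in rows:
        bk = row["bk"]
        payload = {k: v for k, v in row.items() if k != "bk"}
        if latest.get(bk) != payload:        # new key or changed attributes
            latest[bk] = payload
            print(f"insert satellite row: {bk} @ {now} -> {payload}")
        # unchanged rows are skipped; the stage is truncated next cycle

delta_load(staged, latest_sat)
```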

– "But with the DV you can use virtual data marts, so that's one storage layer less." I think you agree that the performance will not be optimal, because of the additional required joins (especially between big tables: link-satellite). In almost all situations you will materialize those views. Besides this, you still need to develop and maintain this layer.

[My Response: Performance depends on the environment, the hardware, the partitioning, and the size of the data you are trying to retrieve.  I cannot and will not agree with the statement "performance will not be optimal"…  I have customers with Data Vaults on Teradata where the join performance is EXCEPTIONAL.  I have other customers on SQL Server 2008 R2 Enterprise where the JOIN PERFORMANCE is exceptional, and I have other customers on Oracle, DB2 UDB, Netezza, and even SQL Server 2005 where the join performance is exceptional.  I've been doing performance and tuning as part of my career for systems at Nike, AAA, Pepsi Bottling Company, Nationwide Insurance, Expedia, and so on, for over 20 years.  I completely, 100%, disagree with your statement that "you will almost always have to materialize these views" – this simply couldn't be further from the truth.  In fact, it isn't true: I have customers using virtual data marts today.  Besides which, you still have another layer to develop and maintain: the Inergy system must develop and maintain conformed dimensions and facts, and what happens when these don't perform?  What if the business wants a CUBE?  Again, the architectures are the same; both demand that data marts be made available, and both can virtualize them – performance is another story that all depends on needs, desires, and hardware layers.]

– "You don't have the flexibility when the source changes." Well, we only have to change the DSA tables. Our experience is that in 95% of the cases this means adding an attribute. That's no problem. And if the source drastically changes, OK, in that case we also have to change the ETL to the DWH and the DWH schema. But only two layers have to be changed. Bottom line: a flexible architecture is more important than a flexible data model.

[My Response: As I said earlier, congratulations on a stable, static dimensional model.  The only time I've ever seen a stable dimensional model is a) when the business is government based and doesn't change much, or b) it's a dying business – not being competitive in the marketplace, or c) it doesn't have any real competitors to what it does, so it becomes lazy and "does business the way it always has, because it feels like it works."  Are you overloading the definitions of fields in the dimensional model?  What happens when new "cases" come in from the business?  What happens when a new source system arrives with data that is defined differently?

Ok – I want to clarify: the Data Vault model is an architecture.  It is a data architecture.  It is flexible.  How are you defining "flexible architecture"?  When I look at a systems architecture, I speak about the major components that need to be in place: staging, EDW, data marts.  These components don't need to be flexible – I'm not going to "wake up one morning" and invent a new layer for the systems architecture, so in that manner of speaking the systems architecture needs to be stable and strong, and stand the test of time.  When you say a flexible architecture is more important than a flexible data model, you have contradicted your previous sentences, where you state you only have to make changes to the "structures in the DSA"…  well, that means you're making changes to the data model, does it not?  So the data model does have to be flexible, does it not?

In the Enterprise Systems Architecture world using a Data Vault approach, we too can easily absorb changes.  Adding a new system is simple: add the new staging tables (also 1-for-1 with the source system, but the tables do not contain history, nor do they contain PKs or FKs).  We then add the columns or structures (Hubs, Links, Sats) to the Data Vault model (EDW) as necessary – sometimes it is not necessary.  Then, only on demand from the business, do we add them to the data mart or data delivery layer.  This could be virtual or physicalized.]
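A small sketch of why that absorption is cheap (the generator and all names are illustrative): staging tables mirror the source 1-for-1 but carry no keys, constraints, or history, so a new source column is just one more nullable column:

```python
# Generate constraint-free staging DDL from a source column list.
def staging_ddl(source_table, columns):
    cols = ",\n    ".join(f"{c} TEXT" for c in columns)  # all nullable, no PKs/FKs
    return (f"CREATE TABLE stg_{source_table} (\n    {cols},\n"
            f"    load_dts TEXT,\n    record_source TEXT\n);")

print(staging_ddl("customer", ["customer_bk", "name", "status"]))
# Source adds a 'segment' column? Regenerate; there are no keys to rework.
print(staging_ddl("customer", ["customer_bk", "name", "status", "segment"]))
```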

Summarized: what's our 'magic'? Well, it isn't magic, it's just a practical approach: a historical DSA, a conformed dimensional model in one physical database, additional storage layers only when required, a DWH appliance, generating as much as possible, and a lot of standardization. Really, every day we experience that this architecture works! I really don't recognize your warnings: 'mass complexity', 'hits you like a ton of bricks' and 'major re-engineering'. Honestly, if that were the case, Inergy wouldn't exist anymore!

[My Response: Again, I want to say: congratulations on a well-founded, well-designed system.  Inergy has a system that works, and it's based on dimensional modeling at the lowest level – that's wonderful; and again I want to emphasize to my readers and everyone out there: if you don't see or feel pain, then don't make changes – the DV model and methodology may not be a fit for you if what you have is working.  BUT when you start to experience pain and problems, that is when you seek a new and different approach.  It's at that time that the values and benefits of the DV begin to make sense.  It's also when the dimensional modeling system has been poorly constructed to begin with, OR the business is going through rapid changes, that the DV approach makes sense.]

I want to emphasize that I appreciate and acknowledge your in-depth DWH knowledge (and the knowledge of your 'DV colleagues' in the Netherlands), but apparently we don't agree on 'the best' DWH architecture. I hope you can tell me what the advantages are of the DV compared to our architecture. I'm looking forward to your reaction, because I want to learn from your experience, just as you want to learn from others! Another reason is that I'm currently writing an article for a Dutch online architecture magazine which describes this alternative architecture.

[My Response: Thank-you for your kind words and in-depth discussion.  I thoroughly enjoyed the tough and well-thought-out questions/challenges you put forward.  I also enjoyed hearing about a successful dimensional data warehouse (it depends on how you define success, I know…).  It's just very rare that I see companies in action with a dimensionally based data warehouse that is working for them and not causing problems – perhaps it's because of the line of work I'm in, perhaps it's because I've been a "troubleshooter" all my professional life; whatever the reason – again, congratulations.  They are well deserved.

I want to emphasize that in no way, shape, or form did I ever intend to bash you or your systems – I am excited and over-zealous (just a bit) in my responses, and if anything I said offended you, my apologies.  I hope to have more discussions like this one in the future.  Cheers.]

If anyone has any questions, please don’t hesitate to post comments, or send me an email.  Do others have success stories for dimensional warehouses they’d like to share?  I’d like to hear about them.

Dan Linstedt

