#datavault 2.0 supports dynamic data warehousing

It’s been a while since I blogged about features and functionality of the Data Vault, but here is an entry that discusses something I’ve been talking about since 2001: Dynamic Data Warehousing.  This is the next step beyond active data warehousing / operational data vaults.

First some references:

So as you can see, these ideas are not new.  I’ve intentionally designed the Data Vault model to be adaptable at run-time in production environments.  However, the technology and the capabilities have finally caught up with the designs and intentions.

Today, you can (if you haven’t already) create an actual Dynamic Data Vault / Dynamic Data Warehouse (DDW).

The Parts of a DDW

As a reminder, there are several moving parts to the DDW, and I’ve tried to summarize my previous articles by providing the following list / overview:

  • Adaptable Data Warehouse Structures
  • Automatic / Self-Healing Structures
  • Real-Time Changes to Source Structures resulting in real-time changes to Data Warehouse Structures
  • Automated generation / addition of Data Mart structures
  • Physical Attributes accompanied by Business Names and Metadata definitions – from source all the way to the data marts
  • Detection of new arriving attributes
  • Some form of “understanding” of where & how to automatically attach attributes to the DDW, and to the Data Mart Layers

The basics are these: the source system is dynamic, whether the physical source itself undergoes structural changes, or the source is based on Key-Value, EAV, JSON, or something else.  The point is, it doesn’t matter how the change originates.  What matters is that the change originates in real-time, in production, and is driven by the customer.

The next basic tenet is that the physical changes are accompanied by sufficient metadata: business meaning, definitions, and logical attribute names.  That metadata is then used during the “automation process” that makes the structures in the warehouse / data vault dynamic.  The automation process also determines where best to place the attribute(s) downstream in the virtual and/or physical marts, resulting in Dynamic Information Marts downstream of the DDW.
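To make that concrete, here is a minimal sketch (in Python) of what the detection and placement step might look like.  Everything in it is an assumption for illustration only: the satellite names, the sample payload, and the idea of keying placement off a “subject_area” tag in the incoming business metadata.

```python
# Minimal sketch of the "detection + placement" step described above.
# All names (known_columns, the sample payload, the subject_area tag) are
# hypothetical; a real implementation would read metadata from the source
# application and from the Data Vault model repository.

import json

# Attributes the Data Vault already knows about, keyed by satellite name.
known_columns = {
    "sat_customer_details": {"first_name", "last_name", "email"},
    "sat_customer_preferences": {"newsletter_opt_in"},
}

# Business metadata delivered alongside the structural change.
incoming_metadata = {
    "loyalty_tier": {"business_name": "Customer Loyalty Tier",
                     "subject_area": "customer_preferences"},
}

def detect_new_attributes(payload: dict) -> set:
    """Return attribute names present in the payload but unknown to the warehouse."""
    all_known = set().union(*known_columns.values())
    return set(payload) - all_known

def propose_satellite(attribute: str) -> str:
    """Use the accompanying business metadata to suggest a target Satellite."""
    meta = incoming_metadata.get(attribute, {})
    subject_area = meta.get("subject_area")
    candidate = f"sat_{subject_area}" if subject_area else None
    # Fall back to a review queue when the metadata is missing or ambiguous.
    return candidate if candidate in known_columns else "manual_review"

message = json.loads('{"first_name": "Ada", "email": "ada@example.com", '
                     '"loyalty_tier": "gold"}')
for attr in detect_new_attributes(message):
    print(attr, "->", propose_satellite(attr))
# loyalty_tier -> sat_customer_preferences
```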

Now, the fact that all of this happens in real-time, and in production, cannot be overlooked.  These structural changes must be applied in real-time, in production, to the DDW, to the integration and movement processes, and to the downstream data retrieval / data release processes.  Automation and generation become key processes in the self-healing notion.
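For illustration only (the DDL, naming conventions, and view shape below are assumptions, not DV2 standards), the generation step might boil down to emitting two statements per approved change: one to evolve the target Satellite, and one to re-create the virtual view that releases the new attribute downstream.

```python
# Hedged sketch of the "automation and generation" step: given an approved
# attribute placement, emit the DDL to evolve the satellite and refresh the
# downstream view in one pass.  Table, column, and view names are hypothetical.

def generate_satellite_alter(satellite: str, column: str, data_type: str) -> str:
    """DDL to add the newly detected attribute to its target satellite."""
    return f"ALTER TABLE {satellite} ADD COLUMN {column} {data_type};"

def regenerate_mart_view(view: str, satellite: str, columns: list) -> str:
    """Re-create the virtual information-mart view so the new attribute
    is released downstream without manual refactoring."""
    column_list = ",\n       ".join(columns)
    return (f"CREATE OR REPLACE VIEW {view} AS\n"
            f"SELECT hub.customer_key,\n       {column_list}\n"
            f"FROM hub_customer hub\n"
            f"JOIN {satellite} sat ON sat.hub_customer_hk = hub.hub_customer_hk;")

print(generate_satellite_alter("sat_customer_preferences", "loyalty_tier", "VARCHAR(20)"))
print(regenerate_mart_view("vw_dim_customer",
                           "sat_customer_preferences",
                           ["sat.newsletter_opt_in", "sat.loyalty_tier"]))
```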

Finally, there is understanding how to release this into the BI and analytical tooling layers, so that business users can access these changes immediately.  This is the crowning achievement of a great DDW.

Can Any Vendor do this today?

Not to my knowledge.  No single vendor is focused on this specific space, and no single vendor addresses structural change detection, much less a neural net or statistical decision-making capability for applying the structural changes all the way through to the BI / analytical layers.

But…  there is hope!  I cannot announce anything here yet, except to say that I am currently working with at least one, possibly two, vendors on building it at a client site.  More on that once I can release details, so stay tuned.

Can I Build this today?

Yes, to a degree.  There are several different ways to make a DDW happen – but for the most part, these methods do not include dynamic information marts!  Getting the new structures / new attributes to be picked up (even by Managed Self-Service BI tools) is a huge problem today (as I mentioned above).

So HOW can I build this today?

Well, first: you can hire me.  I’m not kidding.  You really need a guide to get you to the point of making this happen.  I’ve built about 10 of these over the past 12 years, and there are pitfalls, problems, and issues to solve, even at the business level, not just at the technical level.  I can give you the lessons learned and the best practices around building this kind of environment, so that your project stays on track in the most agile fashion.

Ok, that said: to get just the technical pieces started, you need:

  1. Customer-facing, real-time, production adaptability.  Meaning that the customer using the source application MAKES THE CHANGES / adds custom fields, etc…  You can argue that we in IT change those things, but I will disagree: changes made by IT do not constitute “in production, dynamic structure modifications”.  Yep, your source application (like Salesforce, for instance) needs to allow custom field definitions.  OR: you ingest XML, JSON, or web-service based content that is dynamic in nature, yet still semi-structured.
  2. Real-time delivery (or near real-time delivery) to a relational database system OR a Hadoop / Hive based system.  Real-time delivery using an ESB or message queue can / should pick up dynamically created fields and attributes from the source, add them to the payload, and deliver straight to the Data Vault.  Traditional FIXED STRUCTURE TOOLS (ETL / ELT, etc.) that focus on batch do NOT cut the mustard here.  I do not consider these tools (where IT must “make the change” to the structure, check it in, etc.) to be dynamic.
  3. Data Vault based Data Warehouse, either in a traditional RDBMS OR in Hive / Hadoop.  It turns out that in Hive and Hadoop you can use JSON, XML, and Document SATELLITE structures automatically.  They hook right into the Hubs and Links that are already there.  This is the most dynamic adaptation of structure there can be.  In an RDBMS, you will also need an automated routine (smart automation / neural net) that can answer the questions: is this a business key?  is this a relationship?  is this a Satellite attribute, and which Satellite does it belong in?
  4. An automation / code generation routine to generate structure changes and send e-mails, grading each change GREEN, YELLOW, or RED and responding appropriately (see the sketch after this list).  This automation routine *should* also generate views on top of the Data Vault, including Point-in-Time and Bridge tables as defined in the DV2 standards.
  5. Virtualized Information (Data) Marts.  The more virtual you are, the easier it is to refactor a view and dynamically add attributes – making it possible to release these new attributes through automation.  If you’re stuck in Physicalized Data Marts, then you might have additional technical issues (indexing, partitioning, etc…) to worry about.
  6. Metadata Capture at the Source – to include business metadata and business meaning.  It is absolutely critical to increase the confidence levels of the automated routines figuring out where and how to put the new attribute.  Without this, it becomes a MANUAL task (and hence, no longer dynamic).  That metadata needs to pass from the source, all the way through the DDW, in to the marts, and off to the BI tool on the end / the consumer.
  7. Dynamically adaptable BI tool through an API call or calls.  We *must* be able to dynamically add attributes to the appropriate places in the BI / analytics tools, completing the source-to-target lineage in an automated fashion.  This is not only the icing on the cake, it is the very definition of DDW.
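As promised in item 4, here is a hedged sketch of the grading routine: score the proposed change, grade it GREEN / YELLOW / RED, and respond accordingly.  The thresholds, the apply_ddl() and notify() stubs, and the sample change record are all placeholders for illustration, not part of any standard.

```python
# Hedged sketch of the grading routine from item 4: score the proposed change,
# grade it GREEN / YELLOW / RED, and respond accordingly.  The thresholds and
# the apply_ddl() / notify() stubs are placeholders, not a standard.

def grade_change(metadata_confidence: float) -> str:
    """Translate the automation routine's confidence into a traffic-light grade."""
    if metadata_confidence >= 0.9:
        return "GREEN"    # apply automatically, notify for the record
    if metadata_confidence >= 0.6:
        return "YELLOW"   # stage the DDL, require human approval
    return "RED"          # reject and route to the modeling team

def apply_ddl(ddl: str) -> None:
    print("executing:", ddl)                      # placeholder for real execution

def notify(recipients, subject, body) -> None:
    print("email to", recipients, "-", subject)   # placeholder for real e-mail

def respond(change: dict) -> None:
    """Grade a proposed structural change and take the corresponding action."""
    grade = grade_change(change["confidence"])
    if grade == "GREEN":
        apply_ddl(change["ddl"])
    notify(recipients=["dv-admins@example.com"],
           subject=f"{grade}: new attribute {change['attribute']}",
           body=change["ddl"])

respond({"attribute": "loyalty_tier",
         "confidence": 0.93,
         "ddl": "ALTER TABLE sat_customer_preferences ADD COLUMN loyalty_tier VARCHAR(20);"})
```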

So you see, there are quite a number of moving components, including a neural net in the middle (which introduces concepts I call Metadata Mining)…  not something you are used to, I’m sure.  Anyhow, I’m happy to help you get here; just drop me a line if you think this is valuable.

By the way, businesses that engage in this effort save tons of time, effort, and money across their entire EDW / BI projects.  More on this later…

(C) Dan Linstedt, 2016
