
Canonical Modeling, Ontologies, #datavault 2.0 and #noSQL

In this entry, I will explore (at a management level) what these components are, why they are important to you, and how they can help you with your NoSQL and BigData implementations.

What are these components?

Let’s explore each as we go.

Canonical Modeling:

A canonical model is a design pattern used to communicate between different data formats. A form of enterprise application integration, it is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems. A canonical model is any model that is canonical in nature, i.e. a model which is in the simplest form possible, based on a standard enterprise application integration (EAI) solution.  http://en.wikipedia.org/wiki/Canonical_model

How about one specific to Canonical Data Modeling?

The canonical data model is the definition of a standard organization view of a particular subject, plus the mapping back to each application view of this same subject. The standard organization view is built traditionally using simple yet useful structures.  http://www.information-management.com/issues/2007_50/10001733-1.html

Ontologies:

In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. It is thus a practical application of philosophical ontology, with a taxonomy.  http://en.wikipedia.org/wiki/Ontology_%28information_science%29

Data Vault 2.0 (Data Model):

The Data Vault 2.0 Model is a detail oriented, historical tracking and uniquely linked set of normalized tables that support one or more functional areas of business. It is a hybrid approach encompassing the best of breed between 3rd normal form (3NF) and star schema. The design is flexible, scalable, consistent, and adaptable to the needs of the enterprise.  The Data Vault 2.0 Data Model exchanges Sequences for Hash Keys (as primary keys) in order to increase parallelism and allow linking to External Data Stores (NoSQL included).
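To make the Hash Key idea a bit more concrete, here is a minimal sketch (in Python, assuming MD5 as the hash function and a made-up customer business key) of how a Data Vault 2.0 style hash key could be derived deterministically from a business key:

```python
import hashlib

def hash_key(*business_key_parts: str) -> str:
    """Derive a deterministic hash key from one or more business key parts.

    Parts are trimmed, upper-cased, and delimited before hashing, so any
    platform (RDBMS, Hadoop, NoSQL) computing the key for the same business
    key arrives at the same value -- no sequence generator lookups required.
    """
    normalized = "||".join(part.strip().upper() for part in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Hypothetical example: the same customer number always yields the same key,
# whether it is computed in staging, in the warehouse, or in a NoSQL store.
print(hash_key("CUST-00042"))  # a stable 32-character hex string
```

Because the key is computed rather than assigned by a sequence, loads can run in parallel and records on different platforms can be joined on the same value.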

Why are they important?

We build data warehouses, not information warehouses.  The reason is simple: data needs to be auditable, and therefore stored in raw format, organized by historical dates (temporal).  While raw data can be important for business decisions (i.e., discovering bad data – outliers), it needs to transition into information to be useful to the business.

What is today’s most common platform for ingesting raw data in a historical sense, without regard to format or model?  Enter NoSQL…  NoSQL platforms (relational, semi-relational, and non-relational) have wonderful ingestion capabilities (semi- and non-relational platforms can ingest schema-less data, meaning without modeling).  They can accept almost any data in any format, stamp it with a date/time of arrival for historization, and split the data across parallel computing devices to take advantage of parallelism and high scalability.
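As a rough illustration of that ingestion pattern, here is a minimal sketch (plain Python, appending JSON lines to a local file rather than writing to a real NoSQL engine) of accepting records of any shape as-is and stamping each one with an arrival date/time for historization:

```python
import json
from datetime import datetime, timezone

def ingest(raw_record: dict, target_path: str = "landing_zone.jsonl") -> None:
    """Accept a record of any shape, stamp its arrival time, and append it.

    No schema is enforced -- the payload is stored exactly as received, which
    preserves auditability but defers all modeling to a later step.
    """
    envelope = {
        "load_dts": datetime.now(timezone.utc).isoformat(),  # date/time of arrival
        "payload": raw_record,                                # untouched source data
    }
    with open(target_path, "a", encoding="utf-8") as out:
        out.write(json.dumps(envelope) + "\n")

# Two records with completely different shapes land side by side.
ingest({"customer": "CUST-00042", "order_total": 199.95})
ingest({"sensor_id": 7, "readings": [0.1, 0.4, 0.3]})
```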

BEFORE you can “make sense” of any data, you (or your team) need to organize it.  That means arranging it, categorizing it, and classifying it – generally known as a data model. The next step is applying business rules or context in order to alter, change, correlate, and condition the data, turning it into information. Which, of course, means your “target model” needs to be focused on the business.
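As a toy illustration of that step (the order record and the normalization rules here are invented), applying business rules to condition raw data and align it with a business-focused target model might look like this:

```python
# A raw record exactly as it arrived -- auditable, but not yet business-ready.
raw = {"customer": "cust-00042 ", "amount": "199,95", "currency": "EUR"}

def apply_business_rules(record: dict) -> dict:
    """Condition and contextualize a raw record against the target (business) model."""
    return {
        "customer_key": record["customer"].strip().upper(),       # standardized business key
        "amount_eur": float(record["amount"].replace(",", ".")),  # normalized numeric value
        "currency": record["currency"],
    }

print(apply_business_rules(raw))
# {'customer_key': 'CUST-00042', 'amount_eur': 199.95, 'currency': 'EUR'}
```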

Why is this important?

Because you CANNOT use, apply, or even understand data without associating it with, and tying it to, context or meaning.  In other words, without applying a model to it and additional processing rules. For example, can you tell me what this data means?  FF 2A 99 1E 55 42 6B AC?  Even if I were to put it into an ASCII code / string, it still wouldn’t make sense.   That’s because there is no “model”, no context to view this data with.

Now, if I were to say to you: this is the FIRST set of bytes leading to a JPEG (it’s not, but just humor me)… then you would say: Ahhh!  Now I understand, it’s an image of some sort.  If I were to say it’s the first set of bytes of a ZIP file, you would say: AHH!  Ok, it’s a compressed binary file of some sort.  All of a sudden, when I apply a model, you have some level of context for understanding the data set.
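Here is a small sketch of that point (the byte values are just the ones from the example above, and the signature table is deliberately tiny): the same bytes only gain meaning once you test them against an assumed model, such as known file signatures:

```python
data = bytes.fromhex("FF2A991E55426BAC")  # the mystery bytes from above

# A tiny "model": magic-number signatures for a couple of well-known formats.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG image",
    b"PK\x03\x04": "ZIP archive",
}

def classify(blob: bytes) -> str:
    """Interpret raw bytes by testing them against an assumed model (file signatures)."""
    for magic, meaning in SIGNATURES.items():
        if blob.startswith(magic):
            return meaning
    return "unknown -- no model matches, so the data carries no usable context"

print(classify(data))  # without a matching model, they remain just bytes
```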

Now that you know it’s an image (for instance), can you tell me what it is an image of?  No.  Now the real work begins, taking the data and interpreting it – changing it to a format (in this case) consistent with a visual display.  Let’s say it’s an image of a frog.  Great, I just gave it context.

Why is Canonical Modeling important?

Can you tell me what class of vertebrates the frog belongs to?  Hmmm…  Amphibians might be one possibility for the “parent” in the hierarchy.  But to be honest, there are different species and subspecies of frogs, and without additional visual identifying characteristics (think: data mining or statistical analysis here), further classification in a canonical model is next to impossible.

Ok – a Canonical Model can help us represent this class of vertebrates inside the context of our business.  That’s right: it’s a move from data to information by applying a canonical model.   What do you need to do to the “data” once you’ve identified it?  Yep, in the case of images or video, we generally TAG them.  Ahh, add metadata?  Yep.  Definitional context – or metadata tags – so that repeated searches will work, and we don’t have to “parse” the image again to find out what kind of frog it is.
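As a minimal sketch of that tagging step (the file name, tag names, and catalog structure are all hypothetical), the expensive classification is done once and its result is kept as metadata, so later searches never have to re-parse the image:

```python
# Hypothetical catalog: classification happens once, then lives on as metadata tags.
catalog = {}

def tag(object_id: str, **metadata: str) -> None:
    """Attach definitional context (metadata tags) to an already-identified object."""
    catalog.setdefault(object_id, {}).update(metadata)

tag("images/pond_0001.jpg", subject="frog", vertebrate_class="Amphibia", source="field camera")

# Later searches hit the metadata, not the raw image bytes.
frogs = [obj for obj, meta in catalog.items() if meta.get("subject") == "frog"]
print(frogs)  # ['images/pond_0001.jpg']
```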

Wait a minute, how do we identify this “thing”?

Most of the time, the next step is uniquely identifying the “file” or set of information with a meaningful business key. The business key “tag” – as long as it’s unique – will allow us to track the data as it passes through the business processes, associating value and enrichment with the data and information as the business performs “work”.
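Continuing the sketch (again with invented names and a made-up business key), a unique business key lets every business process enrich the same object without losing track of it:

```python
from datetime import datetime, timezone

tracking_log = []

def record_event(business_key: str, process: str, detail: str) -> None:
    """Log what a business process did to the object identified by its business key."""
    tracking_log.append({
        "business_key": business_key,   # the stable, unique "tag"
        "process": process,             # which business process touched it
        "detail": detail,               # value or enrichment added by that process
        "event_dts": datetime.now(timezone.utc).isoformat(),
    })

record_event("IMG-2015-0001", "classification", "tagged as frog / Amphibia")
record_event("IMG-2015-0001", "publication", "released to the species catalog")

# The full history of this one object, across business processes, is queryable by key.
history = [e for e in tracking_log if e["business_key"] == "IMG-2015-0001"]
```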

Why are Ontologies Important?

Ontologies provide us (in I.T.) with a method for formally identifying metadata.  Ok – in ENGLISH please…  We can specify a word-doc outline consisting of KEY business terms, their parents, peers, and relationships / associations – resulting in categorization and hierarchies of the data that we need to deal with or visualize for the business.
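Here is a minimal sketch of such an outline (the terms and relationships are invented for illustration), expressed as a small hierarchy of business terms with their parents and associations:

```python
# A toy ontology: each business term points to its parent and to related terms.
ontology = {
    "Animal":     {"parent": None,         "related": []},
    "Vertebrate": {"parent": "Animal",     "related": []},
    "Amphibian":  {"parent": "Vertebrate", "related": ["Habitat"]},
    "Frog":       {"parent": "Amphibian",  "related": ["Image", "Habitat"]},
    "Image":      {"parent": None,         "related": ["Frog"]},
    "Habitat":    {"parent": None,         "related": []},
}

def lineage(term: str) -> list:
    """Walk up the hierarchy to show where a term sits in the categorization."""
    chain = []
    while term is not None:
        chain.append(term)
        term = ontology[term]["parent"]
    return chain

print(lineage("Frog"))  # ['Frog', 'Amphibian', 'Vertebrate', 'Animal']
```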

An Ontology is a working map of the data landscape, providing context to the developers, designers, and business users on how the data fits into the business.  You may have missed this earlier point, so I will repeat myself:

You must classify data in order to turn it into information and make it usable by the business.  This means that, in order to get value from your NoSQL stores, you must take the time to build some sort of canonical model or ontology – so that you can build requirements (business process expectations) for turning data into information through business rule processing.  (Think analytics here.)

Why is Data Vault 2.0 Important?

Data Vault 2.0 (as expressed above) is a form of canonical modeling technique. The Data Vault methodology provides the I.T. rules and procedures for building a hierarchical model that makes sense to the business, follows business terms, and establishes a working ontology (think: a hierarchy of business terms).  It provides the formal specification for building a map to all that “unstructured” data sitting in your NoSQL platform, so that you and every other business user can begin extracting value from these platforms.
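As a very simplified sketch of how those pieces fit together (table and column names are purely illustrative, and the hash_key function is the same idea sketched earlier), a Data Vault 2.0 style hub carries the business key and its hash key, while a satellite-like document on the NoSQL side carries the unstructured content under the same hash key:

```python
import hashlib
from datetime import datetime, timezone

def hash_key(business_key: str) -> str:
    """Same deterministic idea as before: business key in, hash key out."""
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()

now = datetime.now(timezone.utc).isoformat()

# Relational side: a hub row holding the business key and its computed hash key.
hub_image = {
    "hub_image_hk": hash_key("IMG-2015-0001"),
    "image_bk": "IMG-2015-0001",
    "load_dts": now,
    "record_source": "field_camera_feed",
}

# NoSQL side: a satellite-like document describing the unstructured content,
# joined back to the hub purely through the shared hash key -- no sequences,
# no re-engineering of the relational warehouse.
sat_image_content = {
    "hub_image_hk": hub_image["hub_image_hk"],
    "load_dts": now,
    "tags": {"subject": "frog", "vertebrate_class": "Amphibia"},
    "raw_object_uri": "nosql://images/pond_0001.jpg",  # hypothetical storage location
}
```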

For business definition purposes: the Hash Keys are a technical solution that allows us in I.T. to tie relational (traditional RDBMS) systems to NoSQL systems easily, without re-engineering and without re-building the entire warehouse – which saves you time and money!

How they can help you with your NoSQL and BigData environments…

To summarize…  If you have, want, or need a NoSQL environment, then you need to investigate, use, and apply canonical / conceptual modeling techniques to turn data into information. Data Vault 2.0 Modeling is a formalized process that will help you get there quickly.  The Data Vault 2.0 Methodology provides the standards, best practices, and automation recommendations that allow you to seamlessly integrate NoSQL platforms with your existing relational DBMS investments.  The methodology also provides I.T. with the rules and procedures for building and managing the data-to-information processes.

It’s not enough to simply have a NoSQL environment and continue to throw data at it as it comes (while helpful for audit purposes, it has zero business value until you can classify the data inside).  Yes, data mining (aka: deep analytics) and statistical engines will assist you in the discovery process, but to truly take advantage of data as an asset on the books, you will need to classify it, categorize it, and assign it context.

Using the Data Vault 2.0 Modeling techniques, you can focus on the business context via metadata tagging and business keys – which fit easily into the hierarchical format (ontology model).

If you are interested in digging in to this further, you can attend my conference: http://WWDVC.com – where I will present much of this information (along with others, including Claudia Imhoff).  OR you can contact me directly (on this site), and we can arrange on-site training in Data Vault 2.0 Boot Camp & Private Certification.

There is also on-line training available at: http://LearnDataVault.com

(you can always leave me a comment)…

(C) Dan Linstedt, 2015 all rights reserved.  May not be duplicated, copied, re-posted in any form without express written consent from Dan Linstedt.

 


2 Responses to “Canonical Modeling, Ontologies, #datavault 2.0 and #noSQL”

  1. Siva Janamanchi 2015/04/03 at 12:34 pm #

    Hi Dan,
    Great to see your recent posts on DV2.0 and NoSQL. Though I am new to DV, from what I have been reading so far on DV2, I want to understand:

    a. Some examples or case studies to understand how DV2.0 has helped enterprises extend the EDW by integrating NoSQL and Hadoop data sets (DW2.0?). I mean, what aspects of NoSQL and Hadoop data sets (or result sets?) have been made part of DV models? This is very important for articulating the value proposition of DV2.0 with (such) Big Data Analytics platforms.

    b. I see some DV experts like Roelant Vos strongly advocating and exploring the case for virtualizing the DV and Data Mart layers. Many ‘Big Data Analytics’ vendors today have tools that let business users/BI tools seamlessly (?) access data from any of, and/or across, the RDBMS / Hadoop / NoSQL systems – like Teradata’s ‘Query Grid’ or Oracle’s ‘Big Data SQL’ – just with the help of SQL. This is like providing a ‘Data Virtualization’ layer (through views).

    Does this mean the DV2.0 model can be a virtualized EDW layer, and can it also be used to quickly build agile and resilient sandpit data models – for different user groups?

    I know you will be talking about this at WWDVC, but I cannot make it, and I am in urgent need of some direction and guidance on this.

    Please comment and provide details from real cases, if any.

    Thanks and Regards
    Siva Janamanchi
    Hyderabad

  2. Dan Linstedt 2015/04/17 at 2:49 pm #

    Hi Siva,

    Thank you for your interest in DV2. I am pleased to hear from you about these things. I will be publishing a bunch more about DV2 in July 2015, as my new book is released (which will be the standards reference & documentation to use for DV2).

    Regarding NoSQL & Hadoop – there is more information coming, but at the moment I am pressed for time and simply don’t have the time to publish answers to the questions you are asking. Again, sometime this summer I expect to be publishing, probably on my subscription layers (which will be new) here on the site.

    Regarding Roelant Vos, you will have to contact him directly. But in the discussions I’ve had with him, we both agree that virtualizing your EDW layer only seems to work when the data set is relatively small, OR when in-memory technology can be used for all the data in the warehouse or staging areas. The DV2 model can be virtualized; Roelant will be discussing this at WWDVC. I am sorry to hear you cannot make it. I will be recording *some* of the presentations (those that I am allowed to record), and selling the presentations on-line at http://KeyLDV.com.

    Again, real case studies are coming, but in order to gain a full understanding, I highly recommend you contact my partner: Sanjay Pande sanjay.pande@sanshi.ca – he now lives in Mysore, India, and can assist in teaching classes in that region.

    Hope this helps,
    Dan Linstedt
