it’s time for me to jump back into the theoretical aspects, and consider some of the deepest roots of the data vault model… that is: the natural world. i’ve long held the belief that the data vault is modeled (albeit a poor man’s model) after the neural images of what we believe our brains look like. i have a hobby, as many of you may know, of reading about and trying to understand the beauty and simplicity of the brain’s architecture. yet the architecture holds depth and complexity – or is it the function that holds these things? in case you’re wondering what i’m working on, this is a dive into the theoretical, the unknown (or my unknown, as the case may be).
i’ve written about it in the technical book, and i’ve shown images. from what i understand, the way we think is a combination of the form (the diagrammatic model/form of the neurons, dendrites and synapses) and the function of the brain – or, as the case may be, multiple functions of different parts of the brain. some parts of the brain are said to house memories; other parts, images of parents; other parts are said to deduce fight or flight. of course there is the major separation of what we know: short-term memory vs long-term memory.
i believe that when we think, or are cognitively aware, we are constantly taking in input from our senses (touch, sight, sound, taste, smell, etc.), grabbing specific images, words, and thoughts from the “memory banks” as it were, and then applying context to the memories – using these individual building blocks to form a consistent and cohesive thought, one at a time. they say the brain is a relatively “slow” computer, but how then can it get to emotions, considerations, and feelings – or even complete thoughts – so fast? no one really knows… but one thing is for certain: the brain, in all of its complexity, combines form, function, and content – in parallel.
i think that when we build systems in the data warehousing world, we are building primitive (very primitive) content stores. just like the brain, the content stores of a data warehouse hold data over time. the brain tries its best to remember, categorize, and index (if you will) information by time. when you think of your 12th birthday, or your 8th birthday, these both lie along the “birthday” index – or content/concept retrieval path. now, it’s a matter of “time” – as in, when did the event happen?
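to make the “content store indexed by time” idea a little more concrete, here is a minimal python sketch – all names and structures here are hypothetical illustrations of the analogy, not any real data vault implementation. memories are filed under a concept key (the retrieval path), and each entry is indexed by when it happened:

```python
from dataclasses import dataclass, field

@dataclass
class ContentStore:
    # concept -> list of (time, payload) entries, loosely mirroring
    # a hub (the concept key) plus its satellite rows (context over time)
    store: dict = field(default_factory=dict)

    def remember(self, concept: str, when: int, payload: str) -> None:
        # file a new memory under its concept retrieval path
        self.store.setdefault(concept, []).append((when, payload))

    def recall(self, concept: str) -> list:
        # walk one concept path and return its memories ordered by time
        return sorted(self.store.get(concept, []))

memories = ContentStore()
memories.remember("birthday", 12, "pool party, sunny")
memories.remember("birthday", 8, "chocolate cake, rainy day")

# both birthdays sit on the same "birthday" index; time orders them
print(memories.recall("birthday"))
```

the point of the sketch is just the retrieval shape: one concept key fans out to many time-stamped entries, the same way the data vault keys context to a hub over time.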
the next question might be: “was the weather warm or cold? did your cake taste good?”
of course, these are the questions that begin to lend context to the data or the information. but i digress…
i believe that if we can build a system that recombines form (data model), function (retrieval, indexing, parallelism), and context (learning, neural networks, patterns of association, probability scoring) in a self-contained component (like hardware), then we can actually make a machine that begins to “perceive” things about the world around it. i think the data model must resemble (in some way) the data vault, or to be more specific: a neural model, where the data is keyed off of important events, and where it has hundreds of connections (if not millions) to other information around it. the data vault carries links for these purposes. i believe that adding function to the mix is critical in order to make use of the data, know where it is, and run the retrievals and updates in 100% parallelism. and finally, of course, context: this must be a combination of historical data, plus the “learning pattern” that is taught based on a finite world, along with teaching the “learning system” what the model truly is and how to leverage it.
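here is a tiny, hypothetical sketch of the three building blocks being described – hubs keyed off important events, links wiring hubs together, and satellites attaching descriptive context over time. the class and field names are my own illustrations, not a reference data vault implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Hub:
    business_key: str      # the "important event" the data is keyed off

@dataclass(frozen=True)
class Link:
    hubs: tuple            # an association between two or more hubs

@dataclass
class Satellite:
    parent: object         # the hub or link this context describes
    load_time: datetime    # when this context was captured
    attributes: dict       # the descriptive payload

# one event can fan out to many associations, like a neuron's connections
event = Hub("birthday-party")
weather = Hub("weather")
cake = Hub("cake")

links = [Link((event, weather)), Link((event, cake))]
context = Satellite(event, datetime(2010, 6, 1), {"mood": "happy"})
```

the structural point is the fan-out: nothing stops a hub from participating in hundreds (or millions) of links, which is what makes the model look neural rather than hierarchical.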
the scientists say that when we learn something new, we form new neurons, dendrites, and synapses. when we connect or associate memories, the dendrites get thicker – the stronger and more vivid the memory, the thicker the dendrites. they say that alzheimer’s patients suffer from memory loss because these connections (these dendrites) deteriorate; the patients can no longer connect the proper memories to form context around their ideas. they also say that neurons die off when not used, or when memory loss occurs.
all of these “features” of the brain make me believe that we can build a prototype of a perception system containing the data vault model, and that the model (because of its nature) is best suited to dynamic alteration. in other words, when the system learns new things, receives new inputs, etc., it can create hubs, links, and satellites on the fly for storage. the “stronger the indicators” and the “higher the confidence,” the more links can be associated with that information. in reality, i believe the data vault model lends itself to the beginnings of a self-optimization pattern: the model itself can and should morph automatically, or optimize according to the world around it.
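to sketch the “self-optimizing” idea, here is a hypothetical toy in python: associations (links) are created on the fly as new inputs arrive, strengthened each time they recur (like dendrites thickening), and pruned when their confidence decays below a threshold (like unused neurons dying off). the thresholds and decay rule are arbitrary choices for illustration only:

```python
from collections import defaultdict

class AdaptiveVault:
    def __init__(self, prune_below: float = 1.0, decay: float = 0.5):
        # (hub_a, hub_b) -> confidence score ("dendrite thickness")
        self.link_strength = defaultdict(float)
        self.prune_below = prune_below
        self.decay = decay

    def associate(self, hub_a: str, hub_b: str) -> None:
        # create the link if it's new, thicken it if seen again
        key = tuple(sorted((hub_a, hub_b)))
        self.link_strength[key] += 1.0

    def age(self) -> None:
        # decay every link; drop the ones that fall below the threshold
        for key in list(self.link_strength):
            self.link_strength[key] -= self.decay
            if self.link_strength[key] < self.prune_below:
                del self.link_strength[key]

vault = AdaptiveVault()
vault.associate("birthday", "cake")
vault.associate("birthday", "cake")   # repeated input strengthens the link
vault.associate("birthday", "rain")   # seen only once, weak association
vault.age()

# after one aging pass, only the strong "cake" link survives
print(dict(vault.link_strength))
```

this is obviously a caricature of the real proposal, but it shows the shape of it: the model alters itself (links appear, strengthen, and disappear) in response to the stream of inputs rather than being fixed at design time.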
now, before you go jumping off the deep end, or quoting some obscure scientific reference to me, please be aware that this is just a thought experiment. so, in keeping with this tone, if you have contributions or arguments against this, please voice them by replying or commenting at the bottom of this post. also note that i am not a brain surgeon, nor a neurologist, nor a cognitive scientist. i’m just an interested and curious computer scientist who dabbles in the theoretical possibilities of arriving at a dynamic and self-sustaining system.
just imagine for a minute what it might be like to have a truly back-office, self-healing (not self-aware) but self-adapting historical data store or memory – one capable of “spotting new associations” for us, presenting them to us for review, and, through that review or human interaction, letting us teach and guide the system to do better the next time… what would that mean to you? is this even interesting?
curious to hear your theoretical thoughts….