#bigdata, logfiles and Business Keys? #nosql, #datavault, #asset

Welcome back to those of you following my series of posts – thank you.  For those who haven’t read my posts yet (yes, that’s you), why not?  (Just kidding.)  Anyhow, this is the third in a series of posts on Big Data, Data Lakes, and turning data into an asset for your organization.  In this entry I will discuss business keys and Data Vault, with a focus on machine-generated log files (which have no good natural business key – or so you say).

Machine Generated Log Files Have No Keys….

Well – I beg to differ. 

The keys aren’t necessarily business driven; they are machine driven.  But somewhere along the line a business user or developer had to decide what to write to the log, how the log would be structured, and what might be “unique” about the log entries.

Ok, so where’s the BEEF?

The BEEF, as it were, is in the details.  Each log file contains structured content.  Some columns (like URL and referrer, etc…) are what we call multi-structured, meaning each “entry” can have multiple levels of defined details.  In other words, they can be described by discrete or finite mathematics (in fact, they are: URLs are defined by an RFC – a standard)… ohhh, there I go again with that stupid STANDARDS word…  Get OVER IT, people – the world is full of standards, and standards are one of the only ways we can provide stepping stones for future innovation and optimization.
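To make “multi-structured” concrete, here is a minimal sketch (Python standard library only, with a made-up URL) showing how a single URL column decomposes into the discrete parts the RFC defines:

```python
# A single URL column broken into its RFC-defined parts.
# The sample URL is made up purely for illustration.
from urllib.parse import urlparse, parse_qs

url = "https://shop.example.com/products/widgets?color=blue&size=m#reviews"
parts = urlparse(url)

print(parts.scheme)            # https
print(parts.netloc)            # shop.example.com
print(parts.path)              # /products/widgets
print(parse_qs(parts.query))   # {'color': ['blue'], 'size': ['m']}
print(parts.fragment)          # reviews
```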

Sorry, off on my soap box again.  Let’s get back to the topic at hand:

HOW do I KEY a Log File record?

Well – good question.  Ultimately, you would need to hash the entire row (using SHA-1 or MD5, for example) to come up with a unique key.  But this isn’t good enough once we introduce two web-log capturing servers.  If they capture the SAME EXACT URL at the SAME EXACT TIME, and record the SAME EXACT RESPONSE, then they will generate a key collision.
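As a minimal sketch of what “hash the entire row” looks like (Python, with a made-up tab-delimited log line – not any particular log format), consider:

```python
# Derive a row key by hashing the entire raw log line.
# SHA-1 is used for illustration; the log line itself is made up.
import hashlib

log_line = "2013-05-01T10:15:32Z\tGET\t/index.html\t200\thttp://referrer.example.com"

row_key = hashlib.sha1(log_line.encode("utf-8")).hexdigest()
print(row_key)  # 40-character hex digest used as the "unique" key

# The catch: two servers writing the SAME line at the SAME time
# produce the SAME digest -- a key collision across servers.
```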

But wait a minute…  isn’t that what we truly want?  Hmmm – no duplicates when we store these logs in Hadoop, for example, does sound tempting.  But no – at the end of the day we want to record the fact that different servers each captured their own log records.

Ohhh – I thought the SERVER IP was a part of the Log Record?  It is…  BUT if you have a set of servers in a cluster, all attached to the same external IP and logging to different shared directories, then you can easily end up right back in this situation of duplicate hashes.

Ok – enough on how to “solve” log file recording problems – I’ve got more, but it gets too technical.

Let’s get back to business value here – the nature of our discussion.

You must accept these fundamental tenets to make sense of “machine generated” data, or to treat “machine generated” data as an asset on the books.  (No, I never worked at Google; I actually BUILT log file warehouses back in 1997, before it became a FAD or a cool thing to do.)

  1. Log file “business keys” are, in fact, technical keys – and being technical keys, they are machine generated
  2. Log file “technical keys” are (or must be) multi-part, or composite, keys made up of different pieces of the record
  3. To be truly unique, the complete ROW must be hashed – but whether you need that depends on the purpose you are trying to serve

The questions the business needs to ask:

  1. WHEN is a single Log File entry important, or when does it have value to the business?  IF this is ever the case, then EACH ROW must be stored with a unique machine-generated hash
  2. When is the value of the aggregate more important?  If this is the case, then the business needs to construct additional business keys – for instance, based on SESSION ID, or on COOKIE + IP + BROWSER + DATE/TIME (see the sketch after this list)
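Here is a minimal sketch of that second case – an aggregate-level business key built from COOKIE + IP + BROWSER + DATE/TIME.  The field names and values are purely illustrative:

```python
# Build a composite business key from parts the business chose,
# then hash it so the key has a fixed length regardless of the values.
import hashlib

record = {
    "cookie":   "a1b2c3d4",
    "ip":       "203.0.113.7",
    "browser":  "Mozilla/5.0",
    "datetime": "2013-05-01T10:15:32Z",
}

composite = "||".join(record[k] for k in ("cookie", "ip", "browser", "datetime"))
business_key = hashlib.sha1(composite.encode("utf-8")).hexdigest()
print(business_key)
```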

In Conclusion:

90% of the time, businesses need to extract value from web logs or machine-generated data by examining aggregates.  They change the grain to group the interesting data together, and it is at that point that they must assign a machine-generated business key that makes sense to the business for tracking.
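A minimal sketch of that grain change – rolling raw log rows up to one row per cookie (standing in for a session), with made-up values:

```python
# Change the grain: from one row per log line to one row per cookie.
from collections import defaultdict

raw_rows = [
    {"cookie": "a1b2", "url": "/index.html"},
    {"cookie": "a1b2", "url": "/products"},
    {"cookie": "f9e8", "url": "/index.html"},
]

page_views = defaultdict(int)
for row in raw_rows:
    page_views[row["cookie"]] += 1      # new grain: one row per cookie

for cookie, views in page_views.items():
    # Each aggregate row is what gets a machine-generated business key
    # (e.g. a hash of the cookie + date) downstream.
    print(cookie, views)
```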

WHEN this is not the case, then unique machine-generated business keys must be assigned to the machine-generated data (the entire row) for unique identification within the set.  Sometimes it is necessary to add a computed column (such as server name) to enrich the machine-generated data so that the machine-generated business key can remain unique.
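And a minimal sketch of that enrichment idea: adding a computed server-name column to the key parts before hashing, so identical lines from different servers no longer collide.  The server names and log line are made up:

```python
# Enrich the row with a computed column (server name) before hashing,
# so identical lines from different servers produce different keys.
import hashlib

log_line = "2013-05-01T10:15:32Z\tGET\t/index.html\t200"

def row_key(line: str, server_name: str) -> str:
    # Prepend the server name as an extra key part before hashing.
    return hashlib.sha1(f"{server_name}||{line}".encode("utf-8")).hexdigest()

print(row_key(log_line, "web-node-01"))  # differs from...
print(row_key(log_line, "web-node-02"))  # ...the same line on another node
```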

At the end of the day, attaching business value to machine-generated data requires machine-generated business keys, because the flow rates and volumes are prohibitive for any human to attach a natural-world smart key.

When these situations arise, the HIGHER the level of aggregation (the results), the MORE VALUE the aggregate results tend to have for the business – and at the end of the day, it will be up to the HUMAN to attach a meaningful natural-world business key to the high-level aggregate results.  (In other words: picking the gold out of the sludge at the bottom of the lake, then tagging it with where you found it, how you found it, and so on.)

Hope this helps,
Dan Linstedt
DanLinstedt@gmail.com
PS: There are no impossibilities, only lack of foresight and vision