Q: What’s the best way to Test a Data Vault?

A: Follow your nose…  (just kidding).  In this post I’ll begin to describe the procedure for testing Data Vaults.  I have a full methodology with recommendations, example test plans, rules, and standards to follow, and I offer this methodology during on-site training.

Elements of Testing

There are many different elements to creating successful testing procedures.  In this post I will outline just a few thoughts and ideas to get you going.  If you have other ideas, or questions about this process, please FEEL FREE to add them as comments on this entry.  I’d love to hear what you’re thinking!  Of course, I can’t do the topic justice in a single entry, so there will be a lot more information to come.  However, if you want to make the most of your time and expedite the testing process, then I highly recommend that you sign up for my one-on-one coaching sessions.  Elements of testing include:

  • Test Plan – generally a Word document; it refers to the test cases and outlines each one.
  • Test Cases – specific data sets, instructions on what to do, expected results, and tolerance levels (what would pass, what would fail).  See the sketch after this list for one way a single test case might be captured.
  • Testing Tools – generally a document defining each tool used to test and what its purpose in the testing is.
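
For teams that also want their test cases machine-readable, here is a minimal sketch in Python of how one test case from the test plan might be captured.  This is purely illustrative – the field names and example values are my own assumptions, not part of any formal standard:

  # Minimal, illustrative sketch of one test case record.
  # Field names and values are assumptions for this example, not a standard.
  from dataclasses import dataclass

  @dataclass
  class TestCase:
      case_id: str          # referenced by the test plan document
      description: str      # what the case is meant to prove
      input_data: str       # path to the (generated or real) data set
      instructions: str     # how to run the load / query under test
      expected_result: str  # what a correct outcome looks like
      tolerance: str        # what still counts as a pass vs. a fail

  example = TestCase(
      case_id="TC-017",
      description="Duplicate rows in staging must collapse to one Satellite row",
      input_data="testdata/tc017_duplicates.csv",
      instructions="Run the daily load against an empty Data Vault",
      expected_result="1 Hub row, 1 Satellite row, load date = run date",
      tolerance="Exact match required; any extra row is a fail",
  )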

For data used in testing I would recommend the following:

  • Real, raw data sets
  • Generated Test Data Sets

Using Raw Data

The Data Vault generally holds RAW data sets, so reconciling the Data Vault back to the source system is a recommended test.  This can mean reconciling to the flat files as they arrive, or reconciling to the source databases.  Sometimes there is no “system” to reconcile to because the data arrives over a web service.  In that particular case, I would instruct you to store the data in a staging area – just for testing purposes later!
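
As a minimal sketch of what such a reconciliation check could look like (the file path, table name, and database connection here are hypothetical placeholders), a simple row-count comparison between an arriving flat file and the staging table it was loaded into might be scripted like this:

  # Sketch: reconcile a staged feed back to its source flat file.
  # Paths, table names, and the database are hypothetical examples.
  import csv
  import sqlite3  # stand-in for whatever database driver you actually use

  def count_flat_file_rows(path: str) -> int:
      """Count data rows in the source flat file (header excluded)."""
      with open(path, newline="") as f:
          return sum(1 for _ in csv.reader(f)) - 1

  def count_staged_rows(conn, table: str, load_date: str) -> int:
      """Count rows landed in staging for one load date."""
      sql = f"SELECT COUNT(*) FROM {table} WHERE load_date = ?"
      return conn.execute(sql, (load_date,)).fetchone()[0]

  if __name__ == "__main__":
      conn = sqlite3.connect("warehouse.db")
      source = count_flat_file_rows("feeds/customer_20240101.csv")
      staged = count_staged_rows(conn, "stg_customer", "2024-01-01")
      if source != staged:
          print(f"RECONCILIATION FAILED: source={source}, staged={staged}")
      else:
          print(f"Reconciliation OK: {staged} rows")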

Using Generated Test Data

It is absolutely VITAL that you test the processes with predetermined, generated test data.  Reconciliation is one thing, but testing “domain alignment” and “normalization”, along with “missing keys” and “nulled attributes”, is another.  Generating test data that is “sparse” and incomplete, and in some cases fully complete and fully correct, allows testing of the relationships, the zero-key generation, the start and end dates of specific records (i.e. the delta-processing detection), elimination of duplicates, etc.  There are many, many reasons why you should test with GENERATED test data, but keep this in mind: each row of generated test data MUST mean something, MUST accomplish a specific goal, and MUST be documented 1-for-1 in a specific test case.

When you use generated test data, you can “foresee” what the outcome should be and adjust the expectations to match.  You can dictate in the test case what the expected result should be; therefore, a consistent test case can be run against new releases (with the same data set over and over again) and determine pass/fail states for your baseline Data Vault Data Warehouse.
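
As a small illustration of that idea (this is my own sketch, not RowGen output – the column names and cases are made up), a deterministic generator might deliberately emit sparse and broken rows, each tied 1-for-1 to a documented expectation:

  # Sketch: deterministic generated test rows, each mapped to an expectation.
  # Column names and cases are illustrative assumptions only.
  import csv

  # Each tuple: (business_key, attribute, expectation documented in the test case)
  rows = [
      ("CUST-001", "Alice",  "clean row - should load normally"),
      ("CUST-001", "Alice",  "exact duplicate - must be eliminated"),
      ("CUST-002", None,     "nulled attribute - Satellite should store NULL"),
      (None,       "Bob",    "missing key - should map to the zero key"),
      ("CUST-003", "Carol",  "new key on day 2 - should trigger delta detection"),
  ]

  with open("tc042_sparse_customers.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["customer_id", "customer_name"])
      for key, name, purpose in rows:
          # The 'purpose' text is documented 1-for-1 in the test case,
          # not written into the feed itself.
          writer.writerow([key if key is not None else "", name or ""])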

Also, with another set of generated “random” test data, you can actually test volume, joins, and overall database performance – to make sure the partitioning is right, the performance is acceptable, the indexing works, etc.  These volume tests are really good for testing load-cycle performance as well, and they let you project what you need to do in order to make the system run faster.
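
Here is a minimal sketch of that kind of volume feed, in plain Python purely to illustrate the idea (a dedicated generator like RowGen will be far faster, and the column names and row count are placeholders):

  # Sketch: generate a large random feed and time it, to later exercise
  # partitioning, indexing, and join performance. Names are illustrative.
  import csv
  import random
  import string
  import time

  def random_key() -> str:
      return "CUST-" + "".join(random.choices(string.digits, k=8))

  ROWS = 1_000_000  # scale up until the load times stop being acceptable

  start = time.perf_counter()
  with open("volume_customers.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["customer_id", "customer_name", "region"])
      for _ in range(ROWS):
          writer.writerow([
              random_key(),
              "".join(random.choices(string.ascii_uppercase, k=12)),
              random.choice(["NORTH", "SOUTH", "EAST", "WEST"]),
          ])
  print(f"Generated {ROWS} rows in {time.perf_counter() - start:.1f}s")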

I use a tool called “RowGen” from CoSort (IRI, Inc.) to make my tests accurate, fast, and easy: http://www.cosort.com/products/RowGen  There are hundreds of test data generation systems, but these are the points I really like about this product:

  1. It’s written in native C – IT’S FAST!!! – with assembler-level components!  (Many others I’ve seen are way too slow to generate a “heavy load”.)  I once used RowGen to generate 986MB of data inside 4 hours on a Dell PowerEdge 2650 with 2 CPUs, no hyperthreading, and 4 GB RAM, running Windows Server 2008 32-bit.
  2. It has a set of scripting commands that allow me to run lookups, pre-generate files, generate associations across files, etc.
  3. It can take advantage of multi-core CPU power…  It’s MULTI-THREADED.
  4. It’s a command-line interface, letting me script the test cases and data generation.
  5. I can set up RowGen to “use” some columns of real data from real source files.
  6. It recognizes patterns and uses a statistical algorithm to control the generated spread of each column for sparsity.
  7. It can run on a mainframe if necessary, or Linux, or Unix, or Windows…
  8. It’s CHEAP!!!  Compared to some other systems – it used to be a $1,500 entry point (check with them for current pricing).

There are many, many more features and reasons why I like RowGen, but if you do talk to them, just tell them I sent you.  *** NOTE: I am not a reseller, I do not receive any fees for talking about the product, I do not receive any monetary compensation here, and I DO NOT ENDORSE ANY PRODUCT I HAVEN’T USED or don’t like. ***  In this case, I like the product, I’ve used the product, and I endorse the product.

Balancing to Real-Data

There are some tests that should balance the raw data set to the source system.  These balancing routines must themselves be tested very carefully to ensure they don’t produce false positives or false negatives.  Once the balancing routines are in place, testing can be fairly easy.  Some suggestions: run sum(x), count(y), average(z), min(a), max(b), count(distinct c), count(null(d)), etc. across the Satellites FOR A SPECIFIC LOAD DATE, then tie these images back to that SAME load date in the staging area.  If they don’t match – you’ve got trouble.  REMEMBER: staging areas MAY contain duplicate rows, where Data Vaults DO NOT.  So, before balancing against a staging area, remove the duplicates!!!
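
As a hedged sketch of what one such balancing check might look like (the table names, column names, and database here are invented for the example; your Satellites and staging tables will differ), the same profile query can be run against the Satellite and against a de-duplicated view of staging for one load date, and the results compared:

  # Sketch: balance a Satellite against staging for one load date.
  # Table, column, and database names are illustrative assumptions.
  import sqlite3

  PROFILE_SQL = """
      SELECT COUNT(*)                AS row_cnt,
             COUNT(DISTINCT cust_id) AS key_cnt,
             SUM(balance)            AS sum_balance,
             MIN(balance)            AS min_balance,
             MAX(balance)            AS max_balance,
             SUM(CASE WHEN balance IS NULL THEN 1 ELSE 0 END) AS null_balance
      FROM {table}
      WHERE load_date = ?
  """

  def profile(conn, table: str, load_date: str):
      return conn.execute(PROFILE_SQL.format(table=table), (load_date,)).fetchone()

  conn = sqlite3.connect("warehouse.db")
  load_date = "2024-01-01"
  sat = profile(conn, "sat_customer_balance", load_date)
  # Staging may contain duplicates, so profile it with duplicates removed first.
  stg = profile(conn, "(SELECT DISTINCT * FROM stg_customer)", load_date)
  if sat != stg:
      print(f"OUT OF BALANCE for {load_date}: satellite={sat}, staging={stg}")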

THEN: develop similar balancing routines that match the staging area (duplicates and all) to the FLAT FILES (if the source is a flat file) or to the source database (if the source is a database), and compare the outputs.

These types of routines should be coded so they can be run once a week, or daily (depending on load frequency and performance).  They should send emails when things fall out of balance.

What about TESTING to see if all the data has arrived?

NOW HEAR THIS!!!  Source feeds tend to a) grow on a regular basis, and b) fluctuate with some level of consistency (within a given high/low average).  These behaviors need to be accounted for in “auto-testing” routines.  In other words, if a feed “grows” by an average of 100 rows a day, then you should set a threshold that says: if I receive anything less than 75 new rows in a day, send an email, fail the load, etc. – something may be wrong.  Likewise, you should set another rule: any time I receive more than 125 new rows in a day, do the same.

Thresholding for automated “sanity checks” is yet another testing procedure that should be setup.
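
Here is a minimal sketch of such a sanity check, using the 75/125 band from the example above (the feed name, the notification addresses, and the local mail relay are of course just placeholder assumptions):

  # Sketch: row-count threshold ("sanity") check on an arriving feed.
  # The 75-125 rows/day band follows the example above; feed name,
  # addresses, and the local SMTP relay are placeholders.
  import smtplib
  from email.message import EmailMessage

  EXPECTED_MIN, EXPECTED_MAX = 75, 125  # tolerated daily growth band

  def check_feed_growth(feed_name: str, new_rows: int) -> bool:
      """Return True if today's row growth is inside the tolerated band."""
      if EXPECTED_MIN <= new_rows <= EXPECTED_MAX:
          return True
      msg = EmailMessage()
      msg["Subject"] = f"LOAD WARNING: {feed_name} received {new_rows} new rows"
      msg["From"] = "etl@example.com"
      msg["To"] = "dw-team@example.com"
      msg.set_content(
          f"Expected between {EXPECTED_MIN} and {EXPECTED_MAX} new rows, "
          f"got {new_rows}. Failing the load for manual review."
      )
      with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
          smtp.send_message(msg)
      return False

  # Example: fail the load if the customer feed falls outside the band.
  if not check_feed_growth("customer_feed", 42):
      raise SystemExit("Load failed sanity check")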

Now, regarding using real data as test data – once a test data set has been selected – it must be backed up!!  It must be used over and over again to test for pass/fail.  It also MUST be documented in a test case that this is REAL DATA that is not to be shared.

REMEMBER: TESTING WITH AN EMPTY DATA VAULT, FOLLOWED BY A SINGLE DAY’S LOAD, FOLLOWED AGAIN BY A DAY-2 LOAD IS A MUST.  A Day-2 load will allow you to test DELTAS and duplicate removal from the staging area.

Interested in more information?  Want test case examples?  Contact me for training.

Cheers,
Dan L
