#datavault, #bigdata, #hadoop making progress

I am still working out the details, but things are showing serious signs of progress.  This is a short entry just to update you on where I am…

I’ve got one customer going “global” (a seriously global EDW at a Fortune 500 firm) with Netezza TwinFin and Data Vault v1.0.

I’ve got another customer looking at using Pentaho Big Data solutions with the Data Vault 2.0 specifications, with HBase or Cloudera underneath.  I outlined a solution for them recently: some of the data will live in the unstructured world of HBase, and some will live in the Data Vault in a relational database engine.  Using the DV2.0 modeling specifications, they will be able to hook the data sets together at “query time”.
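What makes the “query time” hookup possible is a deterministic key that both stores can compute independently. Here’s a minimal sketch, assuming DV2.0-style hash keys derived from the business key; the table and column names below are hypothetical illustrations, not the customer’s actual design:

```python
import hashlib

def hash_key(*business_key_parts: str) -> str:
    """Deterministic hub key: the same business key produces the
    same hash on every platform (RDBMS, HBase, Hadoop), so no
    shared sequence generator is needed to line the stores up."""
    # Normalize before hashing: trim, uppercase, delimit parts.
    normalized = ";".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest().upper()

# Hypothetical usage: the relational side stores this value in
# HUB_CUSTOMER.CUSTOMER_HK, and HBase uses it as the row key.
print(hash_key("CUST-10042"))
```

Because the key is computed rather than generated, neither platform has to look the other up while loading; the two data sets only meet when a query spans both.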

I’ve got another customer who will be using a single set of ETL templates and parameter-driven ETL in Informatica v9.x to load a new DV2.0 project that is just kicking off.
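The Informatica mappings themselves aren’t shown here, but the parameter-driven template idea is easy to sketch in a tool-agnostic way: one generic load pattern, with the table and column names supplied as parameters. A rough sketch in Python, with all names hypothetical:

```python
from string import Template

# One generic hub-load statement: the SQL pattern never changes,
# only the parameters do, so a single template serves every hub.
HUB_LOAD = Template("""\
INSERT INTO $hub ($hk, $bk, load_dts, rec_src)
SELECT s.$hk, s.$bk, s.load_dts, s.rec_src
FROM $staging s
LEFT JOIN $hub h ON h.$hk = s.$hk
WHERE h.$hk IS NULL
""")

def hub_load_sql(hub: str, hk: str, bk: str, staging: str) -> str:
    """Render the template for one hub; in Informatica these would
    be mapping parameters resolved from a parameter file."""
    return HUB_LOAD.substitute(hub=hub, hk=hk, bk=bk, staging=staging)

# Hypothetical parameter set for a customer hub:
print(hub_load_sql("HUB_CUSTOMER", "CUSTOMER_HK", "CUSTOMER_BK", "STG_CUSTOMER"))
```

One template plus N parameter files beats N hand-built mappings: fix a bug once, and every hub load inherits the fix.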

I’ve got a few other customers waiting in the wings who are looking at the impact of Hadoop and Big Data systems, and at applying Data Vault v2.0 modeling and implementation best practices.

The only problem I have right now? My time is seriously booked! (Which is good.) But it delays the launch of the DV2.0 specifications until Q1 of 2013.

Are you interested in any of this information?  Do you have a project where you could use these types of solutions?  Let me know by replying (either by email or through my Contact Us form here) or publicly below.

Stay tuned…

Dan Linstedt
