Q&A: Additional tables impacting existing ETL jobs

Here we discuss the impact of “additional tables” on existing ETL jobs. The impact really is negligible, but in this case it’s worth reading the entry to understand why.

If you have comments, thoughts, or other experiences, I encourage you to add your COMMENT at the end of this posting. As you know, I’ve launched one-on-one coaching, and this is the kind of knowledge you get within the walls of a one-on-one coaching session. But for TODAY, I’m giving you the answers FREE to show you the kind of value you get when you sign up for my coaching sessions. Contact me today at coach@danlinstedt.com for more information.

How does the introduction of additional hub and link tables impact existing ETL jobs?

It doesn’t. All it does is “add” more ETL jobs to be executed, and get this: they’re FAST, restartable, auditable, flexible, and compliant. I’ve put 25 years of research and design into the ETL that I generate with the SaaS services, so by using the SaaS services you gain all of that knowledge. It gets better: by subscribing to my coaching you get access to these services for FREE! That’s right… try it out for yourself.
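
To make the “restartable and auditable” point concrete, here is a minimal sketch of what one of those added jobs looks like: a hub load that only inserts business keys it hasn’t seen yet, stamped with a load date and record source. The table and column names (stage_customer, hub_customer, customer_bk) are hypothetical, and SQLite stands in for whatever database you actually use; the pattern is the point, not the names.

```python
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of a single hub-load job (table/column names are hypothetical).
# Restartable: only business keys not already in the hub are inserted, so
# rerunning the job after a failure does no harm.
# Auditable: every row carries a load date and a record source.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stage_customer (customer_bk TEXT);
    CREATE TABLE hub_customer (
        customer_bk   TEXT PRIMARY KEY,
        load_date     TEXT NOT NULL,
        record_source TEXT NOT NULL
    );
    INSERT INTO stage_customer VALUES ('C-1001'), ('C-1002'), ('C-1001');
""")

def load_hub_customer(conn, record_source="CRM"):
    load_date = datetime.now(timezone.utc).isoformat()
    conn.execute("""
        INSERT INTO hub_customer (customer_bk, load_date, record_source)
        SELECT DISTINCT s.customer_bk, ?, ?
        FROM stage_customer s
        WHERE NOT EXISTS (
            SELECT 1 FROM hub_customer h WHERE h.customer_bk = s.customer_bk
        )
    """, (load_date, record_source))
    conn.commit()

load_hub_customer(conn)   # first run inserts the new keys
load_hub_customer(conn)   # rerunning is a no-op: the job is restartable
print(conn.execute("SELECT COUNT(*) FROM hub_customer").fetchone()[0])  # -> 2
```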

Take it from me: I’ve worked with data sets as large as 3 terabytes, with inflow sizes of 20 to 40 terabytes per batch. These are NOT small numbers, and when you deal with data this large you have ONE thing in mind: performance. There may be MORE ETL routines to deal with, but each one is SMALLER, EASIER, FASTER, and more nimble than anything you’ve experienced before. And by the way, you can stretch the limits of the machine’s hardware by running parallel processes. You can finally PROVE to your business users that you are using all the hardware they bought you.
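
As an illustration of the parallel-processes point, here is a minimal sketch of fanning independent hub and link loads out across worker processes. The job names and the run_job function are hypothetical placeholders; the idea is simply that loads within a layer have no dependencies on each other, so they can all run side by side and keep the hardware busy.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
import time

# Hypothetical stand-in for one small, independent Data Vault load job.
def run_job(job_name: str) -> str:
    time.sleep(0.1)              # pretend to move some data
    return f"{job_name}: ok"

# Hub and link loads within a layer do not depend on each other,
# so they can all be dispatched at once and use every core available.
JOBS = ["hub_customer", "hub_product", "hub_order",
        "link_order_customer", "link_order_product"]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = {pool.submit(run_job, name): name for name in JOBS}
        for future in as_completed(futures):
            print(future.result())
```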

Think about it: today’s typical ETL loads use, on average, only 40% to 60% of the hardware per batch cycle during peak load. Why? There are many reasons for that (all covered in the coaching section here), but the main reason is COMPLEXITY. Having the business rules upstream, with multiple sources, multiple lookups, and multiple targets, makes for very messy ETL later on, the kind that requires constant maintenance and upkeep. I’m so wound up about this topic that it will take me 10 to 20 more blog entries just to describe the causes and effects. BUT the benefits of using the Data Vault approach and its corresponding ETL are clear: each job touches one source and one target, with no business rules in the way, as the sketch below illustrates.
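
Here is a minimal sketch of that “one source, one target, no business rules” shape, using a satellite-style delta load: hash the descriptive columns and insert a row only when the hash differs from the latest one stored. Again, the table and column names are hypothetical and SQLite is only a stand-in; the point is how small and maintainable each job stays when no upstream rules, lookups, or extra targets are involved.

```python
import hashlib
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of a satellite-style delta load (names are hypothetical).
# One source, one target, no business rules: hash the descriptive columns
# and insert a new row only when the hash differs from the latest stored one.

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stage_customer (customer_bk TEXT, name TEXT, city TEXT);
    CREATE TABLE sat_customer (
        customer_bk TEXT, load_date TEXT, hash_diff TEXT, name TEXT, city TEXT
    );
    INSERT INTO stage_customer VALUES ('C-1001', 'Acme', 'Denver');
""")

def hash_diff(*cols) -> str:
    return hashlib.md5("|".join(cols).encode()).hexdigest()

def load_sat_customer(conn):
    load_date = datetime.now(timezone.utc).isoformat()
    rows = conn.execute("SELECT customer_bk, name, city FROM stage_customer").fetchall()
    for bk, name, city in rows:
        new_hash = hash_diff(name, city)
        latest = conn.execute(
            "SELECT hash_diff FROM sat_customer WHERE customer_bk = ? "
            "ORDER BY load_date DESC LIMIT 1", (bk,)).fetchone()
        if latest is None or latest[0] != new_hash:
            conn.execute("INSERT INTO sat_customer VALUES (?, ?, ?, ?, ?)",
                         (bk, load_date, new_hash, name, city))
    conn.commit()

load_sat_customer(conn)   # first run captures the row
load_sat_customer(conn)   # unchanged data -> nothing inserted
print(conn.execute("SELECT COUNT(*) FROM sat_customer").fetchone()[0])  # -> 1
```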

Again, I refer to my ETL example above: a “TYPICAL” federated data warehouse load cycle for 50 staging tables and 1 TB of data may take 3 to 4 hours. The SAME load cycle against a Data Vault will take 20 to 40 minutes, even though there are many more jobs running.
