Mathematics of Joins, Denormalization, Rows Per Block and IO

I’ve been asked over and over again about joins and the mathematics behind the Data Vault.  This is a brief look into the subject.  There are a number of arguments out there, and hundreds of articles that discuss the principles of normalization vs denormalization.  Over the next few entries I will bring some of these arguments to bear on the Data Vault model architecture.  However, there is a common belief that denormalization (the removal of joins) will gain you performance.  This is true, but only up to a certain point.  The purpose of this entry is to explain (with finality) the nature of denormalization and its relationship to the number of I/O’s that are executed to insert and select data in a common relational database system.

First the Assumptions

I hereby assume the following to hold true for everything stated in this entry:

  1. The data is RAW data
  2. The columns are not compressed via any mechanism
  3. The blocks and rows are not compressed via any mechanism
  4. I am not discussing appliances, columnar database systems, or NoSQL alternatives.  I am referring to traditional RDBMS systems with standard block sizing and I/O calls, because these appear to be (today) over 85% of the market share for the enterprise data warehousing world.

Defining Denormalization

Denormalization is the process by which columns are “moved” from 2 or more tables and combined into a single table.  The idea is to remove the joins between the tables and increase performance.  From a data modeling perspective, denormalization is the act of moving to a lower normal form (ie: from 3rd normal form down to 2nd normal form, then down to 1st normal form).  It causes repetition (copies of data) in the columns.  In other words, if you start with two table structures (ie: name and address), you would combine them to form a single table containing both name and address – that would be an example of denormalization.

That said, what happens when you denormalize data?

  1. The row width of the denormalized table becomes larger.  In other words, it is the sum of the row widths of the two or more normalized tables that have been combined.
  2. Table scans become more frequent.
  3. RDBMS engine optimizers rely more heavily on read-ahead buffering, and potentially on horizontal (range based) partitioning for partition elimination techniques.
  4. The entire row of data must be fetched into RAM regardless of how many columns you want or need.  In other words: ALL COLUMNS OF ALL ROWS must be “read in” to meet the demands of every query accessing any data in this table.
  5. Over-indexing becomes more important.  In order to find the right row faster and eliminate any possible block scans, the wider the row gets, the more indexes are needed to avoid full table scan operations.
  6. Indexes on extremely wide tables can take as much as 3x the amount of space (to keep performance acceptable) when compared to the raw data size of the table.
  7. Data layout across MPP nodes consistently “keeps the whole row together”, on the assumption that your queries will be accessing all of that data together on a consistent basis.

Defining Normalization

Normalization is the process by which columns are “separated” from a single table into multiple smaller tables.  Another name for this activity is vertical partitioning (splitting data by column).  From a data modeling perspective, normalization is the act of moving to a higher normal form (ie: from 3rd normal form to 4th, then 5th, then 6th).  An example of 6th normal form would be a column-based appliance (which is outside the scope of this discussion, so no further references will be made).  If you start with a single table (customer details), then split it into name, address, and personal information tables – you would effectively be normalizing the data set.

That said, what happens when you normalize the data?

  1. The row width of EACH table is respectively smaller (as a percentage of the original).  The actual width depends on the row header/pointers plus the size of the columns in the row.
  2. Table scans are less frequent, and are replaced by joins.
  3. Single indexes become more important – especially the indexes used for joining.
  4. Clustering data by the primary key (for join purposes) can really help speed up the performance of the joins.
  5. Data layout (on MPP) forces each of the tables to be spread evenly across the nodes.
  6. RDBMS engine optimizers rely heavily on cost optimization, table and column elimination techniques, and index join algorithms.
  7. RDBMS optimizers begin to rely heavily on parallelism, proper data distribution, and partitioning of the data sets.
  8. Queries determine WHICH tables and WHAT data are needed to answer the question; NOT ALL ROWS OF ALL BLOCKS OF ALL TABLES are necessary to meet the demands of every query.

What about the mathematics?

You have been asking, so I will do my best to answer.  If someone with a PhD or a Master’s in physics or math would care to help me out, that would be most welcome; I have neither.  Let’s take a look at “simple man’s math” – because I am a simple man.

ASSUMPTION: 64KB DATA BLOCK SIZES (I am not talking about file allocation block size here, I’m referring to DATABASE block size allocation).  Let’s also assume the blocks are 100% full (0% free, 100% fill factor), 0% overhead, zero updates, and 100% inserts – our tests will be all about the performance of SQL SELECT statements; “select * from TABLE” is the query, just to be fair.  Assumption: ZERO FRAGMENTATION in the database (as we all know, fragmentation affects performance adversely regardless of data modeling technique).

Row sizing:

  • 1k row size = 64 rows per block
  • 2k row size = 32 rows per block
  • 4k row size = 16 rows per block
  • 8k row size = 8 rows per block
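
As a quick sanity check on these numbers, here is a minimal sketch of the arithmetic in Python.  It is purely illustrative – the 64KB block size, 100% fill factor, and row widths are the assumptions stated above, nothing vendor-specific:

    # Rows per block = block size / row width, ignoring block headers, free space,
    # and compression (per the assumptions above).
    BLOCK_SIZE = 64 * 1024  # 64KB database block, assumed 100% full

    for row_size_kb in (1, 2, 4, 8):
        rows_per_block = BLOCK_SIZE // (row_size_kb * 1024)
        print(f"{row_size_kb}k row size = {rows_per_block} rows per block")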

Now, for a SINGLE TABLE, executing the query above to select 100M rows, let’s assume one more thing: each I/O (block read) takes 1 millisecond (0.001 seconds).  (A small sketch of this arithmetic follows the list below.)

  1. 1k / 1: 100M / 64 rows per block = 1,562,500 I/O’s to read all the data off disk * 0.001 seconds = 26.041 minutes to read all rows
  2. 2k / 1: 100M / 32 rows per block = 3,125,000 I/O’s to read all the data off disk * 0.001 seconds = 52.083 minutes to read all rows
  3. 4k / 1: 100M / 16 rows per block = 6,250,000 I/O’s to read all the data off disk * 0.001 seconds = 104.16 minutes to read all rows
  4. 8k / 1: 100M / 8 rows per block = 12,500,000 I/O’s to read all the data off disk * 0.001 seconds = 208.33 minutes to read all rows
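
Here is the same arithmetic as a small sketch, so you can re-run it with your own block size, row count, or I/O latency.  The 1 millisecond per block read is the assumption above, not a measured figure, and the printed minutes are rounded, so the last decimal may differ slightly from the truncated figures in the list:

    # Sequential (non-parallel) full table scan: every block must be read once.
    BLOCK_SIZE = 64 * 1024      # bytes per database block (assumed)
    ROWS = 100_000_000          # 100M rows to select
    IO_LATENCY_SEC = 0.001      # 1 millisecond per block read (assumed)

    for row_size_kb in (1, 2, 4, 8):
        rows_per_block = BLOCK_SIZE // (row_size_kb * 1024)
        io_count = ROWS // rows_per_block          # block reads to scan the table
        minutes = io_count * IO_LATENCY_SEC / 60   # sequential scan time
        print(f"{row_size_kb}k row: {io_count:,} I/Os = {minutes:.3f} minutes")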

Now, REMEMBER THIS: the TIME is sequential time.  Why?  Because at this point I’m assuming ZERO parallelism and a 100% table scan to fetch ALL data for ALL rows off DISK.  I’m also assuming a CLEAN RAM buffer (no buffering on the disk device, and no buffering in the database RAM area).  This is strictly DISK performance.

Let’s take a look at the table normalized (we will assume a parallel factor of 4 – ie: split each table up into 4 equal parts, so as to give EACH sized row an equal parallelism count):

  • 1k table: normalized to 4 * 256-byte tables in 64KB blocks = 256 rows per block
  • 2k table: normalized to 4 * 512-byte tables in 64KB blocks = 128 rows per block
  • 4k table: normalized to 4 * 1k tables in 64KB blocks = 64 rows per block
  • 8k table: normalized to 4 * 2k tables in 64KB blocks = 32 rows per block

Now, let’s check back in with the MATH for each single normalized table (the per-table calculations we don’t have yet – a sketch follows after the list).  Remember the block size is FIXED at 64KB for this example.

  • 1k / 4: 100M rows / 256 rows per block = 390,625 I/O’s to read ALL the data off disk * 0.001 seconds = 6.510 minutes to read all the rows
  • 2k / 4: 100M rows / 128 rows per block = 781,250 I/O’s to read ALL the data off disk * 0.001 seconds = 13.020 minutes to read all the rows
  • 4k / 4: 100M rows / 64 rows per block = 1,562,500 I/O’s to read ALL the data off disk * 0.001 seconds = 26.041 minutes to read all the rows
  • 8k / 4: 100M rows / 32 rows per block = 3,125,000 I/O’s to read all the data off disk * 0.001 seconds = 52.083 minutes to read all rows

Note that the 4k and 8k cases, once split, work out to the same per-table numbers as the original 1k and 2k table scans listed above.
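
The same sketch, extended to the 4-way vertical split.  Again, this is illustrative only – splitting each wide row into four equal-width pieces is the assumption stated above, and real tables rarely divide that evenly:

    # Normalize each wide row into 4 equal-width tables (vertical partitioning),
    # then compute the per-table scan cost before any parallelism is applied.
    BLOCK_SIZE = 64 * 1024
    ROWS = 100_000_000
    IO_LATENCY_SEC = 0.001
    SPLIT = 4  # assumed number of equal-width pieces (and the parallel factor)

    for row_size_kb in (1, 2, 4, 8):
        piece_bytes = (row_size_kb * 1024) // SPLIT      # width of one normalized piece
        rows_per_block = BLOCK_SIZE // piece_bytes
        io_per_table = ROWS // rows_per_block            # block reads to scan ONE piece
        minutes = io_per_table * IO_LATENCY_SEC / 60
        print(f"{row_size_kb}k original: {rows_per_block} rows/block per piece, "
              f"{io_per_table:,} I/Os, {minutes:.3f} minutes per table")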

NOW: the Parallelism part of the test.  Assumptions first:

  1. I assume that the RDBMS engine is capable of running parallel query
  2. I assume that the RDBMS engine has been properly tuned to run parallel query
  3. I assume that there are no I/O or other bottlenecks that stand in the way of the optimizer choosing parallel query (in other words, I assume that the optimizer will ALWAYS run parallel query)
  4. I assume that there are 4 parallel query threads possible, and that all 4 can run concurrently without “blocking” – in other words, the RDBMS has been properly configured, and in fact now chooses to run 4 parallel query threads for this test.
  5. I assume there is SOME added overhead for merging results together (to be assigned based on case)

What about the performance?

Ok, so here we are.  My claim is that normalization (within reason) of a single table can actually a) scale better, and b) perform better with joins, when compared to a single table scan of a wide table structure.  My answers are based on run-times and the mathematics of parallelism & partitioning (the division of BOTH data and processes).  It comes down to this: you can test the pieces above in your own RDBMS system, and I’d be curious to know what you find for outcomes.  Perhaps the RDBMS vendors are “missing something” in the parallel computing field that is holding back the performance of their joins…  Whatever the case is, please comment on this post and add your own performance test cases so we can all see what different vendors actually DO (speed wise).

Based on the case above & all the assumptions:

  • 1k row size table RESULTS: roughly 6.5 minutes to scan each of the 4 smaller tables, run as 4 parallel processes (+ some merge execution overhead – let’s assume we add 10 minutes to the process) = roughly 16.5 minutes to read all 100M rows (see the sketch after this list).  And if I’m not mistaken, 16.5 minutes is STILL faster than a single table scan of 26 minutes for 100M rows.  Now, add a bit more overhead (+5 minutes for the increased parallel load): that still adds up to roughly 21.5 minutes, which is still less than a total run time of 26 minutes for all 100M rows.
  • 2k row size table RESULTS: 13 minutes to scan each of the 4 smaller tables in parallel (+ merge time, we’ll add 15 minutes of overhead to the process) + 8 minutes of process overhead for parallel query = 36 minutes.  Which is STILL less than 52 minutes for 100M rows in the original table scan.
  • 4k row size table RESULTS: 26.041 minutes for the 4 parallel scans (+ merge time, we’ll add 20 minutes of overhead to the process) + 10 minutes for the parallel query processes = 56 minutes.  STILL less than 104.16 minutes for the table scan of the original table.
  • 8k row size table RESULTS: 52.083 minutes for the 4 parallel scans (+ merge time, we’ll add 30 minutes of overhead to the process) + 20 minutes for parallel query processing = 102 minutes.  STILL less than 208 minutes for the table scan of the original table.
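
These estimates are easy to tweak if you disagree with my numbers – here is a minimal sketch of the arithmetic.  The merge and parallel-coordination overheads are the assumptions I made for this exercise, not measurements:

    # Elapsed time for 4 concurrent scans = time to scan ONE piece (they run in
    # parallel) plus the assumed merge overhead plus the assumed parallel overhead.
    cases = [
        # (label, per-table scan minutes, merge minutes, parallel overhead minutes,
        #  original single-table scan minutes)
        ("1k", 6.510, 10, 5, 26.041),
        ("2k", 13.020, 15, 8, 52.083),
        ("4k", 26.041, 20, 10, 104.16),
        ("8k", 52.083, 30, 20, 208.33),
    ]

    for label, scan_min, merge_min, parallel_min, original_min in cases:
        elapsed = scan_min + merge_min + parallel_min
        print(f"{label}: ~{elapsed:.1f} minutes in parallel vs "
              f"{original_min} minutes for the original table scan")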

Try this case yourself.  You *MUST* be accurate: do not execute a simple “select count(*)” – that will NOT get you results you can measure against.  You MUST force a “SELECT * FROM TABLE”, and use joins on numeric columns, in order to achieve these results.  The timing will vary depending on the hardware, infrastructure, disk, and database that are used.  However, joins & normalization (when set up properly) can only help performance when compared to table scans such as these.
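
If you do run the test, a minimal timing harness might look like the sketch below.  Everything here is illustrative: the connection setup is a placeholder for whichever Python DB-API driver fits your database, and the table names in the commented examples are hypothetical – the point is simply to time a full fetch of every row rather than a count(*):

    import time

    # Placeholder: open a connection with whichever DB-API driver you use
    # (pyodbc, psycopg2, etc.).  Hypothetical example:
    #   import pyodbc
    #   conn = pyodbc.connect("DSN=my_warehouse")

    def time_full_select(conn, sql, batch=100_000):
        """Time a full SELECT, pulling every row so all blocks really get read."""
        cursor = conn.cursor()
        start = time.perf_counter()
        cursor.execute(sql)
        total = 0
        while True:
            rows = cursor.fetchmany(batch)   # fetch in batches to avoid exhausting RAM
            if not rows:
                break
            total += len(rows)
        return total, time.perf_counter() - start

    # Hypothetical usage – the wide table vs its normalized pieces joined on a numeric key:
    #   print(time_full_select(conn, "SELECT * FROM wide_table"))
    #   print(time_full_select(conn, "SELECT * FROM part1 p1 JOIN part2 p2 ON p1.id = p2.id"))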

I hereby maintain my claims for the Data Vault and its normalization – that it is good for performance, and great for scalability – WHEN TUNED PROPERLY, and that it can & will beat denormalization and rows that get too wide along the way.

ONE MORE THING:

When you begin to “explode” the row size, or the table (regardless of Satellite, or Dimension, or Fact, or otherwise) becomes too wide…  you increase the I/O’s needed to both read and write that same number of rows to disk.  The I/O’s will double, triple, and quadruple as you continue to widen the row – for example, doubling a 1k row to 2k halves the rows per 64KB block (64 down to 32) and doubles the I/O’s required to get the same rows ON and OFF disk.  The physical I/O count is TIED DIRECTLY to the PERFORMANCE of the table structure.  Wider rows = more I/O = slower performance.  This has been shown over and over again in the references below.

We also “denormalize” to eliminate joins, but we denormalize KEY STRUCTURES by using Point-in-Time and Bridge tables.  These are SNAPSHOT tables which house KEYS ONLY, and they provide “fewer joins” to get to the data in the Data Vault underneath.  At the same time, they allow us to utilize the Data Vault structures to ensure flexibility, compatibility, and accountability going forward.

Finally: over-normalization can take its toll.  Today’s hardware (in RDBMS systems – see the assumptions above) cannot handle 6th normal form.  This is why COLUMNAR database engines with firmware and hardware adaptations are springing up.  Over-normalize (add TOO MANY joins) and yes, performance will drop – not from the normalization itself, but rather from the amount of parallel load placed on the infrastructure underneath.  The same can be said for over-denormalizing.  Make the rows TOO WIDE, and table scans begin to double / triple the I/O count (as shown here).  There is a fine line between the two, and a sweet spot where they both perform equally well.

I HOPE this puts this “argument / complaint” about the Data Vault to bed once and for all.  If you still maintain that the Data Vault has too many joins after this article, then you do not understand the mathematics of parallelism, partitioning, and concurrency.

OH YEAH, I almost forgot: the DATA VAULT is BUILT for “machine based queries.”  In other words, we do not need to tune the architecture for AD-HOC end user queries.  This means the architecture CAN and SHOULD be tuned for PROCESSES.  Once tuned, it’s done!  No more maintenance (until the data set grows, and horizontal range partitioning needs to be added).  Ad-hoc query tuning is a different beast, and has different requirements.  NO, the Data Vault will NOT perform well under ad-hoc query access (in most cases).

Leap of Faith?

Not really; it’s mathematics.  I believe that “division of data”, both horizontal and vertical partitioning, assists division of tasks (multi-tasking / parallelism).  If the dependency on sequential reads is NOT reduced, then there is an inherent upper limit to the scalability, size of data set, and performance that can be achieved (I am referring to straight table scans here vs using joins in parallel query operations).  Horizontal partitioning is good for parallel queries, but people often forget that vertical partitioning (normalization) also helps.  Yes, today there is such a thing as over-normalization: add too many joins / too much parallelism and the system can easily be overwhelmed.  When this is the case, the performance of the parallel threads slows down (there is not enough computing power to make use of the high degrees of parallelism).

But really, if the HARDWARE uses normalization (division of tasks, division of data – partitioning), and continues to make advances with multi-threaded, parallel processing AND performance gains, then why are we so HUNG UP on table scans and sequential processing in the database world?  This simply baffles me to no end.  The next question is: if this is truly the case (that non-parallel table scans scale better and faster than parallelism and normalization), then what are the RDBMS vendors doing WRONG to keep joins from performing in accordance with the parallel mathematics?

References

If you doubt my finding that joins over parallel tables (vertical partitioning) actually improve performance, read up on parallelism, parallel query, and degrees of parallelism.  I believe that normalization enhances the optimizer’s ability to make decisions about parallelism; table scans leave the optimizer NO CHOICE but to execute sequential read-aheads in nearly all instances.


Normalization Linked To Parallelism:

This is OFTEN ignored by those in the database world – all they talk about is how denormalization makes things faster.  Yes, but only to a point – once that scalability point is reached, normalization and parallelism are the only way to scale further.  It’s WHY we have parallel operating systems and multi-threaded CPUs today, and WHY BIG SMP systems cannot (today) outrun MPP systems.  If the HARDWARE uses normalization and parallelism, why can’t we take advantage of this at the database and data architecture levels?
