SSD Changes Everything

I’ve been researching Solid State Disk and its impact on Data Warehousing / Business Intelligence for quite some time.  Quite frankly, I should have seen this sooner!  Anyhow, in this entry, I’ll dive into SSD technology and discuss what I think it could mean for the future of Data Warehousing appliances, data modeling, and databases.  I’m going to touch on things like performance, scalability, and making the most of your existing hardware and database system (sunk cost), so read on!

What is SSD?

A solid state disk / drive (SSD) is electrically, mechanically and software compatible with a conventional (magnetic) hard disk.

The difference is that the storage medium is not magnetic (like a hard disk) or optical (like a CD) but a solid state semiconductor such as battery-backed RAM, EPROM or another electrically erasable RAM-like chip such as flash.

This provides faster access time than a hard disk, because the SSD data can be randomly accessed in the same time whatever the storage location. The SSD access time does not depend on a read/write interface head synchronising with a data sector on a rotating disk. The SSD also provides greater resilience to physical vibration, shock and extreme temperature fluctuations. SSDs are also immune to strong magnetic fields which could sanitize a hard drive.
Source: http://www.storagesearch.com/ssd-storage-glossary.html

Why does it make a difference?

Well, if you look at the statistics or performance comparisons against a standard hard drive, you’ll notice that in general the reads are 10x faster (minimum), and the writes are 5x faster.  Sometimes these increases are a lot more; I’ve seen read speeds advertised at 40x faster when you get into high-end SSD storage.  But this isn’t the only reason it makes a difference.  Sure, speed is important, but in the RDBMS (relational database management system) world, the data model also makes a huge difference.
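To put rough numbers on that, here’s a back-of-envelope sketch in Python.  The batch-window hours and the 10x/5x multipliers are illustrative assumptions on my part, not measurements from any particular system:

    # Back-of-envelope: what a 10x read / 5x write speedup could mean for a
    # mixed read/write batch window. All numbers below are assumptions.
    read_hours_hdd = 4.0    # hypothetical time spent reading on spinning disk
    write_hours_hdd = 2.0   # hypothetical time spent writing on spinning disk
    read_speedup, write_speedup = 10, 5   # the rough SSD multipliers quoted above

    ssd_hours = read_hours_hdd / read_speedup + write_hours_hdd / write_speedup
    print(f"HDD window: {read_hours_hdd + write_hours_hdd:.1f} h, "
          f"SSD window: {ssd_hours:.1f} h")   # 6.0 h shrinks to 0.8 h in this sketch

In practice the gain depends on how I/O-bound the workload really is, but the direction is clear.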

I wrote a blog entry not too long ago about the future of data modeling, and I still believe that to hold true, but what I didn’t address is the interim – the today…  What companies can do to save time and handle more data faster using existing hardware/software – leveraging their investments; all without changing to a NoSQL database/appliance or a column store.  I also did not address the benefits of sticking with your existing systems, applications, processes, and procedures.  In other words, how to save money TODAY with what is virtually a plug-and-play storage system, and how to handle growth/expansion without jumping into the cloud.

So, again: why does SSD make a difference?

Ok, it impacts a lot more than just speed and performance!  It can actually alleviate some of the architectural decisions that plague the data models of today.  These are the same data models that cause problems with scalability, accountability, real-time inflows, etc…   The symptoms of a bad data model include:

  • Fragmentation
  • Performance Degradation
  • Chained Rows
  • Join Operations
  • Indexing Coverage (lack thereof, or too many)
  • Lack of Parallel I/O abilities at the Disk level
  • Poor storage choice (RAID 5, RAID S, RAID 10+1, etc.)
  • Over-Partitioning or Incorrect Partitioning (Could be a symptom of a bad data model!)

These are but a few of the symptoms that can be caused by “poor” data models – especially when the data warehouse or Business Intelligence system grows to 45 or 50 Terabytes or more.  But then again, SSD is a hardware component and, to be fair, it simply replaces the disk – so why all this talk about the connection to the data model?

In order to answer this, let’s look at the root causes of performance problems:

  1. Tables that are too wide, so the number of rows per block is too low, causing double, triple, or quadruple the I/Os (see the sketch after this list)
  2. Too many indexes, causing double or triple the I/Os for massive inserts/updates
  3. Not enough indexes, causing table scans (more I/Os) over large data sets when queried
  4. Hierarchical data sets N levels deep, causing recursive I/Os to access (query) the data set in RANDOM READ order
  5. Real-time inflow overwhelming the I/O buffering, causing wait times for ALL queries and ALL other write processes (think double I/O for logging as well, and triple I/O for updating the indexes mentioned earlier)
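To make point #1 concrete, here’s a quick sketch of how row width drives the number of block reads.  The 8 KB block size, 10 million row count, and the row widths are assumptions chosen for illustration only, not taken from any specific database:

    # Rough block-read estimate for a query that must touch every row.
    # Block size, row count and row widths are illustrative assumptions.
    BLOCK_SIZE = 8192           # 8 KB block, a common RDBMS default
    ROW_COUNT = 10_000_000      # rows the query has to touch

    def blocks_to_read(row_bytes):
        rows_per_block = max(BLOCK_SIZE // row_bytes, 1)
        return -(-ROW_COUNT // rows_per_block)    # ceiling division

    for row_bytes in (200, 800, 3200):            # narrow vs. increasingly wide rows
        print(f"{row_bytes:>5}-byte rows -> {blocks_to_read(row_bytes):,} block reads")

In this sketch the 3,200-byte row forces 20 times the block reads of the 200-byte row, before indexes, logging, or chained rows add their own multipliers.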

Data Modeling can control some of these issues by vertically partitioning (normalizing) the data set.  This allows the data sets to scale with parallelism; however, it requires additional CPU processing power AND more I/Os for joining the data back together.  The saving grace (in most cases) is that you can architect the data model for a balanced approach to parallel I/O operations, avoiding the dreaded table scans and avoiding over-indexing.  Doing both can drop the I/O count a little bit; but at the end of the day it’s all about parallelism and how many parallel I/O channels you have.

There is your connection:  Data Model Normalization can increase parallelism (by design), which increases performance, and in some cases can reduce the number of I/Os by increasing the total number of rows per block.  But yes, it also increases I/Os by increasing the joins.
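Here’s the same arithmetic applied to vertical partitioning.  Splitting a hypothetical 3,200-byte row into a “hot” 400-byte table and a “cold” 2,800-byte table (sizes assumed purely for illustration) shows both sides of the trade:

    # Sketch: vertical partitioning raises rows-per-block for the columns a query
    # actually needs, at the cost of a join when it needs both halves.
    BLOCK_SIZE = 8192
    ROW_COUNT = 10_000_000

    def block_reads(row_bytes):
        return -(-ROW_COUNT // max(BLOCK_SIZE // row_bytes, 1))

    wide_row = 3200                       # everything in one wide table
    hot_cols, cold_cols = 400, 2800       # frequently- vs. rarely-used columns after the split

    print(f"wide table scan    : {block_reads(wide_row):,} block reads")
    print(f"hot partition scan : {block_reads(hot_cols):,} block reads")
    print(f"both partitions    : {block_reads(hot_cols) + block_reads(cold_cols):,} "
          f"block reads (plus join cost, but spread over two parallel I/O channels)")

Queries that only touch the hot columns drop to a tenth of the block reads in this example; queries that need everything pay slightly more plus the join, which is exactly the trade-off described above.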

What is SSD really good at?  High-speed, parallel, random reads and writes with little to no I/O blocking.

In other words – it greatly enhances parallel data access at much higher speeds when compared to traditional disk.  BUT: don’t make the mistake of thinking that denormalized data models will see the same or better performance increases as normalized models.  THIS SIMPLY IS NOT TRUE, except in the case of an appliance like Netezza, where the hardware/firmware is built around knowing where the data lives on disk and thrives on wide data sets.  SSDs in Netezza (without a bit of re-engineering on their part) may in fact slow the device down…  but then again, only until they adapt their own algorithms to work with SSDs.
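If you want to see the random-read gap on your own hardware before committing, a micro-benchmark along these lines gives a rough feel for it.  The file path, sizes and read count are placeholders I’ve picked, and OS caching will flatter the results unless the file is much larger than RAM or you use direct I/O:

    # Minimal single-threaded random-read sketch; run it once against a spinning
    # disk and once against the SSD and compare the reads/sec.
    import os, random, time

    PATH = "scratch.bin"              # hypothetical test file on the device under test
    FILE_SIZE = 1 << 30               # 1 GiB
    READ_SIZE = 8192                  # one 8 KB "block" per read
    READS = 5000

    # Fill the file with real data so the random reads actually hit the device
    # (a sparse file could be answered without touching the disk at all).
    if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
        with open(PATH, "wb") as f:
            chunk = os.urandom(1 << 20)
            for _ in range(FILE_SIZE >> 20):
                f.write(chunk)

    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for _ in range(READS):
            f.seek(random.randrange(0, FILE_SIZE - READ_SIZE))
            f.read(READ_SIZE)
        elapsed = time.perf_counter() - start

    print(f"{READS} random {READ_SIZE}-byte reads in {elapsed:.2f}s "
          f"({READS / elapsed:,.0f} reads/sec)")

The parallel story would need multiple threads or processes, but even the single-threaded numbers usually tell the tale.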

But if you’re thinking: how do I keep my current investment intact without jumping to the cloud and/or investing in an appliance?  Let’s talk about it.

Replace some of the disks in your current data warehousing environment with high-speed SSDs (I would say as many as you can afford, because high-speed SSDs are not cheap!).  It’s mostly a seamless operation, as the SSD vendors have done an outstanding job of making them plug-and-play with current interfaces.  SSDs dramatically increase the performance of (you guessed it) I/Os.

So, if you couple a Data Vault (normalized) data model with an SSD device for your most important data, you will end up with the highest possible performance in your existing systems without needing to buy an appliance, move to a new platform, or re-architect your entire solution.   You can save your investment.  By the way, if your RDBMS vendor doesn’t currently “support” or advocate SSDs, you can push them to integrate with them.  There’s a really good chance that the engineering required on their part is very minimal.
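For readers who haven’t seen the shape of a Data Vault model, here’s a toy sketch of the hub / link / satellite split expressed as plain Python structures.  The entity and column names are invented for the example, not a prescribed standard:

    # Toy illustration of the Data Vault split: narrow, purpose-built structures
    # that keep rows-per-block high and load in parallel.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class HubCustomer:            # business key only
        customer_hkey: str
        customer_number: str
        load_dts: datetime
        record_source: str

    @dataclass
    class SatCustomerDetails:     # descriptive attributes, versioned over time
        customer_hkey: str
        load_dts: datetime
        name: str
        segment: str
        record_source: str

    @dataclass
    class LinkCustomerOrder:      # relationship between two hubs
        link_hkey: str
        customer_hkey: str
        order_hkey: str
        load_dts: datetime
        record_source: str

Each structure stays narrow, which is exactly what keeps rows-per-block high and lets the hubs, links and satellites be loaded and read in parallel on the SSD.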

Where do these hypotheses come from?

Believe it or not, I watch the gaming industry.  Why?  Because they are always pushing the limits of what current hardware can do: graphics, input devices, caching, I/O, networking – all have to be high-speed and all have to be parallel.  If the gaming industry can see benefits, then the BI industry can see benefits in multiples.  Here’s a study that compares actual performance results of SSDs to hard drives:  http://www.samsung.com/global/business/semiconductor/products/SSD/downloads/SSD_vs_HDD_is_there_a_difference_Rev_3.pdf

I would argue that BI/EDW has more to gain in performance than the gaming industry, as we frequently deal with far larger data sets – heavy in both the read and write categories, and that’s where SSD shines.

What about other appliances?  Can SSD help there?

Yes, it can – it could be a boon to column-based appliances, Oracle Exadata (not necessarily an appliance, although that’s debatable), Netezza, the Teradata Warehouse Appliance, and more…  Again, the appliance vendors may need to make changes to their own hardware/software to really see the benefits – but there would be an incredible boost to performance across the board.  Finally, those multi-core CPUs may begin to see a little bit of load!

Conclusions…

The SSD is clearly here to stay, and I will say that I believe – if you want to leverage your existing investments and take the next step in performance, then make the jump to SSD.  But make the jump for ONLY your most critical or hottest data (as far as data temperature is concerned), because today high-speed SSD is cost-prohibitive for everything else.   I would also say that in order to make the most of parallel random accesses, add a Data Vault normalized data warehouse to the mix.  You don’t have to move your whole system – the Data Vault modeling techniques are agile in nature and allow you to build the system incrementally.  I teach you how to make this work inside my online training at: http://LearnDataVault.com.

I will be experimenting with SSD on a new laptop (probably early next year), and will release some performance results after that as well, so we can all see what might be possible.

I hope this helps your endeavors to handle large data volumes, real-time feeds, and existing investments; your current system does not need to be mothballed (at least not yet).

As always – if you have thoughts, comments or stories you’d like to share (even if SSD didn’t work for you), I’d love to hear them.  Register for free, then add your comment.

Cheers,
Dan L
