Storage Optimization


Capacity-Optimized Storage: The Emergence of the O Tier

Posted in Storage by storageoptimization on July 23, 2008

Everyone is talking about the explosive growth of storage, but not all growth is the same. In fact, unstructured data (files) is growing much faster than structured data (databases), and capacity-optimized storage for files is growing much faster than traditional filer-based storage. This is driving some key developments in storage technology, as storage offerings emerge that are designed specifically for where the growth is.
       Traditionally, the difference between performance-optimized storage and capacity-optimized storage was just whether a storage system shipped with Fibre Channel drives or SATA drives, and maybe how much cache was in the storage controller. Now the differences between performance-optimized and capacity-optimized storage are becoming much bigger, with advances in each tier pulling them further apart.
       The “P Tier,” long dominated by NetApp and EMC, is seeing lots of advances, including bigger caches, solid-state disks, and more fault tolerance. It’s where data gets created, and there is a huge focus on never losing data that has just been created. The “P” in this tier doesn’t just represent “Performance,” but also “Protection.” Performance is measured in SPECsfs and IOPS, and protection features include mirroring, RAID levels, synchronous replication to DR sites, and snapshots every time a file is modified or deleted. However, the P Tier is very costly per terabyte because of the premium technology required to provide all those protection mechanisms while delivering stellar low-latency performance at the same time.
       Enter the “O Tier,” or what IDC calls capacity-optimized storage. This is no longer just a NetApp or EMC filer with SATA drives instead of Fibre Channel. True O Tier offerings, which are starting to come out from the major vendors, have several major architectural differences. EMC’s Hulk, IBM’s XIV, and HP’s exciting ExDS Extreme Storage are all based on scale-out architectures. You buy “bricks” of capacity at near-commodity prices, and you can scale out these systems by just adding more bricks. Almost all of the scale-out “O Tier” offerings are based on clustered or distributed file systems. These architectures are drastically cheaper than P Tier storage, even P Tier offerings with SATA disks.
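       To make the scale-out economics concrete, here is a minimal sketch in Python, with purely illustrative numbers rather than any vendor’s actual pricing, showing how usable capacity and cost per terabyte behave as identical bricks are added to a cluster:

```python
# Illustrative model of scale-out "brick" economics.
# All numbers are hypothetical assumptions, not vendor pricing.

BRICK_RAW_TB = 12          # raw capacity per brick
BRICK_COST_USD = 9000      # near-commodity price per brick
USABLE_FRACTION = 0.75     # after RAID / file-system overhead

def cluster_stats(num_bricks):
    """Usable TB and cost per usable TB for a cluster of identical bricks."""
    usable_tb = num_bricks * BRICK_RAW_TB * USABLE_FRACTION
    cost_per_tb = (num_bricks * BRICK_COST_USD) / usable_tb
    return usable_tb, cost_per_tb

for bricks in (4, 16, 64):
    usable, per_tb = cluster_stats(bricks)
    print(f"{bricks:3d} bricks: {usable:7.1f} usable TB at ${per_tb:,.0f}/TB")
```

       The point of the toy model is simply that cost per usable terabyte stays essentially flat as the cluster grows, so capacity scales with spend instead of requiring a forklift upgrade to a bigger controller.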
       What’s more, the O Tier is becoming clearer about what its metrics are. Data may be created in the P Tier, but it moves to the O Tier for long-term storage. That means there is less focus on extravagant protection measures; data that makes it to the O Tier has already been backed up, snapped, replicated, and protected many times in the P Tier. On the O Tier, the key metrics are Cost per Terabyte, Terabytes per Admin, and Watts per Terabyte over the system’s lifecycle.
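       As a rough illustration of those metrics, the sketch below (every input is a made-up assumption, not a measurement of any real system) shows how Cost per Terabyte, Terabytes per Admin, and power per terabyte over the lifecycle might be computed when scoring an O Tier candidate:

```python
# Hypothetical O Tier scorecard -- inputs are illustrative assumptions only.

capacity_tb = 500            # usable capacity of the system
purchase_cost_usd = 350_000  # acquisition cost
admins = 0.5                 # fraction of one admin's time to run it
avg_power_watts = 4_000      # average draw, disks plus controllers
lifecycle_years = 4          # planned service life

cost_per_tb = purchase_cost_usd / capacity_tb
tb_per_admin = capacity_tb / admins
watts_per_tb = avg_power_watts / capacity_tb
kwh_lifetime = avg_power_watts / 1000 * 24 * 365 * lifecycle_years

print(f"Cost per TB:     ${cost_per_tb:,.0f}")
print(f"TB per admin:    {tb_per_admin:,.0f}")
print(f"Watts per TB:    {watts_per_tb:.1f}")
print(f"Lifetime energy: {kwh_lifetime:,.0f} kWh")
```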
       The O Tier is evolving to solve different storage problems than the legacy P Tier, and because of that it is developing its own new features for capacity optimization. The most important of these is integrated data reduction, which can take the form of block-level dedupe, next-generation compression, or content-aware optimization. Several technologies are coming out that aim for 5X, 10X, or 20X data reduction for online storage in the O Tier. Expect these technologies to be embedded as integrated elements in leading O Tier storage offerings. Examples include Data Domain moving from being a storage solution for backups to offering nearline storage with dedupe, or the several storage vendors who are integrating my company Ocarina Networks’ storage optimization solution into their O Tier storage offerings.
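       To show what block-level dedupe means in practice, here is a minimal toy sketch in Python: it splits a stream into fixed-size blocks, keeps one copy of each unique block keyed by its hash, and reports the resulting reduction ratio. Shipping products use far more sophisticated chunking, indexing, and on-disk layouts; this is only to illustrate the idea.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real systems may use variable-size chunking

def dedupe(data: bytes):
    """Store each unique block once, keyed by its SHA-256 hash.

    Returns the block store and the ordered list of hashes needed
    to reconstruct the original stream.
    """
    store = {}      # hash -> block bytes (unique blocks only)
    recipe = []     # ordered hashes to rebuild the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

# Toy example: highly repetitive data dedupes very well.
data = (b"A" * BLOCK_SIZE) * 18 + (b"B" * BLOCK_SIZE) * 2
store, recipe = dedupe(data)
stored = sum(len(b) for b in store.values())
print(f"Logical: {len(data)} bytes, stored: {stored} bytes, "
      f"reduction: {len(data) / stored:.0f}X")
```

       Repetitive data such as backups or virtual machine images is where ratios like the 10X figure above come from; content that is already compressed is where content-aware optimization, rather than plain block dedupe, has to do the work.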
       Anyone tracking trends in storage needs to start differentiating these tiers not just by what disks are in a given filer, but by whether they are really P Tier or O Tier filers, with true Performance and Protection in the P Tier, or true Capacity Optimization in the O Tier.
       While the traditional NAS leaders, EMC and NetApp, will certainly come out with O Tier offerings, the emergence of a new tier with different characteristics creates a new market opportunity for other major players to become the new leaders in the O Tier. Look for HP in particular, as well as IBM, Ibrix, Isilon, and BlueArc, to be making major pushes in the O Tier this year and especially in ’09.

2 Responses to 'Capacity-Optimized Storage: The Emergence of the O Tier'



  1. Nice blog. I would also consider the impact of content storage on the “O” tier. This space is booming and driving requirements that existing architectures cannot satisfy.


  2. […] I mentioned in an earlier post, the kind of data that’s driving much of today’s storage […]

