Storage Optimization


How to Cut Storage Costs – Taneja

The explosive growth of data is threatening to overwhelm any number of industries. Whether we’re talking about an online photo-sharing site or a high-throughput gene sequencing lab, the pain is the same: there’s too much data and not enough space to store it, with the result that costs are spiraling out of control. A recent white paper from the Taneja Group, “Extending the Vision for Primary Storage Optimization: Ocarina Networks,” takes a look at the emerging capacity optimization technologies designed to handle this influx of data. It concludes that ours is one of the most compelling technologies, being the only content-aware primary storage optimization (PSO) solution on the market today.

In its conclusion, the report states: “If you’re looking at PSO technology, Ocarina needs to be on your short list.”


The impending storage crunch

Posted in Storage by storageoptimization on July 28, 2008

No one can miss the fact that data storage is spiraling upward at a terrifying rate. Joerg Hallbauer, writing on Dell’s Future of Storage blog, hit the nail on the head with his post: “We are running out of places to put things.”

Citing data collected by IDC, Hallbauer concludes that in a mere three years there will be 1,400 exabytes sitting on disk. Currently, according to the study, there are 281 exabytes of data being stored, and the CAGR is 70 percent. Much of this data lives on laptops, home computers, or servers under your desk today, but as Joerg correctly notes, there’s no question it’s migrating quickly to the cloud. Huge data centers will end up holding most of this data, and disk drives are not growing fast enough to keep up.
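If you want to sanity-check where the 1,400 exabyte figure comes from, a quick back-of-the-envelope calculation does it (assuming the cited 70 percent CAGR compounds annually from the 281 exabyte baseline):

```python
# Back-of-the-envelope projection of worldwide stored data,
# using the IDC figures cited above: 281 EB today, 70% CAGR.
baseline_eb = 281   # exabytes stored today
cagr = 0.70         # 70% compound annual growth rate
years = 3

projected_eb = baseline_eb * (1 + cagr) ** years
print(f"Projected data in {years} years: {projected_eb:.0f} EB")
# Projected data in 3 years: 1381 EB -- roughly the 1,400 EB cited
```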

So, where do we go from here? If the traditional answer was “wait for bigger drives so I can put more stuff on a disk,” the other logical move is to ask: how can I put a lot more stuff on the disks I already have? The answer is advanced storage optimization. The first simple storage optimization solutions are out there today: single instancing, deduplication, and compression. But the field is really just taking off, and much more sophisticated approaches are emerging that will allow a disk, whatever its physical size, to store 10, 20, or 100 times more data than it does today.
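To make the core idea concrete, here’s a minimal sketch of block-level deduplication in Python. The fixed chunk size and hash choice are purely illustrative, not any particular vendor’s implementation:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size; real systems often vary this

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk only once (keyed
    by its hash), and return the recipe needed to reconstruct the data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # only never-seen content consumes space
            store[digest] = chunk
        recipe.append(digest)
    return recipe

store = {}
data = (b"A" * CHUNK_SIZE * 10) + (b"B" * CHUNK_SIZE * 5)
recipe = dedupe_store(data, store)
print(f"{len(recipe)} chunks referenced, {len(store)} chunks actually stored")
# 15 chunks referenced, 2 chunks actually stored
```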

What’s more, the move to large data centers providing huge cloud storage services will amplify these gains, because storage optimization is all about finding redundant information and storing it more efficiently. So the larger the data set, the more likely you are to see big wins from next-generation storage optimization.
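A toy simulation shows why pooling helps. Assuming (purely for illustration) that chunks are drawn from a shared pool of popular content, the fraction of chunks you actually have to store drops as the data set grows:

```python
import random

random.seed(42)
POPULAR_CHUNKS = 10_000  # assumed universe of distinct chunks shared across users

def stored_fraction(n_chunks: int) -> float:
    """Simulate a data set of n_chunks chunks drawn from a shared pool
    and return the fraction a deduplicating store must actually keep."""
    seen = {random.randrange(POPULAR_CHUNKS) for _ in range(n_chunks)}
    return len(seen) / n_chunks

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} chunks -> store only {stored_fraction(n):.0%}")
# The stored fraction falls as the pool grows: concentrating more data
# in one data center exposes more redundancy for the optimizer to remove.
```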

This also naturally leads to more tiering. Where today the tiers are made up of fast disks (Fibre Channel or SAS) and slow disks (SATA), it’s much more likely that in the future the fast tiers will be solid-state storage of some sort (SSD and flash, as Joerg points out) and the massive tiers that hold the bulk of all these exabytes will be the largest possible disks, integrated into systems that have very efficient storage optimization built in.
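Here’s a toy sketch of what such a placement policy might look like, assuming (hypothetically) that access frequency is the only signal; real tiering engines weigh many more factors:

```python
from dataclasses import dataclass

HOT_THRESHOLD = 100  # assumed accesses/day cutoff; real policies are richer

@dataclass
class FileInfo:
    name: str
    accesses_per_day: int

def place(f: FileInfo) -> str:
    """Route frequently accessed data to the fast (SSD/flash) tier and
    everything else to the optimized high-capacity disk tier."""
    if f.accesses_per_day >= HOT_THRESHOLD:
        return "ssd-tier"
    return "capacity-tier (dedupe + compression)"

for f in (FileInfo("db-index", 5000), FileInfo("photo-archive", 2)):
    print(f.name, "->", place(f))
# db-index -> ssd-tier
# photo-archive -> capacity-tier (dedupe + compression)
```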
