Storage Optimization


Coming Soon – Blog Face Lift

Posted in Featured by storageoptimization on January 30, 2009


Great news: the Storage Optimization blog is getting a face lift. We’ll be changing our look and feel, adding a lot more features, and tying in microblogging, so stay tuned. For those of you who subscribe to this blog or have it bookmarked, look out for a new Web address.

Thanks to all our readers and we hope you’ll enjoy the new, improved Storage Optimization.

Nice to be a Finalist

Posted in Storage by storageoptimization on January 28, 2009

My company, Ocarina Networks, received word this week that Storage Magazine has named us a finalist in the Storage Management category of its annual Products of the Year Awards.

This is yet another strong validation not only of our business proposition, but of our overall belief that dedupe for online storage–and capacity optimization in general–are becoming must-haves for a wide swath of the business community. This is more true than ever in today’s tough economy.

Our category, “Storage Management Software,” is a wide field, and the finalists represent everything from EMC’s virtual infrastructure support to Nirvanix’s CloudNAS to Symantec’s storage and virtual server management portal. The common thread, however, is a focus on getting more from less: whether it’s deduplication for primary storage, thin provisioning, virtualization, or storage in the cloud, the theme is apparent in just about every entrant.

Thanks to the people at TechTarget; we look forward to announcing that we’ve won the category when the results come out next month.

2009 – The Year of Storage Optimization

Posted in Analyst by storageoptimization on January 28, 2009
Storage consultant Tony Asaro cut straight to the chase on his HDS blog with his top prediction for 2009: “IT professionals will focus on optimization. I should end my blog right here. Nothing is more important this year.”

We couldn’t agree more, Tony. As data volumes grow – and budgets shrink – doing more with less is going to be the most important theme in storage for 2009 and the foreseeable future.

HDS is already recognized as the leader in many of the most important optimizations available in block storage. The next frontier is optimization in file storage. This includes content-aware compression and content-aware dedupe for online NAS, active archives, and content depots.

Being able to store 2, 10, or 20 times more file data on a given amount of high-performance virtualized HDS physical storage is not only possible now, but is also an example of vendor technology and user need intersecting at just the right time.

Storage Optimization – The Trend Picks Up

Posted in Storage by storageoptimization on January 26, 2009
Several news articles in the past week have responded to reports of the continued skyrocketing growth of unstructured data and to the technologies emerging to meet this new set of demands under today’s economic circumstances.
Here are a few of the articles that jumped out at us:
Processor
NetworkWorld
InternetNews
As we’ve often mentioned, a combination of solutions is called for when it comes to capacity optimization, one of which is content-aware compression, such as that offered by my company Ocarina Networks. Given the state of the economy and everyone’s focus on cost savings, we have no doubt that this trend will pick up in 2009 – dealing with the costs of growing data by making it take 90% less space to store is a win-win all around.

NetApp Nabs Best Company

Posted in Storage by storageoptimization on January 23, 2009

Fortune Magazine has published its yearly report on the “100 Best Companies to Work For.” To everyone’s amazement, the company with the most satisfied employees in the U.S., according to Fortune, is none other than NetApp.

The company was at number 14 last year, so this is quite a climb by any standard, especially considering that previous winners have included the likes of Starbucks and Google.

Chuck’s Blog, normally a bit acerbic toward his company EMC’s competitors, writes: “They done good, and deserve all due credit. On other topics, though, it’s still open season …”

Nicely put, Chuck.

What this Fortune award says to us is that the storage industry has shed its image as a backwater–or a place where only the hard core survive–and is emerging as one of the most attractive sectors in all of high tech. With the economy in the dumps, storage is showing itself to be one of the few areas where growth is not only possible, but inevitable, as the volume of data continues to increase. At Ocarina, we’re certainly experiencing that, and so are extremely pleased to see this recognition of our industry and its value to employees.

Looking back at the year of the cloud

Posted in Storage by storageoptimization on January 21, 2009
In the past year, we’ve seen a massive shift toward the cloud as a viable and trustworthy storage option for many small to medium-sized businesses. As Chris Preimesberger notes in his recent post, 2008 was “all about capacity and the cloud.”
Meanwhile, Chuck’s Blog is predicting a new emergence of the “private cloud,” which will take the place of the “uber clouds” of Google and Microsoft. Not sure why he doesn’t mention Amazon, but obviously its S3 offering is another major entrant in the emerging cloud storage arena.
Parascale CEO Sajai Krishnan, meanwhile, sees both private and public clouds taking off in the coming year. He is quoted in Web 2.0 Journal as follows: “The economic downturn and the addition of private cloud solutions to complement public offerings are creating an environment that enables incremental adoption of cloud storage on a very broad scale.”
As we have noted several times, for cloud storage to truly take off, it must include some kind of capacity optimization to ensure that the costs remain viable. We definitely continue to make this prediction as the cloud ramps up in 2009.

Optimization includes both compression and content-aware dedupe, and it affects both how much it costs to store files in the cloud and how quickly files can be uploaded to, and read from, cloud storage over the internet.
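To put rough numbers on that, here is a minimal, hypothetical Python sketch (standard library only, and in no way a description of Ocarina’s technology): shrinking a file before it leaves your site reduces both the bytes a cloud provider bills you for each month and the bytes that have to cross the wire. The per-GB price in the function signature is an illustrative placeholder, not any provider’s actual rate.

```python
import zlib
from pathlib import Path

def prepare_for_upload(path: str, price_per_gb_month: float = 0.15) -> bytes:
    """Compress a file before upload and report the estimated savings.

    price_per_gb_month is purely illustrative, not any provider's real rate.
    """
    raw = Path(path).read_bytes()
    compressed = zlib.compress(raw, level=9)

    gb = 1024 ** 3
    saved = len(raw) - len(compressed)
    ratio = len(compressed) / max(len(raw), 1)
    print(f"original:   {len(raw):>12,} bytes")
    print(f"compressed: {len(compressed):>12,} bytes ({ratio:.0%} of original)")
    print(f"monthly storage saved: about ${saved / gb * price_per_gb_month:.4f}")
    return compressed  # the smaller payload is what actually crosses the wire
```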

Because almost all clouds are built on “forests” of industry-standard servers, with software that ties them together into a self-healing, scalable storage pool, they have the ideal architecture for hosting lots of CPU-intensive data reduction algorithms – next-gen object dedupe and content-aware compressors that work on specific file types. A traditional filer does not have that kind of CPU horsepower, so the cloud is not only a different, cheaper place to rent terabytes. It’s also a new green-field architecture on the storage side – whether you are talking Mozy, Nirvanix, Zetta, or Microsoft – with the kind of horsepower to host new, fundamental features for cost- and capacity-optimized storage.
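As a rough illustration of what “content-aware” means here, the hypothetical Python sketch below picks a treatment per file type: formats that are already compressed are passed through (a real content-aware system would hand them to format-specific recompressors), while everything else goes through a general-purpose compressor. This is a sketch of the idea only, not how Ocarina or any other vendor actually does it.

```python
import zlib
from pathlib import Path

# Formats that are already compressed: a naive second pass of zlib gains little.
# A truly content-aware system would route these to format-specific recompressors.
ALREADY_COMPRESSED = {".jpg", ".jpeg", ".png", ".zip", ".gz", ".mp3", ".mp4"}

def reduce_file(path: Path) -> bytes:
    """Choose a reduction strategy based on the file's type (by extension here)."""
    data = path.read_bytes()
    if path.suffix.lower() in ALREADY_COMPRESSED:
        return data  # placeholder: pass through rather than burn CPU for no gain
    return zlib.compress(data, level=9)  # uncompressed formats shrink well generically

if __name__ == "__main__":
    for p in Path(".").rglob("*"):
        if p.is_file():
            print(f"{p}: {p.stat().st_size:,} -> {len(reduce_file(p)):,} bytes")
```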

Our Prediction for the Hottest Storage Category of 2009

Posted in Storage by storageoptimization on January 19, 2009

And the winner is… dedupe for online

When it comes to storage, our market research and experience with customers have led us to the following prediction: dedupe for online storage will emerge as the hottest category of 2009.

The current economic climate, coupled with the pace of advancement in cloud storage, has created a perfect storm in which the need for cheap online storage is growing exponentially.

This category, which has also been referred to as “dedupe for primary,” is a hot one with several entrants, one of which is my company Ocarina Networks.

Some industry observers have implied that this category is being overplayed, and that dedupe for primary won’t be as hot in the coming year as others have predicted. This is no doubt due to a misunderstanding of what is meant by “primary” storage, and where the bulk of the data growth is occurring. To clarify, we’re not talking here about dedupe for transactional databases or backups. The vast increases we’ve seen in storage demand are all in files and in nearline, not in performance-oriented primary storage.

With this in mind, here are the three key areas to consider when thinking about a dedupe solution for online:

1) How much can the product shrink an online data set with a wide mix of the typical kinds of files driving storage growth?
2) How fast can users access files that have been compressed and deduplicated?
3) How easy is it to integrate this new technology into an existing file serving environment?

I’m glad to say that Ocarina excels on all three fronts. Any product can deduplicate virtual machine images. The real question is which ones can also get good results on Exchange, Office 2007, PDF, and the wide range of image-rich data found in Web 2.0, energy, life sciences, medicine, and engineering. That’s where the rubber hits the road for our customers, and most likely you’ll be facing the same issues with your nearline data.
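For readers who want a back-of-the-envelope feel for question 1 above, the hypothetical Python sketch below estimates how much a mixed directory would shrink under simple fixed-size chunk dedupe plus zlib compression. Real products, Ocarina’s included, use far more sophisticated content-aware techniques, so treat this only as a crude measuring stick.

```python
import hashlib
import zlib
from pathlib import Path

CHUNK_SIZE = 64 * 1024  # fixed-size chunks; real systems use content-defined chunking

def estimate_reduction(root: str) -> None:
    """Estimate dedupe + compression savings for every file under `root`."""
    seen = set()          # hashes of chunks already stored
    logical = stored = 0  # bytes before and after reduction

    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        with path.open("rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                logical += len(chunk)
                digest = hashlib.sha256(chunk).digest()
                if digest in seen:
                    continue          # duplicate chunk: store nothing new
                seen.add(digest)
                stored += len(zlib.compress(chunk))  # unique chunk: compress it

    ratio = logical / stored if stored else 1.0
    print(f"{logical:,} logical bytes -> {stored:,} stored bytes ({ratio:.1f}:1)")

if __name__ == "__main__":
    estimate_reduction(".")
```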

Of course, only time will tell whether this prediction is correct, but I’m betting the farm on it myself.

Storing Bush’s Brain

Posted in Storage by storageoptimization on January 9, 2009

Storage woes are hitting home in the most surprising places. The New York Times recently reported that an emergency plan had to be implemented to deal with the massive amount of electronic information the Bush White House produced over the last eight years–about 100 terabytes, roughly 50 times the amount generated by the Clinton White House. Someone should’ve told the President to stop forwarding all those funny cat photos.

Under federal law, every last byte of correspondence must be stored by the National Archives, which, like so many companies and institutions, is struggling to manage the rising tide of data. The problem has reached crisis proportions, says TechWeb. And of course, when it comes to presidential data, this is just the beginning. With an incoming president who has been vigorously fighting to retain his BlackBerry since the day he was elected, the Obama administration’s data footprint is sure to be larger still, likely measured in petabytes.