Storage Optimization


Nice to be a Finalist

Posted in Storage by storageoptimization on January 28, 2009

My company Ocarina Networks received word this week that Storage Magazine has named us a finalist in the Storage Management category of its annual Products of the Year Awards.

This is yet another strong validation not only of our business proposition, but of our overall belief that dedupe for online storage–and capacity optimization in general–are becoming must-haves for a wide swath of the business community. This is more true than ever in today’s tough economy.

Our category, “Storage Management Software,” is a wide field, and the finalists represent everything from EMC’s virtual infrastructure support to Nirvanix’s CloudNAS to Symantec’s storage and virtual server management portal. The common thread, however, is a focus on getting more from less: whether it’s deduplication for primary storage, thin provisioning, virtualization, or storage in the cloud, the theme shows up in just about every entrant.

Thanks to the people at TechTarget; we look forward to announcing that we’ve won the category when the results come out next month.


2009: The Year of Storage Optimization

Posted in Analyst by storageoptimization on January 28, 2009
Storage consultant Tony Asaro cut straight to the chase on his HDS blog with his top prediction for 2009: “IT professionals will focus on optimization. I should end my blog right here. Nothing is more important this year.”

We couldn’t agree more, Tony. As data volumes grow and budgets shrink, doing more with less is going to be the most important theme in storage for 2009 and the foreseeable future.

HDS is already recognized as the leader in many of the most important optimizations available in block storage. The next frontier is optimization in file storage. This includes content-aware compression and content-aware dedupe for online NAS, active archives, and content depots.

Being able to store 2, 10, or 20 times more file data on a given amount of high-performance virtualized HDS physical storage is not only possible now; it is an example of vendor technology and user need intersecting at just the right time.

Storage Optimization – The Trend Picks Up

Posted in Storage by storageoptimization on January 26, 2009
Several news articles in the past week have responded to reports of the continued skyrocketing growth of unstructured data, and to the technologies emerging to meet this new set of demands under today’s economic circumstances.
Here are a few of the articles that jumped out at us:
Processor
NetworkWorld
InternetNews
As we’ve often mentioned, a combination of solutions is called for when it comes to capacity optimization, one of which is content-aware compression, such as that offered by my company Ocarina Networks. Given the state of the economy and everyone’s focus on cost savings, we have no doubt that this trend will pick up in 2009. Dealing with the costs of growing data by having it take 90% less space to store is a win-win all around.

NetApp Nabs Best Company

Posted in Storage by storageoptimization on January 23, 2009

Fortune Magazine has published its yearly report on the “100 Best Companies to Work For.” To everyone’s amazement, the company with the most satisfied employees in the U.S., according to Fortune, is none other than NetApp.

The company was at number 14 last year, so this is quite a climb by any standard, especially considering that previous winners include the likes of Starbucks and Google.

Chuck’s Blog, normally a bit acerbic about competitors of his company EMC, writes: “They done good, and deserve all due credit. On other topics, though, it’s still open season …”

Nicely put, Chuck.

What this Fortune award says to us is that the storage industry has shed its image as a backwater, a place where only the hard core survive, and is emerging as one of the most attractive sectors in all of high tech. With the economy in the dumps, storage is showing itself to be one of the few areas where growth is not only possible but inevitable, as the volume of data continues to increase. At Ocarina, we’re certainly experiencing that, and we are extremely pleased to see this recognition of our industry and its value to employees.

Looking back at the year of the cloud

Posted in Storage by storageoptimization on January 21, 2009
In the past year, we’ve seen a massive shift toward the cloud as a viable and trustworthy storage option for many small to medium-sized businesses. As Chris Preimesberger notes in his recent post, 2008 was “all about capacity and the cloud.”
Meanwhile, Chuck’s Blog is predicting the emergence of the “private cloud,” which he sees taking the place of the public “uber clouds” of providers like Microsoft. Not sure why he doesn’t mention Amazon, but obviously their S3 offering is another major entrant in the emerging cloud storage arena.
Parascale CEO Sajai Krishnan, meanwhile, sees both private and public clouds taking off in the coming year. He is quoted in Web 2.0 Journal as follows: “The economic downturn and the addition of private cloud solutions to complement public offerings are creating an environment that enables incremental adoption of cloud storage on a very broad scale.”
As we have noted several times, for cloud storage to truly take off, it must include some kind of capacity optimization to ensure that the costs remain viable. We continue to make this prediction as the cloud ramps up in 2009.

Optimization includes both compression and content-aware dedupe, and it affects both how much it costs to store files in the cloud and how quickly files can be uploaded to, and read from, cloud storage over the internet.

Because almost all clouds are built on “forests” of industry-standard servers with software that ties them together into a self-healing, scalable storage pool, they have the ideal architecture for hosting lots of CPU-intensive data reduction algorithms: next-gen object dedupe and content-aware compressors that work on specific file types. A traditional filer does not have that kind of CPU horsepower, so the cloud is not just a different, cheaper place to rent terabytes. It is also a green-field architecture on the storage side, whether you are talking Mozy, Nirvanix, Zetta, or Microsoft, with the kind of horsepower to host fundamentally new features for cost- and capacity-optimized storage.
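To make the idea of type-specific data reduction concrete, here is a minimal Python sketch of the dispatch step only: files are routed to a compressor chosen by file type and the per-file savings are reported. This is not Ocarina’s pipeline; the registry and names are hypothetical, and every entry falls back to generic zlib so the sketch stays runnable. A real content-aware system would plug in format-specific codecs and spread this work across a cluster’s spare CPU.

```python
import zlib
from pathlib import Path

def compress_generic(data: bytes) -> bytes:
    """Generic fallback compressor; stands in for format-specific codecs."""
    return zlib.compress(data, 9)

# Hypothetical registry: in a real system each entry would be a codec that
# understands the format (e.g. image recompression for .jpg), not plain zlib.
COMPRESSORS = {
    ".docx": compress_generic,
    ".jpg": compress_generic,
    ".log": compress_generic,
}

def optimize(path: Path) -> tuple[int, int]:
    """Pick a compressor by file type and return (original_size, reduced_size)."""
    data = path.read_bytes()
    compressor = COMPRESSORS.get(path.suffix.lower(), compress_generic)
    return len(data), len(compressor(data))

if __name__ == "__main__":
    for f in Path(".").glob("*.*"):
        before, after = optimize(f)
        print(f"{f.name}: {before} -> {after} bytes")
```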

Our Prediction for the Hottest Storage Category of 2009

Posted in Storage by storageoptimization on January 19, 2009

And the winner is… dedupe for online

When it comes to storage, our market research and experience with customers have led us to the following prediction: dedupe for online storage will emerge as the hottest category of the year in 2009.

The current economic climate, coupled with the pace of advancement in cloud storage, has created a perfect storm in which the need for cheap online storage is growing exponentially.

This category, which has also been referred to as “dedupe for primary,” is a hot one with several entrants, one of which is my company Ocarina Networks.

Some industry observers have implied that this category is being overplayed, and that dedupe for primary won’t be as hot in the coming year as others have predicted. This is no doubt due to a misunderstanding of what is meant by “primary” storage, and of where the bulk of the data growth is occurring. To clarify, we’re not talking here about dedupe for transactional databases or backups. The vast increases we’ve seen in storage demand are all in files and in nearline, not in performance-oriented primary storage.

With this in mind, here are the three key areas to consider when thinking about a dedupe solution for online:

1) How much can the product shrink an online data set with a wide mix of the typical kinds of files driving storage growth?
2) How fast can users access files that have been compressed and deduplicated?
3) How easy is it to integrate this new technology into an existing file serving environment?

I’m glad to say that Ocarina excels on all three fronts. Any product can deduplicate virtual machine images. The real question is which ones can also get good results on Exchange, Office 2007, PDF, and the wide range of image-rich data found in Web 2.0, energy, life sciences, medicine, and engineering. That’s where the rubber hits the road for our customers, and most likely you’ll be facing the same issues with your nearline data.
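For the first question, the quickest sanity check is to measure how much a representative data set actually shrinks, broken down by file type. Here is a rough, hypothetical sketch of that benchmark; it uses gzip purely as a stand-in (a real dedupe and compression product reports its own ratios), and the /data/nearline path is made up for illustration.

```python
import gzip
from collections import defaultdict
from pathlib import Path

def reduction_by_type(root: str) -> dict[str, float]:
    """Return an approximate reduction ratio per file extension under root."""
    before = defaultdict(int)
    after = defaultdict(int)
    for path in Path(root).rglob("*"):
        if path.is_file():
            data = path.read_bytes()
            ext = path.suffix.lower() or "(none)"
            before[ext] += len(data)
            after[ext] += len(gzip.compress(data))  # stand-in reducer
    return {ext: before[ext] / max(after[ext], 1) for ext in before}

if __name__ == "__main__":
    for ext, ratio in sorted(reduction_by_type("/data/nearline").items()):
        print(f"{ext:10s} {ratio:5.2f}:1")
```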

Of course, only time will tell whether this prediction is correct, but I’m betting the farm on it myself.

Storing Bush’s Brain

Posted in Storage by storageoptimization on January 9, 2009

Storage woes are hitting home in the most surprising places. The New York Times recently reported that an emergency plan had to be implemented to deal with the massive amount of electronic information the Bush White House produced over the last eight years: about 100 terabytes, roughly 50 times what the Clinton White House generated. Someone should’ve told the President to stop forwarding all those funny cat photos.

Under federal law, every last byte of correspondence must be stored by the National Archives, which, like so many other organizations and institutions, is struggling to manage the rising tide of data. The problem has reached crisis proportions, says TechWeb. And of course, when it comes to presidential data, this is just the beginning. With an incoming president who has been vigorously fighting to retain his BlackBerry since the day he was elected, the Obama administration’s data footprint is sure to be petabytes larger still.

Yet another Ocarina

Posted in Featured by storageoptimization on November 12, 2008

As someone who enjoys my iPhone, I was surprised and pleased to discover a new app that’s getting a lot of attention, the “Ocarina” by Smule–the same folks that came up with the sonic lighter app. According to TechCrunch, it’s a “Textbook example of how to build a great iPhone app.” 

Looks like it could be fun to play, and seems to work something like a real ocarina–the musical instrument, that is. See below for a demo.

The other reason I mention it is that my company, Ocarina Networks, is, in my humble opinion, a textbook example of another kind: how to build a company that serves a growing and urgent need. In our case, that need is reducing the storage load of unstructured data. Like the Ocarina iPhone app, we are responding to something that has become extremely popular in this day and age.

The folks at Smule are responding to the fact that the iPhone is becoming a source of entertainment. In our case, we are responding to the immense uptick in the amount of data that must be stored due to the number of photos, videos and, yes, music files that are being shared in our social networking era.

For all this, we never expected that the actual instrument known as the ocarina would make such a comeback. When we came up with the name Ocarina Networks, our biggest consideration was that it is a real word, rather than one of those made-up, computer-generated monikers like Zminlglynx or … well, like “Smule.”

In any case, we’re glad to share the stage with you guys. And look us up if your storage footprint starts to get too big.

Economic Woes and Storage

Posted in Storage by storageoptimization on October 6, 2008

Every major business magazine has a cover story this week on the economic turmoil that’s gripping the credit markets, Wall Street, and the rest of us. Every one, that is, except Forbes, which chose to put Cisco CEO John Chambers on its cover this week. No doubt some editors over there are wishing they’d made a different choice at this moment, but leaving that aside, in some ways this story says more about the economy than any of the others.

Cisco, the article demonstrates, has jumped with both feet into the area with the greatest promise: data centers. They are the unglamorous chunk of reality that underlies all the fun and fancy Web 2.0 applications that, for now, are keeping Silicon Valley from tanking along with the rest of the economy. (Unless you believe the NYT, of course.)

To quote the Forbes article: “This is what the online computing revolution has become, a giant electricity hog of Internet searches, phone calls, blog posts, wireless downloads, bank transactions and office documents. And video, lots and lots of video.” The article also includes a chart comparing new server spending vs. power and cooling costs.

All of which leads us to the inexorable conclusion, one that TechTarget’s Dave Raffo refers to in a recent post, that one of the few places sheltered from the current storm is anything that reduces the cost of storage. So yes, storage optimization is the place to be in today’s tough economic climate. But the larger point is that it could help keep afloat lots of companies that might otherwise crumple under the weight of their storage costs.

Are you Content Aware?

Posted in Analyst,Storage by storageoptimization on October 2, 2008
Storage analyst Robin Harris commented on the storage story of the week: NetApp’s guarantee that virtualization will mean a 50% gain in storage capacity for its customers.
Harris’s take on the announcement is that dedupe for primary storage could be “the next big win for IT shops.” Perhaps, but let’s keep in mind that NetApp dedupe is very simple: it only finds duplicate blocks that fall on NetApp WAFL 4K block boundaries. The reason they are positioning it as a big win for VMware users is that virtual machine images (static images of whole operating systems) are exactly one of the few places in primary storage where you’ll find lots of dupes on block-aligned boundaries.
Here is my take: the best results are going to come from applications that can recognize file types and understand how to find the duplicate information inside them. That is where the big wins in dedupe for primary storage are going to be.
Consider this typical scenario: I create a PowerPoint and email it to someone else. They save it, open it, and make an edit, adding a slide or even just tweaking a bullet or two. That small edit means that none of the redundant content of the file falls on the same NetApp WAFL block boundaries anymore. So although the two files are almost entirely the same, you won’t see good dedupe results on them.
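Here is a minimal sketch, not tied to any vendor’s implementation, of why a small edit near the front of a file defeats block-aligned dedupe: hashing fixed 4K-aligned blocks (mirroring the WAFL example above) before and after inserting a few hundred bytes typically leaves zero blocks in common, because everything downstream of the insertion shifts off its old boundary.

```python
import hashlib
import os

BLOCK = 4096  # fixed 4K boundaries, as in the WAFL example above

def block_hashes(data: bytes) -> set[str]:
    """Hash each fixed-size, aligned block of the input."""
    return {
        hashlib.sha256(data[i:i + BLOCK]).hexdigest()
        for i in range(0, len(data), BLOCK)
    }

# Stand-in for a document: 1 MB of arbitrary content.
original = os.urandom(1024 * 1024)
# "Add a slide": insert a few hundred bytes near the front of the file.
edited = original[:100] + os.urandom(300) + original[100:]

shared = block_hashes(original) & block_hashes(edited)
print(f"aligned blocks shared after a small early edit: {len(shared)}")  # typically 0
```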
A content-aware solution, which combines information-level dedupe with content-aware compression, should be able to get 10:1 compression on most typical file mixes (especially Office and engineering ones). A 10:1 ratio is the same as a 90% reduction, so if you can shrink 80% of your data by 90%, you can get a pretty good handle on how big the win could be. And by the way, it’s not necessarily a bad thing for the guys who sell disks, either, because once you get that kind of win, you start to think differently about what you can store and how long you can store it. For example, at my company Ocarina Networks (www.ocarinanetworks.com), we have a customer that plans to keep a snapshot a day online, for every day’s data, for 10 years. That wouldn’t be possible without some drastic deduplication.
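To spell out that back-of-the-envelope math: if 80% of a data set shrinks 10:1 and the remaining 20% doesn’t shrink at all, the overall footprint drops to 28% of the original, roughly a 72% reduction or a 3.6:1 overall ratio. The arithmetic in a few lines:

```python
# 80% of the data shrinks 10:1, the remaining 20% stays as-is.
optimizable, ratio = 0.80, 10.0
remaining = optimizable / ratio + (1 - optimizable)  # 0.08 + 0.20 = 0.28
print(f"data left on disk: {remaining:.0%}")     # 28%
print(f"overall reduction: {1 - remaining:.0%}") # 72%
print(f"overall ratio: {1 / remaining:.1f}:1")   # 3.6:1
```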
Block-level dedupe, whether simple block-aligned like NetApp’s or sliding-window like market leader Data Domain’s, is only going to find a small subset of the duplicate or redundant information in primary storage. That’s because most of the file types driving storage growth in primary (or nearline) storage are compressed. Compression causes the contents of a file to be recomputed, and to look random, every time the file is changed. So if I store a photo, then open it, edit one pixel, and save the new version as a new file, there won’t be a single duplicate block at the disk level, even though almost the entire file is duplicate information.
Can you find a duplicate graphic that was used in a PowerPoint, a Word document, and a PDF? PowerPoint and Word both compress with a variant of zip; PDF is compressed with deflate. Even if the graphic is identical, block-level dedupe won’t find the duplicate copies, because they are not stored identically on disk. You need something that can find duplicate data at the information level. Finally, there is pretty concrete data showing that about 80% of the file data on NAS is a candidate for deduplication.
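To make “information level” concrete, here is a minimal sketch (not Ocarina’s implementation) that finds a duplicate graphic across zip-based Office documents by hashing the decompressed embedded media rather than raw disk blocks. The input file names are hypothetical, and PDF support is omitted because its deflate streams would need to be decoded first.

```python
import hashlib
import zipfile
from collections import defaultdict
from pathlib import Path

def media_fingerprints(doc: Path) -> dict[str, str]:
    """Hash the decompressed embedded media inside a .docx/.pptx/.xlsx container."""
    prints = {}
    with zipfile.ZipFile(doc) as z:
        for name in z.namelist():
            if "/media/" in name:  # embedded images live under */media/
                prints[name] = hashlib.sha256(z.read(name)).hexdigest()
    return prints

def shared_graphics(docs: list[Path]) -> dict[str, list[str]]:
    """Group documents by the fingerprints of the graphics they contain."""
    seen = defaultdict(list)
    for doc in docs:
        for name, digest in media_fingerprints(doc).items():
            seen[digest].append(f"{doc.name}:{name}")
    return {d: locs for d, locs in seen.items() if len(locs) > 1}

if __name__ == "__main__":
    files = [Path("deck.pptx"), Path("report.docx")]  # hypothetical inputs
    for digest, locations in shared_graphics(files).items():
        print(digest[:12], locations)
```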
With all that in mind, don’t you think content aware optimization is going to be the next truly big win?