Storage Optimization


Less is More–Part 2

Posted in Featured, File Systems, Storage by storageoptimization on April 25, 2008

As we all know, the internet is where there is huge storage growth, multi-petabyte scale, and a need to stay very close to the commodity price point on storage costs. There are two common threads across all of the “less is more” file systems that have been popping up to handle all this growth. 

First, they are all designed so that you can build very scalable, very large pools of storage using generic white box servers stuffed with cheap disks. Second, they mostly support only the most primitive operations: create a new file, read that file, delete a file. While I’m generalizing, and this is not exactly true for all of these new file systems, many just skip things that are considered standard in traditional file systems: locking, POSIX semantics, authentication, ACLs, concurrency control, metadata, or the ability to list and search for files.
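To make the contrast concrete, here is a minimal sketch (hypothetical Python, with made-up names) of the kind of write-once object interface these “less is more” systems expose. Everything a traditional file system would add, such as hierarchy, locking, permissions, and listing, is deliberately missing.

```python
# Hypothetical sketch of a "less is more" store: three operations and
# nothing else. No rename, no locking, no ACLs, no directory listing.

class SimpleBlobStore:
    def __init__(self):
        self._blobs = {}  # key -> immutable bytes

    def put(self, key: str, data: bytes) -> None:
        """Create a new object. Overwrites and appends are not supported."""
        if key in self._blobs:
            raise FileExistsError(key)
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        """Read the whole object back."""
        return self._blobs[key]

    def delete(self, key: str) -> None:
        """Remove the object; in practice this is the rare operation."""
        self._blobs.pop(key, None)
```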

The overhead of all those traditional file system operations is too much for massive internet-scale operations where the primary purpose of a file system is for a user to upload something, for millions of people to look at it over and over, and maybe someday, sometime, someone will delete something.

These file systems stand in contrast to advanced file system developments from places like NetApp’s latest Data ONTAP and WAFL releases, HP’s PolyServe cluster file system, or the transaction-enabled NTFS from Microsoft that you can find in Server 2008.

The line in the sand is this: there are file systems designed to be used by people, and file systems designed to be used only by specific applications.

The commercial file systems grew up serving the needs of business users and business applications. They are designed to host a wide variety of applications, including production databases, to let users peruse and manage their files, and to let storage administrators keep up with growth, availability, and corporate compliance requirements.

As a consequence, more and more value-add features are being put into the file system to support these use cases. The “less is more” crowd, on the other hand, wants a very cost-effective but massively scalable pool of storage to make available to their web applications. A global namespace (so it looks like one giant pool of storage) and low, low cost per terabyte are the drivers of these file systems.

Users don’t list their directories in these file systems. In fact, users never see these file systems. Users see web applications, and the web applications use databases to keep track of what files are where in the massive storage pool, and who is allowed to see them. In that sense, in the “less is more” file system world, a lot of the value-add and management functionality of the file system is moving up into the application layer, especially in the largest content-rich web sites.
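As a rough illustration (a hypothetical schema, not any particular site’s design), the bookkeeping a file system would normally do ends up in an application database that maps assets and permissions onto opaque keys in the storage pool:

```python
import sqlite3
from typing import Optional

# Hypothetical application-layer metadata: the web app, not the file
# system, remembers what each object is, where it lives, and who may
# see it. The storage pool below only ever sees opaque keys.

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE assets (
        asset_id   INTEGER PRIMARY KEY,
        owner      TEXT NOT NULL,   -- who uploaded it
        visibility TEXT NOT NULL,   -- 'public' or 'private'
        blob_key   TEXT NOT NULL    -- opaque key in the storage pool
    )
""")

def record_upload(owner: str, visibility: str, blob_key: str) -> int:
    cur = db.execute(
        "INSERT INTO assets (owner, visibility, blob_key) VALUES (?, ?, ?)",
        (owner, visibility, blob_key),
    )
    db.commit()
    return cur.lastrowid

def lookup(asset_id: int, requester: str) -> Optional[str]:
    """Return the blob key only if the requester is allowed to see it."""
    row = db.execute(
        "SELECT owner, visibility, blob_key FROM assets WHERE asset_id = ?",
        (asset_id,),
    ).fetchone()
    if row is None:
        return None
    owner, visibility, blob_key = row
    return blob_key if visibility == "public" or owner == requester else None
```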

From my point of view, the feature-rich commercial file systems will continue to evolve to meet the needs of corporate customers, including scaling to meet their growth needs. The “less is more” file systems will continue to push out traditional file systems in the highest growth web properties and other customers whose data growth is at that many-petabyte scale. Finally, the two things are not entirely incompatible – most of the new web tier file systems actually have a bunch of single node file systems buried in them on each storage node, somewhere at the bottom building block level of their architecture.

But it’s time that these two file system approaches evolve and develop some kind of relationship, because for now neither is perfectly suited to the problem at hand. There’s no reason why those building blocks couldn’t have richer functionality, such as transparent clustering and failover, that comes from commercial file systems, and still give you the massive scale and cheap $/petabyte of a global namespace and commodity building blocks.

The internet has often been the cauldron in which new technologies are forged that then eventually move into the corporate data center. We saw this in the server world, where low-cost Linux servers displaced Sun and other Unix systems early on, and eventually that movement to cheaper, standard servers pushed Big Unix out of the corporate data center too.

The cost differences between a corporation’s EMC DMX storage array and a storage pool of white boxes with disk is even greater than the cost difference between Unix machines and standard Linux boxes. People are more hesitant to change storage platforms than server platforms (for good reason), but that huge cost difference and the rate at which storage is growing is going to cause the shift to happen sooner or later.

My prediction (and hope) is that someone will figure out a way to marry the “less is more” simple file system layers with richer underlying commercial file systems. This is what’s needed.

Less is More

Posted in Analyst, Featured, File Systems, Storage by storageoptimization on April 24, 2008

Less is more … or is it? Part One

I recently returned from Storage Networking World in Orlando. As everyone knows, the conference is mainly a place for storage vendors to meet each other, tout their wares, and nose around in their competitors’ booths pretending to be potential customers. There are some good sessions, however, and one of the best was IDC analyst Noemi Greyzdorf’s presentation on the future of file systems.

Her smart and interesting talk was on the evolution of clustered, distributed, and grid file systems. As I listened, it occurred to me that I’m seeing a big split in the file system world, especially at the high end, where really large amounts of data are stored.

One of Noemi’s key points is that more and more functionality is being packed into file systems. As she puts it, file systems are the natural place for value-add knowledge about storage to be kept. That’s certainly true, and there are a number of advanced file systems that are becoming richer and richer in terms of integrated features.

At the same time, there is definitely a “less is more” crowd emerging, where many of the most basic features of file systems are being left out in some of the newest large-scale file systems around. This group includes file systems like GoogleFS, Hadoop, Mogile, Amazon’s S3 (Simple Storage Service), and the in-house developments at a couple of other very large online web 2.0 shops.

Are these two trends in file systems headed on a collision course? I don’t think so. But what I do see is that neither of these solutions is nailing the growing problem posed by the exploding amount of internet data that needs to be managed and stored. In other words, there are issues with both of these approaches. In my next entry, I will discuss what those issues are, and how we might solve them.

Why Storage Is So Inefficient: The Huge Gulf Between Applications Development and Storage Platforms

Posted in Featured by storageoptimization on April 14, 2008

Most of what is driving storage growth is files created by applications. The big applications are email, Microsoft Office and office files like PDFs, and rich media files like photos, music, and videos. There’s a lot of inefficiency in how all the data in these application files gets stored.

If you stop and think about it, there’s a simple explanation for this. Applications are written by developers. These folks are trying to solve an application problem, not a storage problem.

Application developers are working with logical files – you have a file name, you read and write bytes at the start, the middle, or the end of those files. They are not thinking, “OK, this is going to go at sector x on cylinder y on platter z on some disk drive.” In their minds, that’s the job of the storage system.
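For instance, from the application developer’s side the whole interaction looks something like this (a trivial sketch with a hypothetical file name); where those bytes physically land is invisible:

```python
# The application's view: a named, logical byte stream. Nothing here
# says anything about sectors, cylinders, platters, or RAID layout.
with open("report.pdf", "r+b") as f:
    header = f.read(1024)          # read bytes at the start
    f.seek(0, 2)                   # jump to the end of the file
    f.write(b"appended bytes")     # write there; placement is the storage system's job
```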

On the other side of this, you have the storage developers. They make systems to store files. But they don’t know what’s in the files, how the files are being used, or even what the data in them is for.

If you are a file system vendor, or a file server or NAS vendor, you create a storage solution where applications can write files, and you figure out how to lay out and organize those files on volumes and disks – with RAID levels, mirroring, snapshots, and all sorts of other cool storage features.  But you don’t know – or care – what is inside the files.  That’s up to the application.

So, as you can see, there’s a gulf here. You could see it as a problem, or you could see it the way we do: as an opportunity. There is a clear need for an improved solution.

If you were a storage expert, and you did look inside each file, and understood how that file data was being laid out, and why, and how it was being used, you could probably figure out much more efficient ways of storing it.

In the old days, this would have been too much work to contemplate – most applications were custom, and every file format was different, and no one could have kept up or figured it all out.   That’s not true anymore.  Today, the vast majority of the world’s file data are in about two dozen fundamental file formats, many of which we already listed – Word, Excel, JPEG, MPEG, PDF, PowerPoint, mp3 and maybe 20 others.   It’s no longer an insurmountable task to figure out how to optimize most of what a data center has to store. That’s true for an internet data center and it’s true for a corporate data center too.

In other words, there is a ton of efficiency to be gained from bridging the gap between how applications write data and how storage stores data. You can get a huge amount of space savings just by dealing with the top 25 file types. You don’t have to get them all. If I can drastically improve the space taken by 80% of all your files, that’s still a big win, even if I never figure out the other 20%.
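To put rough, purely hypothetical numbers on that: if the well-understood formats cover 80% of the bytes and can be reduced 5:1 while the remaining 20% is stored untouched, the overall pool still shrinks by nearly two thirds.

```python
# Back-of-the-envelope illustration with assumed numbers, not measured results.
covered_fraction   = 0.80   # bytes in the ~25 well-understood formats
covered_reduction  = 5.0    # assumed 5:1 reduction on those bytes
remaining_fraction = 0.20   # everything else, stored as-is

optimized = covered_fraction / covered_reduction + remaining_fraction
print(f"optimized size: {optimized:.2f} of original "
      f"(about {1 / optimized:.1f}:1 overall)")   # 0.36, i.e. roughly 2.8:1
```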

At Ocarina, our ECOsystem (Extract, Correlate, Optimize) starts out by identifying each file by type, understanding what’s inside it, and then taking a set of steps to store every bit of information in those files, but doing so using a much smaller amount of disk space for each file.
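As a purely illustrative sketch, and not Ocarina’s actual pipeline, a file-type-aware optimizer might dispatch each file to a handler that understands its format and can re-encode it losslessly, falling back to generic compression for anything it does not recognize. The magic numbers and handler names below are assumptions for the example.

```python
import zlib

# Hypothetical type-aware optimizer: identify the format, then hand the
# file to a format-specific handler. Real handlers would decode JPEG,
# PDF, or Office internals; these are placeholders.

MAGIC = {
    b"\xff\xd8\xff": "jpeg",
    b"%PDF":         "pdf",
    b"PK\x03\x04":   "office",   # .docx/.xlsx/.pptx are ZIP containers
}

def identify(data: bytes) -> str:
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return "unknown"

def recompress_jpeg(data: bytes) -> bytes:
    return data   # placeholder: a real optimizer would re-encode losslessly

def repack_container(data: bytes) -> bytes:
    return data   # placeholder: unpack, recompress members, rebuild

def optimize(data: bytes) -> bytes:
    kind = identify(data)
    if kind == "jpeg":
        return recompress_jpeg(data)
    if kind in ("pdf", "office"):
        return repack_container(data)
    return zlib.compress(data, 9)    # fallback: generic compression
```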

Storage optimization for online storage is going to be about being file-type aware. Without bridging that gap between traditional storage technology and how the application sees its data, online storage optimization won’t get any further than what’s been achieved in the past by generic compression or dedupe.

Introducing the Storage Optimization Blog

Posted in Featured by storageoptimization on April 7, 2008

Helping Address a New Set of Storage Challenges

In recent years, a handful of storage companies have built sizable and successful businesses by driving innovation in data reduction for areas such as back-up data and WAN data movement. Data Domain pioneered de-duplication, which has become the de facto standard for reducing the amount of disk space needed for backups.

However, the biggest and most expensive part of storage—the one that threatens to swamp Internet data centers with its upwardly spiraling load—is online data. Trying to fix this storage problem with these methods is like using a screwdriver on a nail. They’re the wrong tools for the job.

Online Data: A Different Animal

Whether you call them primary storage, online, or nearline, online data sets have different characteristics than backups. This translates to different needs for reducing their size. Backups are repetitive – you do them every day. If you’re backing up the same files over and over, it makes sense that there is going to be duplicate information that can be eliminated. Several companies have dedupe solutions that do this, and there are pros and cons to each company’s solution. But the bottom line is that dedupe just doesn’t cut it when it comes to online data.
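As a rough sketch of why this works so well for backups (fixed-size blocks and SHA-1 hashes are chosen here only for illustration), a block-level dedupe store keeps one copy of each unique block and records a recipe of references for everything else:

```python
import hashlib

class BlockDedupeStore:
    """Toy fixed-block dedupe store: each unique block is kept once."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}   # block hash -> block bytes, stored once
        self.written = 0   # total blocks written by clients

    def write(self, data: bytes) -> list:
        """Store data and return its 'recipe' (the list of block hashes)."""
        recipe = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha1(block).hexdigest()
            self.blocks.setdefault(digest, block)   # only new blocks cost space
            recipe.append(digest)
            self.written += 1
        return recipe

# Backing up the same data a second night costs almost no extra space.
store = BlockDedupeStore()
backup = b"mostly the same bytes every night " * 1000
store.write(backup)
store.write(backup)
print(len(store.blocks), "unique blocks stored for", store.written, "blocks written")
```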

In your online set of files, there just isn’t a whole lot of duplication. Where there is duplicate information, it is often encoded differently from one file to the next. If I take a photograph and store it as a JPEG, then crop it and paste it into a PowerPoint, and then scale it down and paste it into a PDF document, it’s the same photo in all three places – but there might not be a single duplicate block of data that’s common across any of those three files. That’s because they are all compressed and encoded differently by the different applications that create and save those files. Even the best dedupe solution might not get much data reduction when presented with those types of files.
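One quick way to see this for yourself (a hypothetical sketch; the file names are made up) is to chunk the three files into fixed-size blocks and compare block hashes; the intersections typically come back empty, so a block-level dedupe engine has nothing to eliminate.

```python
import hashlib

def block_hashes(path: str, block_size: int = 4096) -> set:
    """Hash a file in fixed-size blocks, the way a simple dedupe engine might."""
    hashes = set()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.add(hashlib.sha1(block).hexdigest())
    return hashes

# The "same" photo as saved by three different applications:
jpeg_blocks = block_hashes("photo.jpg")
pptx_blocks = block_hashes("slides_with_photo.pptx")
pdf_blocks  = block_hashes("report_with_photo.pdf")

# Identical information, yet usually zero shared blocks to deduplicate.
print("blocks shared by JPEG and PPTX:", len(jpeg_blocks & pptx_blocks))
print("blocks shared by JPEG and PDF: ", len(jpeg_blocks & pdf_blocks))
```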

Closing The “Dedupe Gap”

Deduplication may provide some benefit for online file sets, but it’s not really well-suited for addressing the growing needs of businesses that have enormous amounts of online file data. I’ll call that the “dedupe gap.” You want the benefit of dedupe, but to really deal with the massive growth of storage – most of which is in the form of online or nearline files – you need more. You need to be able to find and eliminate the redundant information in files at the information level, not the block or byte level.

A new generation storage optimization solution will have to address three big questions:

  1. Can you find and eliminate redundant information across a set of files, even when there are no duplicate blocks or strings of bytes at the disk level in those files?
  2. Can you provide data reduction for online files that have already been compressed using one or more generic compression tools?
  3. Can you provide data reduction where it is most needed – on storage that customers already have – without having to buy or implement a whole new tier of storage?

Companies that can solve all three will be able to close the dedupe gap and bring serious data reduction to the online and nearline space, delivering the same kind of benefits for online storage that dedupe has for backups.

Enter Ocarina Networks

Ocarina is a company that was founded to solve these problems. Ocarina’s three-step ECOsystem process – Extract, Correlate, and Optimize – is a new approach that can deliver up to 10:1 data reduction for online storage, on your existing storage, from your existing vendors, without changing your storage management or backup processes.

Now that we’ve introduced ourselves, a bit about the Storage Optimization blog. Here we’ll cover a wide cross section of topics having to do with data reduction – compression, dedupe, single-instancing, and where the research is headed – and we’ll welcome comments from everyone who has an interest in making online storage more efficient.