
A bit on SANs and system dependencies

It’s fairly well known that I’m not a fan of SANs for mysql. I’m really not a fan of them in general, but most of that comes from not being a fan of them for mysql. Over the past few decades the server world has migrated from a few large, expensive servers to lots of small, cheap servers. For the most part this is accepted as a good thing. Large mainframe computers were slowly replaced by relatively smaller yet still expensive servers, which are now being replaced by smaller, cheaper servers. I apply the same logic and trend to storage. I think large centralized storage should be replaced by smaller local storage or storage services.

The idea of a SAN seems great on paper. You get a large pool of storage which can be sliced up into smaller pools as needed by different clients. If one client demands more storage, it’s fairly straightforward to allocate them more storage and expand the filesystem over it. If you need more storage overall, you simply add it to the pool and dole it out as needed. When it comes to mysql there are some problems with this approach. Some of the problems are technical; others are with people and process.

People tend to think of storage as a sack that can hold N apples. When the sack is full of apples, you either get another sack or a larger sack and add more apples. This is only one dimension of storage. The other dimension is more difficult to plan for and is often overlooked until it’s too late: the rate at which apples can be added to or removed from the sack, or how quickly you can sort through the sack to get at specific apples. The time it takes to access data and the number of concurrent requests that can be supported matter more to database performance than the amount of data that can be stored.
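To make the two dimensions concrete, here’s a back-of-the-envelope sketch. Every number in it (disk size, per-spindle iops, the workload) is an assumption picked for illustration, not a measurement from any particular array.

```python
# Back-of-the-envelope sizing: how many spindles you need for capacity
# alone vs. how many you need for the rate you read and write.
# Every number here is an assumption picked for illustration.

DISK_CAPACITY_GB = 600   # one 10k rpm SAS disk (assumed)
DISK_IOPS = 150          # roughly what a single 10k spindle sustains (assumed)

data_size_gb = 2000      # how many apples we need to hold
peak_iops = 4000         # how fast we need to get at them

disks_for_capacity = -(-data_size_gb // DISK_CAPACITY_GB)  # ceiling division
disks_for_iops = -(-peak_iops // DISK_IOPS)

print("disks needed for capacity: %d" % disks_for_capacity)  # 4
print("disks needed for iops:     %d" % disks_for_iops)      # 27
```

Sizing that pool by capacity alone would leave it roughly seven times short on iops, which is exactly the dimension people forget.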

Not all SAN storage is created equal. It’s possible that the large shared pool of storage is built from different underlying disks. Some of those disks can only support a few hundred iops (input/output operations per second) while others can support several thousand. By mixing disk types, the once large shared storage pool becomes a set of smaller pools classified by the rate at which they can read and write data. Many SANs also support caching reads or writes in front of those disks, so not only do you have to decide which clients will use which speed of disks, you also have to make sure the cache isn’t overloaded by any one client.
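As a toy illustration of the cache problem, here’s the arithmetic for a shared write cache sitting in front of slower backing disks; all of the figures are made up for the example.

```python
# Toy model of a shared write cache in front of slower backing disks.
# All figures are assumptions for illustration only.

cache_gb = 8                        # shared write cache on the array
backing_drain_mb_s = 200            # what the backing spindles can absorb
client_write_mb_s = [80, 120, 150]  # three clients writing at once

incoming = sum(client_write_mb_s)         # 350 MB/s into the cache
overflow = incoming - backing_drain_mb_s  # 150 MB/s the disks can't keep up with

if overflow > 0:
    seconds_to_fill = cache_gb * 1024.0 / overflow
    print("cache full in about %.0f seconds" % seconds_to_fill)  # ~55
else:
    print("the cache drains as fast as it fills")
```

Once the cache is full, every client on the array falls back to raw disk latency, no matter which client caused the saturation.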

On top of all of this there is the problem of changing workloads. In almost every SAN performance conversation I have ever had, someone suggests that I benchmark the SAN against DAS and let the benchmark drive the architecture. This would be fine if SAN performance didn’t change, but as I’ve already established, allocating SAN space is an ongoing task. The workload of clients can change and the number of clients on the SAN can also change. Adding more clients or changing the workload of existing clients can change the performance seen by other clients.

I understand that it is possible to allocate clients in a way that prevents some clients from interfering with others, but in practice I’ve never seen this work out. Inevitably some clients interfere with others. This becomes a huge problem because people tend to think of i/o latency as a constant within some small range of variation. I’ve seen i/o latency go from a few milliseconds to several seconds on the same storage. Tracking down the source of these problems can be difficult, especially if the clients are separate teams or business units. Adding new clients to a SAN or changing the workload invalidates any previous benchmarks or capacity plans done against it.
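Rather than treating latency as a constant, it’s cheap to just watch it. Here’s a minimal sketch that samples average i/o latency from /proc/diskstats on Linux (iostat -x reports the same numbers); the device name is an assumption, and a plain average hides the outliers that hurt the most.

```python
#!/usr/bin/env python
# Sample average i/o latency for one device from /proc/diskstats (Linux).
# The device name is an assumption; adjust it for your system.
import time

DEVICE = "sda"  # hypothetical device

def snapshot(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                reads, read_ms = int(fields[3]), int(fields[6])
                writes, write_ms = int(fields[7]), int(fields[10])
                return reads + writes, read_ms + write_ms
    raise ValueError("device %s not found" % device)

prev_ios, prev_ms = snapshot(DEVICE)
while True:
    time.sleep(5)
    ios, ms = snapshot(DEVICE)
    delta_ios, delta_ms = ios - prev_ios, ms - prev_ms
    if delta_ios:
        print("avg latency: %.1f ms over %d ios" % (float(delta_ms) / delta_ios, delta_ios))
    prev_ios, prev_ms = ios, ms
```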

Back to mysql. Most mysql installations commit transactions serially. As part of this process they run an fsync on the transaction log to make sure the transaction is synced to disk. This means that doubling the i/o latency will halve the number of transactions that can be committed per second, which makes mysql more sensitive to i/o latency than many other types of applications. Mysql slaves perform every i/o of a transaction sequentially, not just the commit phase, which makes them even more sensitive. While it’s technically possible to have a SAN perform correctly, it isn’t as simple as carving up the global pool of storage for clients.
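A quick way to see that ceiling is to time fsync on the storage in question. This isn’t mysql’s actual commit path (group commit, the binlog, and innodb settings all change the picture), it’s just the same fsync-bound pattern; the file path and iteration count are assumptions.

```python
#!/usr/bin/env python
# Time write + fsync on a scratch file to estimate the serial commit
# ceiling of the underlying storage. This is not mysql's exact commit
# path, just the same fsync-bound pattern; the path is an assumption.
import os, time

PATH = "/var/tmp/fsync_test"  # put this on the storage you care about
ITERATIONS = 200

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.time()
for _ in range(ITERATIONS):
    os.write(fd, b"x" * 512)  # stand-in for a transaction log record
    os.fsync(fd)              # force it to stable storage, like a commit
elapsed = time.time() - start
os.close(fd)
os.unlink(PATH)

print("avg fsync: %.2f ms => roughly %d serial commits/sec"
      % (elapsed / ITERATIONS * 1000, ITERATIONS / elapsed))
```

At an average fsync of 1 ms the ceiling is roughly 1,000 serial commits per second; at 10 ms it drops to roughly 100.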

I have to be fair and say that there is potential for performance problems with DAS as well. Performance can degrade for various reasons, such as when a disk dies or the raid controller battery dies. In these situations performance can degrade a little, or it can degrade severely, such as when the controller refuses to cache writes without a working battery. The difference between these scenarios and the allocation scenarios on a SAN is that DAS failures are states that can be monitored. If a disk dies, your monitoring sends an alert and you replace it. If the battery dies, you simply replace it. This means your monitoring has to be in place and you need an HA pair of machines, but I don’t feel like that is a tall order.
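As a sketch of what that monitoring can look like, here’s a minimal disk health poll built on smartmontools; the device list is hypothetical, and a raid controller battery check would use whatever controller-specific tool you have (MegaCli, hpacucli, and so on).

```python
#!/usr/bin/env python
# Minimal health poll for local disks using smartmontools' smartctl.
# The device list is an assumption; raid battery checks are controller
# specific (MegaCli, hpacucli, etc.) and are left out of this sketch.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]  # hypothetical devices

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-H", dev],
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         universal_newlines=True).stdout
    healthy = "PASSED" in out or "OK" in out
    print("%s: %s" % (dev, "ok" if healthy else "ALERT"))
    # feed the result into whatever alerting you already run
```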

The upside of using DAS is that each machine gets its own set of disks. If it overloads those disks and causes problems, those problems won’t be reflected in the i/o latency of other machines. What it does mean is that it’s more difficult to add space to each machine, so you need to plan capacity accordingly. It can also mean that there is a larger pool of unused space overall because each machine needs a certain percentage of headroom. This depends on the workload of course but it must be considered. It also means that it is more difficult to scale vertically, although SSDs and fusionio have raised the vertical scalability ceiling significantly in the past few years.


[ Update 2011-06-19 Daniël van Eeden has posted a good followup on his blog ]