Speed up your web server with memcached distributed caching

Profiling Time

For existing web applications, the question is always where you can best deploy and use memcached. Profiling gives you the answer: Database queries that really stress the server are best routed via the cache. Listings 3 and 4 show you what this looks like in real life: Before querying the database, the code checks to see whether the requested information is available in memcached. A query only occurs if the information isn't found.

Listing 3: Database Query Without memcached …

Listing 4: … and After Introducing memcached

To avoid the need for another query, the results are stored in the cache. To keep the cache up to date, the information from each write operation is also cached. In Listing 4, the keys are built by combining the word user with the ID for the user account – this is a common strategy for generating unique keys.
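The pattern described above – check the cache, fall back to the database on a miss, and refresh the cache on every write – can be sketched in a few lines of Python. This is a minimal illustration, not the article's actual listings: a plain dict stands in for the memcached client, and the `database`, `get_user`, and `update_user` names are hypothetical.

```python
# Cache-aside sketch: an in-memory dict stands in for the memcached client.
# The "database" and the helper names are hypothetical, for illustration only.
cache = {}
database = {42: {"name": "alice", "mail": "alice@example.com"}}

def get_user(user_id):
    key = "user" + str(user_id)          # key strategy from Listing 4
    user = cache.get(key)                # 1. check the cache first
    if user is None:
        user = database[user_id]         # 2. cache miss: query the database
        cache[key] = user                # 3. store the result for next time
    return user

def update_user(user_id, data):
    database[user_id] = data             # write to the database ...
    cache["user" + str(user_id)] = data  # ... and refresh the cache as well
```

With a real client, the dict operations would become `get` and `set` calls against the memcached daemon, but the control flow stays the same.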

This approach makes it easy to integrate memcached with your own applications, but you need to be aware of the pitfalls, which do not become apparent until you look under the hood.


Experienced programmers will probably already have noticed that memcached uses a dictionary internally; some programming languages call this data structure an associative array. Like a real dictionary, it stores each value under a specific keyword. Memcached implements its dictionary as two successive hash tables [5]: First, the client library accepts the key and runs a hash function against it to produce a number, the hash. This number tells the library which memcached daemon it needs to talk to. After receiving the data, the daemon uses its own hash function to assign a memory slot for storing the data. Both functions are deterministic – they always return exactly the same number for a given key.

This approach guarantees extremely short search and response times: To retrieve information from the cache, memcached simply needs to evaluate the two hash functions. Data transmission across the network accounts for most of the response time.
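The two hashing stages can be sketched as follows. The server list, the CRC32 hash, and the bucket count are illustrative assumptions – real client libraries and daemons use their own hash functions and table sizes – but the principle of "hash once to pick the server, hash again to pick the slot" is the same.

```python
import zlib

# Hypothetical cluster of three daemons; real deployments configure their own.
SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
NUM_BUCKETS = 1024  # size of a daemon's internal hash table (illustrative)

def pick_server(key):
    """Stage 1 (client library): hash the key to choose a daemon."""
    return SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]

def pick_bucket(key):
    """Stage 2 (daemon): hash the key again to find the storage slot."""
    return zlib.crc32(key.encode()) % NUM_BUCKETS

# Deterministic hashing: the same key always lands on the same server.
assert pick_server("user42") == pick_server("user42")
```

No lookup table is consulted and no server is asked "do you have this key?" – two function evaluations pinpoint the data, which is why network latency dominates the response time.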

Because the client library decides which daemon stores which data, all of the machines involved need to run the same versions of the same libraries. A mix of versions can cause clients to use different hash functions and thus store the same information on different servers, making the cache inconsistent. If you use the libmemcached C and C++ library, you need to pay special attention to this because it offers a choice of several hash functions.
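The following sketch shows why mixed hash functions are dangerous. Two hypothetical clients map the same keys to a three-server cluster, one using CRC32 and one using an MD5-based hash (both choices are illustrative, not what any particular library does); for many keys the two clients disagree about which server is responsible.

```python
import hashlib
import zlib

servers = ["cache1", "cache2", "cache3"]  # hypothetical server names

def pick_crc32(key):
    """Client A: CRC32-based server selection."""
    return servers[zlib.crc32(key.encode()) % len(servers)]

def pick_md5(key):
    """Client B: MD5-based server selection."""
    digest = hashlib.md5(key.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

# Scan some keys until the two clients disagree about the target server:
mismatch = next(k for k in (f"user{i}" for i in range(100))
                if pick_crc32(k) != pick_md5(k))
```

Client A would write `mismatch` to one server, and client B would look for it on another – the value silently goes stale.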

On top of this, each client library uses its own serialization method: A Java client typically relies on Java's built-in object serialization, whereas PHP uses its serialize() function. In other words, if you store not just strings but also objects in the cache, sharing the cache across languages is impossible – even if all the clients use the same hash function. The libraries are also free to choose their own compression methods.
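One common workaround – an assumption on my part, not something the article prescribes – is to serialize objects to a language-neutral format such as JSON yourself and store only strings. The sketch below contrasts Python's native pickle bytes, which no PHP or Java client can decode, with an explicit JSON encoding that any language can read back.

```python
import json
import pickle

user = {"id": 42, "name": "alice"}

# A Python client left to its own devices might pickle the object;
# the resulting bytes are meaningless to a PHP or Java client.
pickled = pickle.dumps(user)

# Storing an explicit JSON string instead keeps the cached value
# readable from any language that has a JSON parser.
encoded = json.dumps(user)   # this string goes into the cache
decoded = json.loads(encoded)  # any client can reconstruct the object
```

The price is that you give up each library's transparent object handling and must encode and decode by hand on every access.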

Memory Loss

The cache handles parallel requests without losing speed. In a theater, several attendants could walk through the aisles at the same time, hanging up coats or handing them back, without people having to wait in line. The same principle applies to memcached: Each client ascertains which daemon it needs to talk to and, in an ideal world, each attendant would walk down a different aisle. Of course, nothing stops two attendants from following one another down the same aisle: If you retrieve data from the cache, manipulate it, and write it back, there is no guarantee that another client has not modified the data in the meantime. The gets and cas commands introduced in version 1.2.5 offer a solution: A gets command retrieves the data along with a unique identifier, which the client sends back to the server, together with the modified data, in a cas command. The daemon uses the identifier to check whether the data has changed since the last query and stores the new value only if it has not; otherwise, the write fails.
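The gets/cas handshake can be mimicked with a toy in-memory store – a sketch of the semantics, not the memcached wire protocol. Every write stamps the value with a fresh identifier, and a cas only succeeds if the identifier the client presents is still current.

```python
import itertools

class CasStore:
    """Toy in-memory store mimicking memcached's gets/cas semantics."""

    def __init__(self):
        self._data = {}                 # key -> (value, cas_id)
        self._ids = itertools.count(1)  # source of unique identifiers

    def set(self, key, value):
        self._data[key] = (value, next(self._ids))

    def gets(self, key):
        """Return the value together with its unique CAS identifier."""
        return self._data[key]

    def cas(self, key, value, cas_id):
        """Store the value only if nobody changed the key since our gets."""
        if self._data[key][1] != cas_id:
            return False                # somebody got there first
        self._data[key] = (value, next(self._ids))
        return True

store = CasStore()
store.set("counter", 1)
value, token = store.gets("counter")   # client A reads value and identifier
store.set("counter", 99)               # client B sneaks in a write
assert store.cas("counter", value + 1, token) is False  # A's cas is rejected
```

Client A's stale write is refused, so it must issue a fresh gets and retry – the usual pattern for lock-free read-modify-write cycles against the cache.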

The way memcached handles a server failure also depends on the client. By default, memcached simply acts as if the requested information is not, or is no longer, in the cache. Because of this, it is a good idea to monitor the cache servers continuously. Thanks to the modular design, a daemon is easily replaced: All you need to do is de-register the failed server's IP address and register the new one with the clients. But note that some libraries will consider the whole cache invalid in this case.
