IBM Research

Storage Systems - Projects - ARC: Adaptive Replacement Cache

IBM Almaden Research Center


Adaptive Replacement Cache (ARC) is a scan-resistant, self-tuning cache replacement algorithm with low space and time overhead that outperforms LRU.

  • Papers: ARC was published in FAST 2003, and later in IEEE Computer and in USENIX ;login:; CAR was published in FAST 2004.

  • Awards: IBM Pat Goldberg Best Paper Award in Computer Science, Electrical Engineering, and Mathematics; IBM Outstanding Innovation Award.

Detailed Description

Caching dates back (at least) to von Neumann's classic 1946 paper that laid the foundation for modern practical computing. Caching is typically used when a fast but expensive memory is placed in front of a cheap but slow memory; the fast memory is known as the cache. Today, caching is used widely in storage systems, databases, web servers, middleware, processors, file systems, disk drives, RAID controllers, operating systems, and numerous other applications.

A key goal of a cache replacement policy is to maximize the "cache hit ratio" -- the fraction of requested pages that are already in the cache and do not have to be loaded from the slow memory. For nearly four decades, the Least Recently Used (LRU) algorithm and its variants have been the most widely used class of cache replacement policies. LRU maintains cache pages based on their most recent access times and replaces the least recently used pages. While LRU is simple to implement and captures the "clustered locality of reference" that is common in many workloads, it does not capture "frequency" features of workloads and is not well suited for sequential or one-time-use-only workloads such as database scans.
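As a point of reference for the discussion below, the LRU policy itself can be sketched in a few lines. This is an illustrative minimal implementation (the class and method names are not from the papers), not the production code the papers describe:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used page on overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # ordered from least to most recently used

    def request(self, page):
        """Return True on a cache hit, False on a miss."""
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as most recently used
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used page
        self.pages[page] = None  # page data omitted in this sketch
        return False
```

A sequential scan of pages never seen before produces only misses and flushes every resident page, which is exactly the weakness of LRU noted above.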

A long-standing question in computer science has been: "Is it possible to improve on LRU across a wide range of workloads and cache sizes without incurring excess overhead or requiring workload specific pre-tuning?" Hundreds of attempts have been made. However, until now, none has been universally successful. One possible approach is to replace the "least frequently used" (LFU) cache pages. While LFU is beneficial for stable access patterns and is advantageous for sequential workloads, it does not capture the "clustered locality of reference" that LRU does and requires logarithmic computational complexity, thus making it too complex for universal use.

We have invented a novel algorithm, which we call Adaptive Replacement Cache (ARC), that combines the virtues of LRU and LFU, while avoiding vices of both. The basic idea behind ARC is to adaptively, dynamically and relentlessly balance between "recency" and "frequency" to achieve a high hit ratio. At a very high level, the key idea is to maintain a history of pages recently evicted from the cache and to utilize this history to effect a continual adaptation between "recency" and "frequency." For a detailed description of the algorithm, see the associated research papers.

  • ARC has low space complexity. A realistic implementation had a total space overhead of less than 0.75%.
  • ARC has low time complexity; virtually identical to LRU.
  • ARC is self-tuning and adapts to different workloads and cache sizes. In particular, it gives very little cache space to sequential workloads, thus avoiding a key limitation of LRU.
  • ARC outperforms LRU for a wide range of workloads. For example, for a huge, real-life workload generated by a large commercial search engine with a 4GB cache, ARC's hit ratio was dramatically better than that of LRU (40.44 percent vs. 27.62 percent).
  • ARC is extremely simple to implement. Only a few lines of code are needed to convert an existing LRU implementation into an ARC implementation. For a "how-to" on this process, see the paper, "One Up on LRU."
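The eviction-history idea above can be condensed into a short sketch. The code below follows the T1/T2/B1/B2 notation of the FAST 2003 paper (T1/T2 hold cached pages managed for recency and frequency; B1/B2 are ghost lists that remember recently evicted page identifiers but hold no data), with the adaptation parameter p as the target size of T1. It is a simplified illustration, not the authors' reference implementation: page data is omitted, and integer arithmetic approximates the paper's adaptation step:

```python
from collections import OrderedDict

class ARC:
    """Sketch of Adaptive Replacement Cache (Megiddo & Modha, FAST 2003)."""

    def __init__(self, c):
        self.c = c  # cache capacity in pages
        self.p = 0  # adaptive target size for T1 (recency side)
        self.T1, self.T2 = OrderedDict(), OrderedDict()  # cached pages
        self.B1, self.B2 = OrderedDict(), OrderedDict()  # ghost lists

    def _replace(self, x):
        """Evict from T1 or T2 into the matching ghost list."""
        if self.T1 and (len(self.T1) > self.p or
                        (x in self.B2 and len(self.T1) == self.p)):
            lru, _ = self.T1.popitem(last=False)
            self.B1[lru] = None
        else:
            lru, _ = self.T2.popitem(last=False)
            self.B2[lru] = None

    def request(self, x):
        """Return True on a hit, False on a miss."""
        if x in self.T1 or x in self.T2:  # Case I: cache hit
            self.T1.pop(x, None)
            self.T2.pop(x, None)
            self.T2[x] = None  # promote to MRU position of T2
            return True
        if x in self.B1:  # Case II: ghost hit; adapt toward recency
            self.p = min(self.c, self.p + max(len(self.B2) // len(self.B1), 1))
            self._replace(x)
            del self.B1[x]
            self.T2[x] = None
            return False
        if x in self.B2:  # Case III: ghost hit; adapt toward frequency
            self.p = max(0, self.p - max(len(self.B1) // len(self.B2), 1))
            self._replace(x)
            del self.B2[x]
            self.T2[x] = None
            return False
        # Case IV: page seen for the first time
        if len(self.T1) + len(self.B1) == self.c:
            if len(self.T1) < self.c:
                self.B1.popitem(last=False)
                self._replace(x)
            else:
                self.T1.popitem(last=False)
        elif len(self.T1) + len(self.T2) + len(self.B1) + len(self.B2) >= self.c:
            if len(self.T1) + len(self.T2) + len(self.B1) + len(self.B2) == 2 * self.c:
                self.B2.popitem(last=False)
            self._replace(x)
        self.T1[x] = None  # insert at MRU position of T1
        return False
```

Note how little machinery sits on top of LRU: the ghost lists are plain LRU lists of page identifiers, and all adaptation is the single integer p, which is why converting an existing LRU implementation takes only a few lines.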

"...constructing a hierarchy of memories, each of which has greater capacity ... but which is less quickly accessible."
-- von Neumann et al., 1946

Technical Papers
Nimrod Megiddo and Dharmendra S. Modha
Outperforming LRU with an Adaptive Replacement Cache Algorithm (PDF, 124 KB)
IEEE Computer, pp. 58-65, April 2004.

Nimrod Megiddo and Dharmendra S. Modha
ARC: A Self-Tuning, Low Overhead Replacement Cache (PDF, 272 KB)
USENIX Conference on File and Storage Technologies (FAST), March 31, 2003, San Francisco, CA.

Nimrod Megiddo and Dharmendra S. Modha
One Up on LRU (PDF, 170 KB)
;login: - The Magazine of the USENIX Association, vol. 28, no. 4, pp. 7-11, August 2003.

Sorav Bansal and Dharmendra S. Modha
CAR: Clock with Adaptive Replacement (PDF, 217 KB)
USENIX Conference on File and Storage Technologies (FAST), March 31-April 2, 2004, San Francisco, CA.
