Syed Jafer K

It's all about Trade-Offs

Paper Notes #1 – FIFO can be Better than LRU: the Power of Lazy Promotion and Quick Demotion

Today, I read this paper on cache eviction. I didn't come to it directly; it was referenced from SIEVE is Simpler than LRU. In this blog, I jot down the bullet points I understood from this paper, for future reference.

Historical Context

  • LRU’s Legacy – LRU (Least Recently Used) has been the foundation of cache eviction algorithms for decades.

Numerous innovations have focused on improving LRU’s miss ratio and throughput to make it more cache-efficient.

  • FIFO’s Position – FIFO (First In, First Out) is simpler and offers better throughput and scalability, but traditionally lags behind LRU in terms of miss ratio, impacting cache efficiency.

LRU Algorithm Mechanics

  • Core Mechanism: LRU maintains a doubly-linked list, promoting accessed objects to the head of the list (eager promotion) and evicting objects at the tail when the cache is full. It relies on recency to decide which objects are most valuable.
  • Overhead: Each eager promotion requires updating the linked-list structure, involving at least six pointer updates, which increases computational overhead.
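As a rough sketch of these mechanics (this is my own illustration, not the paper's code; Python's OrderedDict stands in for the doubly-linked list):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: hits promote a key to the MRU end;
    the LRU end is evicted when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        # Eager promotion: reorder the list on every single hit.
        self.data.move_to_end(key)
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            # Evict from the LRU end (the "tail" of the list).
            self.data.popitem(last=False)
        self.data[key] = value

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # "a" becomes most recently used
cache.put("c", 3)     # evicts "b", the least recently used
```

The promotion on every hit is exactly where the pointer-update overhead (and the lock contention under concurrency) comes from.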

FIFO Algorithm Mechanics

  • Core Mechanism: FIFO evicts the oldest object in the queue, irrespective of its recency or frequency of access. It operates on a simpler structure without maintaining access order.
  • Overhead: FIFO avoids the computational cost associated with maintaining metadata or complex data structures, resulting in lower overhead.
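For contrast, a minimal FIFO sketch (again my own illustration): hits touch no shared structure at all, which is what makes FIFO fast and scalable.

```python
from collections import deque

class FIFOCache:
    """Minimal FIFO sketch: the oldest inserted key is evicted first,
    regardless of how often or how recently it was accessed."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # insertion order only; hits never reorder it
        self.data = {}

    def get(self, key):
        return self.data.get(key)   # no promotion, no metadata update

    def put(self, key, value):
        if key not in self.data:
            if len(self.data) >= self.capacity:
                oldest = self.queue.popleft()   # evict the oldest insertion
                del self.data[oldest]
            self.queue.append(key)
        self.data[key] = value

cache = FIFOCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # hit, but does not protect "a"
cache.put("c", 3)     # evicts "a", the oldest insertion, despite the hit
```

The downside is visible in the usage above: a popular object is evicted as readily as a one-hit wonder, which is why plain FIFO loses to LRU on miss ratio.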

Performance Comparison

  • LRU offers better cache efficiency (lower miss ratio) by leveraging recency but suffers from scalability issues due to computational overhead.
  • FIFO delivers significantly better throughput and scalability but traditionally lags behind LRU in miss ratio, impacting cache efficiency.

Enhancements to FIFO

  • Lazy Promotion (LP): Performs promotion only at eviction time, avoiding frequent updates and reducing computational complexity.
  • Quick Demotion (QD): Quickly removes most new objects shortly after insertion using a small probationary FIFO queue, reducing the opportunity cost of letting unpopular objects traverse the entire queue.
  • QD-LP-FIFO: Uses two FIFO queues to cache data and a ghost FIFO queue to track evicted objects, enhancing performance and balancing promotion and demotion strategies.
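A hypothetical sketch of how these three ideas compose (the class name, queue sizes, and details are my own assumptions for illustration, not the paper's exact design):

```python
from collections import deque

class QDLPFifo:
    """Sketch of the QD-LP-FIFO idea: a small probationary FIFO quickly
    demotes one-hit objects, a main FIFO with reinsertion lazily promotes
    objects that were hit, and a ghost queue of evicted keys lets
    returning objects skip probation."""

    def __init__(self, capacity, small_ratio=0.1):
        self.small_cap = max(1, int(capacity * small_ratio))
        self.main_cap = capacity - self.small_cap
        self.small, self.main = deque(), deque()
        self.ghost = deque(maxlen=capacity)   # evicted keys only, no values
        self.data, self.hit = {}, {}          # value + one "accessed" bit

    def get(self, key):
        if key in self.data:
            self.hit[key] = True              # lazy: just flip a bit, no list surgery
            return self.data[key]
        return None

    def _evict_small(self):
        key = self.small.popleft()
        if self.hit[key]:                     # lazy promotion, done at eviction time
            self._insert_main(key)
        else:                                 # quick demotion: gone after one pass
            self.ghost.append(key)
            del self.data[key], self.hit[key]

    def _insert_main(self, key):
        while len(self.main) >= self.main_cap:
            old = self.main.popleft()
            if self.hit[old]:                 # reinsertion = lazy promotion in main
                self.hit[old] = False
                self.main.append(old)
            else:
                del self.data[old], self.hit[old]
        self.main.append(key)
        self.hit[key] = False

    def put(self, key, value):
        if key in self.data:
            self.data[key] = value
            self.hit[key] = True
            return
        self.data[key], self.hit[key] = value, False
        if key in self.ghost:                 # seen recently: skip probation
            self._insert_main(key)
        else:
            while len(self.small) >= self.small_cap:
                self._evict_small()
            self.small.append(key)
```

Note that promotion work happens only at eviction time (the bit check in `_evict_small` and `_insert_main`), while most unpopular objects never make it past the small queue.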

Enhanced FIFO Algorithms

  • FIFO-Reinsertion & CLOCK: FIFO-Reinsertion and CLOCK are approximations of LRU, combining FIFO’s simplicity with aspects of recency, and can achieve a lower miss ratio than LRU across diverse workloads while avoiding the computational cost of maintaining a doubly-linked list.
  • QD-Enhanced Algorithms: State-of-the-art algorithms such as ARC and LIRS, when enhanced with Quick Demotion, outperform traditional LRU in terms of scalability and hit rates.
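To make the CLOCK idea concrete, here is a minimal sketch (my own illustration): each slot carries a one-bit reference flag, a hit only sets the bit, and eviction sweeps a clock hand that gives referenced slots a second chance.

```python
class ClockCache:
    """Minimal CLOCK sketch: a circular buffer of slots with one-bit
    reference flags. Promotion is a bit flip; eviction sweeps the hand,
    clearing bits until it finds an unreferenced slot."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []            # each slot is [key, value, ref_bit]
        self.index = {}            # key -> slot position
        self.hand = 0

    def get(self, key):
        pos = self.index.get(key)
        if pos is None:
            return None
        self.slots[pos][2] = True  # promotion is just setting a bit
        return self.slots[pos][1]

    def put(self, key, value):
        pos = self.index.get(key)
        if pos is not None:
            self.slots[pos][1] = value
            self.slots[pos][2] = True
            return
        if len(self.slots) < self.capacity:
            self.index[key] = len(self.slots)
            self.slots.append([key, value, False])
            return
        # Sweep: clear bits of referenced slots, evict the first unreferenced one.
        while self.slots[self.hand][2]:
            self.slots[self.hand][2] = False
            self.hand = (self.hand + 1) % self.capacity
        del self.index[self.slots[self.hand][0]]
        self.slots[self.hand] = [key, value, False]
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity

cache = ClockCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")        # sets "a"'s reference bit
cache.put("c", 3)     # hand skips "a" (clearing its bit) and evicts "b"
```

This is why CLOCK counts as lazy promotion: a hit costs one bit write, and the list surgery LRU does eagerly is deferred to eviction time.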

Real-World Use Cases

  • Performance-Centric Systems: FIFO-based algorithms are preferred in performance-centric systems like networking buffers or distributed systems due to their scalability and reduced overhead.
  • Workload Diversity: Lazy Promotion and Quick Demotion make FIFO-based algorithms better suited for diverse access patterns, such as sequential scans or bursty traffic.

Reddit Discussion: https://www.reddit.com/r/programming/comments/1cj2faa/fifo_is_better_than_lru_the_power_of_lazy/

Author’s Blog: https://blog.jasony.me/system/cache/2023/06/24/fifo-lru

ACM: https://dl.acm.org/doi/10.1145/3593856.3595887

Paper Download: https://sigops.org/s/conferences/hotos/2023/papers/yang.pdf