
Research on the LRU algorithm

LRU is the "least recently used" algorithm.

LRU is a page replacement algorithm used in memory management. Data blocks (memory pages) that are resident in memory but have not been used recently are the "least recently used" ones, and the operating system moves them out of memory, according to that criterion, to make room for loading other data.

What is the LRU algorithm? LRU is the abbreviation of "least recently used", i.e. the least-recently-used page replacement algorithm, which serves virtual page storage management.

In operating system memory management, how to use a small amount of memory to serve as many processes as possible has always been an important research direction, and virtual memory management is currently the most common and successful approach. With limited physical memory, part of external storage is used as an extension of memory (virtual memory), and real memory holds only the information needed by the code that is currently running. This greatly expands the effective capacity of memory and greatly improves the degree of concurrency the computer can support. Virtual page storage management divides the space required by a process into multiple pages, keeps only the currently required pages in memory, and places the remaining pages in external storage.

Note:
With virtual page storage, information is swapped between memory and external storage in units of pages. When a page that resides in external storage is needed, it must be brought into memory, and, to keep the resident set the same size, some page currently in memory has to be swapped out. The fewer pages that need to be swapped, the more efficiently the process executes. So which page should be swapped out so that as little swapping as possible is needed? We need an algorithm for that.
In fact, the ideal algorithm is easy to state: the page swapped out each time is the one that will be used latest (furthest in the future) among all pages in memory, which delays the next page swap as long as possible. This is called the ideal (optimal) page replacement algorithm. Unfortunately, it cannot be implemented, because it would require knowing future memory accesses.

In order to get as close as possible to the ideal algorithm, a variety of ingenious algorithms have been devised, and the least recently used (LRU) page replacement algorithm is one of them. LRU is based on the fact that pages used frequently by the most recent instructions are very likely to be used frequently by the following instructions, and conversely, pages that have not been used for a long time are likely to stay unused for a long time. This is the famous principle of locality; caches, which are faster than main memory, are based on the same principle. Therefore, on each swap we only need to find the least recently used page and evict it from memory. That is the whole content of the LRU algorithm.
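
As a quick sketch of this idea, the standard java.util.LinkedHashMap can already behave as a small LRU cache: constructed with accessOrder = true it keeps its entries in access order, and overriding removeEldestEntry makes it drop the least recently used entry once the capacity is exceeded. The class name and the capacity below are chosen only for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch<K,V> extends LinkedHashMap<K,V> {

    private final int capacity;

    public LruSketch(int capacity){
        //initial capacity, default load factor, access-order iteration
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K,V> eldest){
        //evict the eldest (least recently used) entry once the capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args){
        LruSketch<Integer,String> cache = new LruSketch<>(3);
        cache.put(1, "A");
        cache.put(2, "B");
        cache.put(3, "C");
        cache.get(1);        //touch 1, so 2 is now the least recently used entry
        cache.put(4, "D");   //evicts 2
        System.out.println(cache.keySet()); //[3, 1, 4]
    }
}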

FIFO, LRU and LFU algorithms

When it comes to caching, there are two points that must be considered:
(1) Consistency between the cached data and the target (source) data.
(2) The cache expiration (eviction) policy.
The expiration policy is where eviction algorithms come in. Common eviction algorithms are:
(1) FIFO: first in, first out
(2) LRU: least recently used
(3) LFU: least frequently used
Pay attention to the difference between LRU and LFU. LFU selects the item to evict according to how many times each item has been used over a period of time, i.e. it decides by difference in usage count, whereas LRU decides by difference in the time of last use; the sketch after this list makes the difference concrete.
An excellent caching framework implements all of the mechanisms above; ehcache, for example, implements all of these policies.
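
Here is a small illustrative sketch of that difference (the keys and the access trace are invented for the example): on one and the same trace, LRU picks its victim by the oldest last access, LFU picks it by the smallest access count, and the two can disagree.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LruVsLfu {

    public static void main(String[] args) {
        //access trace: time 0..5
        List<String> trace = List.of("A", "A", "A", "B", "C", "B");

        Map<String, Integer> lastUse = new HashMap<>();  //key -> time of last access
        Map<String, Integer> useCount = new HashMap<>(); //key -> number of accesses
        for (int time = 0; time < trace.size(); time++) {
            String key = trace.get(time);
            lastUse.put(key, time);
            useCount.merge(key, 1, Integer::sum);
        }

        //LRU evicts the key whose last access is oldest: here "A" (last used at t=2)
        String lruVictim = lastUse.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .get().getKey();

        //LFU evicts the key with the fewest accesses: here "C" (used once)
        String lfuVictim = useCount.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .get().getKey();

        System.out.println("LRU would evict: " + lruVictim); //A
        System.out.println("LFU would evict: " + lfuVictim); //C
    }
}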

Doubly linked list + hashtable

The classic hand-rolled implementation combines a hashtable (for O(1) lookup by key) with a doubly linked list (to track recency): the head of the list is the most recently used entry, and the tail is the least recently used one, which is evicted when the cache is full.

package com.example.test;

import java.util.Hashtable;

public class LRUCache{

    private int cacheSize;                   //maximum number of entries the cache can hold
    private Hashtable<Object,Entry> nodes;   //key -> node lookup table
    private int currentSize;                 //current number of entries
    private Entry first;                     //head of the linked list (most recently used)
    private Entry last;                      //tail of the linked list (least recently used)

    public LRUCache(int i){
        currentSize = 0;
        cacheSize = i;
        nodes = new Hashtable<Object,Entry>(i);
    }

    /**
     * Get an entry from the cache and move it to the head of the list (most recently used)
     */
    public Entry get(Object key){
        Entry node = nodes.get(key);
        if(node != null){
            moveToHead(node);
            return node;
        }else{
            return null;
        }
    }

    /**
     * Add or update an entry; evict the least recently used entry when the cache is full
     */
    public void put(Object key,Object value){
        //First check whether the hashtable already holds this key; if so, only update it
        Entry node = nodes.get(key);

        if(node == null){
            //Cache is full: evict the least recently used entry (the tail of the list)
            if(currentSize >= cacheSize){
                nodes.remove(last.key);
                removeLast();
            }else{
                currentSize++;
            }
            node = new Entry();
            node.key = key;
        }
        node.value = value;
        //Put the node at the head of the linked list to mark it as most recently used
        moveToHead(node);
        nodes.put(key, node);
    }

    public void remove(Object key){
        Entry node = nodes.get(key);
        //Unlink the node from the linked list
        if(node != null){
            if(node.prev != null){
                node.prev.next = node.next;
            }
            if(node.next != null){
                node.next.prev = node.prev;
            }
            if(last == node)
                last = node.prev;
            if(first == node)
                first = node.next;
        }
        //Remove from the hashtable
        nodes.remove(key);
    }

    /**
     * Remove the last node of the linked list, i.e. the least recently used entry
     */
    private void removeLast(){
        //If the tail of the list is not null, unlink it (this removes the least recently used entry)
        if(last != null){
            if(last.prev != null)
                last.prev.next = null;
            else
                first = null;
            last = last.prev;
        }
    }

    /**
     * Move a node to the head of the linked list to mark it as most recently used
     */
    private void moveToHead(Entry node){
        if(node == first)
            return;
        if(node.prev != null)
            node.prev.next = node.next;
        if(node.next != null)
            node.next.prev = node.prev;
        if(last == node)
            last = node.prev;
        if(first != null){
            node.next = first;
            first.prev = node;
        }
        first = node;
        node.prev = null;
        if(last == null){
            last = first;
        }
    }
    /**
     * Clear the cache
     */
    public void clear(){
        first = null;
        last = null;
        currentSize = 0;
    }


}

/**
 * Node of the doubly linked list, holding one cached key/value pair
 */
class Entry{
    Entry prev;
    Entry next;
    Object value;
    Object key;
}
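
A short usage sketch of the LRUCache class above (the demo class name and the keys/values are chosen for illustration):

package com.example.test;

public class LRUCacheDemo {
    public static void main(String[] args){
        LRUCache cache = new LRUCache(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");                             //"a" becomes the most recently used entry
        cache.put("c", 3);                          //cache is full, so the least recently used entry "b" is evicted
        System.out.println(cache.get("b"));         //null: "b" was evicted
        System.out.println(cache.get("a").value);   //1
        System.out.println(cache.get("c").value);   //3
    }
}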