
I was trying to implement an LRU cache using LinkedHashMap. The documentation of LinkedHashMap (http://docs.oracle.com/javase/7/docs/api/java/util/LinkedHashMap.html) says:

Note that insertion order is not affected if a key is re-inserted into the map.

But when I do the following puts:

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private int size;

    public static void main(String[] args) {
        LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(1, 1);
        cache.put(3, 3);

        System.out.println(cache);
    }

    private LRUCache(int size) {
        super(size, 0.75f, true);
        this.size = size;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > size;
    }

    public static <K, V> LRUCache<K, V> newInstance(int size) {
        return new LRUCache<K, V>(size);
    }

}

The output is

{1=1, 3=3}

This indicates that the re-insertion did affect the order. Does anybody have an explanation?

  • I wonder, are you doing it for a specific purpose? Because Java already provides a WeakHashMap which provides this functionality. docs.oracle.com/javase/7/docs/api/java/util/WeakHashMap.html Commented Dec 15, 2014 at 0:36
  • If the order were not affected by re-insertion, the output should be {2=2, 3=3}, since {1=1} was added first and then re-inserted. Commented Dec 15, 2014 at 0:37
  • @Jack WeakHashMap does not do what you think it does. It is not the same thing as an LRU cache. Commented Dec 15, 2014 at 0:40
  • @Jeffrey: it does exactly what I think it does. It provides a data structure that lets you store cached values without worrying about clearing them when they are no longer referenced anywhere else in the code, which is a garbage-collected way to implement an LRU cache. If you don't have a requirement to wipe old values (for refreshing purposes, and that's the specific purpose I was talking about), then it solves exactly that problem by letting Java release them only when necessary. Commented Dec 15, 2014 at 0:44
  • @Jack I'm implementing this for coding practice. I think using a WeakHashMap is a better approach if I'm using the LRU cache to hold temporary objects and letting GC handle everything. Thanks Commented Dec 15, 2014 at 0:56

6 Answers


As pointed out by Jeffrey, you are using accessOrder. When you created the LinkedHashMap, the third constructor parameter specifies the ordering mode:

"true for access-order, false for insertion-order"

For a more detailed implementation of an LRU cache, you can look at http://www.programcreek.com/2013/03/leetcode-lru-cache-java/
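
For comparison, here is a minimal sketch of the same put sequence with the third constructor argument set to false (the class name InsertionOrderDemo is just for illustration). With pure insertion order, re-putting 1 does not move it, so it is still the eldest entry when 3 is inserted, and the output should be {2=2, 3=3}:

import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // accessOrder = false keeps pure insertion order
        Map<Integer, Integer> cache = new LinkedHashMap<Integer, Integer>(2, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > 2;
            }
        };
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(1, 1); // re-insertion does NOT change the order here
        cache.put(3, 3); // the eldest entry (1=1) is evicted
        System.out.println(cache); // prints {2=2, 3=3}
    }
}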


1 Comment

The link doesn't use a solution with LinkedHashMap though.

But you aren't using insertion order, you're using access order.

order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently (access-order)

...

Invoking the put or get method results in an access to the corresponding entry

So this is the state of your cache as you modify it:

    LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
    cache.put(1, 1); // { 1=1 }
    cache.put(2, 2); // { 1=1, 2=2 }
    cache.put(1, 1); // { 2=2, 1=1 }
    cache.put(3, 3); // { 1=1, 3=3 }
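
For completeness, a get() counts as an access too. Here is a small sketch (assuming the LRUCache class from the question) showing how a lookup changes which entry is evicted:

    LRUCache<Integer, Integer> cache = LRUCache.newInstance(2);
    cache.put(1, 1); // { 1=1 }
    cache.put(2, 2); // { 1=1, 2=2 }
    cache.get(1);    // { 2=2, 1=1 }  -- the lookup makes 1 the most recently used
    cache.put(3, 3); // { 1=1, 3=3 }  -- 2 was least recently used, so it is evicted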

2 Comments

Where did the 2 go in the last line?
The LRUCache only has capacity for 2 elements. Since 2 was the least recently used element, it got pushed out of the cache when we added 3.

Here is my implementation using LinkedHashMap with access order. It moves the most recently accessed element to the end of the iteration order, which only incurs O(1) overhead because the underlying entries are organized in a doubly-linked list while also being indexed by a hash function. So get, put, and finding the most recently used entry all cost O(1).

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private int maxSize;
    public LRUCache(int capacity) {
        super(capacity, 0.75f, true);
        this.maxSize = capacity;
    }

    //return -1 if miss
    public int get(int key) {
        Integer v = super.get(key);
        return v == null ? -1 : v;
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return this.size() > maxSize; //must override it if used in a fixed cache
    }
}
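
A possible usage sketch for the class above; note that get() both reports a miss as -1 and refreshes the entry's position:

LRUCache cache = new LRUCache(2);
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);                     // refreshes 1, so 2 becomes the eldest entry
cache.put(3, 3);                  // evicts 2
System.out.println(cache.get(2)); // -1 (cache miss)
System.out.println(cache.get(1)); // 1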

1 Comment

What do you mean by a fixed cache? What other kinds of cache are there in this context?

Technically, LinkedHashMap has the following constructor, which lets us set the access-order flag to true or false. If it is false, the map keeps insertion order.

LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)

(Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.)

Following is a simple implementation of an LRU cache:

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache {

    private LinkedHashMap<Integer, Integer> linkHashMap;

    public LRUCache(int capacity) {
        linkHashMap = new LinkedHashMap<Integer, Integer>(capacity, 0.75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(int key, int value) {
        linkHashMap.put(key, value);
    }

    public int get(int key) {
        return linkHashMap.getOrDefault(key, -1);
    }

}
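
A short usage sketch for this wrapper (which composes a LinkedHashMap rather than extending it); since the inner map was created with accessOrder = true, get() also refreshes an entry's position:

LRUCache cache = new LRUCache(2);
cache.put(1, 1);
cache.put(2, 2);
cache.get(1);                     // 1 becomes the most recently used entry
cache.put(3, 3);                  // the inner map evicts 2
System.out.println(cache.get(2)); // -1
System.out.println(cache.get(1)); // 1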



I used the following code and it works. I have taken the window size to be 4, but any value can be used.

For insertion order:

1: Check if the key is present.

2: If yes, then remove it (using lhm.remove(key)).

3: Add the new key-value pair.

(A sketch of this insertion-order variant is shown right after this list.)

For access order:

There is no need to remove keys; the put and get calls handle everything automatically.
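
Here is a minimal sketch of that insertion-order variant (the class name InsertionOrderLRUDemo and the putLru helper are just for illustration). Removing and re-putting a key moves it back to the tail, so the map still evicts the least recently re-inserted entry:

import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderLRUDemo {

    public static void main(String[] args) {
        // Plain insertion-order LinkedHashMap with a window size of 4.
        LinkedHashMap<String, String> lhm = new LinkedHashMap<String, String>(4, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > 4;
            }
        };

        putLru(lhm, "test", "test");
        putLru(lhm, "test1", "test1");
        putLru(lhm, "1", "abc");
        putLru(lhm, "test2", "test2");
        putLru(lhm, "1", "abc"); // remove + re-put moves "1" to the tail

        System.out.println(lhm);
    }

    // Manually refresh the key's position: remove it if present, then re-insert it.
    private static void putLru(LinkedHashMap<String, String> lhm, String key, String value) {
        if (lhm.containsKey(key)) {
            lhm.remove(key);
        }
        lhm.put(key, value);
    }
}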

This code is for ACCESS ORDER:

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {

    public static void main(String[] args) {

        LinkedHashMap<String, String> lhm = new LinkedHashMap<String, String>(4, 0.75f, true) {

            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > 4;
            }
        };
        lhm.put("test", "test");
        lhm.put("test1", "test1");
        lhm.put("1", "abc");
        lhm.put("test2", "test2");
        lhm.put("1", "abc");
        lhm.put("test3", "test3");
        lhm.put("test4", "test4");
        lhm.put("test3", "test3");
        lhm.put("1", "abc");
        lhm.put("test1", "test1");

        System.out.println(lhm);
    }
}


I also implemented an LRU cache with a small change in the code. I have tested it, and it works exactly as an LRU cache should.

package com.first.misc;
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCachDemo {
 public static void main(String aa[]){
     LRUCache<String, String> lruCache = new LRUCache<>(3);
     lruCache.cacheable("test", "test");
     lruCache.cacheable("test1", "test1");
     lruCache.cacheable("test2", "test2");
     lruCache.cacheable("test3", "test3");
     lruCache.cacheable("test4", "test4");
     lruCache.cacheable("test", "test");


     System.out.println(lruCache.toString());
 }
}

class LRUCache<K, T>{
    private Map<K,T> cache;
    private int windowSize;

    public LRUCache( final int windowSize) {
        this.windowSize = windowSize;
        this.cache = new LinkedHashMap<K, T>(){

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, T> eldest) {
                return size() > windowSize;
            }
        };

    }


    // put data in cache
    public void cacheable(K key, T data){
        // Check whether the key already exists; if it does, remove it and re-add it
        // so that it becomes the most recently used entry. The eldest entry is
        // evicted by removeEldestEntry once the window size is exceeded.
        if(cache.containsKey(key)){
            cache.remove(key);
        }

        cache.put(key,data);

    }

    // Eviction functionality is handled by removeEldestEntry above.

    @Override
    public String toString() {
        return "LRUCache{" +
                "cache=" + cache.toString() +
                ", windowSize=" + windowSize +
                '}';
    }
}
