
I've got a question about synchronization of objects inside a Map (the same objects whose values I later change). I want to atomically read, check, and possibly update a value from a map without locking the entire map. Is this a valid way to synchronize on the objects?

    private final Map<String, AtomicInteger> valueMap = new HashMap<>();

    public Response addValue(@NotNull String key, @NotNull Integer value) {
        AtomicInteger currentValue = valueMap.get(key);
        if (currentValue == null) {
            synchronized (valueMap) {
            // Double-check that the value wasn't inserted while we were acquiring the lock
                currentValue = valueMap.get(key);
                if (currentValue == null) {
                    currentValue = new AtomicInteger(0);
                    valueMap.put(key, currentValue);
                }
            }
        }
        synchronized (valueMap.get(key)) {
            // Check that value hasn't been changed when changing synchronized blocks
            currentValue = valueMap.get(key);
            if (currentValue.get() + value > MAX_LIMIT) {
                return OVERFLOW;
            }
            currentValue.addAndGet(value);
            return OK;
        }
    }
    You can also use a ConcurrentHashMap or Collections.synchronizedMap(map) (see this question) to synchronize your map. Commented Mar 9, 2015 at 8:49
  • How do you ensure that the value hasn't changed between the get + check and the update/return in a ConcurrentHashMap, for example? It should work for the update, since you will have the previous value stored and checked when you update, but when returning OVERFLOW there is a possibility that the value has changed before returning, unless you lock the access between check and return. Commented Mar 9, 2015 at 8:52
  • In your code, it is not clear to me that currentValue won't hold a stale value if the valueMap is accessed without synchronization (as you have done). Commented Mar 9, 2015 at 15:29

1 Answer


I fail to see much of a difference between your approach and that of a standard ConcurrentHashMap, aside from the fact that ConcurrentHashMap has been heavily tested and can be configured for minimal overhead with the exact number of threads you expect to run the code with.

In a ConcurrentHashMap, you would use the replace(K key, V old, V new) method to atomically update key to new only when the old value has not changed.
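As a quick illustration of those compare-and-swap semantics (a minimal sketch, not from the original answer): `replace(key, old, new)` only writes when the current value still equals `old`, and returns whether the write happened.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ReplaceDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("x", 1);

        // Succeeds: the current value matches the expected old value (1).
        boolean first = map.replace("x", 1, 2);
        // Fails: the value is now 2, not 1, so nothing is written.
        boolean second = map.replace("x", 1, 3);

        System.out.println(first + " " + second + " " + map.get("x"));
        // prints: true false 2
    }
}
```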

The space savings from removing all those AtomicIntegers, and the time savings from lower synchronization overhead, will probably compensate for having to wrap the replace(k, old, new) calls in while-loops:

ConcurrentHashMap<String, Integer> valueMap = 
      new ConcurrentHashMap<>(16, .75f, expectedConcurrentThreadCount);

public Response addToKey(@NotNull String key, @NotNull Integer value) {
     if (value > MAX_LIMIT) {
        // probably should set value to MAX_LIMIT-1 before failing
        return OVERFLOW; 
     }
     boolean updated = false;
     do {
        Integer old = valueMap.putIfAbsent(key, value);
        if (old == null) {
             // it was absent, and now it has been updated to value: ok
             updated = true;
        } else if (old + value > MAX_LIMIT) {
             // probably should set value to MAX_LIMIT-1 before failing
             return OVERFLOW;
        } else {
             updated = valueMap.replace(key, old, old+value);
        }
     } while (! updated);

     return OK;
}

Also, on the plus side, this code works even if the key was removed after checking it (yours throws a NullPointerException in that case, since `synchronized (valueMap.get(key))` dereferences a possibly-null value).
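On Java 8 and later, the same check-and-update can also be written without an explicit retry loop using ConcurrentHashMap.compute, which runs the remapping function atomically for the key. A hedged sketch, with stand-ins for the question's MAX_LIMIT constant and Response type:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ComputeDemo {
    static final int MAX_LIMIT = 100;        // stand-in for the question's constant
    enum Response { OK, OVERFLOW }           // stand-in for the question's Response type

    static final ConcurrentHashMap<String, Integer> valueMap = new ConcurrentHashMap<>();

    static Response addToKey(String key, int value) {
        boolean[] overflow = {false};        // effectively-final holder, writable from the lambda
        valueMap.compute(key, (k, old) -> {
            int base = (old == null) ? 0 : old;
            if (base + value > MAX_LIMIT) {
                overflow[0] = true;
                return old;                  // leave the mapping unchanged (or absent)
            }
            return base + value;             // atomic add within the compute
        });
        return overflow[0] ? Response.OVERFLOW : Response.OK;
    }

    public static void main(String[] args) {
        System.out.println(addToKey("x", 60));  // OK
        System.out.println(addToKey("x", 60));  // OVERFLOW (60 + 60 > 100)
        System.out.println(valueMap.get("x"));  // 60
    }
}
```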


4 Comments

putIfAbsent() doesn't return a boolean. Besides, I don't see why you're using get(), checking if it returns null, and then using putIfAbsent(), while you can perfectly use putIfAbsent() directly: if it returns null, it means the value wasn't in the map, otherwise it returns the value.
True on both counts, @Magnamag. Fixed.
Thanks for input. This seems very good except for getting an initial value > MAX_LIMIT.
Fixed, although I still advocate for overflow setting the value to MAX_LIMIT-1 or so; it is counter-intuitive that addToKey("x",1000) may leave a smaller value than addToKey("x", 10) due to overflow in the first case.
