Suppose we have a function that takes a key, retrieves its value from a shared hashtable, performs some operations on it to obtain a new value, and then updates the hashtable with this new value. This function can be called from multiple threads, with the same or different keys, so some form of race-condition protection with mutexes is necessary. I've come up with the following implementation in Python using locks, with a dict as the hashtable. (I know that in Python dict operations are atomic; this is just to illustrate the algorithm.)
import sys
from threading import Lock

class Solution:
    def __init__(self):
        self.datamap = {}            # key -> (max_, value)
        self.lockmap = {}            # key -> per-key Lock
        self.datamap_lock = Lock()   # guards datamap itself
        self.lockmap_lock = Lock()   # guards lockmap itself

    def initializeKey(self, key):
        # Create the entry and its per-key lock exactly once
        with self.datamap_lock:
            if key not in self.datamap:
                self.datamap[key] = (-sys.maxsize, 0)
                with self.lockmap_lock:
                    self.lockmap[key] = Lock()

    def getLock(self, key):
        with self.lockmap_lock:
            return self.lockmap[key]

    def getValue(self, key):
        with self.datamap_lock:
            return self.datamap[key]

    def storeValue(self, key, max_, value):
        with self.datamap_lock:
            self.datamap[key] = (max_, value)

    def calc(self, key, param_value):
        self.initializeKey(key)
        with self.getLock(key):
            max_, value = self.getValue(key)
            # Does some operations on value to obtain a new value
            self.storeValue(key, max_, value)
Basically, I used one mutex for the data hashtable, one mutex per key, and one mutex for the hashtable that maps each key to its mutex. Is this implementation correct and thread-safe? Is there a better way to do this without using so many locks/mutexes?
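For context, here is roughly how I exercise it from multiple threads; the thread count, keys, and parameter values below are arbitrary, just to show the same-key/different-key contention I'm worried about:

    from threading import Thread

    solution = Solution()

    def worker(key, param_value):
        # Several threads may call calc with the same key concurrently
        solution.calc(key, param_value)

    threads = [Thread(target=worker, args=("key%d" % (i % 3), i)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()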