
I have a list of "Module" classes, List<Module> modules. These modules each contain their own public object to use as a lock when accessing data. Let's say I have a couple of threads which perform processing on these modules at random times. Currently I have each thread perform the processing on the modules in order, like so:

foreach (Module module in modules)
{
    lock (module.Locker)
    {
        //Do stuff
    }
}

This has worked fine so far, but I have the feeling there's a lot of unnecessary waiting. For instance, if two threads start one right after another, but the first is performing heavy processing and the second one isn't, the second one will have to wait on every module while the first one is doing its processing.

This is the question then: Is there a "proper" or "most efficient" way to lock on elements in a list? I was going to do this:

foreach (Module module in modules.Randomize())
{
    lock (module.Locker)
    {
        //Do stuff
    }
}

Where "Randomize()" is just an extension method that returns the elements of the list in a random order. However, I was wondering if there's an even better way than random?
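The Randomize() extension itself isn't shown in the question; a minimal sketch of one, assuming a Fisher–Yates shuffle over a copy so the source list is never mutated, might look like this (the locked shared Random is needed because Random isn't thread-safe):

```csharp
using System;
using System.Collections.Generic;

public static class ListExtensions
{
    private static readonly Random Rng = new Random();

    // Yields the elements in a random order without mutating the source
    // (streaming Fisher-Yates shuffle on a private copy).
    public static IEnumerable<T> Randomize<T>(this IList<T> source)
    {
        var copy = new List<T>(source);
        for (int i = copy.Count - 1; i > 0; i--)
        {
            int j;
            lock (Rng) { j = Rng.Next(i + 1); } // Random isn't thread-safe
            T tmp = copy[i];
            copy[i] = copy[j];
            copy[j] = tmp;
            yield return copy[i];
        }
        if (copy.Count > 0)
            yield return copy[0];
    }
}
```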

  • Take a look at ThreadPool and BlockingQueue/Non-blocking queue. I feel the lock on the object isn't necessary in your program, as you seem to be using it only to tell other threads the object is occupied. Commented Jan 6, 2015 at 9:00
  • I'm sorry, I'm confused; how will a Blocking/Non-Blocking Queue help? What should go in the queue? Commented Jan 6, 2015 at 9:06
  • So you want thread to skip the element if some other thread is already working on it? Otherwise, I don't get your question. Commented Jan 6, 2015 at 9:08
  • Essentially you just want each element to be processed once but only once, am I right? In that case simply insert them into a queue and call the ThreadPool to do so. Commented Jan 6, 2015 at 9:09
  • 1
    Can you split the operations on the modules into read and write operations? If so, you could increase efficiency by using ReaderWriterLockSlim Commented Jan 6, 2015 at 9:15
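For reference, the ReaderWriterLockSlim suggestion from the last comment could be sketched as below. This assumes Module can be changed to expose a ReaderWriterLockSlim (the Rw property and Data field here are hypothetical names, not from the question):

```csharp
using System.Threading;

public class Module
{
    // Hypothetical: a reader/writer lock in place of the plain object Locker.
    public ReaderWriterLockSlim Rw { get; } = new ReaderWriterLockSlim();
    public int Data { get; set; }
}

public static class ModuleOps
{
    // Many threads may hold the read lock concurrently...
    public static int ReadData(Module module)
    {
        module.Rw.EnterReadLock();
        try { return module.Data; }
        finally { module.Rw.ExitReadLock(); }
    }

    // ...while a writer gets exclusive access.
    public static void WriteData(Module module, int value)
    {
        module.Rw.EnterWriteLock();
        try { module.Data = value; }
        finally { module.Rw.ExitWriteLock(); }
    }
}
```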

3 Answers


Assuming the work inside the lock is substantial and the locks are heavily contended, the approach below is worthwhile even though it introduces the additional overhead of copying the list and removing items from it.

public void ProcessModules(List<Module> modules)
{
    List<Module> myModules = new List<Module>(modules); // Take a copy of the list
    int index = myModules.Count - 1;
    while (myModules.Count > 0)
    {
        if (index < 0)
        {
            // Reached the front of the list; wrap around and retry the skipped modules.
            index = myModules.Count - 1;
        }

        Module module = myModules[index];
        if (!Monitor.TryEnter(module.Locker))
        {
            // Another thread holds this lock; skip the module for now.
            index--;
            continue;
        }

        try
        {
            // Do processing on the module
        }
        finally
        {
            Monitor.Exit(module.Locker);
            myModules.RemoveAt(index);
            index--;
        }
    }
}

This method takes a copy of the modules passed in and tries to acquire each module's lock in turn. If the lock can't be acquired (because another thread owns it), the module is skipped. After finishing a pass over the list, the method comes back around to see whether the other thread has released the lock; if not, the module is skipped again. This cycle continues until every module in the list has been processed.

This way, we never wait on a contended lock; we just keep processing the modules that aren't locked by another thread.


5 Comments

This is... actually really smart! I didn't know "TryEnter" existed; most things I look up say to avoid Monitor for some reason, so I never really went into it. I feel like they should have taught some of these access patterns in my parallel computing course XD. Of course there's still the very real possibility that a thread could get stuck indefinitely while looking for a free module. It could always JUST miss the opportunity. I could add safety code I guess where if a certain amount of iterations is met, I just switch to "in order".
@limitlessinfinity If a thread is locked up indefinitely, that means you have some other problem; i.e. some other thread isn't releasing the lock. You can optimize this algorithm further: if there are no more free modules but the list still isn't empty, switch from Monitor.TryEnter to Monitor.Enter. This will avoid burning the CPU.
Let's say there's a whole bunch of food stands at a park that's open forever. Let's say a hungry guy doesn't want to wait in line for food, so he continues to walk in circles around the park until one of the lines becomes empty. What if there's so many people at the park that the lines are never empty when he happens to come across them? This is the problem I mean: it could become a case that one thread never finds an open module, because another thread always happens to have a lock at the moment that this thread checks it.
@limitlessinfinity I see what you mean. I didn't realize the problem myself, Clever man! Good analogy too.
Thank you for your help! And thanks for the compliments, but I wasn't clever enough to come up with the solution ;D Now I just need to see if all the different things I do within the "processing" section can fit nicely into a generic delegate so I don't have to write this code for each and every type of process.
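The generic-delegate variant wished for in the last comment could be sketched like this; ProcessAll and its parameters are illustrative names, not part of the answer:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class ModuleProcessor
{
    // Same skip-and-retry loop as the answer, generalized: lockerOf selects
    // the lock object for an item, process does the actual work.
    public static void ProcessAll<T>(IEnumerable<T> items,
                                     Func<T, object> lockerOf,
                                     Action<T> process)
    {
        var pending = new List<T>(items);
        int index = pending.Count - 1;
        while (pending.Count > 0)
        {
            if (index < 0)
                index = pending.Count - 1; // wrap around and retry skipped items

            T item = pending[index];
            if (!Monitor.TryEnter(lockerOf(item)))
            {
                index--; // contended: skip and come back later
                continue;
            }

            try
            {
                process(item);
            }
            finally
            {
                Monitor.Exit(lockerOf(item));
                pending.RemoveAt(index);
                index--;
            }
        }
    }
}
```

A call site would then look something like `ModuleProcessor.ProcessAll(modules, m => m.Locker, m => { /* do stuff */ });`, so the loop is written once per process, not once per processing type.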

lock compiles down to Monitor.Enter/Monitor.Exit; you can use Monitor.TryEnter to check whether the lock is already held, skip that element, and try to take another.

There will be contention overhead if multiple threads process the same ordered list of items, so the Randomize idea seems a good one (unless reordering is expensive compared to the processing itself, the list can change during processing, etc.).

A totally different possibility is to prepare queues (from the list), one per thread, in such a way that there is no cross-waiting (or waiting is minimized). Combined with Monitor.TryEnter this should be an ultimate solution. Unfortunately, I have no clue how to prepare such queues, nor how to skip processing a queue item, so I leave that to you =P.
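One simple way to "prepare queues" so threads don't start on the same modules is to give each worker a rotated view of the list; RotatedView, workerIndex, and workerCount below are illustrative names under that assumption:

```csharp
using System.Collections.Generic;

public static class WorkPartitioner
{
    // Worker k starts at a different offset into the list, so workers only
    // collide once one of them laps another; combined with Monitor.TryEnter
    // that initial contention mostly disappears.
    public static IEnumerable<Module> RotatedView(IReadOnlyList<Module> modules,
                                                  int workerIndex,
                                                  int workerCount)
    {
        int n = modules.Count;
        int start = (workerIndex * n) / workerCount;
        for (int i = 0; i < n; i++)
            yield return modules[(start + i) % n];
    }
}
```

Each of the k threads would then iterate `WorkPartitioner.RotatedView(modules, k, threadCount)` instead of `modules` directly.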


Here is a snippet of what I mean:

foreach (var item in list)
{
    if (!item.Processed && Monitor.TryEnter(item.Locker))
    {
        try
        {
            ... // do job
            item.Processed = true;
        }
        finally
        {
            Monitor.Exit(item.Locker);
        }
    }
}

1 Comment

The list won't be changed at any point, so that's not a problem. Hmm, I didn't think about the Monitor solution. I always try to avoid "TryEnter" due to the problems it can cause, but in this case, since I'm just going to go to a different element, it may work. EDIT: Oops, my previous response was dumb because I was thinking of something else. TryEnter actually acquires the lock, so this could work wonderfully.

Not sure I entirely follow; however, from what I can tell your goal is to periodically do stuff to each module, and you want to use multiple threads because the stuff is time-consuming. If that's the case, I would have a single thread periodically check all modules and use the TPL to spread the workload, like so:

Parallel.ForEach(modules, module =>
{
    lock (module.Locker)
    {
        // Do stuff
    }
});

As an aside, the guidance on locks is that the object that you lock on should be private, so I'd probably change to doing something like this:

Parallel.ForEach(modules, module => module.DoStuff());

// In the module implementation
private readonly object _lock = new object();

public void DoStuff()
{
    lock (this._lock)
    {
        // Do stuff here
    }
}

I.e. each module should be thread-safe and responsible for its own locking.

5 Comments

This would normally be what I'd do if I just needed to perform just one or two things on multiple modules. However, I have to perform quite a few things at the same time on all the modules, so this would effectively come out to processing the modules in a "random" order since I'm at the mercy of the C# Parallel.ForEach scheduler.
@limitlessinfinity I'm lost - if the order in which you process the modules matters then whats with the Randomize() call in your question?
The order doesn't matter, I'm just saying it'd be about the same as a non-parallel foreach loop but in a random order due to the amount of parallel.foreach loops I'd be running (each of which performs a lock on each iteration).
@limitlessinfinity The idea is to have a single thread doing this - the Parallel.ForEach is to eliminate the need for more than one thread, which means no blocking.
It would be wonderful if I could force each module to worry about its own locking, but unfortunately these modules are written by other people and I don't trust them to perform proper locking, which is why I perform the locking outside the module. Had I had the foresight to see this problem, I would have made public wrapper methods around the functions to be overridden which would lock an internal lock, but unfortunately I didn't do that and it's too late to change the base class. Your way is probably the best in terms of clean implementation.
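The wrapper idea described in the last comment (a base class that owns the lock and exposes only a locked public entry point, which the author says is now too late to retrofit) could have looked like this; ModuleBase and DoStuffCore are hypothetical names:

```csharp
public abstract class ModuleBase
{
    private readonly object _lock = new object();

    // The public entry point owns the locking...
    public void DoStuff()
    {
        lock (_lock)
        {
            DoStuffCore();
        }
    }

    // ...while third-party module authors override only the unlocked core,
    // so they can't forget (or botch) the synchronization.
    protected abstract void DoStuffCore();
}
```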
