
I'm studying lock-free MPMC queues, and I've read some papers, mainly about common practices and designs. I found myself reading about tantrum queues: lock-free data structures that emulate an infinite array-based queue. These queues are generally composed of segments (contiguous in memory) where most operations take place. Unlike CAS-loop implementations, where all producers synchronize on the tail index and all consumers synchronize on the head index, these designs use the fetch-and-add (FAA) instruction to distribute producers/consumers across different cells of the array, thus reducing the CAS hotspot overhead.
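To make the FAA idea concrete, here is a minimal sketch (my own illustration, not the full LCRQ protocol) of a segment where producers claim distinct cells via `fetch_add` instead of all CASing the same tail index; the `EMPTY`/`TAKEN` sentinels and the poisoning of slow cells are simplifications:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

constexpr int SEG_SIZE = 1024;
constexpr uint64_t EMPTY = 0;        // values pushed must differ from both sentinels
constexpr uint64_t TAKEN = ~0ull;

struct Segment {
    std::atomic<uint64_t> cells[SEG_SIZE];
    std::atomic<uint64_t> head{0}, tail{0};
    Segment() { for (auto &c : cells) c.store(EMPTY); }

    bool push(uint64_t v) {
        while (true) {
            uint64_t t = tail.fetch_add(1);   // each producer claims a distinct cell
            if (t >= SEG_SIZE) return false;  // segment exhausted: caller must replace it
            uint64_t expected = EMPTY;
            if (cells[t].compare_exchange_strong(expected, v))
                return true;                  // cell was EMPTY: element placed
            // cell was poisoned (TAKEN) by a fast consumer: retry with the next index
        }
    }

    bool pop(uint64_t &out) {
        while (true) {
            if (head.load() >= tail.load()) return false;  // queue looks empty
            uint64_t h = head.fetch_add(1);
            if (h >= SEG_SIZE) return false;
            uint64_t v = cells[h].exchange(TAKEN);  // poison if producer isn't there yet
            if (v != EMPTY) { out = v; return true; }
            // producer hasn't filled this cell yet; it will see TAKEN and move on
        }
    }
};
```

Note the CAS here is only a collision check between one producer and one consumer on a single cell, not a contention point for all producers.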

The major drawback I found is that they are susceptible to livelock (this is quite tricky), so a segment needs to be closed (in situations of starvation or a full segment) and a new segment must be allocated, so that producers and consumers can no longer interleave pathologically (the livelock loop).
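The closing ("tantrum") step is usually implemented by setting a reserved bit in the tail word, so that every subsequent `fetch_add` observes it. A minimal sketch of that mechanism (the `TailWord` type and its method names are my own, for illustration):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Reserved high bit in the tail index: once set, the segment accepts no more pushes.
constexpr uint64_t CLOSED_BIT = 1ull << 63;

struct TailWord {
    std::atomic<uint64_t> tail{0};

    // Producer claims a cell index; a set CLOSED_BIT means the segment is closed.
    bool claim(uint64_t &idx) {
        uint64_t t = tail.fetch_add(1);
        if (t & CLOSED_BIT) return false;
        idx = t;
        return true;
    }

    // Any thread that detects starvation/livelock closes the segment.
    void close() { tail.fetch_or(CLOSED_BIT); }
};
```

Because `fetch_or` and `fetch_add` are both atomic RMWs on the same word, a producer can never claim an index without seeing a close that happened before its claim.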

I was mainly working with LCRQ (which has been the state of the art), but I could only find unbounded implementations.

Since I was trying to build a bounded implementation, I tried two fairly naive approaches:

We have 2 main objects:

  • Segment: where most of the queue's operations take place (using the FAA loop mentioned above)
  • LinkedAdapter: wrapper class that maintains a linked list of segments, deallocating empty ones and allocating new ones. It implements two wrapper methods (push/pop) that encapsulate push/pop on the underlying segment, additionally allocating and removing segments.

That's how LCRQ is built, and it achieves lock-free semantics (as an unbounded queue): when the current segment is closed (to further pushes), one producer has to allocate a new one and link it to the list. This producer always makes progress, since it places its element before linking the segment (an operation guaranteed to succeed because it's executed in isolation from other threads).
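The linking step described above can be sketched as follows. This is my own simplified illustration: the one-slot `Segment` stands in for a real FAA-based segment, and segment reclamation is ignored, so only the "place before link" progress argument is visible:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Toy one-slot segment: closes itself on the first failed push.
struct Segment {
    std::atomic<uint64_t> slot{0};        // 0 = empty; pushed values must be != 0
    std::atomic<bool> closed{false};
    std::atomic<Segment*> next{nullptr};

    bool push(uint64_t v) {
        if (closed.load()) return false;
        uint64_t expected = 0;
        if (slot.compare_exchange_strong(expected, v)) return true;
        closed.store(true);               // full: throw a tantrum
        return false;
    }
};

struct LinkedAdapter {
    std::atomic<Segment*> tail;
    LinkedAdapter() : tail(new Segment) {}

    void push(uint64_t v) {
        while (true) {
            Segment* t = tail.load();
            if (t->push(v)) return;
            Segment* s = new Segment;
            s->slot.store(v);             // place the element BEFORE linking:
                                          // done in isolation, so it cannot fail
            Segment* expected = nullptr;
            if (t->next.compare_exchange_strong(expected, s)) {
                tail.compare_exchange_strong(t, s);  // swing the tail pointer
                return;                   // this producer made progress
            }
            delete s;                     // lost the race: another producer linked first
            tail.compare_exchange_strong(t, t->next.load());
        }
    }
};
```

Even when the linking CAS fails, some other producer's CAS succeeded, which is exactly the lock-free (system-wide progress) guarantee.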

To make it bounded I tried two very similar approaches, reasoning that the push operation has to spuriously fail when the queue reaches max capacity. I used two additional counters, itemsPushed and itemsPopped, shared among producers and consumers respectively. Before placing an element, a producer checks whether the difference between the two counters is less than the max capacity: if the queue is full the push fails, otherwise the producer places the element and increments itemsPushed. Consumers increment itemsPopped when removing an element.
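A minimal sketch of that first attempt, as I understand the description (the `BoundedGate` name and method split are mine; note that the size check and the counter increment are two separate atomic steps, not one):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Bounding attempt #1: producers consult two shared counters before pushing.
struct BoundedGate {
    std::atomic<uint64_t> itemsPushed{0}, itemsPopped{0};
    const uint64_t capacity;
    explicit BoundedGate(uint64_t cap) : capacity(cap) {}

    // Producer side: spuriously fail if the queue looks full.
    bool tryEnter() {
        if (itemsPushed.load() - itemsPopped.load() >= capacity)
            return false;                // queue appears full
        itemsPushed.fetch_add(1);        // separate step from the check above
        return true;
    }

    // Consumer side: record one removal.
    void leave() { itemsPopped.fetch_add(1); }
};
```

Because the check and the increment are not one atomic operation, several producers can pass the check simultaneously, so the count observed by other threads can transiently exceed the bound.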

The other approach takes a similar route: it counts the allocated segments and blocks allocation of a new segment until consumers have deallocated a fixed amount.
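A sketch of that second attempt (again my own names; a CAS loop on the segment count avoids overshooting the budget, unlike the item-counter version above):

```cpp
#include <atomic>
#include <cassert>

// Bounding attempt #2: cap the number of live segments instead of items.
struct SegmentBudget {
    std::atomic<int> liveSegments{0};
    const int maxSegments;
    explicit SegmentBudget(int m) : maxSegments(m) {}

    // Called before allocating a new segment; fails when the budget is spent.
    bool tryAlloc() {
        int n = liveSegments.load();
        while (n < maxSegments) {
            // CAS keeps check-and-increment atomic; on failure n is reloaded.
            if (liveSegments.compare_exchange_weak(n, n + 1)) return true;
        }
        return false;
    }

    // Called after a consumer deallocates an empty segment.
    void release() { liveSegments.fetch_sub(1); }
};
```

With this scheme the effective capacity is only approximate: it depends on how full each segment is when it gets closed, since a tantrum can close a segment before it is full.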

While testing, I noticed that these two designs are prone to livelock, but I can't explain why.

Comment: These things are difficult to understand when you don't show any code/pseudo-code. If you can, post the algorithm you implemented. The design idea you describe does not seem inherently livelock-prone; it all depends on your actual algorithm and implementation. – Commented Feb 21 at 16:05
