
I'm a novice at multithreaded programming and I'm still confused by it.

Below is my reference counted class:

#include <windows.h>   // for InterlockedIncrement / InterlockedDecrement

class Rbuffer
{
  private:
    char *m_pnData;
    volatile LONG mRefCount;   // LONG so its address can be passed to the Interlocked* functions

  public:
    Rbuffer(int nLength) : mRefCount(0)
    {
        m_pnData = new char[nLength];
    }

    ~Rbuffer()
    {
        delete[] m_pnData;
    }

    void decRef()
    {
        if (InterlockedDecrement(&mRefCount) == 0) {
            delete (Rbuffer *)this;
        }
    }

    void incRef()
    {
        InterlockedIncrement(&mRefCount);
    }
};

Is it fully thread safe? Can you rule out this situation:

ThreadA                                      ThreadB
PointerToRBuffer->incRef();  // mRefCount 1
  switch ->
                                             PointerToRBuffer->incRef();  // mRefCount 2
  <- switch
PointerToRBuffer->decRef();
InterlockedDecrement(&mRefCount);  // mRefCount 1
  switch ->
                                             PointerToRBuffer->decRef();
                                             InterlockedDecrement(&mRefCount);  // mRefCount 0!
                                             if (0 == 0)
                                                 delete (Rbuffer *)this;
  <- switch
if (0 == 0)
    // deleting an object that doesn't exist
    delete (Rbuffer *)this;
    // CRASH

Could the reason for the crash be that only the InterlockedDecrement(&mRefCount) call itself is atomic, while the whole if (InterlockedDecrement(&mRefCount) == 0) check is not? Am I wrong with the example above?

Thanks in advance for your opinions and for any advice on making my class fully thread safe.

  • You're deleting a non-dynamic member variable (delete[] m_pnData) and making no affordances for protecting construction of a class whose instances can literally self-destruct (i.e. a static class factory method with a private constructor family). That said, I think your crash may have nothing to do with reference counting. Frankly, the fixed casts on the delete operands should be equally concerning. I assume there is some reason you're not using std::shared_ptr<Rbuffer> for this, as that would make all of this irrelevant. Commented Jan 14, 2014 at 17:00

2 Answers


Your analysis isn't right; the code you posted is using InterlockedDecrement correctly.

This is a safe use:

 if (InterlockedDecrement(&mRefCount) == 0)
     cleanup();

...but this would indeed have the problem you described:

 InterlockedDecrement(&mRefCount);

 if (mRefCount == 0)
     cleanup();

However, the use of delete this is much more likely to be the cause of the problem. It's very unlikely you're passing the 'absolutely positively 100% sure' tests described here: http://www.parashift.com/c++-faq-lite/delete-this.html

In particular, the following simple code would cause chaos.

{
  Rbuffer x(16);  // count starts at zero
  x.incRef();     // count is now one
  x.decRef();     // count drops back to zero, so the object deletes itself
}  // now x goes out of scope, so the destructor is called a second time = chaos!
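One way to reduce the risk of that kind of accidental stack construction (the comment under the question hints at the same thing with its mention of a static factory and private constructors) is to make the heap the only place an Rbuffer can live. This is only a minimal sketch under the assumption that you keep the intrusive count; the create name is made up here:

#include <windows.h>

class Rbuffer
{
  private:
    char *m_pnData;
    volatile LONG mRefCount;

    // Private constructor and destructor: instances can only be created by
    // the factory below and destroyed via decRef(), never on the stack.
    explicit Rbuffer(int nLength) : mRefCount(0) { m_pnData = new char[nLength]; }
    ~Rbuffer() { delete[] m_pnData; }

  public:
    // The only way to obtain an instance, so "delete this" always acts on a
    // heap object that was allocated with plain new.
    static Rbuffer* create(int nLength) { return new Rbuffer(nLength); }

    void incRef() { InterlockedIncrement(&mRefCount); }

    void decRef()
    {
        if (InterlockedDecrement(&mRefCount) == 0)
            delete this;
    }
};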

A normal "reference counting" idiom involves a 'shared object' (which holds the count) and lightweight 'reference objects' (not C++ references, although the semantics are similar) that refer to the shared object. The constructors and destructors of the 'reference objects' are responsible for calling the incRef/decRef methods on the shared object, so the shared object automatically counts the number of active 'reference objects'.
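As a rough illustration of that idiom (not a drop-in fix; the RbufferRef name and the details are invented here), a bare-bones 'reference object' might look like this:

#include <utility>  // std::swap

// Hypothetical handle type: every living RbufferRef holds one reference to
// the shared Rbuffer, and the count is adjusted only in its ctor/dtor.
class RbufferRef
{
  public:
    explicit RbufferRef(Rbuffer* p = nullptr) : m_p(p) { if (m_p) m_p->incRef(); }

    RbufferRef(const RbufferRef& other) : m_p(other.m_p) { if (m_p) m_p->incRef(); }

    // Copy-and-swap assignment: 'other' arrives already holding a reference,
    // and its destructor releases whatever this object pointed to before.
    RbufferRef& operator=(RbufferRef other)
    {
        std::swap(m_p, other.m_p);
        return *this;
    }

    ~RbufferRef() { if (m_p) m_p->decRef(); }

    Rbuffer* operator->() const { return m_p; }

  private:
    Rbuffer* m_p;
};

With something like this, client code never calls incRef/decRef directly; it just copies and destroys RbufferRef values, and the last one to go out of scope triggers the delete.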


13 Comments

The member access of mRefCount is equally problematic in the first example. If cleanup() deletes the instance, then mRefCount no longer exists either, so a context switch that lets Thread B enter the if-expression evaluation after Thread A has performed the delete this; buried inside cleanup() will still fault. If this is the problem you speak of in your description, I concur.
@WhozCraig is correct, which is why I proposed a mutex around the whole decRef operation, setting PointerToRBuffer to NULL, and adding NULL checks before dereferencing PointerToRBuffer.
@ChrisDesjardins exactly. It will dereference an invalid address if A has deleted the object before B can invoke the interlocked decrement.
@Roddy "Where's the dereference?" What do you think InterlockedDecrement(&mRefCount) is doing? It's passing &this->mRefCount to InterlockedDecrement. Now consider whether it is possible that this is no longer valid when that happens (and it is possible).
@WhozCraig I think you're missing the point that this is meant to be for reference counting: If something calls any method on the object after it's been deleted, then it's bad, even if single-threaded.

It is not 100% clear what is going on, but it looks like ThreadA deletes the Rbuffer object and ThreadB subsequently dereferences it.

What you really need is mutual exclusion around the decrement and the delete operation; furthermore, you need some kind of flag to prevent dereferencing after the delete. Usually setting the pointer to NULL and checking for NULL before any dereference works.

So your decRef might look something like this:

 void decRef() {
     lock(_mutex);
     if (InterlockedDecrement(&mRefCount) == 0) {
         PointerToRBuffer = NULL;
         delete (Rbuffer *)this;
     }
 }

You might be better off using a shared_ptr.
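For comparison, a sketch of the shared_ptr route (illustrative only; this Buffer type is an assumption, not the asker's class) makes all of the manual counting disappear:

#include <memory>
#include <vector>

// The buffer no longer needs incRef/decRef at all; std::shared_ptr's control
// block does the thread-safe reference counting.
struct Buffer
{
    explicit Buffer(int nLength) : data(nLength) {}
    std::vector<char> data;
};

void example()
{
    std::shared_ptr<Buffer> a = std::make_shared<Buffer>(1024);
    std::shared_ptr<Buffer> b = a;  // count is now 2, adjusted atomically
}   // both copies go out of scope; the Buffer is deleted exactly once

Note that shared_ptr makes the count manipulation thread safe, but concurrent reads and writes of one and the same shared_ptr object still need external synchronization.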

3 Comments

Why (and how) does decRef (a class method) need to modify PointerToRBuffer (a pointer to an instance of that class)?
@Roddy I know this whole thing is ugly, which is why a shared_ptr should be considered.
No doubt that shared_ptr is the right solution! But learning how you might implement shared_ptr yourself is an excellent exercise.
