
I'm working on my digital signal processing framework. To provide a data-exchange interface, I wrap all buffers in a Data class, which has a reference-count-based GC mechanism (the system is simple enough that I believe ref counting can handle this).

It works like this:

  1. When a Data instance is constructed, its ref count is set to zero.
  2. When the instance is dispatched to N DSP modules, N is added to its ref count.
  3. When a DSP module finishes with the instance, it decrements the ref count.
  4. When the ref count reaches zero, the instance calls delete this;
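The four steps above can be sketched as follows. This is a minimal illustration of the described protocol, not the asker's actual class; the names (AddRefs, Release, s_live) are hypothetical, and s_live is only there so the lifecycle can be observed:

```cpp
#include <mutex>

// Minimal sketch of the lifecycle described above (names hypothetical).
class Data {
public:
    inline static int s_live = 0;      // live-instance counter, for observation only

    Data() : m_refcnt(0) { s_live++; } // step 1: count starts at zero

    // step 2: the producer adds N before dispatching to N modules
    void AddRefs(int n)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_refcnt += n;
    }

    // steps 3-4: each module releases; the last release destroys the object
    void Release()
    {
        bool last;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            last = (--m_refcnt == 0);
        }   // mutex unlocked here, before the object is destroyed
        if (last)
            delete this;
    }

private:
    ~Data() { s_live--; }              // private: force destruction via Release()
    int m_refcnt;
    std::mutex m_mutex;
};
```

Note that between step 1 (count zero) and step 2 (count N) nothing owns the instance, which is one of the fragile spots in this kind of hand-rolled scheme.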

However, I found that there is a memory leak in my program.

To debug, I added static counters m_alloccount and m_freecount to the Data class to record the number of allocations and frees. Then I paused execution at random times, only to find a slight difference between the two numbers.

E.g. in different trials:

Trial         1   2    3      4
m_alloccount  12  924  34413  364427
m_freecount   11  923  34412  364425

But the fact is that memory usage keeps growing. I believe all memory allocation goes through the Data class. I really can't figure out the reason.

int Data::m_alloccount = 0;
int Data::m_freecount = 0;

Data::Data(DataPinOut* parent, int type, int size)
: m_ptr(NULL)
, m_parent(parent)
, m_refcnt(0)
, m_type(type)
, m_size(size)
{
    if (size > 0)
        m_ptr = new char[TypeSizeLookup() * size];
    m_alloccount++;
}

Data::~Data()
{
    delete[] (char*)m_ptr;
    m_freecount++;
}

void Data::Delete()
{
    bool last;
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        last = (m_refcnt <= 1);
        if (!last)
            m_refcnt--;
    }   // lock released here, before any delete this

    if (last)
        delete this;
}
Comments:

  • Have you used valgrind? Commented Nov 17, 2013 at 5:31
  • Is this multi-threaded? Commented Nov 17, 2013 at 5:32
  • Hand off a std::shared_ptr and don't worry about the counts yourself. Commented Nov 17, 2013 at 5:32
  • Have you looked at std::shared_ptr, the standard reference counting facility in C++11? Commented Nov 17, 2013 at 5:33
  • @babel92 could you please tell us how you determined there is a memory leak? thanks Commented Nov 17, 2013 at 5:45

3 Answers


In my experience, a discrepancy of just one or two objects, regardless of the number of internal operations, indicates a leak of an input or output variable. Check the accounting consistency of the external interface of your system.

std::shared_ptr is nice because, being standard, it is automatically suitable as an external interface. The user can interact with ref-counted objects without knowing the management details of your DSP framework.
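A sketch of that hand-off with std::shared_ptr; Buffer and MakeBuffer are hypothetical stand-ins for the asker's Data payload, not part of the original code:

```cpp
#include <memory>
#include <vector>

// Hypothetical stand-in for the payload the asker's Data class wraps.
struct Buffer {
    std::vector<char> bytes;
    explicit Buffer(std::size_t n) : bytes(n) {}
};

// Dispatching to N modules is just handing out N copies of the shared_ptr.
// The control block does the thread-safe counting, and the buffer is freed
// when the last copy is destroyed -- no manual AddRef/Release, no delete this.
std::shared_ptr<Buffer> MakeBuffer(std::size_t n)
{
    return std::make_shared<Buffer>(n);
}
```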

Other than that, there's not much we can do to intuit what's happening in your program.


6 Comments

If he's pausing at random times, those numbers don't really mean much: the differences are probably just the live objects at that time. I'm more suspicious of how he determined it's a memory leak, when it could be fragmentation preventing memory handback, etc.
@imsoconfused I'm supposing the table of trials he posted were complete operations from input to final output. But it's quite possible he simply printed the statistics before disposing of the final output object. That might or might not be considered a bug.
nope - he states in the question "Then I pause the execution at random times, only finding out there is just slight difference between the two numbers."
@imsoconfused Yeah, but that might be different from the trials. I could be wrong.
oh word, yeah that's another interpretation. I just use Intel VTune/Inspector and cut right to the point

How are you maintaining your counters? If your counter decrement and zero test are not atomic, you might be losing decrements, which would prevent the object from ever reaching a refcount of 0.
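One way to make the decrement and the zero test a single operation is std::atomic. This is a sketch, not the asker's code; RefCounted and its methods are hypothetical names. fetch_sub returns the previous value, so checking it against 1 combines the decrement and the test atomically:

```cpp
#include <atomic>

// Sketch of a lock-free reference count (names hypothetical).
struct RefCounted {
    std::atomic<int> refcnt{1};   // construct with one reference held

    void AddRef() { refcnt.fetch_add(1, std::memory_order_relaxed); }

    // Returns true exactly once, when this call drops the count to zero.
    // fetch_sub returns the value *before* the subtraction, so no
    // decrement can be lost between the subtract and the test.
    bool Release()
    {
        return refcnt.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};
```

The caller would perform the actual destruction when Release() returns true, which also sidesteps holding a mutex across delete this.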

1 Comment

I use std::lock_guard and a mutex to protect every function related to ref count operations.

Step 2: add N refs when dispatched.

Is every Data instance guaranteed to be dispatched? Based on your algorithm as stated, an instance that is constructed but never dispatched keeps a ref count of zero and has no mechanism by which it would ever be deleted.

4 Comments

Yes. Every block's thread waits for all its input pins to be loaded and then does its work. Afterwards it decreases the ref counts of its input data.
And there are no fault paths that would result in an undispatched data instance?
Not likely because in my testing example there are only three blocks... A->B->C ....
Back in the olden days of COM, before shared_ptr and the like, the most common cause of ref-count leaks was missed releases on error paths, so that may be something to consider, since it sounds like you are managing this explicitly. Best of luck.
