
I use Win32 threads with OpenGL 2.1. What I'm trying to achieve is to render a simple "loading" image while the whole 3D scene is loaded in the background. It works now, but I have a problem where sometimes part of my cubemap texture contains data from a Mozilla Firefox browser window (how in the hell does this happen???). Ignore the small textured box in the screenshot; it's only a sprite and it is where it should be (see the attached screenshot).

This happens about 1 in 3 times I start my program. This is how my thread class looks:

WindowsThread::WindowsThread(HGLRC graphicsContext, HDC deviceContext) :
    graphicsContext_(graphicsContext),
    deviceContext_(deviceContext),
    running_(false),
    task_(0),
    mode_(WT_NORMAL)
{
    handle_ = CreateThread(0, 0,
        (unsigned long (__stdcall *)(void *)) this->staticRun,
        (void*) this, CREATE_SUSPENDED, &id_);

    if (handle_ == 0) {
        LOGE("Unable to create thread.");
        return;
    }

    if (!SetThreadPriority(handle_, THREAD_PRIORITY_NORMAL)) {
        LOGE("Unable to set thread priority for thread.");
        return;
    }
}

WindowsThread::~WindowsThread() {
    finishTask();
    running_ = false;
    WaitForSingleObject(handle_, INFINITE);
    CloseHandle(handle_);
    wglDeleteContext(graphicsContext_);
}

void WindowsThread::start() {
    running_ = true;
    // ResumeThread() returns the previous suspend count, or (DWORD) -1 on failure.
    if (ResumeThread(handle_) == (DWORD) -1) {
        LOGW("Unable to resume thread.");
    }
}

bool WindowsThread::isRunning() {
    return running_;
}

void WindowsThread::setTask(Task* task, Mode mode) {
    finishTask();
    task_ = task;
    mode_ = mode;
}

bool WindowsThread::hasTask() {
    return task_ != 0;
}

void WindowsThread::finishTask() {
    while (task_ != 0) {
        Sleep(1);
    }
}

void WindowsThread::stop() {
    running_ = false;
}

int WindowsThread::staticRun(void* thread) {
    return ((WindowsThread*) thread)->run();
}

int WindowsThread::run() {
    wglMakeCurrent(deviceContext_, graphicsContext_);
    while (running_) {
        if (task_ != 0) {
            task_->run();
            task_ = 0;
        }
        Sleep(10);
    }
    wglMakeCurrent(0, 0);
    return 1;
}

ThreadManager:

WindowsThreadManager::WindowsThreadManager(
    System* system, UINT threadPoolSize)
{
    if (threadPoolSize == 0) {
        SYSTEM_INFO info;
        GetSystemInfo(&info);
        threadPoolSize = info.dwNumberOfProcessors;
        if (threadPoolSize == 0) {
            threadPoolSize = 1;
        }
    }
    LOGI("Number of threads used: %d", threadPoolSize);
    masterContext_ = wglGetCurrentContext();
    HDC hdc = wglGetCurrentDC();
    for (UINT i = 0; i < threadPoolSize; i++) {
        HGLRC threadContext = wglCreateContext(hdc);
        wglShareLists(masterContext_, threadContext);
        WindowsThread* thread = new WindowsThread(threadContext, hdc);
        thread->start();
        threads_.push_back(thread);
    }
}

WindowsThreadManager::~WindowsThreadManager() {
    for (UINT i = 0; i < threads_.size(); i++) {
        delete threads_[i];
    }
    for (UINT i = 0; i < tasks_.size(); i++) {
        delete tasks_[i];
    }
}

void WindowsThreadManager::execute(Task* task, Mode mode) {
    WindowsThread::Mode wtMode = WindowsThread::WT_NORMAL;
    if (mode == TM_GRAPHICS_CONTEXT) {
        wtMode = WindowsThread::WT_GRPAHICS_CONTEXT;
    }
    tasks_.push_back(task);
    for (UINT i = 0; i < threads_.size(); i++) {
        if (!threads_[i]->hasTask()) {
            threads_[i]->setTask(task, wtMode);
            return;
        }
    }
    threads_[0]->setTask(task, wtMode);
}

void WindowsThreadManager::joinAll() {
    for (UINT i = 0; i < threads_.size(); i++) {
        if (threads_[i]->hasTask()) {
            threads_[i]->finishTask();
        }
    }
}

I use an Nvidia GTX 670 with the latest drivers on Windows 8. Any ideas where the problem might be?

[EDIT] I added glFinish() at the end of my loader thread, and now everything loads normally. I read somewhere that OpenGL doesn't immediately finish all of its work, so I guess this was a case where the context was set to NULL before it could finish its work.
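A minimal sketch of this handoff pattern, with the GL calls shown only as comments (all names here are hypothetical, not from the code above): the loader thread flushes the pipeline before releasing its context, and only then signals the main thread that the resources are safe to use.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Shared flag + condition variable for the loader -> main thread handoff.
struct LoadSignal {
    std::mutex mutex;
    std::condition_variable done;
    bool resourcesReady = false;
};

void loaderThread(LoadSignal& signal) {
    // ... glTexImage2D / buffer uploads on the loader's shared context ...
    // glFinish();            // block until the GPU has consumed every upload
    // wglMakeCurrent(0, 0);  // release the context only AFTER the flush
    {
        std::lock_guard<std::mutex> lock(signal.mutex);
        signal.resourcesReady = true;  // publish completion under the lock
    }
    signal.done.notify_one();
}

void waitForResources(LoadSignal& signal) {
    // Main thread: blocks until the loader has flushed and signaled.
    std::unique_lock<std::mutex> lock(signal.mutex);
    signal.done.wait(lock, [&] { return signal.resourcesReady; });
}
```

In a real loader the main thread would poll with wait_for() and a short timeout instead, so the "loading" sprite keeps animating between checks.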

5 Comments
  • Take note that OpenGL and threads usually do not go along well. Commented Feb 14, 2013 at 11:03
  • Yes, I read quite a lot about that. As a matter of fact, I had implemented two contexts before: on one I rendered, on the other I loaded resources, and when the resources were loaded I just made that context the main context while deleting the previous main context. This worked with no problems. Commented Feb 14, 2013 at 11:07
  • @BЈовић: OpenGL and multithreading can be done, it's just not simple to get right. Commented Feb 14, 2013 at 11:11
  • @datenwolf Yes, but if I understand the post, he is creating a texture in a thread (or he renders something in a thread?). Since it happens randomly (1 in 3 tries), that indicates he didn't do it right, and created some kind of race condition. Commented Feb 14, 2013 at 11:16
  • @BЈовић: Yes, it's likely a race condition, but he's using contexts with a shared texture space, which means it's the burden of the OpenGL implementation to introduce the right synchronization points into texture management. The worst that should happen is dropped frames, not a race condition leading to fetching data from uninitialized memory. Commented Feb 14, 2013 at 11:22

1 Answer


It works now, but I have a problem, where sometimes a part of my cubemap texture takes data from Mozilla Firefox Browser (How in the hell does this happen???)

Your texture receives data from uninitialized graphics memory, which may well contain residual images from another process that previously used that memory region. Stuff like this can happen if

a) the driver has a bug and doesn't synchronize resources between threads

and

b) you're trying to modify a texture while it's bound to a texture unit in the other thread.

EDIT: You can (and should) introduce proper synchronization yourself, if only because it improves performance. Use condition variables to signal between threads when the texture is currently not busy. Ideally, use two or more textures that you update in round-robin fashion.
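A sketch of the round-robin scheme this suggests, with the GL upload replaced by a plain write so the synchronization logic stands on its own (the class and all names are illustrative, not from the question's code): the update thread always writes one slot ahead of the slot currently being drawn, and a condition variable tells it when that slot is free.

```cpp
#include <array>
#include <condition_variable>
#include <mutex>

// Ring of texture slots shared between a render thread and an update thread.
class TextureRing {
public:
    static constexpr int kSlots = 3;  // 2 or 3 textures in the cycle

    // Render thread: advance to the next texture each frame.
    void beginDraw() {
        std::lock_guard<std::mutex> lock(mutex_);
        drawIndex_ = (drawIndex_ + 1) % kSlots;
        slotFree_.notify_one();  // the previous slot may now be updated
    }

    // Update thread: wait until the write slot is not the one being drawn,
    // then "upload" into it (glTexSubImage2D would go here).
    void update(int frame) {
        std::unique_lock<std::mutex> lock(mutex_);
        slotFree_.wait(lock, [&] { return writeIndex_ != drawIndex_; });
        contents_[writeIndex_] = frame;  // stands in for the texture upload
        writeIndex_ = (writeIndex_ + 1) % kSlots;
    }

    // What the render thread would currently sample from.
    int drawnValue() {
        std::lock_guard<std::mutex> lock(mutex_);
        return contents_[drawIndex_];
    }

private:
    std::mutex mutex_;
    std::condition_variable slotFree_;
    int drawIndex_ = 0;
    int writeIndex_ = 1;  // start one slot ahead of the draw index
    std::array<int, kSlots> contents_{};
};
```

The key property is that the updater can never touch the slot a texture unit is currently reading from; with the GL version, each slot would be one texture object updated via glTexSubImage2D.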


4 Comments

If I understood correctly, you suggest using only one context, and synchronizing all parts where I use it, so that there won't be any race conditions? Or using multiple contexts, but still synchronizing myself, so opengl driver wouldn't have to?
@SMGhost: No! Use two contexts (don't play hot potato with a single context between threads). But use a condition variable so that the update thread knows when it's safe to modify the contents of a texture. Also use glTexSubImage2D if you're not doing so already. Keep 2 or 3 textures in a round-robin cycle, where you update the texture one index ahead of the one being drawn.
Currently there is no situation where a texture, once loaded, is modified; after loading resources, all graphics operations run only on the main thread with a single context. The way I implement the whole scene-loading mechanism is that whatever the loader thread loads is never used by the main thread until all resources are loaded; only then do I start rendering from those resources. I assume I don't need condition variables in this situation? :)
@SMGhost: Ah, okay. I thought your intention was creating a video player of sorts. Well in that case no condition variables are required.
