
I'm trying to find a way to bind an OpenCL buffer to a DirectX buffer.

I did manage to find the inverse direction using the OpenGL API function clCreateFromGLBuffer, but failed to find the other way around.

Is there perhaps a way to go through a third party, i.e. transferring from OpenCL into some other type of buffer that is capable of being bound to a DirectX buffer?

Update: Trying pmdj's solution (code snippet attached here). In part of my solution I create a DirectX 11 resource (an ID3D11Texture2D, to be precise) and expose it to both an OpenCL resource (boost::compute::image2d) and a DirectX 12 resource (ID3D12Resource). The OpenCL side I managed to get working. The DirectX 12 side I didn't: it just leaves the com_ptr with a NULL value. Am I doing something wrong here?

std::tuple<com_ptr<ID3D11Texture2D>, com_ptr<ID3D12Resource>, boost::compute::image2d> CreateInteropTextureD12Support(
    ID3D11Device* d3dDevice11,
    ID3D12Device* d3dDevice12,
    const boost::compute::context& clContext,
    DXGI_FORMAT format,
    UINT bindFlags,
    unsigned width,
    unsigned height
)
{
    D3D11_TEXTURE2D_DESC desc{};
    desc.Width = width;
    desc.Height = height;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = format;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = bindFlags;
    desc.CPUAccessFlags = 0;
    desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; // shared with OpenCL and D3D12

    // create d11 resource (from which opencl and d12 will point to)
    com_ptr<ID3D11Texture2D> texture11;
    HR(d3dDevice11->CreateTexture2D(&desc, nullptr, texture11.GetAddressOf()));
    auto image2d = GetCLImageFromD3D11Texture2D(texture11, clContext);

    
    com_ptr<ID3D12Resource> texture12;
    HANDLE handle = NULL;

    com_ptr<IDXGIResource1> dxgiResource;
    HR(texture11->QueryInterface(__uuidof(IDXGIResource1), (void**)dxgiResource.GetAddressOf()));
    HR(dxgiResource->CreateSharedHandle(
        NULL,
        DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
        NULL,
        &handle
    ));

    HR(d3dDevice12->OpenSharedHandle(handle, __uuidof(ID3D12Resource), (void**)texture12.GetAddressOf()));
    CloseHandle(handle); // OpenSharedHandle does not take ownership of the handle


    return std::make_tuple(std::move(texture11), std::move(texture12), std::move(image2d));
}
  • To clarify: What DirectX version are you targeting? Are you aware of the cl_khr_d3d11_sharing extension? What about the OpenCL-on-DX12 implementation? Also, you want to create a buffer on OpenCL first and then bind this from DirectX, and you cannot modify your code to work the other way around? Commented May 27, 2022 at 13:44
  • What DirectX version are you targeting? DirectX 12/11 (12 is preferable). Regarding the cl_khr_d3d11_sharing extension and OpenCL-on-DX12: I read the links you sent, and I only see the ability to go from a DirectX buffer to an OpenCL buffer, not the other way around, which is what I really need. What I precisely want is: given a boost::compute::vector<uint8_t> (the vector can be replaced with any other type, though it must sit on top of OpenCL), I need it wrapped in an ID3D12Resource (I need it wrapped for ID3D12 to apply inference using ONNX Runtime, if you're familiar with it). Commented May 29, 2022 at 6:25
  • Thanks for the clarification. I suspected you were trying to glue some higher-level libraries together, otherwise the restrictions didn't make much sense. Commented May 29, 2022 at 8:48

2 Answers


Just to add some info to the existing answer.

  1. When you wish to share resources between Direct3D 11 and OpenCL, the OpenCL context is created from the D3D context to begin with. This implies that all shared memory resources are allocated through D3D (with the D3D11_RESOURCE_MISC_SHARED flag), and the cl_mem handle is obtained from the D3D resource using the OpenCL extension functions. You can't do it the other way around.

  2. The 'best' way I found to work around the lack of proper DirectX 12-OpenCL interoperability and share resources between OpenCL and DirectX 12 is indeed D3D11On12. You must first ensure that the D3D11 device is created properly (hint: there is an existing flag in your codebase that does just that). Once you do, all the D3D11 resources are D3D12 resources, and so D3D12-OpenCL sharing may be possible. Be aware, however, that this is a hack; I have reported some nasty driver bugs associated with this workaround.

  3. A small note regarding boost::compute. In many scenarios it is not really necessary to use boost::compute::vector, since you have no intention of modifying its size after creation. You can always work directly with boost::compute::buffer, which can be initialized from a low-level cl_mem. Just make sure you understand how OpenCL resource ownership is managed by boost::compute (take a look at how boost::compute::buffer uses clRetainMemObject and clReleaseMemObject).

  4. When trying to share a texture with OpenCL, I think it is necessary to allocate it with the SHADER_RESOURCE and RENDER_TARGET bind flags and the MISC_SHARED flag. Try that in your snippet.
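To illustrate points 1 and 4, here is a minimal sketch of wrapping a D3D11 texture (created with sharing-compatible flags) as an OpenCL image via the cl_khr_d3d11_sharing extension. The extension entry points must be loaded at runtime, and error handling is stripped to the essentials; treat this as a sketch, not a drop-in implementation.

```cpp
#include <CL/cl.h>
#include <CL/cl_d3d11.h>
#include <d3d11.h>

// Wrap an existing D3D11 texture as a cl_mem. The OpenCL context is
// created *from* the D3D11 device, per the extension's requirements.
cl_mem WrapD3D11Texture(cl_platform_id platform, ID3D11Device* d3dDevice,
                        ID3D11Texture2D* texture)
{
    // Load the extension entry points for this platform.
    auto getDevices = (clGetDeviceIDsFromD3D11KHR_fn)
        clGetExtensionFunctionAddressForPlatform(platform, "clGetDeviceIDsFromD3D11KHR");
    auto createFromTexture = (clCreateFromD3D11Texture2DKHR_fn)
        clGetExtensionFunctionAddressForPlatform(platform, "clCreateFromD3D11Texture2DKHR");

    // Find the OpenCL device that corresponds to the D3D11 device.
    cl_device_id device = nullptr;
    getDevices(platform, CL_D3D11_DEVICE_KHR, d3dDevice,
               CL_PREFERRED_DEVICES_FOR_D3D11_KHR, 1, &device, nullptr);

    // Create the context from the D3D11 device.
    cl_context_properties props[] = {
        CL_CONTEXT_D3D11_DEVICE_KHR, (cl_context_properties)d3dDevice,
        CL_CONTEXT_PLATFORM,         (cl_context_properties)platform,
        0
    };
    cl_int err = CL_SUCCESS;
    cl_context context = clCreateContext(props, 1, &device, nullptr, nullptr, &err);

    // Subresource 0 = top mip level of the texture.
    return createFromTexture(context, CL_MEM_READ_WRITE, texture, 0, &err);
}
```

Before any kernel touches the wrapped image, the queue must bracket its use with clEnqueueAcquireD3D11ObjectsKHR and clEnqueueReleaseD3D11ObjectsKHR (also loaded via clGetExtensionFunctionAddressForPlatform).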
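As a sketch of point 3, adopting an externally produced cl_mem into a boost::compute::buffer looks roughly like this, assuming the (cl_mem, retain) constructor that boost::compute's memory objects expose:

```cpp
#include <boost/compute/buffer.hpp>

// Adopt a cl_mem (e.g. one wrapping a D3D resource) into boost::compute.
// With retain = true the constructor calls clRetainMemObject, so the
// original owner keeps its own reference; the buffer's destructor later
// calls clReleaseMemObject on this extra reference.
boost::compute::buffer AdoptClMem(cl_mem sharedMem)
{
    return boost::compute::buffer(sharedMem, /*retain=*/true);
}
```

This is exactly the ownership behavior the answer tells you to check before gluing the two APIs together.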




From the discussion in comments, we've established that the reason for the restrictions/requirements is that the asker is using boost::compute as a wrapper around OpenCL.

I see 2 major options:

1. Copying

  • Perform your computations in pure boost::compute.
  • Create your destination DirectX buffer.
  • Acquire an OpenCL buffer reference to the DirectX buffer using the API provided by the cl_khr_d3d11_sharing extension.
  • Copy the result from the boost::compute::vector to the wrapped DirectX buffer using clEnqueueCopyBuffer.

Depending on your requirements, this might be less terrible than you think. clEnqueueCopyBuffer will use the GPU's DMA engine, and so should be very fast.
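The copy step might look roughly like the following sketch; `wrappedD3DBuffer` is assumed to come from clCreateFromD3D11BufferKHR, and the acquire/release function pointers are the extension entry points loaded beforehand:

```cpp
#include <CL/cl_d3d11.h>
#include <boost/compute/command_queue.hpp>
#include <boost/compute/container/vector.hpp>

// Copy a computed boost::compute::vector into an OpenCL wrapper of a
// D3D11 buffer. D3D-shared objects must be acquired by OpenCL before use.
void CopyResultToD3D(boost::compute::command_queue& queue,
                     const boost::compute::vector<uint8_t>& result,
                     cl_mem wrappedD3DBuffer,
                     clEnqueueAcquireD3D11ObjectsKHR_fn acquire,
                     clEnqueueReleaseD3D11ObjectsKHR_fn release)
{
    acquire(queue.get(), 1, &wrappedD3DBuffer, 0, nullptr, nullptr);

    // Device-to-device copy; no round trip through host memory.
    clEnqueueCopyBuffer(queue.get(),
                        result.get_buffer().get(), // source cl_mem
                        wrappedD3DBuffer,          // destination cl_mem
                        0, 0,                      // src/dst offsets
                        result.size() * sizeof(uint8_t),
                        0, nullptr, nullptr);

    release(queue.get(), 1, &wrappedD3DBuffer, 0, nullptr, nullptr);
    queue.finish();
}
```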

2. Custom buffer_allocator

boost::compute::vector uses boost::compute::buffer as its memory storage container, which itself is a wrapper around OpenCL's cl_mem. buffer has a constructor which allows it to take control of an OpenCL buffer you give it. One of vector's template parameters is a buffer allocator, which is what controls how the vector's internal buffer is created.

Using this, I think it should be possible to create a custom buffer_allocator which produces buffers backed by OpenCL buffer objects that reference DirectX buffers. I've never done this myself, so I'm afraid I can't give you an example. This approach should be completely zero-copy, however.
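As a rough, untested illustration of the idea (the type names follow boost::compute's buffer_allocator header; checking capacity, alignment, and lifetime is left out):

```cpp
#include <boost/compute/allocator/buffer_allocator.hpp>
#include <boost/compute/buffer.hpp>
#include <boost/compute/context.hpp>

// Allocator that hands out a pre-existing shared cl_mem instead of
// allocating fresh device memory, so a boost::compute::vector can sit
// directly on top of a DirectX-backed OpenCL buffer.
template<class T>
class d3d_interop_allocator : public boost::compute::buffer_allocator<T>
{
public:
    typedef boost::compute::buffer_allocator<T> base;
    typedef typename base::pointer pointer;
    typedef typename base::size_type size_type;

    d3d_interop_allocator(const boost::compute::context& ctx, cl_mem shared)
        : base(ctx), m_shared(shared, /*retain=*/true) {}

    pointer allocate(size_type n)
    {
        // A real implementation must verify n fits in the shared buffer.
        (void)n;
        return pointer(m_shared, 0);
    }

    void deallocate(pointer, size_type)
    {
        // Ownership stays with m_shared; nothing to free per allocation.
    }

private:
    boost::compute::buffer m_shared;
};
```

The key design point is that allocate/deallocate no longer manage memory at all; the vector becomes a view over the shared buffer, which is what makes the approach zero-copy.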

DirectX 12 vs 11

It looks like the OpenCL sharing extension only supports up to DirectX 11. However, I think you can create a buffer using DirectX 12 APIs, then acquire a wrapped DirectX 11 reference to it using D3D11on12. Presumably you can then pass this DirectX 11 buffer to the OpenCL/DirectX sharing extension's API to create an OpenCL buffer which wraps a DirectX 11 buffer, which wraps a DirectX 12 buffer. Complicated, but it might just work!
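The D3D12-to-D3D11 leg of that chain might be sketched as follows, using D3D11On12CreateDevice and ID3D11On12Device::CreateWrappedResource (the bind flags and resource states here are assumptions to be adapted to your pipeline):

```cpp
#include <d3d11on12.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Layer a D3D11 device over a D3D12 device, then wrap a D3D12 resource
// as a D3D11 resource that could be handed to the OpenCL sharing extension.
HRESULT WrapD3D12ResourceForD3D11(ID3D12Device* device12,
                                  ID3D12CommandQueue* queue12,
                                  ID3D12Resource* resource12,
                                  ComPtr<ID3D11Device>& device11,
                                  ComPtr<ID3D11Resource>& resource11)
{
    IUnknown* queues[] = { queue12 };
    HRESULT hr = D3D11On12CreateDevice(
        device12, 0 /*flags*/, nullptr, 0 /*default feature levels*/,
        queues, 1, 0 /*node mask*/,
        device11.GetAddressOf(), nullptr, nullptr);
    if (FAILED(hr)) return hr;

    ComPtr<ID3D11On12Device> device11on12;
    hr = device11.As(&device11on12);
    if (FAILED(hr)) return hr;

    // In/out states describe the D3D12 resource state around D3D11 use.
    D3D11_RESOURCE_FLAGS flags11 = {};
    flags11.BindFlags = D3D11_BIND_SHADER_RESOURCE; // assumption
    return device11on12->CreateWrappedResource(
        resource12, &flags11,
        D3D12_RESOURCE_STATE_COMMON, D3D12_RESOURCE_STATE_COMMON,
        IID_PPV_ARGS(resource11.GetAddressOf()));
}
```

Remember that wrapped resources must be bracketed with ID3D11On12Device::AcquireWrappedResources / ReleaseWrappedResources around any D3D11 (and hence OpenCL-interop) use.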

Comments

Thanks for the elaborate answer! I will dig into both options and let you know about the results ;)
Alright, in part of my solution I create a DirectX 11 resource (an ID3D11Texture2D, to be precise) and expose it to both an OpenCL resource (boost::compute::image2d) and a DirectX 12 resource (ID3D12Resource). The OpenCL part I managed to perform; giving access to a DirectX 12 resource I didn't. I'm adding the code snippet for the DirectX 12 part in the next comment.
Added the code snippet in the original post.
