
The code below captures the Windows screen on the GPU via the Desktop Duplication API; it gives us an ID3D11Texture2D resource. Using ID3D11DeviceContext::Map, I read that GPU resource into a BYTE buffer, and from the BYTE buffer into CPU memory: g_iMageBuffer, which is a UCHAR array.

Now I want to do the reverse engineering: take g_iMageBuffer (CPU memory) back into an ID3D11Texture2D (GPU memory). Please, someone help me with this reverse engineering; I am new to the graphics part.

// Variable declarations
HRESULT                 hr;
IDXGIOutputDuplication* lDeskDupl;
IDXGIResource*          lDesktopResource = nullptr;
DXGI_OUTDUPL_DESC       lOutputDuplDesc;   // filled via lDeskDupl->GetDesc() elsewhere
DXGI_OUTDUPL_FRAME_INFO lFrameInfo;
ID3D11Texture2D*        lAcquiredDesktopImage;
ID3D11Texture2D*        lDestImage;        // staging texture, created elsewhere
ID3D11DeviceContext*    lImmediateContext;
UCHAR*                  g_iMageBuffer = nullptr;

// Screen capture starts here
hr = lDeskDupl->AcquireNextFrame(20, &lFrameInfo, &lDesktopResource);

// QueryInterface for ID3D11Texture2D
hr = lDesktopResource->QueryInterface(IID_PPV_ARGS(&lAcquiredDesktopImage));
lDesktopResource->Release();

// Copy the acquired image into lDestImage, the staging texture
// (D3D11_USAGE_STAGING with CPU read/write access) so the CPU can map it
lImmediateContext->CopyResource(lDestImage, lAcquiredDesktopImage);
lAcquiredDesktopImage->Release();
lDeskDupl->ReleaseFrame();  

// Copy GPU Resource to CPU
D3D11_TEXTURE2D_DESC desc;
lDestImage->GetDesc(&desc);
D3D11_MAPPED_SUBRESOURCE resource;
UINT subresource = D3D11CalcSubresource(0, 0, 0);
lImmediateContext->Map(lDestImage, subresource, D3D11_MAP_READ_WRITE, 0, &resource);

UINT lBmpRowPitch = lOutputDuplDesc.ModeDesc.Width * 4;     // tightly packed BGRA rows
std::unique_ptr<BYTE[]> pBuf(new BYTE[lBmpRowPitch * desc.Height]);
BYTE* sptr = reinterpret_cast<BYTE*>(resource.pData);
BYTE* dptr = pBuf.get() + lBmpRowPitch * (desc.Height - 1); // last row first: bottom-up (BMP) order
UINT lRowPitch = std::min<UINT>(lBmpRowPitch, resource.RowPitch);

for (size_t h = 0; h < lOutputDuplDesc.ModeDesc.Height; ++h)
{
    memcpy_s(dptr, lBmpRowPitch, sptr, lRowPitch);
    sptr += resource.RowPitch;
    dptr -= lBmpRowPitch;
}

lImmediateContext->Unmap(lDestImage, subresource);
long g_captureSize = lBmpRowPitch * desc.Height;
g_iMageBuffer= new UCHAR[g_captureSize];
g_iMageBuffer = (UCHAR*)malloc(g_captureSize);

//Copying to UCHAR buffer 
memcpy(g_iMageBuffer, pBuf.get(), g_captureSize);
  • A better solution than the malloc and memcpy at the end would be to just "move" the buffer allocated in your std::unique_ptr<>, which you can get by calling release. Of course, that assumes you clean up with delete[] instead of free, or better yet just keep using a std::unique_ptr<>. If you must use free, then use malloc when initializing the std::unique_ptr<> in the first place (and provide a custom deleter that uses free instead of delete); see the sketch after these comments. Also note that your new followed by a malloc is leaking memory. Commented Nov 16, 2017 at 17:09
  • @ChuckWalbourn Yes, I got it. Thank you. Commented Nov 17, 2017 at 5:48
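
A minimal sketch of what the comment suggests, in the question's own terms (the FreeDeleter type and the reuse of lBmpRowPitch are assumptions, not part of the original code):

// Allocate with malloc up front and give std::unique_ptr<> a matching
// deleter, so the buffer can be handed off with release() and freed with free()
struct FreeDeleter { void operator()(BYTE* p) const { free(p); } };

std::unique_ptr<BYTE[], FreeDeleter> pBuf(
    static_cast<BYTE*>(malloc(lBmpRowPitch * desc.Height)));

// ... fill pBuf with the row-copy loop exactly as above ...

g_iMageBuffer = pBuf.release(); // transfer ownership: no new, no second memcpy
// whoever frees g_iMageBuffer must now call free(), not delete[]

This removes both the extra allocation and the final memcpy, and the leaked new UCHAR[] disappears along with them.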

1 Answer


You don't need reverse engineering. What you describe is called "loading a texture".

How to: Initialize a Texture Programmatically
How to: Initialize a Texture From a File
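
Following the "Initialize a Texture Programmatically" route, here is a minimal sketch. It assumes the same ID3D11Device that produced the capture (called lDevice here, a name not in your code) and that g_iMageBuffer holds Width * Height * 4 bytes of BGRA pixels, as your capture loop produces:

// Sketch: upload g_iMageBuffer (CPU memory) into a new ID3D11Texture2D
D3D11_TEXTURE2D_DESC uploadDesc = {};
uploadDesc.Width            = lOutputDuplDesc.ModeDesc.Width;
uploadDesc.Height           = lOutputDuplDesc.ModeDesc.Height;
uploadDesc.MipLevels        = 1;
uploadDesc.ArraySize        = 1;
uploadDesc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM; // desktop duplication format
uploadDesc.SampleDesc.Count = 1;
uploadDesc.Usage            = D3D11_USAGE_DEFAULT;        // GPU-only access
uploadDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem     = g_iMageBuffer;
initData.SysMemPitch = uploadDesc.Width * 4; // bytes per row in the CPU buffer

ID3D11Texture2D* lUploadedTexture = nullptr;
hr = lDevice->CreateTexture2D(&uploadDesc, &initData, &lUploadedTexture);

Note that your copy loop writes rows bottom-up (BMP order), so a texture created this way will appear vertically flipped unless you copy the rows top-down instead. To refresh an existing D3D11_USAGE_DEFAULT texture rather than create a new one each frame, ID3D11DeviceContext::UpdateSubresource performs the same upload.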

As you appear to be new to DirectX programming, consider working through the DirectX Tool Kit for DX11 tutorials. In particular, make sure you read the section on ComPtr and ThrowIfFailed.
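
For example, with those helpers the texture creation above becomes the following. ThrowIfFailed is the small helper the tutorials have you define yourself, not a library function; uploadDesc and initData are from the sketch above:

#include <wrl/client.h>
#include <exception>

using Microsoft::WRL::ComPtr;

inline void ThrowIfFailed(HRESULT hr)
{
    if (FAILED(hr))
        throw std::exception(); // real code should carry the HRESULT
}

ComPtr<ID3D11Texture2D> uploadedTexture;
ThrowIfFailed(lDevice->CreateTexture2D(&uploadDesc, &initData,
                                       uploadedTexture.GetAddressOf()));
// No manual Release() calls: ComPtr releases the texture automatically
// when it goes out of scope, even if an exception unwinds past it.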
