Using DirectX 9, I am trying to create and then fill an LPDIRECT3DTEXTURE9 texture in the following way.
First, I create the texture with IDirect3DDevice9::CreateTexture:
LPDIRECT3DTEXTURE9 pTexture;
if ( FAILED( pd3dDevice->CreateTexture( MAX_IMAGE_WIDTH,
MAX_IMAGE_HEIGHT,
1,
0, // D3DUSAGE_DYNAMIC,
D3DFMT_A8R8G8B8,
D3DPOOL_MANAGED, // D3DPOOL_DEFAULT,
&pTexture,
NULL ) ) )
{
// Handle error case
}
Then, I try to lock a rectangle on the texture as follows:
unsigned int uiSize = GetTextureSize();
D3DLOCKED_RECT rect;
ARGB BlackColor = { (char)0xFF, (char)0xFF, (char)0xFF, (char)0x00 };
::ZeroMemory( &rect, sizeof( D3DLOCKED_RECT ) );
// Lock outline texture to rect, and then cast rect to bits and use bits as outlineTexture access point
if ( pTexture == NULL )
{
return ERROR_NOT_INITIALIZED;
}
pTexture->LockRect( 0, &rect, NULL, D3DLOCK_NOSYSLOCK ); // Consider ?
ARGB* bits = (ARGB*)rect.pBits;
for ( unsigned int uiPixel = 0; uiPixel < uiSize; ++uiPixel )
{
// Copy all black pixels only
if ( compositeMask[uiPixel] == BlackColor )
{
bits[uiPixel] = compositeMask[uiPixel];
}
}
pTexture->UnlockRect( 0 );
return ERROR_SUCCESS;
ARGB is just a struct defined as follows:
struct ARGB
{
char b;
char g;
char r;
char a;
bool operator==( const ARGB& comp ) const
{
    return a == comp.a &&
           r == comp.r &&
           g == comp.g &&
           b == comp.b;
}
bool operator!=( const ARGB& comp ) const
{
    return !( *this == comp );
}
};
What I want to do is pre-calculate an array of pixel data (a black outline) depending on an in-application algorithm, and then only write the pure black pixels from that set of pixel data onto my LPDIRECT3DTEXTURE9 to be rendered later.
The application currently throws an ACCESS_VIOLATION exception (0xC0000005) at the LockRect call. Can anyone explain why?
Here's the exact exception detail:
Unhandled exception at 0x0132F261 in TestApp.exe: 0xC0000005: Access violation reading location 0x00000001.
The location varied between 0x00000000 and 0x00000001... Does that hint at anything?
Also, if there's a better way to do what I am trying to do, then I'd be all ears :)
Like the other commenters on your question, I can't see anything wrong in principle with the way that you create and lock the texture. I have done the same myself - creating a texture in D3DPOOL_MANAGED and using LockRect to update the contents.
However, there are three areas that concern me. I'm posting as an answer because there's far too much for a comment, so please bear with me...
Using the D3DLOCK_NOSYSLOCK flag when locking. I have found that this can cause conflicts when the D3D device has been created for multithreaded operation.
The way you access the locked bits takes no account of the stride. I appreciate that the error apparently occurs before this code, but it's worth mentioning anyway.
You are casting to your own struct for pixel access and it's unclear what the actual size of the struct may be because I can't see your packing options for the project.
So, I suggest three things that you can do to identify if any of the above are causing a problem:
First, just use the default zero flag for the locking call
pTexture->LockRect( 0, &rect, NULL, 0 );
Second, verify that your ARGB structure really is 4 bytes
ASSERT(sizeof(ARGB) == 4);
Finally, do nothing except lock and unlock the texture and see if you still get a runtime error, but also check the return code
HRESULT hr = pTexture->LockRect( 0, &rect, NULL, 0 );
ASSERT(SUCCEEDED(hr));
hr = pTexture->UnlockRect( 0 );
ASSERT(SUCCEEDED(hr));
In any case, when updating the texture bits, you must do it on a row-by-row basis, taking account of the stride reported back from the LockRect call in D3DLOCKED_RECT.Pitch.
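For illustration, a minimal pitch-aware fill might look something like this. This is just a sketch: it assumes the texture really is D3DFMT_A8R8G8B8 and that compositeMask is laid out as MAX_IMAGE_WIDTH x MAX_IMAGE_HEIGHT ARGB values, which I am only inferring from your code.
D3DLOCKED_RECT rect;
HRESULT hr = pTexture->LockRect( 0, &rect, NULL, 0 );
if ( SUCCEEDED( hr ) )
{
    // Walk the surface row by row; the start of each row is Pitch bytes
    // after the previous one, which is not necessarily width * 4.
    BYTE* rowStart = static_cast<BYTE*>( rect.pBits );
    for ( UINT y = 0; y < MAX_IMAGE_HEIGHT; ++y )
    {
        ARGB* row = reinterpret_cast<ARGB*>( rowStart );
        for ( UINT x = 0; x < MAX_IMAGE_WIDTH; ++x )
        {
            ARGB src = compositeMask[ y * MAX_IMAGE_WIDTH + x ];
            if ( src == BlackColor )
            {
                row[x] = src;
            }
        }
        rowStart += rect.Pitch; // advance by the reported pitch
    }
    pTexture->UnlockRect( 0 );
}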
Perhaps you could update your question with the results of the above and I can amend this answer as necessary.
This was mind-numbingly stupid. Sorry everyone.
I followed the texture pointer all the way through the code. The LPDIRECT3DTEXTURE9 pointers are actually stored inside another custom Texture class with extra contextual data attached, and those wrapper objects are members of another class that was being copied and used all over the place - yet no assignment operator or copy constructor had ever been written for it. At some point, out of the huge list of textures being processed, one of the textures handed out by the container class turned out to be invalid because it genuinely was: it was supposed to contain a copy of another texture, but held only an invalid pointer.
Sorry for the unfortunate amateur error everyone, but thank you all for the great pointers and assurance.
A little background: I'm attempting to make a Windows 10 application which makes the screen look like an old CRT monitor, scanlines, blur, and all. I'm using the official Microsoft screen capture demo as a starting point. At this stage I can capture a window and display it back in a new mouse-through window as if it were the original window.
I am attempting to use the CRT-Royale CRT shaders, which are generally considered the best CRT shaders; these are available in .cg format. I transpile them to HLSL with cgc, then compile the HLSL files to shader byte code with fxc. I am able to successfully load the compiled shaders and create the pixel shader. I then set the pixel shader in the D3D context, and attempt to copy the capture surface frame to a pixel shader resource and set the created shader's resource. All of this builds and runs, but I do not see any difference in the output image and am not sure how to proceed. Below is the relevant code. I am not a C++ developer and am making this as a personal project which I plan on open sourcing once I have a primitive working version. Any advice is appreciated, thanks.
SimpleCapture::SimpleCapture(
IDirect3DDevice const& device,
GraphicsCaptureItem const& item)
{
m_item = item;
m_device = device;
// Set up
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
d3dDevice->GetImmediateContext(m_d3dContext.put());
auto size = m_item.Size();
m_swapChain = CreateDXGISwapChain(
d3dDevice,
static_cast<uint32_t>(size.Width),
static_cast<uint32_t>(size.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
2);
// ADDED THIS
HRESULT hr1 = D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer);
HRESULT hr = d3dDevice->CreatePixelShader(
ps_1_buffer->GetBufferPointer(),
ps_1_buffer->GetBufferSize(),
nullptr,
&ps_1
);
m_d3dContext->PSSetShader(
ps_1,
nullptr,
0
);
// END OF ADDED CHANGES
// Create framepool, define pixel format (DXGI_FORMAT_B8G8R8A8_UNORM), and frame size.
m_framePool = Direct3D11CaptureFramePool::Create(
m_device,
DirectXPixelFormat::B8G8R8A8UIntNormalized,
2,
size);
m_session = m_framePool.CreateCaptureSession(m_item);
m_lastSize = size;
m_frameArrived = m_framePool.FrameArrived(auto_revoke, { this, &SimpleCapture::OnFrameArrived });
}
void SimpleCapture::OnFrameArrived(
Direct3D11CaptureFramePool const& sender,
winrt::Windows::Foundation::IInspectable const&)
{
auto newSize = false;
{
auto frame = sender.TryGetNextFrame();
auto frameContentSize = frame.ContentSize();
if (frameContentSize.Width != m_lastSize.Width ||
frameContentSize.Height != m_lastSize.Height)
{
// The thing we have been capturing has changed size.
// We need to resize our swap chain first, then blit the pixels.
// After we do that, retire the frame and then recreate our frame pool.
newSize = true;
m_lastSize = frameContentSize;
m_swapChain->ResizeBuffers(
2,
static_cast<uint32_t>(m_lastSize.Width),
static_cast<uint32_t>(m_lastSize.Height),
static_cast<DXGI_FORMAT>(DirectXPixelFormat::B8G8R8A8UIntNormalized),
0);
}
{
auto frameSurface = GetDXGIInterfaceFromObject<ID3D11Texture2D>(frame.Surface());
com_ptr<ID3D11Texture2D> backBuffer;
check_hresult(m_swapChain->GetBuffer(0, guid_of<ID3D11Texture2D>(), backBuffer.put_void()));
// ADDED THIS
D3D11_TEXTURE2D_DESC txtDesc = {};
txtDesc.MipLevels = txtDesc.ArraySize = 1;
txtDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
txtDesc.SampleDesc.Count = 1;
txtDesc.Usage = D3D11_USAGE_IMMUTABLE;
txtDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
auto d3dDevice = GetDXGIInterfaceFromObject<ID3D11Device>(m_device);
ID3D11Texture2D *tex;
d3dDevice->CreateTexture2D(&txtDesc, NULL,
&tex);
frameSurface.copy_to(&tex);
d3dDevice->CreateShaderResourceView(
tex,
nullptr,
srv_1
);
auto texture = srv_1;
m_d3dContext->PSSetShaderResources(0, 1, texture);
// END OF ADDED CHANGES
m_d3dContext->CopyResource(backBuffer.get(), frameSurface.get());
}
}
DXGI_PRESENT_PARAMETERS presentParameters = { 0 };
m_swapChain->Present1(1, 0, &presentParameters);
... // Truncated
Shaders define how things are drawn. However, you don't draw anything - you just copy, which is why the shader doesn't do anything.
What you should do is remove the CopyResource call and instead draw a full screen quad on the back buffer (which requires you to create a vertex buffer that you can bind, then set the back buffer as render target, and finally call Draw/DrawIndexed to actually render something, which will then invoke the shader).
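To make that concrete, the draw path could look roughly like the sketch below. This is not the sample's code: m_vertexBuffer, m_inputLayout, m_vertexShader and the Vertex struct are hypothetical objects you would still need to create yourself, and srv_1 is assumed here to be a plain ID3D11ShaderResourceView* for the captured frame.
// Sketch only - replaces the CopyResource call so the pixel shader actually runs.
com_ptr<ID3D11RenderTargetView> rtv;
check_hresult(d3dDevice->CreateRenderTargetView(backBuffer.get(), nullptr, rtv.put()));
ID3D11RenderTargetView* targets[] = { rtv.get() };
m_d3dContext->OMSetRenderTargets(1, targets, nullptr);

D3D11_VIEWPORT vp = {};
vp.Width = static_cast<float>(m_lastSize.Width);
vp.Height = static_cast<float>(m_lastSize.Height);
vp.MaxDepth = 1.0f;
m_d3dContext->RSSetViewports(1, &vp);

UINT stride = sizeof(Vertex);   // Vertex: hypothetical position + texcoord struct
UINT offset = 0;
m_d3dContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
m_d3dContext->IASetInputLayout(m_inputLayout);
m_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
m_d3dContext->VSSetShader(m_vertexShader, nullptr, 0);
m_d3dContext->PSSetShader(ps_1, nullptr, 0);
m_d3dContext->PSSetShaderResources(0, 1, &srv_1);
m_d3dContext->Draw(4, 0);       // 4 vertices = full-screen quad as a triangle strip
In a real implementation you would create the render target view, vertex buffer and input layout once up front rather than every frame.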
Also - since I'm not sure whether you already do this and just stripped it from the shown code - functions like CreatePixelShader don't return HRESULTs just for the fun of it - you should check what is actually returned, because DirectX silently returns most errors and expects you to handle them, instead of crashing your program.
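For instance, just as a sketch, using the check_hresult helper the capture sample already uses elsewhere:
check_hresult(D3DReadFileToBlob(L"crt-royale-first-pass-ps_4_0.fxc", &ps_1_buffer));
check_hresult(d3dDevice->CreatePixelShader(
    ps_1_buffer->GetBufferPointer(),
    ps_1_buffer->GetBufferSize(),
    nullptr,
    &ps_1));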
I am coding a 2D Game using DirectX11 and DirectXTK.
I wrote a Framework class that initializes both the window displayed for the game and DirectX. These initializations work correctly. Then I decided to draw some backgrounds, etc. in the window, but after a while it exits on an exception. I added a try { ... } catch () { } block, which tells me that "Texture cannot be null". However, I could not find which texture it is talking about, even by debugging and checking all the values.
I decided to separate the different elements I was drawing in the window, to see where the problem might come from... So now I have 3 draw methods:
Draw(DWORD *elapsedTime);
DrawBackground(DWORD *elapsedTime);
DrawCharacter(DWORD *elapsedTime);
The Draw(DWORD *elapsedTime) method calls both the DrawBackground() and DrawCharacter() methods.
Here is my Draw Method :
void Framework::Draw(DWORD * elapsedTime)
{
// Clearing the Back Buffer
immediateContext->ClearRenderTargetView(renderTargetView, Colors::Aquamarine);
//Clearing the depth buffer to max depth (1.0)
immediateContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0); //immediateContext is a ID3D11DeviceContext*
CommonStates states(d3dDevice); //d3dDevice is a ID3D11Device*
sprites.reset(new SpriteBatch(immediateContext));
sprites->Begin(SpriteSortMode_Deferred, states.NonPremultiplied());
DrawBackground1(elapsedTime);
DrawCharacter(elapsedTime);
sprites->End();
//Presenting the back buffer to the front buffer
swapChain->Present(0, 0);
}
By debugging I am almost sure that the exception comes from both DrawBackground() and DrawCharacter(). Indeed, when I comment those out in the Draw method I get no error, but as soon as I put one back in, the exception is thrown after displaying what I want for a few seconds.
Here is the DrawBackground() method, for example:
void Framework::DrawBackground1(DWORD * elpasedTime)
{
RECT *try1 = new RECT();
try1->top = 0; try1->left = 0; try1->right = (int)WIDTH; try1->bottom = (int)HEIGHT;
ID3D11ShaderResourceView * texture2 = nullptr;
ID3D11ShaderResourceView * textureRV = nullptr;
CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds", nullptr, &textureRV);
CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds", nullptr, &texture2);
sprites->Draw(textureRV, XMFLOAT2(0, 0), try1, Colors::White);
sprites->Draw(texture2, XMFLOAT2(0, 0), try1, Colors::CornflowerBlue);
}
So as soon as I uncomment this method (or DrawCharacter(), which follows the same steps), the window displays what I expect for a few seconds, but then I get the exception "Texture cannot be null". I also noticed that DrawCharacter() lets the window keep displaying what I want for longer than DrawBackground(), whose texture is way bigger than the character's.
I'm not sure if this information is useful, but maybe this is linked to the size of the texture?
Would you notice anything that I did wrong in this code? Why would a texture be considered null while it is displayed for a while? I've been looking for answers for a few hours now, some help would be amazing please!
Thank you
I noticed that you create two new ID3D11ShaderResourceView objects every iteration without Release-ing the old ones. You could try creating the ShaderResourceViews only once and storing them as member (or global) variables, or you could try Release()-ing them after the sprites->Draw(...) calls.
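Just to illustrate the first option, loading once could look roughly like this. It is only a sketch: the member names and the LoadTextures() method are placeholders I made up, not something from your code.
// Hypothetical members, created once and reused every frame.
ID3D11ShaderResourceView* m_backgroundSRV = nullptr;
ID3D11ShaderResourceView* m_tilesSRV = nullptr;

void Framework::LoadTextures()
{
    HRESULT hr = CreateDDSTextureFromFile(d3dDevice,
        L"../Images/backgrounds/set2_background.dds", nullptr, &m_backgroundSRV);
    if (FAILED(hr)) { /* handle error */ }

    hr = CreateDDSTextureFromFile(d3dDevice,
        L"../Images/backgrounds/set3_tiles.dds", nullptr, &m_tilesSRV);
    if (FAILED(hr)) { /* handle error */ }
}

void Framework::DrawBackground1(DWORD* elapsedTime)
{
    RECT fullRect = { 0, 0, (LONG)WIDTH, (LONG)HEIGHT };   // left, top, right, bottom
    sprites->Draw(m_backgroundSRV, XMFLOAT2(0, 0), &fullRect, Colors::White);
    sprites->Draw(m_tilesSRV, XMFLOAT2(0, 0), &fullRect, Colors::CornflowerBlue);
}
And remember to Release() both views (and free any leftover RECT allocations) when the Framework shuts down.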
I want to understand DXGI Desktop Duplication. I have read a lot, and this is the code I copied from parts of the DesktopDuplication sample on the Microsoft website. My plan is to get the buffer or array from the desktop image, because I want to make a new texture for another program. I hope somebody can explain what I need to do to get it.
void DesktopDublication::GetFrame(_Out_ FRAME_DATA* Data, _Out_ bool* Timeout)
{
IDXGIResource* DesktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO FrameInfo;
// Get new frame
HRESULT hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);
if (hr == DXGI_ERROR_WAIT_TIMEOUT)
{
*Timeout = true;
}
*Timeout = false;
if (FAILED(hr))
{
}
// If still holding old frame, destroy it
if (m_AcquiredDesktopImage)
{
m_AcquiredDesktopImage->Release();
m_AcquiredDesktopImage = nullptr;
}
// QI for IDXGIResource
hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void **>(&m_AcquiredDesktopImage));
DesktopResource->Release();
DesktopResource = nullptr;
if (FAILED(hr))
{
}
// Get metadata
if (FrameInfo.TotalMetadataBufferSize)
{
// Old buffer too small
if (FrameInfo.TotalMetadataBufferSize > m_MetaDataSize)
{
if (m_MetaDataBuffer)
{
delete[] m_MetaDataBuffer;
m_MetaDataBuffer = nullptr;
}
m_MetaDataBuffer = new (std::nothrow) BYTE[FrameInfo.TotalMetadataBufferSize];
if (!m_MetaDataBuffer)
{
m_MetaDataSize = 0;
Data->MoveCount = 0;
Data->DirtyCount = 0;
}
m_MetaDataSize = FrameInfo.TotalMetadataBufferSize;
}
UINT BufSize = FrameInfo.TotalMetadataBufferSize;
// Get move rectangles
hr = m_DeskDupl->GetFrameMoveRects(BufSize, reinterpret_cast<DXGI_OUTDUPL_MOVE_RECT*>(m_MetaDataBuffer), &BufSize);
if (FAILED(hr))
{
Data->MoveCount = 0;
Data->DirtyCount = 0;
}
Data->MoveCount = BufSize / sizeof(DXGI_OUTDUPL_MOVE_RECT);
BYTE* DirtyRects = m_MetaDataBuffer + BufSize;
BufSize = FrameInfo.TotalMetadataBufferSize - BufSize;
// Get dirty rectangles
hr = m_DeskDupl->GetFrameDirtyRects(BufSize, reinterpret_cast<RECT*>(DirtyRects), &BufSize);
if (FAILED(hr))
{
Data->MoveCount = 0;
Data->DirtyCount = 0;
}
Data->DirtyCount = BufSize / sizeof(RECT);
Data->MetaData = m_MetaDataBuffer;
}
Data->Frame = m_AcquiredDesktopImage;
Data->FrameInfo = FrameInfo;
}
If I'm understanding you correctly, you want to get the current desktop image, duplicate it into a private texture, and then render that private texture onto your window. I would start by reading up on Direct3D 11 and learning how to render a scene, as you will need D3D to do anything with the texture object you get from DXGI. This, this, and this can get you started on D3D11. I would also spend some time reading through the source of the sample you copied your code from, as it completely explains how to do this. Here is the link to the full source code for that sample.
To actually get the texture data and render it out, you need to do the following:
1). Create a D3D11 Device object and a Device Context.
2). Write and compile a Vertex and Pixel shader for the graphics card, then load them into your application.
3). Create an Input Layout object and set it to the device.
4). Initialize the required Blend, Depth-Stencil, and Rasterizer states for the device.
5). Create a Texture object and a Shader Resource View object.
6). Acquire the Desktop Duplication texture using the above code.
7). Use CopyResource to copy the data into your texture.
8). Render that texture to the screen.
This will capture all data displayed on one of the desktops to your texture. It does not do processing on the dirty rects of the desktop. It does not do processing on moved regions. This is bare bones 'capture the desktop and display it elsewhere' code.
If you want to get more in depth, read the linked resources and study the sample code, as the sample basically does what you're asking for.
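For steps 6 and 7 specifically, a condensed sketch might look like this (error handling and the resize case are omitted; deviceContext and myTexture - a texture you created with the same size and format as the desktop image - are assumed to exist already):
IDXGIResource* desktopResource = nullptr;
DXGI_OUTDUPL_FRAME_INFO frameInfo;
if (SUCCEEDED(m_DeskDupl->AcquireNextFrame(500, &frameInfo, &desktopResource)))
{
    ID3D11Texture2D* desktopImage = nullptr;
    desktopResource->QueryInterface(__uuidof(ID3D11Texture2D),
                                    reinterpret_cast<void**>(&desktopImage));
    desktopResource->Release();

    // Step 7: copy the acquired desktop image into a texture you own...
    deviceContext->CopyResource(myTexture, desktopImage);
    // ...then render myTexture (step 8) before acquiring the next frame.

    desktopImage->Release();
    m_DeskDupl->ReleaseFrame();
}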
Since tacking this onto my last answer didn't feel quite right, I decided to create a second.
If you want to read the desktop data to a file, you need a D3D11 Device object, a texture object with the D3D11_USAGE_STAGING flag set, and a method of converting the RGBA pixel data of the desktop texture to whatever it is you want. The basic procedure is a simplified version of the one in my original answer:
1). Create a D3D11 Device object and a Device Context.
2). Create a Staging Texture with the same format as the Desktop Texture.
3). Use CopyResource to copy the Desktop Texture into your Staging Texture.
4). Use ID3D11DeviceContext::Map() to get a pointer to the data contained in the Staging Texture.
Make sure you know how Map works and make sure you can write out image files from a single binary stream. There may also be padding in the image buffer, so be aware you may also need to filter that out. Additionally, make sure you Unmap the buffer instead of calling free, as the buffer given to you almost certainly does not belong to the CRT.
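A rough sketch of that readback path, assuming the device, the immediate context and the acquired desktopTexture already exist (those names are placeholders):
D3D11_TEXTURE2D_DESC desc;
desktopTexture->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

ID3D11Texture2D* staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);
context->CopyResource(staging, desktopTexture);

D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // mapped.RowPitch may be larger than desc.Width * 4, so copy row by row.
    for (UINT y = 0; y < desc.Height; ++y)
    {
        const BYTE* row = static_cast<const BYTE*>(mapped.pData) + y * mapped.RowPitch;
        // ... write desc.Width * 4 bytes of this row to your file/buffer ...
    }
    context->Unmap(staging, 0);
}
staging->Release();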
I am currently developing a little screenshot application which records both of my screens' desktops to a file.
I am using the GetFrontBufferData() function and it is working great.
Unfortunately, when changing the screen color depth from 32 to 16 bits (to perform some tests) I get a bad image (a purple image with the wrong resolution), and the recorded screenshot has very poor quality.
Does someone know if there is a way to use GetFrontBufferData() with a 16-bit screen?
edit:
My init direct3D:
ZeroMemory(&d3dPresentationParameters,sizeof(D3DPRESENT_PARAMETERS));//Fills a block of memory with zeros.
d3dPresentationParameters.Windowed = TRUE;
d3dPresentationParameters.Flags = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER;
d3dPresentationParameters.BackBufferFormat = d3dFormat;//d3dDisplayMode.Format;//D3DFMT_A8R8G8B8;
d3dPresentationParameters.BackBufferCount = 1;
d3dPresentationParameters.BackBufferHeight = gScreenRect.bottom = uiHeight;
d3dPresentationParameters.BackBufferWidth = gScreenRect.right = uiWidth;
d3dPresentationParameters.MultiSampleType = D3DMULTISAMPLE_NONE;
d3dPresentationParameters.MultiSampleQuality = 0;
d3dPresentationParameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dPresentationParameters.hDeviceWindow = hWnd;
d3dPresentationParameters.PresentationInterval = D3DPRESENT_INTERVAL_DEFAULT;
d3dPresentationParameters.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
The thread I use to capture screenshots:
CreateOffscreenPlainSurface(uiWidth, uiHeight, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, pBackBuffer, NULL)) != D3D_OK )
{
DBG("Error: CreateOffscreenPlainSurface failed = 0x%x", iRes);
break;
}
GetFrontBufferData(0, pCaptureSurface)) != D3D_OK)
{
DBG("Error: GetFrontBufferData failed = 0x%x", iRes);
break;
}
//D3DXSaveSurfaceToFile("Desktop.bmp", D3DXIFF_BMP, pBackBuffer,NULL, NULL); //Test purposes
ZeroMemory(lockedRect, sizeof(D3DLOCKED_RECT));
LockRect(lockedRect, NULL, D3DLOCK_READONLY)) != D3D_OK )
{
DBG("Error: LockRect failed = 0x%x", iRes);
break;
}
if( (iRes = UnlockRect()) != D3D_OK )
{
DBG("Error: UnlockRect failed = 0x%x", iRes);
break;
}
/**/
This code works perfectly with 32-bit color depth, but not with 16-bit.
When creating the device, I create 2 devices for the two screens (iScreenNber). This also works in 32-bit (not in 16).
When saving the captured screenshots into 2 bmp files for testing (in 16-bit), I have one screen which represents the main display perfectly, and the other screen is black.
When using memcpy to copy pData, I get the screenshot above, with the purple color and bad resolution.
edit2:
I noticed the following:
When saving Offscreen surface to a BMP file, I get the main display (on 1.bmp) which is refreshed each frame (so it is working just fine). For the second display, I just get the first frame then nothing more.
Quoting MSDN for GetFrontBufferData "The buffer pointed to by pDestSurface will be filled with a representation of the front buffer, converted to the standard 32 bits per pixel format D3DFMT_A8R8G8B8." I guess this is a problem for 16 bits color depth.
The first problem comes from the memcpy, which does not properly handle the 16-bit color depth, and I still don't know why - help needed for this!
The second problem is the second display, which is not working, and I don't know why either.
What am I doing wrong here? I just get a black image in my Desktop N°xx.bmp file.
Thank you very much for your help.
This is how I create a surface to capture screenshots:
IDirect3DSurface9* pCaptureSurface = NULL;
HRESULT hr = pD3DDevice->CreateOffscreenPlainSurface(
D3DPresentParams.BackBufferWidth,
D3DPresentParams.BackBufferHeight,
D3DPresentParams.BackBufferFormat,
D3DPOOL_SYSTEMMEM,
&pCaptureSurface,
NULL);
pD3DDevice->GetFrontBufferData(0, pCaptureSurface);
If you didn't store D3DPresentParams anywhere, you can use IDirect3DDevice9::GetDisplayMode to obtain the width, height and format of your swap chain. All the resizing and format conversion can be performed after capturing the front buffer. Also, as far as I know, the display format doesn't support an alpha channel, so it is typically D3DFMT_X8R8G8B8, not D3DFMT_A8R8G8B8.
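For example (sketch only):
D3DDISPLAYMODE mode;
if (SUCCEEDED(pD3DDevice->GetDisplayMode(0, &mode)))
{
    // mode.Width, mode.Height and mode.Format describe the current display settings.
}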
Update:
Actually, you are trying to capture the whole screen with a D3D device without rendering anything. The purpose of D3D/OpenGL is to create or process images with GPU acceleration. Taking a screenshot is just copying some video memory; it doesn't use the GPU's power, so using a GPU API brings no significant gain. Moreover, as you have seen, strange things happen when you capture a front buffer that you didn't render yourself. To extend your app you could capture the image with GDI, then load it into a texture and do any GPU post-processing there.
So I found some answers to my problem.
1) The second monitor wasn't working and I was unable to capture a screenshot from it in 16 bits
This comes from the memcpy(..) line in the code. Because I am working with a 16-bit monitor, the surface memory gets corrupted when executing the memcpy, and this leads to a black screen.
I still didn't find the solution for this but I'm working on.
2) The colors of the screenshot are wrong
This is, unsurprisingly, due to the 16-bit color depth. Because I am using GetFrontBufferData, and I am quoting Microsoft: "The buffer pointed to by pDestSurface will be filled with a representation of the front buffer, converted to the standard 32 bits per pixel format D3DFMT_A8R8G8B8." This means that if I want to use the pixel data from LockRect(...), I have to "re-convert" it back to 16-bit mode. Therefore, I need to convert my pData from D3DFMT_A8R8G8B8 to D3DFMT_R5G6B5, which is pretty simple.
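In case it helps anyone, the repacking can be done per pixel roughly like this (sketch; pSrc points at the locked 32-bit data, pDst at a 16-bit destination buffer - both are placeholder names):
// Pack one D3DFMT_A8R8G8B8 pixel into D3DFMT_R5G6B5.
inline WORD ARGB32ToRGB565(DWORD argb)
{
    const DWORD r = (argb >> 16) & 0xFF;
    const DWORD g = (argb >> 8)  & 0xFF;
    const DWORD b =  argb        & 0xFF;
    return static_cast<WORD>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

void ConvertRow(const DWORD* pSrc, WORD* pDst, UINT pixelCount)
{
    for (UINT i = 0; i < pixelCount; ++i)
        pDst[i] = ARGB32ToRGB565(pSrc[i]);
}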
3) How to debug the application ?
Thanks to your comments, I've been told that I should analyze the pScreenInfo->pData content when running in 16-bit mode (thanks to Niello). Therefore, I've created a simple method that takes the raw data from pScreenInfo->pData and copies it into a .bmp:
HRESULT hr;
DWORD dwBytesRead;
UINT uiSize = 1920 * 1080 * 4;
HANDLE hFile;
hFile = CreateFile(TEXT("data.raw"), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
BOOL bOk = ReadFile(hFile, pData, uiSize, &dwBytesRead, NULL);
if(!bOk)
exit(0);
pTexture = NULL;
hr = pScreenInfo->g_pD3DDevice->CreateTexture(width, height, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pTexture, NULL);
D3DLOCKED_RECT lockedRect;
hr = pTexture->LockRect(0, &lockedRect, NULL, D3DLOCK_READONLY);
memcpy(lockedRect.pBits, pData, lockedRect.Pitch * height);
hr = pTexture->UnlockRect(0);
hr = D3DXSaveTextureToFile(test, D3DXIFF_BMP, pTexture,NULL);
bOk = CloseHandle(hFile);
SAFE_RELEASE(pTexture);
This piece of code allowed me to verify that the pData data was correct and that I could get a good .bmp file at the end, which means that GetFrontBufferData(...) was working correctly and the problem comes from the memcpy(...).
4) Remaining problems
I am still trying to figure out how to solve the memcpy issue and see where the problem comes from. This is the last problem, since the colors are good now (thanks to the 32-bit to 16-bit conversion).
Thanks everybody for your helpful comments!
I am fairly new to DirectX 10 programming, and I have been trying to do the following with my limited skills (though I have a strong background in OpenGL).
I am trying to display 2 different textured quads, 1 per monitor. To do so, I understood that I need a single D3D10 device, multiple (2) swap chains, and a single vertex buffer.
While I think I'm able to create all of those, I'm still pretty unsure how to handle all of them. Do I need multiple ID3D10RenderTargetView(s)? How and where should I use OMSetRenderTargets(...)?
Other than MSDN, documentation or explanation of those concepts is rather limited, so any help would be very welcome. Here is some code I have:
Here's the rendering code
for(int i = 0; i < screenNumber; i++){
//clear scene
pD3DDevice->ClearRenderTargetView( pRenderTargetView, D3DXCOLOR(0,1,0,0) );
//fill vertex buffer with vertices
UINT numVertices = 4;
vertex* v = NULL;
//lock vertex buffer for CPU use
pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v );
v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1), D3DXVECTOR2(0.0f, 1.0f) );
v[1] = vertex( D3DXVECTOR3(-1,1,0), D3DXVECTOR4(0,1,0,1), D3DXVECTOR2(0.0f, 0.0f) );
v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1), D3DXVECTOR2(1.0f, 1.0f) );
v[3] = vertex( D3DXVECTOR3(1,1,0), D3DXVECTOR4(1,1,0,1), D3DXVECTOR2(1.0f, 0.0f) );
pVertexBuffer->Unmap();
// Set primitive topology
pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
//set texture
pTextureSR->SetResource( textureSRV[textureIndex] );
//get technique desc
D3D10_TECHNIQUE_DESC techDesc;
pBasicTechnique->GetDesc( &techDesc );
// This is where you actually use the shader code
for( UINT p = 0; p < techDesc.Passes; ++p )
{
//apply technique
pBasicTechnique->GetPassByIndex( p )->Apply( 0 );
//draw
pD3DDevice->Draw( numVertices, 0 );
}
//flip buffers
pSwapChain[i]->Present(0,0);
}
And here's the code for creating rendering targets, which I am not sure is good
for(int i = 0; i < screenNumber; ++i){
//try to get the back buffer
ID3D10Texture2D* pBackBuffer;
if ( FAILED( pSwapChain[1]->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*) &pBackBuffer) ) ) return fatalError("Could not get back buffer");
//try to create render target view
if ( FAILED( pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView) ) ) return fatalError("Could not create render target view");
pBackBuffer->Release();
pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL);
}
return true;
}
I hope I got the gist of what you wish to do - render different content on two different monitors while using a single graphics card (graphics adapter) which maps its output to those monitors. For that, you're going to need one device (for the single graphics card/adapter) and enumerate just how many outputs there are at the user's machine.
So, in total - that means one device, two outputs, two windows and therefore - two swap chains.
Here's a quick result of my little experiment:
A little introduction
With DirectX 10+, this falls into the DXGI (DirectX Graphics Infrastructure) which manages the common low-level logistics involved with DirectX 10+ development which, as you probably know, dumped the old requirement of enumerating feature sets and the like - requiring every DX10+ capable card to share in on all of the features defined by the API. The only thing that varies is the extent and capability of the card (in other words, lousy performance is preferable to the app crashing and burning). This was all within DirectX 9 in the past, but people at Microsoft decided to pull it out and call it DXGI. Now, we can use DXGI functionality to set up our multi monitor environment.
Do I need multiple ID3D10RenderTargetView(s) ?
Yes, you do need multiple render target views; the count depends (like the swap chains and windows) on the number of monitors you have. But rather than spewing words, let's write it out as simply as possible, with additional information where it's needed:
Enumerate all adapters available on the system.
For each adapter, enumerate all outputs available (and active) and create a device to accompany it.
With the enumerated data stored in a suitable structure (think arrays which can quickly relinquish size information), use it to create n windows, swap chains, render target views, depth/stencil textures and their respective views where n is equal to the number of outputs.
With everything created, for each window you are rendering into, you can define special routines which will use the available geometry (and other) data to output your results - which resolves to what each monitor gets in fullscreen (don't forget to adjust the viewport for every window accordingly).
Present your data by iterating over every swap chain which is linked to its respective window and swap buffers with Present()
Now, while this is rich in word count, some code is worth a lot more. This is designed to give you a coarse idea of what goes into implementing a simple multimonitor application. So, the assumptions are that there is only one adapter (a rather bold statement nowadays) and multiple outputs - and no failsafes. I'll leave the fun part to you. The answer to the second question is downstairs...
Do note there's no memory management involved. We assume everything magically gets cleaned up when it is not needed for illustration purposes. Be a good memory citizen.
Getting the adapter
IDXGIAdapter* adapter = NULL;
void GetAdapter() // applicable for multiple ones with little effort
{
// remember, we assume there's only one adapter (example purposes)
for( int i = 0; DXGI_ERROR_NOT_FOUND != factory->EnumAdapters( i, &adapter ); ++i )
{
// get the description of the adapter, assuming no failure
DXGI_ADAPTER_DESC adapterDesc;
HRESULT hr = adapter->GetDesc( &adapterDesc );
// Getting the outputs active on our adapter
EnumOutputsOnAdapter();
    }
}
Acquiring the outputs on our adapter
std::vector<IDXGIOutput*> outputArray; // contains outputs per adapter
void EnumOutputsOnAdapter()
{
IDXGIOutput* output = NULL;
for(int i = 0; DXGI_ERROR_NOT_FOUND != adapter->EnumOutputs(i, &output); ++i)
{
// get the description
DXGI_OUTPUT_DESC outputDesc;
HRESULT hr = output->GetDesc( &outputDesc );
outputArray.push_back( output );
}
}
Now, I must assume that you're at least aware of the Win32 API considerations, creating window classes, registering with the system, creating windows, etc... Therefore, I will not qualify its creation, only elaborate how it pertains to multiple windows. Also, I will only consider the fullscreen case here, but creating it in windowed mode is more than possible and rather trivial.
Creating the actual windows for our outputs
Since we assume existence of just one adapter, we only consider the enumerated outputs linked to that particular adapter. It would be preferable to organize all window data in neat little structures, but for the purposes of this answer, we'll just shove them into a simple struct and then into yet another std::vector object, and by them I mean handles to respective windows (HWND) and their size (although for our case it's constant).
But still, we have to address the fact that we have one swap chain, one render target view, one depth/stencil view per window. So, why not feed all of that in that little struct which describes each of our windows? Makes sense, right?
struct WindowDataContainer
{
//Direct3D 10 stuff per window data
IDXGISwapChain* swapChain;
ID3D10RenderTargetView* renderTargetView;
ID3D10DepthStencilView* depthStencilView;
// window goodies
HWND hWnd;
int width;
int height;
};
Nice. Well, not really. But still... Moving on! Now to create the windows for outputs:
std::vector<WindowDataContainer*> windowsArray;
void CreateWindowsForOutputs()
{
for( int i = 0; i < outputArray.size(); ++i )
{
IDXGIOutput* output = outputArray.at(i);
DXGI_OUTPUT_DESC outputDesc;
output->GetDesc( &outputDesc );
int x = outputDesc.DesktopCoordinates.left;
int y = outputDesc.DesktopCoordinates.top;
int width = outputDesc.DesktopCoordinates.right - x;
int height = outputDesc.DesktopCoordinates.bottom - y;
// Don't forget to clean this up. And all D3D COM objects.
WindowDataContainer* window = new WindowDataContainer;
window->hWnd = CreateWindow( windowClassName,
windowName,
WS_POPUP,
x,
y,
width,
height,
NULL,
0,
instance,
NULL );
// show the window
ShowWindow( window->hWnd, SW_SHOWDEFAULT );
// set width and height
window->width = width;
window->height = height;
// shove it in the std::vector
windowsArray.push_back( window );
//if first window, associate it with DXGI so it can jump in
// when there is something of interest in the message queue
// think fullscreen mode switches etc. MSDN for more info.
if(i == 0)
factory->MakeWindowAssociation( window->hWnd, 0 );
}
}
Cute, now that's done. Since we only have one adapter and therefore only one device to accompany it, create it as usual. In my case, it's simply a global interface pointer which can be accessed all over the place. We are not going for code of the year here, so why the hell not, eh?
Creating the swap chains, views and the depth/stencil 2D texture
Now, our friendly swap chains... You might be used to creating them by invoking the "naked" function D3D10CreateDeviceAndSwapChain(...), but as you know, we've already made our device. We only want one. And multiple swap chains. Well, that's a pickle. Luckily, our DXGIFactory interface has swap chains on its production line which we can receive for free, with complimentary kegs of rum. Onto the swap chains then - create one for every window:
void CreateSwapChainsAndViews()
{
for( int i = 0; i < windowsArray.size(); i++ )
{
WindowDataContainer* window = windowsArray.at(i);
// get the dxgi device
IDXGIDevice* DXGIDevice = NULL;
device->QueryInterface( IID_IDXGIDevice, ( void** )&DXGIDevice ); // COM stuff, hopefully you are familiar
// create a swap chain
DXGI_SWAP_CHAIN_DESC swapChainDesc;
// fill it in
HRESULT hr = factory->CreateSwapChain( DXGIDevice, &swapChainDesc, &window->swapChain );
DXGIDevice->Release();
DXGIDevice = NULL;
// get the backbuffer
ID3D10Texture2D* backBuffer = NULL;
hr = window->swapChain->GetBuffer( 0, IID_ID3D10Texture2D, ( void** )&backBuffer );
// get the backbuffer desc
D3D10_TEXTURE2D_DESC backBufferDesc;
backBuffer->GetDesc( &backBufferDesc );
// create the render target view
D3D10_RENDER_TARGET_VIEW_DESC RTVDesc;
// fill it in
device->CreateRenderTargetView( backBuffer, &RTVDesc, &window->renderTargetView );
backBuffer->Release();
backBuffer = NULL;
// Create depth stencil texture
ID3D10Texture2D* depthStencil = NULL;
D3D10_TEXTURE2D_DESC descDepth;
// fill it in
device->CreateTexture2D( &descDepth, NULL, &depthStencil );
// Create the depth stencil view
D3D10_DEPTH_STENCIL_VIEW_DESC descDSV;
// fill it in
device->CreateDepthStencilView( depthStencil, &descDSV, &window->depthStencilView );
}
}
We now have everything we need. All that you need to do is define a function which iterates over all windows and draws different stuff appropriately.
How and where should I Use OMSetRenderTargets(...) ?
In the just mentioned function which iterates over all windows and uses the appropriate render target (courtesy of our per-window data container):
void MultiRender( )
{
// Clear them all
for( int i = 0; i < windowsArray.size(); i++ )
{
WindowDataContainer* window = windowsArray.at(i);
// There is the answer to your second question:
device->OMSetRenderTargets( 1, &window->renderTargetView, window->depthStencilView );
// Don't forget to adjust the viewport, in fullscreen it's not important...
D3D10_VIEWPORT Viewport;
Viewport.TopLeftX = 0;
Viewport.TopLeftY = 0;
Viewport.Width = window->width;
Viewport.Height = window->height;
Viewport.MinDepth = 0.0f;
Viewport.MaxDepth = 1.0f;
device->RSSetViewports( 1, &Viewport );
// TO DO: AMAZING STUFF PER WINDOW
}
}
Of course, don't forget to run through all the swap chains and swap buffers per window basis. The code here is just for the purposes of this answer, it requires a bit more work, error checking (failsafes) and contemplation to get it working just the way you like it - in other words - it should give you a simplified overview, not a production solution.
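For completeness, that final presentation pass is just another loop of the same shape (a minimal sketch):
void PresentAll()
{
    for( int i = 0; i < windowsArray.size(); i++ )
    {
        WindowDataContainer* window = windowsArray.at(i);
        // swap the buffers of this window's swap chain
        window->swapChain->Present( 0, 0 );
    }
}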
Good luck and happy coding! Sheesh, this is huge.