My goal is to create a texture atlas in my DirectX application. What I have is a vector of ID2D1PathGeometry objects which need to be put on a texture atlas. So I create an ID2D1Bitmap1, but I have no clue what my next step should be. In other words: how exactly do I lay an ID2D1PathGeometry onto an ID2D1Bitmap1 at the spot I need?
P.S. It's worth mentioning that I'm kind of a newbie in DirectX, and when I try to look for an answer on MSDN I just keep getting lost in everything Direct2D provides you with.
Thank you.
P.P.S. Code requested:
There is not much to show, as I mentioned already.
std::vector<Microsoft::WRL::ComPtr<ID2D1PathGeometry>> atlasGeometries; // so I have my geometries
// ... then I fill the vector
{
....
}
// Creating the bitmap for the font sheet
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_SIZE_U dimensions;
dimensions.height = 1024;
dimensions.width = 1024;
D2D1_BITMAP_PROPERTIES1 d2dbp;
D2D1_PIXEL_FORMAT d2dpf;
FLOAT dpiX;
FLOAT dpiY;
d2dpf.format = DXGI_FORMAT_A8_UNORM;
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
this->dxDevMt.GetD2DFactory()->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.pixelFormat = d2dpf;
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
d2dbp.colorContext = nullptr;
newCtx->CreateBitmap(dimensions, nullptr, 0, d2dbp, bitmap.GetAddressOf());
But what to do next is a mystery to me. I kind of figured out that I should use a render target for this sort of thing, but I failed to figure out how exactly.
The problem was solved by using the bitmap as a render target.
The idea is to create a new D2D device, create a new device context, set the bitmap as the render target, and render everything that needs to be rendered.
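A minimal sketch of that idea, assuming d2dDevice is the ID2D1Device behind newCtx from the question (the bitmap must belong to the same device); the white brush and the fixed 64-pixel cell layout are placeholder choices, not part of the original code:
Microsoft::WRL::ComPtr<ID2D1DeviceContext> atlasCtx;
d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &atlasCtx);

// Direct all subsequent drawing into the atlas bitmap.
atlasCtx->SetTarget(bitmap.Get());

Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> brush;
atlasCtx->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

atlasCtx->BeginDraw();
atlasCtx->Clear(D2D1::ColorF(0.f, 0.f, 0.f, 0.f)); // start from a fully transparent atlas

float xOffset = 0.f;
for (auto& geometry : atlasGeometries)
{
    // Position each geometry in its own atlas cell by translating the
    // context's transform; the cell layout here is a made-up example.
    atlasCtx->SetTransform(D2D1::Matrix3x2F::Translation(xOffset, 0.f));
    atlasCtx->FillGeometry(geometry.Get(), brush.Get());
    xOffset += 64.f; // hypothetical fixed cell width
}

atlasCtx->SetTransform(D2D1::Matrix3x2F::Identity());
atlasCtx->EndDraw();
Translating the transform rather than rebuilding each geometry at its atlas position keeps the path geometries reusable; since the bitmap is DXGI_FORMAT_A8_UNORM, only the brush's alpha ends up in the atlas.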
I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.
In openFrameworks I remember calling setTextureMinMagFilter(GL_NEAREST, GL_NEAREST); on a texture.
What would be the equivalent in Processing?
I tried to use PGL:
PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
but I get a black image as the result.
If I comment out PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; I can see the render, but it's interpolated, not sharp.
Here's a basic test sketch with a few things I've tried:
PGraphics buffer;
PGraphicsOpenGL pgl;

void setup() {
  size(320, 240, P3D);
  noSmooth();
  //hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)g).textureSampling(0);
  //PGL pgl = beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST;
  //PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  //endPGL();
  buffer = createGraphics(width/8, height/8, P3D);
  buffer.noSmooth();
  buffer.beginDraw();
  //buffer.hint(DISABLE_TEXTURE_MIPMAPS);
  //((PGraphicsOpenGL)buffer).textureSampling(0);
  PGL bpgl = buffer.beginPGL();
  //PGL.TEXTURE_MIN_FILTER = PGL.NEAREST; // commenting this back in results in a blank buffer
  PGL.TEXTURE_MAG_FILTER = PGL.NEAREST;
  buffer.endPGL();
  buffer.background(0);
  buffer.stroke(255);
  buffer.line(0, 0, buffer.width, buffer.height);
  buffer.endDraw();
}

void draw() {
  image(buffer, 0, 0, width, height);
}
(I've also posted on the Processing Forum, but no luck so far)
You were actually on the right track; you were just passing the wrong value to textureSampling().
Since the documentation on PGraphicsOpenGL::textureSampling() is a bit scarce, to say the least, I decided to peek into it using a decompiler, which led me to Texture::usingMipmaps().
There I was able to see the values and what they reflect (in the decompiled code):
2 = POINT
3 = LINEAR
4 = BILINEAR
5 = TRILINEAR
PGraphicsOpenGL's default textureSampling is 5 (TRILINEAR). I also later found an old comment on an issue confirming this.
So to get point/nearest filtering, you only need to call noSmooth() on the application itself and call textureSampling() on your PGraphics:
size(320, 240, P3D);
noSmooth();
buffer = createGraphics(width/8, height/8, P3D);
((PGraphicsOpenGL) buffer).textureSampling(2);
Considering the above, and including only the code you used to draw the line and to draw the buffer to the application, that gives the desired result.
I needed to combine both GL_LINEAR and GL_NEAREST in one shader, so ((PGraphicsOpenGL) buffer).textureSampling(2); was not an option.
It took some digging, but this works for me:
PGL pgl = beginPGL();
Texture ascii_map_tex = ((PGraphicsOpenGL)g).getTexture(ascii_map);
pgl.bindTexture(PGL.TEXTURE_2D, ascii_map_tex.glName);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
pgl.texParameteri(PGL.TEXTURE_2D, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);
pgl.bindTexture(PGL.TEXTURE_2D, 0);
endPGL();
Hi, I'm just starting with DirectX under CoreWindow^ + C++/CX; everything seemed to be OK until I wanted to start using multisampling.
This example renders a simple triangle:
Working example without AA
When I fill the swap chain description like this:
UINT m4xMsaaQuality;
dev->CheckMultisampleQualityLevels(DXGI_FORMAT_B8G8R8A8_UNORM, 4, &m4xMsaaQuality);
// set up the swap chain description
DXGI_SWAP_CHAIN_DESC1 scd = { 0 };
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // how the swap chain should be used
scd.BufferCount =2;
scd.Format = DXGI_FORMAT_B8G8R8A8_UNORM; // the most common swap chain format
scd.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // the recommended flip mode
scd.SampleDesc.Count = 4; // >1 enable anti-aliasing
scd.SampleDesc.Quality = m4xMsaaQuality-1;
CoreWindow^ Window = CoreWindow::GetForCurrentThread(); // get the window pointer
// create the swap chain
dxgiFactory->CreateSwapChainForCoreWindow(
dev.Get(), // address of the device
reinterpret_cast<IUnknown*>(Window), // address of the window
&scd, // address of the swap chain description
nullptr, // advanced
&swapchain); // address of the new swap chain pointer
// get a pointer directly to the back buffer
ComPtr<ID3D11Texture2D> backbuffer;
swapchain->GetBuffer(0, __uuidof(ID3D11Texture2D), (&backbuffer));
the call to dxgiFactory->CreateSwapChainForCoreWindow fails and the returned swapchain is nullptr.
I checked that m4xMsaaQuality equals 17, so scd.SampleDesc.Quality = 16.
How should I fill out the swap chain description?
Flip swap effects are not compatible with a multisampling surface. You need to create a non-msaa swap chain and explicitly resolve your msaa render to the swapchain buffers.
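A minimal sketch of that setup, assuming dev/devcon are the existing device and context, the swap chain was created with SampleDesc.Count = 1, and width, height, color, and m4xMsaaQuality come from the surrounding code:
// Offscreen MSAA color target (the swap chain itself stays non-MSAA).
D3D11_TEXTURE2D_DESC msaaDesc = {};
msaaDesc.Width = width;                       // same size as the swap chain
msaaDesc.Height = height;
msaaDesc.MipLevels = 1;
msaaDesc.ArraySize = 1;
msaaDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
msaaDesc.SampleDesc.Count = 4;
msaaDesc.SampleDesc.Quality = m4xMsaaQuality - 1;
msaaDesc.Usage = D3D11_USAGE_DEFAULT;
msaaDesc.BindFlags = D3D11_BIND_RENDER_TARGET;

Microsoft::WRL::ComPtr<ID3D11Texture2D> msaaTarget;
dev->CreateTexture2D(&msaaDesc, nullptr, &msaaTarget);

Microsoft::WRL::ComPtr<ID3D11RenderTargetView> msaaRTV;
dev->CreateRenderTargetView(msaaTarget.Get(), nullptr, &msaaRTV);

// Per frame: render into the MSAA target...
devcon->OMSetRenderTargets(1, msaaRTV.GetAddressOf(), nullptr);
devcon->ClearRenderTargetView(msaaRTV.Get(), color);
// ... draw calls ...

// ...then resolve into the non-MSAA back buffer and present.
Microsoft::WRL::ComPtr<ID3D11Texture2D> backbuffer;
swapchain->GetBuffer(0, IID_PPV_ARGS(&backbuffer));
devcon->ResolveSubresource(backbuffer.Get(), 0, msaaTarget.Get(), 0,
                           DXGI_FORMAT_B8G8R8A8_UNORM);
swapchain->Present(1, 0);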
YES! I finally managed to get multisampling working. I realized that
ResolveSubresource
copies a multisampled resource into a non-multisampled resource.
That is why I don't have to recreate the render target view one more time; I can just reuse mine.
I still have a question: is there some way to increase the anti-aliasing further? The maximum sample count I can put in offScreenSurfaceDesc.SampleDesc.Count is 8 (a quick capability check is sketched after the link below).
Init_DirectX with MSAA enabled
Here is the working solution.
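The usable maximum depends on the device and format; here is a sketch (assuming the existing device dev) that probes which sample counts the hardware actually supports, since counts that CheckMultisampleQualityLevels reports with zero quality levels cannot be used:
// Probe every power-of-two sample count the API allows (up to
// D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT = 32) for a given format.
for (UINT count = 1; count <= D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT; count *= 2)
{
    UINT quality = 0;
    HRESULT hr = dev->CheckMultisampleQualityLevels(
        DXGI_FORMAT_B8G8R8A8_UNORM, count, &quality);
    if (SUCCEEDED(hr) && quality > 0)
    {
        // count is supported; quality levels 0 .. quality-1 are valid.
        OutputDebugStringA("supported sample count found\n");
    }
}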
Thank you for your answer. I read on MSDN about multisampling in UWP apps, but I still can't render the smooth triangle. I changed the swap effect, which should now be correct for a UWP application, and I also used ResolveSubresource.
Can you take a look? The window is completely black, without a triangle. So far I have finished with this code; mostly I suspect the render target, because when I uncomment this:
dev->CreateRenderTargetView(backbuffer.Get(), &renderTargetViewDesc, &rendertarget);
and in the Render method use rendertarget at lines 210 and 214:
[210] devcon->OMSetRenderTargets(1, rendertarget.GetAddressOf(), nullptr);
[214] devcon->ClearRenderTargetView(rendertarget.Get(), color);
then everything goes back to normal (of course without MSAA).
Init_DirectX - render fail
I want to understand DXGI Desktop Duplication. I have read a lot, and this is code I copied from parts of the Desktop Duplication sample on the Microsoft website. My plan is to get the buffer or array from the desktop image, because I want to make a new texture for another program. I hope somebody can explain to me what I need to do to get it.
void DesktopDublication::GetFrame(_Out_ FRAME_DATA* Data, _Out_ bool* Timeout)
{
    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;

    // Get the new frame; give up after 500 ms
    *Timeout = false;
    HRESULT hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        *Timeout = true;
        return; // no new frame within the timeout
    }
    if (FAILED(hr))
    {
        return; // acquisition failed
    }

    // If still holding an old frame, destroy it
    if (m_AcquiredDesktopImage)
    {
        m_AcquiredDesktopImage->Release();
        m_AcquiredDesktopImage = nullptr;
    }

    // QI for ID3D11Texture2D
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void **>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;
    if (FAILED(hr))
    {
        return; // could not get the D3D11 texture interface
    }

    // Get metadata
    if (FrameInfo.TotalMetadataBufferSize)
    {
        // Old buffer too small
        if (FrameInfo.TotalMetadataBufferSize > m_MetaDataSize)
        {
            if (m_MetaDataBuffer)
            {
                delete[] m_MetaDataBuffer;
                m_MetaDataBuffer = nullptr;
            }
            m_MetaDataBuffer = new (std::nothrow) BYTE[FrameInfo.TotalMetadataBufferSize];
            if (!m_MetaDataBuffer)
            {
                m_MetaDataSize = 0;
                Data->MoveCount = 0;
                Data->DirtyCount = 0;
                return; // allocation failed
            }
            m_MetaDataSize = FrameInfo.TotalMetadataBufferSize;
        }

        UINT BufSize = FrameInfo.TotalMetadataBufferSize;

        // Get move rectangles
        hr = m_DeskDupl->GetFrameMoveRects(BufSize, reinterpret_cast<DXGI_OUTDUPL_MOVE_RECT*>(m_MetaDataBuffer), &BufSize);
        if (FAILED(hr))
        {
            Data->MoveCount = 0;
            Data->DirtyCount = 0;
            return;
        }
        Data->MoveCount = BufSize / sizeof(DXGI_OUTDUPL_MOVE_RECT);

        // Dirty rectangles are stored immediately after the move rectangles
        BYTE* DirtyRects = m_MetaDataBuffer + BufSize;
        BufSize = FrameInfo.TotalMetadataBufferSize - BufSize;

        // Get dirty rectangles
        hr = m_DeskDupl->GetFrameDirtyRects(BufSize, reinterpret_cast<RECT*>(DirtyRects), &BufSize);
        if (FAILED(hr))
        {
            Data->MoveCount = 0;
            Data->DirtyCount = 0;
            return;
        }
        Data->DirtyCount = BufSize / sizeof(RECT);
        Data->MetaData = m_MetaDataBuffer;
    }

    Data->Frame = m_AcquiredDesktopImage;
    Data->FrameInfo = FrameInfo;
}
If I'm understanding you correctly, you want to get the current desktop image, duplicate it into a private texture, and then render that private texture onto your window. I would start by reading up on Direct3D 11 and learning how to render a scene, as you will need D3D to do anything with the texture object you get from DXGI. This, this, and this can get you started on D3D11. I would also spend some time reading through the source of the sample you copied your code from, as it completely explains how to do this. Here is the link to the full source code for that sample.
To actually get the texture data and render it out, you need to do the following:
1). Create a D3D11 Device object and a Device Context.
2). Write and compile a Vertex and Pixel shader for the graphics card, then load them into your application.
3). Create an Input Layout object and set it to the device.
4). Initialize the required Blend, Depth-Stencil, and Rasterizer states for the device.
5). Create a Texture object and a Shader Resource View object.
6). Acquire the Desktop Duplication texture using the above code.
7). Use CopyResource to copy the data into your texture.
8). Render that texture to the screen.
This will capture all data displayed on one of the desktops to your texture. It does not do processing on the dirty rects of the desktop. It does not do processing on moved regions. This is bare bones 'capture the desktop and display it elsewhere' code.
If you want to get more in depth, read the linked resources and study the sample code, as the sample basically does what you're asking for.
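If it helps, here is a rough sketch of steps 6) through 8), assuming a working device/context pair, the GetFrame() function from the question, and that the shader, input layout, and full-screen quad from steps 2) through 4) are already set up; desktopWidth/desktopHeight are placeholders for the output dimensions:
// Create a GPU texture we own, matching the desktop size and format
// (desktop duplication images are typically DXGI_FORMAT_B8G8R8A8_UNORM).
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = desktopWidth;
desc.Height = desktopHeight;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

Microsoft::WRL::ComPtr<ID3D11Texture2D> privateTex;
device->CreateTexture2D(&desc, nullptr, &privateTex);

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;
device->CreateShaderResourceView(privateTex.Get(), nullptr, &srv);

// Per frame: acquire the desktop image and copy it into our texture.
FRAME_DATA data = {};
bool timeout = false;
duplication.GetFrame(&data, &timeout);
if (!timeout && data.Frame)
{
    context->CopyResource(privateTex.Get(), data.Frame);
    // ... bind srv with PSSetShaderResources and draw the full-screen quad ...
}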
Since tacking this onto my last answer didn't feel quite right, I decided to create a second.
If you want to read the desktop data to a file, you need a D3D11 Device object, a texture object with the D3D11_USAGE_STAGING flag set, and a method of converting the RGBA pixel data of the desktop texture to whatever it is you want. The basic procedure is a simplified version of the one in my original answer:
1). Create a D3D11 Device object and a Device Context.
2). Create a Staging Texture with the same format as the Desktop Texture.
3). Use CopyResource to copy the Desktop Texture into your Staging Texture.
4). Use ID3D11DeviceContext::Map() to get a pointer to the data contained in the Staging Texture.
Make sure you know how Map works and make sure you can write out image files from a single binary stream. There may also be padding in the image buffer, so be aware you may also need to filter that out. Additionally, make sure you Unmap the buffer instead of calling free, as the buffer given to you almost certainly does not belong to the CRT.
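A minimal sketch of that readback path, assuming device and context already exist and desktopTex is the acquired desktop texture; the row-by-row copy is there precisely because of the padding mentioned above:
// Describe a CPU-readable staging copy of the desktop texture.
D3D11_TEXTURE2D_DESC desc = {};
desktopTex->GetDesc(&desc);
desc.Usage = D3D11_USAGE_STAGING;
desc.BindFlags = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags = 0;

Microsoft::WRL::ComPtr<ID3D11Texture2D> staging;
device->CreateTexture2D(&desc, nullptr, &staging);

// GPU-side copy, then map to get a CPU pointer to the pixels.
context->CopyResource(staging.Get(), desktopTex);

D3D11_MAPPED_SUBRESOURCE mapped = {};
if (SUCCEEDED(context->Map(staging.Get(), 0, D3D11_MAP_READ, 0, &mapped)))
{
    // RowPitch may be larger than width * 4, so copy row by row
    // to strip any padding between rows.
    std::vector<BYTE> pixels(desc.Width * desc.Height * 4);
    const BYTE* src = static_cast<const BYTE*>(mapped.pData);
    for (UINT y = 0; y < desc.Height; ++y)
    {
        memcpy(&pixels[y * desc.Width * 4], src + y * mapped.RowPitch, desc.Width * 4);
    }
    context->Unmap(staging.Get(), 0); // never free() the mapped pointer
    // pixels now holds tightly packed BGRA data ready to write out
}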
How can I change the image's size in the code below?
const int XHome = 10, YHome = 10;
const int WHome = 50, HHome = 50;
.
.
.
SDL_Surface* Image = SDL_LoadBMP(Address);
SDL_Rect destRect;
destRect.x = WHome * x;
destRect.y = HHome * y;
destRect.w = WHome;
destRect.h = HHome;
SDL_BlitSurface(Image, NULL, mainScreen, &destRect);
SDL_FreeSurface(Image);
When I blit Image onto mainScreen, which is another SDL_Surface, it's bigger than 50×50. Is it possible to resize Image? Thank you.
This is what happens when I set WHome and HHome to 50×50. Since I only have 5 reputation, I can't post images; to see the image please click here.
But when I set them to the original image's size, this is what I see:
here
According to the SDL_BlitSurface documentation:
Only the position is used in the dstrect (the width and height are ignored).
I highly recommend switching to SDL 2 for many reasons (hardware acceleration being a big one); this task would also become trivial with a texture and SDL_RenderCopy. If you're somehow stuck using SDL 1, you can either look into scaling surfaces manually, or use a library like SDL_gfx, which has custom blit functions.
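For illustration, a minimal SDL 2 sketch of the SDL_RenderCopy approach, assuming a renderer already created with SDL_CreateRenderer and reusing the constants and Address from the question; SDL_RenderCopy scales the texture to whatever destination rectangle it is given:
SDL_Surface* image = SDL_LoadBMP(Address);
SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, image);
SDL_FreeSurface(image); // the surface is no longer needed once the texture exists

// Destination rectangle: SDL_RenderCopy stretches the texture to fit it.
SDL_Rect destRect = { WHome * x, HHome * y, WHome, HHome };
SDL_RenderCopy(renderer, texture, NULL, &destRect);
SDL_RenderPresent(renderer);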
I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of rendering Direct3D on top of the DWM is based on this article on Code Project.
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it will give up to 40% GPU Load according to GPU-Z. Even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU Load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while (true) {
    g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, 0);
    Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not, am I right? Can I tell Direct3D to re-render only the specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render only a specific part by providing RGNDATA dirty region information to PresentEx. It is now 1% GPU load instead of 20-40%.
std::vector<RECT> dirtyRects;
// Fill dirtyRects with the previous and new cursor boundaries
DWORD size = dirtyRects.size() * sizeof(RECT) + sizeof(RGNDATAHEADER);
RGNDATA* rgndata = (RGNDATA*)HeapAlloc(GetProcessHeap(), 0, size);

// Copy the rects into the region buffer and compute their bounding box
RECT* pRectInitial = (RECT*)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (size_t i = 0; i < dirtyRects.size(); i++)
{
    RECT rectCurrent = dirtyRects[i];
    rectBounding.left = min(rectBounding.left, rectCurrent.left);
    rectBounding.right = max(rectBounding.right, rectCurrent.right);
    rectBounding.top = min(rectBounding.top, rectCurrent.top);
    rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);
    *pRectInitial = dirtyRects[i];
    pRectInitial++;
}

// Preparing the rgndata header
RGNDATAHEADER header;
header.dwSize = sizeof(RGNDATAHEADER);
header.iType = RDH_RECTANGLES;
header.nCount = (DWORD)dirtyRects.size();
header.nRgnSize = (DWORD)(dirtyRects.size() * sizeof(RECT));
header.rcBound.left = rectBounding.left;
header.rcBound.top = rectBounding.top;
header.rcBound.right = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;

// Update only the dirty region of the display
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);
HeapFree(GetProcessHeap(), 0, rgndata); // free the region data after presenting
But there is something I do not understand: it only drops to 1% GPU load if I also add the following:
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway, so that's fine, but instead I get some weird tearing effects after a while. It is more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The above tearing seems to differ from computer to computer.