I want to modify single pixels with SDL2 and I don't want to do it with surfaces.
Here is the relevant part of my code:
// Create a texture for drawing
SDL_Texture *m_pDrawing = SDL_CreateTexture(m_pRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, 1024, 768);
// Create a pixelbuffer
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768*sizeof(Uint32));
// Set a single pixel to red
m_pPixels[1600] = 0xFF0000FF;
// Update the texture with created pixelbuffer
SDL_UpdateTexture(m_pDrawing, NULL, m_pPixels, 1024*sizeof(Uint32));
// Copy texture to render target
SDL_RenderCopy(m_pRenderer, m_pDrawing, NULL, NULL);
When it is then rendered with SDL_RenderPresent(m_pRenderer), nothing appears on screen.
Here they explained that you could either use "surface->pixels, or a malloc()'d buffer". So, what's wrong?
Edit:
In the end the problem was just my m_pRenderer, which was created after the SDL_CreateTexture call.
Everything works fine now, and I also fixed the small bug in the buffer allocation, so the code above should work.
I'm not sure if this is your problem, but it's a bug:
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768);
You need a * sizeof(Uint32) in there, if you want to allocate enough data to represent the pixels in your surface:
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768*sizeof(Uint32));
(And if this is C, you don't need the cast.)
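Putting the edit and this fix together, a minimal working sketch looks roughly like this (the window title, size and flags are placeholders):
// Window and renderer must exist before the texture that uses the renderer
SDL_Window *m_pWindow = SDL_CreateWindow("pixels", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1024, 768, 0);
SDL_Renderer *m_pRenderer = SDL_CreateRenderer(m_pWindow, -1, SDL_RENDERER_ACCELERATED);

// Streaming texture and a matching pixel buffer
SDL_Texture *m_pDrawing = SDL_CreateTexture(m_pRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, 1024, 768);
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768*sizeof(Uint32));
memset(m_pPixels, 0, 1024*768*sizeof(Uint32));

// Set a single pixel to red (RGBA8888 packs as 0xRRGGBBAA), upload and present
m_pPixels[1600] = 0xFF0000FF;
SDL_UpdateTexture(m_pDrawing, NULL, m_pPixels, 1024*sizeof(Uint32));
SDL_RenderCopy(m_pRenderer, m_pDrawing, NULL, NULL);
SDL_RenderPresent(m_pRenderer);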
I am investigating how to do cross-process interop with OpenGL and Direct3D 11 using the EXT_external_objects, EXT_external_objects_win32 and EXT_win32_keyed_mutex OpenGL extensions. My goal is to share a B8G8R8A8_UNORM texture (an external library expects BGRA and I cannot change that; what matters here is the 4-byte pixel size) with one mip level, allocated and written to offscreen with D3D11 by one application, and render it with OpenGL in another. Because the texture is drawn to by a different process, I cannot use WGL_NV_DX_interop2.
My actual code can be seen here and is written in C# with Silk.NET. For illustration purposes, though, I will describe my problem in pseudo-C(++).
First I create my texture in Process A with D3D11, and obtain a shared handle to it, and send it over to process B.
#define WIDTH 100
#define HEIGHT 100
#define BPP 4 // BGRA8 is 4 bytes per pixel
ID3D11Texture2D *texture;
D3D11_TEXTURE2D_DESC texDesc = {
.Width = WIDTH,
.Height = HEIGHT,
.MipLevels = 1,
.ArraySize = 1,
.Format = DXGI_FORMAT_B8G8R8A8_UNORM,
.SampleDesc = { .Count = 1, .Quality = 0 },
.Usage = D3D11_USAGE_DEFAULT,
.BindFlags = D3D11_BIND_SHADER_RESOURCE,
.CPUAccessFlags = 0,
.MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
};
device->CreateTexture2D(&texDesc, NULL, &texture);
HANDLE sharedHandle;
texture->CreateSharedHandle(NULL, DXGI_SHARED_RESOURCE_READ, NULL, &sharedHandle);
SendToProcessB(sharedHandle, pid);
In Process B, I first duplicate the handle to get one that's process-local.
HANDLE localSharedHandle;
HANDLE hProcA = OpenProcess(PROCESS_DUP_HANDLE, false, processAPID);
DuplicateHandle(hProcA, sharedHandle, GetCurrentProcess(), &localSharedHandle, 0, false, DUPLICATE_SAME_ACCESS);
CloseHandle(hProcA);
At this point, I have a valid shared handle to the DXGI resource in localSharedHandle. I have a D3D11 implementation in Process B that is able to successfully render the shared texture after opening it with OpenSharedResource1. My issue is the OpenGL side, however.
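For context, the D3D11 path in Process B that does work looks roughly like this (a sketch; error handling omitted, and the keyed mutex still has to be acquired around any use of the texture):
ID3D11Device1 *device1; // obtained via QueryInterface from Process B's ID3D11Device
ID3D11Texture2D *sharedTexture;
device1->OpenSharedResource1(localSharedHandle, IID_PPV_ARGS(&sharedTexture));

IDXGIKeyedMutex *keyedMutex;
sharedTexture->QueryInterface(IID_PPV_ARGS(&keyedMutex));
keyedMutex->AcquireSync(0, INFINITE);
// ... create a shader resource view and draw with it ...
keyedMutex->ReleaseSync(0);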
This is what I am currently doing on the OpenGL side:
GLuint sharedTexture, memObj;
glCreateTextures(GL_TEXTURE_2D, 1, &sharedTexture);
glTextureParameteri(sharedTexture, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT); // D3D11 side is D3D11_TEXTURE_LAYOUT_UNDEFINED
// Create the memory object handle
glCreateMemoryObjectsEXT(1, &memObj);
// I am not actually sure what the size parameter here is referring to.
// Since the source texture is DX11, there's no way to get the allocation size,
// I make a guess of W * H * BPP
// According to docs for VkExternalMemoryHandleTypeFlagBitsNV, NtHandle Shared Resources use HANDLE_TYPE_D3D11_IMAGE_EXT
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
Checking for errors along the way seems to indicate the import was successful. However, I am not able to bind the texture.
if (glAcquireKeyedMutexWin32EXT(memObj, 0, (UINT)-1)) {
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0);
DBG_GL_CHECK_ERROR(); // GL_INVALID_VALUE
glReleaseKeyedMutexWin32EXT(memObj, 0);
}
What goes wrong is the call to glTextureStorageMem2D. The shared KeyedMutex is being properly acquired and released. The extension documentation is unclear as to how I'm supposed to properly bind this texture and draw it.
After some more debugging, I managed to get "[DebugSeverityHigh] DebugSourceApi: DebugTypeError, id: 1281: GL_INVALID_VALUE error generated. Memory object too small" from the debug context. By dividing my width in half I was able to get some garbled output on the screen.
It turns out the size needed to import the texture was not WIDTH * HEIGHT * BPP, (where BPP = 4 for BGRA in this case), but WIDTH * HEIGHT * BPP * 2. Importing the handle with size WIDTH * HEIGHT * BPP * 2 allows the texture to properly bind and render correctly.
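In other words, the working import looks roughly like this; note that the * 2 is what I found empirically for this driver and texture, and I have not found documentation that guarantees that factor:
glCreateMemoryObjectsEXT(1, &memObj);
// Import with twice the naive size; the driver's actual allocation for the
// shared BGRA8 texture is larger than WIDTH * HEIGHT * BPP
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP * 2, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);

if (glAcquireKeyedMutexWin32EXT(memObj, 0, (UINT)-1)) {
    glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0); // now succeeds
    // ... bind sharedTexture and draw ...
    glReleaseKeyedMutexWin32EXT(memObj, 0);
}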
My goal is to create a texture atlas in my DirectX application. What I have is a vector of ID2D1PathGeometries which need to be put on a texture atlas. So I create an ID2D1Bitmap1, but I have no clue what my next step should be. In other words, how exactly do I lay an ID2D1PathGeometry onto an ID2D1Bitmap1 at the spot I need?
P.S. It's worth mentioning that I'm kind of a newbie in DirectX, and when I try to look for an answer on MSDN I just keep getting lost in everything Direct2D provides you with.
Thank you.
P.P.S. Code requested:
There is not much to show, as I mentioned already.
std::vector<Microsoft::WRL::ComPtr<ID2D1PathGeometry>> atlasGeometries; // so I have my geometries
//// then I fill the vector
{
....
}
////Creating Bitmap for font sheet
Microsoft::WRL::ComPtr<ID2D1Bitmap1> bitmap;
D2D1_SIZE_U dimensions;
dimensions.height = 1024;
dimensions.width = 1024;
D2D1_BITMAP_PROPERTIES1 d2dbp;
D2D1_PIXEL_FORMAT d2dpf;
FLOAT dpiX;
FLOAT dpiY;
d2dpf.format = DXGI_FORMAT_A8_UNORM;
d2dpf.alphaMode = D2D1_ALPHA_MODE_PREMULTIPLIED;
this->dxDevMt.GetD2DFactory()->GetDesktopDpi(&dpiX, &dpiY);
d2dbp.pixelFormat = d2dpf;
d2dbp.dpiX = dpiX;
d2dbp.dpiY = dpiY;
d2dbp.bitmapOptions = D2D1_BITMAP_OPTIONS_TARGET;
d2dbp.colorContext = nullptr;
newCtx->CreateBitmap(dimensions, nullptr, 0, d2dbp, bitmap.GetAddressOf());
But what I do next is a quest for me. I kind of figured out that I should use a render target for this kind of thing, but I failed to figure out how exactly.
The problem was solved by using the bitmap as a render target.
The idea is to:
create a new D2D device,
create a new DeviceContext,
set the bitmap as the render target,
and render everything that needs to be rendered (see the sketch below).
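A rough sketch of those steps, assuming d2dFactory is an ID2D1Factory1, dxgiDevice wraps the existing D3D11 device, and brush/slotX/slotY are placeholders for whatever brush and atlas layout you use (the bitmap has to be created on a context that belongs to this device):
ComPtr<ID2D1Device> d2dDevice;
d2dFactory->CreateDevice(dxgiDevice.Get(), d2dDevice.GetAddressOf());
ComPtr<ID2D1DeviceContext> ctx;
d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, ctx.GetAddressOf());

// Create the atlas bitmap on this context (same d2dbp as above, with D2D1_BITMAP_OPTIONS_TARGET)
ctx->CreateBitmap(dimensions, nullptr, 0, d2dbp, bitmap.GetAddressOf());

// Render each geometry into its slot on the atlas
ctx->SetTarget(bitmap.Get());
ctx->BeginDraw();
ctx->Clear(D2D1::ColorF(D2D1::ColorF::Black, 0.0f));
for (auto &geometry : atlasGeometries)
{
    // Position the geometry at its reserved spot via the transform
    ctx->SetTransform(D2D1::Matrix3x2F::Translation(slotX, slotY));
    ctx->FillGeometry(geometry.Get(), brush.Get());
}
ctx->EndDraw();
ctx->SetTarget(nullptr);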
This is the syntax of the SDL_CreateTextureFromSurface function:
SDL_Texture* SDL_CreateTextureFromSurface(SDL_Renderer* renderer, SDL_Surface* surface)
However, I'm confused about why we need to pass a renderer*. I thought we only needed a renderer* when drawing the texture?
You need SDL_Renderer to get information about the applicable constraints:
maximum supported size
pixel format
And probably something more...
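For example, you can query those constraints through the renderer (a minimal sketch, assuming renderer is your SDL_Renderer*):
SDL_RendererInfo info;
if (SDL_GetRendererInfo(renderer, &info) == 0) {
    printf("max texture size: %d x %d\n", info.max_texture_width, info.max_texture_height);
    for (Uint32 i = 0; i < info.num_texture_formats; ++i)
        printf("supported format: %s\n", SDL_GetPixelFormatName(info.texture_formats[i]));
}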
In addition to the answer by plaes...
Under the hood, SDL_CreateTextureFromSurface calls SDL_CreateTexture, which itself also needs a Renderer, to create a new texture with the same size as the passed-in surface.
Then the SDL_UpdateTexture function is called on the newly created texture to load (copy) the pixel data from the surface you passed in to SDL_CreateTextureFromSurface. If the format of the passed-in surface differs from what the renderer supports, more logic happens to ensure correct behavior.
The Renderer itself is needed by SDL_CreateTexture because it's the GPU that handles and stores textures (most of the time), and the Renderer is supposed to be an abstraction over the GPU.
A surface never needs a Renderer, since it's stored in RAM and handled by the CPU.
You can find out more about how these calls work if you look at SDL_render.c from the SDL2 source code.
Here is some code inside SDL_CreateTextureFromSurface:
texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                            surface->w, surface->h);
if (!texture) {
    return NULL;
}

if (format == surface->format->format) {
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
}
I am having some problems finding a solution for how to retrieve the color of a specific pixel of an SDL_Texture...
To be a bit more specific: I am trying to calculate the average amount of each color used in a given texture. Later on I want to divide, for example, the number of red pixels by the total number of pixels. For this task I need a method which will get me each pixel's color...
I tried to search for some functions, but unfortunately I wasn't able to figure it out...
I saw functions like SDL_RenderReadPixels and SDL_GetPixelFormatName, but none of those helped me out...
Do you have a solution for me?
To access an SDL_Texture's pixels, you must create a blank texture using SDL_CreateTexture() and pass in SDL_TEXTUREACCESS_STREAMING for the access parameter, then copy the pixels of a surface into it. Once that's done, you can use the SDL_LockTexture() function to retrieve a pointer to the pixel data which can then be accessed and modified. To save your changes, you'd call SDL_UnlockTexture(). Try something like this:
SDL_Texture *t;

int main()
{
    // Init SDL, create a window and the renderer...
    SDL_Surface *img = IMG_Load("path/to/file");
    t = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, img->w, img->h);

    void *pixels;
    int pitch;
    SDL_LockTexture(t, &img->clip_rect, &pixels, &pitch);
    memcpy(pixels, img->pixels, img->h * img->pitch); // assumes the texture pitch matches the surface pitch
    Uint32 *upixels = (Uint32 *) pixels;
    // get or modify pixels here, before unlocking
    SDL_UnlockTexture(t);
    return 0;
}
Uint32 get_pixel_at(Uint32 *pixels, int x, int y, int w)
{
    return pixels[y * w + x];
}
You can get the colors from a pixel like this:
Uint32 pixel = get_pixel_at(upixels, x, y, img->w);
Uint8 *colors = (Uint8 *) &pixel;
// colors[0..3] are the individual channels; which byte is red depends on the pixel
// format and the machine's endianness, so SDL_GetRGBA() is the safer way to extract them
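To then answer the original question (the fraction of red pixels), here is a rough sketch that walks the locked buffer before SDL_UnlockTexture(); SDL_GetRGBA() with the texture's pixel format handles the byte order for you, and the "mostly red" threshold below is just an arbitrary example:
SDL_PixelFormat *fmt = SDL_AllocFormat(SDL_PIXELFORMAT_RGBA8888); // the texture's format
int red_count = 0;
int total = img->w * img->h;
for (int y = 0; y < img->h; ++y) {
    for (int x = 0; x < img->w; ++x) {
        Uint8 r, g, b, a;
        SDL_GetRGBA(get_pixel_at(upixels, x, y, img->w), fmt, &r, &g, &b, &a);
        if (r > 200 && g < 64 && b < 64) // arbitrary "mostly red" threshold
            ++red_count;
    }
}
double red_fraction = (double) red_count / total;
SDL_FreeFormat(fmt);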
If you want more information, then check out these SDL 2.0 tutorials: http://lazyfoo.net/tutorials/SDL/index.php. Tutorial 40 deals specifically with this problem.
Let me know if you have any questions or something is unclear.
Good luck!
I want to convert the output image of an ID2D1Effect to an ID2D1Bitmap, so I can draw it at a later time without applying all the effects over and over...
My first try was just to keep the ID2D1Effect::GetOutput image pointer, but this image changes if I use the effect with another image source...
My next try was to create a bitmap (ID2D1DeviceContext::CreateBitmap) with the D2D1_BITMAP_OPTIONS_TARGET flag set and draw the effect's output to this bitmap, but that doesn't seem to work either...
Draw the effect to a given ID2D1Bitmap1** (dest):
ComPtr<ID2D1Image> swapChainImageBuffer;
this->pDeviceContext->CreateBitmap(D2D1::SizeU(120, 50), nullptr, 0, D2D1::BitmapProperties1(D2D1_BITMAP_OPTIONS_TARGET), dest);
this->pDeviceContext->GetTarget(swapChainImageBuffer.GetAddressOf());
this->pDeviceContext->SetTarget(*dest);
this->pDeviceContext->BeginDraw();
this->pDeviceContext->Clear(D2D1::ColorF(RGB(0, 0, 0), 1.f));
this->pDeviceContext->DrawImage(this->pCompositeEffect.Get(), D2D1::Point2F(20, 10));
this->pDeviceContext->EndDraw();
this->pDeviceContext->SetTarget(swapChainImageBuffer.Get());
swapChainImageBuffer = nullptr;
Draw the bitmap at a later time (preview):
this->pD2DeviceContext->DrawBitmap(preview.Get(), D2D1::RectF(x, y, x+width, y + height));
(Same device context; no function that returns an HRESULT returns an error code.)
What am I doing wrong?
How am I supposed to do this?
I simply forgot to specify the PixelFormat in CreateBitmap.
D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)
It works now.
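For completeness, the corrected call from the snippet above (same size and options, just with the pixel format filled in):
this->pDeviceContext->CreateBitmap(
    D2D1::SizeU(120, 50),
    nullptr, 0,
    D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_TARGET,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)),
    dest);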