SDL2 - Why does SDL_CreateTextureFromSurface() need a renderer*? - c++

This is the syntax of the SDL_CreateTextureFromSurface function:
SDL_Texture* SDL_CreateTextureFromSurface(SDL_Renderer* renderer, SDL_Surface* surface)
However, I'm confused about why we need to pass a renderer*. I thought we only needed a renderer* when drawing the texture?

You need the SDL_Renderer to get information about the applicable constraints:
maximum supported texture size
pixel format
and probably more.

In addition to the answer by plaes:
Under the hood, SDL_CreateTextureFromSurface calls SDL_CreateTexture, which itself also needs a renderer to create a new texture with the same size as the passed-in surface.
Then SDL_UpdateTexture is called on the newly created texture to load (copy) the pixel data from the surface you passed to SDL_CreateTextureFromSurface. If the format of the passed-in surface differs from what the renderer supports, more logic happens to ensure correct behavior.
The renderer itself is needed for SDL_CreateTexture because it is (most of the time) the GPU that handles and stores textures, and the renderer is meant to be an abstraction over the GPU.
A surface never needs a renderer, since it lives in RAM and is handled by the CPU.
You can find out more about how these calls work if you look at SDL_render.c from the SDL2 source code.
Here is some code inside SDL_CreateTextureFromSurface:
texture = SDL_CreateTexture(renderer, format, SDL_TEXTUREACCESS_STATIC,
                            surface->w, surface->h);
if (!texture) {
    return NULL;
}

if (format == surface->format->format) {
    if (SDL_MUSTLOCK(surface)) {
        SDL_LockSurface(surface);
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
        SDL_UnlockSurface(surface);
    } else {
        SDL_UpdateTexture(texture, NULL, surface->pixels, surface->pitch);
    }
}
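For illustration (not from the original question), here is a minimal sketch of the typical flow, assuming an existing SDL_Renderer* named renderer and a hypothetical file sprite.bmp: the surface holds the pixels in RAM until the renderer uploads them into a GPU-side texture.

// Hypothetical sketch: surface lives in CPU memory, texture is created through the renderer.
SDL_Surface *surface = SDL_LoadBMP("sprite.bmp");                        // CPU / RAM
SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);  // uploaded via the renderer
SDL_FreeSurface(surface);               // pixel data now lives in the texture
SDL_RenderCopy(renderer, texture, NULL, NULL);
SDL_RenderPresent(renderer);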

Related

Is it okay to have a SDL_Surface and SDL_Texture for each sprite?

I'm trying to build a game engine in SDL2 with C++. I have a class called 'entity' which holds some data for movement, as well as pointers to a surface and a texture. A function "render" is called for every entity in the g_entities vector to render its sprite.
class entity {
    ...
    SDL_Surface* image;
    SDL_Texture* texture;

    entity(const char* filename, SDL_Renderer* renderer, float size) {
        image = IMG_Load(filename);
        width = image->w * size;
        height = image->h * size;
        texture = SDL_CreateTextureFromSurface(renderer, image);
        g_entities.push_back(this);
    }

    ~entity() {
        SDL_DestroyTexture(texture);
        SDL_FreeSurface(image);
        // TODO remove from g_entities
    }

    void render(SDL_Renderer* renderer) {
        SDL_Rect dstrect = { x, y, width, height };
        SDL_RenderCopy(renderer, texture, NULL, &dstrect);
    }
    ...
};
So the program makes a new texture and surface for each sprite. Is this okay? Is there a faster way?
If so, I'd like to clean that up before it becomes a bigger mess.
I made a test level with 96 sprites that each take up 2% of the screen, with tons of overdraw, and the frame time is 15 ms (~65 fps) at a resolution of 1600x900.
Yes, but actually no. If the same sprite will be used many times without modification, it's most efficient for those objects to share a pointer to the same SDL_Texture. Additionally, the surface can be freed once the texture has been created. Furthermore, loading these in the constructor may be a bad idea, since objects created on the fly will trigger disk reads.
I set up a system where entities are given another variable on construction; if it is positive, the entity checks whether any other entity used the same file for its sprite, and if so, it just reuses that texture.
That means objects like bullets that are constantly spawned and destroyed can be handled efficiently by spawning a single bullet in the level (its texture is loaded once and then reused).
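A minimal sketch of such a shared texture cache (names like texture_cache and get_texture are illustrative, and it assumes SDL_image is available):

// Hypothetical sketch: cache one SDL_Texture per file name so many entities share it.
#include <SDL.h>
#include <SDL_image.h>
#include <string>
#include <unordered_map>

static std::unordered_map<std::string, SDL_Texture*> texture_cache;

SDL_Texture* get_texture(SDL_Renderer* renderer, const std::string& filename) {
    auto it = texture_cache.find(filename);
    if (it != texture_cache.end())
        return it->second;                       // reuse the already-loaded texture

    SDL_Surface* surface = IMG_Load(filename.c_str());
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);                    // the surface is no longer needed
    texture_cache[filename] = texture;
    return texture;
}

Each entity then stores only the returned pointer and never destroys it itself; the cache owns the textures and frees them once at shutdown.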
https://www.reddit.com/r/sdl/comments/lo24vt/is_it_okay_to_have_a_sdl_surface_and_sdl_texture/

SDL_SetRenderTarget doesn't set the target

I am trying to write a C++ lambda that is registered and to be used in Lua using the Sol2 binding. The callback below should create an SDL_Texture, and clear it to a color. A Lua_Texture is just a wrapper for an SDL_Texture, and l_txt.texture is of type SDL_Texture*.
lua.set_function("init_texture",
    [render](Lua_Texture &l_txt, int w, int h)
    {
        // free any previous texture
        l_txt.deleteTexture();

        l_txt.texture = SDL_CreateTexture(render, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, w, h);

        SDL_SetRenderTarget(render, l_txt.texture);
        SDL_Texture *target = SDL_GetRenderTarget(render);
        assert(l_txt.texture == target);
        assert(target == nullptr);

        SDL_SetRenderDrawColor(render, 0xFF, 0x22, 0x22, 0xFF);
        SDL_RenderClear(render);
    });
My problem is that SDL_SetRenderTarget isn't functioning as I'd expect. I try to set the texture as the target so I can clear its color, but when I try to draw the texture to the screen it is still blank. The asserts in the above code both fail, showing that the current target texture is neither the texture I am trying to clear and later use, nor NULL (the expected value when there is no current target texture).
I have used this snippet of code before in plain C++ (not as a Lua callback) and it worked as intended. Somehow, embedding it in Lua changes the behavior. Any help is very much appreciated, as I've been pulling my hair out over this for a while. Thanks!
I may have an answer for you, but you're not going to like it.
It looks like SDL_GetRenderTarget doesn't work as expected.
I got the exact same problem you have (that's how I found your question), and I could reproduce it reliably with this simple program:
int rendererIndex;
[snipped code : rendererIndex is set to the index of the DX11 renderer]
SDL_Renderer *renderer = SDL_CreateRenderer(pWindow->pWindow, rendererIndex,
    SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC | SDL_RENDERER_TARGETTEXTURE);
SDL_Texture *rtTexture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
    SDL_TEXTUREACCESS_TARGET, 200, 200);

SDL_SetRenderTarget(renderer, rtTexture);

if (SDL_GetRenderTarget(renderer) != rtTexture)
    printf("ERROR.");
This always prints:
ERROR.
The workaround I used is to save the pointer to the render target texture I set on the renderer myself, and to not use SDL_GetRenderTarget at all.
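A minimal sketch of that workaround (the wrapper names are illustrative, not part of SDL): keep your own copy of the target pointer instead of asking SDL for it.

// Hypothetical wrappers: remember the render target ourselves instead of
// relying on SDL_GetRenderTarget, which may hand back the native texture.
static SDL_Texture* g_currentTarget = nullptr;

int SetRenderTargetTracked(SDL_Renderer* renderer, SDL_Texture* texture) {
    int result = SDL_SetRenderTarget(renderer, texture);
    if (result == 0)
        g_currentTarget = texture;   // only record the target on success
    return result;
}

SDL_Texture* GetRenderTargetTracked() {
    return g_currentTarget;          // use this instead of SDL_GetRenderTarget
}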
EDIT:
I was curious why I wasn't getting back the render target I had set, so I looked through SDL2's source code. I found out why (code snipped for clarity):
int
SDL_SetRenderTarget(SDL_Renderer *renderer, SDL_Texture *texture)
{
    // CODE SNIPPED

    /* texture == NULL is valid and means reset the target to the window */
    if (texture) {
        CHECK_TEXTURE_MAGIC(texture, -1);
        if (renderer != texture->renderer) {
            return SDL_SetError("Texture was not created with this renderer");
        }
        if (texture->access != SDL_TEXTUREACCESS_TARGET) {
            return SDL_SetError("Texture not created with SDL_TEXTUREACCESS_TARGET");
        }

        // *** EMPHASIS MINE : This is the problem.
        if (texture->native) {
            /* Always render to the native texture */
            texture = texture->native;
        }
    }

    // CODE SNIPPED

    renderer->target = texture;

    // CODE SNIPPED
}

SDL_Texture *
SDL_GetRenderTarget(SDL_Renderer *renderer)
{
    return renderer->target;
}
In short, the renderer saves the current render target in renderer->target, but not before converting the texture you passed in to its native form. When we call SDL_GetRenderTarget, we get that native texture back, which may or may not be the same as the texture we set.

Exception "Texture cannot be null" Direct X

I am coding a 2D game using DirectX11 and DirectXTK.
I wrote a Framework class that initializes both the window displayed for the game and DirectX. These initializations work correctly. Then I decided to draw some backgrounds, etc. in the window, but after a while it exits with an exception. I wrapped it in a try { ... } catch() { } block, which tells me "Texture cannot be null". However, I could not find which texture it is talking about, even by debugging and checking all the values.
I decided to separate the different elements I was drawing in the window, to see where the problem might come from... So now I have 3 draw methods:
Draw(DWORD &elapsedTime);
DrawBackground(DWORD &elapsedTime);
DrawCharacter(DWORD &elapsedTime);
The Draw(DWORD &elapsedTime) method calls both DrawBackground() and DrawCharacter() methods.
Here is my Draw Method :
void Framework::Draw(DWORD * elapsedTime)
{
    // Clearing the back buffer
    immediateContext->ClearRenderTargetView(renderTargetView, Colors::Aquamarine);

    // Clearing the depth buffer to max depth (1.0)
    immediateContext->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0); // immediateContext is a ID3D11DeviceContext*

    CommonStates states(d3dDevice); // d3dDevice is a ID3D11Device*
    sprites.reset(new SpriteBatch(immediateContext));
    sprites->Begin(SpriteSortMode_Deferred, states.NonPremultiplied());

    DrawBackground1(elapsedTime);
    DrawCharacter(elapsedTime);

    sprites->End();

    // Presenting the back buffer to the front buffer
    swapChain->Present(0, 0);
}
By debugging I am almost sure that the exception comes from both DrawBackground() and DrawCharacter(). Indeed, when I comment those out in the Draw method, I get no error, but as soon as I put one back in, the exception is thrown after displaying what I want for a few seconds.
Here is the DrawBackground() method, for example:
void Framework::DrawBackground1(DWORD * elapsedTime)
{
    RECT *try1 = new RECT();
    try1->bottom = 0; try1->left = 0; try1->right = (int)WIDTH; try1->bottom = (int)HEIGHT;

    ID3D11ShaderResourceView * texture2 = nullptr;
    ID3D11ShaderResourceView * textureRV = nullptr;
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds", nullptr, &textureRV);
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds", nullptr, &texture2);

    sprites->Draw(textureRV, XMFLOAT2(0, 0), try1, Colors::White);
    sprites->Draw(texture2, XMFLOAT2(0, 0), try1, Colors::CornflowerBlue);
}
So as soon as I uncomment this method (or DrawCharacter(), which follows the same steps), the window displays what I expect for a few seconds, but then I get the exception "Texture cannot be null". I also noticed that DrawCharacter() lets the window display longer than DrawBackground(), whose texture is much bigger than the character's.
I'm not sure if this information is useful, but maybe this is linked to the size of the textures?
Would you notice anything I did wrong in this code? Why would a texture be considered null while it does display for a while? I've been looking for answers for a few hours now; some help would be amazing, please!
Thank you
I noticed that you create two new ID3D11ShaderResourceViews every frame without Release()-ing the old ones. You could try creating the ShaderResourceViews only once and storing them as global variables, or you could try Release()-ing them after the sprites->Draw(...) calls.
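A rough sketch of the first suggestion, assuming the textures become members of Framework and are loaded in a hypothetical LoadTextures() step (member and method names are illustrative); ComPtr takes care of the Release:

// Hypothetical sketch: load the shader resource views once and reuse them every frame.
#include <wrl/client.h>
#include "DDSTextureLoader.h"
using Microsoft::WRL::ComPtr;

// illustrative members of Framework:
ComPtr<ID3D11ShaderResourceView> m_backgroundRV;
ComPtr<ID3D11ShaderResourceView> m_tilesRV;

void Framework::LoadTextures()   // called once after D3D initialization
{
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set2_background.dds",
                             nullptr, m_backgroundRV.GetAddressOf());
    CreateDDSTextureFromFile(d3dDevice, L"../Images/backgrounds/set3_tiles.dds",
                             nullptr, m_tilesRV.GetAddressOf());
}

void Framework::DrawBackground1(DWORD * elapsedTime)
{
    RECT dest = { 0, 0, (int)WIDTH, (int)HEIGHT };   // left, top, right, bottom
    sprites->Draw(m_backgroundRV.Get(), XMFLOAT2(0, 0), &dest, Colors::White);
    sprites->Draw(m_tilesRV.Get(), XMFLOAT2(0, 0), &dest, Colors::CornflowerBlue);
    // ComPtr releases the views automatically when Framework is destroyed.
}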

Bind CUDA output array/surface to GL texture in ManagedCUDA

I'm currently attempting to connect some form of output from a CUDA program to a GL_TEXTURE_2D for use in rendering. I'm not that worried about the output type from CUDA (whether it'd be an array or surface, I can adapt the program to that).
So the question is, how would I do that? (my current code copies the output array to system memory, and uploads it to the GPU again with GL.TexImage2D, which is obviously highly inefficient - when I disable those two pieces of code, it goes from approximately 300 kernel executions per second to a whopping 400)
I already have a little bit of test code, to at least bind a GL texture to CUDA, but I'm not even able to get the device pointer from it...
ctx = CudaContext.CreateOpenGLContext(CudaContext.GetMaxGflopsDeviceId(), CUCtxFlags.SchedAuto);
uint textureID = (uint)GL.GenTexture(); //create a texture in GL
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, width, height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Rgba, PixelType.UnsignedByte, null); //allocate memory for the texture in GL
CudaOpenGLImageInteropResource resultImage = new CudaOpenGLImageInteropResource(textureID, CUGraphicsRegisterFlags.WriteDiscard, CudaOpenGLImageInteropResource.OpenGLImageTarget.GL_TEXTURE_2D, CUGraphicsMapResourceFlags.WriteDiscard); //using writediscard because the CUDA kernel will only write to this texture
//then, as far as I understood the ManagedCuda example, I have to do the following when I call my kernel
//(done without a CudaGraphicsInteropResourceCollection because I only have one item)
resultImage.Map();
var ptr = resultImage.GetMappedPointer(); //this crashes
kernelSample.Run(ptr); //pass the pointer to the kernel so it knows where to write
resultImage.UnMap();
The following exception is thrown when attempting to get the pointer:
ErrorNotMappedAsPointer: This indicates that a mapped resource is not available for access as a pointer.
What do I need to do to fix this?
And even if this exception can be resolved, how would I solve the other part of my question; that is, how do I work with the acquired pointer in my kernel? Can I use a surface for that? Access it as an arbitrary array (pointer arithmetic)?
Edit:
Looking at this example, apparently I don't even need to map the resource every time I call the kernel and the render function. But how would this translate to ManagedCUDA?
Thanks to the example I found, I was able to translate that to ManagedCUDA (after browsing the source code and fiddling around), and I'm happy to announce that this does really improve my samples per second from about 300 to 400 :)
Apparently it is needed to use a 3D array (I haven't seen any overloads in ManagedCUDA using 2D arrays) but that doesn't really matter - I just use a 3D array/texture which is exactly 1 deep.
id = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture3D, id);
GL.TexParameter(TextureTarget.Texture3D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture3D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexImage3D(TextureTarget.Texture3D, 0, PixelInternalFormat.Rgba, width, height, 1, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, IntPtr.Zero); //allocate memory for the texture but do not upload anything
CudaOpenGLImageInteropResource resultImage = new CudaOpenGLImageInteropResource((uint)id, CUGraphicsRegisterFlags.SurfaceLDST, CudaOpenGLImageInteropResource.OpenGLImageTarget.GL_TEXTURE_3D, CUGraphicsMapResourceFlags.WriteDiscard);
resultImage.Map();
CudaArray3D mappedArray = resultImage.GetMappedArray3D(0, 0);
resultImage.UnMap();
CudaSurface surfaceResult = new CudaSurface(kernelSample, "outputSurface", CUSurfRefSetFlags.None, mappedArray); //nothing needs to be done anymore - this call connects the 3D array from the GL texture to a surface reference in the kernel
Kernel code:
surface outputSurface;
__global__ void Sample() {
...
surf3Dwrite(output, outputSurface, pixelX, pixelY, 0);
}

SDL2 modifying pixels

I want to modify single pixels with SDL2 and I don't want to do it with surfaces.
Here is the relevant part of my code:
// Create a texture for drawing
SDL_Texture *m_pDrawing = SDL_CreateTexture(m_pRenderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_STREAMING, 1024, 768);
// Create a pixelbuffer
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768*sizeof(Uint32));
// Set a single pixel to red
m_pPixels[1600] = 0xFF0000FF;
// Update the texture with created pixelbuffer
SDL_UpdateTexture(m_pDrawing, NULL, m_pPixels, 1024*sizeof(Uint32));
// Copy texture to render target
SDL_RenderCopy(m_pRenderer, m_pDrawing, NULL, NULL);
It is then rendered with SDL_RenderPresent(m_pRenderer), but nothing appears on screen.
Here they explained that you could either use "surface->pixels, or a malloc()'d buffer". So, what's wrong?
Edit:
In the end it was just my m_pRenderer, which was created after the SDL_CreateTexture call.
Everything works fine now, and I fixed the small bug in the buffer allocation, so the code above should be working.
I'm not sure if this is your problem, but it's a bug:
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768);
You need a * sizeof(Uint32) in there, if you want to allocate enough data to represent the pixels in your surface:
Uint32 *m_pPixels = (Uint32 *) malloc(1024*768*sizeof(Uint32));
(And if this is C, you don't need the cast.)
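Putting both fixes together, a minimal sketch of the working flow (assuming an existing SDL_Window* named window; the key point is that the renderer must exist before the texture is created from it):

// Hypothetical sketch: create the renderer BEFORE creating textures from it.
SDL_Renderer *m_pRenderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
SDL_Texture *m_pDrawing = SDL_CreateTexture(m_pRenderer, SDL_PIXELFORMAT_RGBA8888,
                                            SDL_TEXTUREACCESS_STREAMING, 1024, 768);
Uint32 *m_pPixels = (Uint32 *) malloc(1024 * 768 * sizeof(Uint32));
m_pPixels[1600] = 0xFF0000FF;                               /* one red pixel */
SDL_UpdateTexture(m_pDrawing, NULL, m_pPixels, 1024 * sizeof(Uint32));
SDL_RenderCopy(m_pRenderer, m_pDrawing, NULL, NULL);
SDL_RenderPresent(m_pRenderer);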