SDL_Texture renders black after resize unless it is redrawn - c++

I've got a bit of a nasty bug for you folks. (Yes, it's probably my bug and not SDL's.) I have been writing a modern C++ wrapper for SDL, and everything appears to work as intended. However, my Texture class has a strange bug: if a texture is redrawn after the window is resized, it looks fine, but if it is not, it turns entirely black. Here is what that looks like:
Before the resize
After the resize
I can't exactly post just one part of the code here, so here is the entire folder (hosted on GitLab): SDL wrapper
Here is a small program that reproduces the error using this library:
#include "sdl_wrapper/context.hh"
#include "sdl_wrapper/video/context.hh"
#include "sdl_wrapper/render/renderer.hh"
#include "sdl_wrapper/render/texture.hh"
#include "sdl_wrapper/colors.hh"
#include "SDL2/SDL_events.h"
int main()
{
    sdl::Context sdlContext;
    sdl::video::Context videoContext = sdlContext.initVideo();
    sdl::video::Window window = videoContext.createWindow("test", 0, 0, 800, 600).resizable().build();
    int width = 800;
    int height = 600;
    sdl::render::Renderer renderer = window.createRenderer().targetTexture().build();
    sdl::render::Texture example(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, 400, 300);
    renderer.setTarget(example);
    renderer.setDrawColor(sdl::colors::Blue);
    renderer.clear();
    renderer.resetTarget();
    bool run = true;
    while (run)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e) != 0)
        {
            if (e.type == SDL_QUIT)
            {
                run = false;
                break;
            }
        }
        renderer.setDrawColor({0x44, 0x44, 0x44, 0xff});
        renderer.clear();
        renderer.copy(example, std::nullopt, {{width / 4, height / 4, example.getWidth(), example.getHeight()}});
        renderer.present();
    }
}
To reproduce, simply run this program and resize the window. There will be a blue square that becomes black after the resize.
I would greatly appreciate it if someone could point me in the right direction here. I would really like to avoid redrawing on every resize (feel free to argue with me on that point if I am misguided).

SDL doesn't promise to keep the contents of target textures. There are cases, especially with Direct3D or on mobile, where the data is lost due to some big state change. Changing the window size may not sound like a big change, but on some hardware/driver configurations it causes problems; I suppose that's the reason why SDL detects the resize and drops all renderer data. You get an SDL_RENDER_TARGETS_RESET event when you need to redraw your render-target textures.
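For illustration, here is a minimal sketch (in plain SDL2 rather than your wrapper) of what catching that event inside the existing poll loop might look like; redrawExample() stands for a hypothetical helper that repeats the blue clear into the target texture:
SDL_Event e;
while (SDL_PollEvent(&e) != 0)
{
    if (e.type == SDL_QUIT)
    {
        run = false;
        break;
    }
    else if (e.type == SDL_RENDER_TARGETS_RESET)
    {
        // The contents of all SDL_TEXTUREACCESS_TARGET textures are now undefined;
        // re-render them before the next copy.
        redrawExample(); // hypothetical helper: set target, clear to blue, reset target
    }
}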
That shouldn't happen with e.g. the OpenGL renderer implementation (which may sound great, but the reasons behind it are not so great); on Windows, SDL2 defaults to Direct3D, which can be changed by issuing SDL_SetHint or setting environment variables.
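For example, a minimal sketch of forcing the OpenGL backend; the hint has to be set before the renderer is created, and the SDL_RENDER_DRIVER environment variable works the same way:
#include "SDL2/SDL_hints.h"

// Ask SDL to prefer the OpenGL renderer instead of the Direct3D default on Windows.
SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");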

Related

Has SDL2.0 SetRenderDrawColor and TTF_RenderUNICODE_Blended memory issues?

I have a work-related C/C++ SDL 2.0 project, originally developed and compiled on a Win7 computer for the x86 platform (with MS Visual Studio), that had been working fine the whole time. Since the program was also meant to run on newer Win10 x64 devices, I moved the entire solution to a newer Win10 x64 computer, swapped all SDL2 libraries, header files etc. for their x64 versions, and rebuilt the solution for the x64 architecture.
The program is basically an advanced diagnostic screen with dials, icons and text information, so it uses a variety of colors, fonts, geometry, etc.
After the recompilation for the x64 platform, I noticed that the colors of some elements are completely off. For example, a rectangle that should be grey (SDL_SetRenderDrawColor is called one step before drawing the rect) is suddenly green, despite the grey RGB values given as input arguments. Even SDL_GetRenderDrawColor, called before and/or after drawing the rectangle, returns the values 195, 195, 195, yet the rectangle is shown green.
Rectangle drawing function:
void DrawRectangle(SDL_Color color, int pozX, int pozY, int width, int height)
{
    SDL_Rect Proportions = { pozX, pozY, width, height };
    SDL_SetRenderDrawColor(gRenderer, color.r, color.g, color.b, 0xFF);
    SDL_RenderFillRect(gRenderer, &Proportions);
}
Rectangle drawing call:
DrawRectangle(grey, pocpozX, 182, width, height);
Later I noticed that this rectangle inherits its color from a text rendered a few steps earlier. If I change the color of that text, it, for some strange reason, also applies to the rectangle.
Text drawing function:
void DrawText(std::wstring text, TTF_Font *font, int pozX, int pozY, SDL_Color color)
{
    typedef std::basic_string<Uint16, std::char_traits<Uint16>, std::allocator<Uint16> > u16string;
    std::wstring text_sf_str = text;
    u16string utext_sf_str(text_sf_str.begin(), text_sf_str.end());
    SDL_Surface* text_sf = TTF_RenderUNICODE_Blended(font, utext_sf_str.c_str(), color);
    SDL_Texture* text_Tx = SDL_CreateTextureFromSurface(gRenderer, text_sf);
    int text_TxW;
    int text_TxH;
    SDL_QueryTexture(text_Tx, NULL, NULL, &text_TxW, &text_TxH);
    SDL_Rect textrect = { pozX, pozY, text_TxW, text_TxH };
    SDL_RenderCopy(gRenderer, text_Tx, NULL, &textrect);
    SDL_FreeSurface(text_sf);
    SDL_DestroyTexture(text_Tx);
}
Notice all the conversions needed to display a wstring containing some uncommon Unicode characters. I don't quite understand why SDL needs each character passed as a Uint16; this could have been done better. I also don't understand the need for SDL_QueryTexture just to get the text proportions right; I would expect the text to be displayed unstretched by default. This all seems unnecessarily complicated to me.
The funny thing is that if I remove the SDL_DestroyTexture call, there is of course a huge memory leak, but the problem disappears.
Text drawing call; inside a for loop, this draws numeric labels onto the speedometer axis:
if (i % 20 == 0)
{
    if (i <= 160)
    {
        DrawText(str2, font, posX, posY, grey);
    }
    else
    {
        DrawText(str2, font, posX, posY, green);
    }
}
... And this very "green" passed as the last argument of the call is what causes the rectangle to be green as well, despite the fact that more elements are drawn between this text and the rectangle, so SDL_SetRenderDrawColor is called multiple times in between.
I have also noticed that some of the displayed numbers briefly glitch to senseless values every n-th second, so I guess this is a memory-related problem. Of course there are more color problems; this was just one example. Due to company policy, I cannot be more specific or post the full code here.
If I switch back to the x86 configuration (with the right libraries, of course), the problem now remains the same.

How to efficiently render a small sprite in Direct3D / C++ on a large Window (DWM)?

I'm implementing a custom cursor in DirectX/C++ that is drawn on a transparent window on top of the desktop.
I have stripped it down to a basic example. The magic of executing Direct3D on the DWM is based on this article on Code Project.
The problem is that when using a very big window (e.g. 2560x1440) as a base for the DirectX rendering, it will cause up to 40% GPU load according to GPU-Z, even if the only thing I am displaying is a static 128x128 sprite, or nothing at all. If I use an area like 256x256, the GPU load is around 1-3%.
Basically this loop would make the GPU go crazy on a big window while it's smooth sailing on a small window:
while(true) {
    g_pD3DDevice->PresentEx(NULL, NULL, NULL, NULL, NULL);
    Sleep(10);
}
So it seems like it re-renders the whole screen whether anything changes or not, am I right? Can I tell Direct3D to only re-render specific parts that need to be updated?
EDIT:
I have found a way to tell Direct3D to render a specific part by providing RGNDATA dirty region information to PresentEx. It is now at 1% GPU load instead of 20-40%.
std::vector<RECT> dirtyRects;
//Fill dirtyRects with previous and new cursor boundaries

DWORD size = dirtyRects.size() * sizeof(RECT) + sizeof(RGNDATAHEADER);
RGNDATA *rgndata = (RGNDATA *)HeapAlloc(GetProcessHeap(), 0, size);

// Copy the rects into the region buffer and compute their bounding box
RECT* pRectInitial = (RECT*)rgndata->Buffer;
RECT rectBounding = dirtyRects[0];
for (size_t i = 0; i < dirtyRects.size(); i++)
{
    RECT rectCurrent = dirtyRects[i];
    rectBounding.left = min(rectBounding.left, rectCurrent.left);
    rectBounding.right = max(rectBounding.right, rectCurrent.right);
    rectBounding.top = min(rectBounding.top, rectCurrent.top);
    rectBounding.bottom = max(rectBounding.bottom, rectCurrent.bottom);
    *pRectInitial = dirtyRects[i];
    pRectInitial++;
}

// Preparing the rgndata header
RGNDATAHEADER header;
header.dwSize = sizeof(RGNDATAHEADER);
header.iType = RDH_RECTANGLES;
header.nCount = dirtyRects.size();
header.nRgnSize = dirtyRects.size() * sizeof(RECT);
header.rcBound.left = rectBounding.left;
header.rcBound.top = rectBounding.top;
header.rcBound.right = rectBounding.right;
header.rcBound.bottom = rectBounding.bottom;
rgndata->rdh = header;

// Update display, passing only the dirty region
g_pD3DDevice->PresentEx(NULL, NULL, NULL, rgndata, 0);

// Free the region data after the present so it isn't leaked every frame
HeapFree(GetProcessHeap(), 0, rgndata);
There is still something I do not understand, though: it will only give 1% GPU load if I also add the following
SetLayeredWindowAttributes(hWnd, 0, 180, LWA_ALPHA);
I want it transparent anyway, so that is fine, but I now get some weird tearing effects after a while. They are more noticeable the faster I move the cursor. Where does that come from? It looks like the image provided. I am sure I have set the dirty rects perfectly accurately.
The tearing above seems to differ from computer to computer.

SDL2 does not draw image using texture

I have been attempting to get a simple SDL2 program up to display an image and then exit. I have this code:
/* compile with `gcc -lSDL2 -o main main.c` */
#include <SDL2/SDL.h>
#include <SDL2/SDL_video.h>
#include <SDL2/SDL_render.h>
#include <SDL2/SDL_surface.h>
#include <SDL2/SDL_timer.h>
int main(void){
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window * w = SDL_CreateWindow("Hi", 0, 0, 640, 480, 0);
    SDL_Renderer * r = SDL_CreateRenderer(w, -1, 0);
    SDL_Surface * s = SDL_LoadBMP("../test.bmp");
    SDL_Texture * t = SDL_CreateTextureFromSurface(r, s);
    SDL_FreeSurface(s);
    SDL_RenderClear(r);
    SDL_RenderCopy(r, t, 0, 0);
    SDL_RenderPresent(r);
    SDL_Delay(2000);
    SDL_DestroyTexture(t);
    SDL_DestroyRenderer(r);
    SDL_DestroyWindow(w);
    SDL_Quit();
}
I am aware that I have omitted the normal checks that each function succeeds; they all do succeed, and the checks were removed for ease of reading. I am also aware that I have used 0 rather than null pointers or the correct enum values; this is not the cause of the issue either (the same issue occurs when I structure the program correctly; this was a quick test case drawn up to test the simplest scenario).
The intention is that a window appears, shows the image (which is definitely in that directory), waits for a couple of seconds, and exits. The result, however, is that the window appears correctly but is filled with black.
One extra note: SDL_ShowSimpleMessageBox() appears to work correctly. I don't know how that relates to the rest of the framework, though.
Let me clear this up: you wanted to make a texture, so do it directly; that eases things and gives you better control over the image. Try the following code, which is fully tested and working. All you wanted was for the window to show the image and close within 2 seconds, right? If the image doesn't load, then the problem is the image's location.
/* compile with `g++ main.cpp -o main -lSDL2 -lSDL2_image` */
#include <SDL.h>
#include <SDL_image.h>
#include <iostream> // I included it since I used std::cout
int main(int argc, char* argv[]){
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window * w = SDL_CreateWindow("Hi", 0, 0, 640, 480, SDL_WINDOW_SHOWN);
    SDL_Renderer * r = SDL_CreateRenderer(w, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture * s = NULL;
    s = IMG_LoadTexture(r, "../test.bmp"); // LOADS A TEXTURE DIRECTLY FROM THE IMAGE
    if (s == NULL)
    {
        // If this message shows up in the console window, IMG_LoadTexture couldn't find the image
        std::cout << "FAILED TO FIND THE IMAGE" << std::endl;
    }
    SDL_Rect s_rect; // CREATES THE IMAGE'S SPECS
    s_rect.x = 100; // just like the window, the x and y values determine its displacement from the origin, which is the top left of your window
    s_rect.y = 100;
    s_rect.w = 640; // width of the texture
    s_rect.h = 480; // height of the texture
    Uint32 time = SDL_GetTicks(); // gets the current time
    while (SDL_GetTicks() < time + 2000) // keeps rendering until 2 seconds have passed since that time
    {
        SDL_RenderClear(r);
        SDL_RenderCopy(r, s, NULL, &s_rect); // THE NULL IS THE AREA YOU COULD USE TO CROP THE IMAGE
        SDL_RenderPresent(r);
    }
    SDL_DestroyTexture(s);
    SDL_DestroyRenderer(r);
    SDL_DestroyWindow(w);
    SDL_Quit();
    return 0; // returns 0, closes the program
}
If you want the window's close button to take effect, create an SDL_Event and poll it inside the while loop, checking for SDL_QUIT. I didn't include that since you wanted the window to close within 2 seconds anyway, which renders the close button useless.
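For reference, here is a minimal sketch of that change, reusing the variables from the code above:
SDL_Event e;
bool quit = false;
while (!quit && SDL_GetTicks() < time + 2000)
{
    while (SDL_PollEvent(&e) != 0)
    {
        if (e.type == SDL_QUIT) // the window's close button was pressed
        {
            quit = true;
        }
    }
    SDL_RenderClear(r);
    SDL_RenderCopy(r, s, NULL, &s_rect);
    SDL_RenderPresent(r);
}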
HAPPY CODING :D
When using SDL_RENDERER_SOFTWARE for the renderer flags, this worked. It also worked on a different machine. I guess there must be something screwed up with my configuration, although I'm not sure what it is because still no errors are shown. Ah well, marking as solved.
I believe this to be (not 100% sure, but fairly sure) due to this line of code:
SDL_Renderer * r = SDL_CreateRenderer(w, -1, 0);
According to the SDL wiki article on SDL_CreateRenderer, the parameters you are passing in are as follows:
SDL_Window* window
int index
Uint32 flags
You are passing in the window pointer and the index correctly, but the lack of a flag signals to SDL that it should use the default renderer.
Most systems have a default setting for which renderer applications should use, and this can be modified on an application-by-application basis. If no default setting is provided for a specific application, the renderer lookup immediately checks the default renderer settings list. The SDL wiki briefly refers to this list in the following line at the bottom of the remarks section:
"Note that providing no flags gives priority to available SDL_RENDERER_ACCELERATED renderers."
What's not explained in the wiki is that the "renderers" it is referring to come from the default renderer list.
This leads me to believe that you have changed a setting somewhere over the course of your computer's history, or somewhere in your Visual Studio settings, that results in no list being found. Therefore you must explicitly inform SDL which renderer to use because of your machine's settings; otherwise an argument of 0 should work just fine. In the end this doesn't hurt, as it's better to be explicit in your code rather than implicit whenever possible.
(That said, all of my deductions are based off of the fact that I am assuming that everything you said that works, works. If this is not true, then the issue could be because of a vast variety of reasons due to the lack of error checking.)
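In practice, being explicit just means passing a real flag instead of 0. A small sketch, reusing the window w from the question; SDL_RENDERER_SOFTWARE is shown only because that is what the asker later reported working:
/* Request a specific backend explicitly rather than relying on the default list. */
SDL_Renderer * r = SDL_CreateRenderer(w, -1, SDL_RENDERER_SOFTWARE);
/* or, for hardware acceleration with vsync: */
/* SDL_Renderer * r = SDL_CreateRenderer(w, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC); */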

Equivalent of "Invalidate Rect" / "WM_PAINT" in X11

I'm porting some code from Windows to Xlib. In the Windows code, I can force a redraw by calling InvalidateRect and then handling the corresponding WM_PAINT message. However, I am having trouble finding out how to do this in X11/Xlib. I see there is an Expose message, but I'm not sure if that is the same thing.
If it matters, I need to do this to force the window to render at a certain frame rate for an OpenGL-based program.
To expand slightly on the useful answers given by BЈовић,
With raw Xlib you can draw at any time in a single thread, because every Xlib function specifies the full display, window, and context. AFAIK, with multithreading all bets are off.
You must also have an Expose event handler, and select for those events, if you're in a desktop environment. It won't hurt to have one even if you're writing a full-screen program.
Most toolkits are not as flexible, only drawing in a designated event handler (though they are much nicer to use in many other ways), and have some equivalent of the Windows InvalidateRect. In raw Xlib you get the same effect by sending yourself an Expose event. Doing so won't lead to any real performance problems, will make the code more understandable to other programmers, and will make it easier to port, so you might as well.
There are also the XClearArea and XClearWindow functions, which will generate Expose events for you, but they first erase part or all of the window with the background color, which might lead to flickering.
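As an illustration only, a one-line sketch of that approach, assuming dis and win are your Display pointer and Window:
/* Clear the whole window (a width/height of 0 extends to the edges) and ask the
   server to generate Expose events for the cleared area. */
XClearArea(dis, win, 0, 0, 0, 0, True);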
With OpenGL it gets a bit more complicated because you have to work with GLX as well. I have a very simple OpenGL/Xlib program online at
http://cs.anu.edu.au/~hugh.fisher/3dteach/
which might be useful as an example.
You need to handle Expose events. This tutorial explains, with an example, how to handle them:
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/Xos.h>
#include <X11/Xatom.h>
#include <X11/keysym.h>
/*Linux users will need to add -ldl to the Makefile to compile
*this example.
*/
Display *dis;
Window win;
XEvent report;
GC green_gc;
XColor green_col;
Colormap colormap;
/*
Try changing the green[] = below to a different color.
The color can also be from /usr/X11R6/lib/X11/rgb.txt, such as RoyalBlue4.
A # (number sign) is only needed when using hexadecimal colors.
*/
char green[] = "#00FF00";
int main() {
    dis = XOpenDisplay(NULL);
    win = XCreateSimpleWindow(dis, RootWindow(dis, 0), 1, 1, 500, 500, 0, BlackPixel(dis, 0), BlackPixel(dis, 0));
    XMapWindow(dis, win);
    colormap = DefaultColormap(dis, 0);
    green_gc = XCreateGC(dis, win, 0, 0);
    XParseColor(dis, colormap, green, &green_col);
    XAllocColor(dis, colormap, &green_col);
    XSetForeground(dis, green_gc, green_col.pixel);
    XSelectInput(dis, win, ExposureMask | KeyPressMask | ButtonPressMask);
    XDrawRectangle(dis, win, green_gc, 1, 1, 497, 497);
    XDrawRectangle(dis, win, green_gc, 50, 50, 398, 398);
    XFlush(dis);
    while (1) {
        XNextEvent(dis, &report);
        switch (report.type) {
        case Expose:
            fprintf(stdout, "I have been exposed.\n");
            XDrawRectangle(dis, win, green_gc, 1, 1, 497, 497);
            XDrawRectangle(dis, win, green_gc, 50, 50, 398, 398);
            XFlush(dis);
            break;
        case KeyPress:
            /* Close the program if q is pressed. */
            if (XLookupKeysym(&report.xkey, 0) == XK_q) {
                exit(0);
            }
            break;
        }
    }
    return 0;
}
I may have misunderstood the question. If you want to generate Expose events from within your application, you can create and fill in an Expose event and send it using XSendEvent.
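For completeness, a minimal sketch of that, reusing dis and win from the example above (memset needs <string.h>):
XEvent ev;
memset(&ev, 0, sizeof(ev));
ev.xexpose.type = Expose;
ev.xexpose.display = dis;
ev.xexpose.window = win;
ev.xexpose.x = 0;
ev.xexpose.y = 0;
ev.xexpose.width = 500;  /* area you want repainted */
ev.xexpose.height = 500;
ev.xexpose.count = 0;    /* no further Expose events follow */
XSendEvent(dis, win, False, ExposureMask, &ev);
XFlush(dis);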

GDI Acceleration In Windows 7 / Drawing To Memory Bitmap

My GDI program runs fine on Windows XP, but on Windows Vista and 7 it looks pretty terrible due to the lack of GDI hardware acceleration. I recall reading an article a few years back saying that Windows 7 added hardware acceleration to some GDI functions, including the BitBlt() function. Supposedly, if you draw to a memory bitmap and then use BitBlt() to copy the image to your main window, it runs at about the same speed as XP. Is that true?
If it is true, how do you do it? I'm terrible at programming and am having a bit of trouble. I created the class below to try and get it working:
class CMemBmpTest
{
private:
    CDC m_dcDeviceContext;
    CBitmap m_bmpDrawSurface;
public:
    CMemBmpTest();
    ~CMemBmpTest();
    void Init();
    void Draw();
};

CMemBmpTest::CMemBmpTest()
{
}

CMemBmpTest::~CMemBmpTest()
{
    m_bmpDrawSurface.DeleteObject();
    m_dcDeviceContext.DeleteDC();
}

void CMemBmpTest::Init()
{
    m_dcDeviceContext.CreateCompatibleDC(NULL);
    m_bmpDrawSurface.CreateCompatibleBitmap(&m_dcDeviceContext, 100, 100);
}

void CMemBmpTest::Draw()
{
    m_dcDeviceContext.SelectObject(I.m_brshRedBrush);
    m_dcDeviceContext.PatBlt(0, 0, 100, 100, BLACKNESS);
}
In the OnPaint() function of the window I added the line:
pDC->BitBlt(2, 2, 100, 100, &m_MemBmp, 0, 0, SRCCOPY);
I was hoping to see a 100x100 black box in the corner of the window, but it didn't work. I'm probably doing everything horrifically wrong, so I would be grateful if somebody could advise me on how to do this correctly.
Thanks for any advice you can offer.
AFAIK you get hardware acceleration on GDI functions on all versions of Windows (I'm happy to stand corrected on this if someone can explain it in more detail). But either way, you're correct that double buffering (which is what you're talking about) provides a massive performance boost (and, more importantly, no flickering) relative to drawing directly to the screen.
I've done quite a lot of this and have come up with a method that lets you use GDI and GDI+ at the same time in your drawing functions while still benefiting from the hardware acceleration of BitBlt when drawing to the screen. GDI+ isn't hardware accelerated AFAIK, but it can be very useful for more complex drawing techniques, so it's handy to have the option available.
So, my basic view class will have the following members:
Graphics *m_gr;
CDC *m_pMemDC;
CBitmap *m_pbmpMemBitmap;
Then the class itself will have code something like this:
/*======================================================================================*/
CBaseControlPanel::CBaseControlPanel()
/*======================================================================================*/
{
    m_pMemDC = NULL;
    m_gr = NULL;
    m_pbmpMemBitmap = NULL;
}

/*======================================================================================*/
CBaseControlPanel::~CBaseControlPanel()
/*======================================================================================*/
{
    // Clean up all the GDI and GDI+ objects we've used
    if(m_pMemDC)
    { delete m_pMemDC; m_pMemDC = NULL; }
    if(m_pbmpMemBitmap)
    { delete m_pbmpMemBitmap; m_pbmpMemBitmap = NULL; }
    if(m_gr)
    { delete m_gr; m_gr = NULL; }
}

/*======================================================================================*/
void CBaseControlPanel::OnPaint()
/*======================================================================================*/
{
    // pDC and rcUpdate come from the paint DC and the invalid rect in the real handler
    pDC->BitBlt(rcUpdate.left, rcUpdate.top, rcUpdate.Width(), rcUpdate.Height(),
                m_pMemDC, rcUpdate.left, rcUpdate.top, SRCCOPY);
}

/*======================================================================================*/
void CBaseControlPanel::vCreateScreenBuffer(const CSize szPanel, CDC *pDesktopDC)
// In :
//   szPanel = The size that we want the double buffer bitmap to be
// Out : None
/*======================================================================================*/
{
    // Delete anything we're already using first
    if(m_pMemDC)
    {
        delete m_gr;
        m_gr = NULL;
        delete m_pMemDC;
        m_pMemDC = NULL;
        delete m_pbmpMemBitmap;
        m_pbmpMemBitmap = NULL;
    }
    // Make a compatible DC
    m_pMemDC = new CDC;
    m_pMemDC->CreateCompatibleDC(pDesktopDC);
    // Create a new bitmap object
    m_pbmpMemBitmap = new CBitmap;
    // Create the new bitmap
    m_pbmpMemBitmap->CreateCompatibleBitmap(pDesktopDC, szPanel.cx, szPanel.cy);
    m_pbmpMemBitmap->SetBitmapDimension(szPanel.cx, szPanel.cy);
    // Select the new bitmap into the memory DC
    m_pMemDC->SelectObject(m_pbmpMemBitmap);
    // Then create a GDI+ Graphics object
    m_gr = Graphics::FromHDC(m_pMemDC->m_hDC);
    // And update the bitmap
    rcUpdateBitmap(rcNewSize, true);
}

/*======================================================================================*/
CRect CBaseControlPanel::rcUpdateBitmap(const CRect &rcInvalid, const bool bInvalidate, const bool bDrawBackground /*=true*/)
// Redraws an area of the double buffered bitmap
// In :
//   rcInvalid - The rect to redraw
//   bInvalidate - Whether to refresh to the screen when we're done
//   bDrawBackground - Whether to draw the background first (can give speed benefits if we don't need to)
// Out : None
/*======================================================================================*/
{
    // The memory bitmap is actually updated here
    // Then make the screen update
    if(bInvalidate)
    { InvalidateRect(rcInvalid); }
}
So you can then either just draw directly to the memory DC and call InvalidateRect(), or put all your drawing code in rcUpdateBitmap(), which was more convenient for the way I was using it. You'll need to call vCreateScreenBuffer() in OnSize().
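For example, a rough sketch of that OnSize() hook, assuming the panel derives from CWnd, ON_WM_SIZE() is in the message map, and the members above are in place:
void CBaseControlPanel::OnSize(UINT nType, int cx, int cy)
{
    CWnd::OnSize(nType, cx, cy);
    if (cx > 0 && cy > 0)
    {
        // Rebuild the off-screen bitmap at the new size; a client DC is
        // compatible with the screen format for CreateCompatibleBitmap.
        CClientDC dcScreen(this);
        vCreateScreenBuffer(CSize(cx, cy), &dcScreen);
    }
}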
Hopefully that gives you some ideas anyway. Double buffering is definitely the way to go for speed and a non-flickering UI. It can take a little effort to get going, but it's definitely worth it.
CMemDC:
http://www.codeproject.com/Articles/33/Flicker-Free-Drawing-In-MFC
http://msdn.microsoft.com/en-us/library/cc308997(v=vs.90).aspx