SDL video overlay flickers with SDL_Flip - c++

I use SDL and libav in C++ to draw a video on the screen in Linux. Most of my video-opening code is based on this tutorial, but I replaced some deprecated functions. I initialize SDL like this:
SDL_Init(SDL_INIT_EVERYTHING);
const SDL_VideoInfo * info = SDL_GetVideoInfo();
screen = SDL_SetVideoMode(info->current_w, info->current_h, 0, SDL_SWSURFACE | SDL_FULLSCREEN);
I am not going to post the whole code since it is pretty big, but the following shows how I try to display a video overlay. Note that some variables, like formatCtx and packet, are members of my Video class.
void Video::GetOverlay(SDL_Overlay * overlay) {
    int frameFinished;
    while (av_read_frame(formatCtx, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            avcodec_decode_video2(codecCtx, frame, &frameFinished, &packet);
            if (frameFinished) {
                SDL_LockYUVOverlay(overlay);
                // Point an AVPicture at the overlay's planes (U and V are swapped for YV12)
                AVPicture pict;
                pict.data[0] = overlay->pixels[0];
                pict.data[1] = overlay->pixels[2];
                pict.data[2] = overlay->pixels[1];
                pict.linesize[0] = overlay->pitches[0];
                pict.linesize[1] = overlay->pitches[2];
                pict.linesize[2] = overlay->pitches[1];
                // Convert the decoded frame to YUV420P directly into the overlay
                SwsContext * ctx = sws_getContext(codecCtx->width, codecCtx->height, codecCtx->pix_fmt,
                                                  codecCtx->width, codecCtx->height, PIX_FMT_YUV420P,
                                                  SWS_BICUBIC, NULL, NULL, NULL);
                sws_scale(ctx, frame->data, frame->linesize, 0, codecCtx->height, pict.data, pict.linesize);
                sws_freeContext(ctx);
                SDL_UnlockYUVOverlay(overlay);
                ++frameIndex;
                return;
            }
        }
        av_free_packet(&packet);
    }
}
And then in my main loop:
SDL_Overlay * overlay = SDL_CreateYUVOverlay(video->GetWidth(), video->GetHeight(), SDL_YV12_OVERLAY, screen);
while (true) {
    video->GetOverlay(overlay);
    SDL_Rect rect = { 400, 200, video->GetWidth(), video->GetHeight() };
    SDL_DisplayYUVOverlay(overlay, &rect);
    SDL_Flip(screen);
}
This works: the video plays, but it flickers a lot, as if it is drawing an image in the same place each frame. When I remove the call to SDL_Flip(screen), the video plays fine. It runs too fast, since I haven't worked out the video timing yet, but with a temporary SDL_Delay(10) it looks pretty good.
But when I remove SDL_Flip to show my video, I can't draw anything else on the screen: both SDL_BlitSurface and SDL_FillRect fail to draw anything. I already tried adding SDL_DOUBLEBUF to the flags, but that did not change anything.
I can provide more code if that is needed, but I think the problem is somewhere in the code that I have posted, since everything else is working fine (drawing images, or displaying a video without SDL_Flip).
What am I doing wrong?

Since you're using SDL_SWSURFACE, don't use SDL_Flip(screen); use SDL_UpdateRect instead. You don't need to set SDL_DOUBLEBUF.
http://sdl.beuc.net/sdl.wiki/SDL_UpdateRect
That's what I do and I don't get flicker.
I set up my screen like this:
screen = SDL_SetVideoMode(width, height, 32, SDL_SWSURFACE | SDL_FULLSCREEN);
In my main loop I call:
SDL_UpdateRect(screen, 0, 0, 0, 0);
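Put together with the loop from the question, that would look roughly like this (untested sketch, same variable names as above):
while (true) {
    video->GetOverlay(overlay);
    SDL_Rect rect = { 400, 200, video->GetWidth(), video->GetHeight() };
    SDL_DisplayYUVOverlay(overlay, &rect);
    // Update the software surface in place; (0, 0, 0, 0) refreshes the whole screen
    SDL_UpdateRect(screen, 0, 0, 0, 0);
}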

You should use double buffering to prevent flickering.
screen = SDL_SetVideoMode(info->current_w, info->current_h, 0, SDL_SWSURFACE | SDL_FULLSCREEN | SDL_DOUBLEBUF);

I still think this isn't the best solution, but I found something that works!
The command SDL_UpdateRect(screen, 0, 0, 0, 0); will update the whole screen. If I only update the parts of the screen where no video is drawn, the video won't flicker anymore. I think it might have something to do with SDL_Overlays being handled differently than normal surfaces.
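For reference, a rough sketch of what that looks like with SDL_UpdateRects, using the (400, 200) position and the overlay size from the question (untested):
// Update only the regions around the video, leaving the overlay area alone
Sint16 vx = 400, vy = 200;
Uint16 vw = video->GetWidth(), vh = video->GetHeight();
SDL_Rect dirty[4];
dirty[0].x = 0;       dirty[0].y = 0;       dirty[0].w = screen->w;             dirty[0].h = vy;                     // above the video
dirty[1].x = 0;       dirty[1].y = vy + vh; dirty[1].w = screen->w;             dirty[1].h = screen->h - (vy + vh);  // below it
dirty[2].x = 0;       dirty[2].y = vy;      dirty[2].w = vx;                    dirty[2].h = vh;                     // to its left
dirty[3].x = vx + vw; dirty[3].y = vy;      dirty[3].w = screen->w - (vx + vw); dirty[3].h = vh;                     // to its right
SDL_UpdateRects(screen, 4, dirty);
Anything inside the overlay's rectangle is then left to SDL_DisplayYUVOverlay itself.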

Related

SDL_CreateRGBSurfaceFrom / SDL_BlitSurface - I see old frames on my emulator

I'm working on a Space Invaders emulator and for display output I'm using SDL2.
The problem is that in the output window I see all the frames drawn since the emulation started!
Basically the important piece of code is this:
Intel8080 mainObject; // My Intel 8080 CPU emulator
mainObject.loadROM();
//Main loop flag
bool quit = false;
//Event handler
SDL_Event e;
//While application is running
while (!quit)
{
    //Handle events on queue
    while (SDL_PollEvent(&e) != 0)
    {
        //User requests quit
        if (e.type == SDL_QUIT)
        {
            quit = true;
        }
    }
    if (mainObject.frameReady)
    {
        mainObject.frameReady = false;
        gHelloWorld = SDL_CreateRGBSurfaceFrom(&mainObject.frameBuffer32, 256, 224, 32, 4 * 256,
                                               0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);
        //Apply the image
        SDL_BlitSurface(gHelloWorld, NULL, gScreenSurface, NULL);
        //Update the surface
        SDL_UpdateWindowSurface(gWindow);
    }
    mainObject.executeROM();
}
where Intel8080 is my CPU emulator code and mainObject.frameBuffer32 is the Space Invaders video RAM, which I converted from 1bpp to 32bpp in order to use the SDL_CreateRGBSurfaceFrom function.
The emulation is working fine, but I see all the frames generated since the emulator started!
I tried changing the alpha value in the 4 bytes of each RGBA pixel, but nothing changes.
This is happening because it looks like you're rendering the game without first clearing the window. Basically, you should fill the entire window with a color and then render on top of it constantly. The idea is that filling the window with a specific color before rendering over it is the equivalent of erasing the previous frame (most modern computers are powerful enough to handle this).
You might want to read up on SDL's SDL_FillRect function; it'll allow you to fill the entire screen with a specific color.
Rendering pseudocode:
while(someCondition)
{
    [...]
    // Note, I'm not sure if "gScreenSurface" is the proper variable to use here.
    // I got it from reading your code.
    SDL_FillRect(gScreenSurface, NULL, SDL_MapRGB(gScreenSurface->format, 0, 0, 0));
    SDL_BlitSurface(gHelloWorld, NULL, gScreenSurface, NULL);
    SDL_UpdateWindowSurface(gWindow);
    [...]
}
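Applied to the frame-update block from the question, that would look roughly like this (untested; the SDL_FreeSurface call is an extra suggestion, since the wrapper surface is recreated every frame):
if (mainObject.frameReady)
{
    mainObject.frameReady = false;
    // Erase whatever the previous frame left behind
    SDL_FillRect(gScreenSurface, NULL, SDL_MapRGB(gScreenSurface->format, 0, 0, 0));
    gHelloWorld = SDL_CreateRGBSurfaceFrom(&mainObject.frameBuffer32, 256, 224, 32, 4 * 256,
                                           0xff000000, 0x00ff0000, 0x0000ff00, 0x000000ff);
    // Draw the new frame on top of the cleared surface, then show it
    SDL_BlitSurface(gHelloWorld, NULL, gScreenSurface, NULL);
    SDL_FreeSurface(gHelloWorld);
    SDL_UpdateWindowSurface(gWindow);
}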

SDL_RenderReadPixels returns a black rectangle?

I'm trying to use SDL2 to take a screenshot of the entire desktop in windows. However, when looking at the resulting .bmp file it's completely black. Any help would be appreciated.
Here's my code:
SDL_Init(SDL_INIT_EVERYTHING);
SDL_Window* window = SDL_CreateWindowFrom(GetDesktopWindow());
int w, h;
SDL_GetWindowSize(window, &w, &h);
uint32_t wnd_pix_fmt = SDL_GetWindowPixelFormat(window);
if(wnd_pix_fmt == SDL_PIXELFORMAT_UNKNOWN)
    SDL_ShowSimpleMessageBox(SDL_MESSAGEBOX_ERROR, "Window pix fmt error", SDL_GetError(), NULL);
SDL_Surface* screenshot = SDL_CreateRGBSurfaceWithFormat(0, w, h, 32, wnd_pix_fmt);
if(!screenshot)
    SDL_ShowSimpleMessageBox(SDL_MESSAGEBOX_ERROR, "RGB surface error", SDL_GetError(), NULL);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, 0);
if(!renderer)
    SDL_ShowSimpleMessageBox(SDL_MESSAGEBOX_ERROR, "Renderer error", SDL_GetError(), NULL);
if(SDL_RenderReadPixels(renderer, &screenshot->clip_rect, screenshot->format, screenshot->pixels, screenshot->pitch))
    SDL_ShowSimpleMessageBox(SDL_MESSAGEBOX_ERROR, "RendererReadPixels error", SDL_GetError(), NULL);
SDL_SaveBMP(screenshot, "screenshot.bmp");
SDL_FreeSurface(screenshot);
NOTE: Even when I set SDL_Window* window to a window created with SDL_CreateWindow, the result is still completely black. On a different forum, someone mentioned this may have to do with a double buffering issue, but I'm unaware of how to solve such an issue.
You have a fundamental misconception of what SDL_RenderReadPixels() does. Indeed it is used to take "screenshots", but the "screenshot" will be of what you have rendered using that specific renderer, nothing else. You will not be able to accomplish what you want using anything available in SDL.
Taking a screenshot of the entire screen typically requires elevated permissions (I have no idea about Windows) and is outside the scope of SDL.
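To illustrate what it does do, a minimal sketch (SDL2, untested) that reads back something the renderer itself drew:
SDL_Init(SDL_INIT_VIDEO);
SDL_Window*   win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
// Draw something with this renderer
SDL_SetRenderDrawColor(ren, 255, 0, 0, 255);
SDL_RenderClear(ren);
// Read the renderer's output back before presenting, then save it
SDL_Surface* shot = SDL_CreateRGBSurfaceWithFormat(0, 640, 480, 32, SDL_PIXELFORMAT_ARGB8888);
SDL_RenderReadPixels(ren, NULL, shot->format->format, shot->pixels, shot->pitch);
SDL_SaveBMP(shot, "renderer_output.bmp");
SDL_FreeSurface(shot);
SDL_RenderPresent(ren);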

gdi+ draws string inverted

I am trying to write text onto a bitmap I am getting from a USB camera using DirectShow.
The problem is that the text comes out mirrored and upside down, and I don't know why.
Here is the code that writes the text:
BITMAPINFOHEADER bih = m_videoInfo.bmiHeader;
Bitmap bmp(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, pBuffer);
Graphics g(&bmp);
if (this->introTimer->timeToDo())
{
    RectF pos(10, 10, 100, 100);
    SolidBrush brush(Color::Black);
    Font font(FontFamily::GenericSerif(), 30);
    hr = g.DrawString(this->introText, -1, &font, pos, StringFormat::GenericDefault(), &brush);
    return hr;
}
I am not sure if my code is the only thing that affects the drawing of the string. Maybe there is some configuration or something.
Update
I tried using a negative height as suggested by Hans Passant. The result is that the text is not written at all.
The obvious thing to do would be to set the transformation on GDI+.
Essentially you need to invert the Y axis (though by doing that alone it would now draw off screen), so you then need to translate it back down by the window height.
Something like this:
graphics.Transform = new Matrix2D( 1,  0,
                                   0, -1,
                                   0, -windowHeight );
Then draw as normal.
(It's worth noting that I'm suggesting this without testing it. The Y translation may not need to be negative, so try both!)
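Since the question is using GDI+ from C++ rather than System.Drawing, the same transform would look roughly like this (untested; bih is the BITMAPINFOHEADER from the question):
// Flip the Y axis, then shift the origin back down by the bitmap height
Gdiplus::Matrix flip(1.0f, 0.0f,
                     0.0f, -1.0f,
                     0.0f, (Gdiplus::REAL)bih.biHeight);
g.SetTransform(&flip);
// ...then draw the string as before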
The scanlines are stored upside down, with the first scan (scan 0) in memory being the bottommost scan in the image. This is another artifact of Presentation Manager compatibility. GDI automatically inverts the image during the Set and Get operations.
Regarding the suggestion from Hans Passant: did you apply the negative height to the position? He suggests using a negative height in bih:
BITMAPINFOHEADER bih = m_videoInfo.bmiHeader;
bih.biHeight = -bih.biHeight;
Bitmap bmp(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, pBuffer);
...
Also copying another solution from this page, in case it helps:
void BitmapControl::OnPaint()
{
    EnterCriticalSection (&CriticalSection);
    if (_handleBMP)
    {
        CPaintDC dc(this);
        //dc.SetMapMode(MM_ISOTROPIC);
        dc.SetMapMode(MM_LOENGLISH);
        CDC dcMem;
        dcMem.CreateCompatibleDC(&dc);
        CRect rect;
        GetClientRect(&rect);
        dc.DPtoLP(&rect);
        CBitmap* pBmpOld = dcMem.SelectObject(CBitmap::FromHandle(_handleBMP));
        //tst
        dc.SetStretchBltMode(COLORONCOLOR);
        //BitBlt(dc,rect.left,-0,rect.Width(),rect.Height(),dcMem,rect.left,rect.top,SRCCOPY); //works with MM_TEXT but upsidedown
        BitBlt(dc,0,rect.bottom,rect.Width(),-rect.Height(),dcMem,0,0,SRCCOPY); //works with MM_LOENGLISH
        dcMem.SelectObject(pBmpOld);
        DeleteDC(dc);
        DeleteDC(dcMem);
        DeleteObject(_handleBMP);
        DeleteObject(pBmpOld);
        _handleBMP = NULL;
    }
    LeaveCriticalSection (&CriticalSection);
}

SDL image tearing / stuttering

First of all, let me make some things clear:
My monitor is at 60 hertz
I cap my FPS to 60, and it seems to be working correctly
I have the double buffering flag active
I made a backbuffer myself, and made sure to draw to it, and afterwards to the screen
This problem happens both in fullscreen and windowed mode
This is my main function (it contains all the code):
SDL_Init(SDL_INIT_EVERYTHING);
SDL_Surface * backbuffer = NULL;
SDL_Surface * screen = NULL;
SDL_Surface * box = NULL;
SDL_Surface * background = NULL;
SDL_Rect * rect = new SDL_Rect();
double FPS = 60;
double next_time;
bool drawn = false;
rect->x = 0;
rect->y = 0;
screen = SDL_SetVideoMode(1920, 1080, 32, SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
if(screen == NULL) {
    return 0;
}
background = SDL_LoadBMP("background.bmp");
box = SDL_LoadBMP("box.bmp");
if((background == NULL) || (box == NULL)) {
    return 0;
}
background = SDL_DisplayFormat(background);
box = SDL_DisplayFormat(box);
backbuffer = SDL_CreateRGBSurface(SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN,
                                  1920, 1080, 32, 0, 0, 0, 0);
if(backbuffer == NULL) {
    return 0;
}
next_time = (double)SDL_GetTicks() + (1000.0 / FPS);
while(true) {
    if(!drawn) {
        SDL_BlitSurface(background, NULL, backbuffer, NULL);
        SDL_BlitSurface(box, NULL, backbuffer, rect);
        rect->x += 3;
        drawn = true;
    }
    if((Uint32)next_time <= SDL_GetTicks()) {
        SDL_BlitSurface(backbuffer, NULL, screen, NULL);
        SDL_Flip(screen);
        next_time += 1000.0 / FPS;
        drawn = false;
    }
}
SDL_FreeSurface(backbuffer);
SDL_FreeSurface(background);
SDL_FreeSurface(box);
SDL_FreeSurface(screen);
SDL_Quit();
return 0;
I know this code isn't looking great; it was just a test to see why this happens whenever I write anything in SDL.
Please ignore the ugly code, and let me know if you have any idea what might be causing the moving white square over the black background to show weird artifacts and appear to tear / stutter.
If you need any more information let me know, and I'll update what's needed.
EDIT:
If I don't cap my FPS, it runs at 200-400 fps, which probably means SDL_Flip isn't waiting for the screen refresh.
I don't know if the flags I pass are actually honored.
I checked my flags, and it seems like I can't get the SDL_HWSURFACE and SDL_DOUBLEBUF flags. Could that be causing the problem?
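For reference, the flags SDL actually granted can be checked on the surface SDL_SetVideoMode returns; a rough sketch:
screen = SDL_SetVideoMode(1920, 1080, 32, SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
if (!(screen->flags & SDL_HWSURFACE))
    printf("fell back to a software surface\n");
if (!(screen->flags & SDL_DOUBLEBUF))
    printf("no hardware double buffering / page flipping\n");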
SDL Double buffering on HW_SURFACE
This is the solution to my problem.
All I had to do was add this line:
SDL_putenv("SDL_VIDEODRIVER=directx");
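For the environment variable to take effect, it has to be set before SDL_Init; in the test program above that means roughly:
SDL_putenv("SDL_VIDEODRIVER=directx");   // must come before SDL_Init (SDL 1.2, Windows)
SDL_Init(SDL_INIT_EVERYTHING);
screen = SDL_SetVideoMode(1920, 1080, 32, SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);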
Thanks for the comments, they helped me find the solution. :D
I guess I should check SDL 2.0 out too...

C++ GDI+ Drawing Image on layered window not working

So I've found a lot of code samples, guides, and answers on SO about drawing an image to a layered window. I've tried using pure HBITMAPS and the WIC libs to draw, and now I'm on to GDI+ to draw (which is much simpler and is seemingly easier to use, and thus far it has solved a lot of errors that were caused by faulty WIC code).
I'm currently stuck on UpdateLayeredWindow. No matter what I try I can't get it to work. Right now it's returning 87, ERROR_INVALID_PARAMETER. The question is, which parameter is incorrect? I'm stumped! The code below seems to be the solution, other than the fact that UpdateLayeredWindow is refusing to work.
What am I doing wrong?
Here is the code that sets up the HDC/bitmap information/graphics object.
// Create DC
_oGrphInf.canvasHDC = GetDC(_hwndWindow);
// Create drawing 'canvas'
_oGrphInf.lpBits = NULL;
_oGrphInf.bmpCanvas = CreateDIBSection(_oGrphInf.canvasHDC,
&_oGrphInf.bmpWinInformation, DIB_RGB_COLORS,
&_oGrphInf.lpBits, NULL, 0);
// Create graphics object
_oGrphInf.graphics = new Gdiplus::Graphics(_oGrphInf.canvasHDC);
The above works fine - I step through it and all of the pointers work.
And here is the method that draws the PNG.
void Splash::DrawPNG(PNG* lpPNG, int x, int y)
{
    LOGD("Drawing bitmap!");
    HDC hdcMem = CreateCompatibleDC(_oGrphInf.canvasHDC);
    // Select
    HBITMAP bmpOld = (HBITMAP)SelectObject(hdcMem, _oGrphInf.bmpCanvas);
    Gdiplus::Color trans(0, 0, 0, 0);
    _oGrphInf.graphics->Clear(trans);
    _oGrphInf.graphics->DrawImage(lpPNG->GetImage(), x, y);
    _oGrphInf.graphics->Flush();
    SIZE szSize = {_oGrphInf.bmpWinInformation.bmiHeader.biWidth,
                   _oGrphInf.bmpWinInformation.bmiHeader.biHeight};
    // Setup drawing location
    POINT ptLoc = {0, 0};
    POINT ptSrc = {0, 0};
    // Set up alpha blending
    BLENDFUNCTION blend = {0};
    blend.BlendOp = AC_SRC_OVER;
    blend.SourceConstantAlpha = 255;
    blend.AlphaFormat = AC_SRC_ALPHA;
    blend.BlendFlags = 0;
    // Update
    if(UpdateLayeredWindow(_hwndWindow, _oGrphInf.canvasHDC, &ptLoc,
                           &szSize, hdcMem, &ptSrc,
                           (COLORREF)RGB(0, 0, 0),
                           &blend, ULW_ALPHA) == FALSE)
        LOGE("Could not update layered window: %u", GetLastError());
    // Delete temp objects
    SelectObject(hdcMem, bmpOld);
    DeleteObject(hdcMem);
    DeleteDC(hdcMem);
}
Pulling my hair out! Help?
EDIT: I just decided to re-write the call to the UpdateLayeredWindow function, which solved the incorrect-parameter issue. Here is what I came up with. However, it still does not work. What am I doing wrong?
UpdateLayeredWindow(_hwndWindow, _oGrphInf.canvasHDC,
NULL, NULL, hdcMem, &ptLoc,
RGB(0, 0, 0), &blend, ULW_ALPHA)
For alpha information to be preserved in drawing operations, you have to make your Graphics object based on a memory-backed Bitmap object, not an HDC, and of course your Bitmap needs to be in a format with an alpha channel.
You'll need to use this Bitmap constructor: http://msdn.microsoft.com/en-us/library/ms536315%28v=vs.85%29.aspx
Just give that a stride of 0, a pointer to your DIB's bits, and PixelFormat32bppPARGB.
Then use Graphics::FromImage to create your Graphics object.
I've never used UpdateLayeredWindow, so I can't verify that that side of it is correct.
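Put together with the setup code from the question, that would look roughly like this (untested sketch; the width/height reads from bmpWinInformation are assumptions about how the BITMAPINFO was filled in, and width * 4 is used here as an explicit 32bpp stride):
int width  = _oGrphInf.bmpWinInformation.bmiHeader.biWidth;
int height = _oGrphInf.bmpWinInformation.bmiHeader.biHeight;
// Wrap the DIB section's bits in a bitmap with an alpha channel,
// and draw through a Graphics created from that bitmap instead of from the HDC
Gdiplus::Bitmap canvas(width, height, width * 4, PixelFormat32bppPARGB,
                       (BYTE*)_oGrphInf.lpBits);
Gdiplus::Graphics* g = Gdiplus::Graphics::FromImage(&canvas);
g->Clear(Gdiplus::Color(0, 0, 0, 0));
g->DrawImage(lpPNG->GetImage(), x, y);
g->Flush();
delete g;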