QWidget::paintEngine being called from QCoreApplication::processEvents - opengl

I'm converting an OS X application from Qt 4/Carbon to Qt 5.11 with QOpenGLWidget.
I've moved the drawing calls into my overridden QOpenGLWidget::paintGL().
The problem is I'm still getting these messages on the console:
QWidget::paintEngine: Should no longer be called
From a stack trace, I've discovered that this is ultimately being called from QCoreApplication::processEvents, which I'm calling from my own internal event loop.
Here's a stack trace (edited for readability):
thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: libQt5Widgets_debug.5.dylib`QWidget::paintEngine()
frame #1: libQt5Widgets_debug.5.dylib`QOpenGLWidget::paintEngine(0)
frame #2: libQt5Gui_debug.5.dylib`QPainter::begin()
frame #3: libQt5Gui_debug.5.dylib`QPainter::QPainter()
frame #4: libQt5Gui_debug.5.dylib`QPainter::QPainter()
frame #5: libQt5Widgets_debug.5.dylib`QWidgetPrivate::drawWidget()
frame #6: libQt5Widgets_debug.5.dylib`QWidgetPrivate::repaint_sys()
frame #7: libQt5Widgets_debug.5.dylib`QWidgetPrivate::syncBackingStore()
frame #8: libQt5Widgets_debug.5.dylib`QWidgetWindow::handleExposeEvent()
frame #9: libQt5Widgets_debug.5.dylib`QWidgetWindow::event()
frame #10: libQt5Widgets_debug.5.dylib`QApplicationPrivate::notify_helper()
frame #11: libQt5Widgets_debug.5.dylib`QApplication::notify()
frame #12: libQt5Core_debug.5.dylib`QCoreApplication::notifyInternal2()
frame #13: libQt5Gui_debug.5.dylib`QCoreApplication::sendSpontaneousEvent()
frame #14: libQt5Gui_debug.5.dylib`QGuiApplicationPrivate::processExposeEvent()
frame #15: libQt5Gui_debug.5.dylib`QGuiApplicationPrivate::processWindowSystemEvent()
frame #16: libQt5Gui_debug.5.dylib`bool QWindowSystemInterfacePrivate::handleWindowSystemEvent<QWindowSystemInterface::SynchronousDelivery>()
frame #17: libQt5Gui_debug.5.dylib`void QWindowSystemInterface::handleExposeEvent()
frame #18: libqcocoa_debug.dylib`QCocoaWindow::handleExposeEvent()
frame #19: libqcocoa_debug.dylib`::-[QNSView updateRegion:](self=0x000061200039fc40, _cmd="updateRegion:", dirtyRegion=QRegion # 0x00007ffeefbf9b18)
frame #20: libqcocoa_debug.dylib`::-[QNSView updateLayer](self=0x000061200039fc40, _cmd="updateLayer")
frame #21: AppKit`_NSViewUpdateLayer + 45
frame #22: AppKit`-[_NSViewBackingLayer display] + 495
frame #23: QuartzCore`CA::Layer::display_if_needed(CA::Transaction*) + 634
frame #24: QuartzCore`CA::Context::commit_transaction(CA::Transaction*) + 319
frame #25: QuartzCore`CA::Transaction::commit() + 576
frame #26: QuartzCore`CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) + 66
frame #27: CoreFoundation`CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 23
frame #28: CoreFoundation`__CFRunLoopDoObservers + 452
frame #29: CoreFoundation`CFRunLoopRunSpecific + 523
frame #30: HIToolbox`RunCurrentEventLoopInMode + 293
frame #31: HIToolbox`ReceiveNextEventCommon + 618
frame #32: HIToolbox`_BlockUntilNextEventMatchingListInModeWithFilter + 64
frame #33: AppKit`_DPSNextEvent + 997
frame #34: AppKit`-[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1362
frame #35: libqcocoa_debug.dylib`QCocoaEventDispatcher::processEvents(this=0x00006040000dbdf0, flags=(i = 0)) at qcocoaeventdispatcher.mm:482
frame #36: libQt5Core_debug.5.dylib`QCoreApplication::processEvents(flags=(i = 0)) at qcoreapplication.cpp:1252
The problem is that ::processEvents eventually calls ::paintEngine for the QOpenGLWidget, outside of ::paintGL, and it's totally out of my control.
FWIW, the event driving this is a QEvent::UpdateRequest.
I tried overriding ::event in my QOpenGLWidget-derived class to call QOpenGLWidget::update when it receives a QEvent::UpdateRequest, but that just ended up making the app unresponsive.
How should I handle ::processEvents attempting to draw QOpenGLWidgets?
Thanks!

I fixed this by removing this statement from our QOpenGLWidget subclass:
setAttribute( Qt::WA_PaintOnScreen, true );
Removing this got rid of the paintEngine calls (and solved all kinds of other problems).
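For context, a minimal sketch of what such a subclass might look like after the fix; the class name and GL calls are placeholders, the point is simply that no paint-related widget attributes are set and all drawing happens in paintGL():

#include <QOpenGLWidget>
#include <QOpenGLFunctions>

// Hypothetical subclass: no Qt::WA_PaintOnScreen (or other paint attributes),
// so Qt's backing-store path never asks the widget for a raster paint engine.
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
public:
    explicit MyGLWidget(QWidget *parent = nullptr) : QOpenGLWidget(parent)
    {
        // Intentionally no setAttribute(Qt::WA_PaintOnScreen, true) here.
    }

protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions();
        glClearColor(0.f, 0.f, 0.f, 1.f);
    }

    void paintGL() override
    {
        // All GL drawing lives here; schedule repaints with update().
        glClear(GL_COLOR_BUFFER_BIT);
    }
};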


DirectX Screen Capture - Desktop Duplication API - limited frame rate of AcquireNextFrame

I'm trying to use the Windows Desktop Duplication API to capture the screen and save the raw output to a video. I'm using AcquireNextFrame with a very high timeout value (999 ms). This way I should get every new frame from Windows as soon as it has one, which should naturally be at 60 fps anyway. I end up getting sequences where everything looks good (frames 6-11), and then sequences where things look bad (frames 12-14). If I check AccumulatedFrames
lFrameInfo.AccumulatedFrames
the value is often 2 or higher. From my understanding, this means Windows is saying "hey, hold up, I don't have a frame for you yet", since the calls to AcquireNextFrame take so long. But once Windows does finally give me a frame, it is saying "hey, you were actually too slow and ended up missing a frame". If I could somehow get these frames, I think I would be getting 60 Hz.
This can be further clarified with logging:
I0608 10:40:16.964375 4196 window_capturer_dd.cc:438] 206 - Frame 6 start acquire
I0608 10:40:16.973867 4196 window_capturer_dd.cc:451] 216 - Frame 6 acquired
I0608 10:40:16.981364 4196 window_capturer_dd.cc:438] 223 - Frame 7 start acquire
I0608 10:40:16.990864 4196 window_capturer_dd.cc:451] 233 - Frame 7 acquired
I0608 10:40:16.998364 4196 window_capturer_dd.cc:438] 240 - Frame 8 start acquire
I0608 10:40:17.007876 4196 window_capturer_dd.cc:451] 250 - Frame 8 acquired
I0608 10:40:17.015393 4196 window_capturer_dd.cc:438] 257 - Frame 9 start acquire
I0608 10:40:17.023905 4196 window_capturer_dd.cc:451] 266 - Frame 9 acquired
I0608 10:40:17.032411 4196 window_capturer_dd.cc:438] 274 - Frame 10 start acquire
I0608 10:40:17.039912 4196 window_capturer_dd.cc:451] 282 - Frame 10 acquired
I0608 10:40:17.048925 4196 window_capturer_dd.cc:438] 291 - Frame 11 start acquire
I0608 10:40:17.058428 4196 window_capturer_dd.cc:451] 300 - Frame 11 acquired
I0608 10:40:17.065943 4196 window_capturer_dd.cc:438] 308 - Frame 12 start acquire
I0608 10:40:17.096945 4196 window_capturer_dd.cc:451] 336 - Frame 12 acquired
I0608 10:40:17.098947 4196 window_capturer_dd.cc:464] 1 FRAMES MISSED on frame: 12
I0608 10:40:17.101444 4196 window_capturer_dd.cc:438] 343 - Frame 13 start acquire
I0608 10:40:17.128958 4196 window_capturer_dd.cc:451] 368 - Frame 13 acquired
I0608 10:40:17.130957 4196 window_capturer_dd.cc:464] 1 FRAMES MISSED on frame: 13
I0608 10:40:17.135459 4196 window_capturer_dd.cc:438] 377 - Frame 14 start acquire
I0608 10:40:17.160959 4196 window_capturer_dd.cc:451] 399 - Frame 14 acquired
I0608 10:40:17.162958 4196 window_capturer_dd.cc:464] 1 FRAMES MISSED on frame: 14
Frames 6-11 look good; the acquires are roughly 17 ms apart. Frame 12 should be acquired at (300+17=317 ms). Frame 12 starts waiting at 308, but doesn't get anything until 336 ms. Windows didn't have anything for me until the frame after that (300+17+17 ~= 336 ms). Okay, sure, maybe Windows just missed a frame, but when I finally get it, AccumulatedFrames is 2 (meaning I missed a frame because I waited too long before calling AcquireNextFrame). In my understanding, it only makes sense for AccumulatedFrames to be larger than 1 if AcquireNextFrame returns immediately.
Furthermore, I can run PresentMon while my capture software is running. Its logs show MsBetweenDisplayChange for every frame, which is fairly steady at 16.666 ms (with a couple of outliers, but far fewer than my capture software is seeing).
These people (1, 2) seem to have been able to get 60 fps, so I'm wondering what I am doing incorrectly.
My code is based on this:
int main() {
    int FPS = 60;
    int video_length_sec = 5;
    int total_frames = FPS * video_length_sec;
    for (int i = 0; i < total_frames; i++) {
        if (!CaptureSingleFrame()) {
            i--;
        }
    }
}
ComPtr<ID3D11Device> lDevice;
ComPtr<ID3D11DeviceContext> lImmediateContext;
ComPtr<IDXGIOutputDuplication> lDeskDupl;
ComPtr<ID3D11Texture2D> lAcquiredDesktopImage;
ComPtr<ID3D11Texture2D> lGDIImage;
ComPtr<ID3D11Texture2D> lDestImage;
DXGI_OUTPUT_DESC lOutputDesc;
DXGI_OUTDUPL_DESC lOutputDuplDesc;
D3D11_TEXTURE2D_DESC desc;
// Driver types supported
D3D_DRIVER_TYPE gDriverTypes[] = {
    D3D_DRIVER_TYPE_HARDWARE
};
UINT gNumDriverTypes = ARRAYSIZE(gDriverTypes);
// Feature levels supported
D3D_FEATURE_LEVEL gFeatureLevels[] = {
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_1
};
UINT gNumFeatureLevels = ARRAYSIZE(gFeatureLevels);
bool Init() {
    int lresult(-1);
    D3D_FEATURE_LEVEL lFeatureLevel;
    HRESULT hr(E_FAIL);

    // Create device
    for (UINT DriverTypeIndex = 0; DriverTypeIndex < gNumDriverTypes; ++DriverTypeIndex)
    {
        hr = D3D11CreateDevice(
            nullptr,
            gDriverTypes[DriverTypeIndex],
            nullptr,
            0,
            gFeatureLevels,
            gNumFeatureLevels,
            D3D11_SDK_VERSION,
            &lDevice,
            &lFeatureLevel,
            &lImmediateContext);
        if (SUCCEEDED(hr))
        {
            // Device creation success, no need to loop anymore
            break;
        }
        lDevice.Reset();
        lImmediateContext.Reset();
    }
    if (FAILED(hr))
        return false;
    if (lDevice == nullptr)
        return false;

    // Get DXGI device
    ComPtr<IDXGIDevice> lDxgiDevice;
    hr = lDevice.As(&lDxgiDevice);
    if (FAILED(hr))
        return false;

    // Get DXGI adapter
    ComPtr<IDXGIAdapter> lDxgiAdapter;
    hr = lDxgiDevice->GetParent(
        __uuidof(IDXGIAdapter), &lDxgiAdapter);
    if (FAILED(hr))
        return false;
    lDxgiDevice.Reset();

    UINT Output = 0;

    // Get output
    ComPtr<IDXGIOutput> lDxgiOutput;
    hr = lDxgiAdapter->EnumOutputs(
        Output,
        &lDxgiOutput);
    if (FAILED(hr))
        return false;
    lDxgiAdapter.Reset();

    hr = lDxgiOutput->GetDesc(
        &lOutputDesc);
    if (FAILED(hr))
        return false;

    // QI for Output 1
    ComPtr<IDXGIOutput1> lDxgiOutput1;
    hr = lDxgiOutput.As(&lDxgiOutput1);
    if (FAILED(hr))
        return false;
    lDxgiOutput.Reset();

    // Create desktop duplication
    hr = lDxgiOutput1->DuplicateOutput(
        lDevice.Get(), //TODO what im i doing here
        &lDeskDupl);
    if (FAILED(hr))
        return false;
    lDxgiOutput1.Reset();

    // Create GUI drawing texture
    lDeskDupl->GetDesc(&lOutputDuplDesc);
    desc.Width = lOutputDuplDesc.ModeDesc.Width;
    desc.Height = lOutputDuplDesc.ModeDesc.Height;
    desc.Format = lOutputDuplDesc.ModeDesc.Format;
    desc.ArraySize = 1;
    desc.BindFlags = D3D11_BIND_FLAG::D3D11_BIND_RENDER_TARGET;
    desc.MiscFlags = D3D11_RESOURCE_MISC_GDI_COMPATIBLE;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.MipLevels = 1;
    desc.CPUAccessFlags = 0;
    desc.Usage = D3D11_USAGE_DEFAULT;
    hr = lDevice->CreateTexture2D(&desc, NULL, &lGDIImage);
    if (FAILED(hr))
        return false;
    if (lGDIImage == nullptr)
        return false;

    // Create CPU access texture
    desc.Width = lOutputDuplDesc.ModeDesc.Width;
    desc.Height = lOutputDuplDesc.ModeDesc.Height;
    desc.Format = lOutputDuplDesc.ModeDesc.Format;
    std::cout << desc.Width << "x" << desc.Height << "\n\n\n";
    desc.ArraySize = 1;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;
    desc.SampleDesc.Count = 1;
    desc.SampleDesc.Quality = 0;
    desc.MipLevels = 1;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
    desc.Usage = D3D11_USAGE_STAGING;
    return true;
}
void WriteFrameToCaptureFile(ID3D11Texture2D* texture) {
    // Map the staging texture so the CPU can read its pixel data
    D3D11_MAPPED_SUBRESOURCE res;
    UINT subresource = D3D11CalcSubresource(0, 0, 0);
    lImmediateContext->Map(texture, subresource, D3D11_MAP_READ_WRITE, 0, &res);
    char* data = reinterpret_cast<char*>(res.pData);
    // writes data to file
    WriteFrameToCaptureFile(data, 0);
    // Release the mapping once the data has been consumed
    lImmediateContext->Unmap(texture, subresource);
}
bool CaptureSingleFrame()
{
    HRESULT hr(E_FAIL);
    ComPtr<IDXGIResource> lDesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO lFrameInfo;
    ID3D11Texture2D* currTexture;

    hr = lDeskDupl->AcquireNextFrame(
        999,
        &lFrameInfo,
        &lDesktopResource);
    if (FAILED(hr)) {
        LOG(INFO) << "Failed to acquire new frame";
        return false;
    }

    if (lFrameInfo.LastPresentTime.HighPart == 0) {
        // not interested in just mouse updates, which can happen much faster
        // than 60fps if you really shake the mouse
        hr = lDeskDupl->ReleaseFrame();
        return false;
    }

    int accum_frames = lFrameInfo.AccumulatedFrames;
    if (accum_frames > 1 && current_frame != 1) {
        // TOO MANY OF THESE is the problem
        // especially after having to wait >17ms in AcquireNextFrame()
    }

    // QI for ID3D11Texture2D
    hr = lDesktopResource.As(&lAcquiredDesktopImage);

    // Copy image into a newly created CPU access texture
    hr = lDevice->CreateTexture2D(&desc, NULL, &currTexture);
    if (FAILED(hr))
        return false;
    if (currTexture == nullptr)
        return false;
    lImmediateContext->CopyResource(currTexture, lAcquiredDesktopImage.Get());

    writer_thread->Schedule(
        FROM_HERE, [this, currTexture]() {
            WriteFrameToCaptureFile(currTexture);
        });
    pending_write_counts_++;

    hr = lDeskDupl->ReleaseFrame();
    return true;
}
EDIT: According to my measurements, you must call AcquireNextFrame() about 10 ms before the frame actually appears, or Windows will fail to acquire it and give you the next one instead. Every time my recording program takes more than 7 ms to wrap around (from acquiring frame i until calling AcquireNextFrame() for frame i+1), frame i+1 is missed.
EDIT 2: Here's a screenshot from GPUView showing what I'm talking about. The first 6 frames process in no time, then the 7th frame takes 119 ms. The long rectangle beside "capture_to_argb.exe" corresponds to me being stuck inside AcquireNextFrame(). If you look up at the hardware queue, you can see it cleanly rendering at 60 fps, even while I'm stuck in AcquireNextFrame(). At least this is my interpretation (I have no idea what I'm doing).
"Current Display Mode: 3840 x 2160 (32 bit) (60hz)" refers to display refresh rate, that is how many frames can be passed to display per second. However the rate at which new frames are rendered is typically much lower. You can inspect this rate using PresentMon or similar utilities. When I don't move the mouse it reports me something like this:
As you can see when nothing happens Windows presents new frame only twice per second or even slower. However this is typically really good for video encoding because even if you are recording video at 60 fps and AcquireNextFrame reports that no new frame is available then it means that current frame is exactly the same as previous.
Doing a blocking wait before next call of AcquireNextFrame you are missing the actual frames. Desktop Duplication API logic suggests that you attempt to acquire next frame immediately if you expect a decent frame rate. Your sleeping call effectively relinquishes the available remainder of execution timeout without hard promise that you get a new slice in scheduled interval of time.
You have to poll at maximal frame rate. Do not sleep (even with zero sleep time) and request next frame immediately. You will have the option to drop the frames that come too early. Desktop Duplication API is designed in a way that getting extra frames might be not too expensive of you identify them early and stop their processing.
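For illustration only, a minimal sketch of the poll-immediately approach, reusing the lDeskDupl object from the question; the capturing flag and the keep/drop criterion are placeholders, not part of the original code:

// Poll with a zero timeout: never block waiting for the compositor.
while (capturing) {
    ComPtr<IDXGIResource> resource;
    DXGI_OUTDUPL_FRAME_INFO info = {};

    HRESULT hr = lDeskDupl->AcquireNextFrame(0, &info, &resource);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT) {
        // Nothing new yet; try again immediately.
        continue;
    }
    if (FAILED(hr)) {
        break;  // device lost, access lost, etc.
    }

    // Cheap early-out: drop frames we don't need (e.g. mouse-only updates,
    // or frames arriving faster than the target output rate).
    bool keep = info.LastPresentTime.QuadPart != 0 /* && due for output */;
    if (keep) {
        // ...copy the texture and hand it off for encoding...
    }

    lDeskDupl->ReleaseFrame();
}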
If you still prefer to sleep between the frames, you might want to read the accuracy remark:
To increase the accuracy of the sleep interval, call the timeGetDevCaps function to determine the supported minimum timer resolution and the timeBeginPeriod function to set the timer resolution to its minimum. Use caution when calling timeBeginPeriod, as frequent calls can significantly affect the system clock, system power usage, and the scheduler. If you call timeBeginPeriod, call it one time early in the application and be sure to call the timeEndPeriod function at the very end of the application.
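As a rough illustration of that remark (not a requirement), the timer-resolution calls can bracket the whole capture run; winmm.lib must be linked:

#include <windows.h>
#include <timeapi.h>  // timeGetDevCaps / timeBeginPeriod / timeEndPeriod, link winmm.lib

int main()
{
    // Request the finest timer resolution the system supports, once, up front.
    TIMECAPS tc = {};
    timeGetDevCaps(&tc, sizeof(tc));
    timeBeginPeriod(tc.wPeriodMin);   // often 1 ms

    // ... run the capture loop; Sleep() now wakes with roughly that granularity ...

    // Restore the default resolution at the very end of the application.
    timeEndPeriod(tc.wPeriodMin);
    return 0;
}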
As others have mentioned, the 60Hz refresh rate only indicates the frequency with which the display may change. It doesn't actually mean that it will change that frequently. AcquireNextFrame will only return a frame when what is being displayed on the duplicated output has changed.
My recommendation is to ...
Create a Timer Queue timer with the desired video frame interval
Create a compatible resource in which to buffer the desktop bitmap
When the timer goes off, call AcquireNextFrame with a zero timeout
If there has been a change, copy the returned resource to your buffer and release it
Send the buffered frame to the encoder or whatever further processing
This will yield a sequence of frames at the desired rate. If the display hasn't changed, you'll have a copy of the previous frame to use to maintain your frame rate.
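A minimal sketch of that recipe under stated assumptions: a 60 fps period, the duplication objects from the question, lDestImage already created as a buffer texture, and hypothetical OnVideoFrameTick/StartCapture names; error handling is omitted:

// Fires every ~16 ms on a thread-pool thread; grabs whatever is on screen now.
VOID CALLBACK OnVideoFrameTick(PVOID /*param*/, BOOLEAN /*timerFired*/)
{
    ComPtr<IDXGIResource> resource;
    DXGI_OUTDUPL_FRAME_INFO info = {};

    // Zero timeout: do not wait. DXGI_ERROR_WAIT_TIMEOUT means "no change",
    // so this tick simply reuses the previously buffered frame.
    HRESULT hr = lDeskDupl->AcquireNextFrame(0, &info, &resource);
    if (SUCCEEDED(hr)) {
        ComPtr<ID3D11Texture2D> tex;
        resource.As(&tex);
        lImmediateContext->CopyResource(lDestImage.Get(), tex.Get());  // buffer it
        lDeskDupl->ReleaseFrame();
    }

    // Hand lDestImage (the buffered desktop bitmap) to the encoder here.
}

void StartCapture()
{
    HANDLE timer = nullptr;
    // Timer-queue timer at the desired video frame interval (1000/60 ~ 16 ms).
    CreateTimerQueueTimer(&timer, nullptr, OnVideoFrameTick,
                          nullptr, 0, 1000 / 60, WT_EXECUTEDEFAULT);
}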

SDL2 Smooth texture(sprite) animation between points in time function

Currently I'm trying to develop a smooth animation effect via a hardware-accelerated technique (DirectX or OpenGL).
My current goal is very simple: I would like to move a texture from point A to point B in a given duration.
This is the classic way to animate objects.
I read a lot about Robert Penner's interpolations, and for this purpose I would like to animate my texture with the simplest linear interpolation, as described here:
http://upshots.org/actionscript/jsas-understanding-easing
Everything works, except that my animation is not smooth; it is jerky.
The reason is not frame dropping, it is some double-to-int rounding aspect.
I prepared a very short sample in C++ with the SDL2 library to show this behavior:
#include "SDL.h"
//my animation linear interpol function
double GetPos(double started, double begin, double end, double duration)
{
return (end - begin) * (double)(SDL_GetTicks() - started) / duration + begin;
}
int main(int argc, char* argv[])
{
//init SDL system
SDL_Init(SDL_INIT_EVERYTHING);
//create windows
SDL_Window* wnd = SDL_CreateWindow("My Window", 0, 0, 1920, 1080, SDL_WINDOW_SHOWN | SDL_WINDOW_BORDERLESS);
//create renderer in my case this is D3D9 renderer, but this behavior is the same with D3D11 and OPENGL
SDL_Renderer* renderer = SDL_CreateRenderer(wnd, 0, SDL_RENDERER_ACCELERATED | SDL_RENDERER_TARGETTEXTURE | SDL_RENDERER_PRESENTVSYNC);
//load image and create texture
SDL_Surface* surf = SDL_LoadBMP("sample_path_to_bmp_file");
SDL_Texture* tex = SDL_CreateTextureFromSurface(renderer, surf);
//get rid of surface we dont need surface anymore
SDL_FreeSurface(surf);
SDL_Event event;
int action = 0;
bool done = false;
//animation time start and duration
double time_start = (double) SDL_GetTicks();
double duration = 15000;
//loop render
while (!done)
{
action = 0;
while (SDL_PollEvent(&event))
{
switch (event.type)
{
case SDL_QUIT:
done = 1;
break;
case SDL_KEYDOWN:
action = event.key.keysym.sym;
break;
}
}
switch (action)
{
case SDLK_q:
done = 1;
default:
break;
}
//clear screen
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderClear(renderer);
//calculate new position
double myX = GetPos(time_start, 10, 1000, duration);
SDL_Rect r;
//assaign position
r.x = (int) round(myX);
r.y = 10;
r.w = 600;
r.h = 400;
//render to rendertarget
SDL_RenderCopy(renderer, tex, 0, &r);
//present
SDL_RenderPresent(renderer);
}
//cleanup
SDL_DestroyTexture(tex);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(wnd);
SDL_Quit();
return 0;
}
I suppose the jerky animation effect is related to my GetPos(...) function, which works with double values, while I'm rendering via int values. But I can't render to the screen in doubles, because I obviously can't draw at 1.2 px.
My question is:
Do you know any technique, or do you have any advice, on how to make that kind of animation (from, to, duration) smooth, without the jerky effect?
I'm sure it's definitely possible, because frameworks like WPF, WIN_RT, Cocos2DX and Android Java all support that kind of animation, and their texture/object animation is smooth.
Thanks in advance.
EDIT
As per #genpfault's request in the comments, I'm adding frame-by-frame x position values, as int and double:
rx: 12 myX: 11.782
rx: 13 myX: 13.036
rx: 13 myX: 13.366
rx: 14 myX: 14.422
rx: 16 myX: 15.544
rx: 17 myX: 16.666
rx: 18 myX: 17.722
rx: 19 myX: 18.91
rx: 20 myX: 19.966
rx: 21 myX: 21.154
rx: 22 myX: 22.21
rx: 23 myX: 23.266
rx: 24 myX: 24.388
rx: 25 myX: 25.444
rx: 27 myX: 26.632
rx: 28 myX: 27.754
rx: 29 myX: 28.81
rx: 30 myX: 29.866
rx: 31 myX: 30.988
rx: 32 myX: 32.044
rx: 33 myX: 33.166
rx: 34 myX: 34.288
rx: 35 myX: 35.344
rx: 36 myX: 36.466
rx: 38 myX: 37.588
rx: 39 myX: 38.644
Final update/solution:
I changed the question title from DirectX/OpenGL to SDL2 because the issue is related to SDL2 itself.
I marked Rafael Bastos's answer as correct because he pushed me in the right direction: the issue is caused by the SDL render pipeline, which is based on int-precision values.
As we can see in the log above, the stuttering is caused by irregular X values which are rounded from floating point. To solve the issue I had to change the SDL2 render pipeline to use floats instead of integers.
Interestingly, SDL2 internally uses floats for the opengl, opengles2, d3d9 and d3d11 renderers, but the public SDL_RenderCopy/SDL_RenderCopyEx API is based on SDL_Rect and int values; this causes jerky animation effects when the animation is driven by an interpolation function.
What exactly I changed in SDL2 is far beyond the scope of Stack Overflow, but the following are the main points of what should be done to avoid animation stuttering:
I moved the SDL_FRect and SDL_FPoint structs from the internal sys_render API to the render.h API to make them public.
I extended the current SDL methods in rect.h/rect.c to support SDL_FRect and SDL_FPoint, such as SDL_HasIntersectionF(...), SDL_IsEmptyF(...) and SDL_IntersectRectF(...).
I added a new method GetRenderViewPortF, based on GetRenderViewPort, to support float precision.
I added 2 new methods, SDL_RenderCopyF and SDL_RenderCopyFEx, to avoid any rounding of values and pass real floats to the internal renderers.
All public functions must be reflected in the dyn_proc SDL API; it requires some SDL architecture knowledge to do that.
To avoid SDL_GetTicks() and any other timing-precision issues, I decided to change my interpolation step from time dependency to frame dependency. For example, to calculate the animation duration I am not using:
float start = SDL_GetTicks();
float duration = some_float_value_in_milliseconds;
I replaced that with:
float step = 0;
float duration = some_float_value_in_milliseconds / MonitorRefreshRate
and now I'm incrementing step++ after each frame render.
Of course this has a side effect: if my engine drops some frames, then my animation time is no longer equal to the duration, because it is frame dependent.
Of course these duration calculations are only valid when VSYNC is ON; they are useless when vblank is off.
And now I have really smooth, jerk-free animations, with timeline functions.
#genpfault and #RafaelBastos, thanks for your time and for your advice.
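For reference, SDL 2.0.10 and later expose float-precision rendering in the public API (SDL_FRect, SDL_RenderCopyF), so on a current SDL2 a similar result can be achieved without patching SDL. A minimal sketch of the render step, assuming the renderer, tex, GetPos and timing variables from the sample above:

// With SDL >= 2.0.10 the destination rectangle can be given in floats,
// so the interpolated position is passed through without rounding to int.
double myX = GetPos(time_start, 10, 1000, duration);

SDL_FRect r;
r.x = (float) myX;   // sub-pixel position, no (int) round() step
r.y = 10.0f;
r.w = 600.0f;
r.h = 400.0f;

SDL_RenderCopyF(renderer, tex, NULL, &r);
SDL_RenderPresent(renderer);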
It seems you need to subtract started from SDL_GetTicks().
Something like this:
(end - begin) * ((double)SDL_GetTicks() - started) / duration + begin
(end - begin) gives you the total movement.
(SDL_GetTicks() - started) / duration gives you the interpolation ratio; multiplied by the total movement, it gives you the amount interpolated, which needs to be added to the begin portion so you get the absolute interpolated position.
If that's not it, then it is probably a rounding issue, but if you can only render with int precision, then I think you need to bypass SDL and render using plain OpenGL or DirectX calls, which allow floating-point precision.

Inter relationship between SDL_CreateWindow size/ screen resolution / SDL_Logical Size

My Android device supports a resolution of 480*800, i.e. (width*height).
I am trying to display 1280*720 frames that are received from ffmpeg.
Pertaining to SDL:
Window size is created as: 640*480 (width*height)
Renderer size: 640*480 (width*height)
Logical size set to: 640*480 [SDL_RenderSetLogicalSize]
Question:
How is a 1280*720 [HD frame] actually related to these three components? What I understood is that SDL_RenderSetLogicalSize will try to fit (1280*720) into (640*480).
Changing the renderer size and window size does not make any difference, so what is important about the size of the window/renderer versus the logical size?
SDL_RenderSetLogicalSize(Renderer, 1280, 720);
That will display a resolution of 1280 by 720 on any resolution while maintaining the proper aspect ratio via letter boxing.
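A minimal sketch of how the three sizes interact, assuming an HD frame texture coming from ffmpeg (the texture creation and upload details are placeholders):

// Window/renderer at the device (or any) size; logical size at the frame size.
SDL_Window*   window   = SDL_CreateWindow("player", 0, 0, 640, 480, SDL_WINDOW_SHOWN);
SDL_Renderer* renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

// Declare a 1280x720 coordinate system; SDL scales (and letterboxes) it
// to whatever the actual window/renderer size happens to be.
SDL_RenderSetLogicalSize(renderer, 1280, 720);

// Hypothetical texture holding the decoded 1280x720 frame.
SDL_Texture* frame = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12,
                                       SDL_TEXTUREACCESS_STREAMING, 1280, 720);

// Drawing the full frame in logical coordinates fills the letterboxed area.
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, frame, NULL, NULL);
SDL_RenderPresent(renderer);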

Issue controlling game frame rate C++

I'm programming a game in C++ and I'm having trouble getting the game to update only 60 times per second. The code I have written looks like it should work, but the frame rate actually ends up at 44 frames per second instead of 60.
const int FRAMES_PER_SECOND = 60;
const int FRAME_CONTROL = (1000 / FRAMES_PER_SECOND);

double lastFrameTime;
double currentFrameTime;

void GameLoop()
{
    currentFrameTime = GetTickCount();
    if ((currentFrameTime - lastFrameTime) >= FRAME_CONTROL)
    {
        lastFrameTime = currentFrameTime;
        // Update Game.
    }
}
So yeah, it should be 60 frames but it actually runs at 44. And the class I'm using to count the frame rate works perfectly in other programs which already have a capped frame rate.
Any ideas what the problem is?
It's due to the resolution of GetTickCount. That function only gives 10-16 ms resolution (see Microsoft's GetTickCount() documentation).
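One common alternative, sketched here with std::chrono::steady_clock (much finer resolution than GetTickCount's 10-16 ms), keeps the same structure as the loop above:

#include <chrono>

using Clock = std::chrono::steady_clock;

// 1/60 s expressed in microseconds to avoid the 16 ms integer truncation.
const std::chrono::microseconds kFramePeriod(1000000 / 60);
Clock::time_point lastFrame = Clock::now();

void GameLoop()
{
    Clock::time_point now = Clock::now();
    if (now - lastFrame >= kFramePeriod)
    {
        lastFrame = now;
        // Update Game.
    }
}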

OpenCV HOGDescriptor Errors

I am running OpenCV 2.4.3 on Mac OS X 10.8.
I am trying to use the cv::HOGDescriptor to get pedestrians in a video sequence.
This is the code I am using to do the detection and paint a bounding box.
cv::VideoCapture input("file.avi");
assert(input.isOpened());

cv::HOGDescriptor body;
assert(body.load("hogcascade_pedestrians.xml"));

cv::Mat frame, gray;
cv::namedWindow("video");
while (input.read(frame)) {
    std::vector<cv::Rect> rects;
    cv::cvtColor(frame, gray, cv::COLOR_RGB2GRAY);
    cv::equalizeHist(gray, gray);
    body.detectMultiScale(gray, rects);
    for (unsigned int i = 0; i < rects.size(); i++) {
        cv::rectangle(frame, cv::Point(rects[i].x, rects[i].y),
                      cv::Point(rects[i].x + rects[i].width, rects[i].y + rects[i].height),
                      cv::Scalar(255, 0, 255));
    }
    cv::imshow("video", frame);
    cv::waitKey(1); // pump HighGUI events so the frame is actually displayed
}
However, when execution reaches the line body.detectMultiScale(gray, rects);, I get an error and the whole application crashes:
libc++abi.dylib: terminate called throwing an exception
[1] 92156 abort ../bin/DetectPedestrians
What is going wrong? I cannot seem to get any new information from the gdb or lldb output. I am compiling the code with a CMake build, so I guess this isn't a problem with the linking.
Here is a stack trace from the thread that crashed -
Thread 0 Crashed:: Dispatch queue: com.apple.root.default-priority
0 libsystem_kernel.dylib 0x00007fff8c001212 __pthread_kill + 10
1 libsystem_c.dylib 0x00007fff8e7afaf4 pthread_kill + 90
2 libsystem_c.dylib 0x00007fff8e7f3dce abort + 143
3 libc++abi.dylib 0x00007fff94096a17 abort_message + 257
4 libc++abi.dylib 0x00007fff940943c6 default_terminate() + 28
5 libobjc.A.dylib 0x00007fff8e11f887 _objc_terminate() + 111
6 libc++.1.dylib 0x00007fff96b0b8fe std::terminate() + 20
7 libobjc.A.dylib 0x00007fff8e11f5de objc_terminate + 9
8 libdispatch.dylib 0x00007fff8c4ecfa0 _dispatch_client_callout2 + 28
9 libdispatch.dylib 0x00007fff8c4ed686 _dispatch_apply_serial + 28
10 libdispatch.dylib 0x00007fff8c4e80b6 _dispatch_client_callout + 8
11 libdispatch.dylib 0x00007fff8c4ebae8 _dispatch_sync_f_invoke + 39
12 libopencv_core.2.4.3.dylib 0x0000000101d5d900 cv::parallel_for_(cv::Range const&, cv::ParallelLoopBody const&, double) + 116
13 libopencv_objdetect.2.4.3.dylib 0x000000010257fa21 cv::HOGDescriptor::detectMultiScale(cv::Mat const&, std::vector<cv::Rect_<int>, std::allocator<cv::Rect_<int> > >&, std::vector<double, std::allocator<double> >&, double, cv::Size_<int>, cv::Size_<int>, double, double, bool) const + 559
14 libopencv_objdetect.2.4.3.dylib 0x000000010257fdc2 cv::HOGDescriptor::detectMultiScale(cv::Mat const&, std::vector<cv::Rect_<int>, std::allocator<cv::Rect_<int> > >&, double, cv::Size_<int>, cv::Size_<int>, double, double, bool) const + 80
15 DetectPedestrians 0x0000000101a7886c main + 2572 (detect.cpp:41)
16 libdyld.dylib 0x00007fff8d89f7e1 start + 1
On a Linux system, the same code gives me an error saying -
OpenCV Error: Assertion failed (dsize.area() || (inv_scale_x > 0 && inv_scale_y > 0)) in resize, file /home/subho/OpenCV/OpenCV-2.4.3/modules/imgproc/src/imgwarp.cpp, line 1726
terminate called after throwing an instance of 'tbb::captured_exception'
what(): /home/subho/OpenCV/OpenCV-2.4.3/modules/imgproc/src/imgwarp.cpp:1726: error: (-215) dsize.area() || (inv_scale_x > 0 && inv_scale_y > 0) in function resize
I haven't been able to track down exactly why this error occurs. However, it has something to do with how the XML HOG cascade file is loaded into memory.
I have reported this as a bug in the OpenCV issue tracker and am waiting to hear back from the developers.
Temporarily, my workaround for this problem is to set the SVM parameters directly in the cv::HOGDescriptor class, like so:
cv::HOGDescriptor human;
human.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());
This seems to work with both the Mac OS X and the Linux versions of OpenCV 2.4.3.
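For completeness, a minimal sketch of the workaround applied to the loop from the question (the video path is a placeholder); detectMultiScale then runs the built-in people detector instead of the cascade file:

cv::VideoCapture input("file.avi");
cv::HOGDescriptor hog;
hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

cv::Mat frame, gray;
while (input.read(frame)) {
    std::vector<cv::Rect> people;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    hog.detectMultiScale(gray, people);          // detect with the default SVM
    for (size_t i = 0; i < people.size(); i++)
        cv::rectangle(frame, people[i], cv::Scalar(255, 0, 255));
    cv::imshow("video", frame);
    if (cv::waitKey(30) >= 0) break;             // show frame, allow quitting
}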
I am not an expert in this area, but I think you get the assertion-failed error because of a size mismatch. I would suggest initializing rects with some predefined size and seeing if you get the same error.
Hope this helps.
I have the same problem as yours here: HOGDescriptor Error and Assertion Failed
But try this out!
Change your...
cv::HOGDescriptor body;
to...
cv::CascadeClassifier body;
It works like a charm! It can detect the pedestrians! :)
But there's another problem: this program runs slowly! SO LAGGY! :))