How to reset a frame rate in IQualProp::get_AvgFrameRate? - c++

I get an average frame rate from the EVR renderer using the IQualProp::get_AvgFrameRate method. It worked well, but after calling Pause/Run on the DirectShow graph I get wrong frame-rate values. Here are the approaches I have tried:
Standard solution:
bool Pause()
{
    // pMediaControl is queried from the filter graph elsewhere
    return SUCCEEDED(pMediaControl->Pause());
}
bool Run()
{
    return SUCCEEDED(pMediaControl->Run());
}
After pMediaControl->Run() the renderer reports frame-rate values that are half of what they should be, although they recover within 10-15 seconds.
Via the Stop() method:
bool Pause()
{
    pMediaControl->Stop();
    return SUCCEEDED(pMediaControl->Pause());
}
bool Run()
{
    return SUCCEEDED(pMediaControl->Run());
}
In Pause() I add pMediaControl->Stop() before pausing. After Run() I get the correct frame rate, but the renderer freezes for 10-15 seconds.
Using IMediaFilter::Run():
bool Pause()
{
    pMediaControl->Stop();
    return SUCCEEDED(pMediaControl->Pause());
}
bool Run()
{
    // pMediaFilter is the graph's IMediaFilter interface
    return SUCCEEDED(pMediaFilter->Run(1000000)); // start offset: 1,000,000 * 100 ns = 100 ms
}
This gives a good result, with no freezing and no wrong values, but CPU utilization is about twice as high as before the Pause().
Ideas?
I could go back to my old scheme, where I computed the average frame rate by counting frames myself, but I would prefer to use the IQualProp::get_AvgFrameRate method.

I returned to the old scheme. Rewriting the code that computes the average FPS turned out to be easier than reworking three source filters.
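For reference, a minimal sketch of the frame-counting approach mentioned above (FrameRateCounter, OnFrameDelivered and Reset are illustrative names, not existing API): every delivered sample bumps a counter, and the average FPS is recomputed over a sliding one-second window.
#include <windows.h>

// Illustrative helper: call OnFrameDelivered() once per delivered/rendered sample.
class FrameRateCounter
{
public:
    void OnFrameDelivered()
    {
        const DWORD now = GetTickCount(); // millisecond tick count
        ++m_frames;
        if (now - m_windowStart >= 1000)  // recompute roughly once per second
        {
            m_avgFps = m_frames * 1000.0 / (now - m_windowStart);
            m_frames = 0;
            m_windowStart = now;
        }
    }

    double AverageFps() const { return m_avgFps; }

    void Reset() // call on Pause/Run so stale statistics are discarded
    {
        m_frames = 0;
        m_windowStart = GetTickCount();
    }

private:
    DWORD  m_windowStart = GetTickCount();
    DWORD  m_frames = 0;
    double m_avgFps = 0.0;
};
Calling Reset() from the Pause()/Run() handlers sidesteps the stale statistics that IQualProp::get_AvgFrameRate reports right after a state change.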

Related

Making a separate thread for rendering made my code slower

I have a method called Run in which I update and render the game objects.
void Run(olc::PixelGameEngine* pge) noexcept
{
Update(pge);
Render(pge);
}
The frame rate then fluctuated between 300 and 400 FPS in release mode and 200 and 300 FPS in debug mode. I have yet to add loads of game logic, so I thought I would do the rendering in a separate thread; after a quick tutorial I changed it to this.
void Run(olc::PixelGameEngine* pge) noexcept
{
Update(pge);
std::thread renderer(&GameManager::Render, this, pge);
renderer.join();
}
Now the frame rate is around 100-150 FPS in release mode and 60-100 FPS in debug mode.
std::thread creates a thread, and renderer.join() waits until that thread has finished.
So this is basically the same logic as your first example, except that you create and destroy a thread on every iteration of your 'loop'. That is much more work than before, so it is not surprising that the frame rate goes down.
What you can do:
define two functions, one for the update and one for the render thread
introduce a global atomic flag
set it in the update function (indicates the scene has been updated)
clear it in the render function (indicates the changes have been presented)
create a thread for each of them (the runnable being update or render)
if the scene gets updated, set the flag accordingly
the renderer can decide, based on the flag, whether to render the scene, wait until the scene gets updated, or render anyway (and clear the flag)
the updater can decide, based on the flag, whether to wait until the scene has been rendered or update it anyway
Example (C++11):
#include <atomic>
#include <thread>

std::atomic<bool> active{true};
std::atomic<bool> scene_flag{false};

void render(olc::PixelGameEngine* pge) noexcept
{
    while (active) {
        //while (scene_flag == false) //wait for scene update
        renderScene(pge);
        scene_flag = false; //clear flag
    }
}

void update(olc::PixelGameEngine* pge) noexcept
{
    while (active) {
        //while (scene_flag == true) //wait for scene render
        updateScene(pge);
        scene_flag = true; //set flag
    }
}

int main()
{
    std::thread u(update, nullptr);
    std::thread r(render, nullptr);
    /*while (some_condition) ...*/
    active = false;
    u.join();
    r.join();
    return 0;
}
Note: the above code can (and will) update the scene while it is being rendered, which can lead to several problems. Proper synchronization is recommended.
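One possible way to add that synchronization, as a sketch only (updateScene and renderScene remain placeholders, and the flag from above becomes a mutex-protected bool handed off through a condition variable):
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

std::atomic<bool> active{true};
std::mutex scene_mutex;
std::condition_variable scene_cv;
bool scene_dirty = false; // true: scene updated but not yet rendered (guarded by scene_mutex)

void update_loop(olc::PixelGameEngine* pge)
{
    while (active) {
        {
            std::unique_lock<std::mutex> lock(scene_mutex);
            // wait until the previous update has been rendered (or we are shutting down)
            scene_cv.wait(lock, [] { return !scene_dirty || !active; });
            if (!active) break;
            updateScene(pge);   // placeholder for the real update
            scene_dirty = true;
        }
        scene_cv.notify_one();
    }
}

void render_loop(olc::PixelGameEngine* pge)
{
    while (active) {
        {
            std::unique_lock<std::mutex> lock(scene_mutex);
            // wait until there is a fresh update to present (or we are shutting down)
            scene_cv.wait(lock, [] { return scene_dirty || !active; });
            if (!active) break;
            renderScene(pge);   // placeholder for the real render
            scene_dirty = false;
        }
        scene_cv.notify_one();
    }
}
When shutting down, set active to false and call scene_cv.notify_all() so that neither thread stays blocked in wait(). Note that this fully serializes updating and rendering; to actually overlap the two you would double-buffer the scene instead.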

Rendering texture (video) in OpenGL in a blocking way so my received video frame won't get replaced by new one while rendering

I think the GTK specifics of this code are not needed to understand what's happening. The glDraw() function does the OpenGL rendering from frame, a Frame object retrieved from decodedFramesFifo, which is a thread-safe deque.
The .h file
class OpenGLArea2 : public Gtk::Window
{
public:
OpenGLArea2();
~OpenGLArea2() override;
public:
Gtk::Box m_VBox{Gtk::ORIENTATION_VERTICAL, false};
Gtk::GLArea glArea;
virtual bool render(const Glib::RefPtr<Gdk::GLContext> &context) { return false; }
};
Then the cpp file:
OpenGLArea2::OpenGLArea2()
{
set_default_size(640, 360);
add(m_VBox);
glArea.set_hexpand(true);
glArea.set_vexpand(true);
glArea.set_auto_render(true);
m_VBox.add(glArea);
glArea.signal_render().connect(sigc::mem_fun(*this, &OpenGLArea2::render), false);
glArea.show();
m_VBox.show();
}
bool OpenGLArea2::render(const Glib::RefPtr<Gdk::GLContext> &context)
{
    try
    {
        glArea.throw_if_error();
        glDraw();
        glFlush();
    }
    catch (const Gdk::GLError &gle)
    {
        std::cerr << "An error occurred in the render callback of the GLArea" << std::endl;
        return false;
    }
    return true;
}
void OpenGLArea2::run()
{
while (true)
{
//Important: if decodedFramesFifo does not have any data, it blocks until it has
Frame frame = decodedFramesFifo->pop_front();
this->frame = std::move(frame);
if (!firstFrameReceived)
firstFrameReceived = true;
queue_draw();
}
}
Here's a sketch of what glDraw() does:
void OpenGLArea2::glDraw()
{
//Creates shader programs
//Generate buffers and pixel buffer object to render from
glBufferData(GL_PIXEL_UNPACK_BUFFER, textureSize, frame.buffer(j), GL_STREAM_DRAW);
//calls glTexSubImage2D to do rendering
}
The problem is that I'm sometimes getting segmentation faults. I tried debugging with gdb and valgrind, but gdb won't show the call stack, only the place where the error occurred (somewhere inside memmove), and valgrind slows the application down to 1 fps, so it simply never hits the segmentation fault, presumably because it then has plenty of time to render the data before new data arrives.
I suspect that queue_draw() isn't blocking: it just marks the window for rendering and returns, and the window then calls render(). If render() finishes before a new frame arrives in the while loop, no data race occurs. But if render() takes a little longer, a new frame arrives and is written over the old frame that was still being rendered.
So the questions are: how do I render in a blocking way, that is, instead of calling queue_draw(), call glDraw() directly and wait for it to return? And can I trust that glBufferData() and glTexSubImage2D() both consume the frame data in a blocking way, and do not simply mark it to be sent to the GPU at a later time?
PS: I found void Gtk::GLArea::queue_render() and void Gtk::GLArea::set_auto_render(bool auto_render = true), but I think queue_render() also returns immediately.
UPDATE:
Some people said I should use glFinish(). The problem is that, in the renderer loop, queue_draw() returns immediately, so the thread is not blocked at all. How can I render without queue_draw()?
UPDATE:
I added, at the beginning of the while loop:
std::unique_lock<std::mutex> lock{mutex};
and at the end of the while loop:
conditionVariable.wait(lock);
And now my render function is like this:
glArea.throw_if_error();
glDraw();
glFinish();
conditionVariable.notify_one();
The condition variable makes the while loop wait until the rendering finishes, so the received frame can safely be destroyed when it goes out of scope. But I'm still getting segfaults. I added logging to some lines and found out that the segfault occurs while waiting. What could be the reason?
I think this is the most problematic part:
Frame frame = decodedFramesFifo->pop_front();
this->frame = std::move(frame);
pop_front blocks and waits for a frame, so if you get a second frame before the first one has been rendered, the first frame is going to be destroyed by:
this->frame = std::move(frame); // this->frame now contains second frame, first frame is destroyed
You should lock access to this->frame:
void ...::run()
{
while (true)
{
Frame frame = decodedFramesFifo->pop_front();
std::unique_lock<std::mutex> lk{mutex};
this->frame = std::move(frame);
lk.unlock();
if (!firstFrameReceived)
firstFrameReceived = true;
queue_draw();
}
}
void ...::render()
{
std::unique_lock<std::mutex> lk{mutex};
// draw 'this->frame'
}
Optionally, if you can std::move the frame out and you only render each frame at most once, you can:
void ...::render()
{
std::unique_lock<std::mutex> lk{mutex};
Frame frame = std::move(this->frame);
lk.unlock();
// draw 'frame'
}

MFC app "Not Responding", although computation is already in separate thread

I didn't find a sufficient answer for this; any suggestion is welcome.
I have a simple MFC single-document app; upon opening a file, a lengthy computation takes place in a separate thread:
BOOL CrhMonkeyDoc::OnOpenDocument(LPCTSTR lpszPathName)
{
if (!CDocument::OnOpenDocument(lpszPathName))
return FALSE;
CMainFrame* pMainFrame = (CMainFrame*)AfxGetMainWnd();
// Start working thread to process the file
m_rhFile.StartParseFile(lpszPathName);
//periodically check progress and update
int progress, lines;
while ((progress = m_rhFile.GetProgress()) < 1000) {
lines = m_rhFile.GetNumLines();
CString strProgress;
strProgress.Format(_T("%d lines, %d percent complete"), lines, progress);
pMainFrame->SetStatusBarText(strProgress);
Sleep(1000);
}
UpdateAllViews(NULL);
return TRUE;
}
The thread is started like this:
UINT ParseFileThread(LPVOID Param)
{
RhFile* rhFile = (RhFile*)Param;
rhFile->ParseFile();
return TRUE;
}
int RhFile::StartParseFile(LPCTSTR lpszPathName)
{
m_file.open(lpszPathName);
AfxBeginThread(ParseFileThread, this);
return 0;
}
Initially it does work, updating the status bar periodically, but after 10-15 seconds the updates stop and "Not Responding" appears in the app title.
I have tried starting the thread at a lower priority, and also added periodic SwitchToThread() calls (which always return 0) and Sleep(50) inside ParseFile(), but it doesn't help.
I guess I am doing something wrong, but I can't figure out what.
Thanks for reading this!
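The likely culprit is that the while loop in OnOpenDocument() runs on the UI thread and never returns to the message pump, so Windows reports the app as not responding. A minimal sketch of the usual alternative, under the assumption that the worker reports its own progress (WM_APP_PROGRESS, ReportProgress, m_hwndNotify and OnAppProgress are illustrative names, not part of the original code):
// Illustrative custom message: WPARAM = percent complete, LPARAM = line count.
#define WM_APP_PROGRESS (WM_APP + 1)

BOOL CrhMonkeyDoc::OnOpenDocument(LPCTSTR lpszPathName)
{
    if (!CDocument::OnOpenDocument(lpszPathName))
        return FALSE;
    m_rhFile.StartParseFile(lpszPathName);  // start the worker and return immediately
    return TRUE;                            // the UI thread keeps pumping messages
}

// Called from the worker (e.g. inside RhFile::ParseFile) every N lines.
// m_hwndNotify is an HWND captured on the UI thread before the worker starts.
void RhFile::ReportProgress(int progress, int lines)
{
    ::PostMessage(m_hwndNotify, WM_APP_PROGRESS, (WPARAM)progress, (LPARAM)lines);
}

// CMainFrame message map entry: ON_MESSAGE(WM_APP_PROGRESS, OnAppProgress)
LRESULT CMainFrame::OnAppProgress(WPARAM progress, LPARAM lines)
{
    CString strProgress;
    strProgress.Format(_T("%d lines, %d percent complete"), (int)lines, (int)progress);
    SetStatusBarText(strProgress);
    return 0;
}
UpdateAllViews(NULL) can then be triggered by a similar "finished" message instead of being called from OnOpenDocument().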

Improving QTimer Accuracy

I am using Qt 4.7 and have a timer that executes a function 2 or 4 times a second, depending on what type of sound is required.
void Keypad::SoundRequest(bool allowBeeps, int beepType)
{
// Clear any Previous active Timer
if (sound1Timer->isActive() == true)
{
sound1Timer->stop();
}
else if(sound2Timer->isActive() == true)
{
sound2Timer->stop();
}
// Zero out main
(void) memset(&soundmain_t, (char) 0, sizeof(soundmain_t));
// Zero out sound1
(void) memset(&sound1_t, (char) 0, sizeof(sound1_t));
// Zero out the sound2
(void) memset(&sound2_t, (char) 0, sizeof(sound2_t));
if (allowBeeps == true)
{
if(beepType == OSBEEP)
{
sound1Timer->start(500); // 250mS On / 250mS off called every 500mS = 2HZ
}
else if (beepType == DOORBEEP)
{
sound2Timer->start(250); // 125mS On / 125mS off called every 250mS = 4HZ
}
}
else if (allowBeeps == false)
{
//Shut the Beeper Down
if (sound1Timer->isActive() == true)
{
sound1Timer->stop();
}
else if(sound2Timer->isActive() == true)
{
sound2Timer->stop();
}
SOUND_BLAST(0, &soundmain_t);
}
}
Constructor:
sound1Timer = new QTimer(this);
sound2Timer = new QTimer(this);
connect(sound1Timer, SIGNAL(timeout()), this, SLOT(sound1Handler()));
connect(sound2Timer, SIGNAL(timeout()), this, SLOT(sound2Handler()));
SLOTS:
void Keypad::sound1Handler()
{
// Sound a 250mS chirp
SOUND_BLAST(0, &sound1_t);
}
// Public SLOT, Called by sound2Timer()
// Sounds a single 125mS Beep
void Keypad::sound2Handler()
{
// Emit a Single 125mS chirp
SOUND_BLAST(0, &sound2_t);
}
The timer is mostly accurate, but it is not exactly 2 Hz or 4 Hz all the time. To improve the accuracy I was thinking of using a faster timer of, say, 25 ms, letting it run, and sounding the beep every time it has accumulated 250 ms or 125 ms. However, I am not sure this would make it more accurate.
Should I measure the execution time with QElapsedTimer and subtract that overhead from the sound1Timer and sound2Timer intervals? Is there a better way to do this?
The accuracy of the timer is limited by the operating system. The Qt library uses operating system timers "under the covers".
If you need high accuracy, I would use a timer routine that reads a hardware-based clock to establish the timing of events. You'll need to dig into your operating system documentation for the details.
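One way to keep the long-term rate close to 2 Hz or 4 Hz without going below Qt is sketched here, assuming QElapsedTimer (available since Qt 4.7) is acceptable; DriftFreeBeeper and its members are illustrative names, not existing API. Each beep is scheduled as a single-shot timer whose interval is corrected against an absolute schedule, so errors do not accumulate.
#include <QElapsedTimer>
#include <QObject>
#include <QTimer>

// Illustrative helper: emits tick() at a nominal period, correcting for accumulated drift.
class DriftFreeBeeper : public QObject
{
    Q_OBJECT
public:
    explicit DriftFreeBeeper(int periodMs, QObject *parent = 0)
        : QObject(parent), m_periodMs(periodMs), m_ticks(0)
    {
        m_clock.start();
        scheduleNext();
    }

signals:
    void tick(); // connect to sound1Handler() / sound2Handler()

private slots:
    void onTimeout()
    {
        ++m_ticks;
        emit tick();
        scheduleNext();
    }

private:
    void scheduleNext()
    {
        // The absolute time at which the next tick should occur, measured from start.
        const qint64 target = (m_ticks + 1) * static_cast<qint64>(m_periodMs);
        qint64 delay = target - m_clock.elapsed();
        if (delay < 0)
            delay = 0; // we are late: fire as soon as possible
        QTimer::singleShot(static_cast<int>(delay), this, SLOT(onTimeout()));
    }

    QElapsedTimer m_clock;
    int m_periodMs;
    qint64 m_ticks;
};
Individual ticks still jitter by whatever the OS timer resolution allows, but the average rate stays at the nominal 2 Hz or 4 Hz; connect tick() to sound1Handler() or sound2Handler() as needed.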

glutTimerFunc() callback function isn't executed if time threshold is under a certain number of msecs

I am working on an application which requires an engine to be executed at the shortest possible interval by a GLUT GUI, using glutTimerFunc():
void SetGLUTTimer(void);
void callback(int value)
{
Engine* pEngine;
pEngine = (Engine*) value;
pEngine->Process();
pEngine->SetGLUTTimer();
}
void Engine::SetGLUTTimer(void)
{
glutTimerFunc(50, callback, (int)this);
}
bool Engine::Run(void)
{
if (m_pViewer != NULL)
m_pViewer->Run();
else
return false;
return true;
}
If I set the time threshold to 1000 msecs or more, the engine callback is called regularly, while any interval below one second (like in the example above) causes the GUI to run indefinitely without ever executing the engine's Process() function.
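For reference, a minimal self-contained sketch of the usual glutTimerFunc() re-arming pattern (Engine here is a stub; g_engine and the 50 ms period are illustrative). It also avoids passing the this pointer through the int value parameter, which truncates pointers on 64-bit builds:
#include <GL/glut.h>

// Stub engine for illustration.
struct Engine
{
    void Process() { /* one simulation step */ }
};

static Engine* g_engine = 0;           // set once before glutMainLoop()
static const unsigned PERIOD_MS = 50;  // desired callback period

static void onTimer(int /*value*/)
{
    g_engine->Process();
    glutPostRedisplay();                  // ask GLUT to redraw
    glutTimerFunc(PERIOD_MS, onTimer, 0); // re-arm for the next tick
}

static void onDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene ...
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    static Engine engine;
    g_engine = &engine;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("engine");
    glutDisplayFunc(onDisplay);
    glutTimerFunc(PERIOD_MS, onTimer, 0); // first arm
    glutMainLoop();                       // never returns
    return 0;
}
If callbacks still never fire at short intervals, it is also worth checking that nothing called from the callback path, such as m_pViewer->Run(), blocks, since glutTimerFunc() callbacks only run while glutMainLoop() is free to process events.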