MSER on video tracking - C++

I have a timing problem. I have written a Qt GUI for image processing, and for this use case I want to implement blob detectors for video processing and object tracking. In principle it looks good: with the GUI running, grabbing, the MSER operation and displaying together take just 0.07 to 0.08 seconds, which would allow a nice frame rate of over 10 fps.
For that purpose I use Qt 4 with C++ on SUSE 12.3, OpenCV 2.4.3 and a laptop webcam. My problem is that after a short while my program hangs.
Looking at my system monitor I can see that CPU usage has reached 100% and that a single run keeps the processor busy for a long time (even without the GUI). I don't understand what is going wrong. Does anybody have experience with this?
Thanks in advance!
Some Code snippets:
MSER initialisation from the GUI:
MSER FtMSERVid( MSERDelta, MSERMinArea, MSERMaxArea,MSERMaxVariation ,MSERMinDiversity);
Video processing function:
double startTime = clock();
camDev.read(vidImg);
if(vidImg.empty() == true)
{
newLineInText(tr("No data from device"));
timer->stop();
ui->pbPlay->setText(tr(">"));
return;
}
MSERPointsVid.clear();
if(vidImg.channels() > 1)
cvtColor(vidImg, vidImg,CV_BGR2GRAY);
FtMSERVid(vidImg, MSERPointsVid);
Mat showMat = vidImg.clone();
if(showMat.channels() > 1)
{
cvtColor(showMat,showMat,CV_BGR2RGB);
qImg = QImage((uchar*)showMat.data,showMat.cols,showMat.rows,showMat.step,QImage::Format_RGB888);
}
else if(showMat.channels() == 1)
qImg = QImage((uchar*)showMat.data,showMat.cols,showMat.rows,showMat.step,QImage::Format_Indexed8);
ui->lblOrig->setPixmap(QPixmap::fromImage(qImg));
double endTime = clock();
double timeDuration = (endTime - startTime)/CLOCKS_PER_SEC;
if(numVid%10 == 0)
{
framesPS = int(1/timeDuration) - 1;
if(framesPS < 1)
framesPS = 1;
FPSChanged(framesPS);
numVid = 0;
}

Your hints have helped me solve the problem. MSER creates a lot of data, and to display it I had programmed a once-per-second update into a table, which works independently. That by itself was no problem, but it is too much for the table to display all the points. So the idea was to fill the table with just the hull points. I changed the corresponding vector and now it runs like nothing else.
I found that out thanks to your hint about Valgrind; I had never needed it before. The threading hints also taught me a lot about threading. Thank you for that.
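For reference, here is roughly what that change looks like as a minimal sketch (hullsOf() and fillTable() are illustrative names, not from my actual code); it assumes the MSER operator fills a std::vector<std::vector<cv::Point> > as in the snippets above:
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Reduce each MSER region to its convex hull so the table only has to show a
// handful of vertices per region instead of every region pixel.
std::vector<std::vector<cv::Point> > hullsOf(const std::vector<std::vector<cv::Point> >& regions)
{
    std::vector<std::vector<cv::Point> > hulls;
    hulls.reserve(regions.size());
    for (size_t i = 0; i < regions.size(); ++i)
    {
        std::vector<cv::Point> hull;
        cv::convexHull(regions[i], hull);
        hulls.push_back(hull);
    }
    return hulls;
}

// usage after FtMSERVid(vidImg, MSERPointsVid):
//   fillTable(hullsOf(MSERPointsVid));   // fillTable stands in for the table update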
Ingeborg

Related

Switching an image at specific frequencies c++

I am currently developing a stimuli provider for the brain's visual cortex as part of a university project. The program is to (preferably) be written in C++, using Visual Studio and OpenCV. The way it is supposed to work is that the program creates a number of threads, one per distinct frequency, each running a timer for its respective frequency.
The code looks like this so far:
void timerThread(void *param) {
t *args = (t*)param;
int id = args->data1;
float freq = args->data2;
unsigned long period = round((double)1000 / (double)freq)-1;
while (true) {
Sleep(period);
show[id] = 1;
Sleep(period);
show[id] = 0;
}
}
It seems to work okay for some of the frequencies, but others vary quite a lot in frame rate. I have tried creating my own timing function, similar to what is done in Arduino's "blinkWithoutDelay" function, though this worked very badly. I have also tried the waitKey() function; it behaved much like the Sleep() function used now.
Any help would be greatly appreciated!
You should use timers instead of "sleep" to fix this, as sometimes the loop may take more or less time to complete.
Restart the timer at the start of the loop and take its value right before the reset; this will give you the time it took for the loop to complete.
If this time is greater than the "period" value, then it means you're late, and you need to execute right away (and even lower the period for the next loop).
Otherwise, if it's lower, then it means you need to wait until it is greater.
I personally dislike sleep, and instead constantly restart the timer until it's greater.
I suggest looking into "fixed timestep" code, such as the one below. You'll need to put this snippet of code on every thread with varying values for the period (ns) and put your code where "doUpdates()" is.
If you need a "timer" library, since I don't know OpenCV, I recommend SFML (SFML's timer docs).
The following code is from here:
long int start = 0, end = 0;
double delta = 0;
double ns = 1000000.0 / 60.0; // Syncs updates at 60 per second (59 - 61)
while (!quit) {
start = timeAsMicro();
delta+=(double)(start - end) / ns; // You can skip dividing by ns here and do "delta >= ns" below instead //
end = start;
while (delta >= 1.0) {
doUpdates();
delta-=1.0;
}
}
Please mind the fact that in this code, the timer is never reset.
(This may not be completely accurate but is the best assumption I can make to fix your problem given the code you've presented)
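The snippet above assumes a timeAsMicro() helper that returns a monotonically increasing microsecond count; it is not shown in the original source, but a minimal sketch using std::chrono could look like this:
#include <chrono>

// Monotonic microsecond counter for the fixed-timestep loop above.
// steady_clock is used so the value never jumps when the system clock changes.
// (Declare start/end as long long in the loop if long is 32-bit on your platform.)
long long timeAsMicro()
{
    using namespace std::chrono;
    static const steady_clock::time_point start = steady_clock::now();
    return duration_cast<microseconds>(steady_clock::now() - start).count();
}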

QCustomPlot Huge Amount of Data Plotting

I am trying to plot some serial data in my Qt GUI program using the QCustomPlot class. I had no trouble when I tried to plot data at low sampling rates like 100 samples/second; the graph looked really nice and plotted smoothly. But at high sampling rates such as 1000 samples/second, the plotter becomes a bottleneck for the serial read function. It slows the serial handling down so much that there was a huge delay, about 4-5 seconds behind the device. Put simply, the plotter could not keep up with the data stream. So, is there any common issue I don't know about, or any recommendation?
I thought of these scenarios:
1- divide the whole program into 2 or 3 threads. For example, the serial part runs in one thread and the plotting part runs in another thread, and the two threads communicate via a QSemaphore
2- the fps of QCustomPlot is limited, but there should be a solution, because NI LabVIEW plots up to 2k samples without any delay
3- design a new virtual serial device on the USB protocol. Right now I am using an FT232RL serial-to-USB converter.
4- change the programming language. What is the situation and class support in C# or Java for real-time plotting? (I know it sounds like a kid talking, but this is a pretext to gain experience in other languages)
My serial device's data-sending function (it is a toy device for the experiment, there is no serious coding) is briefly this:
void progTask()
{
DelayMsec(1); //my delay function, milisecond
//read value from adc13
Adc13Read(adcValue.ui32Part);
sendData[0] = (char)'a';
sendData[1] = (char)'k';
sendData[2] = adcValue.bytes[0];
sendData[3] = (adcValue.bytes[1] & 15);
//send test data
UARTSend(UART6_BASE,&sendData[0],4);
}
The Qt program's read function is:
union{
unsigned char bytes[2];
unsigned int intPart;
unsigned char *ptr;
}serData;
void MedicalSoftware::serialReadData()
{
if(serial->bytesAvailable()<4)
{
//if the frame size is less than 4 bytes return and
//wait to full serial receive buffer
//note: serial->setReadBufferSize(4)!!!!
return;
}
QByteArray serialInData = serial->readAll();
//my algorithm
if(serialInData[0] == 'a' && serialInData[1] == 'k')
{
serData.bytes[0] = serialInData[2];
serData.bytes[1] = serialInData[3];
}else if(serialInData[2] == 'a' && serialInData[3] == 'k')
{
serData.bytes[0] = serialInData[0];
serData.bytes[1] = serialInData[1];
}
else if(serialInData[1] == 'a' && serialInData[2] == 'k')
{
serial->read(1);
return;
}else if(serialInData[0] == 'k' && serialInData[3] == 'a')
{
serData.bytes[0] = serialInData[1];
serData.bytes[1] = serialInData[2];
}
plotMainGraph(serData.intPart);
serData.intPart = 0;
}
And the QCustomPlot setup function is:
void MedicalSoftware::setGraphsProperties()
{
//MAIN PLOTTER
ui->mainPlotter->addGraph();
ui->mainPlotter->xAxis->setRange(0,2000);
ui->mainPlotter->yAxis->setRange(-0.1,3.5);
ui->mainPlotter->xAxis->setLabel("Time(s)");
ui->mainPlotter->yAxis->setLabel("Magnitude(mV)");
QSharedPointer<QCPAxisTickerTime> timeTicker(new QCPAxisTickerTime());
timeTicker->setTimeFormat("%h:%m:%s");
ui->mainPlotter->xAxis->setTicker(timeTicker);
ui->mainPlotter->axisRect()->setupFullAxesBox();
QPen pen;
pen.setColor(QColor("blue"));
ui->mainPlotter->graph(0)->setPen(pen);
dataTimer = new QTimer;
}
And finally the plot function:
void MedicalSoftware::plotMainGraph(const quint16 serData)
{
static QTime time(QTime::currentTime());
double key = time.elapsed()/1000.0;
static double lastPointKey = 0;
if(key-lastPointKey>0.005)
{
double value0 = serData*(3.3/4096);
ui->mainPlotter->graph(0)->addData(key,value0);
lastPointKey = key;
}
ui->mainPlotter->xAxis->setRange(key+0.25, 2, Qt::AlignRight);
counter++;
ui->mainPlotter->replot();
counter = 0;
}
Quick answer:
Have you tried:
ui->mainPlotter->replot(QCustomPlot::rpQueuedReplot);
According to the documentation it can improve performance when doing a lot of replots.
Longer answer:
My feeling on your code is that you are trying to replot as often as you can to get a "real time" plot. But if you are on a PC with a desktop OS there is no such thing as real time.
What you should care about is:
Ensure that the code that read/write to the serial port is not delayed too much. "Too much" is to be interpreted with respect to the connected hardware. If it gets really time critical (which seems to be your case) you have to optimize your read/write functions and eventually put them alone in a thread. This can go as far as reserving a full hardware CPU core for this thread.
Ensure that the graph plot is refreshed fast enough for the human eyes. You do not need to do a full repaint each time you receive a single data point.
In your case you receive 1000 samples/s, which means one sample every ms. That is quite fast, because it is beyond the default timer resolution of most desktop OSes. That means you are likely to have more than a single data point available when calling your serialReadData(), and you could optimize by calling it less often (e.g. call it every 10 ms and read 10 data points each time). Then you could call replot() every 30 ms, which would add 30 new data points each time, skip about 29 replot() calls every 30 ms compared to your code, and give you ~30 fps.
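As a sketch of that idea (plotTimer, pendingKeys/pendingValues and updatePlot() are illustrative names; rpQueuedReplot requires QCustomPlot 2.0 or later):
// In the constructor: let a ~30 ms timer drive the plotting instead of the serial slot.
//   plotTimer = new QTimer(this);
//   connect(plotTimer, SIGNAL(timeout()), this, SLOT(updatePlot()));
//   plotTimer->start(33);   // ~30 fps refresh, independent of the 1 kHz data rate

void MedicalSoftware::updatePlot()
{
    if (pendingKeys.isEmpty())          // QVector<double> members filled by serialReadData()
        return;
    ui->mainPlotter->graph(0)->addData(pendingKeys, pendingValues);  // add the whole batch at once
    ui->mainPlotter->xAxis->setRange(pendingKeys.last() + 0.25, 2, Qt::AlignRight);
    pendingKeys.clear();
    pendingValues.clear();
    ui->mainPlotter->replot(QCustomPlot::rpQueuedReplot);            // one replot per tick
}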
1- divide the whole program into 2 or 3 threads. For example, the serial part
runs in one thread and the plotting part runs in another thread, and the two
threads communicate via a QSemaphore
Dividing the GUI and the serial part into 2 threads is good, because it prevents a bottleneck in the GUI from blocking the serial communication. Also, you could skip using a semaphore and simply rely on Qt signal/slot connections (connected in Qt::QueuedConnection mode), as sketched below.
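A minimal sketch of that split (SerialWorker, samplesReady() and appendToPlot() are illustrative names, not an existing API):
// Run the serial reader in its own thread and hand batches of samples to the
// GUI thread through a queued signal/slot connection.
qRegisterMetaType<QVector<double> >("QVector<double>"); // needed for queued delivery of this type

QThread* serialThread = new QThread(this);
SerialWorker* worker = new SerialWorker;                 // owns the QSerialPort, no parent
worker->moveToThread(serialThread);

connect(serialThread, SIGNAL(started()), worker, SLOT(openPort()));
connect(worker, SIGNAL(samplesReady(QVector<double>,QVector<double>)),
        this, SLOT(appendToPlot(QVector<double>,QVector<double>)),
        Qt::QueuedConnection);                           // delivered in the GUI thread's event loop
connect(serialThread, SIGNAL(finished()), worker, SLOT(deleteLater()));

serialThread->start();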
4- change the programming language. What is the situation and class
support in C# or Java for real-time plotting? (I know it sounds like a kid
talking, but this is a pretext to gain experience in other languages)
Changing the programming language will, in the best case, change nothing, and could even hurt your performance, especially if you move toward languages that are not compiled to native CPU instructions.
Changing the plotting library on the other hand could change the performances. You can look at Qt Charts and Qwt. I do not know how they compare to QCustomPlot though.

ASIOCallbacks::bufferSwitchTimeInfo comes very slowly at 2.8 MHz sample rate with DSD format on Sony PHA-3

I bought a Sony PHA-3 and am trying to write an app to play DSD in native mode. (I've already succeeded in DoP mode.)
However, when I set the sample rate to 2.8 MHz, I found that ASIOCallbacks::bufferSwitchTimeInfo is not called as fast as it should be.
It takes nearly 8 seconds to request the 2.8 M samples that should be consumed in 1 second.
The code is only slightly modified from the host sample of ASIO SDK 2.3, so I'll post a few of the key parts to complete my question.
After ASIO starts, the host sample keeps printing the progress to indicate the time info like this:
fprintf (stdout, "%d ms / %d ms / %d samples %ds", asioDriverInfo.sysRefTime,
(long)(asioDriverInfo.nanoSeconds / 1000000.0),
(long)asioDriverInfo.samples,
(long)(asioDriverInfo.samples / asioDriverInfo.sampleRate));
The final expression will tell me how many seconds have elapsed (asioDriverInfo.samples / asioDriverInfo.sampleRate).
Where asioDriverInfo.sampleRate is 2822400 Hz.
And asioDriverInfo.samples is assigned in the ASIOCallbacks::bufferSwitchTimeInfo like below:
if (timeInfo->timeInfo.flags & kSamplePositionValid)
asioDriverInfo.samples = ASIO64toDouble(timeInfo->timeInfo.samplePosition);
else
asioDriverInfo.samples = 0;
This is the original code of the sample.
So I can easily see that the elapsed time advances very slowly.
I tried raising the sample rate even higher, say 2.8 MHz * 4, and it takes even longer for the time to advance by 1 second.
When I tried to lower the sample rate below 2.8 MHz, the API call failed.
I have certainly set the sample format according to the SDK guide:
ASIOIoFormat aif;
memset(&aif, 0, sizeof(aif));
aif.FormatType = kASIODSDFormat;
ASIOSampleRate finalSampleRate = 176400;
if(ASE_SUCCESS == ASIOFuture(kAsioSetIoFormat,&aif) ){
finalSampleRate = 2822400;
}
In fact, without setting the sample format to DSD, setting the sample rate to 2.8 MHz leads to an API failure.
Finally, I remembered that all DAWs (Cubase, Reaper, ...) have an option to set the thread priority, so I suspected the callback's thread priority was not high enough and tried to raise it to see if this could help. However, when I checked the thread priority, it returned THREAD_PRIORITY_TIME_CRITICAL.
static double processedSamples = 0;
if (processedSamples == 0)
{
HANDLE t = GetCurrentThread();
int p = GetThreadPriority(t); // I get THREAD_PRIORITY_TIME_CRITICAL here
SetThreadPriority(t, THREAD_PRIORITY_HIGHEST); // So the priority is no need to raise anymore....(SAD)
}
It's the same for the ThreadPriorityBoost property: it is not disabled (already boosted).
Has anybody tried to write an ASIO host demo and can help me resolve this issue?
Thanks very much in advance.
Issue solved.
I should call getBufferSize and createBuffers after the kAsioSetIoFormat call.
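For completeness, the order that worked is roughly the following sketch (bufferInfos, numChannels and asioCallbacks come from the SDK host sample; treat this as illustrative, not my exact code):
// Select the native DSD format first; only then query the buffer size and
// create the buffers, because switching to kASIODSDFormat changes what the
// driver reports and expects.
ASIOIoFormat aif;
memset(&aif, 0, sizeof(aif));
aif.FormatType = kASIODSDFormat;

if (ASIOFuture(kAsioSetIoFormat, &aif) == ASE_SUCCESS)
{
    long minSize, maxSize, preferredSize, granularity;
    ASIOGetBufferSize(&minSize, &maxSize, &preferredSize, &granularity);
    ASIOSetSampleRate(2822400.0);
    ASIOCreateBuffers(bufferInfos, numChannels, preferredSize, &asioCallbacks);
}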

opencv videocapture default setting

I am using a MacBook and have a program written in C++ that extracts successive frames from the webcam. The extracted frames are then grayscaled and smoothed using OpenCV functions. After that I use CVNorm to find the relative difference between 2 frames. I am using the VideoCapture class.
I found out that the frame rate is 30 fps and, using CVNorm, the relative difference between successive frames is less than 200 most of the time.
I am trying to do the same thing in Xcode so as to implement the program on an iPad. This time I am using AVCaptureSession; the same steps are performed, but I notice that the relative difference between 2 frames is much higher (>600).
Thus I would like to know the default settings of the VideoCapture class. I know I can edit the settings using cvSetCaptureProperty, but I cannot find their default values. I would then compare them with the settings of the AVCaptureSession and hope to find out why there is such a huge difference in CVNorm when I use these 2 approaches to extract my frames.
Thanks in advance.
OpenCV's VideoCapture class is just a simple wrapper for capturing video from cameras or for reading video files. It is built upon several multimedia frameworks (AVFoundation, DirectShow, FFmpeg, V4L, GStreamer, etc.) and totally hides them from the outside. This is where the problem comes from: it is really hard to achieve the same capture behaviour under different platforms and multimedia frameworks. The common set of capture properties is small, and setting their values is rather a suggestion than a requirement.
In summary, the default properties can differ under different platforms, but in case of AV Foundation framework:
The cvCreateCameraCapture_AVFoundation(int index) function will create a CvCapture instance under iOS, which is defined in cap_qtkit.mm. Seems like you are not able to change the sampling rate, only CV_CAP_PROP_FRAME_WIDTH, CV_CAP_PROP_FRAME_HEIGHT and DISABLE_AUTO_RESTART are supported.
The grabFrame() implementation is below. I'm absolutely not an Objective-C expert, but it seems like it waits until the capture updates the image or a time out occurs.
bool CvCaptureCAM::grabFrame() {
return grabFrame(5);
}
bool CvCaptureCAM::grabFrame(double timeOut) {
NSAutoreleasePool* localpool = [[NSAutoreleasePool alloc] init];
double sleepTime = 0.005;
double total = 0;
[NSTimer scheduledTimerWithTimeInterval:100 target:nil selector:@selector(doFireTimer:) userInfo:nil repeats:YES];
while (![capture updateImage] && (total += sleepTime)<=timeOut) {
[[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:sleepTime]];
}
[localpool drain];
return total <= timeOut;
}
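If you want to see what your particular build/backend actually reports, a quick sketch that just queries the capture properties (a return value of 0 or -1 usually means the backend does not expose that property):
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

int main()
{
    cv::VideoCapture cap(0);                 // default camera
    if (!cap.isOpened())
        return 1;
    // Print whatever the backend is willing to report (OpenCV 2.4 property names).
    std::printf("width:      %f\n", cap.get(CV_CAP_PROP_FRAME_WIDTH));
    std::printf("height:     %f\n", cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    std::printf("fps:        %f\n", cap.get(CV_CAP_PROP_FPS));
    std::printf("brightness: %f\n", cap.get(CV_CAP_PROP_BRIGHTNESS));
    std::printf("exposure:   %f\n", cap.get(CV_CAP_PROP_EXPOSURE));
    return 0;
}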

Is event recording on time-sensitive possible?

Basic Question
Is there any way to do event recording and playback within a time-sensitive (framerate-independent) system?
Any help - including a simple "No sorry it is impossible" - would be greatly appreciated. I have spent almost 20 hours working on this over the past few weekends and am driving myself crazy.
Full Details
This is currently aimed at a game, but the libraries I'm writing are designed to be more general, and this concept applies to more than just my C++ code.
I have some code that looks functionally similar to this... (it is written in C++0x but I'm taking some liberties to make it more compact)
void InputThread()
{
InputAxisReturn AxisState[IA_SIZE];
while (Continue)
{
Threading()->EventWait(InputEvent);
Threading()->EventReset(InputEvent);
pInput->GetChangedAxis(AxisState);
//REF ALPHA
if (AxisState[IA_GAMEPAD_0_X].Changed)
{
X_Axis = AxisState[IA_GAMEPAD_0_X].Value;
}
}
}
And I have a separate thread that looks like this...
//REF BETA
while (Continue)
{
//Is there a message to process?
StandardWindowsPeekMessageProcessing();
//GetElapsedTime() returns a float of time in seconds since its last call
UpdateAll(LoopTimer.GetElapsedTime());
}
Now I'd like to record input events for playback for testing and some limited replay functionality.
I can easily record the events with precision timing by simply inserting the following code where I marked //REF ALPHA
//REF ALPHA
EventRecordings.pushback(EventRecording(TimeSinceRecordingBegan, AxisState));
The real issue is playing these back. My LoopTimer is extremely high precision, using the High Performance Counter (QueryPerformanceCounter). This means that it is nearly impossible to hit the same time difference using code like the below in place of //REF BETA
// REF BETA
NextEvent = EventRecordings.pop_back();
Time TimeSincePlaybackBegan;
while (Continue)
{
//Is there a message to process?
StandardWindowsPeekMessageProcessing();
//Did we reach the next event?
if (TimeSincePlaybackBegan >= NextEvent.TimeSinceRecordingBegan)
{
if (NextEvent.AxisState[IA_GAMEPAD_0_X].Changed)
{
X_Axis = NextEvent.AxisState[IA_GAMEPAD_0_X].Value;
}
NextEvent = EventRecordings.pop_back();
}
//GetElapsedTime() returns a float of time in seconds since its last call
Time elapsed = LoopTimer.GetElapsedTime();
UpdateAll(elapsed);
TimeSincePlaybackBegan += elapsed;
}
The issue with this approach is that you will almost never hit the exact same time so you will have a few microseconds where the playback doesn't match the recording.
I also tried event snapping. Kind of a confusing term but basically if the TimeSincePlaybackBegan > NextEvent.TimeSinceRecordingBegan then TimeSincePlaybackBegan = NextEvent.TimeSinceRecordingBegan and ElapsedTime was altered to suit.
It had some interesting side effects which you would expect (like some slowdown) but it unfortunately still resulted in the playback de-synchronizing.
For some more background - and possibly a reason why my time snapping approach didn't work - I'm using BulletPhysics somewhere down that UpdateAll call. Kind of like this...
void Update(float diff)
{
static const float m_FixedTimeStep = 0.005f;
static const uint32 MaxSteps = 200;
//Updates the fps
cWorldInternal::Update(diff);
if (diff > MaxSteps * m_FixedTimeStep)
{
Warning("cBulletWorld::Update() diff > MaxTimestep. Determinism will be lost");
}
pBulletWorld->stepSimulation(diff, MaxSteps, m_FixedTimeStep);
}
But I also tried pBulletWorld->stepSimulation(diff, 0, 0), which according to http://www.bulletphysics.org/mediawiki-1.5.8/index.php/Stepping_the_World should have done the trick, but still to no avail.
Answering my own question for anyone else who stumbles upon this.
Basically, if you want deterministic recording and playback you need to lock the frame rate. If the system cannot handle the frame rate, you must introduce slowdown or risk desynchronization.
After two weeks of additional research I've decided it is just not possible due to floating point inadequacies and the fact that floating point numbers are not necessarily associative.
The only solution to have a deterministic engine that relies on floating point numbers is to have a stable and fixed frame-rate. Any change in the frame-rate will - across a long term - result in the playback becoming desynchronized.
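For anyone who lands here, the concrete shape of a locked, fixed-timestep loop is the usual accumulator pattern; a minimal sketch (stepSimulation(), applyRecordedEventsForStep() and render() are placeholders for your own engine calls):
#include <chrono>

// The simulation always advances in identical dt slices, so events recorded
// against a step index replay deterministically no matter how fast frames render.
void runLoop(bool& quit)
{
    using namespace std::chrono;
    const microseconds dt(5000);                  // 5 ms fixed step, matching m_FixedTimeStep above
    steady_clock::time_point previous = steady_clock::now();
    microseconds accumulator(0);
    unsigned long stepIndex = 0;

    while (!quit)
    {
        steady_clock::time_point now = steady_clock::now();
        accumulator += duration_cast<microseconds>(now - previous);
        previous = now;

        while (accumulator >= dt)
        {
            // applyRecordedEventsForStep(stepIndex);  // events keyed to step numbers, not wall time
            // stepSimulation(0.005f);                 // always the same float timestep
            ++stepIndex;
            accumulator -= dt;
        }
        // render();                                   // runs at whatever rate is left over
    }
}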