I am writing a desktop app for Windows in C++, using Qt for the GUI and GStreamer for audio processing.
In my app I need to monitor several internet AAC audio streams to see whether they are online, and play the available stream with the highest priority. For this task I use the GstDiscoverer object from GStreamer, but I am having problems with it.
I check the audio streams every 1-2 seconds, so GstDiscoverer is called very often.
Every time my app runs, it eventually crashes with a segmentation fault during a GstDiscoverer check.
I tried both the synchronous and asynchronous ways of calling GstDiscoverer (gst_discoverer_discover_uri() and gst_discoverer_discover_uri_async()); both behave the same way.
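For reference, a synchronous check of one stream looks roughly like this (a minimal sketch, not my exact code; the URI parameter and the 5-second timeout are placeholders):

#include <gst/gst.h>
#include <gst/pbutils/pbutils.h>

/* Minimal sketch of one availability check. The URI comes from the
   priority list; the 5-second timeout is a placeholder value. */
static gboolean stream_is_online(const gchar *uri)
{
    GError *err = NULL;
    GstDiscoverer *disc = gst_discoverer_new(5 * GST_SECOND, &err);
    if (disc == NULL) {
        g_clear_error(&err);
        return FALSE;
    }
    GstDiscovererInfo *info = gst_discoverer_discover_uri(disc, uri, &err);
    gboolean online = (info != NULL) &&
        (gst_discoverer_info_get_result(info) == GST_DISCOVERER_OK);
    if (info != NULL)
        gst_discoverer_info_unref(info);
    g_clear_error(&err);
    g_object_unref(disc);
    return online;
}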
The crash happens in the aac_type_find() function from gsttypefindfunctions.c at line 1122 (the second line of the code below):
len = ((c.data[offset + 3] & 0x03) << 11) |
(c.data[offset + 4] << 3) | ((c.data[offset + 5] & 0xe0) >> 5);
Local variables captured in the debugger during one of the crashes:
As we can see, the offset variable is greater than c.size, so c.data[offset] is out of range; I think that is why the segmentation fault happens.
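In other words, the scan runs past the end of the buffer before reading the ADTS length field. Conceptually, the missing guard would look something like this (a sketch of the idea only, not the actual upstream patch):

/* Sketch only: the length field reads c.data[offset + 3 .. offset + 5],
   so all of those bytes must exist before indexing. */
if (offset + 5 >= c.size)
    break; /* not enough data left for a full ADTS header */

len = ((c.data[offset + 3] & 0x03) << 11) |
      (c.data[offset + 4] << 3) | ((c.data[offset + 5] & 0xe0) >> 5);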
This does not happen at regular intervals; the program may run for several hours or only ten minutes.
But it seems to happen more often when the interval between GstDiscoverer calls is small. So there is some probability that any given call into aac_type_find() crashes.
I tried GStreamer versions 1.6.1 and the latest 1.6.2; the bug exists in both.
Can somebody help me with this problem? Is this a GStreamer bug, or am I doing something wrong?
This was reported to the GStreamer project, and a patch for the crash has been merged and will be in the next releases: https://bugzilla.gnome.org/show_bug.cgi?id=759910
I am trying to communicate between a Qt application and an Arduino. The flow is like this: the Qt application sends a '1' and the Arduino is expected to respond with some data (the response string is large, around 300 characters). The Qt application sends '1' at a rate of about 5 Hz (every 200 ms).
The problem I am facing is an accumulating delay in the Arduino-to-Qt communication. The data I receive from the Arduino arrives at 5 Hz as expected, but it is not recent data, and the delay keeps increasing over time. I believe there is some problem with a buffer somewhere.
What I tried:
QSerialPort serialPort; is my device port.
serialPort.clear() and serialPort.flush() to clear the serial communication buffer, but the issue still persists.
Increasing and decreasing the baud rate on both ends.
Reducing the response length from the Arduino; the delay shrinks significantly, but the accumulated delay still appears after a long time.
Here is my code snippet:
connect(timer_getdat, SIGNAL(timeout()), this, SLOT(Rec()));
timer_getdat->start(200);
where Rec() is the function that does the communication part.
In Rec():
serialPort.write("1", 2);               // sends '1' plus the terminating '\0'
// serialPort.waitForBytesWritten(100);
long long bytes_available = serialPort.bytesAvailable();
if (bytes_available >= 1)
{
    serialPort.readLine(temp, 500);     // reads at most one line per 200 ms tick
    serialPort.flush();                 // no change
    serialPort.clear();                 // no change by .clear() either
}
I have been stuck on this issue for quite a long time. The snippet above is what I think is relevant, but if anyone needs more clarification, I can share more of the code.
I also encountered the same issue, and yes, QSerialPort::clear() and QSerialPort::flush() don't help. Try readAll() instead.
So change the part in your Rec() function to something like this:
serialPort.write("1", 2);
long long bytes_available = serialPort.bytesAvailable();
if (bytes_available >= 1)
{
    serialPort.readLine(temp, 500);
    serialPort.readAll(); // this reads all the data left in the buffer at once and clears the queue
}
Even on the Qt forums I didn't find an answer to this; I was playing with all the functions available in the QSerialPort class, and readAll() is what worked.
About readAll(), the Qt documentation says:
Reads all remaining data from the device, and returns it as a byte array.
My explanation for the fix is that readAll() captures all of the data in the communication buffer and empties it.
That should be the job of the clear() function, but in practice only readAll() works.
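A variant of the same idea is to drain the buffer first and keep only the newest complete line, so each 200 ms tick always processes the freshest response. A minimal sketch (drainAndKeepNewest() and processLine() are made-up names, not part of the original code):

#include <QSerialPort>

// Hypothetical helper: drain the port and hand only the newest complete line on.
void drainAndKeepNewest(QSerialPort &serialPort)
{
    serialPort.write("1", 1); // the poll byte; the '\0' is not needed

    QByteArray pending = serialPort.readAll(); // empty the receive buffer in one call
    int end = pending.lastIndexOf('\n');
    if (end < 1)
        return; // no complete line yet; try again on the next tick
    int start = pending.lastIndexOf('\n', end - 1) + 1;
    processLine(pending.mid(start, end - start)); // hypothetical handler for the freshest line
}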
First, I program for vintage computer groups. What I write is specifically for MS-DOS, not Windows, because that's what people are running. My current program targets later systems, not the 8086 line, so the plan was to use IRQ 8. This lets me set the interrupt rate to powers of two from 2/second to 8192/second (2, 4, 8, 16, etc.).
Only, for some reason, on the newer old systems (OK, that sounds weird) it doesn't seem to be working. In emulation, and on the 386 system I have access to, it works just fine, but on the P3 system I have (GA-6BXC motherboard with a P3 800 CPU) it just doesn't work.
The code
Setting up the interrupt:
disable();
oldrtc = getvect(0x70);       //Reads the old vector for IRQ 8
setvect(0x70,countdown);      //Sets the vector for IRQ 8 to our handler
outportb(0x70,0x8a);          //Selects RTC register A
y = inportb(0x71) & 0xf0;     //Keeps the divider bits, clears the rate bits
outportb(0x70,0x8a);
outportb(0x71,y | _MRATE_);   //Adjustable value, set for 64 interrupts per second
outportb(0x70,0x8b);          //Selects RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y | 0x40);      //Sets bit 6, the periodic interrupt enable (PIE) bit
enable();
At the end of the interrupt handler:
outportb(0x70,0x0c);   //Selects RTC register C
inportb(0x71);         //Reading register C resets/acknowledges the interrupt
outportb(0xa0,0x20);   //EOI to the slave PIC (turns interrupts back on)
outportb(0x20,0x20);   //EOI to the master PIC; there are 2 PICs on AT machines and later
When shutting the program down:
disable();
outportb(0x70,0x8b);      //Selects RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y & 0xbf);  //Clears bit 6 to disable the periodic interrupt
setvect(0x70,oldrtc);     //Restores the original IRQ 8 vector
enable();
I don't see anything in the code that could be causing the problem, and it just doesn't make sense. While I don't completely trust the information, MSD does report IRQ 8 as the RTC counter and says it is present and working just fine. Is it possible that later systems have moved the vector? Everything I find says IRQ 8 is vector 0x70, but the interrupt never triggers on my Pentium III system. Is there some way to find out whether the vectors have been changed?
It's been a LONG time since I've done any MS-DOS code and I don't think I ever worked with this particular interrupt (I'm pretty sure you can just read a memory location to fetch the time too, and IRQ 0 can be used to trigger you at an interval as well, so maybe that's better). Anyway, given my rustiness, forgive me for kind of link-dumping.
The bottom of http://wiki.osdev.org/Real_Time_Clock has someone saying they've had problems on some machines too. RBIL suggests it might be a BIOS thing: http://www.ctyme.com/intr/rb-7797.htm
Without DOS, I'd just capture IRQ 0 itself, remap all of the IRQs to my own interrupt numbers, and change the timing as needed. I've done that somewhat recently! I think that's a bad idea on DOS though; this looks more recommended for that: http://www.ctyme.com/intr/rb-2443.htm
Anyway though, I betcha it has to do with the BIOS thing:
"Notes: Many BIOSes turn off the periodic interrupt in the INT 70h handler unless in an event wait (see INT 15/AH=83h,INT 15/AH=86h).. May be masked by setting bit 0 on I/O port A1h "
The following code runs inside a thread and reads input data coming over USB. Approximately every 80 readings it misses one of the packets coming from an STM32 board. The board is programmed to send a data packet to the Android tablet every second.
// Non-working code
while (running) {
    int resp = bulktransfer(mInEp, mBuf, mBuf.length, 1000);
    if (resp > 0) {
        dispatchMessage(mBuf);
    } else if (resp < 0) {
        showsBufferEmptyMessage();
    }
}
I was looking at the Missile Launcher example from Android and at other libraries on the internet, and they put a delay of 50 ms between polls. Doing this solves the missing-packet problem.
// Working code
while (running) {
    int resp = bulktransfer(mInEp, mBuf, mBuf.length, 1000);
    if (resp > 0) {
        dispatchMessage(mBuf);
    } else if (resp < 0) {
        showsBufferEmptyMessage();
    }
    try {
        Thread.sleep(50); // the 50 ms delay that makes the loop work
    } catch (InterruptedException e) {
    }
}
Does anyone know the reason why the delay works? Most of the libraries on GitHub have this delay and, as I mentioned before, the Google example does too.
I am putting down my results regarding this problem. It seems that the UsbDeviceConnection.bulkTransfer(...) method has some bug when called continuously. The solution was to use the asynchronous API, the UsbRequest class. Using it, I could read from the input endpoint without any delay and no data was lost during the whole stress test. So the direction to take is the asynchronous UsbRequest instead of the synchronous bulkTransfer.
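For reference, the asynchronous read loop looks roughly like this (a minimal sketch, not my exact code; connection is the UsbDeviceConnection, mInEp the IN endpoint, and error handling is omitted):

import android.hardware.usb.UsbRequest;
import java.nio.ByteBuffer;

// Sketch of the UsbRequest-based loop that replaced bulkTransfer.
ByteBuffer buffer = ByteBuffer.allocate(mInEp.getMaxPacketSize());
UsbRequest request = new UsbRequest();
request.initialize(connection, mInEp);

while (running) {
    request.queue(buffer, buffer.capacity());   // queue an asynchronous IN transfer
    if (connection.requestWait() == request) {  // blocks until a queued request completes
        dispatchMessage(buffer.array());        // buffer now holds the received packet
        buffer.clear();
    }
}
request.close();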
I'm using the Parallel Computing Toolbox (PCT) in combination with the SimBiology toolbox in MATLAB R2012b. I'm receiving an intermittent error message when I run my script with a remote pool of workers, but not with a local pool of workers:
Caught std::exception Exception message is:
vector::_M_range_check
Error using parallel_function (line 589)
Error in remote execution of remoteParallelFunction : RUNTIME_ERROR
Error in PSOFit (line 486)
parfor ns = 1:r.NumSwp
Error in PSOopt_driver (line 209)
PSOFit(ObjFuncName,LB,UB,PSOopts);
The error does not occur when I comment out the call to the function sbiosimulate (a SimBiology function for model evaluation).
I have a couple of ideas:
I've introduced some sort of race condition that causes a problem in accessing the model variables (is this even possible in MATLAB?).
Model compilation in SimBiology is sometimes, but not always, compatible with the PCT, and I've hit some edge case.
Since sbiosimulate evaluates compiled C++ code, there might be a bug in that source that generates the exception for some inputs.
I am aware of this.
I'm a developer of SimBiology. I believe this is a bug that was introduced into SimBiology's C++ code in the R2012a release. The bug is triggered when a simulation ends without producing any simulation results. This can sometimes occur when the model is configured to report only particular times (using the OutputTimes option) AND the simulation is configured to stop after a particular amount of real time (using the MaximumWallClock option). Basically, the simulation "times out" before it ever gets a chance to log the first output time.
One way to work around this problem is to always include time 0 in the OutputTimes. Time 0 is always logged before the MaximumWallClock criterion is evaluated, preventing the bug from being triggered. I am also contacting this user directly and will work on fixing the bug in a future release.
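In practice the workaround looks like this (a sketch; m stands for your SimBiology model object):

% Sketch of the workaround: prepend time 0 to OutputTimes so at least one
% output is logged before the MaximumWallClock check can fire.
cs = getconfigset(m);
times = cs.SolverOptions.OutputTimes;
if isempty(times) || times(1) ~= 0
    cs.SolverOptions.OutputTimes = [0; times(:)];
end
[t, x, names] = sbiosimulate(m);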
I have an application that uses Winsock.
The I/O part is handled on another thread, and I am using the blocking select() method for the sockets.
The problem is that after 5-6 hours my application raises a 0xC00000FD (stack overflow) exception at the line of the select() call.
As far as I know, this exception occurs with unbounded recursion or very large local variables, but neither seems to be the case here.
So do you have any idea why I am getting this exception, or any ideas for discovering what actually causes it?
Many thanks.
EDIT 2:
Dear all, I am very sorry, but since reproducing the case takes a long time, I only just realized that this has not solved the problem.
Everything looks fine at the moment the stack overflow exception occurs at the select() line.
It is a server socket with one client connected, so there are 2 sockets in rset and 1 in wset. After selecting, I check all the ready sockets and do the required read, write, or accept. The timeout is 250 ms. Do you think this could be the problem? I don't want the call to block indefinitely, so the timeout is not NULL. But what exactly would be the difference if I used {0,0}?
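For context, the two timeout variants look like this in a sketch (the socket names and the helper are made up for illustration); {0,0} turns select() into a pure poll that returns immediately, while {0, 250000} blocks for up to 250 ms:

#include <winsock2.h>

// Sketch comparing the two timeout variants; listenSock/clientSock stand in
// for the caller's sockets.
int checkSockets(SOCKET listenSock, SOCKET clientSock)
{
    fd_set rset, wset;
    FD_ZERO(&rset);
    FD_ZERO(&wset);
    FD_SET(listenSock, &rset);
    FD_SET(clientSock, &rset);
    FD_SET(clientSock, &wset);

    timeval waitTimeout = { 0, 250000 }; // waits up to 250 ms for readiness
    // timeval pollTimeout = { 0, 0 };   // {0,0} would return immediately (pure poll)

    // Note: select() modifies the sets on return, so rebuild them (FD_ZERO +
    // FD_SET) before every call; reusing stale sets is a classic select() bug.
    return select(0, &rset, &wset, NULL, &waitTimeout);
}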
An important hint is:
The same code was working without any problem when the client socket wasn't sending any data.
But when I started sending data from the client to the server, this problem occurred.
I am sure there is no problem with the FD_SET and FD_CLR calls; when the client was not sending, only 1 socket (the server) was in rset and 1 (the client) was in wset.
Anyway, I had a look at a lot of samples, but there doesn't seem to be any difference.
Please see the local variables screenshot below (I have deleted the name of the executable, since it is a commercial product):
http://img192.imageshack.us/img192/1948/stackoverflow.jpg
And here is the call stack:
ntdll.dll!7c90df3a()
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
mswsock.dll!71a53c9c()
ntdll.dll!7c90d26c()
mswsock.dll!71a55f9f()
mswsock.dll!71a55974()
ws2_32.dll!71ab314f()
xyz.exe!vm_socket_select(vm_socket * hds=0x04c1fb84, int nhd=1, int masks=7) Line 230 + 0x1b bytes C
xyz.exe!ND::nd_socket::SocketThreadProc() Line 173 + 0x12 bytes C++
xyz.exe!ND::nd_socket::ThreadRoutineStarter(void * u=0x07d63f90) Line 332 C++
xyz.exe!_callthreadstartex() Line 348 + 0x6 bytes C
xyz.exe!_threadstartex(void * ptd=0x011a3ce8) Line 326 + 0x5 bytes C
kernel32.dll!7c80b713()
I am waiting for any advice. Many thanks.
Have you tried stopping your program in a debugger after it has been running for some time? Then take a look at the stack; it might give you a hint.
Recursion doesn't just mean one of your functions calling itself endlessly; it can be more subtle and involve several layers of calls before it comes back to where it started.