Raspberry Pi lockup when reading data from MPU6050+BMP180 - C++

I am trying to write a flight control program for a quadcopter; all my code can be found at https://github.com/sgsdxzy/adc
My 10DOF sensor board is a GY87, which consists of three I2C devices: an MPU6050, an HMC5883L and a BMP180 (same API as the BMP085). I use i2cdevlib and pigpio to get data from the sensors.
The MPU6050 can act as an I2C master for other devices, and I set the HMC5883L as its slave. I successfully got the DMP running. The MPU6050 can generate an interrupt when DMP data is ready and push the data into a FIFO. I wired the interrupt pin to a GPIO and used pigpio's gpioSetAlertFunc() to monitor its state. When an interrupt is generated, my program reads the FIFO to get the DMP data, then writes it to some (thread-safe) global variables. This alone works well.
The BMP180 is a relatively simpler device: you set a register to change the mode, wait for some time, then read other registers to get the results. This alone works well too.
However, when I combined the two, a random system lockup occurred. I set up the interrupt handler, then entered a loop: every 0.1 s I tell the BMP180 to measure and read its data, then print all the global variables on screen. The MPU6050 generates 100 interrupts per second and my program handles them in time, so every 0.1 s I get the latest data. This works well, with correct results, for several loops, but at some random point the program gets stuck, and top shows it using up two cores.
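In outline, the combined program looks like this sketch (the pin number and the commented-out sensor calls are placeholders, not the exact code from the repository):

#include <pigpio.h>
#include <unistd.h>

constexpr unsigned INT_GPIO = 4;   // hypothetical: pin wired to the MPU6050 INT line

// Called by pigpio on every MPU6050 interrupt (~100 Hz): read the DMP
// FIFO over I2C and store the result in the (thread-safe) globals.
void dmpAlert(int gpio, int level, uint32_t tick)
{
    if (level != 1) return;
    // mpu.getFIFOBytes(fifoBuffer, packetSize);   // i2cdevlib, over I2C
}

int main()
{
    if (gpioInitialise() < 0) return 1;
    gpioSetAlertFunc(INT_GPIO, dmpAlert);

    while (true) {
        // Trigger a BMP180 measurement and read it back -- more I2C
        // traffic, now racing with the I2C reads inside dmpAlert().
        // bmp.setControl(BMP085_MODE_PRESSURE_3); ... bmp.getPressure();
        usleep(100000);   // 0.1 s loop period; then print the globals
    }
}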
Remove either
1) the interrupt handling, or
2) the BMP180 measurement in every loop,
and the program runs flawlessly; I have tested that it stays stable for at least half an hour. But with both combined it always gets stuck, using up all the processor power, sometimes after 10 seconds and sometimes after a minute.
I just can't understand why. Could someone point out what's wrong, or at least tell me how to debug what causes the lockup, and with which tool? The lockup happens at a random point and there are too many steps involved, so running the program step by step under gdb isn't an option.
Thanks in advance.

All right, I found out that i2cdevlib is not thread-safe, and that caused the problem.
As i2cdevlib's API is quite different from pigpio's, porting all this code to pure pigpio is not trivial. For now I will make do with a pthread mutex so that only one thread accesses I2C at a time.
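For reference, the workaround is just one global mutex around every I2C transaction; a minimal sketch (the RAII wrapper is mine, the commented method names are from i2cdevlib's MPU6050/BMP085 classes):

#include <pthread.h>

// One global lock serializing every I2C transaction, since i2cdevlib
// keeps internal state that is not thread-safe.
static pthread_mutex_t i2cMutex = PTHREAD_MUTEX_INITIALIZER;

// Small RAII guard so no code path can forget to unlock.
struct I2cLock {
    I2cLock()  { pthread_mutex_lock(&i2cMutex); }
    ~I2cLock() { pthread_mutex_unlock(&i2cMutex); }
};

// In the MPU6050 interrupt callback:
//   { I2cLock lock; mpu.getFIFOBytes(fifoBuffer, packetSize); }
// In the BMP180 polling loop:
//   { I2cLock lock; pressure = bmp.getPressure(); }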

Related

Getting notified when an I2C-Value changes in C++

I recently started experimenting with I2C hardware on my Raspberry Pi. Following the tutorial Using the I2C interface, I already know how to read and set values. However, the program I want to write needs the current value at a specific address all the time. So I made a thread that queries the value constantly in a never-ending loop, which seems primitive to me. Is it possible to get notified, in an event-like manner, when a value at an I2C address changes?
A platform-independent solution would also be much welcomed.
I was able to get what I wanted.
I use the following repeater for the I2C bus: link. It turns out there is a solder bridge (LB2) you can set that raises a signal on GPIO17 whenever a value on the I2C bus changes. I can now listen for these events accordingly.
Generally speaking, the I2C bus has no interrupt capability, so with I2C alone all you can do is poll the chip for a certain event to happen or a value to change.
Most chips do have an interrupt line (sometimes even more than one) that can be programmed to trigger on certain events. The behavior of this line depends on the chip. Usually it needs to be enabled (using I2C commands) and it needs to be wired to a GPIO input line. For these, interrupt support is available.
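On a Raspberry Pi, for example, one way to use such a line is pigpio's kernel edge-interrupt callback; a minimal sketch, assuming the chip's interrupt output is wired to GPIO17 as in the answer above (the handler body and the commented read are placeholders):

#include <pigpio.h>
#include <cstdio>

// Hypothetical wiring: the chip's interrupt output on GPIO17, as with
// the repeater's LB2 bridge described above.
constexpr unsigned INT_PIN = 17;

// Runs on a rising edge: fetch the changed value over I2C here instead
// of polling in a loop (the read call is a placeholder).
void onChange(int gpio, int level, uint32_t tick)
{
    if (level == 1)
        std::printf("value changed at tick %u\n", tick);
    // i2cReadByteData(handle, reg);
}

int main()
{
    if (gpioInitialise() < 0) return 1;
    gpioSetMode(INT_PIN, PI_INPUT);
    gpioSetISRFunc(INT_PIN, RISING_EDGE, 0, onChange);  // kernel edge interrupt
    for (;;) gpioDelay(1000000);   // main thread stays free for other work
}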

Multithreading with very low latencies for calls between threads

I'm starting on new robot control software for our robotic system, which has to read values from multiple independent sensors and control the motors accordingly.
The software will run on an i5 PC with PREEMPT_RT Ubuntu. Each sensor device comes with an SDK, which I want to run in a separate thread. As soon as a thread gets new values from its sensor (up to 50 doubles at once from one sensor), it should update those values in a superior control thread. The update rate depends on the sensor, but can be as fast as 1 kHz. As soon as the "main sensor" has new values and has sent them to the control thread, the main sensor's thread should trigger the control loop in the control thread (with a non-blocking call). The control thread should then compute new values for the motors from the currently stored sensor values and transmit them to the motors. The main sensor that triggers the control loop also delivers new data at a 1 kHz rate, so the control loop must run this often.
I'm unsure how to approach this. Do you think the thread functionality from C++11 can already solve this, or should I use something like pthreads or Boost?
The main requirements are very low latency (~10 µs) for sending data (up to 50 doubles) from one thread to another, and the ability to trigger a function (non-blocking) in another thread.
As soon as the sensor threads have sent the current data to the control thread, they should go back to monitoring the hardware for new sensor values and retrieving them. Some of the sensor threads perform extra computation and filtering on the sensor data; that's why I want them in separate threads, taking advantage of the quad-core processor.
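One common pattern that fits these numbers is a lock-free single-producer/single-consumer ring that the control thread busy-polls; a sketch under those assumptions (names and capacity are illustrative, and C++11 std::atomic is enough to express it):

#include <array>
#include <atomic>
#include <cstddef>

// Up to 50 doubles per update, as in the question.
struct Frame { std::array<double, 50> values{}; };

// Minimal single-producer/single-consumer ring: the sensor thread only
// calls push(), the control thread only calls pop(), and neither ever
// blocks on the other.
class SpscRing {
    static constexpr std::size_t N = 8;                 // capacity, power of two
    std::array<Frame, N> buf;
    std::atomic<std::size_t> head{0}, tail{0};
public:
    bool push(const Frame& f) {                         // sensor thread only
        const auto h = head.load(std::memory_order_relaxed);
        if (h - tail.load(std::memory_order_acquire) == N)
            return false;                               // full: drop or handle
        buf[h % N] = f;
        head.store(h + 1, std::memory_order_release);   // publish = "trigger"
        return true;
    }
    bool pop(Frame& f) {                                // control thread only
        const auto t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire))
            return false;                               // nothing new yet
        f = buf[t % N];
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
};

// Control loop: busy-poll for the lowest wakeup latency (burns one core).
// for (Frame f;;) { if (ring.pop(f)) runControlLoop(f); }

Whether busy-polling is acceptable depends on whether a core can be dedicated to the control thread; otherwise a condition variable trades some latency for idle CPUs.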

Adapting a program from single-core to multicore

I am considering a programming project. It will run under Ubuntu or another Linux OS on a small board with a quad-core x86 N-series Pentium. The software generates 8 fast signals: square-wave pulse trains for stepper-motor motion control of 4 axes. The step signals are 50-100 kHz maximum, but usually slower. I want to avoid jitter in these stepping signals (call it good fidelity), so around 1-2 µs for each thread loop cycle would be a nice target. The program does other kinds of tasks too, like hard drive reads/writes, Ethernet, continuous updates of the graphics display, and the keyboard. Existing single-core programs just cannot process motion signals with this kind of timing and require external hardware/techniques to achieve it.
I have been reading other posts here, for example about a thread running continuously on a selected core. The terminology in these posts is loose; I'm not sure what is really meant. "Continuous" might mean testing every minute, or what?
So, I might be wordy, but I hope it will be clear. The considered program has all its threads, routines, memory, and shared memory included. I am not considering that this program launches another program or service. The other threads are written in this program and launched when the program starts up. I will call the signal-generating thread the FAST THREAD.
The FAST THREAD is to be launched onto an otherwise "free" core. It needs to be the only thread that runs on that core. Hopefully, the OS scheduler on that core can be "turned off", so that it does not even interrupt on that core to decide which thread runs next. Looking at the processor manual, each core has a counter/timer chip (CTC). Is it possible, then, to use it to provide a continuous train of interrupts into my "locked-in" FAST THREAD for timing purposes? This is in the range of about 1-2 µs. If not, then just reading one channel of that CTC could provide software sync. This fast thread will therefore see (experience) no delays from the interrupts issued on the other cores and the associated multicore fabric. The FAST THREAD, once running, will continue to run until the program closes. This could be hours.
Input data to drive the FAST THREAD will be common shared memory defined in the program. There are also hardware signals for motion limits (from GPIOs or an SDI port). If any of them goes TRUE, that forces a programmed halt of all motion. It does not need a 1-2 µs response; it could go to a slower motion loop.
Ah, the output:
Some motion data is written back to the shared memory (assigned for this purpose), like the current location and the current loop number.
Some signals need to be output (the 8 outputs). There are numerous free GPIOs. I am not sure of the path taken to get the signaled GPIO pin to change its output: a system call to Linux initiates the pin-change event. There is also an SDI port available, running at up to a 25 MHz clock. It seems these ports (GPIO, UART, USB, SDI) exist in fabric that is not on any specific core. I am not sure of the latency from the issuance of these signals in the program until the associated external pin actually presents the signal. In the fast thread, even 10 µs would be OK, if it were always the same latency! I know that will not be so; there will be jitter. I need to think about this spec.
There will possibly be a second dedicated core (similar to the above) for slower motion planning. That leaves two cores for everything else. Since everything else (SATA, video screen, keyboard, ...) already works on a single core, the remaining two cores should be plenty.
At the close of the program, the FAST THREAD returns the CTC and any other device on its core back to "as it was" and re-enables the OS components on that core to their normal operation. End of thread.
Concluding: I have described the overall program so that you understand what I want to do with this FAST THREAD, how responsive it needs to be, and that it needs to be undisturbed! This processor runs in the 1.5-2.0 GHz range. It can certainly do the repeated calculations in the required time frame.
DESIRED: I do not know the system calls that would allow me to use a selected x86 core in this way. Any pointers would be helpful, as would any manual or document that describes these calls/procedures.
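A minimal sketch of the Linux calls usually combined for this: reserve the core on the kernel command line (e.g. isolcpus=3, optionally nohz_full=3 to silence most of the scheduler tick), then pin and elevate the thread. The priority value and core number here are illustrative:

// Sketch: pin a thread to core 3 (reserved with isolcpus=3 on the kernel
// command line) and give it SCHED_FIFO real-time priority.
// Needs root (or CAP_SYS_NICE); compile with: g++ -O2 -pthread pin.cpp
#include <pthread.h>
#include <sched.h>
#include <sys/mman.h>

void* fastThread(void*)
{
    for (;;) {
        // generate the 8 step signals here; avoid syscalls in the hot path
    }
}

int main()
{
    mlockall(MCL_CURRENT | MCL_FUTURE);       // no page faults mid-loop

    pthread_t t;
    pthread_create(&t, nullptr, fastThread, nullptr);

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);                         // the isolated core
    pthread_setaffinity_np(t, sizeof set, &set);

    sched_param sp{};
    sp.sched_priority = 80;                   // illustrative; max is 99
    pthread_setschedparam(t, SCHED_FIFO, &sp);

    pthread_join(t, nullptr);                 // runs until the program closes
}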
Can this use of a core also be done in Windows (7, 10)?
Thanks for reading and any pointers you have.
Stan

Identify the reason for a 200 ms freeze in a time-critical loop

New description of the problem:
I currently run our new data acquisition software in a test environment. The software has two main threads. One contains a fast loop which communicates with the hardware and pushes the data into a double buffer. Every few seconds, this loop freezes for 200 ms. I did several tests, but none of them let me figure out what the software is waiting for. Since the software is rather complex, and the test environment could also interfere with it, I need a tool/technique to find out what the recorder thread is waiting for while it is blocked for 200 ms. What tool would be useful to achieve this?
Original question:
In our data acquisition software, we have two threads that provide the main functionality. One thread collects the data from the different sensors and a second thread saves the data to disk in big blocks. The data is collected in a double buffer; it typically contains 100000 bytes per item and collects up to 300 items per second. One buffer is used for writing by the data collection thread and one buffer is used for reading and saving to disk by the second thread. When all the data has been read, the buffers are switched. This switch of the buffers seems to be a major performance problem: each time the buffers switch, the data collection thread blocks for about 200 ms, which is far too long. However, once in a while the switch is much faster, taking nearly no time at all. (Test PC: Windows 7 64-bit, i5-4570 CPU @ 3.2 GHz (4 cores), 16 GB DDR3 (800 MHz).)
My guess is that the performance problem is linked to the data being exchanged between cores; only when the threads happen to run on the same core would the exchange be much faster. I thought about setting the thread affinity mask to force both threads onto the same core, but that also means losing real parallelism. Another idea was to let the buffers collect more data before switching, but this dramatically reduces the update frequency of the data display, since it has to wait for the buffer switch before it can access the new data.
My question is: Is there a technique to move data from one thread to another which does not disturb the collection thread?
Edit: The double buffer is implemented as two std::vectors which are used as ring buffers. A bool (int) variable tells which buffer is the active write buffer. Each time the double buffer is accessed, this bool is checked to know which vector should be used; switching the buffers just means toggling the bool. Of course, during the toggle all reading and writing is blocked by a mutex, but I don't think that this mutex could possibly block for 200 ms. By the way, the 200 ms is very reproducible for each switch event.
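For reference, the structure described in this edit is roughly the following condensed sketch (not the production code; the real vectors are used as ring buffers and reads are locked as well):

#include <cstddef>
#include <mutex>
#include <vector>

// Condensed sketch of the double buffer described above.
struct DoubleBuffer {
    std::vector<char> buf[2];
    bool writeIdx = false;             // which vector the collector writes to
    std::mutex m;

    void write(const char* data, std::size_t n) {   // collection thread
        std::lock_guard<std::mutex> lock(m);
        buf[writeIdx].insert(buf[writeIdx].end(), data, data + n);
    }
    void swapAndRead(std::vector<char>& out) {      // saving thread
        std::lock_guard<std::mutex> lock(m);
        writeIdx = !writeIdx;          // the 200 ms stall happens around here,
        out.swap(buf[!writeIdx]);      // although the toggle itself is trivial
    }
};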
Locking and releasing a mutex just to flip one bool variable will not take 200 ms.
The main problem is probably that the two threads are blocking each other in some way.
This kind of blocking is called lock contention. Basically, it occurs whenever one process or thread attempts to acquire a lock held by another process or thread. Instead of parallelism, you have two threads waiting for each other to finish their part of the work, with a similar effect to a single-threaded approach.
For further reading I recommend this article, which describes lock contention in more detail.
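A toy illustration of the effect (my own example, not from the article): if the lock is held for the whole work item, two threads degrade to serial execution plus handoff overhead:

#include <chrono>
#include <mutex>
#include <thread>

std::mutex m;

// Each work item is done entirely under the lock, so the second thread
// can only wait: the "parallel" run degenerates to a serial one plus
// the cost of all the lock handoffs.
void worker()
{
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> lock(m);
        std::this_thread::sleep_for(std::chrono::microseconds(100));
    }
}

int main()
{
    std::thread a(worker), b(worker);
    a.join();
    b.join();   // total time ~ the sum of both threads' work, not the maximum
}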
Since you are running on Windows, maybe you use Visual Studio? If yes, I would resort to the VS profiler, which is quite good (IMHO) in such cases, as long as you don't need to inspect data/instruction caches (then Intel's VTune is the natural choice). From my experience, VS is good enough to catch contention problems as well as CPU bottlenecks. You can run it directly from VS or as a standalone tool; you don't need VS installed on your test machine, you can just copy the tool and run it locally.
VSPerfCmd.exe /start:SAMPLE /attach:12345 /output:samples - attach to process 12345 and gather CPU sampling info
VSPerfCmd.exe /detach:12345 - detach from process
VSPerfCmd.exe /shutdown - shutdown the profiler, the samples.vsp is written (see first line)
Then you can open the file and inspect it in Visual Studio. If you don't see anything making your CPU busy, switch to contention profiling: just change the "start" argument from "SAMPLE" to "CONCURRENCY".
The tool is located under %YourVSInstallDir%\Team Tools\Performance Tools\; AFAIR it has been available since VS2010.
Good luck
After discussing the problem in chat, it turned out that the Windows Performance Analyzer is a suitable tool to use. The software is part of the Windows SDK and can be opened using the command wprui in a command window. (Alois Kraus posted this useful link in the chat: http://geekswithblogs.net/akraus1/archive/2014/04/30/156156.aspx.) The following steps revealed what the software had been waiting for:
Record information with WPR using the default settings and load the saved file in WPA.
Identify the relevant threads. In this case, the recording thread and the saving thread obviously had the highest CPU load. The saving thread could be identified easily: since it saves data to disk, it is the one with file access (look at Memory->Hard Faults).
Check out Computation->CPU Usage (Precise) and select Utilization by Process, Thread. Select the process you are analysing. It is best to display the columns in the order: NewProcess, ReadyingProcess, ReadyingThreadId, NewThreadID, [yellow bar], Ready (µs) sum, Wait (µs) sum, Count, ...
Under ReadyingProcess, I looked for the process with the largest Wait (µs), since I expected this one to be responsible for the delays.
Under ReadyingThreadID, I checked each line referring to the thread with the delays in the NewThreadId column. After a short search, I found a thread that showed frequent waits of about 100 ms, which always showed up in pairs. In the ReadyingThreadID column, I could read off the id of the thread the recording loop was waiting for.
According to its CPU usage, this thread did basically nothing. In our particular case, this led me to the assumption that the serial port I/O commands could cause this wait. After deactivating them, the delay was gone. The important discovery was that the 200 ms delay was in fact composed of two 100 ms delays.
Further analysis showed that the fetch-data command sent over the virtual serial port pair sometimes gets lost. This might be linked to the very high CPU load in the data saving and compression loop. If the fetch command gets lost, no data is received, and both the first and the second attempt to receive the data run into their 100 ms timeout.

How to test Interrupt Latency?

Windows Embedded Compact 7.
Is there a way to test interrupt latency time from user space?
Are there any tools provided as part of Platform Builder?
I also saw a program called Intrtime.exe, but no examples of how to use it.
How does one test the interrupt latency time?
A reference for Intrtime.exe, but how do I use it?
http://www.ece.ufrgs.br/~cpereira/temporeal_pos/www/WindowsCE2RT.htm
EDIT
Also found:
ILTiming.exe Real-Time Measurement Tool (Compact 2013)
http://msdn.microsoft.com/en-us/library/ee483144.aspx
This really is a test that requires hardware, and there are a couple of "latencies" you might measure. One is the time from the interrupt signal to when the driver's ISR reacts, and the second is from when the interrupt occurs to when an IST reacts.
I did this back in the CE 3.0/CE 4.0 days by attaching a signal generator to an interruptible input and then having the ISR pulse a second line and the IST pulse a third line when they received the interrupt. I hooked a scope up to the input and the outputs and used it to measure the time between the input signal and the output signals, giving not just latency but also jitter. You could easily add a 4th line for CE 7 so you could check an IST in user space and an IST in kernel space. I'd definitely be interested to see the results.
I don't think you can effectively measure this with software running on the platform, as you run into the problem of the measuring code affecting the results. You're also talking about times way, way below the system tick resolution, so the scheduler is going to be problematic as well. CeLog might be able to give you an idea of these times, but getting it set up and running is probably more work than just hooking up a scope.
What is usually meant by interrupt latency is the time between an interrupt source asserting the interrupt line and a thread (sometimes in user-space) being scheduled and then executing as a result.
Unless your CPU has some accurate way of time-stamping interrupt events as they arrive at the CPU (rather than when an ISR runs), the only truly accurate measurement is one done externally, by measuring the time between the interrupt line being asserted and some observable signal that the thread responding to the interrupt can control. A DSO or logic analyser is usually used for this purpose.
Software techniques usually rely on storing an accurate time-stamp at the earliest opportunity in an ISR. If you're certain the time between the interrupt line becoming asserted and the ISR running is negligible, this might be valid. If, on the other hand, interrupts are disabled to control concurrency, or interrupts are nested, you probably want to be measuring that as well.
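For the software-timestamp variant on Compact 7, the IST side might look roughly like this sketch (the SYSINTR id and the output-pin helper are placeholders for BSP specifics, this must run where the interrupt APIs are callable, and the ISR-side timestamp would additionally need an installable ISR):

// Sketch of an IST that timestamps each wakeup and raises a
// scope-visible marker, for comparison against the external signal.
#include <windows.h>

void istLoop(DWORD sysIntr)   // sysIntr: the BSP's SYSINTR id (placeholder)
{
    HANDLE evt = CreateEvent(NULL, FALSE, FALSE, NULL);
    InterruptInitialize(sysIntr, evt, NULL, 0);   // bind interrupt to event

    LARGE_INTEGER t;
    for (;;) {
        WaitForSingleObject(evt, INFINITE);       // woken once per interrupt
        QueryPerformanceCounter(&t);              // software-side timestamp
        // SetOutputPin(TRUE);                    // placeholder: scope marker
        InterruptDone(sysIntr);                   // re-arm the interrupt
        // SetOutputPin(FALSE);
    }
}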