I'm starting on new robot control software for our robotic system, which has to read values from multiple independent sensors and control the motors accordingly.
The software will be running on an i5 PC with PREEMPT_RT Ubuntu. Each sensor device comes with an SDK which I want to run in a separate thread. As soon as they get new values from their sensors (up to 50 doubles at once from one sensor), they should update those values in a superior control thread. The update rate depends on the sensor, but will be as fast as 1 kHz. As soon as the "main sensor" has new values and has sent them to the control thread, the main sensor's thread should trigger the control loop in the control thread (with a non-blocking call). The control thread should then compute new values for the motors from the currently stored sensor values and transmit them to the motors. The main sensor that triggers the control loop also receives new data at a 1 kHz rate, so the control loop must run this often.
I'm unsure how to approach this. Do you think the thread facilities in C++11 can already solve this? Or should I use something like pthreads or Boost?
The main requirements are super low latency (~ 10 µs) to send data (up to 50 doubles) from one thread to another, and the ability to trigger a function (non-blocking) in another thread.
As soon as the sensor threads have sent the current data to the control thread, they should continue monitoring the hardware for new sensor values and retrieve them. Some of the sensor threads perform extra computations and filtering on the sensor data; that's why I want them to run in separate threads, taking advantage of the quad-core processor.
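To make the requirement concrete, here is a minimal C++11 sketch of the handoff I have in mind (all names are illustrative, not actual code): the sensor thread copies its values under a mutex and triggers the control loop with notify_one(), which returns immediately, so the sensor thread is never blocked by the control computation.

    #include <array>
    #include <condition_variable>
    #include <mutex>

    // Shared state between one sensor thread and the control thread.
    struct SensorChannel {
        std::mutex m;
        std::condition_variable cv;
        std::array<double, 50> values{};
        bool fresh = false;
    };

    // Sensor thread: store new values and trigger the control loop.
    // notify_one() does not block on the control computation.
    void on_new_sensor_data(SensorChannel& ch, const std::array<double, 50>& v) {
        {
            std::lock_guard<std::mutex> lock(ch.m);
            ch.values = v;
            ch.fresh = true;
        }
        ch.cv.notify_one();
    }

    // Control thread: wake on the trigger, snapshot the values, run the loop.
    void control_loop(SensorChannel& ch) {
        for (;;) {
            std::array<double, 50> snapshot;
            {
                std::unique_lock<std::mutex> lock(ch.m);
                ch.cv.wait(lock, [&] { return ch.fresh; });
                ch.fresh = false;
                snapshot = ch.values;
            }
            // compute motor commands from `snapshot` and transmit them here
        }
    }

On Linux, std::thread is a thin wrapper over pthreads, so the choice between them is mostly about API convenience; the wake-up latency is determined by the scheduler either way.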
Related
I am considering a programming project. It will run under Ubuntu or another Linux OS on a small board with a quad-core x86 (N-series Pentium). The software generates 8 fast signals: square-wave pulse trains for stepper motor motion control of 4 axes. The step signals are 50-100 kHz maximum, but usually slower. I want to avoid jitter in these stepping signals (call it good fidelity), so around 1-2 µs for each thread loop cycle would be a nice target. The program does other kinds of tasks too, like hard drive reads/writes, Ethernet, continuous updates of the graphics display, and the keyboard. Existing single-core programs just cannot process motion signals with this kind of timing and require external hardware/techniques to achieve it.
I have been reading other posts here, for example about a thread running continuously on a selected core. The exact wording in these posts is loose; I am not sure what is really meant. "Continuous" might mean testing every minute, or something else entirely.
So I might be wordy, but I hope this will be clear. The program under consideration contains all the threads, routines, memory, and shared memory itself. I am not considering that this program launches another program or service. The other threads are written in this program and launched when the program starts up. I will call this signal-generating thread the FAST THREAD.
The FAST THREAD is to be launched on an otherwise "free" core. It needs to be the only thread that runs on that core. Hopefully, the OS scheduler component on that core can be "turned off", so that it does not even interrupt on that core to decide what thread runs next. Looking at the processor manual, each core has a counter/timer chip (CTC). Is it possible to use it to provide a continuous train of interrupts into my "locked-in" FAST THREAD for timing purposes? This is in the range of about 1-2 µs. If not, then just reading one channel of that CTC could provide software sync. This fast thread will therefore see (experience) no delays from the interrupts issued on the other cores and the associated multicore fabric. The FAST THREAD, once running, will continue to run until the program closes. This could be hours.
Input data to drive this FAST THREAD will be common shared memory defined in the program. There are also hardware signals for motion limits (from GPIOs or the SDI port). If any go TRUE, that forces a programmed halt of all motion. This does not need a 1-2 µs response; it could go to a slower motion loop.
Ah, the output:
Some motion data is written back to the shared memory (assigned for this purpose), like the current location and the current loop number.
Some signals need to be output (the 8 outputs). There are numerous free GPIOs. I am not sure of the path taken to get the signaled GPIO pin to change the output; the system call to Linux initiates the pin-change event. There is also an SDI port available, running at up to a 25 MHz clock. It seems these ports (GPIO, UART, USB, SDI) exist in the fabric, not on any specific core. I am not sure of the latency from the issuance of these signals in the program until the associated external pin actually presents that signal. In the fast thread, even 10 µs would be OK, if it were always the same latency! I know that will not be so; there will be jitter. I need to think about this spec.
There will possibly be a second dedicated core (similar to the above) for slower motion planning. That leaves two cores for everything else. Since the everything-else items (SATA, video screen, keyboard, ...) already work on a single core, the remaining two cores should be plenty.
At the close of the program, the FAST THREAD returns the CTC and any other device on its core back to "as it was" and re-enables the OS components on this core to their normal operation. End of thread.
Concluding: I have described the overall program so that you understand what I want to do with this FAST THREAD running, how responsive it needs to be, and that it needs to be undisturbed!! This processor runs in the 1.5-2.0 GHz range. It certainly can do the repeated calculations in the required time frame.
DESIRED: I do not know the system calls that would allow me to use a selected x86 core in this way. Any pointers would be helpful, as would any manual or document that describes these calls/procedures.
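From what I have gathered so far, the user-space side would involve calls like the following (a sketch, assuming Linux with glibc; the core number and priority are placeholders, and fully excluding the OS from a core additionally requires boot parameters such as isolcpus=, which no system call provides):

    // Sketch: pin the calling thread to one core, give it a real-time
    // priority, and lock memory so page faults cannot stall the fast loop.
    // Needs root (or CAP_SYS_NICE) for the SCHED_FIFO part.
    // g++ on Linux defines _GNU_SOURCE, which pthread_setaffinity_np needs.
    #include <pthread.h>
    #include <sched.h>
    #include <sys/mman.h>

    bool claim_core(int core_id, int rt_priority) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core_id, &set);                 // run only on this core
        if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
            return false;

        sched_param sp{};
        sp.sched_priority = rt_priority;        // e.g. 80, out of 1..99
        if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) != 0)
            return false;

        return mlockall(MCL_CURRENT | MCL_FUTURE) == 0;
    }

The man pages for sched(7), pthread_setaffinity_np(3), and mlockall(2) are the canonical references for these calls.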
Can this use of a core also be done in Windows (7, 10)?
Thanks for reading and any pointers you have.
Stan
I'm currently dealing with a Qt GUI I have to set up for a measurement device. The device works with a frame grabber card which grabs images from a line camera really fast. My image processing, which is not that complex, takes 0.2 ms to complete, and it takes about 40 ms to display the signal and the processing result with QCustomPlot, which is totally okay.
Besides the GUI output, the processed signal will also be output as an analog signal by an NI DAQ device.
My problem is that I have to update the analog signal at a constant frequency and still update the GUI from time to time.
My current approach or idea was to create a data pool thread and two worker threads. One worker thread receives the data from the frame grabber, processes it, and updates the data pool. The second worker thread updates the analog channel of the NI DAQ at a frequency of about 2-5 kHz, given by a clock in the NI DAQ device.
The GUI thread would then read the data pool from time to time to update the signal display at a rate of about 20-30 Hz.
I wanted to use the Qt thread management and the signal-and-slot mechanism because of their "simplicity" and because I have already worked with threads in combination with Qt and its thread classes.
Is there maybe a better way? Does somebody have an idea or any suggestion? Is it possible that I will get problems with the timing of the threads?
Furthermore, is it possible to assign one thread to a single CPU core on a multi-core CPU, so that this core processes only this single thread?
Is there maybe a better way? Does somebody have an idea or any suggestion? Is it possible that I will get problems with the timing of the threads?
The signal/slot mechanism is fine; try it, and if you run into performance issues you can still look for another approach. I used the signal/slot mechanism for real-time video processing with QAbstractVideoSurface and a media player. It worked for me.
Furthermore, is it possible to assign one thread to a single CPU core on a multi-core CPU, so that this core processes only this single thread?
Why would you do that? The operating system or the threading library has a scheduler which takes care of such things. As long as you have no good reason to do this yourself, you should just use the existing mechanism.
I would try it with three threads: 1) UI thread, 2) grab-and-process thread, 3) analogue output thread.
The trick is to use a triple buffer to connect the output of grab-and-process to the input of the analogue output.
Say, at moment t, thread (2) finishes processing frame[(t+0)%3]; it changes its output destination to frame[(t+1)%3] immediately and notifies thread (3), which is looping through the data in frame[(t+2)%3], to switch to frame[(t+0)%3] when appropriate.
I used this technique when I was working on an image processing project that had a 10 fps processing frame rate and a 60 fps NTSC output frame rate. To eliminate the tearing effect, a circular buffer with three buffers is the minimum.
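In outline, such a triple buffer can look like this in C++ (a sketch, not the code from that project): the producer and the consumer each own one slot outright, and a single atomic word carries the third slot between them together with a freshness flag, so neither side ever waits.

    #include <array>
    #include <atomic>
    #include <vector>

    // Triple buffer: back_ is producer-owned, front_ is consumer-owned, and
    // state_ holds the index of the in-flight slot plus a "fresh frame" bit.
    class TripleBuffer {
        static constexpr int kFresh = 4;   // flag bit above the 2-bit index

    public:
        explicit TripleBuffer(std::size_t frame_size)
            : buffers_{std::vector<double>(frame_size),
                       std::vector<double>(frame_size),
                       std::vector<double>(frame_size)} {}

        std::vector<double>& writeBuffer() { return buffers_[back_]; }   // thread (2)
        std::vector<double>& readBuffer()  { return buffers_[front_]; }  // thread (3)

        // Thread (2): hand over the finished frame, take a free slot back.
        void publish() {
            int old = state_.exchange(back_ | kFresh, std::memory_order_acq_rel);
            back_ = old & 3;
        }

        // Thread (3): grab the newest frame if one arrived since the last
        // call; otherwise keep outputting the current front buffer.
        bool fetch() {
            if (!(state_.load(std::memory_order_acquire) & kFresh)) return false;
            int old = state_.exchange(front_, std::memory_order_acq_rel);
            front_ = old & 3;
            return true;
        }

    private:
        std::array<std::vector<double>, 3> buffers_;
        int back_ = 0;                 // only touched by the producer
        int front_ = 1;                // only touched by the consumer
        std::atomic<int> state_{2};    // slot in flight between the two threads
    };

Because the freshness bit travels inside the same atomic word as the slot index, the consumer can never pick up a stale slot, and neither thread ever blocks.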
New description of the problem:
I am currently running our new data acquisition software in a test environment. The software has two main threads. One contains a fast loop which communicates with the hardware and pushes the data into a dual buffer. Every few seconds, this loop freezes for 200 ms. I did several tests, but none of them let me figure out what the software is waiting for. Since the software is rather complex, and the test environment could also interfere with the software, I need a tool/technique to find out what the recorder thread is waiting for while it is blocked for 200 ms. What tool would be useful to achieve this?
Original question:
In our data acquisition software, we have two threads that provide the main functionality. One thread is responsible for collecting the data from the different sensors, and a second thread saves the data to disc in big blocks. The data is collected in a double buffer. It typically contains 100000 bytes per item and collects up to 300 items per second. One buffer is used to write to in the data collection thread, and one buffer is used to read the data and save it to disc in the second thread. Once all the data has been read, the buffers are switched. This switch of the buffers seems to be a major performance problem: each time the buffers switch, the data collection thread blocks for about 200 ms, which is far too long. However, once in a while the switch is much faster, taking nearly no time at all. (Test PC: Windows 7 64-bit, i5-4570 CPU @ 3.2 GHz (4 cores), 16 GB DDR3 (800 MHz).)
My guess is that the performance problem is linked to the data being exchanged between cores. Only when the threads happen to run on the same core would the exchange be much faster. I thought about setting the thread affinity mask to force both threads onto the same core, but that also means I lose real parallelism. Another idea was to let the buffers collect more data before switching, but this dramatically reduces the update frequency of the data display, since it has to wait for the buffer switch before it can access the new data.
My question is: Is there a technique to move data from one thread to another which does not disturb the collection thread?
Edit: The double buffer is implemented as two std::vectors which are used as ring buffers. A bool (int) variable is used to tell which buffer is the active write buffer. Each time the double buffer is accessed, this bool value is checked to know which vector should be used. Switching the buffers just means toggling this bool value. Of course, during the toggle all reading and writing is blocked by a mutex, but I don't think that this mutex could possibly block for 200 ms. By the way, the 200 ms is very reproducible for each switch event.
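In simplified form, the structure looks roughly like this (an illustrative reconstruction, not the literal code): the switch is a single bool toggle plus an O(1) vector swap under the mutex, which is why the switch itself should be nearly free.

    #include <cstdint>
    #include <mutex>
    #include <vector>

    // Two item buffers; `writeIndex` selects the one the collector fills.
    struct DoubleBuffer {
        std::vector<std::vector<uint8_t>> buf[2];
        bool writeIndex = false;
        std::mutex m;

        // Collection thread: ~300 items/s, ~100000 bytes each.
        void push(std::vector<uint8_t> item) {
            std::lock_guard<std::mutex> lock(m);
            buf[writeIndex].push_back(std::move(item));
        }

        // Saving thread: flip the flag and steal the filled buffer. The
        // collector is blocked only for the toggle and the O(1) swap.
        std::vector<std::vector<uint8_t>> takeAll() {
            std::vector<std::vector<uint8_t>> out;
            std::lock_guard<std::mutex> lock(m);
            writeIndex = !writeIndex;
            out.swap(buf[!writeIndex]);
            return out;
        }
    };

If even this pattern blocks for 200 ms, the time is almost certainly being spent somewhere other than the switch itself, which matches the analysis below.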
Locking and releasing a mutex just to switch one bool variable will not take 200 ms.
The main problem is probably that the two threads are blocking each other in some way.
This kind of blocking is called lock contention. Basically, it occurs whenever one process or thread attempts to acquire a lock held by another process or thread. Instead of parallelism you have two threads waiting for each other to finish their part of the work, with a similar effect to a single-threaded approach.
For further reading I recommend this article, which describes lock contention in more detail.
Since you are running on Windows, maybe you use Visual Studio? If so, I would resort to the VS profiler, which is quite good (IMHO) in such cases, as long as you don't need to inspect the data/instruction caches (then Intel's VTune is the natural choice). From my experience, VS is good enough to catch contention problems as well as CPU bottlenecks. You can run it directly from VS or as a standalone tool; you don't need VS installed on your test machine, you can just copy the tool and run it locally:
VSPerfCmd.exe /start:SAMPLE /attach:12345 /output:samples - attach to process 12345 and gather CPU sampling info
VSPerfCmd.exe /detach:12345 - detach from process
VSPerfCmd.exe /shutdown - shutdown the profiler, the samples.vsp is written (see first line)
Then you can open the file and inspect it in Visual Studio. If you don't see anything keeping your CPU busy, switch to contention profiling: just change the "start" argument from "SAMPLE" to "CONCURRENCY".
The tool is located under %YourVSInstallDir%\Team Tools\Performance Tools\; AFAIR it has been available since VS2010.
Good luck
After discussing the problem in the chat, it turned out that the Windows Performance Analyzer is a suitable tool to use. The software is part of the Windows SDK and can be opened using the command wprui in a command window. (Alois Kraus posted this useful link in the chat: http://geekswithblogs.net/akraus1/archive/2014/04/30/156156.aspx). The following steps revealed what the software had been waiting on:
Record information with the WPR using the default settings and load the saved file in the WPA.
Identify the relevant threads. In this case, the recording thread and the saving thread obviously had the highest CPU load. The saving thread was easy to identify: since it saves data to disc, it is the one with file access. (Look at Memory->Hard Faults.)
Check out Computation->CPU usage (Precise) and select Utilization by Process, Thread. Select the process you are analysing. It is best to display the columns in the order: NewProcess, ReadyingProcess, ReadyingThreadId, NewThreadID, [yellow bar], Ready (µs) sum, Wait (µs) sum, Count, ...
Under ReadyingProcess, I looked for the process with the largest Wait (µs), since I expected this one to be responsible for the delays.
Under ReadyingThreadID, I checked each line referring to the thread with the delays in the NewThreadId column. After a short search, I found a thread that showed frequent waits of about 100 ms, which always showed up in pairs. In the column ReadyingThreadID, I was able to read the ID of the thread the recording loop was waiting for.
According to its CPU usage, this thread did basically nothing. In our special case, this led me to the assumption that the serial-port I/O commands could cause this wait. After deactivating them, the delay was gone. The important discovery was that the 200 ms delay was in fact composed of two 100 ms delays.
Further analysis showed that the fetch-data command sent via the virtual serial-port pair sometimes gets lost. This might be linked to the very high CPU load in the data saving and compression loop. If the fetch command gets lost, no data is received, and both the first and the second attempt to receive the data run into their 100 ms timeout.
I have an embedded Linux platform (the Beagleboard, running Angstrom Linux) with two devices connected:
a Laser range finder (Hokuyo UTM 30) connected via USB
a custom external board connected via SPI
We have written a Linux kernel module which is responsible for the SPI data transfer. It has an IRQ handler in which spi_async is called, which in turn causes an async callback method to be invoked.
My C++ application consists of three threads:
a main thread for data processing
a laser polling thread
an SPI polling thread
I am experiencing problems which seem to be caused by how the modules described above interact.
When I switch off the USB device (laser range finder), I receive all SPI messages correctly (1 message every 3 ms; message length divided by data rate is <1 ms), independent of thread scheduling.
When I switch on the USB device and run my program with normal thread scheduling (SCHED_OTHER, priority 0, no nice level set), about 1% of the messages are "lost" because the callback method of spi_async is still running when the next IRQ occurs. (I could handle this case differently in order not to lose the messages, so this is not a big issue.)
When the USB device is turned on and I run the program with SCHED_RR and
priority = 10 for main thread
priority = 10 for SPI reading thread
priority = 4 for USB/Laser polling thread
then I am losing 40% of the messages because the IRQ is triggered again before the SPI callback method is called! (I could still maybe find a workaround, but the problem is that I need fast response times, which can no longer be achieved in this case.) I need to use the thread scheduling and the laser device, so I am looking for a way to solve this case.
Question 1:
My assumption was that IRQ handlers and the callbacks triggered by spi_async in kernel space have a higher priority than any thread running in user space (no matter whether SCHED_RR or SCHED_OTHER). This would mean that switching to SCHED_RR in my application shouldn't slow down the SPI transfer, but this seems to be very wrong. Is it?
Question 2:
How can I determine what happens here? Which debugging aids exist? (Or maybe you don't need any further information?) The main question for me is: why do I experience these problems only when the laser device is turned on? Could the USB driver consume that much time?
----- EDIT:
I have made the following observation:
The spi_async callback calls wake_up_interruptible(&mydata->readq); (with wait_queue_head_t readq;). From user space (my app), I call a function which results in poll_wait(file, &mydata->readq, wait); when the poll returns, user space calls read().
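For reference, the user-space side of this pattern looks roughly like the following sketch (the device name and buffer size are made up):

    #include <fcntl.h>
    #include <poll.h>
    #include <unistd.h>

    void spi_polling_loop() {
        int fd = open("/dev/myspi0", O_RDONLY);   // placeholder device node
        if (fd < 0) return;

        pollfd pfd{fd, POLLIN, 0};
        char buf[256];
        for (;;) {
            // Sleeps in the module's poll_wait() until the kernel callback
            // runs wake_up_interruptible(&mydata->readq).
            if (poll(&pfd, 1, -1) <= 0) break;
            if (pfd.revents & POLLIN) {
                ssize_t n = read(fd, buf, sizeof(buf));  // enters the module's read()
                if (n <= 0) break;
                // hand the message to the processing thread here
            }
        }
        close(fd);
    }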
When my application runs with SCHED_OTHER I can see that the callback method first finishes before the read() method in my kernel module is entered.
When my application runs with SCHED_RR, read() is entered before the callback has exited.
This seems to prove that the priority of the user-space threads is higher than the priority of the callback's context. Is there any way to change this behaviour and still have SCHED_RR for my application's threads?
Not all kernel threads have an RT priority. Imagine a thread that periodically wakes up to do some background work. You don't want this thread to preempt your RT thread. So I guess your first assumption is wrong.
Based on your other questions:
your main processing loop receives SPI data through a queue
the spi processing thread feeds the main processing queue
It seems your main processing thread gets in the way of the SPI driver thread responsible for the SPI data transfer.
Here is what happens:
an IRQ is fired,
spi_async is called, which means a data transfer is queued that will be picked up by a thread created by the SPI master driver,
the SPI master thread competes with your main processing thread and the laser thread, but this kernel thread has no RT priority, so it loses every time one of the RR threads is running.
What you can do is go back to normal scheduling while playing with the various CONFIG_PREEMPT_ options, or mess with the SPI master driver to ensure that any delayed work is queued with enough priority, or even not queued at all.
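A third option, without touching the driver, is to raise the SPI master kernel thread's priority from user space once you have found its PID (for example with ps -eLo pid,rtprio,comm | grep spi). A hypothetical sketch:

    #include <sched.h>
    #include <sys/types.h>

    // Hypothetical: give the spi master kernel thread (PID found beforehand)
    // a SCHED_FIFO priority above the application's SCHED_RR threads.
    // Needs root; equivalent to `chrt -f -p 50 <pid>`.
    bool boost_spi_thread(pid_t spi_thread_pid) {
        sched_param sp{};
        sp.sched_priority = 50;   // the app threads run at 10 and 4
        return sched_setscheduler(spi_thread_pid, SCHED_FIFO, &sp) == 0;
    }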
I am new to multithreading and have designed a program that receives data from two microcontrollers measuring various temperatures (ambient and water) and draws the data to the screen. Right now the program is single-threaded and its performance SUCKS A BIG ONE.
I get the basic design approaches to multithreading well enough to create a thread to do a task, but what I don't get is how to have threads perform separate tasks and place their data into a shared data pool. I figured that I need a queue with one consumer and multiple producers (I would like to use std::queue). I have seen some code in the gtkmm threading docs that shows a single consumer/producer queue: they lock the queue object, produce data, signal the sleeping consumer when production is finished, and then the producer sleeps. For my case: do I need to put threads to sleep, would there be data conflicts if I didn't sleep any of the threads, and would sleeping a thread cause a significant data delay? (I need real-time data to be drawn at 30 frames a second.)
How would I go about coding such a queue using the gtkmm/glibmm library?
Here's a suggestion:
1. Have two threads that are responsible for obtaining data and placing it into a buffer. Each thread has its own (circular) buffer.
2. Have a third thread that is responsible for getting data from the buffers and displaying it on the screen.
3. The screen thread sends messages to the data threads requesting some data, then displays the data. The messages help synchronize execution and avoid deadlocks.
4. None of the threads should "wait on single or multiple objects"; they should poll for events.
Think of this scenario using people. One person is delivering water temperature readings. Another person is delivering ambient temperature readings. A third person receives or asks for the data and displays it (on a whiteboard). The objective is to keep everybody operating at maximum efficiency without any collisions. A sketch of this scheme with glibmm follows below.
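A minimal sketch, assuming glibmm (type and function names such as Reading and on_poll are placeholders): each producer pushes readings into its own mutex-protected queue, and the GUI thread polls both queues from a main-loop timer at roughly 30 Hz.

    #include <glibmm/main.h>
    #include <deque>
    #include <mutex>

    struct Reading { double value; /* plus timestamp, sensor id, ... */ };

    // One of these per producer thread, as point 1 above suggests.
    struct SharedQueue {
        std::mutex m;
        std::deque<Reading> q;

        void push(Reading r) {                  // called from a producer thread
            std::lock_guard<std::mutex> lock(m);
            q.push_back(r);
        }
        std::deque<Reading> drain() {           // called from the GUI thread
            std::lock_guard<std::mutex> lock(m);
            std::deque<Reading> out;
            out.swap(q);
            return out;
        }
    };

    SharedQueue water_q, ambient_q;

    // GUI side: poll both queues ~30 times a second; returning true keeps
    // the timer alive. Connected once at startup with:
    //   Glib::signal_timeout().connect(sigc::ptr_fun(&on_poll), 33);
    bool on_poll() {
        for (const Reading& r : water_q.drain())   { /* plot r.value (water) */ }
        for (const Reading& r : ambient_q.drain()) { /* plot r.value (ambient) */ }
        return true;
    }

Because the GUI thread polls on its own schedule, no thread is ever put to sleep waiting for another, which matches point 4 of the suggestion.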
If you're looking for a lock-free implementation of this, you won't find one. When data structures are being written to, something needs to keep two threads from simultaneously updating the data structure and corrupting it.
Is there any reason you can't have each thread collect on its own, with its own structure, and then combine the results at the end?