I currently need to write a Windows 10 application that collects data from four different sensors (encoders and IMUs) over serial. I am writing this in Visual Studio in C++.
Each sensor requires me to write to it once to initialize it and then just read values. I was thinking of having a thread for each sensor (i.e. serial port) and a main thread to gather all of the data and save it in a file.
In other Stack Overflow questions, however, people have suggested having a thread to write and another to read, or a single threaded overlapped approach.
What are the pros/cons of those approaches versus one thread per sensor? And if I do choose the multithreaded path, do I need to use overlapped I/O or can I do non-overlapped?
Windows documentation states that for non-overlapped:
If one thread is blocked waiting for its I/O operation to complete, all other threads that subsequently call a communications API will be blocked until the original operation completes. For instance, if one thread were waiting for a ReadFile function to return, any other thread that issued a WriteFile function would be blocked.
Is this still true if I am using a different serial port for each sensor? Since I need to read these values as fast as possible, I cannot afford blocking.
Any help is appreciated! This is my first time designing a system...
Related
I have a question about communication between threads.
There is a client and a server.
server:
main function: its job is to listen on a port (TCP communication) and receive commands from the client
thread: its job is to stream video smoothly to the client.
client:
main function: transmits commands to the server
thread: displays the video
The TCP/video part works fine.
After the server's main function receives a command from the client, I need to send the command to the video thread and then send an "o.k." back from the video thread to the server's main function.
The problem is sending commands from the server's main function to the video thread and vice versa.
It's enough for the command to be a single variable.
Any ideas?
Thanks!
A pipe is a poor approach for two-way communication; you could use shared memory instead.
With shared memory, both processes have access to a region of memory that each can read and write, such that writes made by one are visible to reads by the other and vice versa.
For more details on shared memory, see http://www.cs.cf.ac.uk/Dave/C/node27.html
If these are threads and the command is a single variable, use an atomic variable. If it is an object, use locking (a trylock inside the video streaming loop and a lock around the command write inside main). If you want the commands queued, use a thread-safe concurrent queue.
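For the single-variable case, a minimal sketch with std::atomic (the command value 42, the ack flag, and the loop structure are made up for illustration):

```cpp
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// Hypothetical command codes; 0 means "no pending command".
std::atomic<int> g_command{0};
std::atomic<bool> g_ack{false};

void video_thread()
{
    for (int frame = 0; frame < 100; ++frame) {
        // ... stream one video frame ...
        int cmd = g_command.exchange(0);   // take the command, if any
        if (cmd != 0) {
            std::cout << "video thread got command " << cmd << '\n';
            g_ack.store(true);             // tell main "o.k."
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main()
{
    std::thread t(video_thread);
    g_command.store(42);                   // main sends a command
    while (!g_ack.load())                  // wait for the "o.k."
        std::this_thread::yield();
    std::cout << "main got o.k.\n";
    t.join();
}
```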
I think, in your case:
I would do it with two wait-free ring buffers from the Boost examples, making two single-producer/single-consumer channels. In one, the producer is the main function and the consumer is the other thread; in the other, vice versa. (It would be like using two Unix pipes, but more efficient.)
A wait-free ring buffer provides a mechanism for relaying objects from one single "producer" thread to one single "consumer" thread without any locks. The operations on this data structure are "wait-free" which means that each operation finishes within a constant number of steps. This makes this data structure suitable for use in hard real-time systems or for communication with interrupt/signal handlers.
Wait-free ring buffer
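For illustration, a minimal sketch with Boost.Lockfree's single-producer/single-consumer queue, showing one direction only (the int command type and the "quit" value -1 are assumptions):

```cpp
#include <boost/lockfree/spsc_queue.hpp>
#include <iostream>
#include <thread>

// One direction: main -> video thread. A second queue would carry replies back.
boost::lockfree::spsc_queue<int, boost::lockfree::capacity<64>> commands;

int main()
{
    std::thread video([] {
        int cmd;
        while (true) {
            if (commands.pop(cmd)) {       // wait-free pop; returns false if empty
                std::cout << "got command " << cmd << '\n';
                if (cmd == -1) break;      // hypothetical "quit" command
            }
            // ... stream a video frame (a real loop does work between polls) ...
        }
    });

    commands.push(7);                      // wait-free push from the producer
    commands.push(-1);
    video.join();
}
```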
But considering that I am not aware of your full situation, there might be better approaches, maybe even a redesign.
My Linux C++ application periodically reads sensor data. Readout is done by a simple file I/O operation (the OS writes to a file, and the application reads from this file).
Some information about my platform:
I have a single-core processor with hyper-threading
sensor data update frequency is 1 second
application GUI runs in main thread and shouldn't be blocked
I considered two approaches for sensor data read out:
a timer running in the main application thread
a separate thread with an infinite loop that reads the sensor data and then sleeps
Which approach makes more sense, and are there any other alternatives? What are the costs of each solution (e.g. blocking of the main thread in the first approach, or context switching in the second)?
I don't know anything about your application or the hardware, but here are a few things to consider:
If you use a thread, you will have to create a communication channel of some sort to tell the main thread that data has been updated. Usually this would be a pipe(), as signals are inherently unreliable and condition locks don't work with I/O multiplexing (i.e. select()/poll()).
Can you get the entire set of data without blocking? If so, then just reading it in the main thread is probably easier. However, if your read can block, you'll need some extra machinery to keep track of the read state and incorporate it into your central select(), whereas a thread can just block until more data is available.
Thus, neither solution is automatically "easier" to do.
I wouldn't worry about "context switching" for a read that only occurs once per second; that's irrelevant.
What else does the main thread have to do? Is it OK if it blocks? If so, then you don't need to do the timer, etc. in a separate thread.
If the main thread can't block waiting for the periodic timer, then a separate thread must be created. The communication of data between the threads can be via an object that is accessible to both threads and protected by a mutex (look up pthread_mutex_t), which is quite simple to do.
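As a rough sketch of that mutex-protected object (the struct and function names are made up for illustration):

```cpp
#include <pthread.h>

// Hypothetical shared sensor sample, protected by a mutex.
struct SharedSample {
    pthread_mutex_t lock;
    double value;
};

static SharedSample g_sample = { PTHREAD_MUTEX_INITIALIZER, 0.0 };

// Called from the reader thread once per second.
void publish_sample(double v)
{
    pthread_mutex_lock(&g_sample.lock);
    g_sample.value = v;
    pthread_mutex_unlock(&g_sample.lock);
}

// Called from the main (GUI) thread whenever it wants the latest value.
double latest_sample(void)
{
    pthread_mutex_lock(&g_sample.lock);
    double v = g_sample.value;
    pthread_mutex_unlock(&g_sample.lock);
    return v;
}
```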
As for which solution would be better and what the costs are, it depends on what else the main thread is doing. But for something this simple, either way should be about the same, and the context switching shouldn't affect anything. What should affect performance the most is how performance-intensive the reads are.
I believe the cost of a context switch once a second is not an issue even for a single-core CPU without hyper-threading, especially taking into account that the application runs in user space and thus is not really time-critical. Polling your sensor in the main thread complicates the logic of the application, so I would recommend starting a thread for that purpose.
A sleep loop will skew the timing because each iteration is going to take longer than 1 second. Timers don't have that problem, and they are made for this scenario. So choose a timer.
Performance-wise there is no difference because you are only triggering once a second.
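If you go the timer route on Linux, a minimal sketch with timerfd could look like this (the sensor path is hypothetical; in a GUI application the toolkit's own timer, e.g. Qt's QTimer, plays the same role without blocking the event loop):

```cpp
#include <cstdint>
#include <cstdio>
#include <ctime>
#include <sys/timerfd.h>
#include <unistd.h>

int main()
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec spec{};
    spec.it_value.tv_sec = 1;      // first expiration after 1 s
    spec.it_interval.tv_sec = 1;   // then every 1 s
    timerfd_settime(tfd, 0, &spec, nullptr);

    for (int i = 0; i < 5; ++i) {
        uint64_t expirations;
        read(tfd, &expirations, sizeof expirations);  // blocks until the timer fires
        // ... read the sensor file here (hypothetical path, e.g. a sysfs entry) ...
        std::printf("tick %d (%llu expiration(s))\n", i,
                    (unsigned long long)expirations);
    }
    close(tfd);
}
```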
If the Linux driver reads the sensor data and writes it to a device file every second, you shouldn't duplicate the timer logic in your application. It may happen that after a 1-second sleep your application still reads the same data as 1 second ago. A better approach would be to have a thread that calls a blocking read on the device file: when new sensor data is available, the blocking read returns, the thread can process the data and call read again.
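A minimal sketch of such a reader thread, assuming a hypothetical device path and that the driver supports blocking reads:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <thread>

// Hypothetical device file that the driver updates once per second.
constexpr const char* kDevice = "/dev/sensor0";

void reader_thread()
{
    int fd = open(kDevice, O_RDONLY);
    if (fd < 0) { std::perror("open"); return; }

    char buf[64];
    ssize_t n;
    // Blocks until the driver publishes a new sample, then processes it.
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        // ... parse buf[0..n) and hand the sample to the GUI thread ...
    }
    close(fd);
}

int main()
{
    std::thread t(reader_thread);
    t.join();   // in a real application the GUI event loop would run here instead
}
```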
I'm writing a POSIX-compatible multi-threaded server in C/C++ that must be able to accept, read from, and write to a large number of connections asynchronously. The server has several worker threads which perform tasks and occasionally (and unpredictably) queue data to be written to the sockets. Data is also occasionally (and unpredictably) written to the sockets by the clients, so the server must also read asynchronously. One obvious way of doing this is to give each connection a thread which reads and writes from/to its socket; this is ugly, though, since each connection may persist for a long time and the server would thus have to hold hundreds or thousands of threads just to keep track of connections.
A better approach would be to have a single thread that handled all communications using the select()/pselect() functions. I.e., a single thread waits on any socket to be readable, then spawns a job to process the input that will be handled by a pool of other threads whenever input is available. Whenever the other worker threads produce output for a connection, it gets queued, and the communication thread waits for that socket to be writable before writing it.
The problem with this is that the communication thread may be waiting in the select() or pselect() function when output is queued by the worker threads of the server. It's possible that, if no input arrives for several seconds or minutes, a queued chunk of output will just wait for the communication thread to be done select()ing. This shouldn't happen, however--data should be written as soon as possible.
Right now I see a couple of solutions to this that are thread-safe. One is to have the communication thread busy-wait on input and update the list of sockets it waits on for writing every tenth of a second or so. This isn't optimal since it involves busy-waiting, but it will work. Another option is to use pselect() and send the USR1 signal (or something equivalent) whenever new output has been queued, allowing the communication thread to update the list of sockets it is waiting on for writable status immediately. I prefer the latter here, but still dislike using a signal for something that should be a condition (pthread_cond_t). Yet another option would be to include, in the list of file descriptors on which select() is waiting, a dummy file that we write a single byte to whenever a socket needs to be added to the writable fd_set for select(); this would wake up the communication thread because that particular dummy file would then be readable, thus allowing it to immediately update its writable fd_set.
I feel, intuitively, that the second approach (with the signal) is the 'most correct' way to program the server, but I'm curious whether anyone knows which of the above is the most efficient, generally speaking, whether either of the above will cause race conditions that I'm not aware of, or whether anyone knows of a more general solution to this problem. What I really want is a pthread_cond_wait_and_select() function that allows the comm thread to wait on both a change in sockets and a signal from a condition.
Thanks in advance.
This is a fairly common problem.
One often used solution is to have pipes as a communication mechanism from worker threads back to the I/O thread. Having completed its task a worker thread writes the pointer to the result into the pipe. The I/O thread waits on the read end of the pipe along with other sockets and file descriptors and once the pipe is ready for read it wakes up, retrieves the pointer to the result and proceeds with pushing the result into the client connection in non-blocking mode.
Note that since pipe reads and writes of less than or equal to PIPE_BUF bytes are atomic, the pointers get written and read in one shot. One can even have multiple worker threads writing pointers into the same pipe because of the atomicity guarantee.
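A rough sketch of that pattern, with a hypothetical Result type and error handling omitted:

```cpp
#include <poll.h>
#include <unistd.h>
#include <cstdio>
#include <thread>

struct Result { int connection_id; /* ... payload ... */ };

int pipe_fds[2];   // [0] = read end (I/O thread), [1] = write end (workers)

void worker()
{
    Result* r = new Result{42};
    // A pointer-sized write is far below PIPE_BUF, so it is atomic.
    write(pipe_fds[1], &r, sizeof r);
}

int main()
{
    pipe(pipe_fds);
    std::thread w(worker);

    // The I/O thread polls the pipe's read end alongside its client sockets.
    pollfd fds[] = { { pipe_fds[0], POLLIN, 0 } /* , client sockets ... */ };
    poll(fds, 1, -1);

    if (fds[0].revents & POLLIN) {
        Result* r = nullptr;
        read(pipe_fds[0], &r, sizeof r);
        std::printf("got result for connection %d\n", r->connection_id);
        delete r;   // push the result into the client connection here
    }
    w.join();
}
```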
Unfortunately, the best way to do this is different for each platform. The canonical, portable way to do it is to have your I/O thread block in poll. If you need to get the I/O thread to leave poll, you send a single byte on a pipe that the thread is polling. That will cause the thread to exit from poll immediately.
On Linux, epoll is the best way. On BSD-derived operating systems (including OSX, I think), kqueue. On Solaris, it used to be /dev/poll and there's something else now whose name I forget.
You may just want to consider using a library like libevent or Boost.Asio. They give you the best I/O model on each platform they support.
Your second approach is the cleaner way to go. It's totally normal to have things like select or epoll include custom events in your list. This is what we do on my current project to handle such events. We also use timers (on Linux timerfd_create) for periodic events.
On Linux, eventfd lets you create such arbitrary user events for this purpose, so I'd say it is quite accepted practice. If you are restricted to POSIX-only functions, then perhaps a pipe or socketpair, which I've also seen used.
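For illustration, a minimal eventfd-based wakeup (error handling omitted; the real loop would also wait on the client sockets):

```cpp
#include <poll.h>
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <thread>

int main()
{
    int efd = eventfd(0, 0);   // counter-style user event

    // A worker thread signals the I/O thread that output has been queued.
    std::thread worker([efd] {
        uint64_t one = 1;
        write(efd, &one, sizeof one);
    });

    // The I/O thread waits on the eventfd alongside its sockets.
    pollfd fds[] = { { efd, POLLIN, 0 } /* , sockets ... */ };
    poll(fds, 1, -1);

    if (fds[0].revents & POLLIN) {
        uint64_t count;
        read(efd, &count, sizeof count);   // resets the counter
        std::printf("woken up, %llu wakeup(s) queued\n",
                    (unsigned long long)count);
        // ... rebuild the writable fd set and continue the loop ...
    }
    worker.join();
    close(efd);
}
```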
Busy-polling is not a good option. First, you'll be scanning memory that is also used by other threads, causing CPU memory contention. Second, you'll constantly have to return to your select call, which will create a huge number of system calls and context switches that hurt overall system performance.
I am designing a client and server socket program.
I have a file to be transferred from the client to the server using UDP (I repeat, I am using UDP).
Because I am sending over UDP, the sending rate is faster than the receiver can keep up with, so I have created 3 threads listening on the same socket, so that when one thread is busy with the received data (writing it to a file using fwrite), another thread can recv from the client.
My 1st question: when I am using fwrite from multiple threads, I have to use locks, as the file pointer is shared between the threads. Am I right in thinking this?
My 2nd question: will there be any improvement in performance if I use multiple threads to fwrite with locks, over using a single thread to do the fwrite work with no locks? Please guide me.
I would use one thread; it saves the complications. You can buffer the data and use asynchronous writes:
http://www.gnu.org/s/hello/manual/libc/Asynchronous-Reads_002fWrites.html
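For illustration, a minimal sketch of a POSIX AIO write (the output file name is an assumption, and a real program would only reuse the buffer after the write completes):

```cpp
#include <aio.h>
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
    int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    static char buf[4096];
    std::memset(buf, 'x', sizeof buf);        // pretend this came from recv()

    aiocb cb{};
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof buf;
    cb.aio_offset = 0;

    aio_write(&cb);                           // queue the write; do not block

    // ... keep receiving datagrams here while the write is in flight ...

    while (aio_error(&cb) == EINPROGRESS)     // poll for completion
        usleep(1000);
    std::printf("wrote %zd bytes\n", aio_return(&cb));

    close(fd);
}
```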
Cache the data before writing it.
Let the writing happen in another thread.
Doing it the way you do will require locking the socket.
Q1: Yes, you do need to lock it (very slow!). Why not use a separate file descriptor in each thread? The problem comes mostly from the current file position managed by that descriptor.
Q2: Neither. If the data needs ordering (yes, UDP!) you should still buffer it. RAM is much faster than disk I/O. Feed a stream to buffer it and handle the data in that stream in a separate thread.
Similar to Ed's answer, I'd suggest using asynchronous I/O and a single thread for your server. Though I find using Boost.Asio easier than posix AIO.
My 1st question: when I am using fwrite from multiple threads, I have to use locks, as the file pointer is shared between the threads.
Yes, you always have to use locks when multiple threads are writing to a single object (file, memory, etc).
My 2nd question: will there be any improvement in performance if I use multiple threads to fwrite with locks, over using a single thread to do the fwrite work with no locks?
I would use two threads. The first thread does nothing but read from the socket and store the data in memory. The second thread reads data from memory and writes it to the file. Treat the memory buffer as a FIFO queue and use a mutex to protect the queue pointers. You'll gain nothing from a third thread. In fact, it would probably harm performance and it definitely makes the problem far more complicated.
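A minimal sketch of that two-thread design (the packet size, queue depth, and output file name are assumptions; the socket code is omitted):

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

std::queue<std::vector<char>> fifo;   // packets waiting to be written
std::mutex m;
std::condition_variable cv;
bool done = false;

// Thread 1: pretend these buffers came from recvfrom() on the UDP socket.
void receiver()
{
    for (int i = 0; i < 10; ++i) {
        std::vector<char> packet(1024, 'x');
        {
            std::lock_guard<std::mutex> lk(m);
            fifo.push(std::move(packet));
        }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}

// Thread 2: drain the queue and write to the file.
void writer()
{
    std::FILE* f = std::fopen("received.dat", "wb");
    std::unique_lock<std::mutex> lk(m);
    while (!done || !fifo.empty()) {
        cv.wait(lk, [] { return done || !fifo.empty(); });
        while (!fifo.empty()) {
            std::vector<char> packet = std::move(fifo.front());
            fifo.pop();
            lk.unlock();                  // don't hold the lock during disk I/O
            std::fwrite(packet.data(), 1, packet.size(), f);
            lk.lock();
        }
    }
    std::fclose(f);
}

int main()
{
    std::thread r(receiver), w(writer);
    r.join();
    w.join();
}
```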
First, try to avoid using UDP for bulk transfers. If you use UDP you have to reinvent your own flow control protocol, as well as logic for retransmission and reordering. From the sounds of it, your problems boil down to missing flow control - so why not just use TCP?
Anyway, don't put your file writing in another thread. Modern OSes will internally buffer disk writes in any case - you'll only start blocking if you're writing data much faster than the disk can keep up, in which case buffering inside your process will only buy you another few seconds at most. Switch to TCP, or implement a proper flow control mechanism.
I am reading data from multiple serial ports. At present I am using a custom signal handler (by setting sa_handler) to compare and wake threads based on file-descriptor information. I was searching for a way to have individual threads with unique signal handlers, and in this regard I found that the select system call should be used.
Now I have following questions:
If I am using a thread (Qt) then where do I put the select system call to monitor the serial port?
Is the select system call thread safe?
Is it CPU-intensive, given that there are many things happening in my app, including GUI updates?
Please do not mind, if you find these questions ridiculous. I have never used such a mechanism for serial communication.
The POSIX specification (select) is the place to look for the select definition. I personally recommend poll - it has a better interface and can handle any number of descriptors, rather than a system-defined limit.
If I understand correctly you're waking threads based on the state of certain descriptors. A better way would be to have each thread have its own descriptor and call select itself. You see, select does not modify the system state, and as long as you use thread-local variables it'll be safe. However, you will definitely want to ensure you do not close a descriptor that a thread depends on.
Using select/poll with a timeout leaves the "waiting" up to the kernel side, which means the thread is usually put to sleep. While the thread is sleeping it is not using any CPU time. A while/for loop around a select call with a zero timeout, on the other hand, will give you higher CPU usage because you're constantly spinning in the loop.
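For illustration, a minimal blocking poll loop for a single serial port (the device path is an assumption; each thread would open and poll its own port):

```cpp
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // Hypothetical serial device.
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { std::perror("open"); return 1; }

    pollfd pfd{ fd, POLLIN, 0 };
    while (true) {
        int ready = poll(&pfd, 1, 1000);      // sleep in the kernel, 1 s timeout
        if (ready < 0) { std::perror("poll"); break; }
        if (ready == 0) continue;             // timeout: nothing to read yet
        if (pfd.revents & POLLIN) {
            char buf[256];
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;
            // ... parse the n bytes of sensor data ...
        }
    }
    close(fd);
}
```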
Hope this helps.
EDIT: Also, select/poll can have unpredictable results when working with the same descriptor in multiple threads. The simple reason for this is that the first thread might be woken up because the descriptor is ready for reading, but the second thread has to wait for the next "available for reading" wakeup.
As long as you're not selecting on the same descriptor in multiple threads you should not have a problem.
It is a system call -- it should be thread safe, I think.
I have not done this before, but I would be rather surprised if it were not. How CPU-intensive select() is depends, in my opinion, largely on the number of file handles you are waiting for. select() is mostly used to wait for a number (>1) of file handles to become ready.
It should also be mentioned that select() should not be used to busy-poll the file handles, for performance reasons. Normal usage is: your work is done and some time may elapse until the next thing happens, so you suspend your process with select and let another process run; select() normally suspends the calling process. How this works together with threads, I am not sure! I would think that the whole process (and all threads) is suspended, but this might be documented. It could also depend (on Linux) on whether you use system threads or user threads; the kernel will not know about user threads and hence will suspend the whole process.