I have a process that continuously needs to write information. Furthermore, there is a second process which sometimes connects to the "information channel" of the writing process and should read the information that is written while it is connected. This second process might also disconnect and reconnect several times.
I am currently realizing this with a named pipe, by using mkfifo() in my C++ program. Unfortunately, if I call open() on this FIFO for writing, it blocks until a process opens the FIFO for reading. This is quite normal for named pipes, but I need this open() call to be non-blocking.
Do you know an alternative to mkfifo in this case?
Heinrich
You could use Unix-domain sockets, or regular TCP sockets on the loopback interface.
You can use shared memory or mmap. It should contain an offset to the oldest data, and the block of memory for the data.
A FIFO's buffer is limited to 64 KB (this depends on the distribution and some settings).
I finally used the Unix message queue. Reader and writer can be started totally independently, and everything can be performed non-blocking.
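A rough sketch of that idea for the writer side follows; the segment name "/info_channel" and the fixed 64 KB buffer are assumptions for illustration, and a real version needs proper synchronisation between writer and reader.

    // Hedged sketch: writer side of a shared-memory ring buffer (POSIX shm + mmap).
    // The name "/info_channel" and the layout are illustrative assumptions.
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstring>

    struct Channel {
        uint64_t write_off;        // offset of the next byte to write (monotonic)
        char     data[64 * 1024];  // ring buffer for the payload
    };

    int main() {
        int fd = shm_open("/info_channel", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        if (ftruncate(fd, sizeof(Channel)) < 0) return 1;

        void* p = mmap(nullptr, sizeof(Channel), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;
        Channel* ch = static_cast<Channel*>(p);

        const char msg[] = "sample record\n";
        for (size_t i = 0; i + 1 < sizeof(msg); ++i) {
            ch->data[ch->write_off % sizeof(ch->data)] = msg[i];
            ++ch->write_off;       // a real implementation needs atomic/synchronized updates
        }

        munmap(ch, sizeof(Channel));
        close(fd);
        return 0;
    }

A reader would shm_open() the same name, mmap() it read-only, remember the last write_off it saw, and copy out whatever arrived in between.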
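For reference, a minimal sketch of the writer side with a non-blocking POSIX message queue; the queue name "/info_channel" and the attributes are illustrative assumptions.

    // Hedged sketch: non-blocking writer using a POSIX message queue.
    #include <mqueue.h>
    #include <fcntl.h>
    #include <cerrno>
    #include <cstdio>

    int main() {
        mq_attr attr{};
        attr.mq_maxmsg  = 10;     // queue depth
        attr.mq_msgsize = 256;    // maximum message size in bytes

        // O_NONBLOCK: mq_send() returns -1/EAGAIN instead of blocking when the
        // queue is full, e.g. while no reader is draining it.
        mqd_t q = mq_open("/info_channel", O_CREAT | O_WRONLY | O_NONBLOCK, 0600, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char msg[] = "sensor sample";
        if (mq_send(q, msg, sizeof(msg), 0) == -1 && errno == EAGAIN) {
            // Queue full (reader absent or slow): drop or buffer the message as appropriate.
        }

        mq_close(q);
        return 0;
    }

The reader opens the same name with O_RDONLY (optionally O_NONBLOCK as well) and calls mq_receive(); it can come and go without affecting the writer. On Linux this needs -lrt at link time.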
I am working on an app that starts multiple streams in listener and caller modes after creating sockets. Right now, if I start one stream, the process hangs because the stream is waiting for data. So it is clear to me that I need to start the stream in some asynchronous way, so that the rest of the app keeps working.
Do I start the stream in:
separate threads
separate processes using fork()
I also read about select(); will that work?
Do blocking/non-blocking sockets solve this problem?
This app is being written in C++.
You can either use a library like Boost.Asio or the C function poll() (or select() which does basically the same thing) to wait on multiple sockets at once. Either way, you want to "multiplex" the sockets, meaning you block until any of them has data available, then you read from that one. This is how many network applications are built, and is usually more efficient, more scalable, and less error-prone than having a thread or process for each connection.
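A minimal sketch of that pattern with poll(), assuming the sockets have already been created and connected elsewhere:

    // Hedged sketch: multiplexing several already-created sockets with poll().
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>
    #include <cstdio>

    void serve(const std::vector<int>& sockets) {
        std::vector<pollfd> fds;
        for (int s : sockets) {
            pollfd p{};
            p.fd     = s;
            p.events = POLLIN;                 // watch each socket for readable data
            fds.push_back(p);
        }

        while (true) {
            int ready = poll(fds.data(), fds.size(), -1);  // block until any socket is ready
            if (ready < 0) { perror("poll"); break; }

            for (pollfd& p : fds) {
                if (p.revents & POLLIN) {
                    char buf[4096];
                    ssize_t n = recv(p.fd, buf, sizeof buf, 0);
                    if (n <= 0) { /* peer closed or error: handle/remove this fd */ }
                    else        { /* process n bytes from this stream */ }
                }
            }
        }
    }

Boost.Asio wraps the same idea behind asynchronous callbacks, so the overall structure ends up similar even though you never call poll() yourself.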
I'm using a local Unix socket to communicate between two different processes. The thing is, some parts of the code on both ends take different amounts of time to run, and I need recv and send to be synced across both processes. Is there a way to force send and recv to wait for the next corresponding line in the opposite process?
You must implement a protocol. After all, you cannot be sure that the sockets are in sync. For example, you could send one packet with 100 bytes and then receive it as two or even more packets that add up to it.
By default, recv() will block (wait) until there are data to read, while send() will block until there is space in the buffer to write to. For most applications, this is enough synchronisation (if you design your protocol sanely).
So I recommend you just think about the details of how your communication will work, and try it out. Then if there is still a problem, come back with a question that is as specific as possible.
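To make "implement a protocol" concrete, here is a hedged sketch of one common choice, a length-prefixed message format; the 4-byte header is just one possible convention, not something prescribed by the socket API.

    // Hedged sketch: a simple length-prefixed message protocol over a stream socket.
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <cstdint>
    #include <string>

    // recv() may return fewer bytes than asked for, so loop until we have them all.
    static bool recv_all(int fd, void* buf, size_t len) {
        char* p = static_cast<char*>(buf);
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return false;          // peer closed or error
            p   += n;
            len -= n;
        }
        return true;
    }

    bool send_message(int fd, const std::string& payload) {
        uint32_t len = htonl(payload.size());  // 4-byte big-endian length header
        // A robust version would also loop on partial send().
        return send(fd, &len, sizeof(len), 0) == (ssize_t)sizeof(len) &&
               send(fd, payload.data(), payload.size(), 0) == (ssize_t)payload.size();
    }

    bool recv_message(int fd, std::string& payload) {
        uint32_t len = 0;
        if (!recv_all(fd, &len, sizeof(len))) return false;
        payload.resize(ntohl(len));
        return recv_all(fd, &payload[0], payload.size());
    }

With framing like this, each side always knows where one logical message ends and the next begins, regardless of how the bytes were split into packets.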
How should one monitor data that goes through a FIFO? Simply opening it and watching doesn't work, since if the monitor reads all the bytes, the actual program that needs the data will fail to receive it.
I am not sure what kind of FIFO you have there (pipe? socket? maybe you should elaborate more in your question in general), but the only case where I know of reading ahead without consuming the data is with sockets.
You can use recv() with the flag MSG_PEEK with the following effect:
This flag causes the receive operation to return data from the beginning of the receive queue without removing that data from the queue. Thus, a subsequent receive call will return the same data.
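A small, hedged example of what that looks like in code:

    // Hedged sketch: inspect pending socket data without removing it from the queue.
    #include <sys/socket.h>
    #include <cstdio>

    void peek_then_read(int sock) {
        char buf[512];

        // MSG_PEEK: copy out the data but leave it queued for the next recv().
        ssize_t peeked = recv(sock, buf, sizeof buf, MSG_PEEK);
        if (peeked > 0)
            printf("monitor saw %zd bytes (still available to the real consumer)\n", peeked);

        // A normal recv() afterwards returns the same bytes and consumes them.
        ssize_t consumed = recv(sock, buf, sizeof buf, 0);
        (void)consumed;
    }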
You can implement IPC with sockets, too (unix(7)), so you might want to add them to your project (if you are using Linux/Unix). If you want to know how to use sockets, you should read the man pages socket(2) and socket(7), or, in the case of Windows, the documentation for recv() and socket().
You might also want to try using two FIFOs: one to your monitor and the other one from your monitor to your actual program. Then you simply read all incoming data with your monitor, filter the relevant parts, and write them directly to your actual program. This might come in handy if you have multiple receivers inside your actual program and want to split up the incoming data.
If you simply want to know whether there is data to read, you can use select(2) or pselect(2), or maybe poll(2).
You should use one of the following system calls:
select() - source: man -s 2 select
pselect() - source: man -s 2 pselect
select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible). A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking.
Note that they are all blocking I/O calls.
ppoll() - source: man -s 2 ppoll
poll() - source: man -s 2 poll
Also read about the difference between the two sets of system calls: http://www.unixguide.net/network/socketfaq/2.14.shtml
And pselect() or ppoll() is generally safer to use than select() or poll(), because they let you atomically change the signal mask for the duration of the call and so avoid races with signal handlers.
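A hedged sketch of waiting for a single descriptor with pselect(); the empty signal mask here only illustrates the extra argument that select() lacks.

    // Hedged sketch: wait until a file descriptor becomes readable with pselect().
    #include <sys/select.h>
    #include <signal.h>
    #include <cstdio>

    bool wait_readable(int fd) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        // The final argument is the signal mask applied atomically while waiting;
        // an empty set means no signals are blocked during the call.
        sigset_t empty;
        sigemptyset(&empty);

        // nullptr timeout: block indefinitely until the descriptor is ready.
        int ready = pselect(fd + 1, &readfds, nullptr, nullptr, nullptr, &empty);
        if (ready < 0) { perror("pselect"); return false; }
        return FD_ISSET(fd, &readfds);
    }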
I am a newbie at network programming and I've heard about epoll. I read a couple of tutorials and now I have a basic idea of what epoll does and how I can implement this.
The question is: can I use epoll even if the client will be using a UDP connection? All the tutorials I read used TCP connections.
Also, is there a good tutorial or sample code that explains a multi-threaded server implementation using epoll? The tutorials I found online only showed how to create a simple echo server on a single thread.
Thanks in advance.
There is no problem using epoll with UDP; epoll just notifies you when there is data to read on the file descriptor. There are some implications for the read/write operations related to UDP socket behaviour (from the epoll man page):
For stream-oriented files (e.g., pipe, FIFO, stream socket), the condition that the read/write I/O space is exhausted can also be detected by checking the amount of data read from / written to the target file descriptor. For example, if you call read(2) by asking to read a certain amount of data and read(2) returns a lower number of bytes, you can be sure of having exhausted the read I/O space for the file descriptor. The same is true when writing using write(2). (Avoid this latter technique if you cannot guarantee that the monitored file descriptor always refers to a stream-oriented file.)
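To make this concrete, a hedged single-threaded sketch of an epoll loop over a UDP socket; the socket is assumed to be created and bound elsewhere.

    // Hedged sketch: level-triggered epoll loop reading datagrams from a bound UDP socket.
    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>
    #include <cstdio>

    void udp_event_loop(int udp_sock) {            // udp_sock: already socket()'d and bind()'d
        int epfd = epoll_create1(0);
        if (epfd < 0) { perror("epoll_create1"); return; }

        epoll_event ev{};
        ev.events  = EPOLLIN;                      // notify when a datagram is waiting
        ev.data.fd = udp_sock;
        epoll_ctl(epfd, EPOLL_CTL_ADD, udp_sock, &ev);

        epoll_event events[16];
        while (true) {
            int n = epoll_wait(epfd, events, 16, -1);
            for (int i = 0; i < n; ++i) {
                if (events[i].data.fd == udp_sock) {
                    char buf[1500];
                    sockaddr_in peer{};
                    socklen_t len = sizeof(peer);
                    // One recvfrom() per event reads exactly one datagram.
                    ssize_t got = recvfrom(udp_sock, buf, sizeof buf, 0,
                                           reinterpret_cast<sockaddr*>(&peer), &len);
                    if (got >= 0) { /* handle the datagram */ }
                }
            }
        }
    }

Each EPOLLIN event on a UDP socket just means at least one datagram is queued; a multi-threaded variant typically runs one such loop per thread, each with its own epoll instance.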
On the other hand, it is not very common to use epoll directly. The best way of using epoll is through an event loop library such as libev or libevent. This is a better approach because epoll is not available on every system, and using this kind of library makes your programs more portable.
Here you can find an example of libev use with UDP, and here another example with libevent.
I am currently developing a modular framework using shared memory in C & C++.
The goal is to have independent programs in both C and C++, talk to each other through shared memory.
E.g. one program is responsible for reading a GPS and another is responsible for processing the data from several sensors.
A master program will start all the slave programs (currently I am using fp = popen("./slave1/slave1", "r"); to do this) and then create shared memory segments that each slave can connect to.
The thought behind this is that if a slave dies, it can be revived by the master and reconnect to the same shared memory segment.
Slaves can also be exchanged during runtime (e.g. switch one GPS with another).
The problem is that I spawn the slave via popen, and pass the shared memory ID to the slave. Via the pipe the slave transmits back the size needed.
After this is done, I want to reroute the slave's pipe to the terminal to display debug messages instead of passing them through the master.
Suggestions are greatly appreciated, as well as other solutions to the issue.
The key is to have some form of communication prior to setting up the shared memory.
I suggest using another means to communicate. Named pipes are the way, I think. Rerouting standard out/err will be tricky at best.
I suggest using Boost.Interprocess to handle IPC. And be attentive to synchronization :)
my2c
You may want to look into the SCM_RIGHTS transfer mode of unix domain sockets - this lets you pass a file descriptor across a local socket. This can be used to pass stderr and the like to your slave processes.
You can pass shared memory segments as a file descriptor as well (at least on Linux). Create a file with a random name in /dev/shm and unlink it immediately. Now ftruncate() it to the desired size and mmap() it with MAP_SHARED. Now you have a shared memory segment tied to a file descriptor. One of the nice things about this approach is that the shared memory segment is automatically destroyed when all processes holding it terminate.
Putting this together, you can pass your slaves one end of a unix domain socketpair(). Just leave the fd open over exec, and pass the fd number on the command line. Pass whatever configuration information you want as normal data over the socket pair, then hand over a file descriptor pointing to your shared memory segment.
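A hedged sketch of the sending side of that descriptor hand-over; the receiving side mirrors it with recvmsg() and reads the new fd out of CMSG_DATA().

    // Hedged sketch: pass a file descriptor to another process over a Unix-domain socket.
    #include <sys/socket.h>
    #include <cstring>

    // Send fd_to_pass over the already-connected Unix-domain socket `chan`.
    bool send_fd(int chan, int fd_to_pass) {
        char dummy = 'x';                           // at least one byte of real data is required
        iovec iov{ &dummy, 1 };

        // Control buffer sized and aligned for one cmsghdr carrying one int.
        union {
            char    buf[CMSG_SPACE(sizeof(int))];
            cmsghdr align;
        } ctrl{};

        msghdr msg{};
        msg.msg_iov        = &iov;
        msg.msg_iovlen     = 1;
        msg.msg_control    = ctrl.buf;
        msg.msg_controllen = sizeof(ctrl.buf);

        cmsghdr* cm = CMSG_FIRSTHDR(&msg);
        cm->cmsg_level = SOL_SOCKET;
        cm->cmsg_type  = SCM_RIGHTS;                // "pass these descriptors"
        cm->cmsg_len   = CMSG_LEN(sizeof(int));
        std::memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

        return sendmsg(chan, &msg, 0) == 1;
    }

The descriptor arrives in the receiver as a new fd number referring to the same open file description, so it works equally for the shared-memory fd and for a stderr replacement.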
Pipes are not reroutable -- they go where they go when they were created. What you need to do is have the slave close the pipe when it's done with it, and then reopen its stdout elsewhere. If you always want output on the terminal, you can use freopen("/dev/tty", "w", stdout), but then it will always go to the terminal -- you can't redirect it anywhere else.
To address the specific issue, send debug messages to stderr, rather than stdout.
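A small illustrative sketch of both suggestions combined in the slave; the handshake message is made up, and whether /dev/tty is available depends on how the slave was launched.

    // Hedged sketch: after the initial handshake over stdout, route further output elsewhere.
    #include <cstdio>

    int main() {
        // ... handshake with the master over stdout (the popen() pipe) ...
        printf("NEEDED_SIZE 4096\n");      // illustrative handshake message
        fflush(stdout);

        // Option 1: reattach stdout to the controlling terminal (if there is one).
        if (!freopen("/dev/tty", "w", stdout)) { /* no terminal available */ }

        // Option 2 (simpler): keep debug output on stderr, which popen() never captured.
        fprintf(stderr, "slave: debug messages go here\n");
        return 0;
    }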