Maybe I'm misunderstanding how to make a pipe in C++ (see http://linux.die.net/man/2/pipe), but how does the pipe know where to send to or receive from?
Upon a database update via an AJAX page, I want that AJAX program to send a message to my WebSocket program so it can update all of the other relevant users. It's been recommended (in "how 2 c++ programs call each other's class/functions on same linux box?") that using a pipe would probably be best.
Is there just one pipe and all programs read it and validate the message?
Note: I'm using fastcgi++ and websocket++ if that helps.
If you want multiple independent processes to read from the pipe, you need to use a named pipe, also known as a FIFO.
Using the mkfifo function, one process creates a file in the file system (normally under /tmp). This file can then be opened for reading or writing using the normal open system call by any other process that has access to it.
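As a minimal sketch of the writer side (the FIFO path /tmp/update_fifo and the message text are just placeholders for illustration); a reader would open() the same path with O_RDONLY and call read():

#include <sys/stat.h>   // mkfifo
#include <fcntl.h>      // open, O_WRONLY
#include <unistd.h>     // write, close
#include <cerrno>

int main()
{
    const char* path = "/tmp/update_fifo";          // placeholder FIFO path

    // Create the FIFO; tolerate "already exists" so the program can be re-run.
    if (mkfifo(path, 0666) == -1 && errno != EEXIST)
        return 1;

    // open() blocks until some reader opens the other end of the FIFO.
    int fd = open(path, O_WRONLY);
    if (fd == -1)
        return 1;

    const char msg[] = "db_updated\n";              // placeholder message
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}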
Related
I'm new to Named Pipes and it seems the basic normal operation is:
Server program:
Call CreateNamedPipe(..) to create an instance of a named pipe.
Call ConnectNamedPipe(..) to wait for the client program to connect.
Call WriteFile(..) to send data down the pipe.
Call CloseHandle(..) to disconnect and close the pipe instance.
Client program:
Call CreateFile(..) to connect to the pipe.
Call ReadFile(..) to get data from the pipe.
Process or output the data.
Call CloseHandle(..) to disconnect from the pipe.
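A minimal sketch of the server-side steps above (the pipe name \\.\pipe\example, the message, and the single-client setup are assumptions for illustration; a client would follow the CreateFile/ReadFile steps listed for the client program):

#include <windows.h>

int main()
{
    // Step 1: create one instance of the named pipe (server writes, client reads).
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\example",              // placeholder pipe name
        PIPE_ACCESS_OUTBOUND,                // this server only sends
        PIPE_TYPE_BYTE | PIPE_WAIT,
        1,                                   // one instance, one client
        4096, 4096,                          // out/in buffer sizes
        0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    // Step 2: wait for a client to connect with CreateFile on the same name.
    if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED)
    {
        // Step 3: send data down the pipe.
        const char msg[] = "hello down the pipe\r\n";
        DWORD written = 0;
        WriteFile(pipe, msg, sizeof msg - 1, &written, NULL);
        FlushFileBuffers(pipe);              // let the client read before we close
    }

    // Step 4: disconnect and close the pipe instance.
    CloseHandle(pipe);
    return 0;
}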
I would really like to be able to open a pipe and push data down it, without knowing or caring whether there is a client. If there's no client, the data just gets discarded. Perhaps 'sink' is a better way to describe this than 'pipe'? The server doesn't handle connection attempts; it just "throws the data" out for anyone who happens to be listening.
Is this possible using Named Pipes? If not, is this mechanism provided by some other technique on Windows (or available through the STL)? My research so far implies a Named Pipe has a strict 1:1 relationship (one server and one client), which suggests it can almost, but not quite, do what I want.
So far, using a file as a buffer is my only idea for totally decoupling server and client, but that is not a very good approach at all.
Is there any way to check if a file is in use in C/C++? Or do I have to ALWAYS implement a lock/semaphore to prevent simultaneous access of any file by multiple threads/processes?
If we consider Linux and the following scenario: I want to transfer, in chunks, the contents of a file stored on device A to another device B over RS-232, using a pre-defined communication framework. When the request for this transfer comes, I want to verify that the file is NOT being used by any process on device A before sending a "Ready to Transfer : OK" response, after which I will start reading and transmitting the data in chunks.
Is there a way to check whether the file is already in use without doing fopen/fclose on it?
Actually, fopen() is the best way to find this out.
Do fopen() on the receiving end; if it is successful, send the "OK to receive" message.
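A sketch of that check as suggested (the path is a placeholder; note that a successful fopen() only shows the file can be opened, it does not prove that no other process has it open):

#include <cstdio>

// Returns true if the file could be opened for reading, i.e. the
// "Ready to Transfer : OK" reply from the question can be sent.
bool ready_to_transfer(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");   // binary read, no data touched
    if (!f)
        return false;
    std::fclose(f);
    return true;
}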
I am programming a shell in C++. It needs to be able to pipe the output from one thing to another. For example, in Linux, you can pipe a text file to more by doing cat textfile | more.
My function to pipe one thing to another is declared like this:
void pipeinput(string input, string output);
I send "cat textfile" as the input, and "more" as the output.
In C++ examples that show how to make pipes, fopen() is used. What do I send as my input to fopen()? I have seen C++ examples of piping that use dup2 and others that don't. What is dup2 used for? How do you know whether you need to use it or not?
Take a look at popen(3), which is a way to avoid execvp.
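For example, a rough sketch that feeds the file from the question to more via popen(), with no explicit pipe()/fork()/exec management ("textfile" is the name used in the question):

#include <cstdio>

int main()
{
    std::FILE* in = std::fopen("textfile", "r");   // the file named in the question
    FILE* out = popen("more", "w");                // write end of a pipe to more's stdin
    if (!in || !out)
        return 1;

    char line[512];
    while (std::fgets(line, sizeof line, in))      // copy the file into the pipeline
        std::fputs(line, out);

    std::fclose(in);
    pclose(out);                                   // waits for more to exit
    return 0;
}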
For a simple, two-command pipeline, the function interface you propose may be sufficient. For the general case of an N-stage pipeline, I don't think it is flexible enough.
The pipe() system call is used to create a pipe. In context, you will be creating one pipe before forking. One of the two processes will arrange for the write end of the pipe to become its standard output (probably using dup2()), and will then close both of the file descriptors originally returned by pipe(). It will then execute the command that writes to the pipe (cat textfile in your example). The other process will arrange for the read end of the pipe to become its standard input (probably using dup2() again), and will then close both of the file descriptors originally returned by pipe(). It will then execute the command that reads from the pipe (more in your example).
Of course, there will still be a third process around - the parent shell process - which forked off a child to run the entire pipeline. You might decide you want to refine the mechanism a bit if you want to track the status of each process in the pipeline; the process organization is then a bit different.
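A minimal sketch of that arrangement for the two-command case from the question (error handling mostly omitted):

#include <unistd.h>
#include <sys/wait.h>

int main()
{
    int fds[2];
    if (pipe(fds) == -1)                        // fds[0] = read end, fds[1] = write end
        return 1;

    if (fork() == 0)                            // first child: "cat textfile", the writer
    {
        dup2(fds[1], STDOUT_FILENO);            // stdout now feeds the pipe
        close(fds[0]);
        close(fds[1]);
        char* const args[] = { (char*)"cat", (char*)"textfile", nullptr };
        execvp(args[0], args);
        _exit(127);                             // only reached if exec fails
    }

    if (fork() == 0)                            // second child: "more", the reader
    {
        dup2(fds[0], STDIN_FILENO);             // stdin now comes from the pipe
        close(fds[0]);
        close(fds[1]);
        char* const args[] = { (char*)"more", nullptr };
        execvp(args[0], args);
        _exit(127);
    }

    // Parent: close both ends so "more" sees EOF when "cat" finishes,
    // then wait for both children.
    close(fds[0]);
    close(fds[1]);
    while (wait(nullptr) > 0)
        ;
    return 0;
}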
fopen() is not used to create pipes. A stream can be attached to a pipe's file descriptor with fdopen(), but it is not necessary to do so.
Pipes are created with the pipe(2) call, before forking off the process. The subprocess has a little bit of file descriptor management to do before execing the command. See the example in pipe's documentation.
I am writing a C++ server-side application called quote of the day, using the winsock2 library. I want to send the contents of a file back to the client, including newlines, by using the send function. The way I tried it doesn't work. How would I go about doing this?
Reading the file and writing to the socket are 2 distinct operations. Winsock does not have an API for sending a file directly.
As for reading the file, simply make sure you open it in binary read mode if using fopen, or use the CreateFile and ReadFile Win32 APIs, which are binary by default.
Usually you will read the file in chunks (for example 10KB at a time) and then send each of those chunks over the socket by using send or WSASend. Once you are done, you can close the socket.
On the receiving side, read whatever's available on the socket until the socket is closed. As you read data into a buffer, write the amount read to a file.
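A minimal sketch of that chunked approach (it assumes an already-connected SOCKET and uses 10 KB chunks as suggested; link against ws2_32):

#include <winsock2.h>
#include <cstdio>

// Reads the file in binary mode and pushes it down the socket chunk by chunk.
bool send_file(SOCKET sock, const char* path)
{
    std::FILE* f = std::fopen(path, "rb");        // binary mode: newlines pass through untouched
    if (!f)
        return false;

    char buf[10 * 1024];                          // 10 KB chunks
    size_t n;
    while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
    {
        const char* p = buf;
        size_t left = n;
        while (left > 0)                          // send() may accept less than asked
        {
            int sent = send(sock, p, (int)left, 0);
            if (sent == SOCKET_ERROR)
            {
                std::fclose(f);
                return false;
            }
            p += sent;
            left -= (size_t)sent;
        }
    }
    std::fclose(f);
    return true;
}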
Hmm... Win32 does have something similar to "sendfile" in Linux: TransmitFile.
Alternatively, you can use memory mapping (but don't forget to handle files larger than the available virtual address space). You will probably need to use blocking sockets to avoid returning to the application until all data is consumed. And there was something with "overlapped" operations for implementing async I/O.
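A sketch of the TransmitFile route just mentioned (declared in mswsock.h, link with mswsock.lib), assuming an already-connected socket and an existing file path:

#include <winsock2.h>
#include <mswsock.h>
#include <windows.h>

// Hands the whole file to the socket in one call; the kernel does the copying.
bool transmit_whole_file(SOCKET sock, const wchar_t* path)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return false;

    // A byte count of 0 means "send the entire file".
    BOOL ok = TransmitFile(sock, file, 0, 0, NULL, NULL, 0);
    CloseHandle(file);
    return ok != FALSE;
}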
I recommend dropping winsock and instead using something more modern such as Boost.Asio:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/tutorial.html
There is also an example on transmitting a file:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/examples.html
I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors.
I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output.
This does not work on Windows though because a socket is not the same as a file handle.
All the documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process, getting a callback of some sort when output is written? (And before you say so, I've tried SetStdHandle but cannot find any way to make this work.)
You can use a similar technique on Windows; you just need to use different words for the same concepts. :) This article, http://msdn.microsoft.com/en-us/library/ms682499.aspx, uses a Win32 pipe to handle I/O from another process; you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer.
Actually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago:
DWORD CALLBACK DoDebugThread(void *)
{
    AllocConsole();
    SetConsoleTitle("Copilot Debugger");
    // The following is a really disgusting hack to make stdin and stdout attach
    // to the newly created console using the MSVC++ libraries. I hope other
    // operating systems don't need this kind of kludge.. :)
    stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT);
    stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT);
    debug();
    stdout->_file = -1;
    stdin->_file = -1;
    FreeConsole();
    CPU_run();
    return 0;
}
In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.
You have to remember that what MSVCRT calls "OS handles" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where stdin = 0, stdout = 1, stderr = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement.
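Putting those pieces together for stderr specifically, here is a rough sketch (not the code quoted above) that redirects the current process's stderr into an anonymous pipe and drains it on a thread; consume() is a hypothetical stand-in for whatever should receive the output:

#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <cstdio>

// Hypothetical consumer: replace with whatever should process the output.
static void consume(const char* data, DWORD len)
{
    fwrite(data, 1, len, stdout);
}

static HANDLE g_readEnd = NULL;

static DWORD WINAPI StderrReaderThread(void*)
{
    char buf[4096];
    DWORD got = 0;
    // ReadFile blocks until the write end produces data or is closed.
    while (ReadFile(g_readEnd, buf, sizeof buf, &got, NULL) && got > 0)
        consume(buf, got);
    return 0;
}

bool RedirectStderrToCallback()
{
    HANDLE readEnd = NULL, writeEnd = NULL;
    if (!CreatePipe(&readEnd, &writeEnd, NULL, 0))
        return false;
    g_readEnd = readEnd;

    // Wrap the Win32 write handle in a CRT descriptor, then make it fd 2 (stderr).
    int fd = _open_osfhandle(reinterpret_cast<intptr_t>(writeEnd), _O_TEXT);
    if (fd == -1 || _dup2(fd, _fileno(stderr)) != 0)
        return false;
    setvbuf(stderr, NULL, _IONBF, 0);         // flush fprintf(stderr, ...) immediately

    // Drain the read end on a worker thread.
    return CreateThread(NULL, 0, StderrReaderThread, NULL, 0, NULL) != NULL;
}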
You mention that you don't want to use a named pipe for internal use; it's probably worth pointing out that the documentation for CreatePipe() states, "Anonymous pipes are implemented using a named pipe with a unique name. Therefore, you can often pass a handle to an anonymous pipe to a function that requires a handle to a named pipe." So, I suggest that you just write a function that creates a similar pipe with the correct settings for async reading. I tend to use a GUID as a string (generated using CoCreateGuid() and StringFromIID()) to give me a unique name and then create the server and client ends of the named pipe with the correct settings for overlapped I/O (more details on this, and code, here: http://www.lenholgate.com/blog/2008/02/process-management-using-jobs-on-windows.html).
Once I have that, I wire up some code that I have to read a file using overlapped I/O with an I/O Completion Port and, well, then I just get async notifications of the data as it arrives... However, I've got a fair amount of well-tested library code in there that makes it all happen...
It's probably possible to set up the named pipe and then just do an overlapped read with an event in your OVERLAPPED structure and check the event to see if data was available... I don't have any code available that does that though.