linux prevent file descriptor from closing on program exit - c++

I have a peculiar use case on Linux when using uinput (http://thiemonge.org/getting-started-with-uinput): when the process that creates the virtual input device dies, it releases all of its open file descriptors by default.
In this case that includes the created virtual input device, so the device flat out disappears from /dev/input.
I am wondering if there is a simple solution to this problem, the most obvious being to not release the open file descriptor upon program termination. The more annoying one is to spawn a proxy process that simply holds the FD.

I ended up going the proxy approach like so:
// create_uinput_device(), print_eventn() and set_argv0_eventn() are
// placeholders for the uinput setup described above.
int main(int argc, char **argv) {
    create_uinput_device();  // open /dev/uinput and create the device
    print_eventn();          // print /dev/input/event[n] to stdout
    set_argv0_eventn();      // overwrite argv[0] for /proc/[pid]/cmdline
    if (fork()) {
        return 0;            // parent exits immediately
    } else {
        // child holds the uinput fd open
        while (1) { sleep(1000); }
    }
}
This way, when we cat /proc/[pid]/cmdline we can easily find the /dev/input/event[n] and which pid is currently holding it: we memcpy the new cmdline over argv[0]. So this is kind of a hack around the problem.
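A minimal sketch of that argv[0] trick, shown here as a hypothetical implementation of the set_argv0_eventn() placeholder above (taking argv[0] and the device name explicitly; overwriting in place is only safe while the new name fits within the original argv[0]):

#include <string.h>

// Hypothetical helper: overwrite argv[0] in place so that
// /proc/[pid]/cmdline shows which event device this pid holds.
static void set_argv0_eventn(char *argv0, const char *event_name)
{
    size_t avail = strlen(argv0);   // bytes we may safely reuse
    size_t len = strlen(event_name);
    if (len > avail)
        len = avail;                // truncate to fit
    memset(argv0, 0, avail);        // clear the old name
    memcpy(argv0, event_name, len);
}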
Also, conveniently, when we run this program it returns right away with the /dev/input/event[n] device we need to forward to qemu, thanks to the print.
To truly fix this, someone needs to merge changes into qemu so that qemu itself creates the virtual input device; this is quite complex due to the vast number of options that can be passed. Regardless, once that is figured out, the virtual input device created by uinput will live as long as the qemu instance.

Related

How to stop a thread reading stdin in a C++ Linux console application?

I am writing a console application that accepts input (one-line commands) from stdin. This application reads input in a dedicated thread, all input is stored in a queue and later processed by the main thread in a safe way. When the exit command is entered by the user, it is intercepted by the input thread which stops listening for new input, the thread is joined into the main one, and the application stops as requested.
Now I am containerizing this application, but I still want to be able to attach to the container and input commands from stdin, so I specified tty and stdin_open to be true in my docker compose service file, and that did the trick.
But I also want docker compose to be able to gracefully stop the application, so I decided to implement sigTerm() in my application so that it can receive the signal from docker compose and stop gracefully. However, I'm stuck on that part, because the input thread blocks while waiting for input on stdin. I can properly receive the signal, that's not at all the point here; I'm looking for a way to properly stop my containerized application while still being able to input commands from the keyboard.
My application could be simplified like this:
// Simplified: these globals and the *ThreadCount() helpers live
// elsewhere in the real application.
std::thread userInputThread;
std::atomic<bool> keepGoing{true};
std::queue<std::string> userInputQueue;

void gracefulStop() {
    while (getThreadCount() > 1) { // this function exists somewhere else.
        if (userInputThread.joinable()) {
            userInputThread.join();
            removeFromThreadCount(); // this function exists somewhere else.
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    exit(SUCCESS);
}

void sigTerm(int s) {
    // Maybe do some stuff here, but what...
    gracefulStop();
}

void userInputLoopThreadFunc() {
    addToThreadCount(); // this function exists somewhere else.
    while (keepGoing) {
        char buf[4096];
        if (!fgets(buf, sizeof(buf), stdin)) {
            break; // we couldn't read from stdin, stop trying.
        }
        std::string input(buf); // we received a command
        // Intercept exit command
        if (input.starts_with("exit")) {
            keepGoing = false;
        }
        // IRL there's thread safety
        userInputQueue.push(input); // this will be processed by mainLoop() later
    }
}

int main(int argc, char **argv) {
    // Register the signal
    signal(SIGTERM, sigTerm);
    // Begin listening to user input (userInputLoopThreadFunc is a free
    // function, so no object pointer is passed)
    userInputThread = std::thread(userInputLoopThreadFunc);
    // this mainLoop function contains the core of the application
    // as well as the processing code of the user input
    mainLoop();
    // if mainLoop function returned, we received the 'exit' command
    gracefulStop();
}
I've read multiple questions/answers like this one about non-blocking user input (the accepted answer advises to use a dedicated thread for input, which is what I am doing), or this other one about how to stop reading stdin, and the accepted answer seems promising, but:
- using ncurses for what I'm trying to do seems really overkill
- if using select() and the timeout mechanism described, what would happen if the timeout occurs while typing a command?
Also, I've read about the C++20 jthread here:
The class jthread represents a single thread of execution. It has the same general behavior as std::thread, except that jthread automatically rejoins on destruction, and can be cancelled/stopped in certain situations.
But I'm not sure that would help me here.
I'm thinking about multiple possibilities to solve my issue:
- Find a way to send a newline character to the stdin of my application without user interaction; would be hackish if at all possible, but would probably unblock fgets.
- Kill the thread. I understand killing a thread is considered bad practice, but since the only thing I'm doing here is stopping the application, maybe I can live with that. Would there be any side effects? How would I do it?
- Rewrite the user input in another way (unknown to me yet; jthread, something else?) that would allow sigTerm() to stop the application.
- Maybe use ncurses (would that really help me stop the application by receiving a signal?)
- Go with select() and the timeout mechanism and live with the risk of an interrupted input.
- Give up on user input and have some vacation time.
You can close stdin in your signal handler. fgets will then return immediately (and presumably, return NULL).
The good news is that close is on the list of functions that are safe to call from a signal handler (it's a pretty restrictive list). Happy days.
There's an alternative based around EINTR, but it looks messy since you don't know for certain that fgets will actually return when it gets it.
Also, closing stdin should still work if you switch to using cin and getline, which would definitely improve your code (*). getline probably returns and sets badbit when you close stdin, although the code can be made more robust than by checking for that alone. Perhaps just set a (volatile) flag in your signal handler and test that.
(*) Because getline can read into a std::string, which means it can read arbitrary long lines without worrying about allocating a fixed-size buffer that is 'big enough'.
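A minimal sketch of that combination, assuming a POSIX platform (a flag the input loop can test, plus close(), both async-signal-safe):

#include <csignal>
#include <cstdio>
#include <unistd.h>

// Set by the handler, tested by the input loop.
volatile std::sig_atomic_t stopRequested = 0;

void sigTerm(int) {
    stopRequested = 1;
    close(STDIN_FILENO); // async-signal-safe; unblocks the pending fgets
}

// In the input thread, fgets now returns NULL once stdin is closed:
//   if (!fgets(buf, sizeof(buf), stdin) || stopRequested) {
//       // stop the loop and let the thread be joined
//   }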

Capturing child stdout to a buffer

I'm developing a cross-platform project currently. On Windows I had a class that ran a process/script (using a command line), waited for it to end, and read everything from its stdout/stderr into a buffer. I then printed the output to a custom 'console'. Note: this was not a redirection of the child's stdout to the parent's stdout, just a pipe from child stdout to parent.
I'm new to OS X/Unix-like APIs, but I understand the canonical way of doing something like this is forking and piping stdouts together. However, I don't want to redirect it to stdout; I would like to capture the output. It should work pretty much like this (pseudocode; resemblance to Unix functions purely coincidental):
class program
{
    string name, cmdline;
    string output;

    program(char * name, char * cmdline)
        : name(name), cmdline(cmdline) {};

    int run()
    {
        // run program - spawn it as a new process
        int pid = exec(name, cmdline);
        // wait for it to finish
        wait(pid);

        char buf[size];
        int n;
        // read output of program's stdout
        // keep appending data until there's nothing left to read
        while (read(pid, buf, size, &n))
            output.append(buf, n);

        // return exit code of process
        return getexitcode(pid);
    }

    const string & getOutput() { return output; }
};
How would I go about doing this on OS X?
Edit:
Okay, so I studied the relevant APIs and it seems that some kind of fork/exec combo is unavoidable. The problem at hand is that my process is very large, and forking it really seems like a bad idea (I see that some Unix implementations can't do it if the parent process takes up 50%+ of the system RAM).
Can't I avoid this scheme in any way? I see that vfork() might be a possible contender, so maybe I could try to mimic the popen() function using vfork. But then again, most man pages state that vfork might very well just be fork().
You have a library call to do just that: popen. It returns a FILE * stream which you can read until EOF. It's part of stdio, so you can do that on OS X, but on other systems as well. Just remember to pclose() the stream.
#include <stdio.h>
FILE * popen(const char *command, const char *mode);
int pclose(FILE *stream);
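A minimal usage sketch (the command string is illustrative; popen only captures the child's stdout, so append 2>&1 to the command if you also want stderr):

#include <cstdio>
#include <string>

// Run a command and capture its stdout into a std::string.
std::string capture(const char *command)
{
    std::string output;
    FILE *pipe = popen(command, "r"); // "r": read the child's stdout
    if (!pipe)
        return output;                // failed to start the command

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
        output.append(buf, n);

    pclose(pipe);                     // reap the child
    return output;
}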
If you want to keep the output with absolutely no redirection, the only thing I can think of is using something like tee - a command which splits the output to a file but maintains its own stdout. It's fairly easy to implement that in code as well, but it might not be necessary in this case.

Transferring data between executables

I have two executables written in C++ on Windows. I generate some data in one, and want to call the other executable to process this data. I could write the data out to a file then read it in the other executable, but that seems rather expensive in terms of disk I/O. What is a better way of doing this? It seems like a simple enough question, but Google just isn't helping!
Let's say the data is around 100MB, and is generated in its entirety before needing to be sent (i.e. no streaming is needed).
Answers that work when mixing 32 bit and 64 bit processes gain bonus points.
If your processes can easily write to and read from file, just go ahead. Create the file with CreateFile and mark it as temporary & shareable. Windows uses this hint to delay physical writes, but all file semantics are still obeyed. Since your file is only 100 MB and actively in use, Windows is almost certainly able to keep its contents fully in RAM.
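A sketch of that hint in Win32 terms (the path is illustrative; FILE_FLAG_DELETE_ON_CLOSE is optional, and other openers must then request FILE_SHARE_DELETE as well):

#include <windows.h>

// FILE_ATTRIBUTE_TEMPORARY tells the cache manager to delay physical
// writes; FILE_FLAG_DELETE_ON_CLOSE removes the file once the last
// handle to it is closed.
HANDLE h = CreateFileW(
    L"C:\\Temp\\transfer.tmp",
    GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
    NULL,
    CREATE_ALWAYS,
    FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
    NULL);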
You can use Boost.MPI. It is from Boost, which has high quality standards, and the code sample is pretty explicit:
http://www.boost.org/doc/libs/1_53_0/doc/html/mpi/tutorial.html#mpi.point_to_point
// The following program uses two MPI processes to write "Hello, world!"
// to the screen (hello_world.cpp):
#include <boost/mpi.hpp>
#include <boost/serialization/string.hpp> // needed to send std::string
#include <iostream>
#include <string>
namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
    mpi::environment env(argc, argv);
    mpi::communicator world;

    if (world.rank() == 0) {
        world.send(1, 0, std::string("Hello"));
        std::string msg;
        world.recv(1, 1, msg);
        std::cout << msg << "!" << std::endl;
    } else {
        std::string msg;
        world.recv(0, 0, msg);
        std::cout << msg << ", ";
        std::cout.flush();
        world.send(0, 1, std::string("world"));
    }
    return 0;
}
Assuming you only want to go "one direction" (that is, you don't need to get data BACK from the child process), you could use _popen(). You write your data to the pipe and the child process reads the data from stdin.
If you need bidirectional flow of data, then you will need to use two pipes, one as input and one as output, and you will need to set up a scheme for how the child process connects to those pipes [you can still set up the stdin/stdout to be the data path, but you could also use a pair of named pipes].
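A sketch of the one-directional variant, assuming a hypothetical worker.exe that reads its configuration from stdin:

#include <cstdio>
#include <string>

// Start the worker with _popen in write mode and stream the data
// to its stdin.
void sendToWorker(const std::string &data)
{
    FILE *child = _popen("worker.exe", "wt"); // "wt": write, text mode
    if (!child)
        return;                               // failed to start the worker
    fputs(data.c_str(), child);               // worker reads this from stdin
    _pclose(child);                           // close the pipe, wait for exit
}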
A third option is a shared memory region. I've never done this in Windows, but the principle is pretty much the same as what I've used in Linux [and many years back in OS/2]:
1. Create a memory region with a given name in your parent process.
2. The child process opens the same memory region.
3. Data is stored by parent process and read by child process.
4. If necessary, semaphores or similar can be used to signal completion/results ready/etc.
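In Win32 terms, steps 1-3 might look roughly like this (the region name and sizes are illustrative, and error checks are omitted):

#include <windows.h>
#include <cstring>

// 1. + 3. Parent: create a named region and store data in it.
void parentWrite(const void *data, size_t size)
{
    HANDLE hMap = CreateFileMappingW(
        INVALID_HANDLE_VALUE,          // backed by the page file
        NULL, PAGE_READWRITE,
        0, (DWORD)size,                // region size
        L"Local\\MySharedRegion");     // name the child opens
    void *view = MapViewOfFile(hMap, FILE_MAP_WRITE, 0, 0, size);
    memcpy(view, data, size);
    UnmapViewOfFile(view);
    // keep hMap open until the child has read the data
}

// 2. + 3. Child: open the same region by name and read from it.
void childRead(void *out, size_t size)
{
    HANDLE hMap = OpenFileMappingW(FILE_MAP_READ, FALSE,
                                   L"Local\\MySharedRegion");
    void *view = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, size);
    memcpy(out, view, size);
    UnmapViewOfFile(view);
    CloseHandle(hMap);
}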

Holding scroll-bar gets command prompt to pause in Windows

I have a program where I record data through an ADC system from National Instruments (NI).
The device buffers information for some time, and the program collects the buffered data at some point. If more data arrives than the buffer can hold before my program collects it, the buffer has to be freed without my program receiving the data, which causes the NI library to throw an exception saying the requested data isn't available anymore, since it was lost.
Since my program is a command-prompt program, if the user clicks and holds the scroll bar, the program pauses, which can trigger this problem.
How can I get over this problem without increasing the buffer size? Can I disable this hold-to-pause behavior in Windows?
Thanks.
Only the thread that is attempting to output to the console is blocked. Make this a separate thread, and your problem goes away.
Of course, you'll need to buffer up your output, and do something sensible if the buffer overflows.
For reference, here's the simple code I used to test this, you will note that the counter continues to increase even when the scroll bar is held down:
#include <Windows.h>
#include <stdio.h>

volatile int n = 0;

DWORD WINAPI my_thread(LPVOID parameter)
{
    for (;;)
    {
        n = n + 1;
        Sleep(800);
    }
}

int main(int argc, char ** argv)
{
    if (!CreateThread(NULL, 0, my_thread, NULL, 0, NULL))
    {
        printf("Error %u from CreateThread\n", GetLastError());
        return 0;
    }

    for (;;)
    {
        printf("Hello! We're at %u\n", n);
        Sleep(1000);
    }
    return 0;
}
Whilst there may be ways to bypass each individual problem you can possibly conceive with the output [including, for example, running it over a network on a sometimes slow link, or some such], I think the correct thing to do is to disconnect your output from your collecting of data. It shouldn't be hard to do this by adding a separate thread that collects the data, and having the main thread display to the command prompt window. That way, no matter which variation of "output is blocked" Windows throws at you, it will work - at least until you run out of RAM, but at that point it's YOUR program's decision to do something [e.g. throw away some data or some such].
This is generally how the problem "I need to collect something, and I also need to allow users to view the data, but I don't want the two to interfere with each other" is solved.
First use the GetConsoleWindow WinAPI function to get the HWND of your console.
Now I suggest two ways to do this.
Method I
Subclass the window by creating your own window procedure (get help from here).
Now that you have subclassed it, you can intercept the WM_VSCROLL and WM_HSCROLL messages and apply your own remedy in your code.
Method II
Change the size of the window using a function like SetWindowPos so that the scroll bars are not needed,
or change the size of the console screen buffer so that the scroll bars are not needed.
Method I gives a lot of control over the application, but it is a little more complex than Method II, which is very simple.
If you want to forbid the user from resizing the console window, just remove WS_THICKFRAME from the window style of the console window.
I was in a similar situation and found that this kind of blocking behaviour could be caused by the quick edit "feature" of the command prompt. This question explains it and the answer shows how to disable it. Hope that helps, even after some years :)
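For reference, a sketch of disabling Quick Edit mode programmatically with SetConsoleMode (the approach the linked answer describes):

#include <windows.h>

// Turn off the console's Quick Edit mode so a stray click can no
// longer pause the program while it writes to the console.
void disableQuickEdit()
{
    HANDLE hStdin = GetStdHandle(STD_INPUT_HANDLE);
    DWORD mode = 0;
    if (GetConsoleMode(hStdin, &mode)) {
        mode &= ~ENABLE_QUICK_EDIT_MODE; // disable click-to-select
        mode |= ENABLE_EXTENDED_FLAGS;   // needed for the change to stick
        SetConsoleMode(hStdin, mode);
    }
}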

How to easily pass a very long string to a worker process under Windows?

My native C++ Win32 program spawns a worker process and needs to pass a huge configuration string to it. Currently it just passes the string as a command line to CreateProcess(). The problem is the string is getting longer and now it doesn't fit into the 32K characters limitation imposed by Windows.
Of course I could complicate the worker process startup - I use an RPC server in it anyway, and I could introduce an RPC request for passing the configuration string - but this would require a lot of changes and make the solution less reliable. Saving the data to a file for passing is also not very elegant: the file could be left on the filesystem and become garbage.
What other simple ways are there for passing long strings to a worker process started by my program on Windows?
One possible strategy is to create a named pipe and pass the handle (or pipe name) to the other process, then use normal read/write operations on the pipe to transfer the data.
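A sketch of that strategy (the pipe name is illustrative, and error handling is reduced to early returns):

#include <windows.h>
#include <string>

// Parent: create the pipe, wait for the worker to connect, write the string.
void sendViaPipe(const std::string &config)
{
    HANDLE pipe = CreateNamedPipeW(
        L"\\\\.\\pipe\\MyConfigPipe",
        PIPE_ACCESS_OUTBOUND,           // parent only writes
        PIPE_TYPE_BYTE | PIPE_WAIT,
        1,                              // single instance
        65536, 65536,                   // out/in buffer sizes
        0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return;
    if (ConnectNamedPipe(pipe, NULL)) { // wait for the worker to open it
        DWORD written = 0;
        WriteFile(pipe, config.data(), (DWORD)config.size(), &written, NULL);
    }
    CloseHandle(pipe);
}

// Worker: open the pipe by name and read until EOF.
std::string receiveViaPipe()
{
    std::string config;
    HANDLE pipe = CreateFileW(L"\\\\.\\pipe\\MyConfigPipe",
                              GENERIC_READ, 0, NULL,
                              OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return config;
    char buf[4096];
    DWORD n = 0;
    while (ReadFile(pipe, buf, sizeof(buf), &n, NULL) && n > 0)
        config.append(buf, n);
    CloseHandle(pipe);
    return config;
}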
There are several good answers already, but the easiest way is to save it in a file, and pass the filename in the command line.
As well as being simple, an advantage of this approach is that the apps will be very loosely coupled (you'll potentially be able to use the child application stand-alone in other ways, rather than always having to launch it from a program that knows how to pipe data into it via a specialised interface)
If you want to be sure that the file is cleaned up after processing, mark it for deletion on the next reboot. Then if anybody forgets to clean it up, the OS will deal with it for you on the next reboot.
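A sketch of that cleanup (the path is illustrative; note that MOVEFILE_DELAY_UNTIL_REBOOT requires administrative rights):

#include <windows.h>

// Schedule the temp file for deletion at the next reboot, so a
// forgotten file is eventually cleaned up by the OS.
MoveFileExW(L"C:\\Temp\\worker-config.txt",
            NULL,                          // NULL target = delete the file
            MOVEFILE_DELAY_UNTIL_REBOOT);  // performed at next reboot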
I would prefer Boost's message queue. It's extremely simple yet sophisticated. Here's an example:
#include <boost/interprocess/ipc/message_queue.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/shared_ptr.hpp>
#include <sstream>
#include <string>
#include <windows.h> // for ::Sleep
using namespace boost::interprocess;
// ------------------------------------------------------------------------------
// Your worker: creates the queue and waits (up to one second) for a message.
// ------------------------------------------------------------------------------
try {
    message_queue::remove("NAME_OF_YOUR_QUEUE");
    boost::shared_ptr<message_queue> mq(
        new message_queue(create_only, "NAME_OF_YOUR_QUEUE", 100, 65536));
    char message[65536];
    std::size_t size_received;
    unsigned int priority;
    if (mq->timed_receive(message, sizeof(message), size_received, priority,
            boost::posix_time::ptime(boost::posix_time::second_clock::universal_time())
                + boost::posix_time::seconds(1))) {
        std::string s(message, size_received); // s now contains the message.
    }
} catch (std::exception &) {
    // ...
}
// ------------------------------------------------------------------------------
// And the sender: opens the existing queue (creating it again would throw)
// and retries until the message is accepted.
// ------------------------------------------------------------------------------
try {
    boost::shared_ptr<message_queue> mq(
        new message_queue(open_only, "NAME_OF_YOUR_QUEUE"));
    std::stringstream message;
    message << "the very very very long message you wish to send over";
    while (!mq->try_send(message.str().c_str(), message.str().length(), 0))
        ::Sleep(33);
} catch (std::exception &) {
    // ...
}
Use shared memory: pass the name of the shared memory object to the worker process. Another solution is to use the WM_COPYDATA message.
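A sketch of the WM_COPYDATA variant (hwndWorker, the worker's message window, is assumed to have been found elsewhere, e.g. via FindWindow):

#include <windows.h>
#include <string>

void sendViaCopyData(HWND hwndWorker, const std::string &config)
{
    COPYDATASTRUCT cds;
    cds.dwData = 1;                        // application-defined message id
    cds.cbData = (DWORD)config.size() + 1; // include the terminating NUL
    cds.lpData = (PVOID)config.c_str();
    // SendMessage blocks until the worker's window procedure has copied
    // the data; the wParam is normally the sender's HWND.
    SendMessage(hwndWorker, WM_COPYDATA, 0, (LPARAM)&cds);
}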
How about reading it from stdin :) It seems to work for the Unix folks.
Guaranteed a lot easier than passing pipe names/handles around!
Here is some official code from MSDN for creating child processes with I/O pipes.
Is it possible to set up a named shared memory segment?
http://msdn.microsoft.com/en-us/library/aa366551(VS.85).aspx
You could use an inheritable handle to a section object. In your parent process, create a section object (CreateFileMapping) and specify that its handle is to be inherited by the child process; then pass the handle value to the child process on the command line. The child process can then use the inherited handle directly with MapViewOfFile. Though I would prefer a named section object, as the semantics of using it are easier to understand.
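A sketch of the inheritable-handle variant (worker.exe and the handle formatting are illustrative, and error handling is omitted):

#include <windows.h>
#include <cstdint>
#include <cstdio>

// Parent: create an inheritable section and pass the handle value to the
// child on its command line.
void spawnWithSection(size_t size)
{
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE }; // TRUE = inheritable
    HANDLE hSection = CreateFileMappingW(INVALID_HANDLE_VALUE, &sa,
                                         PAGE_READWRITE, 0, (DWORD)size, NULL);

    wchar_t cmdline[256];
    swprintf(cmdline, 256, L"worker.exe %llu",
             (unsigned long long)(uintptr_t)hSection);

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    // bInheritHandles = TRUE so the child actually receives hSection.
    CreateProcessW(NULL, cmdline, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi);
}

// Child: recover the handle from argv[1] and map the view directly.
//   HANDLE hSection = (HANDLE)(uintptr_t)wcstoull(argv[1], NULL, 10);
//   void  *view     = MapViewOfFile(hSection, FILE_MAP_ALL_ACCESS, 0, 0, 0);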