I am wondering what the best practice is for executing new processes (programs) from a running process. To be more specific, I am implementing a C/C++ job scheduler that has to run multiple binaries while communicating with them. Is fork/exec the common approach, or is there a library that takes care of this?
You can use popen() to spawn the processes and communicate with them. In order to handle communication with many processes from a single parent process, use select() or poll() to multiplex the reading/writing of the file descriptors given to you by popen() (you can use fileno() to turn a FILE* into an integer file descriptor).
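For instance, here is a minimal sketch of that approach (the two commands are placeholders). It reads the raw descriptors with read() rather than fgets(), so that stdio buffering cannot hold data that select() does not know about:
// Multiplex the stdout of two popen()ed children with select().
#include <cstdio>
#include <unistd.h>
#include <sys/select.h>

int main() {
    FILE *procs[2] = { popen("ls /", "r"), popen("ls /tmp", "r") }; // placeholder commands
    int open_count = 2;
    while (open_count > 0) {
        fd_set fds;
        FD_ZERO(&fds);
        int maxfd = -1;
        for (FILE *p : procs)
            if (p) { FD_SET(fileno(p), &fds); if (fileno(p) > maxfd) maxfd = fileno(p); }
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) == -1)
            break;
        for (FILE *&p : procs) {
            if (!p || !FD_ISSET(fileno(p), &fds))
                continue;
            char buf[4096];
            ssize_t n = read(fileno(p), buf, sizeof buf);
            if (n > 0) fwrite(buf, 1, n, stdout);
            else { pclose(p); p = NULL; --open_count; } // EOF: that child is done
        }
    }
    return 0;
}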
If you want a library to abstract much of this for you, I suggest libuv. Here's a complete example program I whipped up, largely following the docs at https://nikhilm.github.io/uvbook/processes.html#spawning-child-processes:
#include <cstdio>
#include <cstdlib>
#include <inttypes.h>
#include <unistd.h> // for STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO
#include <uv.h>
static void alloc_buffer(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf)
{
*buf = uv_buf_init((char*)malloc(suggested_size), suggested_size);
}
void echo_read(uv_stream_t *stream, ssize_t nread, const uv_buf_t* buf)
{
    if (nread < 0) { // libuv reports errors and EOF as negative nread
        if (nread != UV_EOF)
            fprintf(stderr, "error echo_read\n");
        uv_close((uv_handle_t*)stream, NULL);
    } else if (nread > 0) {
        fwrite(buf->base, 1, nread, stdout); // buf->base is not NUL-terminated
    }
    free(buf->base); // allocated by alloc_buffer
}
static void on_exit(uv_process_t *req, int64_t exit_status, int term_signal)
{
fprintf(stderr, "Process %d exited with status %" PRId64 ", signal %d\n",
req->pid, exit_status, term_signal);
uv_close((uv_handle_t*)req, NULL);
}
int main()
{
uv_loop_t* loop = uv_default_loop();
const int N = 3;
uv_pipe_t channel[N];
uv_process_t child_req[N];
for (int ii = 0; ii < N; ++ii) {
char* args[3];
args[0] = const_cast<char*>("ls");
args[1] = const_cast<char*>(".");
args[2] = NULL;
uv_pipe_init(loop, &channel[ii], 0); // 0: plain data stream, not an IPC (handle-passing) pipe
uv_stdio_container_t child_stdio[3]; // {stdin, stdout, stderr}
child_stdio[STDIN_FILENO].flags = UV_IGNORE;
child_stdio[STDOUT_FILENO].flags = uv_stdio_flags(UV_CREATE_PIPE | UV_WRITABLE_PIPE);
child_stdio[STDOUT_FILENO].data.stream = (uv_stream_t*)&channel[ii];
child_stdio[STDERR_FILENO].flags = UV_IGNORE;
uv_process_options_t options = {};
options.exit_cb = on_exit;
options.file = "ls";
options.args = args;
options.stdio = child_stdio;
options.stdio_count = sizeof(child_stdio) / sizeof(child_stdio[0]);
int r;
if ((r = uv_spawn(loop, &child_req[ii], &options))) {
fprintf(stderr, "%s\n", uv_strerror(r));
return EXIT_FAILURE;
} else {
fprintf(stderr, "Launched process with ID %d\n", child_req[ii].pid);
uv_read_start((uv_stream_t*)&channel[ii], alloc_buffer, echo_read);
}
}
return uv_run(loop, UV_RUN_DEFAULT);
}
The above will spawn three copies of ls to print the contents of the current directory. They all run asynchronously.
Okay, let's start.
There are a few ways to create another parallel task from one task, although I wouldn't call all of them processes.
Using the fork() system call
As you have already mentioned, fork() creates a child process from your parent process. There are a few good things and a few bad things about fork().
Good things
fork() creates a completely separate process & on multi-core CPU systems it can achieve true parallelism.
fork() also creates a child process with a different pid, which is handy if you ever want to kill that process explicitly.
The wait() & waitpid() system calls make it easy for the parent to wait for the child.
fork() generates a SIGCHLD signal when the child terminates, and with the sigaction function the parent can handle the child's exit without blocking.
Bad things
Forked processes do not share the same address space, so if one process has, say, a variable var, the other process cannot directly access that same var. Hence communication is a big issue.
To communicate you need to use IPC mechanisms like pipes, named pipes, message queues or shared memory.
Out of these, pipes, named pipes and message queues can use the read & write system calls, and because read & write are blocking system calls your application stays synchronized, but these IPCs are comparatively slow. The only fast IPC is shared memory, but it cannot use read & write, so you need your own synchronization mechanism, such as semaphores, and implementing semaphores correctly in bigger applications is difficult.
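For instance, a minimal sketch of the pipe mechanism between a parent and a forked child (all names are illustrative):
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    int fd[2];
    if (pipe(fd) == -1) return 1;   // fd[0] = read end, fd[1] = write end
    if (fork() == 0) {              // child: send a message to the parent
        close(fd[0]);
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                   // parent: read() blocks until data arrives
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("parent got: %s\n", buf); }
    close(fd[0]);
    wait(NULL);                     // reap the child
    return 0;
}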
Here comes pthread
Threads remove most of the difficulties faced with fork().
It doesn't create a separate process.
Instead, it creates lightweight subtasks which can run almost in parallel.
They all share the same address space, so there is no need for any IPC.
They come with mutexes, which are wonderful for any synchronization needed, even in bigger applications.
Threads also don't create a new process; all threads are part of the same process and hence share the same pid.
Note: In C++, std::thread is part of the standard library, not a system call.
Note 2: Boost threads in C++ are much more mature and often recommended.
The main idea, though, is to know when to use a thread & when to use a process.
If you need to create a subtask that doesn't need to work with other tasks and must work in isolation, use a process; otherwise use a thread.
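For comparison, a minimal sketch of the thread route using std::thread and a mutex from the C++ standard library:
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    long total = 0;
    std::mutex m;                   // protects `total`, which all threads share
    std::vector<std::thread> pool;
    for (int i = 1; i <= 4; ++i)
        pool.emplace_back([&total, &m, i] {
            std::lock_guard<std::mutex> lock(m); // same address space: no IPC needed
            total += i;
        });
    for (auto &t : pool) t.join();  // like wait(), but for threads
    std::cout << "total = " << total << "\n";
    return 0;
}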
The exec family of syscalls is different: exec keeps your same pid but replaces your process image. So if you create an application of, say, 500 lines and hit an exec call at line number 250, the new program is pasted over your whole process, and after the exec call your program will not resume from line 251. Also, exec calls don't flush your stdio buffers.
But yes, if you intend to create a separate process and then use an exec call inside it to perform that task and come out, you are welcome to do so; just remember to set up IPC to capture the output, otherwise it is of no use.
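A minimal sketch of that fork-then-exec pattern, using a pipe as the IPC to capture the child's output (the ls command is just a placeholder):
#include <cstdio>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    int fd[2];
    if (pipe(fd) == -1) return 1;
    pid_t pid = fork();
    if (pid == 0) {                              // child
        dup2(fd[1], STDOUT_FILENO);              // child's stdout goes into the pipe
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", ".", (char *)NULL);   // replace the child's image
        _exit(127);                              // reached only if exec fails
    }
    close(fd[1]);                                // parent: harvest the output
    char buf[4096];
    ssize_t n;
    while ((n = read(fd[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);
    close(fd[0]);
    waitpid(pid, NULL, 0);                       // avoid leaving a zombie
    return 0;
}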
For more info on fork click here
For more info on thread click here
For Boost thread click here
@John Zwinck's answer is also good. I know little about the select() system call, but yes, it is possible that way too.
Edited: as @Jonathan Leffler pointed out.
Editing after a long time: after some years I no longer think of using all these spooky libraries or gruesome ways of parallel, or should I say seemingly parallel, processing. Enter coroutines, the future of concurrent processing. Look at the following Go code. Sure, this is possible in C/C++ too. This code would hardly be a few milliseconds slower for 7.7 million rows in a database than its C/C++ thread-based implementation, but it is several times more manageable and scalable.
package main
import (
"fmt"
"reflect"
"github.com/jinzhu/gorm"
_ "github.com/jinzhu/gorm/dialects/sqlite"
)
type AirQuality struct {
// gorm.Model
// ID uint `gorm:"column:id"`
Index string `gorm:"column:index"`
BEN string `gorm:"column:BEN"`
CH4 string `gorm:"column:CH4"`
CO string `gorm:"column:CO"`
EBE string `gorm:"column:EBE"`
MXY string `gorm:"column:MXY"`
NMHC string `gorm:"column:NMHC"`
NO string `gorm:"column:NO"`
NO2 string `gorm:"column:NO_2"`
NOX string `gorm:"column:NOx"`
OXY string `gorm:"column:OXY"`
O3 string `gorm:"column:O_3"`
PM10 string `gorm:"column:PM10"`
PM25 string `gorm:"column:PM25"`
PXY string `gorm:"column:PXY"`
SO2 string `gorm:"column:SO_2"`
TCH string `gorm:"column:TCH"`
TOL string `gorm:"column:TOL"`
Time string `gorm:"column:date; type:timestamp"`
Station string `gorm:"column:station"`
}
func (AirQuality) TableName() string {
return "AQ"
}
func main() {
c := generateRowsConcurrent("boring!!")
for row := range c {
fmt.Println(row)
}
}
func generateRowsConcurrent(msg string) <-chan []string {
c := make(chan []string)
go func() {
db, err := gorm.Open("sqlite3", "./load_testing_7.6m.db")
if err != nil {
panic("failed to connect database")
}
defer db.Close()
rows, err := db.Model(&AirQuality{}).Limit(20).Rows()
if err != nil {
    panic(err)
}
defer rows.Close() // defer only after the error check; rows may be nil on error
for rows.Next() {
var aq AirQuality
db.ScanRows(rows, &aq)
v := reflect.Indirect(reflect.ValueOf(aq))
var buf []string
for i := 0; i < v.NumField(); i++ {
buf = append(buf, v.Field(i).String())
}
c <- buf
}
close(c) // all rows sent; close the channel so the range loop in main terminates
}()
return c
}
Related
I am launching a command using the system() API (I am OK with using this API from C/C++). The command I pass may hang at times, so I would like to kill it after a certain timeout.
Currently I am using it as:
system("COMMAND");
I want to use it something like this:
Run a command using a system-independent API (I don't want to use CreateProcess, since it is Windows-only).
Kill the process if it does not exit after X minutes.
There cannot be a fully platform-independent way of solving your problem, because killing another process is outside the scope of standard C. However, system() is also a POSIX call, so if it is supported on any given platform, the rest of the POSIX API should be as well. So, one way to solve your problem is to use fork() and kill().
There is a complication in that system() invokes a shell, which will probably spawn other processes, and I presume you want to kill all of them, so one way to do that is to use a process group. The basic idea is use fork() to create another process, place it in its own process group, and kill that group if it doesn't exit after a certain time.
A simple example: the program forks; the child process sets its own process group to be the same as its process ID and uses system() to spawn an endless loop. The parent process waits 10 seconds, then kills the process group using the negative value of the child's PID. This kills the forked process and any children of that process (unless they have changed their process group).
Since the parent process is in a different group, the kill() has no effect on it.
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>
int main() {
pid_t pid;
pid = fork();
if(pid == 0) { // child process
setpgid(getpid(), getpid());
system("while true ; do echo xx ; sleep 5; done");
} else { // parent process
sleep(10);
printf("Sleep returned\n");
kill(-pid, SIGKILL);
printf("killed process group %d\n", pid);
}
exit(0);
}
There is no standard, cross-platform system API. The hint is that they are system APIs! We're actually "lucky" that we get system, but we don't get anything other than that.
You could try to find some third-party abstraction.
Below is a C++ thread-based attempt for Linux (not tested).
#include <iostream>
#include <string>
#include <thread>
#include <chrono> // for chrono::milliseconds used below
#include <stdio.h>
using namespace std;
// execute system command and get output
// http://stackoverflow.com/questions/478898/how-to-execute-a-command-and-get-output-of-command-within-c
std::string exec(const char* cmd) {
FILE* pipe = popen(cmd, "r");
if (!pipe) return "ERROR";
char buffer[128];
std::string result = "";
    while (fgets(buffer, sizeof buffer, pipe) != NULL) {
        result += buffer;
    }
pclose(pipe);
return result;
}
void system_task(string& cmd){
exec(cmd.c_str());
}
int main(){
// system commad that takes time
string command = "find /";
// run the command in a separate thread
std::thread t1(system_task, std::ref(command));
// gives some time for the system task
std::this_thread::sleep_for(chrono::milliseconds(200));
// get the process id of the system task
// pgrep matches the process name only, so pass just the program name, not its arguments
string query_command = "pgrep -u $LOGNAME " + command.substr(0, command.find(' '));
string process_id = exec(query_command.c_str());
// kill system task
cout << "killing process " << process_id << "..." << endl;
string kill_command = "kill " + process_id;
exec(kill_command.c_str());
if (t1.joinable())
t1.join();
cout << "continue work on main thread" << endl;
return 0;
}
I had a similar problem, in a Qt/QML development: I want to start a bash command, while continuing to process events on the Qt Loop, and killing the bash command if it takes too long.
I came up with the following class that I'm sharing here (see below), in hope it may be of some use to people with a similar problem.
Instead of calling a 'kill' command, I call a cleanupCommand supplied by the developer. Example: if I'm to call myScript.sh and want to make sure it won't run for more than 10 seconds, I'll call it the following way:
SystemWithTimeout systemWithTimeout("myScript.sh", 10, "killall myScript.sh");
systemWithTimeout.start();
Code:
class SystemWithTimeout {
private:
std::atomic<bool> m_childFinished {false}; // atomic: written by the worker thread, read by start()
QString m_childCommand ;
int m_seconds ;
QString m_cleanupCmd ;
int m_period;
void startChild(void) {
int rc = system(m_childCommand.toUtf8().data());
if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout startChild: system returned %d", rc);
m_childFinished = true ;
}
public:
SystemWithTimeout(QString cmd, int seconds, QString cleanupCmd)
: m_childFinished {false}, m_childCommand {cmd}, m_seconds {seconds}, m_cleanupCmd {cleanupCmd}
{ m_period = 200; }
void setPeriod(int period) {m_period = period;}
void start(void) ;
};
void SystemWithTimeout::start(void)
{
m_childFinished = false ; // re-arm the boolean for 2nd and later calls to 'start'
qDebug()<<"systemWithTimeout"<<m_childCommand<<m_seconds;
QTime dieTime= QTime::currentTime().addSecs(m_seconds);
std::thread child(&SystemWithTimeout::startChild, this);
child.detach();
while (!m_childFinished && QTime::currentTime() < dieTime)
{
QTime then = QTime::currentTime();
QCoreApplication::processEvents(QEventLoop::AllEvents, m_period); // Process events during up to m_period ms (default: 200ms)
QTime now = QTime::currentTime();
int waitTime = m_period - then.msecsTo(now);
if (waitTime > 0)
    QThread::msleep(waitTime); // wait out the remainder of the period; msleep takes an unsigned value, so skip negative waits
}
if (!m_childFinished)
{
SYSLOG(LOG_NOTICE, "Killing command <%s> after timeout reached (%d seconds)", m_childCommand.toUtf8().data(), m_seconds);
int rc = system(m_cleanupCmd.toUtf8().data());
if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout 164: system returned %d", rc);
m_childFinished = true ;
}
}
I do not know of any portable way to do that in C or C++. As you ask for alternatives, I know it is possible in other languages. For example in Python, it is possible using the subprocess module.
import subprocess
cmd = subprocess.Popen("COMMAND", shell = True)
You can then test if COMMAND has ended with
if cmd.poll() is not None:
# cmd has finished
and you can kill it with :
cmd.terminate()
Even if you prefer to use the C language, you should read the documentation for the subprocess module, because it explains that internally it uses CreateProcess on Windows and os.execvp on POSIX systems to start the command, and it uses TerminateProcess on Windows and SIGTERM on POSIX to stop it.
I keep running into this problem of trying to run a thread with the following properties:
runs in an infinite loop, checking some external resource, e.g. data from the network or a device,
gets updates from its resource promptly,
exits promptly when asked to,
uses the CPU efficiently.
First approach
One solution I have seen for this is something like the following:
void class::run()
{
while(!exit_flag)
{
if (resource_ready)
use_resource();
}
}
This satisfies points 1, 2 and 3, but being a busy waiting loop, uses 100% CPU.
Second approach
A potential fix for this is to put a sleep statement in:
void class::run()
{
while(!exit_flag)
{
if (resource_ready)
use_resource();
else
sleep(a_short_while);
}
}
We now don't hammer the CPU, so we address 1 and 4, but we could wait up to a_short_while unnecessarily when the resource is ready or we are asked to quit.
Third approach
A third option is to do a blocking read on the resource:
void class::run()
{
while(!exit_flag)
{
obtain_resource();
use_resource();
}
}
This will satisfy 1, 2, and 4 elegantly, but now we can't ask the thread to quit if the resource does not become available.
Question
The best approach seems to be the second one, with a short sleep, so long as the tradeoff between CPU usage and responsiveness can be achieved.
However, this still seems suboptimal, and inelegant to me. This seems like it would be a common problem to solve. Is there a more elegant way to solve it? Is there an approach which can address all four of those requirements?
This depends on the specifics of the resources the thread is accessing, but basically, to do it efficiently with minimal latency, the resource needs to provide an API for doing an interruptible blocking wait.
On POSIX systems, you can use the select(2) or poll(2) system calls to do that, if the resources you're using are files or file descriptors (including sockets). To allow the wait to be preempted, you also create a dummy pipe which you can write to.
For example, here's how you might wait for a file descriptor or socket to become ready or for the code to be interrupted:
// Dummy pipe used for sending interrupt message
int interrupt_pipe[2];
int should_exit = 0;
void class::run()
{
// Set up the interrupt pipe
if (pipe(interrupt_pipe) != 0)
; // Handle error
int fd = ...; // File descriptor or socket etc.
while (!should_exit)
{
// Set up a file descriptor set with fd and the read end of the dummy
// pipe in it
fd_set fds;
FD_ZERO(&fds); // FD_ZERO initializes the set (FD_CLR only removes a single fd)
FD_SET(fd, &fds);
FD_SET(interrupt_pipe[0], &fds); // watch the *read* end of the pipe
int maxfd = max(fd, interrupt_pipe[0]);
// Wait until one of the file descriptors is ready to be read
int num_ready = select(maxfd + 1, &fds, NULL, NULL, NULL);
if (num_ready == -1)
; // Handle error
if (FD_ISSET(fd, &fds))
{
// fd can now be read/recv'ed from without blocking
read(fd, ...);
}
}
}
void class::interrupt()
{
should_exit = 1;
// Send a dummy message to the pipe to wake up the select() call
char msg = 0;
write(interrupt_pipe[1], &msg, 1); // write to the *write* end of the pipe
}
class::~class()
{
// Clean up pipe etc.
close(interrupt_pipe[0]);
close(interrupt_pipe[1]);
}
If you're on Windows, the select() function still works for sockets, but only for sockets, so you should instead use WaitForMultipleObjects to wait on a resource handle and an event handle. For example:
// Event used for sending interrupt message
HANDLE interrupt_event;
int should_exit = 0;
void class::run()
{
// Set up the interrupt event as an auto-reset event
interrupt_event = CreateEvent(NULL, FALSE, FALSE, NULL);
if (interrupt_event == NULL)
; // Handle error
HANDLE resource = ...; // File or resource handle etc.
while (!should_exit)
{
// Wait until one of the handles becomes signaled
HANDLE handles[2] = {resource, interrupt_event};
int which_ready = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
if (which_ready == WAIT_FAILED)
; // Handle error
else if (which_ready == WAIT_OBJECT_0)
{
// resource can now be read from without blocking
ReadFile(resource, ...);
}
}
}
void class::interrupt()
{
// Signal the event to wake up the waiting thread
should_exit = 1;
SetEvent(interrupt_event);
}
class::~class()
{
// Clean up event etc.
CloseHandle(interrupt_event);
}
You get an efficient solution if your obtain_resource() function supports a timeout value:
while(!exit_flag)
{
obtain_resource_with_timeout(a_short_while);
if (resource_ready)
use_resource();
}
This effectively combines the sleep() with the obtain_resource() call.
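If the resource doesn't provide such a call natively, you can often build one yourself; here is a sketch using std::condition_variable::wait_for (the names mirror the pseudocode above and are otherwise illustrative):
#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool resource_ready = false; // set to true by the producer while holding `m`

// Returns true if the resource became ready within the timeout.
bool obtain_resource_with_timeout(std::chrono::milliseconds a_short_while) {
    std::unique_lock<std::mutex> lock(m);
    return cv.wait_for(lock, a_short_while, [] { return resource_ready; });
}
The quit request can be modelled the same way: make the predicate resource_ready || exit_flag and notify the condition variable when setting either flag.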
Check out the manpage for nanosleep:
If the nanosleep() function returns because it has been interrupted by a signal, the function returns a value of -1 and sets errno to indicate the interruption.
In other words, you can interrupt sleeping threads by sending a signal (the sleep manpage says something similar). This means you can use your 2nd approach, and use an interrupt to immediately wake the thread if it's sleeping.
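A sketch of that idea: install a no-op handler without SA_RESTART, and use pthread_kill() to cut the nanosleep() short (the signal choice and timings are illustrative):
#include <cstdio>
#include <csignal>
#include <ctime>
#include <pthread.h>
#include <unistd.h>

static void wakeup(int) {} // no-op: its only job is to interrupt the sleep

static void *worker(void *) {
    struct timespec t = {3600, 0};     // would sleep for an hour...
    if (nanosleep(&t, NULL) == -1)     // ...but returns -1/EINTR on a signal
        puts("woken early by a signal");
    return NULL;
}

int main() {
    struct sigaction sa = {};
    sa.sa_handler = wakeup;            // note: SA_RESTART deliberately not set
    sigaction(SIGUSR1, &sa, NULL);
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_kill(tid, SIGUSR1);        // interrupt the sleeping thread
    pthread_join(tid, NULL);
    return 0;
}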
Use the Gang of Four Observer Pattern:
http://home.comcast.net/~codewrangler/tech_info/patterns_code.html#Observer
Callback, don't block.
Self-Pipe trick can be used here.
http://cr.yp.to/docs/selfpipe.html
Assuming that you are reading the data from a file descriptor: create a pipe and select() for readability on the pipe's read end as well as on the resource you are interested in.
Then, when data arrives on the resource, the thread wakes up and does the processing; otherwise it sleeps.
To terminate the thread, send it a signal, and in the signal handler write something to the pipe (something that will never come from the resource you are interested in, e.g. a NUL byte, to illustrate the point). The select call returns, the thread reads the input, knows that it got the poison pill and that it is time to exit, and calls pthread_exit().
EDIT: A better way is simply to observe that data arrived on the pipe, and exit on that alone, rather than checking the value that came through the pipe.
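A sketch of the self-pipe trick (stdin stands in for the real resource here): the handler only write()s a byte, which is async-signal-safe, and the select() loop treats readability of the pipe as the exit condition:
#include <cstdio>
#include <csignal>
#include <unistd.h>
#include <sys/select.h>

static int self_pipe[2]; // [0] = read end watched by select(), [1] = write end

static void on_signal(int) {
    char b = 0;
    write(self_pipe[1], &b, 1); // write() is async-signal-safe
}

int main() {
    pipe(self_pipe);
    struct sigaction sa = {};
    sa.sa_handler = on_signal;
    sigaction(SIGTERM, &sa, NULL);

    int resource_fd = STDIN_FILENO; // stand-in for the real resource
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(resource_fd, &fds);
        FD_SET(self_pipe[0], &fds);
        int maxfd = resource_fd > self_pipe[0] ? resource_fd : self_pipe[0];
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) == -1)
            continue; // EINTR: loop again; the pipe will be readable if we must exit
        if (FD_ISSET(self_pipe[0], &fds))
            break; // poison pill received: exit without inspecting the byte
        if (FD_ISSET(resource_fd, &fds)) {
            char buf[256];
            ssize_t n = read(resource_fd, buf, sizeof buf);
            if (n <= 0) break;
            fwrite(buf, 1, n, stdout); // process the data
        }
    }
    puts("exiting cleanly");
    return 0;
}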
The Win32 API uses more or less this approach:
someThreadLoop( ... )
{
MSG msg;
int retVal;
while( (retVal = ::GetMessage( &msg, TaskContext::winHandle_, 0, 0 )) > 0 )
{
::TranslateMessage( &msg );
::DispatchMessage( &msg );
}
}
GetMessage itself blocks until a message of any type is received, and therefore uses no processing while waiting (refer to the documentation). If WM_QUIT is received, it returns FALSE, exiting the thread function gracefully. This is a variant of the producer/consumer mentioned elsewhere.
You can use any variant of producer/consumer, and the pattern is often similar. One could argue that you would want to split the responsibilities of quitting and obtaining a resource, but OTOH quitting could depend on obtaining a resource too (or could be regarded as one of the resources, albeit a special one). I would at least abstract the producer/consumer pattern and have various implementations thereof.
Therefore:
AbstractConsumer:
void AbstractConsumer::threadHandler()
{
do
{
try
{
process( dequeNextCommand() );
}
catch( const base_except& ex )
{
log( ex );
if( ex.isCritical() ){ throw; }
//else we don't want loop to exit...
}
catch( const std::exception& ex )
{
log( ex );
throw;
}
}
while( !terminated() );
}
virtual void /*AbstractConsumer::*/process( std::unique_ptr<Command>&& command ) = 0;
//Note:
// Either may or may not block until resource arrives, but typically blocks on
// a queue that is signalled as soon as a resource is available.
virtual std::unique_ptr<Command> /*AbstractConsumer::*/dequeNextCommand() = 0;
virtual bool /*AbstractConsumer::*/terminated() const = 0;
I usually encapsulate a command to execute a function in the context of the consumer, but the pattern in the consumer is always the same.
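For illustration, a sketch of one concrete dequeNextCommand(): a queue guarded by a mutex and signalled by a condition variable. Command's contents and the enqueue() producer side are assumptions, not part of the original interface:
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>

struct Command { /* ... */ };

class QueueConsumer /* : public AbstractConsumer */ {
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::unique_ptr<Command>> q_;

public:
    // Producers call this from any thread; a special quit Command can act as the poison pill.
    void enqueue(std::unique_ptr<Command> cmd) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(cmd)); }
        cv_.notify_one(); // wake the consumer blocked in dequeNextCommand()
    }

    std::unique_ptr<Command> dequeNextCommand() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); }); // blocks, no busy-wait
        std::unique_ptr<Command> cmd = std::move(q_.front());
        q_.pop();
        return cmd;
    }
};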
Any (well, at least most) of the approaches mentioned above do the following: a thread is created, then it blocks waiting for the resource, then it is deleted.
If you're worried about efficiency, this is not the best approach when waiting for IO. On Windows at least, you'll allocate around 1 MB of user-mode memory, plus some kernel memory, for just one additional thread. What if you have many such resources? Having many waiting threads will also increase context switches and slow down your program. What if the resource takes longer to become available and many requests are made? You may end up with tons of waiting threads.
Now, the solution (again, on Windows, but I'm sure there should be something similar on other OSes) is to use a thread pool (the one provided by Windows). On Windows this will not only create a limited number of threads, it will also detect when a thread is waiting for IO and will steal it and reuse it for other operations while it waits.
See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686766(v=vs.85).aspx
Also, for more fine-grained control, while still being able to give up the thread when waiting for IO, see IO completion ports (I think they use the thread pool internally anyway): http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx
I was wondering if C++ had any way of doing interrupts. I want one program to store information in a text file, while the other one prints a statement depending on what is in the text file. Since I want it to be as accurate as possible, I need the print program to be interrupted when the update program updates the file.
C++ itself doesn't give this capability, it knows nothing of other programs that may or may not be running.
What you need to look into is IPC (inter-process communications), something your operating system will probably provide.
Things like signals, shared memory, semaphores, message queues and so on.
Since you seem to be using the file itself as the method of delivering content to the other process, signals are probably the way to go. You would simply raise a signal from process A to process B and a signal handler would run in the latter.
Of course this all depends on which operating system you're targeting.
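On POSIX systems, the signal route could look like the following sketch (the signal number and the "re-read the file" reaction are illustrative):
// Process B: waits for SIGUSR1, then reacts to the updated file.
#include <cstdio>
#include <csignal>
#include <unistd.h>

static volatile sig_atomic_t file_updated = 0;

static void on_usr1(int) { file_updated = 1; } // keep signal handlers tiny

int main() {
    struct sigaction sa = {};
    sa.sa_handler = on_usr1;
    sigaction(SIGUSR1, &sa, NULL);
    printf("pid %d waiting for SIGUSR1...\n", (int)getpid());
    for (;;) {
        pause(); // sleeps until any signal arrives
        if (file_updated) {
            file_updated = 0;
            puts("file changed; re-reading it now");
            // ... open the file and print the statement here ...
        }
    }
}
// Process A would notify it after writing the file with: kill(pid_of_B, SIGUSR1);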
If you are using Windows you can use FindFirstChangeNotification.
Here's some old code I have. This is run in its own thread:
DWORD CDiskWatcher::Run(void *vpParameter)
{
CFileNotifyInterface *pIface = (CFileNotifyInterface *)vpParameter;
HANDLE handles[2];
handles[0] = m_hQuitEvent;
handles[1] = ::FindFirstChangeNotification(m_szPath, FALSE, FILE_NOTIFY_CHANGE_LAST_WRITE|FILE_NOTIFY_CHANGE_FILE_NAME);
DWORD dwObject;
if (INVALID_HANDLE_VALUE != handles[1]) {
do {
// Wait for the notification
dwObject = ::WaitForMultipleObjects(2, handles, FALSE, INFINITE);
if (WAIT_OBJECT_0 + 1 == dwObject) {
// Continue waiting...
::FindNextChangeNotification(handles[1]);
pIface->FireFileSystemChange(m_szPath);
}
} while (WAIT_OBJECT_0 != dwObject);
// Close handle
::FindCloseChangeNotification(handles[1]);
}
return 0;
}
Note m_hQuitEvent is created with CreateEvent() and CFileNotifyInterface is for callbacks:
class CFileNotifyInterface
{
public:
virtual void FireFileSystemChange(const char *szPath) = 0;
};
I need to run an external program from within a C++ application. I need the output from that program (I want to see it while the program is still running) and it also needs to get input.
What is the best and most elegant way to redirect the IO? Should it run in its own thread? Any examples?
It's running on OSX.
I implemented it like this:
ProgramHandler::ProgramHandler(std::string prog): program(prog){
// Create two pipes
std::cout << "Created Class\n";
pipe(pipe1);
pipe(pipe2);
int id = fork();
std::cout << "id: " << id << std::endl;
if (id == 0)
{
// In child
// Close current `stdin` and `stdout` file handles
close(fileno(stdin));
close(fileno(stdout));
// Duplicate pipes as new `stdin` and `stdout`
dup2(pipe1[0], fileno(stdin));
dup2(pipe2[1], fileno(stdout));
// We don't need the other ends of the pipes, so close them
close(pipe1[1]);
close(pipe2[0]);
// Run the external program
execl("/bin/ls", "bin/ls");
char buffer[30];
while (read(pipe1[0], buffer, 30)) {
std::cout << "Buf: " << buffer << std::endl;
}
}
else
{
// We don't need the read-end of the first pipe (the childs `stdin`)
// or the write-end of the second pipe (the childs `stdout`)
close(pipe1[0]);
close(pipe2[1]);
// Now you can write to `pipe1[1]` and it will end up as `stdin` in the child
// Read from `pipe2[0]` to read from the childs `stdout`
}
}
But as output I get this:
Created Class
id: 84369
id: 0
I don't understand why it is called twice and why it won't fork the first time. What am I doing/understanding wrong?
If using a POSIX system (like OSX or Linux) then you have to learn the system calls pipe, fork, close, dup2 and exec.
What you do is create two pipes, one for reading from the external application and one for writing. Then you fork to create a new process, and in the child process you set up the pipes as stdin and stdout and then call exec, which replaces the child process with the external program, using your new stdin and stdout file handles. In the parent process you can now read the output from the child process and write to its input.
In pseudo-code:
// Create two pipes
pipe(pipe1);
pipe(pipe2);
if (fork() == 0)
{
// In child
// Close current `stdin` and `stdout` file handles
close(STDIN_FILENO);
close(STDOUT_FILENO);
// Duplicate pipes as new `stdin` and `stdout`
dup2(pipe1[0], STDIN_FILENO);
dup2(pipe2[1], STDOUT_FILENO);
// We don't need the other ends of the pipes, so close them
close(pipe1[1]);
close(pipe2[0]);
// Run the external program
exec("/some/program", ...);
}
else
{
// We don't need the read-end of the first pipe (the childs `stdin`)
// or the write-end of the second pipe (the childs `stdout`)
close(pipe1[0]);
close(pipe2[1]);
// Now you can write to `pipe1[1]` and it will end up as `stdin` in the child
// Read from `pipe2[0]` to read from the childs `stdout`
}
Read the manual pages of the system calls for more information about them. You also need to add error checking as all of these system calls may fail.
Well, there is a pretty standard way to do this. In general you would like to fork the process and close the standard I/O (fds 0 and 1) of the child. Before forking, create two pipes; after forking, close the standard input and output in the child and connect them to the pipes using dup.
Pseudo-code, showing only one side of the connection; I'm sure you can figure out the other side.
int main(){
int fd[2]; // file descriptors
pipe(fd);
// Fork child process
if (fork() == 0){
close(1); // close stdout
dup(fd[1]); // dup takes the first free descriptor, i.e. the 1 you just closed
close(fd[1]); // clean up
}else{
close(0);
dup(fd[0]);
close(fd[0]);
}
return 0;
}
After you have the pipes set up and one of the parent's threads waiting on a select() or something, you can call exec for your external tool and have all the data flowing.
The basic approach to communicate with a different program on POSIX systems is to setup a pipe(), then fork() your program, close() and dup() file descriptors into the correct location, and finally to exec??() the desired executable.
Once this is done, you have your two programs connected with suitable streams. Unfortunately, this doesn't deal with any form of asynchronous processing of the two programs. That is, you will likely want to access the created file descriptors with suitable asynchronous and non-blocking operations (i.e., set up the various file descriptors to be non-blocking and/or access them only when poll() indicates that you can). If there is just the one executable, it may be easier to control it from a separate thread, though.
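A sketch of that non-blocking setup on the parent's pipe ends, using fcntl() and poll() (the descriptor names are assumptions referring to the pipes created above):
#include <cstdio>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

// child_stdout_fd / child_stdin_fd: the parent's ends of the two pipes
// created by the pipe()/fork()/dup2() dance described above.
void pump(int child_stdout_fd, int child_stdin_fd) {
    fcntl(child_stdout_fd, F_SETFL, O_NONBLOCK); // reads will never block
    fcntl(child_stdin_fd, F_SETFL, O_NONBLOCK);  // writes will never block

    struct pollfd fds[2] = {
        { child_stdout_fd, POLLIN,  0 },
        { child_stdin_fd,  POLLOUT, 0 }, // real code would set POLLOUT only while input is pending
    };
    while (poll(fds, 2, -1) > 0) {
        if (fds[0].revents & POLLIN) {           // child produced output
            char buf[4096];
            ssize_t n = read(child_stdout_fd, buf, sizeof buf);
            if (n <= 0) break;                   // EOF or error: child is gone
            fwrite(buf, 1, n, stdout);
        }
        if (fds[1].revents & POLLOUT) {
            // safe to write() pending input for the child here without blocking
        }
        if (fds[0].revents & (POLLHUP | POLLERR))
            break;
    }
}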
A different approach (if you are also writing the external program) is to use shared memory. Something along the lines of this (pseudo-code):
// create shared memory
int l_shmid = shmget(key, size ,0600 | IPC_CREAT);
if(l_shmid < 0)
ERROR
// attach to shared memory
dataptr* ptr = (dataptr*)shmat(l_shmid, NULL, 0); // third argument is flags (e.g. SHM_RDONLY), not permissions
// run external program
pid_t l_pid = fork();
if(l_pid == (pid_t)-1)
{
ERROR
// detach & delete shared mem
shmdt(ptr);
shmctl(l_shmid,
IPC_RMID,
(shmid_ds *)NULL);
return;
}
else if(l_pid == 0)
{
// child:
execl(path,
args,
NULL);
return;
}
// wait for the external program to finish
int l_stat(0);
waitpid(l_pid, &l_stat, 0);
// read from shmem
memset(mydata, ..,..);
memcpy(mydata, ptr, ...);
// detach & close shared mem
shmdt(ptr);
shmctl(l_shmid,
IPC_RMID,
(shmid_ds *)NULL);
Your external program can write to shared memory in a similar way. No need for pipes & reading/writing/selecting etc.
I'm writing a program, and once a button is pushed, I have to execute a server process (which will stop only if I decide to kill it).
To execute this process, I decided to use the fork/execv mechanism:
void Command::RunServer() {
pid = fork();
if (pid==0) {
chdir("./bin");
char str[10];
sprintf(str,"%d",port);
char *argv[] = {"./Server", str};
execv("./Server",argv);
}
else {
config->pid = pid;
return;
}
}
And in the method "button pushed", I do:
command->RunServer();
It seemed to work nicely a few days ago... and now I get this error:
main: xcb_io.c:221: poll_for_event: Assertion `(((long) (event_sequence) - (long) (dpy->request)) <= 0)' failed.
Should I try to switch to pthread? Did I do something bad?
Thanks,
eo
When you do fork() all file descriptors of your process are duplicated in the new one. And when you do exec*() all file descriptors are also kept, unless they are marked with the flag FD_CLOEXEC.
My guess is that some fd used by some library (Xlib, probably) is inherited by the new process, and that the duplication is causing chaos in your program.
In these cases the BSD function closefrom() (see closefrom(3)) is useful if you want to keep the standard I/O open. Unfortunately, Linux has no such function, so you have to do a close-all loop or similar cruft:
int open_max = sysconf (_SC_OPEN_MAX);
for (int i = 3; i < open_max; i++)
close(i);
You can read more about this problem here.
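Alternatively, you can mark descriptors close-on-exec as you create them, so that exec*() closes them automatically and no loop is needed; a small sketch:
#include <fcntl.h>
#include <unistd.h>

// Mark an existing descriptor so that exec*() closes it automatically.
static int set_cloexec(int fd) {
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1) return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

// Or set it at creation time: open(path, O_RDONLY | O_CLOEXEC);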
In the call to execv, argv has to be terminated by a null pointer.
That line should therefore be:
char* argv[] = { "./Server", str, NULL };