Implementing background process in a dummy C++ shell - c++

I've been trying to mimic & in my dummy shell.
Foreground processes work fine, but as soon as I include the & symbol the program misbehaves: it first runs the command as if it were a foreground process (which it should not) and then freezes until I press the Enter key.
Here is the snippet of my code.
if(background)
{
    int bgpid;
    pid_t fork_return;
    fork_return = fork();
    if(fork_return == 0)
    {
        setpgid(0,0);
        if(execvp(path, args) == -1)
        {
            bgpid = getpid();
            cout << "Error\n";
            return 1;
        }
        else if(fork_return != -1)
        {
            addToTable(bgpid);
            return 1;
        }
    }
    else
    {
        cout << "ERROR\n";
        return 1;
    }
}
Output Image is also attached here
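For reference, this is roughly how the child and parent branches are usually separated for a background launch. This is only a sketch, not the original code; it reuses path, args and addToTable from the snippet above:

pid_t pid = fork();
if (pid == 0) {
    // child: put it in its own process group, then replace the image
    setpgid(0, 0);
    execvp(path, args);
    // only reached if execvp failed
    perror("execvp");
    _exit(1);
} else if (pid > 0) {
    // parent: do NOT wait here; just record the child's pid and continue
    addToTable(pid);
} else {
    perror("fork");
}

The key difference is that the parent records the child's pid returned by fork() and does not wait, while only the child calls setpgid and execvp.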

Related

How to convert the system call to fork in C++ linux

This is the code for playing a sound file in C++ on Linux:
string str1 = "aplay ";
str1 = str1 + " out.wav" + " & ";
const char *command = str1.c_str();
system(command);
The entire code is available here: Playing sound C++ linux aplay : device or resource busy
I just want to know how to play this with fork(), as I read that a system() call is too taxing on the CPU, which is certainly the case for me.
Please help
fork will make a copy of your process, so you can easily write:
// fork the current process: beyond this point there are 2 processes
int ret = fork();
if (ret == 0) {
    // in the child: execute the long command
    system("aplay out.wav");
    // exit the child process
    exit(0);
}
// only the parent reaches this point (the child exits above)
if (ret < 0) {
    perror("fork");
}
Also, you should know that system() does fork + exec + wait for you. Since you don't want your parent process to wait for the child, you can write:
// fork the current process: beyond this point there are 2 processes
int ret = fork();
if (ret == 0) {
    // in the child: execute the long command
    char program[] = "/usr/bin/aplay";
    char *args[] = { (char *)"/usr/bin/aplay", (char *)"out.wav", NULL }; // argv must be NULL-terminated
    ret = execv(program, args);
    // this point is reached only if `exec` fails,
    // so if we get here, we've got an error
    perror("execv");
    exit(1);
}
// only the parent reaches this point (the child has been replaced by exec or has exited)
if (ret < 0) {
    perror("fork");
}
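One caveat worth adding (not part of the original answer): since the parent never waits, a finished child lingers as a zombie until it is reaped. A minimal sketch of a non-blocking reap the parent can run periodically, assuming sys/wait.h is included:

#include <sys/wait.h>

// somewhere in the parent's main loop: reap any finished children without blocking
while (waitpid(-1, NULL, WNOHANG) > 0) {
    // a background child has exited; nothing else to do here
}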

How to get pid of process executed with system() command in c++

When we use the system() command, the program normally waits until it completes. But I am executing a process with system() through a load-balancing server, so my program moves on to the next line right after issuing the command. Please note that the process may not be complete at that point.
system("./my_script");
// after this I want to see whether it is complete or not using its PID.
// But how do I know the PID?
IsScriptExecutionComplete();
Simple answer: you can't.
The purpose of system() is to block while the command is being executed.
But you can 'cheat' like this:
#include <unistd.h>
#include <fcntl.h>

pid_t system2(const char * command, int * infp, int * outfp)
{
    int p_stdin[2];
    int p_stdout[2];
    pid_t pid;

    if (pipe(p_stdin) == -1)
        return -1;

    if (pipe(p_stdout) == -1) {
        close(p_stdin[0]);
        close(p_stdin[1]);
        return -1;
    }

    pid = fork();
    if (pid < 0) {
        close(p_stdin[0]);
        close(p_stdin[1]);
        close(p_stdout[0]);
        close(p_stdout[1]);
        return pid;
    } else if (pid == 0) {
        // child: wire the pipes to stdin/stdout, discard stderr
        close(p_stdin[1]);
        dup2(p_stdin[0], 0);
        close(p_stdout[0]);
        dup2(p_stdout[1], 1);
        dup2(::open("/dev/null", O_WRONLY), 2);  // open for writing, since stderr is written to
        /// Close all other descriptors for the safety sake.
        for (int i = 3; i < 4096; ++i)
            ::close(i);
        setsid();
        execl("/bin/sh", "sh", "-c", command, NULL);
        _exit(1);
    }

    // parent: keep the write end of the child's stdin and the read end of its stdout
    close(p_stdin[0]);
    close(p_stdout[1]);

    if (infp == NULL) {
        close(p_stdin[1]);
    } else {
        *infp = p_stdin[1];
    }

    if (outfp == NULL) {
        close(p_stdout[0]);
    } else {
        *outfp = p_stdout[0];
    }

    return pid;
}
Here you have not only the PID of the process, but also its stdin and stdout. Have fun!
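A possible usage sketch (the variable names in_fd/out_fd are just for the example, and unistd.h, fcntl.h and sys/wait.h are assumed to be included): start the script, keep its PID, and later check whether it has finished with a non-blocking waitpid:

int in_fd, out_fd;
pid_t pid = system2("./my_script", &in_fd, &out_fd);

// ... do other work ...

int status;
pid_t done = waitpid(pid, &status, WNOHANG);
if (done == 0) {
    // still running
} else if (done == pid && WIFEXITED(status)) {
    // finished; WEXITSTATUS(status) holds the exit code
}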
Not an expert on this myself, but if you look at the man page for system:
system() executes a command specified in command by calling /bin/sh -c command, and returns after the command has been completed
You can go into the background within the command/script you're executing (and return immediately), but I don't think there's a specific provision in system for that case.
Ideas I can think of are:
Your command might return the pid through the return code.
Your code might want to look up the name of the command in the active processes (e.g. /proc APIs in unix-like environments).
You might want to launch the command yourself (instead of through a shell) using fork/exec, as sketched below.
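A minimal sketch of that last idea (run_script is a hypothetical helper name, and ./my_script is the command from the question; no shell features are needed here):

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

pid_t run_script()
{
    pid_t pid = fork();
    if (pid == 0) {
        // child: run the script directly, no shell involved
        execl("./my_script", "./my_script", (char *)NULL);
        perror("execl");   // only reached if exec failed
        _exit(127);
    }
    return pid;            // parent: pid of the child, or -1 if fork failed
}

// later: waitpid(pid, &status, WNOHANG) tells you whether it has finished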
As the other answers said, std::system blocks until the command completes anyway. However, if you want to run the child process asynchronously and you are OK with Boost, you can use Boost.Process (ref):
#include <boost/process.hpp>
#include <iostream>
namespace bp = boost::process;
bp::child c(bp::search_path("echo"), "hello world");
std::cout << c.id() << std::endl;
// ... do something with ID ...
c.wait();
You can check the exit status of your command with the following code:
int ret = system("./my_script");
if (WIFEXITED(ret) && !WEXITSTATUS(ret))
{
    printf("Completed successfully\n");   // success
}
else
{
    printf("execution failed\n");         // error
}

C++ - SQLite3 leaks handles in multithread environment

I wrote a simple program that spawns 10 threads. Each thread opens a database (common to all the threads), or creates it (with the "Write-Ahead Log" option) if the open fails, creates a table in the database, and then goes into an infinite loop in which it adds one row at a time to its table. I found out that the program leaks about 2 handles every 5 minutes. I tried a tool called Memory Verify which tells me that the leaked handles are SQLite3 file locks (line 34034 in version 3.7.13), but I am not sure whether the bug is in SQLite or in the way I use it.
I haven't specified any compiler options to build SQLite3, so it is built as Multi-Thread, and as far as I understand Multi-Thread should work fine in my case since every thread has its own SQLite connection.
To open or create a database I use the following code:
bool Create()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_CREATE;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}

bool Open()
{
    int iFlags = 0;
    iFlags = iFlags | SQLITE_OPEN_READWRITE | SQLITE_OPEN_NOMUTEX;
    return sqlite3_open_v2(dbName_sm.c_str(), &pHandle_m, iFlags, 0) == SQLITE_OK;
}
The hard loop in every thread calls ExecuteQuery which does prepare, step and finalize of an INSERT statement:
bool ExecuteQuery(const std::string& statement)
{
    bool res = Prepare(statement);
    if(!res)
    {
        return false;
    }
    SQLiteStatus status = Step();
    Finalize();
    res = (ESuccess == status || EDatabaseDone == status);
    return res;
}

bool Prepare(const std::string& statement)
{
    return sqlite3_prepare_v2(pHandle_m, statement.c_str(), -1, &pStmt_m, 0) == SQLITE_OK;
}

enum SQLiteStatus { ESuccess, EDatabaseDone, EDatabaseTimeout, EDatabaseError };

SQLiteStatus Step()
{
    int iRet = sqlite3_step(pStmt_m);
    if (iRet == SQLITE_DONE)
    {
        return EDatabaseDone;
    }
    else if (iRet == SQLITE_BUSY)
    {
        return EDatabaseTimeout;
    }
    else if (iRet != SQLITE_ROW)
    {
        return EDatabaseError;
    }
    return ESuccess;
}

bool Finalize()
{
    int iRet = sqlite3_finalize(pStmt_m);
    pStmt_m = 0;
    return iRet == SQLITE_OK;
}
Do you guys see any mistake in my code or is it a known issue in SQLite? I tried to google it for a couple of days but I couldn't find anything about it.
Thank you very much for your help.
Regards,
Andrea
P.S. I forgot to say that I am running my test on a WinXP 64bit PC, the compiler is VS2010, the application is compiled in 32bit, SQLite version is 3.7.13...
Check whether you have sqlite3_reset after every sqlite3_step, because this is one case that might cause leaks. After preparing a statement with sqlite3_prepare and executing it with sqlite3_step, you should always reset it with sqlite3_reset.
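A sketch of the pattern that answer describes, applied to the Prepare/Step/Finalize helpers from the question (the extra sqlite3_reset call is the suggestion, not something from the original code):

SQLiteStatus status = ESuccess;
if (Prepare(statement)) {
    status = Step();
    sqlite3_reset(pStmt_m);   // release the statement's locks before finalizing
    Finalize();
}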

C++ Map Iteration and Stack Corruption

I am trying to use a system of maps to store and update data for a chat server. The application is multithreaded and uses a lock system to prevent multiple threads from accessing the data.
The problem is this: when a client is removed from the map individually, it is OK. However, when I try to close multiple clients, it leaves some behind in memory. If I at any point call ::clear() on the map, it causes a debug assertion error with "Iterator not compatible" or similar. The code works the first time (tested with 80+ consoles connected as a test), but because it leaves chunks behind, it will not work again. I have tried researching ways around this, and I have written logic to halt execution until each step has completed. I appreciate any help so far, and I have attached the relevant code snippets.
//portion of server code that handles shutting down
DWORD WINAPI runserver(void *params) {
    runserverPARAMS *p = (runserverPARAMS*)params;
    /*Server stuff*/
    serverquit = 0;
    //client based cleanup
    vector<int> tokill;
    map<int,int>::iterator it = clientsockets.begin();
    while(it != clientsockets.end()) {
        tokill.push_back(it->first);
        ++it;
    }
    for(;;) {
        for each (int x in tokill) {
            clientquit[x] = 1;
            while(clientoffline[x] != 1) {
                //halting execution until the thread has terminated
            }
            destoryclient(x);
        }
    }
    //client thread based cleanup complete.
    return 0;
}
//clientioprelim
DWORD WINAPI clientioprelim(void* params) {
CLIENTthreadparams *inparams = (CLIENTthreadparams *)params;
/*Socket stuff*/
for(;;) {
/**/
}
else {
if(clientquit[inparams->clientid] == 1)
break;
}
}
clientoffline[inparams->clientid] = 1;
return 0;
}
int LOCKED; //exported as extern via libraries.h so it's visible to other source files

void destoryclient(int clientid) {
    for(;;) {
        if(LOCKED == 0) {
            LOCKED = 1;
            shutdown(clientsockets[clientid], 2);
            closesocket(clientsockets[clientid]);
            if((clientsockets.count(clientid) != 0) && (clientsockets.find(clientid) != clientsockets.end()))
                clientsockets.erase(clientsockets.find(clientid));
            if((clientname.count(clientid) != 0) && (clientname.find(clientid) != clientname.end()))
                clientname.erase(clientname.find(clientid));
            if((clientusername.count(clientid) != 0) && (clientusername.find(clientid) != clientusername.end()))
                clientusername.erase(clientusername.find(clientid));
            if((clientaddr.count(clientid) != 0) && (clientaddr.find(clientid) != clientaddr.end()))
                clientaddr.erase(clientusername.find(clientid));
            if((clientcontacts.count(clientid) != 0) && (clientcontacts.find(clientid) != clientcontacts.end()))
                clientcontacts.erase(clientcontacts.find(clientid));
            if((clientquit.count(clientid) != 0) && (clientquit.find(clientid) != clientquit.end()))
                clientquit.erase(clientquit.find(clientid));
            if((clientthreads.count(clientid) != 0) && (clientthreads.find(clientid) != clientthreads.end()))
                clientthreads.erase(clientthreads.find(clientid));
            LOCKED = 0;
            break;
        }
    }
    return;
}
Are you really using an int for locking, or was that just a simplification of the code? If you really use an int, this won't work: the critical section can be entered twice (or more) simultaneously if both threads check the variable before one assigns to it (simplified). See mutexes on Wikipedia for reference. You could use a mutex provided by Windows or by Boost.Thread instead of the int.
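For illustration, a minimal sketch of the same cleanup guarded by a real mutex instead of the int (std::mutex is C++11; on VS2010 a Windows CRITICAL_SECTION or boost::mutex would play the same role, and the names clients_mutex/destroyclient here are assumptions, not the OP's identifiers):

#include <mutex>

std::mutex clients_mutex;   // guards the client maps

void destroyclient(int clientid) {
    std::lock_guard<std::mutex> lock(clients_mutex);   // acquired atomically, released when the scope exits
    // ... shutdown/closesocket and the map cleanup from the original go here,
    //     erasing by key, e.g. clientsockets.erase(clientid); ...
}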

Linux: Executing child process with piped stdin/stdout

Using Linux and C++, I would like a function that does the following:
string f(string s)
{
string r = system("foo < s");
return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process execution of application "foo", and then I would like to record its standard output to string r and then return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. This is just transposition, but it shows this code has never been used. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I only redirect the I/O in the child (note the dup2 calls are only in the child path.) This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   // for strlen
#include <unistd.h>
#include <errno.h>

#define PIPE_READ 0
#define PIPE_WRITE 1

int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
    int aStdinPipe[2];
    int aStdoutPipe[2];
    int nChild;
    char nChar;
    int nResult;

    if (pipe(aStdinPipe) < 0) {
        perror("allocating pipe for child input redirect");
        return -1;
    }
    if (pipe(aStdoutPipe) < 0) {
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        perror("allocating pipe for child output redirect");
        return -1;
    }

    nChild = fork();
    if (0 == nChild) {
        // child continues here

        // redirect stdin
        if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
            exit(errno);
        }

        // redirect stdout
        if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
            exit(errno);
        }

        // redirect stderr
        if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
            exit(errno);
        }

        // all these are for use by the parent only
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // run child process image
        // replace this with any exec* function you find easier to use ("man exec")
        nResult = execve(szCommand, aArguments, aEnvironment);

        // if we get here at all, an error occurred, but we are in the child
        // process, so just exit
        exit(nResult);
    } else if (nChild > 0) {
        // parent continues here

        // close unused file descriptors, these are for the child only
        close(aStdinPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // Include error check here
        if (NULL != szMessage) {
            write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
        }

        // Just a char by char read here, you can change it accordingly
        while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
            write(STDOUT_FILENO, &nChar, 1);
        }

        // done with these in this example program; you would normally keep them
        // open, of course, as long as you want to talk to the child
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
    } else {
        // failed to create child
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);
    }
    return nChild;
}
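A possible way to call it (the program path "/usr/bin/foo" and the argument/environment arrays here are assumptions for the example, not part of the answer):

int main() {
    char *const args[] = { (char *)"foo", NULL };
    char *const env[]  = { NULL };
    // run "foo", feed it "hello\n" on stdin, and echo its output to our stdout
    int child = createChild("/usr/bin/foo", args, env, "hello\n");
    return child > 0 ? 0 : 1;
}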
Since you want bidirectional access to the process, you would have to do what popen does behind the scenes explicitly with pipes. I am not sure if any of this will change in C++, but here is a pure C example :
void piped(char *str){
    int wpipefd[2];
    int rpipefd[2];
    int defout, defin;
    defout = dup(stdout);
    defin = dup(stdin);
    if(pipe(wpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(pipe(rpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(dup2(wpipefd[0], 0) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    if(dup2(rpipefd[1], 1) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    if(fork() == 0){
        close(defout);
        close(defin);
        close(wpipefd[0]);
        close(wpipefd[1]);
        close(rpipefd[0]);
        close(rpipefd[1]);
        //Call exec here. Use the exec* family of functions according to your need
    }
    else{
        if(dup2(defin, 0) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        if(dup2(defout, 1) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        close(defout);
        close(defin);
        close(wpipefd[1]);
        close(rpipefd[0]);
        //Include error check here
        write(wpipefd[1], str, strlen(str));
        //Just a char by char read here, you can change it accordingly
        while(read(rpipefd[0], &ch, 1) != -1){
            write(stdout, &ch, 1);
        }
    }
}
Effectively you do this:
Create pipes and redirect the stdout and stdin to the ends of the two pipes (note that in Linux, pipe() creates unidirectional pipes, so you need two pipes for this purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe, and then start reading whatever the process might dump to the other pipe.
dup() creates a duplicate entry in the file descriptor table, while dup2() changes what an existing descriptor points to.
Note: As mentioned by Ammo in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since there is clearly an exec* (family of functions) call missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error handling bugs. The child process is returning after dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1)
{
    exit(errno);
}

// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);