Running Blocking Function in Background - C++

I am trying to run a program that essentially runs forever. I need to start it with certain parameters that I load in, so I am launching it from within my C++ program. Say this is the program:
int main() {
    while(true) {
        // do something
    }
}
It compiles to a shared library object: forever.so.
If I do the following from C++, nothing happens:
int main() {
    system("forever.so &");
}
If I do the following, my program blocks forever (as would be expected):
int main() {
    system("forever.so");
}
What am I doing wrong here?
I can provide the strace and other info.
To be clear: the issue is that if I check processes with
ps -a
I do not see any of my processes. This is while the program is paused during a debug session, so the processes should still be alive.

system() uses /bin/sh, which is a POSIX shell. A & after a command tells the shell to run it in the background, which means the call won't block. See your shell's documentation on job control for more details.
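If system() with & doesn't behave as expected, another option is to fork and exec the helper yourself. A minimal sketch, assuming the long-running program is built as an ordinary executable (here called ./forever, not a .so):

#include <cstdio>    // perror
#include <unistd.h>  // fork, execl, _exit

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this process image with the long-running helper.
        execl("./forever", "forever", static_cast<char *>(nullptr));
        perror("execl");   // only reached if execl() fails
        _exit(127);
    } else if (pid < 0) {
        perror("fork");
        return 1;
    }
    // Parent: returns immediately; the child keeps running in the background.
    // (Reaping via waitpid()/SIGCHLD is omitted for brevity.)
    return 0;
}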

Related

C++ system call to other C++ program not working when called on startup

I have a C++ program which is called at startup via a cronjob (in crontab):
@reboot sudo /home/pi/CAN/RCR_datalogging/logfileControl
This does run logfileControl whenever the Pi boots, as it shows up in the list of running programs (ps -e). logfileControl contains two system calls to C++ programs related to SocketCAN (SocketCAN is part of the Linux kernel; it allows CAN data to be handled as network sockets). I want logfileControl to run on startup so that it can initialize the CAN socket (system call 1) and then start the first logfile (system call 2, candumpExternal; this is candump from SocketCAN with a minor modification to write the logfile to a specific location rather than wherever candump runs, but using the original version had the same issue).
The first system call seems to be working properly, since if I try to initialize the socket again it is busy, but the second system call doesn't appear to happen at all: no logfile is created. If I manually run logfileControl from the command line it works as expected and creates the logfile, which has left me quite confused...
Does anyone have an insight as to what is going on here?
system("sudo /sbin/ip link set can0 up type can bitrate 500000");
// This is ran initially as logging should start as soon as the pi is on
system("/home/pi/CAN/RCR_datalogging/candumpExternal can0 -l -s 0"); // candump with the option to log(-l) as well as
// continue to output to console (-s 0)
std::cout <<"Setup Complete" << std:: endl;
while(true) { //sleeping indefinitely so that the program can stay open and wait for button presses
sleep(60);
}
Edit: I also tried adding a simple 5 second pause at the beginning of the program, but this didn't seem to make any difference.
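For debugging this kind of cron problem, it can help to record what each system() call actually returns and to redirect the commands' own output somewhere visible, since cron jobs run with no terminal and a minimal environment. A rough sketch along those lines (the log file paths are only illustrations):

#include <cstdlib>
#include <fstream>

int main() {
    std::ofstream log("/home/pi/CAN/startup-status.log");  // hypothetical status log

    // Redirect each command's stdout/stderr into a separate file and record its exit status.
    int s1 = std::system("sudo /sbin/ip link set can0 up type can bitrate 500000"
                         " >> /home/pi/CAN/cmd-output.log 2>&1");
    log << "ip link set: status " << s1 << std::endl;

    int s2 = std::system("/home/pi/CAN/RCR_datalogging/candumpExternal can0 -l -s 0"
                         " >> /home/pi/CAN/cmd-output.log 2>&1");
    // Note: if candumpExternal runs until killed, s2 is only logged once it exits.
    log << "candumpExternal: status " << s2 << std::endl;
    return 0;
}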

pidof from a background script for another background process

I wrote a C++ program to check whether a process is running or not. That process is launched independently in the background. My program works fine when I run it in the foreground, but when I schedule it with cron it does not work.
int PID= ReadCommanOutput("pidof /root/test/testProg1"); /// also tested with pidof -m
I made a script in /etc/cron.d/myscript to schedule it, as follows:
45 15 * * * root /root/ProgramMonitor/./testBkg > /root/ProgramMonitor/OutPut.txt
What could be the reason for this?
string ReadCommanOutput(string command)
{
    string output = "";
    int its = system((command + " > /root/ProgramMonitor/macinfo.txt").c_str());
    if (its == 0)
    {
        ifstream reader1("/root/ProgramMonitor/macinfo.txt", fstream::in);
        if (!reader1.fail())
        {
            while (!reader1.eof())
            {
                string line;
                getline(reader1, line);
                if (reader1.fail()) // for the last read
                    break;
                if (!line.empty())
                {
                    stringstream ss(line.c_str());
                    ss >> output;
                    cout << command << " output = [" << output << "]" << endl;
                    break;
                }
            }
            reader1.close();
            remove("/root/ProgramMonitor/macinfo.txt");
        }
        else
            cout << "/root/ProgramMonitor/macinfo.txt not found !" << endl;
    }
    else
        cout << "ERROR: code = " << its << endl;
    return output;
}
Its output comes out as "ERROR: code = 256".
Thanks in advance.
If you really wanted to pipe(2), fork(2), execve(2) and then read the output of a pidof command, you should at least use popen(3), since ReadCommandOutput is not in the POSIX API; at the very least:
pid_t thepid = 0;
FILE* fpidof = popen("pidof /root/test/testProg1", "r");
if (fpidof) {
    int p = 0;
    if (fscanf(fpidof, "%d", &p) > 0 && p > 0)
        thepid = (pid_t)p;
    pclose(fpidof);
}
BTW, you did not specify what should happen if several processes (or none) are running testProg1...; you also need to check the result of pclose.
But you don't need to. You'll actually want to build the pidof command yourself, perhaps using snprintf (and you should be wary of code injection into that command, so quote arguments appropriately). Or you could simply find your process by accessing the proc(5) file system: opendir(3) on "/proc/", then loop with readdir(3), and for every entry whose name is numerical like 1234 (i.e. starts with a digit), readlink(2) its exe entry, e.g. /proc/1234/exe. Don't forget the closedir, and test every syscall.
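A rough sketch of that /proc-scanning idea, assuming the process is identified by the absolute path of its executable (e.g. /root/test/testProg1):

#include <dirent.h>
#include <unistd.h>
#include <limits.h>
#include <cctype>
#include <cstdlib>
#include <string>

// Return the pid of the first process whose /proc/<pid>/exe resolves to exePath,
// or -1 if none is found. Minimal sketch; error handling kept short.
static pid_t findPidByExe(const std::string &exePath) {
    DIR *dir = opendir("/proc");
    if (!dir)
        return -1;
    pid_t found = -1;
    struct dirent *ent;
    while ((ent = readdir(dir)) != nullptr) {
        if (!isdigit(static_cast<unsigned char>(ent->d_name[0])))
            continue;                           // not a pid directory
        std::string link = std::string("/proc/") + ent->d_name + "/exe";
        char target[PATH_MAX];
        ssize_t n = readlink(link.c_str(), target, sizeof(target) - 1);
        if (n < 0)
            continue;                           // permission denied, process gone, ...
        target[n] = '\0';
        if (exePath == target) {
            found = static_cast<pid_t>(std::atoi(ent->d_name));
            break;
        }
    }
    closedir(dir);
    return found;
}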
Please read Advanced Linux Programming
Notice that libraries like Poco or toolkits like Qt (which has a layer QCore without any GUI, and providing QProcess ....) could be useful to you.
As to why your pidof is failing, we can't guess (perhaps a permission issue, or perhaps there is no longer any such process). Try to run it as root in another terminal at least. Test its exit code, and display both its stdout & stderr, at least for debugging purposes.
Also, a better way (assuming that testProg1 is some kind of server application, to be run in at most one single process) might be to define different conventions: your testProg1 might start by writing its own pid into /var/run/testProg1.pid, and your current application might then read the pid from that file and check, with kill(2) and a 0 signal number, that the process still exists.
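A small sketch of that convention from the monitoring side, assuming testProg1 has written its pid into /var/run/testProg1.pid:

#include <sys/types.h>
#include <signal.h>
#include <cerrno>
#include <fstream>

// Read the pid that testProg1 wrote into its pid file and probe it with
// signal 0 (no signal is actually delivered; only the existence check happens).
static bool testProg1IsRunning() {
    std::ifstream pidFile("/var/run/testProg1.pid");
    pid_t pid = 0;
    if (!(pidFile >> pid) || pid <= 0)
        return false;                 // no pid file, or garbage in it
    return kill(pid, 0) == 0 || errno == EPERM;
}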
BTW, you could also improve your crontab(5) entry. You could make it run some shell script which uses logger(1) and (for debugging) runs pidof with its output redirected elsewhere. You might also read the mail perhaps sent to root by cron.
Finally, I solved this problem by using the su command.
I used
ReadCommanOutput("su -c 'pidof /root/test/testProg1' - root");
instead of
ReadCommanOutput("pidof /root/test/testProg1");

Why C-forkbombs don't work like bash ones?

If I run the classical bash forkbomb:
:(){ :&:&};:
my system hangs after a few seconds.
I tried to write a forkbomb in C, here is the code:
#include <unistd.h>

int main( )
{
    while(1) {
        fork();
    }
    return 0;
}
When I run it the system gets less responsive, but I can kill that process (even after minutes) just by pressing ^C.
The above code is different from the original bash forkbomb I posted: it's something more like:
:( )
{
    while true
    do
        :
    done
}
(I didn't test it; don't know if it'd hang the system).
So I also tried to implement the original version; here the code:
#include <unistd.h>
inline void colon( const char *path )
{
pid_t pid = fork( );
if( pid == 0 ) {
execl( path, path, 0 );
}
}
int main( int argc, char **argv )
{
colon( argv[0] );
colon( argv[0] );
return 0;
}
But still nothing: I can run it and then easily kill it. It's not hanging my system.
Why?
What's so special about bash forkbombs? Is it because bash uses a lot more memory/CPU? Or because bash processes make many more system calls (e.g., to access the filesystem) than mine?
That C program is tiny, seriously tiny. In addition, fork()'ing a program like that is very, very efficient. An interpreter, such as Bash, however, is much more expensive in terms of RAM usage, and needs to access the disk all the time.
Try running it for much longer. :)
The real cause for this is that in BASH the process you create is detached from the parent. If the parent process (the one you initially started) is killed, the rest of the processes live on. But in the C implementations you listed the child processes die if the parent is killed, so it's enough to bring down the initial process you started to bring down the whole tree of ever-forking processes.
I have not yet come up with a C forkbomb implementation that detaches child processes so that they're not killed if the parent dies. Links to such implementations would be appreciated.
In your bash forkbomb, you are putting new processes in new background process groups, so you are not going to be able to ^C them.
That's basically because of the size.
When you run the bash fork bomb it loads big monster programs into memory (relative to your C program), and each of them starts hogging CPU resources. When big monsters start reproducing, trouble comes much more quickly than if bees start doing the same, so the computer hangs almost immediately. However, if you keep your C executable running long enough it will hang the system too; it will just take much, much more time.
If you want to compare the size of bash with the size of a C program, check out /proc/<pid>/status:
first with the pid of any running instance of bash, and then with the pid of any running instance of a C program.

How to prevent a Linux program from running more than once?

What is the best way to prevent a Linux program/daemon from being executed more than once at a given time?
The most common way is to create a PID file: define a location where the file will go (inside /var/run is common). On successful startup, you'll write your PID to this file. When deciding whether to start up, read the file and check to make sure that the referenced process doesn't exist (or if it does, that it's not an instance of your daemon: on Linux, you can look at /proc/$PID/exe). On shutdown, you may remove the file but it's not strictly necessary.
There are scripts to help you do this, you may find start-stop-daemon to be useful: it can use PID files or even just check globally for the existence of an executable. It's designed precisely for this task and was written to help people get it right.
Use the Boost.Interprocess library to create a shared memory block from the process. If it already exists, it means that another instance of the process is running, so exit. The shared_memory_object class used below is the relevant part of the library.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/scoped_ptr.hpp>

int main()
{
    using namespace boost::interprocess;
    boost::scoped_ptr<shared_memory_object> createSharedMemoryOrDie;
    try
    {
        createSharedMemoryOrDie.reset(
            new shared_memory_object(create_only, "shared_memory", read_write));
    }
    catch (...)
    {
        // executable is already running
        return 1;
    }
    // do your thing here
}
If you have access to the code (i.e. you are writing it):
create a temporary file, lock it, and remove it when done; return 1 if the file already exists (see the flock sketch below), or
list the running processes and return 1 if the process name is in the list.
If you don't:
create a launcher wrapper for the program that does one of the above.
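As an illustration of the lock-file variant, here is a minimal sketch using flock(2) on a lock file (the path is only an example):

#include <fcntl.h>
#include <sys/file.h>
#include <cstdio>

int main() {
    // Open (or create) the lock file; the path here is only an example.
    int fd = open("/var/run/myprogram.lock", O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    // Try to take an exclusive, non-blocking lock. If another instance already
    // holds it, flock() fails and we exit.
    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
        std::fprintf(stderr, "another instance is already running\n");
        return 1;
    }
    // ... do the real work; the lock is released automatically when the
    // process exits and the descriptor is closed ...
    return 0;
}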
I do not know your exact requirement, but I had a similar one; in that case I started my daemon from a shell script (it was an HP-UX machine), and before starting the daemon I checked whether an executable of the same name was already running. If it was, I did not start a new one.
This way I was also able to control the number of instances of a process.
I think this scheme should work (and is also robust against crashes):
Precondition: There is a PID file for your application (typically in /var/run/)
1. Try to open the PID file
2. If it does not exist, create it and write your PID to it. Continue with the rest of the program
3. If it exists, read the PID
4. If that PID is still running and is an instance of your program, then exit
5. If the PID does not exist or is used by another program, remove the PID file and go to step 2.
6. At program termination, remove the PID file.
The loop in step 5 ensures that, if two instances are started at the same time, only one will be running in the end.
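A sketch of this scheme, with the "is it really an instance of my program" part of step 4 reduced to a simple liveness probe, and an example pid-file path:

#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <cstdio>

// Sketch only: the pid-file path is an example, and error handling is minimal.
static bool acquirePidFile(const char *path) {
    for (;;) {
        // Steps 1-2: O_EXCL makes creation fail if the file already exists.
        int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (fd >= 0) {
            char buf[32];
            int n = snprintf(buf, sizeof buf, "%d\n", getpid());
            (void)write(fd, buf, n);
            close(fd);
            return true;                      // we own the pid file now
        }
        // Steps 3-4: the file exists, so read the old pid and probe it.
        FILE *f = fopen(path, "r");
        int oldPid = 0;
        if (f) { fscanf(f, "%d", &oldPid); fclose(f); }
        if (oldPid > 0 && kill(oldPid, 0) == 0)
            return false;                     // another instance is alive
        // Step 5: stale pid file; remove it and try again.
        unlink(path);
    }
}

int main() {
    if (!acquirePidFile("/var/run/myapp.pid"))
        return 1;                             // another instance is running
    // ... do the real work ...
    unlink("/var/run/myapp.pid");             // step 6
    return 0;
}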
Have a pid file, and on startup do a 'kill -0 <pid>', where <pid> is the value read from the file. If the result is != 0 then the daemon is not alive, and you might restart it.
Another approach would be to bind to a port and handle the bind failure on the second attempt to start the daemon: if the port is in use, exit; otherwise continue running the daemon.
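A minimal sketch of the port-binding idea (the port number is arbitrary; any fixed, otherwise unused port works):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cstdio>
#include <cstring>

int main() {
    // Bind a TCP socket to a fixed port. If another instance already holds
    // the port, bind() fails and we exit.
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    sockaddr_in addr;
    std::memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // only visible locally
    addr.sin_port = htons(49152);                    // example port

    if (bind(fd, reinterpret_cast<sockaddr *>(&addr), sizeof addr) != 0) {
        std::fprintf(stderr, "port already bound: another instance is running\n");
        return 1;
    }
    // Keep fd open for the lifetime of the daemon; the port is released on exit.
    // ... run the daemon ...
    return 0;
}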
I believe my solution is the simplest:
(don't use it if a race condition is a possible scenario, but in any other case this is a simple and satisfying solution)
#include <sys/types.h>
#include <unistd.h>
#include <sstream>
#include <iostream>
#include <cstdlib>

int main()
{
    // get this process's pid
    pid_t pid = getpid();

    // compose a shell command that checks whether another process with the
    // same name as yours, but with a different pid, is running
    std::stringstream command;
    command << "ps -eo pid,comm | grep <process name> | grep -v " << pid;

    int isRuning = std::system(command.str().c_str());
    if (isRuning == 0) {
        std::cout << "Another process already running. exiting." << std::endl;
        return 1;
    }
    return 0;
}

Problems with system() calls in Linux

I'm working on an init for an initramfs in C++ for Linux. This program is used to unlock the DM-Crypt w/ LUKS encrypted drive, and to make the LVM drives available.
Since I don't want to reimplement the functionality of cryptsetup and gpg, I am using system calls to invoke the executables. Using a system call to run gpg works fine if I have the system fully brought up already (I already have a bash-script-based initramfs that works, and I use GRUB to edit the command line to boot with the old initramfs). However, in the initramfs it never even acts like it gets called. Even commands like system("echo BLAH"); fail.
So, does anyone have any input?
Edit: So I figured out what was causing my errors. I have no clue as to why it would cause errors, but I found it.
In order to allow hotplugging, I needed to write /sbin/mdev to /proc/sys/kernel/hotplug... however, I ended up switching the parameters around (on a function I wrote myself, no less), so I was writing /proc/sys/kernel/hotplug to /sbin/mdev.
I have no clue as to why that would cause the problem, however it did.
Amardeep is right, system() on POSIX type systems runs the command through /bin/sh.
I doubt you actually have a legitimate need to invoke these programs you speak of through a Bourne shell. A good reason would be if you needed them to have the default set of environment variables, but since /etc/profile is probably also unavailable so early in the boot process, I don't see how that can be the case here.
Instead, use the standard fork()/exec() pattern:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int system_alternative(const char* pgm, char *const argv[])
{
    pid_t pid = fork();
    if (pid > 0) {
        // We're the parent, so wait for the child to finish
        int status;
        waitpid(pid, &status, 0);
        return status;
    }
    else if (pid == 0) {
        // We're the child, so run the specified program. Our exit status will
        // be that of the child program unless the execv() syscall fails.
        return execv(pgm, argv);
    }
    else {
        // Something horrible happened, like the system running out of memory
        return -1;
    }
}
If you need to read stdout from the called process or send data to its stdin, you'll need to do some standard handle redirection via pipe() or dup2() in there.
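For the stdout-capture case just mentioned, the usual pattern looks roughly like this (a sketch with minimal error handling):

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>

// Run 'pgm' with 'argv' and print everything it writes to stdout.
static int run_and_capture(const char *pgm, char *const argv[]) {
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {
        // Child: route stdout into the write end of the pipe, then exec.
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execv(pgm, argv);
        _exit(127);                      // only reached if execv() fails
    }

    // Parent: read from the pipe until the child closes it.
    close(fds[1]);
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, static_cast<size_t>(n), stdout);
    close(fds[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}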
You can learn all about this sort of thing in any good Unix programming book. I recommend Advanced Programming in the UNIX Environment by W. Richard Stevens. The second edition coauthored by Rago adds material to cover platforms that appeared since Stevens wrote the first edition, like Linux and OS X, but basics like this haven't changed since the original edition.
I believe the system() function executes your command in a shell. Is the shell executable mounted and available that early in your startup process? You might want to look into using fork() and execve().
EDIT: Be sure your cryptography tools are also on a mounted volume.
What do you have in the initramfs? You could do the following:
int main() {
    return system("echo hello world");
}
And then strace it in an init script like this:
strace -o myprog.log myprog
Look at the log once your system is booted.