I used system("source ~/.bash_profile") to reload it, but it does not work. Is there any other function or code to reload the bash environment from a C++ program on Linux?
The command source ~/.bash_profile works, if you're actually in the process running the bash shell you want to affect.
When you started your C++ program, that started up in a different process, a child of the bash process, and therefore unable to modify the environment of that bash process.
And, in fact, it's even worse than that since a system() call in a C++ program almost certainly runs another process to do that work, so it's twice removed from the bash process.
And, in fact, it's worse even than that, since source is a bash internal command, meaning you'd have to start up a bash shell to execute it :-)
Hence, what you end up with is the following hierarchy of processes, all of which can really only affect their own environment, not anything above them:
original-bash
|
C++-program
|
bash-run-by-system
I have a C++ program from which I want to execute multiple commands in a shell.
My current solution use the system() function and looks like this:
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_1");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_2");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_3");
...
It works, but SETUP_ENVIRONMENT takes a few seconds, making the program really slow, and I have to run it every time since system() starts a new shell each time.
I want to be able to setup my shell once and then run all commands in it.
execute_in_shell("SETUP_ENVIRONMENT");
return_value = execute_in_shell("RUN_USEFUL_APP_1");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_2");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_3");
...
How do I do that?
I'm on Linux.
As an alternative to answer 1, you could also use your program to create a shell script that runs all your useful programs, and then execute that script in a single call. That way a new shell is not started separately for each particular useful program.
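A minimal sketch of that idea in C++, where SETUP_ENVIRONMENT and RUN_USEFUL_APP_1..3 stand for the real commands from the question:

#include <cstdlib>
#include <fstream>

int main() {
    // Collect all commands into one script so the expensive
    // environment setup runs only once for the whole batch.
    {
        std::ofstream script("/tmp/run_all.sh");
        script << "#!/bin/sh\n"
               << "SETUP_ENVIRONMENT\n"
               << "RUN_USEFUL_APP_1\n"
               << "RUN_USEFUL_APP_2\n"
               << "RUN_USEFUL_APP_3\n";
    }   // the ofstream destructor flushes and closes the file
    // One shell invocation, one environment setup, all commands.
    return std::system("sh /tmp/run_all.sh");
}

The trade-off is that you lose the chance to do_something_else between the commands and to inspect each command's return value separately, unless the script records them itself.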
You have three reasonable options for doing this, depending on your specific need.
If the various calls you make to external tools are part of a coherent routine, then you can – and probably should – follow dmi's advice and write a short shell script that you can call from your C++ program.
If you instead need to start procedures here and there, you might be interested in running the shell as an inferior process and attaching your program to it – so that instead of talking to your terminal, the shell process talks to your C++ program.
This method is not very difficult but has a few gotchas (for instance, some programs like ssh, sudo or docker may expect to be attached to a tty). It is very well covered in most introductions to system programming (look for inter-process communication and subprocesses) for any Unix variant. Let me outline that procedure:
use the pipe system call to create a pipe (stdin_r, stdin_w)
use the pipe system call to create a pipe (stdout_r, stdout_w)
use the pipe system call to create a pipe (stderr_r, stderr_w)
use the fork system call to duplicate your program
In the child, you close stdin_w, stdout_r, stderr_r, duplicate stdin_r, stdout_w, stderr_w onto file descriptors 0, 1 and 2 (e.g. with dup2), and use the exec system call to run the shell.
In the parent, you close stdin_r, stdout_w, stderr_w, and you can now write commands to stdin_w and read command output from stdout_r and stderr_r.
(This is intentionally very sketchy; I included the outline only so that you can be sure you have found the right place in your favourite textbook. A concrete sketch of the same procedure follows.)
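As an illustration only, here is a minimal C++ sketch of that procedure, wiring up stdin and stdout (stderr is handled the same way) and running /bin/bash as the inferior process; the commands sent to it are made up:

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <cstdio>
#include <string>

int main() {
    int in_pipe[2], out_pipe[2];           // in_pipe: parent -> shell, out_pipe: shell -> parent
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        // child: becomes the long-lived shell
        dup2(in_pipe[0], STDIN_FILENO);    // read commands from the parent
        dup2(out_pipe[1], STDOUT_FILENO);  // send output back to the parent
        close(in_pipe[0]); close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl("/bin/bash", "bash", (char*)nullptr);
        perror("execl"); _exit(127);       // only reached if execl fails
    }

    // parent: keep the write end of the shell's stdin and the read end of its stdout
    close(in_pipe[0]); close(out_pipe[1]);

    std::string cmds = "echo setup-done\nls /\nexit\n";   // hypothetical commands
    write(in_pipe[1], cmds.data(), cmds.size());
    close(in_pipe[1]);                     // EOF tells the shell we are done

    char buf[4096];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stdout);
    close(out_pipe[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    return WEXITSTATUS(status);
}

A real implementation would keep the shell alive between commands and would need some convention (a sentinel line, for instance) to know where one command's output ends.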
There are third-party libraries implementing all that low-level stuff for you. You can use boost::process (which is not yet an official part of Boost), whose usage is illustrated by a full tutorial. There are plenty of alternatives, such as pstreams.
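For illustration, a rough sketch against the Boost.Process v1 API (header and namespace details may differ between Boost versions):

#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main() {
    bp::opstream shell_in;     // our end of the shell's stdin
    bp::ipstream shell_out;    // our end of the shell's stdout

    // Start one long-lived shell and keep talking to it.
    bp::child shell("/bin/bash", bp::std_in < shell_in, bp::std_out > shell_out);

    shell_in << "echo hello from bash" << std::endl;
    shell_in << "exit" << std::endl;

    std::string line;
    while (std::getline(shell_out, line))
        std::cout << line << '\n';

    shell.wait();
    return shell.exit_code();
}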
The third option would be to avoid using the shell altogether and execute the commands you need directly. This is the approach followed by Rashell, an OCaml library defining primitives for reliably composing sub-processes, which you can use for your own inspiration.
I am trying to create a C++ daemon that runs on a Red Hat 6.3 platform and am having trouble understanding the differences between the libc daemon() call, the daemon shell command, startproc, start-stop-daemon and about half a dozen other methods that Google suggests for creating daemons.
I have seen suggestions that two forks are needed, but calling daemon only does one. Why is the second fork needed?
If I write the init.d script to call bash daemon, does the C code still need to call daemon()?
I implemented my application to call the C daemon() function since it seems the simplest solution, but I am running into the problem that my environment variables seem to get discarded. How do I prevent this?
I also need to run the daemon as a particular user, not as root.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Why is the second fork needed?
Answered in What is the reason for performing a double fork when creating a daemon?
bash daemon shell command
My bash 4.2 does not have a builtin command named daemon. Are you sure yours is from bash? What version, what distribution?
environment variables seem to get discarded.
I can see no indication to that effect in the documentation. Are you sure it is due to daemon? Have you checked whether they are present before, and missing after that call?
run the daemon as a particular user
Read about setresuid and related functions.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Depends. If you want to keep your code simple, forget about all of this and let the init script handle it via e.g. start-stop-daemon. If you want to handle this in your app, daemon combined with setresuid should be a good approach, although not the only one.
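As a rough sketch of the in-app approach, assuming a made-up service account called "myservice": daemon() leaves the environment untouched, and the initgroups/setgid/setuid sequence below is the simpler relative of setresuid:

#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <cstdlib>

int main() {
    // Detach from the controlling terminal and session.
    // daemon() does not touch the environment, so exported
    // variables remain visible to the daemonised process.
    if (daemon(0, 0) < 0)
        return EXIT_FAILURE;

    // Drop privileges to the (hypothetical) service account "myservice".
    struct passwd *pw = getpwnam("myservice");
    if (pw == nullptr)
        return EXIT_FAILURE;
    if (initgroups(pw->pw_name, pw->pw_gid) != 0 ||
        setgid(pw->pw_gid) != 0 ||
        setuid(pw->pw_uid) != 0)
        return EXIT_FAILURE;

    // ... daemon main loop ...
    return EXIT_SUCCESS;
}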
I want to be sure that the following will not compromise my process:
The Solaris program writes heavily to stdout (via the C++ wcout stream). The output serves for tracing, so during testing and analysis the programmer/tester can easily observe what happens. But the program is actually a server process, so in the production version it will run as a daemon without an attached console and write all the trace output to files.
I assume that stdout is redirected to /dev/null for a program without a console, in which case I guess all is fine. However I want to be sure that the stdout output is not buffered somewhere, such that after sufficient run-time we could have memory or disk space problems.
Note: we cannot redirect the trace output to a file because this would grow too large. Instead our own file tracing mechanism makes sure that new files are created and old ones deleted to always keep a certain amount of tracing and not more.
That depends on how the daemon is started, I guess. When the daemon process is created, the streams have to be taken care of somehow (for example, they need to be detached from the current process, lest the daemon be terminated when the shell from which it was started manually exits).
It depends on how the daemon is started. If it's started as a cron job, the output will be captured and mailed to whoever owns the crontab entry, unless you redirect the output in the command line. (But programs started as cron jobs aren't truly daemons.)
More generally, all processes are started from another program (except the init process); most of the time, that program is a shell (even crontab invokes a shell to start its jobs), and the command is given as a command line. And you can redirect the output anywhere you please in a command line; /dev/null is a popular choice for cases like yours.
Most daemons are started from an rc file, a shell script installed under /etc/rcN.d. Just redirect your output there.
Or better yet, rewrite your code to use some form of rotating logs instead of standard out.
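If you also want the program itself to guarantee that anything still written to stdout or stderr is simply thrown away, regardless of how the daemon was launched, a small sketch like the following (called early in main()) does it; this complements, rather than replaces, the rotating-log approach:

#include <fcntl.h>
#include <unistd.h>

// Point stdout and stderr at /dev/null so that trace output written
// to them is discarded; nothing accumulates in memory or on disk.
void silence_standard_output() {
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0)
        return;
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
}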
I've got a program called pgm1 which creates a new process using fork.
Then in this process, I launch a new program (pgm2) using the following command:
execv( exec_path_name, argv ).
But the thing is that with this method I get the output of both programs in the same terminal.
I've been searching for a while and the only solution I found was this one:
Open a new terminal with a system call
Attach my pgm2 to the new terminal using this software: http://blog.nelhage.com/2011/01/reptyr-attach-a-running-process-to-a-new-terminal/comment-page-1/#comment-27264
So my question is really simple: is there a simpler way to do that?
Thanks in advance !
PS: Distro - Ubuntu 11.10 32bit
I can think of two possible solutions:
Do The Right Thing(TM) and send your output to a file: Each process can use a different file, providing both clear separation of the output and better record-keeping. As a bonus, you are also bound to see a performance improvement - terminal output is computationally expensive, even nowadays...
Execute a terminal emulator with the proper arguments: Most terminal emulators provide a way to execute a specific program in place of the shell. For example xterm:
$ xterm -e top
This will launch top in an xterm instance, without a shell. Quitting top also terminates the xterm window.
If your terminal emulator of choice supports this, you can use it simply by modifying the arguments passed to execv(). Of course, in this case you will actually be executing the terminal emulator instead of your program, and the terminal emulator will then launch your program.
Keep in mind that, depending on the terminal emulator, any open file descriptors may not be passed correctly to your program - the terminal will at least mangle the standard file descriptors.
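For illustration, a hedged sketch of how pgm1 might launch pgm2 in its own xterm window; the xterm path, the -e option and ./pgm2 are assumptions about your setup:

#include <unistd.h>
#include <cstdio>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: run pgm2 inside its own xterm window instead of
        // sharing the parent's terminal.
        char *argv[] = { (char *)"xterm", (char *)"-e", (char *)"./pgm2", nullptr };
        execv("/usr/bin/xterm", argv);
        perror("execv");   // only reached if execv fails
        _exit(127);
    }
    // Parent (pgm1) keeps the original terminal and carries on.
    return 0;
}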
I need to run a Linux command such as "df" from my Linux daemon to know the free space, used space, total size of the partition and other info. I have options like calling system, exec, popen etc.
But as each of these commands spawns a new process, is it not possible to run the commands in the same process from which they are invoked?
At the same time, since I need to run this command from a Linux daemon, my daemon should not hold any terminal. Will it affect my daemon's behavior?
Or is there any C or C++ standard API for getting information about the mounted partitions?
There is no standard API, as this is an OS-specific concept.
However,
You can parse /proc/mounts (or /etc/mtab) with (non-portable) getmntent/getmntent_r helper functions.
Using the information about mounted filesystems, you can then get each filesystem's statistics with statfs (a minimal sketch follows this list).
You may find it useful to explore the i3status program source code: http://code.stapelberg.de/git/i3status/tree/src/print_disk_info.c
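Putting those two pieces together, a minimal sketch that walks /proc/mounts with getmntent and queries each mount point with statvfs, the portable relative of statfs:

#include <mntent.h>
#include <sys/statvfs.h>
#include <cstdio>

int main() {
    // Walk the mount table much like df does.
    FILE *mounts = setmntent("/proc/mounts", "r");
    if (mounts == nullptr)
        return 1;

    struct mntent *ent;
    while ((ent = getmntent(mounts)) != nullptr) {
        struct statvfs vfs;
        if (statvfs(ent->mnt_dir, &vfs) != 0)
            continue;   // skip pseudo-filesystems we cannot stat
        unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
        unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
        std::printf("%-20s total=%llu bytes available=%llu bytes\n",
                    ent->mnt_dir, total, avail);
    }
    endmntent(mounts);
    return 0;
}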
To answer your other questions:
But as each of these commands spawns a new process, is it not possible to run the commands in the same process from which they are invoked?
No; external commands such as df are self-contained programs that must run in their own processes.
Depending upon how often you wish to execute your programs, fork();exec() is not so bad. There is no hard limit beyond which it would be better to gather the data yourself rather than executing a helper program. Once a minute, you're probably fine executing the commands. Once a second, you're probably better off gathering the data yourself. I'm not sure where the dividing line is.
At the same time, since I need to run this command from a Linux daemon, my daemon should not hold any terminal. Will it affect my daemon's behavior?
If the command calls setsid(2) and then open(2)s a terminal without O_NOCTTY, that terminal might become the controlling terminal for that process. But that wouldn't influence your program: your program already disowned its terminal when it became a daemon, and since the child process would be the leader of its own session, it cannot change your process's controlling terminal.