So I'm trying to run a series of commands using system(), but I noticed that changes don't carry over. For example, I define a variable p from user input: set /p p=Enter name (or in PowerShell, $p = Read-Host "Enter name").
Now if I want to use p again, I call system("echo %p%") (or %p, I forget which), but p is undefined there, since each system() call spawns a new cmd instance. I also tried system("CD test"), yet the current directory remains the same and doesn't change in the next system() call.
How can I make sure the system() calls use each other's variables and such?
Each call of system() has its own environment, by definition.
On Linux and Mac, you can use popen and MSVC has a similar _popen.
That said, Remy's comment is a viable alternative. You can still use system() to start curl after you've called SetCurrentDirectory() from your own code. Your problem is that successive children don't inherit each other's environments; they all inherit from the parent process (i.e., your C++ program). So get that parent right.
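For instance, a minimal sketch of that idea (POSIX shown; on Windows you would call SetCurrentDirectory() from <windows.h> instead of chdir(); run_in_dir is a made-up helper name):

```cpp
#include <cstdlib>     // system()
#include <unistd.h>    // chdir() (POSIX; SetCurrentDirectory() on Windows)

// Hypothetical helper: change the parent's working directory once,
// then run a command.  Every system() child starts in `dir`, because
// children inherit from the parent, not from each other.
int run_in_dir(const char *dir, const char *cmd) {
    if (chdir(dir) != 0)
        return -1;       // could not change directory
    return system(cmd);  // the child's shell starts in `dir`
}
```

Because the chdir() happens in your own process, every subsequent system() call sees the new directory, not just the next one.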
Related
Basically the question is in the title. I'm using the setenv() function to set an environment variable in my C++ program, where I also use a fork() exec() chain, which creates a child process. The problem is that the created variable is also accessible from this child process. This makes setenv() equivalent to the export ABC=EFG behavior in the shell. What I want is to separate this functionality: I want to separately set the variable (ABC=EFG) and make it available to the child process (export ABC). How to do this?
EDIT: I decided to add my comment to @SergeyA's answer here. How does bash handle env variables in a situation like this, for example? If I write ABC=EFG and call a script consisting of only one line, echo $ABC, it won't print anything unless I previously ran export ABC. I'm writing a shell and trying to mimic this behavior.
There is no direct way of doing this. Calling exec is always going to make the child process inherit the environment variables of the parent process.
You can use execve() to explicitly specify the environment variables that should be visible to the child process.
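A minimal sketch of that (run_with_env is a made-up name; the /bin/sh test command is just a way to verify what the child sees):

```cpp
#include <sys/wait.h>  // waitpid(), WIFEXITED, WEXITSTATUS
#include <unistd.h>    // fork(), execve(), _exit()

// Run /bin/sh with an explicit environment.  The child sees ABC=EFG
// and nothing else, no matter what the parent has in its own
// environment or has "exported".
int run_with_env(void) {
    char *const argv[] = {(char *)"sh", (char *)"-c",
                          (char *)"test \"$ABC\" = EFG", (char *)0};
    char *const envp[] = {(char *)"ABC=EFG", (char *)0};

    pid_t pid = fork();
    if (pid == 0) {                     // child
        execve("/bin/sh", argv, envp);  // returns only on error
        _exit(127);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    // exit status 0 means the child really saw ABC=EFG
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a shell implementation, envp would be built from only the variables the user has exported, while unexported assignments stay in the shell's own tables.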
I designed a GUI with Qt5, and with the click of a button I want to start a Python program. At first I tried writing a simple shell script which looks like this:
#!/bin/bash
cd ~/Printrun
python pronterface.py
The script works fine when executed directly in the shell, but now I want C++ code that executes the script. I already found some useful stuff here, but I can't seem to get it to work in my project. Maybe it is because I'm running the code from Qt?
Here is the code:
execlp("home/user/", "./my_shell_script.sh", (char *)0);
and
execlp("home/user/Printrun", "python pronterface.py", (char *)0);
Both don't return any errors but don't seem to work.
I am sorry if I am missing something obvious; I'm pretty new to stuff like this.
EDIT: I also tried system() with no success, and I read that exec is a much cleaner solution.
You are misinterpreting the specification of execlp(). The first argument, sometimes documented as path (but more often so for different exec functions), is expected to be a path all the way to the executable, not stopping short at the directory containing it. The variable arguments then designate all elements to be provided as the process's argv vector, including the zeroth, which conventionally names the program being executed. Thus:
execlp("home/user/my_shell_script.sh", "my_shell_script.sh", (char *)0);
Note, however, that when the path you give contains any slash characters, as yours does, execlp() is functionally equivalent to execl().
Moreover, your Python example (at least) seems to assume, incorrectly, that the first argument gives the initial working directory for the process. If you want to change the working directory then you must do so manually. For example,
chdir("home/user/Printrun");
execlp("python", "python", "pronterface.py", (char *)0);
Furthermore, do be sure to check function return values for errors, and to handle them appropriately (not demonstrated above), though in the case of exec-family functions, returning at all indicates an error.
In that light however, you should recognize that if you're using this in the handler function for a GUI widget then you have a potential problem. Very likely in that case you want to fork() first, so that the new process runs alongside the GUI instead of replacing it, or to use system() instead, which handles all the forking, execing, and waiting for you. Note also that unlike the exec functions, system() runs your command via a shell. For example, then:
system("cd ~/Printrun; python pronterface.py");
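The fork-then-exec variant for a GUI handler might look like this (a sketch; launch_detached is a made-up helper name, and the Printrun path comes from the question):

```cpp
#include <sys/wait.h>  // waitpid(), if you later want the exit status
#include <unistd.h>    // fork(), execl(), _exit()

// Hypothetical helper: run `cmd` via the shell in a child process, so
// the caller (e.g. a Qt button handler) keeps running instead of being
// replaced by exec.
pid_t launch_detached(const char *cmd) {
    pid_t pid = fork();
    if (pid == 0) {                                    // child
        execl("/bin/sh", "sh", "-c", cmd, (char *)0);
        _exit(127);                                    // exec failed
    }
    return pid;  // parent: child's pid, or -1 if fork failed
}
```

In the question's case you might call launch_detached("cd ~/Printrun && python pronterface.py"), and later waitpid() on the returned pid if you care about the exit status.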
I am writing a bash script that runs a C++ program multiple times. I use getenv() and putenv() to create, get, and update environment variables in the C++ program. After the C++ program ends, the bash script needs to grab these variables and perform some basic logic. The problem is that when the C++ program exits, the environment variables disappear. Is there any way to permanently store these variables after the program's termination so that the bash script can use them? If not, what is the best way to share variables between a bash script and a C++ program? The only solution I can think of is writing output to files. I do not want to print this data in the console. Any help would be greatly appreciated.
Each process has its own copy of the environment variables, which are initialised by copying them from the parent process when the new process is launched. When you change an environment variable in your process, the parent process has no knowledge of this.
In order to pass back information from a child to a parent, you will need to set up some other kind of communications channel. It could be files on disk, or a pipe, or (depending on the capabilities of your parent, bash might not be able to do all this) shared memory or some other IPC mechanism. The parent program would then be responsible for changing its own environment variables based on information received from the child.
I personally have only ever been able to do this in 16-bit DOS assembler, by tracing the pointer to the previous process until it points at itself, which means that you've reached the first instance of COMMAND.COM, and then altered its environment manually.
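A minimal sketch of the pipe option mentioned above (read_from_child and the FOO=23 payload are made-up names): the child writes a NAME=VALUE string into a pipe and the parent reads it back, which is the same mechanism a shell uses for VAR=`./program` command substitution.

```cpp
#include <cstring>     // strlen()
#include <string>
#include <sys/wait.h>  // waitpid()
#include <unistd.h>    // pipe(), fork(), read(), write(), close()

// Child writes "FOO=23" into a pipe; the parent reads it back and can
// then set its own variables from the text it received.
std::string read_from_child(void) {
    int fds[2];
    if (pipe(fds) != 0)
        return "";

    pid_t pid = fork();
    if (pid == 0) {             // child: keep only the write end
        close(fds[0]);
        const char *msg = "FOO=23";
        write(fds[1], msg, strlen(msg));
        _exit(0);
    }

    close(fds[1]);              // parent: keep only the read end
    char buf[128];
    ssize_t n = read(fds[0], buf, sizeof buf - 1);
    close(fds[0]);
    waitpid(pid, nullptr, 0);
    return std::string(buf, n > 0 ? (size_t)n : 0);
}
```

When the parent is bash rather than your own C++ code, the pipe is set up for you by command substitution or a redirection, as the other answers show.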
If your program returned the variables via standard output as string, like this:
FOO=23; BAR=45;
Then, bash could call it like this:
eval `./your_program`
and $FOO and $BAR will be accessible to bash.
To test this try:
eval `echo "FOO=23; BAR=45;"`
echo "$FOO $BAR"
Of course, with this method the program does not change the environment variables of the calling process (which is not possible), but just returns a string that is then evaluated by bash, and the evaluation sets the variables.
Do not use this method if your program processes input from an untrusted source. If someone tricked your program into printing "rm -rf /" to standard output, you would be doomed.
As far as I know, under a "standard" GNU/Linux environment you can set environment variables in three ways:
using command line utilities like export
editing files like ~/.profile or ~/.bashrc for a user, or the equivalent files under /etc for the system
feeding temporary values to a command, like this: CXX=g++ CUSTOM_VERSION=2.5 command
The last one is usually used to customize builds. It's good because it doesn't harm the system, doesn't interfere with any system settings, values, or files, and everything is back to normal after the command finishes. It's the best way if you'd like a temporary modification for a particular set of variables.
There is no way for a program to set environment variables in its parent. Or, well, no legitimate way. Hacking into its process with ptrace does not count. :-)
What you should do is output the environment variable assignments on standard output, and have the shell script read and evaluate them. If the assignments are all that your program outputs, the call is very simple:
eval `program`
The back-ticks capture the output of the program, and eval then executes that output as shell commands; in this case, commands that set shell variables. Then later in your shell script be sure to do:
export VAR1
export VAR2
You need the export commands in order to move them into the environment that is passed to programs launched from the shell.
You cannot set environment variables which survive the lifetime of your process, so the easiest solution is to write to output files as you suggested, or to write to a specific file descriptor handed down from Bash:
C++:
#include <cstdlib>   // atoi
#include <cstring>   // strlen
#include <unistd.h>  // write

int main(int argc, char* argv[])
{
    // error handling omitted: argv[1] is expected to be the fd number
    int fd = atoi(argv[1]);
    const char* env = "BLAH=SMURF";
    write(fd, env, strlen(env));
    return 0;
}
Bash:
# redirect 5 to the captured stdout, then discard the original stdout
# and stderr (the bare 5 argument tells the process which fd to write to)
VARIABLES=`./command 5 5>&1 1>/dev/null 2>/dev/null`
This is probably a crack-pot idea but it should work :)
I know the system() function, but that creates its own environment, so every variable set there isn't forwarded to the main console. I wonder, is it possible to send a command as if it were typed by the user, or as it would be executed from a *.bat file?
The reason I need this is that I'm looking for a way to set an environment variable of the parent CMD process. And yes, I know that system() doesn't want me to do it, but maybe there is some workaround...
The idea is to create an app that would set as a variable anything sent to it via an input pipe, like this:
echo Bob| setvar name
so then:
echo %name%
would produce Bob
The whole idea is to make it easier to set a variable from any program's output (I know how to do it with the for command), while taking account of the peculiarities of special batch characters like ^!%, since these are allowed in file names. It would simplify many cmd scripts.
You can certainly run programs in the same console window as your program. That's the default behavior for CreateProcess. MSDN has more details on what happens between related processes sharing a console. You'll probably want to wait for the child process to terminate before continuing to run your own program.
However, that won't help with your real goal. The window where a program runs has absolutely nothing to do with the environment variables of any of its ancestor processes. You'll have to look elsewhere for a solution to your real problem.
I would like to write a program that sets an environment variable in an instance of the shell (cmd.exe) it was called from. The idea is that I could store some state in this variable and then use it again on a subsequent call.
I know there are commands like SetEnvironmentVariable, but my understanding is that those only change the variable for the current process and won't modify the calling shell's variables.
Specifically what I would like to be able to do is create a command that can bounce between two directories. Pushd/Popd can go to a directory and back, but don't have a way of returning a 2nd time to the originally pushed directory.
MSDN states the following:
Calling SetEnvironmentVariable has no effect on the system environment variables. To programmatically add or modify system environment variables, add them to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment registry key, then broadcast a WM_SETTINGCHANGE message with lParam set to the string "Environment". This allows applications, such as the shell, to pick up your updates. Note that the values of the environment variables listed in this key are limited to 1024 characters.
Considering that there are two levels of environment - System and Process - changing those in the shell would constitute changing the environment of another process. I don't believe that this is possible.
A common technique is to write an env file that is then "call"ed from the script:
del env.var
foo.exe ## writes to env.var
call env.var
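The program's side of that pattern might look something like this (a sketch; write_env_file, MYVAR, and hello are made-up names standing in for whatever foo.exe actually computes):

```cpp
#include <fstream>
#include <string>

// Write `set` commands into an env file so the calling batch script
// can `call` it and pick the values up into its own environment.
bool write_env_file(const char *path) {
    std::ofstream out(path);
    out << "set MYVAR=hello\n";  // one `set` line per variable
    return out.good();
}
```

After foo.exe writes the file, `call env.var` executes those `set` lines in the parent cmd process itself, which is why the variables survive.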
In Windows, when one process creates another, it can simply let the child inherit the current environment strings, or it can give the new child process a modified, or even completely new, environment.
See the full info for the CreateProcess() Win32 API.
There is no supported way for a child process to reach back to the parent process and change the parent's environment.
That being said, with CMD scripts and PowerShell, the parent command shell can take output from the child process and update its own environment. This is a common technique.
Personally, I don't like complex CMD scripts of any kind; they are a pain to write and debug. You may want to do this in PowerShell instead. There is a learning curve, to be sure, but it is much richer.
There is a way...
Just inject your code into the parent process and call SetEnvironmentVariableA inside cmd's process memory. After injecting, free the allocated memory.