I designed a GUI with Qt5, and with the click of a button I want to start a Python program. At first I tried writing a simple shell script, which looks like this:
#!/bin/bash
cd ~/Printrun
python pronterface.py
The script works fine when executed directly in the shell, but now I want C++ code that executes the script. I already found some useful stuff here, but I can't seem to get it to work in my project. Maybe it is because I'm running the code from Qt?
Here is the code:
execlp("home/user/", "./my_shell_script.sh", (char *)0);
and
execlp("home/user/Printrun", "python pronterface.py", (char *)0);
Both don't return any errors but don't seem to work.
I am sorry if I am missing something obvious; I am pretty new to stuff like this.
EDIT: I also tried system() with no success, and I read that exec is a much cleaner solution.
You are misinterpreting the specification of execlp(). The first argument, sometimes documented as path (though more often so for different exec functions), is expected to be a path all the way to the executable, not stopping short at the directory containing it. The variable arguments then designate all elements to be provided as the process's argv vector, including the zeroth, which conventionally names the program being executed. Thus:
execlp("home/user/my_shell_script.sh", "my_shell_script.sh", (char *)0);
Note, however, that when the path you give contains any slash characters, as yours does, execlp() is functionally equivalent to execl().
Moreover, your python example (at least) seems to assume, incorrectly, that the first argument gives the initial working directory for the process. If you want to change the working directory then you must do so manually. For example,
chdir("home/user/Printrun");
execlp("python", "pronterface.py", (char *)0);
Furthermore, do be sure to check function return values for errors, and to handle them appropriately (not demonstrated above), though in the case of exec-family functions, returning at all indicates an error.
In that light, however, you should recognize that if you're using this in the handler function for a GUI widget, then you have a potential problem. Very likely in that case you want to fork() first, so that the new process runs alongside the GUI instead of replacing it, or to use system() instead, which handles all the forking, execing, and waiting for you. Note also that, unlike the exec functions, system() runs your command via a shell. For example, then:
system("cd ~/Printrun; python pronterface.py");
Related
So I'm trying to run a series of commands using system(), but I noticed that the changes don't carry over. For example, I define a variable p from user input with set /p p=Enter name (or, in PowerShell, $p=Read-Host "Enter Name").
Now if I want to use p again, I use system("echo %p%") (or %p, I forget which), but p is undefined here, since each system call creates a new cmd call. I also tried system("CD test"), yet the current directory remains the same and doesn't change in the next system call.
How can I make sure the system calls use each other's variables and such?
Each call of system has its own environment, by definition.
On Linux and Mac you can use popen, and MSVC has a similar _popen.
That said, Remy's comment is a viable alternative. You can still use system to start curl after you've called SetCurrentDirectory() from your own code. Your problem is that successive children don't inherit each other's environment; they all inherit from the parent process (i.e. your C++ code). So get that parent right.
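As a rough sketch of that idea (the directory, the variable name P, and the curl command line are placeholders): set the working directory and any variables in your own process first, and every child started afterwards by system() inherits them.

#include <windows.h>
#include <cstdlib>

int main()
{
    // Change the parent's working directory; every child started afterwards
    // (including the shells spawned by system()) inherits it.
    SetCurrentDirectoryA("C:\\test");

    // Likewise for environment variables set in the parent process (MSVC CRT call).
    _putenv_s("P", "some value");

    std::system("echo %P% && cd");                          // sees both settings
    std::system("curl -O https://example.com/file.txt");    // placeholder URL
    return 0;
}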
I have a C++ program from which I want to execute multiple commands in a shell.
My current solution uses the system() function and looks like this:
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_1");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_2");
... do_something_else ...
return_value = system("SETUP_ENVIRONMENT; RUN_USEFUL_APP_3");
...
It works, but SETUP_ENVIRONMENT takes a few seconds, making the program really slow, and I have to run it every time since system() starts a new shell each time.
I want to be able to set up my shell once and then run all the commands in it.
execute_in_shell("SETUP_ENVIRONMENT");
return_value = execute_in_shell("RUN_USEFUL_APP_1");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_2");
... do_something_else ...
return_value = execute_in_shell("RUN_USEFUL_APP_3");
...
How do I do that?
I'm on Linux.
As an alternative to answer 1, you could also use your program to create a shell script that runs all your useful programs, and execute that script in one go. Then a shell won't be started for each particular useful program.
You have three reasonable options for doing this, depending on your specific need.
If the various calls you make to external tools are part of a coherent routine, then you can – and probably should – follow #dmi's advice and write a short shell script that you can call from your C++ program.
If you instead need to start procedures here and there, you might be interested in running the shell as an inferior process and attaching your program to it – so that instead of talking with your terminal, the shell process talks to your C++ program.
This method is not very difficult, but it has a few gotchas (for instance, some programs like ssh, sudo or docker may expect to be attached to a tty). It is very well covered in most introductions to system programming (look for inter-process communication and subprocesses) for any Unix variant. Let me outline that procedure:
use the pipe system call to create a pipe (stdin_r, stdin_w)
use the pipe system call to create a pipe (stdout_r, stdout_w)
use the pipe system call to create a pipe (stderr_r, stderr_w)
use the fork system call to duplicate your program
In the child, close stdin_w, stdout_r and stderr_r, use dup2 to install stdin_r, stdout_w and stderr_w as the standard file descriptors, and use the exec system call to run the shell.
In the parent, close stdin_r, stdout_w and stderr_w; you can now write commands into stdin_w and read command output from stdout_r and stderr_r.
(This is intentionally very sketchy; I included the outline only so that you can be sure you have found the right place in your favourite textbook.)
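To make the outline concrete, here is a compressed sketch of those steps (the stderr pipe is left out for brevity, error handling is minimal, and the commands written to the shell are placeholders):

#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>
#include <cstring>

int main()
{
    int in_pipe[2], out_pipe[2];               // in: parent -> shell, out: shell -> parent
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                            // child: become the shell
        dup2(in_pipe[0], STDIN_FILENO);        // read commands from the parent
        dup2(out_pipe[1], STDOUT_FILENO);      // send output back to the parent
        close(in_pipe[0]); close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl("/bin/sh", "sh", (char *)0);
        perror("execl");
        _exit(127);
    }

    // Parent: close the ends we do not use, then write commands and read output.
    close(in_pipe[0]); close(out_pipe[1]);

    // Placeholder commands: the variable set on the first line is still visible
    // to the later commands, because it is all one shell instance.
    const char *cmds = "MYVAR=configured\n"
                       "echo first command ran with MYVAR=$MYVAR\n"
                       "echo second command ran with MYVAR=$MYVAR\n"
                       "exit\n";
    write(in_pipe[1], cmds, strlen(cmds));
    close(in_pipe[1]);

    char buf[256];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(out_pipe[0]);
    waitpid(pid, nullptr, 0);
    return 0;
}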
There are third-party libraries implementing all that low-level stuff for you. You can use boost::process (which, at the time of writing, was not yet an official part of Boost), whose usage is illustrated with a full tutorial. There are plenty of alternatives, such as pstreams.
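For illustration, a small boost::process sketch might look like this (treat it as illustrative only; the exact headers and API details depend on the Boost version you have installed, and "ls -l" is just a placeholder command):

#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main()
{
    bp::ipstream out;                                  // stream connected to the child's stdout
    bp::child c(bp::search_path("ls"), "-l", bp::std_out > out);

    std::string line;
    while (std::getline(out, line))                    // read the child's output line by line
        std::cout << line << '\n';

    c.wait();                                          // reap the child
    return c.exit_code();
}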
The third option would be to avoid the shell altogether and execute the commands you need directly. This is the approach followed by Rashell, an OCaml library defining primitives that reliably compose sub-processes, which you can use for your own inspiration.
Recently, I've become interested in making a front-end for a command-line program.
I guess there are two ways to do it.
The first one is just including the source code and calling its main proc with arguments
(of course, there would have to be some changes to the source code).
The second one, for when there is no source code, just the program, is to execute the program internally and then read the command line with APIs.
Though I know the first solution well, I don't know which APIs are needed for the second one.
I'm talking about the APIs that get the command-line string, or something like that.
See this question for information on how to run an external application; basically, you need to call the CreateProcess function. I'm not sure what you mean by "reading the command line"; I suppose you mean reading the output of an executed program? As for capturing an external application's output, there's already another question asking for that; you will probably find this answer most helpful.
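For reference, a bare-bones sketch of that pattern, launching a child with CreateProcess and reading its standard output through an anonymous pipe ("child.exe --some-arg" is a placeholder command line):

#include <windows.h>
#include <iostream>
#include <string>

int main()
{
    SECURITY_ATTRIBUTES sa{};
    sa.nLength = sizeof(sa);
    sa.bInheritHandle = TRUE;                  // the child must inherit the write end

    HANDLE readEnd = nullptr, writeEnd = nullptr;
    if (!CreatePipe(&readEnd, &writeEnd, &sa, 0))
        return 1;
    // Keep the read end out of the child's hands.
    SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);

    STARTUPINFOA si{};
    si.cb = sizeof(si);
    si.dwFlags = STARTF_USESTDHANDLES;
    si.hStdOutput = writeEnd;
    si.hStdError  = writeEnd;
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

    PROCESS_INFORMATION pi{};
    std::string cmd = "child.exe --some-arg";  // placeholder command line
    if (!CreateProcessA(nullptr, &cmd[0], nullptr, nullptr,
                        TRUE /* inherit handles */, 0, nullptr, nullptr,
                        &si, &pi))
        return 1;

    // Close our copy of the write end so ReadFile sees EOF when the child exits.
    CloseHandle(writeEnd);

    char buf[4096];
    DWORD n = 0;
    while (ReadFile(readEnd, buf, sizeof(buf), &n, nullptr) && n > 0)
        std::cout.write(buf, n);               // show the child's output

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    CloseHandle(readEnd);
    return 0;
}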
Here is a CodeProject project that I have used, which can handle command-line arguments for you (in the setup you describe). If you are not happy with it, you can make direct WinAPI calls using CommandLineToArgvW.
Enjoy!
I know that in a DOS/Windows application, you can issue system commands from code using lines like:
system("pause");
or
system("myProgram.exe");
...from stdlib.h. Is there a similar Linux command, and if so which header file would I find it in?
Also, is this considered bad programming practice? I am considering trying to get a list of loaded kernel modules using the lsmod command. Is that a good idea or a bad idea? I found some websites that seemed to view system calls (at least system("pause");) in a negative light.
system is a bad idea for several reasons:
Your program is suspended until the command finishes.
It runs the command through a shell, which means you have to worry about making sure the string you pass is safe for the shell to evaluate.
If you try to run a backgrounded command with &, it ends up being a grandchild process and gets orphaned and taken in by the init process (pid 1), and you have no way of checking its status after that.
There's no way to read the command's output back into your program.
For the first and final issues, popen is one solution, but it doesn't address the other issues. You should really use fork and exec (or posix_spawn) yourself for running any external command/program.
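As a small illustration of the posix_spawn approach (using lsmod purely as the example command; the parent is free to wait or to keep running alongside the child):

#include <spawn.h>
#include <sys/wait.h>
#include <cstdio>
#include <cstring>

extern char **environ;                      // pass the parent's environment through

int main()
{
    pid_t pid;
    char *const argv[] = { (char *)"lsmod", nullptr };

    // posix_spawnp searches PATH for the program, like execvp would.
    int err = posix_spawnp(&pid, "lsmod", nullptr, nullptr, argv, environ);
    if (err != 0) {
        std::fprintf(stderr, "posix_spawnp: %s\n", std::strerror(err));
        return 1;
    }

    int status = 0;
    waitpid(pid, &status, 0);               // or skip this to let it run alongside you
    std::printf("lsmod exited with status %d\n", WEXITSTATUS(status));
    return 0;
}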
Not surprisingly, the command is still
system("whatever");
and the header is still stdlib.h. That header file's name means "standard library", which means it's on every standard platform that supports C.
And yes, calling system() is often a bad idea. There are usually more programmatic ways of doing things.
If you want to see how lsmod works, you can always look-up its source code and see what the major system calls are that it makes. Then use those calls yourself.
A quick Google search turns up this link, which indicates that lsmod is reading the contents of /proc/modules.
Well, lsmod does it by parsing the /proc/modules file. That would be my preferred method.
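A minimal sketch of that approach, reading /proc/modules directly and printing the first field (the module name) of each line:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::ifstream modules("/proc/modules");
    if (!modules) {
        std::cerr << "could not open /proc/modules\n";
        return 1;
    }

    std::string line;
    while (std::getline(modules, line)) {
        std::istringstream fields(line);
        std::string name;                   // first field is the module name
        if (fields >> name)
            std::cout << name << '\n';
    }
    return 0;
}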
I think what you are looking for are fork and exec.
I have a GUI application which creates a QProcess inside, catches its output and shows it on a form. I need to somehow catch key events from the form and pass them to the QProcess (to make it feel as close as possible to a real terminal window).
So, I suppose, I should process keyReleaseEvent() and somehow transform either event.text() (which is a QString) or event.key() (which is an int) into an argument suitable for process.write() (which takes a char* or a QByteArray). Is there some recommended way to do such a conversion (taking into account localization issues, Ctrl/Alt/Shift modifiers and so on)? I do not really want to construct some sort of mapping from key() return values to char* strings, and text() drops modifiers.
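For reference, the naive version I have in mind looks roughly like this (a sketch only; the widget and process names are made up, it only forwards printable text, and it ignores modifiers entirely, which is exactly the part I am unsure about):

#include <QKeyEvent>
#include <QPlainTextEdit>
#include <QProcess>

// Hypothetical widget that owns the QProcess and echoes its output.
class TerminalWidget : public QPlainTextEdit
{
public:
    explicit TerminalWidget(QWidget *parent = nullptr) : QPlainTextEdit(parent)
    {
        connect(&process, &QProcess::readyReadStandardOutput, this, [this] {
            appendPlainText(QString::fromLocal8Bit(process.readAllStandardOutput()));
        });
        process.start("bash", QStringList() << "-i");
    }

protected:
    void keyReleaseEvent(QKeyEvent *event) override
    {
        const QString text = event->text();        // empty for pure modifier presses
        if (!text.isEmpty())
            process.write(text.toLocal8Bit());     // QString -> QByteArray for write()
        QPlainTextEdit::keyReleaseEvent(event);
    }

private:
    QProcess process;
};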
Moreover, if I start the process with the command bash -c sudo something in QProcess, it exits instantly, complaining that "no tty present and no askpass program specified", so I may be doing something completely wrong...
The problem is more than just deciding what to write to the process.
You can't emulate a terminal just by reading/writing stdout/stdin of a process, it's more complicated than that. Think about the program less, or any pager, for example. How does it know how many lines to print at a time? It needs information about the terminal which isn't represented through stdin/stdout/stderr.
Emulating a terminal is beyond the scope of QProcess. If you're really sure you need to do this then use some existing Qt-based terminal emulator as a starting point (e.g. Konsole).