Transition to command prompt once my console program finishes? - c++

I'm writing some executables that use the Windows console, in C and C++.
I'm trying to get the console to not close after the logic of my program finishes... but not merely pause or wait: I'd like it to become a cmd.exe command-line console itself, ready to accept new commands.
Essentially, I'd like the behavior of running my program via double-clicking or drag-and-dropping to be equivalent to hitting Win+R and running:
cmd /k "program.exe [list of drag+drop files if any]"
(While not opening a new console if run from a command-line itself.)
Is this possible at all?
Edit
I've been tinkering with this and arrived at a solution that seems to work:
std::getenv("PROMPT") returns NULL when the program is not run from the command line (I think anyway; not sure if that holds in all cases), so that can be used to fork the logic depending on how the executable is run.
The following code works for me, at least in my limited experimentation with it. If it's run from Explorer, it uses its first instance to invoke cmd.exe with parameters that let THAT instance invoke our program again, with the original parameters.
#include <cstdio>   // printf
#include <cstdlib>  // std::getenv, system
#include <string>

int main(int argc, char * argv[]) {
    // checks if we were started from Explorer; if so, delegates to a new cmd.exe instance
    if (std::getenv("PROMPT") == NULL) {
        printf("Starting from explorer...\n");
        std::string str("cmd /Q /k \"");
        for (int n = 0; n < argc; ++n) {
            str.append("\"");
            str.append(argv[n]);
            str.append("\"");
            if (n < argc - 1)
                str.append(" ");
        }
        str.append("\"\n");
        system(str.c_str());
        return 0;
    }
    // actual code we want to run: list the files passed as arguments
    int fileCount = 0;
    for (int n = 0; n < argc - 1; ++n) {
        fileCount++;
        printf("file %02d >> %s\n", n, argv[n + 1]);
    }
    if (fileCount == 0) {
        printf("No inputs...\n");
    }
    return 0;
}
So I guess conceptually, it looks like this.
____stays open_______________________ __closes when finished___
program.exe [paramList] ---> cmd.exe -+-> program.exe [paramList]
|
+-> any subsequent commands
|
etc

In my opinion, you have linked your program as a Windows console program, so it will always open a console window when you run it. In that case, your program presents its output in that console (as if standard output had been redirected to the opened window), which means you cannot use it as a UNIX-style filter the way the dir or copy commands work. (Indeed, you are not on a UNIX system, so the console is emulated by a special Windows library that is linked to your program.)
To be able to run your program inside a cmd.exe invocation in a normal terminal window (as you do with the dir command -- well, dir is internal to cmd.exe, but others, like xcopy.exe, aren't), you need to build your program as a different program type (a UNIX-style filter command or a windowless console program; I don't remember the exact program-type name as I'm not a frequent Windows developer) so that standard input and standard output (things Windows inherits from MS-DOS) are preserved from the program that started it, and your program is capable of running with no window at all.
A Windows console program is a different thing from a Windows filter program that doesn't require a console to run. The latter is like any other MS-DOS-style command (like dir or copy), and such programs have an interface more similar to their UNIX counterparts.
If you do this, you will be able to run your program from cmd in another window, and it will not create a separate console window to show your program's output.

You could simply insert the line
system( "cmd" );
at the end of your program, which will call the command prompt after your program has finished executing.
However, this may not fulfil your following requirement:
(While not opening a new console if run from a command-line itself.)
Although using system( "cmd" ); will not open a new console window (it will use the existing one), it will create a new process cmd.exe, which means that you will now have 2 cmd.exe processes if your program was invoked by cmd.exe. Also, if the original cmd.exe process was invoked by your own program, then you will now have 2 processes running your program. If you now call your program again from this new command prompt, you will now have 3 cmd.exe processes and 3 processes running your program. This could get ugly very quickly, especially if you are repeatedly calling your program from a batch file.
In order to prevent this, then your program could try to somehow detect whether its parent process already is cmd.exe, and if it was, it should exit normally instead of invoking cmd.exe again.
Unfortunately, the Windows API does not seem to offer any official way for a child process to obtain the process ID of its parent process. However, according to this page, it is possible to accomplish this using undocumented functions.
Using undocumented functions is generally not advisable, though. Therefore, it would probably be better if you always called your program from a command prompt, so that it could simply exit normally.
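One way to approximate that detection without undocumented functions is to reuse the PROMPT heuristic from the question's edit. This is only a sketch under that assumption (cmd.exe normally defines the PROMPT environment variable, while Explorer does not):
#include <cstdlib>

int main() {
    // ... the program's actual work ...

    // Heuristic (assumption): cmd.exe usually defines PROMPT, Explorer does not,
    // so only spawn an interactive prompt when we were probably NOT started from cmd.exe.
    if (std::getenv("PROMPT") == NULL) {
        system("cmd");
    }
    return 0;
}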

Related

C++ system call to other C++ program not working when called on startup

I have a C++ program which is called at startup via a cronjob (in crontab):
@reboot sudo /home/pi/CAN/RCR_datalogging/logfileControl
Which does run logfileControl any time the Pi is booted, as it shows up in the list of running programs (ps -e). logfileControl contains two system calls to C++ programs related to SocketCAN (SocketCAN is part of the Linux kernel; it allows dealing with CAN data as network sockets). I want logfileControl to run on startup so that it can initialize the CAN socket (system call 1) and then start the first logfile (system call 2, candumpExternal; this is candump from SocketCAN with a minor modification to put the logfile in a specific location rather than just where candump is, but using the original version had the same issue). The first system call seems to be working properly, as if I try to initialize the socket again it is busy, but the second system call doesn't appear to be happening, as a logfile is not created at all. If I manually run logfileControl from the command line it works as expected and creates the logfile, which has left me quite confused...
Does anyone have an insight as to what is going on here?
system("sudo /sbin/ip link set can0 up type can bitrate 500000");
// This is ran initially as logging should start as soon as the pi is on
system("/home/pi/CAN/RCR_datalogging/candumpExternal can0 -l -s 0"); // candump with the option to log(-l) as well as
// continue to output to console (-s 0)
std::cout <<"Setup Complete" << std:: endl;
while(true) { //sleeping indefinitely so that the program can stay open and wait for button presses
sleep(60);
}
Edit: I also tried adding a simple 5 second pause at the beginning of the program, but this didn't seem to make any difference.

How do I keep a command executed by QProcess when the process ends?

I'm trying to find a way to keep the commands executed by QProcess after the GUI program is terminated on a Linux system. Right now, when the process ends, all the commands that were executed are gone. Is there a way to keep them after QProcess is terminated?
// code which executes a command on Linux
QProcess *mproc = new QProcess(this);
QStringList args;
mproc->setWorkingDirectory("/home/test");
args << "-c" << "source tool_def1.env; source tool_def2.env; myProg";
mproc->start("/bin/csh", args);
The tool_def1.env and tool_def2.env files contain some environment variables needed for executing myProg, like set path = (~~~~).
In the GUI program, this code works fine. Now I want to execute the myProg program, in the terminal from which the GUI program was run, after the GUI program has terminated.
But if the GUI program is terminated, I can't run myProg because the environment variables from the tool_def1.env and tool_def2.env files are gone.
Is it possible to keep the environment variables? Or, is it possible to execute the myProg program in another process with the environment variables of the mproc process, as follows?
QProcess *mproc2 = new QProcess(this);
mproc2->setWorkingDirectory("/home/test2");
mproc2->start("myProg");
The overload of QProcess::startDetached you are using is a static method, so it will not take into consideration attributes of particular instances, i.e., mproc->setWorkingDirectory("/home/test") doesn't set the working directory for the static method, only for mproc. When you launch the process, as the working directory is not set for the static call, the program cannot be found and it fails.
As you can see in the documentation, the static startDetached also admits the working directory as a parameter, so you can change your code to:
QStringList args;
args << "-c" << "source tool_def1.env; source tool_def2.env; myProg";
QProcess::startDetached("/bin/csh", args, "/home/test");
Another way is using the non-static version, which requires the program to be specified separately:
QProcess mproc(this);
QStringList args;
args << "-c" << "source tool_def1.env; source tool_def2.env; myProg";
mproc.setArguments(args);
mproc.setWorkingDirectory("/home/test");
mproc.setProgram("/bin/csh");
qint64 pid; // to store the process ID (will become invalid if the child process exits)
mproc.startDetached(&pid);
Regarding your second question, take a look at QProcess::setProcessEnvironment. Just note that you'll have to use the non-static way to set the environment of the process. You can get the environment variables of the current process using QProcessEnvironment::systemEnvironment.
mproc.setProcessEnvironment(QProcessEnvironment::systemEnvironment());
Update from comments: if you want to always use the environment variables active when the GUI application was running (is it some kind of configurator?) you can just save them to a file (a JSON for example), then load and set them manually from the second process.
To extract them, you can do something like:
const auto env_vars = QProcessEnvironment::systemEnvironment().toStringList();
Now env_vars will be a list of strings with the format NAME_OF_ENVAR=VALUE_OF_ENVAR. You can save such a list to a file (you will need to prepend an export at the beginning of each line for it to be usable with source).
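A minimal sketch of that extraction step (the file path and the export prefix are just illustrative choices, not anything the question requires):
#include <QFile>
#include <QProcessEnvironment>
#include <QTextStream>

// Writes the current process environment as "export NAME=VALUE" lines,
// so a later shell can bring it back with `source <path>`.
void saveEnvironmentForSource(const QString &path /* e.g. "/home/test/saved_env.sh" */)
{
    QFile file(path);
    if (!file.open(QIODevice::WriteOnly | QIODevice::Text))
        return;

    QTextStream out(&file);
    const QStringList envVars = QProcessEnvironment::systemEnvironment().toStringList();
    for (const QString &var : envVars)
        out << "export " << var << "\n";
}
Note that values containing spaces or shell metacharacters would need extra quoting before being sourced.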
I've tested the static version in Windows (VS 15.8.2 and Qt 5.10.0) and it worked as expected. Code used:
#include <qprocess.h>
int main(int argc, char* argv[])
{
QProcess::startDetached("cmd", QStringList() << "/c" << "test.exe", "../test/");
return 0;
}
where test.exe's code is basically a never ending process.
Note: An interesting fact, and a note for developers using VS. For the same program and build, if it is executed from the command line it works correctly: the application ends and the second process keeps running in the background. But if it is executed from the VS IDE, the console window stays alive, and if I close it, the second process is killed too. When debugged, the debugger ends but the console is still shown. I suppose it is because VS somehow tracks all created processes when they are launched from the IDE.

pidof from a background script for another background process

I wrote a C++ program to check whether a process is running or not. That process is launched independently in the background. My program works fine when I run it in the foreground, but when I schedule it, it does not work.
int PID= ReadCommanOutput("pidof /root/test/testProg1"); /// also tested with pidof -m
I made a file in /etc/cron.d/myscript to schedule it as follows:
45 15 * * * root /root/ProgramMonitor/./testBkg > /root/ProgramMonitor/OutPut.txt
What could be the reason for this?
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
using namespace std;

string ReadCommanOutput(string command)
{
    string output = "";
    int its = system((command + " > /root/ProgramMonitor/macinfo.txt").c_str());
    if (its == 0)
    {
        ifstream reader1("/root/ProgramMonitor/macinfo.txt", fstream::in);
        if (!reader1.fail())
        {
            while (!reader1.eof())
            {
                string line;
                getline(reader1, line);
                if (reader1.fail()) // for last read
                    break;
                if (!line.empty())
                {
                    stringstream ss(line.c_str());
                    ss >> output;
                    cout << command << " output = [" << output << "]" << endl;
                    break;
                }
            }
            reader1.close();
            remove("/root/ProgramMonitor/macinfo.txt");
        }
        else
            cout << "/root/ProgramMonitor/macinfo.txt not found !" << endl;
    }
    else
        cout << "ERROR: code = " << its << endl;
    return output;
}
Its output comes out as "ERROR: code = 256".
Thanks in advance.
If you really wanted to pipe(2), fork(2), execve(2) then read the output of a pidof command, you should at least use popen(3) since ReadCommandOutput is not in the Posix API; at the very least
pid_t thepid = 0;
FILE* fpidof = popen("pidof /root/test/testProg1", "r");
if (fpidof) {
    int p = 0;
    if (fscanf(fpidof, "%d", &p) > 0 && p > 0)
        thepid = (pid_t)p;
    pclose(fpidof);
}
BTW, you did not specify what should happen if several processes (or none) are running testProg1...; you also need to check the result of pclose.
But you don't need to; actually, you would want to build the pidof command yourself, perhaps using snprintf (and you should be scared of code injection into that command, so quote arguments appropriately). Alternatively, you could simply find your program by accessing the proc(5) file system: opendir(3) on "/proc/", then loop with readdir(3), and for every entry with a numerical name like 1234 (one starting with a digit), readlink(2) its exe entry, e.g. /proc/1234/exe, as sketched below. Don't forget the closedir, and test every syscall.
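A rough sketch of that /proc scan (error handling trimmed; the target path is the question's, and this only matches processes whose executable resolves exactly to it):
#include <ctype.h>
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// Returns the pid of the first process whose /proc/<pid>/exe resolves to 'exepath', or -1.
long find_pid_by_exe(const char *exepath)
{
    DIR *dir = opendir("/proc");
    if (!dir)
        return -1;
    long found = -1;
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue; // only numeric entries are processes
        char linkpath[64];
        char target[PATH_MAX];
        snprintf(linkpath, sizeof linkpath, "/proc/%s/exe", ent->d_name);
        ssize_t len = readlink(linkpath, target, sizeof target - 1);
        if (len < 0)
            continue; // e.g. permission denied, or the process already exited
        target[len] = '\0';
        if (strcmp(target, exepath) == 0) {
            found = atol(ent->d_name);
            break;
        }
    }
    closedir(dir);
    return found;
}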
Please read Advanced Linux Programming
Notice that libraries like Poco or toolkits like Qt (which has a layer QCore without any GUI, and providing QProcess ....) could be useful to you.
As to why your pidof is failing, we can't guess (perhaps a permission issue, or perhaps there is no longer any process like the one you want). Try to run it as root in another terminal at least. Test its exit code, and display both its stdout and stderr, at least for debugging purposes.
Also, a better way (assuming that testProg1 is some kind of server application, to be run in at most one single process) might be to define different conventions. Your testProg1 might start by writing its own pid into /var/run/testProg1.pid, and your current application might then read the pid from that file and check, with kill(2) and a 0 signal number, that the process still exists.
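A sketch of that convention on the monitoring side (the pidfile path follows the suggestion above; errno handling is minimal):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

// Returns true if the pid recorded in the pidfile refers to a live process.
bool process_alive_from_pidfile(const char *pidfile /* e.g. "/var/run/testProg1.pid" */)
{
    FILE *f = fopen(pidfile, "r");
    if (!f)
        return false;
    long pid = 0;
    int ok = fscanf(f, "%ld", &pid);
    fclose(f);
    if (ok != 1 || pid <= 0)
        return false;
    // Signal 0 performs the existence/permission checks without actually sending a signal.
    if (kill((pid_t)pid, 0) == 0)
        return true;
    return errno == EPERM; // process exists but we lack permission to signal it
}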
BTW, you could also improve your crontab(5) entry. You could make it run some shell script which uses logger(1) and (for debugging) runs pidof with its output redirected elsewhere. You might also read the mail perhaps sent to root by cron.
Finally, I solved this problem by using the su command.
I used
ReadCommanOutput("su -c 'pidof /root/test/testProg1' - root");
instead of
ReadCommanOutput("pidof /root/test/testProg1");

Problems with system() calls in Linux

I'm working on an init for an initramfs, in C++, for Linux. This init is used to unlock the DM-Crypt w/ LUKS encrypted drive and set the LVM drives to be available.
Since I don't want to have to reimplement the functionality of cryptsetup and gpg I am using system calls to call the executables. Using a system call to call gpg works fine if I have the system fully brought up already (I already have a bash script based initramfs that works fine in bringing it up, and I use grub to edit the command line to bring it up using the old initramfs). However, in the initramfs it never even acts like it gets called. Even commands like system("echo BLAH"); fail.
So, does anyone have any input?
Edit: So I figured out what was causing my errors. I have no clue as to why it would cause errors, but I found it.
In order to allow hotplugging, I needed to write /sbin/mdev to /proc/sys/kernel/hotplug...however I ended up switching around the parameters (on a function I wrote myself no less) so I was writing /proc/sys/kernel/hotplug to /sbin/mdev.
I have no clue as to why that would cause the problem, however it did.
Amardeep is right, system() on POSIX type systems runs the command through /bin/sh.
I doubt you actually have a legitimate need to invoke these programs you speak of through a Bourne shell. A good reason would be if you needed them to have the default set of environment variables, but since /etc/profile is probably also unavailable so early in the boot process, I don't see how that can be the case here.
Instead, use the standard fork()/exec() pattern:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int system_alternative(const char* pgm, char *const argv[])
{
    pid_t pid = fork();
    if (pid > 0) {
        // We're the parent, so wait for the child to finish
        int status;
        waitpid(pid, &status, 0);
        return status;
    }
    else if (pid == 0) {
        // We're the child, so run the specified program. Our exit status will
        // be that of the child program unless the execv() syscall fails.
        execv(pgm, argv);
        _exit(127); // only reached if execv() failed; don't fall back into the parent's code
    }
    else {
        // Something horrible happened, like system out of memory
        return -1;
    }
}
If you need to read stdout from the called process or send data to its stdin, you'll need to do some standard handle redirection via pipe() or dup2() in there.
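Here is a hedged sketch of that redirection, capturing the child's stdout through a pipe (the function name is illustrative, and error handling is kept minimal):
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Runs pgm/argv like system_alternative(), but also echoes whatever the child
// writes to its stdout.
int run_and_capture_stdout(const char* pgm, char *const argv[])
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {
        // Child: route stdout into the write end of the pipe, then exec.
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]);
        close(fds[1]);
        execv(pgm, argv);
        _exit(127); // only reached if execv() failed
    }

    // Parent: read from the pipe until the child closes its end.
    close(fds[1]);
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fds[0]);

    int status = 0;
    waitpid(pid, &status, 0);
    return status;
}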
You can learn all about this sort of thing in any good Unix programming book. I recommend Advanced Programming in the UNIX Environment by W. Richard Stevens. The second edition coauthored by Rago adds material to cover platforms that appeared since Stevens wrote the first edition, like Linux and OS X, but basics like this haven't changed since the original edition.
I believe the system() function executes your command in a shell. Is the shell executable mounted and available that early in your startup process? You might want to look into using fork() and execve().
EDIT: Be sure your cryptography tools are also on a mounted volume.
What do you have in the initramfs? You could do the following:
#include <cstdlib>

int main() {
    return system("echo hello world");
}
And then strace it in an initscript like this:
strace -o myprog.log myprog
Look at the log once your system is booted

How to get forkpty/execvp() to properly handle redirection and other bash-isms?

I've got a GUI C++ program that takes a shell command from the user, calls forkpty() and execvp() to execute that command in a child process, while the parent (GUI) process reads the child process's stdout/stderr output and displays it in the GUI.
This all works nicely (under Linux and MacOS/X). For example, if the user enters "ls -l /foo", the GUI will display the contents of the /foo folder.
However, bash niceties like output redirection aren't handled. For example, if the user enters "echo bar > /foo/bar.txt", the child process will output the text "bar > /foo/bar.txt", instead of writing the text "bar" to the file "/foo/bar.txt".
Presumably this is because execvp() is running the executable command "echo" directly, instead of running /bin/bash and handing it the user's command to massage/preprocess.
My question is, what is the correct child process invocation to use, in order to make the system behave exactly as if the user had typed in his string at the bash prompt? I tried wrapping the user's command with a /bin/bash invocation, like this: /bin/bash -c the_string_the_user_entered, but that didn't seem to work. Any hints?
P.S. Just calling system() isn't a good option, since it would cause my GUI to block until the child process exits, and some child processes may not exit for a long time (if ever!).
If you want the shell to do the I/O redirection, you need to invoke the shell so it does the I/O redirection.
char *args[4];
args[0] = "bash";
args[1] = "-c";
args[2] = ...string containing the command line with I/O redirection...;
args[3] = 0;
execv("/bin/bash", args);
Note the change from execvp() to execv(); you know where the shell is - at least, I gave it an absolute path - so the path-search is not relevant any more.
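For context, a minimal sketch of how this might slot into the forkpty() setup the question describes (Linux, link with -lutil; the command string is just an example input):
#include <pty.h>      // forkpty()
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    const char *user_command = "echo bar > /tmp/bar.txt; cat /tmp/bar.txt"; // example user input
    int master_fd = -1;

    pid_t pid = forkpty(&master_fd, NULL, NULL, NULL);
    if (pid == 0) {
        // Child: let bash parse the whole string so redirection, pipes, etc. work.
        execl("/bin/bash", "bash", "-c", user_command, (char *)NULL);
        _exit(127); // only reached if execl() failed
    }
    else if (pid > 0) {
        // Parent: read the child's output from the pty master until it exits.
        char buf[256];
        ssize_t n;
        while ((n = read(master_fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(master_fd);
        waitpid(pid, NULL, 0);
    }
    return 0;
}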