Hide sh: -c error messages when calling "system" in C++ on Linux

I'm using system to execute a command with its arguments. I don't want to use exec/fork.
When I have unmatched quotes in my command this error appears:
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
How do I suppress these shell's error messages? I tried adding >/dev/null 2>&1 at the end of the invalid command but it doesn't suppress the shell error message. For background, I'm running user supplied commands which may or may not be valid. I can't know in advance if they'll be valid or not, but I want to suppress the error messages regardless.
Here's an example of code that generates the type of error I'm trying to suppress:
#include <cstdlib>
int main()
{
// This command is deliberately invalid; I'm trying to suppress the shell syntax error message it triggers
system("date ' >/dev/null 2>&1");
return 0;
}
Can you help me?

Bear in mind that system forks a process and then executes the command you've provided in a shell. The new process inherits the file descriptors from its parent, and it is that new process that writes to its standard error.
So, this code snippet may do what you want:
#include <stdlib.h>
#include <unistd.h>
int main()
{
int duperr;
duperr = dup(2);
close(2); /* close stderr so the new process can't output the error */
system("date '");
dup2(duperr, 2);
close(duperr);
/* here you can use stderr again */
write(2, "hello world\n", 12);
return 0;
}
To have writes to stderr silently discarded instead, you can redirect the errors to /dev/null:
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
int main(void) {
int devnull_fd, duperr_fd;
/* get a descriptor to /dev/null (devnull_fd) */
devnull_fd = open("/dev/null", O_WRONLY | O_APPEND);
/* save a descriptor "pointing" to the actual stderr (duperr_fd) */
duperr_fd = dup(STDERR_FILENO);
/* now STDERR_FILENO "points" to "/dev/null" */
dup2(devnull_fd, STDERR_FILENO);
system("date '");
/* restore stderr */
dup2(duperr_fd, STDERR_FILENO);
close(duperr_fd);
close(devnull_fd);
/* here you can use stderr again */
write(STDERR_FILENO, "hello world\n", 12);
return 0;
}
Remember to check the return value of the function calls.

The logical problem is that you're not redirecting anything, because the >/dev/null 2>&1 part ends up inside a single-quoted string.
The string never ends, and therefore the shell complains on stderr.
The solution for this specific case is to escape single quotes by preceding them with a backslash, i.e. to call system("date \\' >/dev/null 2>&1"). Note, however, that bash's exact quoting rules are a small nightmare.
A workaround I can think of in general is to save the command in a file, say cmd.txt, and then execute
system("bash < cmd.txt >/dev/null 2>&1");
Maybe this can also be done with bash -c without creating the file, but I was simply not able to make sense of the single-quote escaping rules for -c, and I'm not going to waste neurons on the totally broken bash grammar.

You might want to run a correct command by removing the stray quote, so
system("date >/dev/null 2>&1");
However, what is the point of running that? If (your PATH being common enough that...) date is indeed /bin/date (see date(1)...), its only effect is to produce some output, and you are discarding it by redirecting it to /dev/null. So the only visible effect of your call to system is to spend several million CPU cycles. Of course, in weird cases it could fail (e.g. because /bin/sh does not exist, /bin/date does not exist, fork has failed, etc.).
Maybe you are coding a shell. Then you should parse the command and do the fork & execve yourself.
If you want to get the output of some command like date, better use popen(3) with pclose. In the particular case of date, it is much better to use time(2), localtime(3), and strftime(3) (then you don't depend on an external command, and you won't need popen).
If you really want to suppress the output of the shell started by system, you could do the fork & execve (of /bin/sh -c) yourself and redirect the standard and error outputs (to a file descriptor opened on /dev/null) with dup2. See also daemon(3) & wordexp(3).
Maybe embedding an interpreter in your application (e.g. lua or guile) would be more sensible (assuming you don't care about user mistakes)

Double echo when running commands under a pty

I'm writing a program that creates a pty, then forks and executes an ssh command with the slave side of the pty as its stdin. The full source code is here.
#include <iostream>
#include <string>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <fcntl.h>
using namespace std;
int main() {
int fd = posix_openpt(O_RDWR);
grantpt(fd);
unlockpt(fd);
pid_t pid = fork();
if (pid == 0) { //slave
freopen(ptsname(fd), "r", stdin);
execlp("ssh", "ssh", "user@192.168.11.40", NULL);
} else { //master
FILE *f = fdopen(fd, "w");
string buf;
while (true) {
getline(cin, buf);
if (!cin) {
break;
}
fprintf(f, "%s\n", buf.c_str());
}
}
}
After executing this program and inputting just echo hello (and a newline), the child command re-sends my input before its own output, thus duplicating my input line:
~ $ echo hello
echo hello #duplication
hello
~ $
I think this is due to the fact that a pty behaves almost the same as a normal terminal. If I add freopen("log.txt", "w", stdout); and input the same command, I get just
echo hello #This is printed because I typed it.
and the contents of log.txt is this:
~ $ echo hello #I think this is printed because a pty simulates input.
hello
~ $
How can I avoid the duplication?
Is that realizable?
I know it is somehow realizable, but I don't know how. In fact, the rlwrap command behaves the same as my program, except that it doesn't have any duplication:
~/somedir $ rlwrap ssh user@192.168.11.40
~ $ echo hello
hello
~ $
I'm reading the source code of rlwrap now, but haven't yet understood its implementation.
Supplement
As suggested in this question (to me, not the answer but the OP's code was helpful), unsetting the ECHO terminal flag disables the double echoing. In my case, adding this snippet to the slave block solved the problem.
struct termios terminal_attribute; /* needs <termios.h> */
int fd_slave = fileno(fopen(ptsname(fd), "r"));
tcgetattr(fd_slave, &terminal_attribute);
terminal_attribute.c_lflag &= ~ECHO;
tcsetattr(fd_slave, TCSANOW, &terminal_attribute);
It should be noted that this is not what rlwrap does. As far as I tested, rlwrap <command> never duplicates its input line for any <command>. However, my program echoes twice for some <command>s. For example,
~ $ echo hello
hello #no duplication
~ $ /usr/bin/wolfram
Mathematica 12.0.1 Kernel for Linux ARM (32-bit)
Copyright 1988-2019 Wolfram Research, Inc.
In[1]:= 3 + 4
3 + 4 #duplication (my program makes this while `rlwrap` doesn't)
Out[1]= 7
In[2]:=
Is this because the <command> (ssh, when I run wolfram remotely) re-enables echoing? Anyway, I should keep reading the source code of rlwrap.
As you already observed, after the child has called exec() the terminal flags of the slave side are no longer under your control, and the child may (and often will) re-enable echo. This means that it is of little use to change the terminal flags in the child before calling exec.
Both rlwrap and rlfe solve the problem in their own (different) ways:
rlfe keeps the entered line, but removes the echoed input from the child's output before displaying it
rlwrap removes the entered line and lets it be replaced by the echo
Whatever approach you use, you have to know whether your input has been (in rlfe's case) or will be (in rlwrap's case) echoed back. rlwrap, at least, does this by not closing the pty's slave end in the parent process, and then watching its terminal settings (in this case, the ECHO bit in its c_lflag) to know whether the slave will echo or not.
All this is rather cumbersome, of course. The rlfe approach is probably easier, as it doesn't require the use of the readline library, and you could simply strcmp() the received output with the input you just sent (which will only go wrong in the improbable case of a command like cat run with echo disabled on its input).

What is the safe way to get the return status of an executed shell command in C++?

Since the function std::system(const char* command) from cstdlib doesn't guarantee that it will return the correct exit status from the shell, how can I run a command in a shell using C/C++ and have a guarantee that it will give me the right return value?
In my case, for example, I ran a command with:
bool is_process_running(std::string p_name){
std::string command_str= "ps aux | grep '" + p_name + "' | egrep -v '(grep|bash)'";
int result = system(command_str.c_str());
return result == 0;
}
If I run, for example, ps aux | grep 'my_process' | egrep -v '(grep|bash)' directly in the terminal and then echo $?, I see it return 0 when my_process is running and 1 when I use a non-running process. But the code above returns a different value. This code used to work when I tested on CentOS 6, but on CentOS 7 it doesn't work anymore. So, what can I use to run the shell command and get the correct result?
I also found a solution using the pidof command, but I can't use it because pidof doesn't consider the arguments passed to my_process, which I need since I have many instances of this process, each with different arguments.
The problem is that the exit status of Bash isn't guaranteed to be the exit status of the last executed command. If there's an error in the command you pass, you can't really distinguish it from egrep failing to match anything.
What you need to do is both get the exit status and parse the output (both standard output and standard error). This can be accomplished by copying much of what the system function does: first create a pipe for the output (stderr and stdout can share the same pipe), then fork a new process, and in the child execute the shell with the pipeline.
In the parent process you wait for the child to exit, and get its exit status. If it's zero then you know everything worked fine, and you can discard all the output from the pipe. If it's non-zero you have to read the output from the pipe to see what happened, if there was some error except egrep failing.

How do I make my C++ code know whether a command run from "system(cmd)" failed or not?

Let's assume I am running a Unix command using system("foocmd param1") in C++.
If foocmd prints "Invalid argument" to the terminal via stderr, how do I get my C++ code to know whether foocmd failed?
Here is my attempted solution:
My assumption is that I should check whether calling the command wrote anything to stderr.
To do that, I tried switching over to popen. Currently, this is the way I check: I first redirect stderr into a file.
sprintf(cmd, "foocmd param1 2>temp.txt");
system(cmd);
Then I check if temp.txt is empty or not.
But there has to be a better way. Can anyone lend me a hand?
The usual way is to examine the return value of system():
If it's zero, the command executed successfully and exited with a status of 0.
If it's negative, there was a problem with system() itself, and you can't assume anything.
If it's positive, then you can use WEXITSTATUS() and related macros to find out how the process exited.
See the system(3) man page for the full details.
Most of the time, you are only interested in whether the command says it succeeded:
if (system(cmd)) {
syslog(LOG_WARNING, "Command \"%s\" failed", cmd);
/* maybe some more error handling here */
goto err_return;
}

c++ - Way to know if a Linux command exists before executing it

I'd like to write a function that generates a gz file. The function will only be operational on Linux, so I'd like to use the gzip command (just executing an external command).
So far I have this:
bool generate_gz( const String& path )
{
bool res = false;
// LINUX
#ifndef __WXMSW__
if( !gzip_command_exists())
cout << "cannot compress file. 'gzip' command is not available.\n";
else
res = (0 == execute_command(String::Format("gzip %s", path.c_str())));
// WINDOWS
#else
// do nothing - result will be false
#endif
return res;
}
bool gzip_command_exists()
{
// TBD
}
Question
Is there a way to implement gzip_command_exists()? If so, does it have to involve running (or trying to run) the gzip command?
The simplest is to execute "which gzip" via system() and check the exit code of the call. From the system() man page:
RETURN VALUE
The value returned is -1 on error (e.g. fork(2) failed), and the return status of the command otherwise. This latter return status is in the format specified in wait(2). Thus, the exit code of the command will be WEXITSTATUS(status). In case /bin/sh could not be executed, the exit status will be that of a command that does exit(127).
What to look for:
:~$ which gzip
/bin/gzip
:~$ echo $?
0
:~$ which gzip11
:~$ echo $?
1
If you do not want to spawn an external command, you can use the stat function to check if a file exists and if it is executable on a POSIX system.
If you do not want to hard-code the path to gzip, it is slightly more complicated. You will have to obtain the PATH environment variable, split it on colons, and then check each directory for gzip. Again, the name and format of path variables are POSIX-specific. Check the getenv function to read the path, and you could use strtok to split it.
It is questionable if it is worth checking, though, vs. just trying to run it and handling any errors.
You could use popen(3) to read the output of /usr/bin/which gzip (and you could also use it to compress on the fly by write-popen-ing a gzip > file.gz command). You could also do FILE* pgzipv = popen("gzip --version", "r"); then fgets the first line and pclose it.
You could consider using getenv("PATH") then looping over it with an access test on each constructed path, obtained by appending /gzip to each element of the PATH, etc. You could also fork then execvp a gzip --version with stdout and stderr suitably redirected, etc.
Notice that both popen(3) and system(3) will report failure when asked to execute a non-existing program (since they both fork(2) a /bin/sh shell with -c, which exits with status 127 in that case). So you don't need to test for the existence of gzip, but you do always need to test the success of system or popen (which can fail for many reasons; see below for fork failure, and the documentation for other failures).
To be picky, checking that gzip exists is useless: the file /bin/gzip could (however unlikely) have been removed between your check (e.g. with access as below, or with popen as above) and your later invocation of system or popen; so your initial check for gzip doesn't buy you anything.
On most Linux systems, gzip is generally available at /bin/gzip (and in practice gzip is always installed);
this is required by the Filesystem Hierarchy Standard (which says that if gzip is installed, it should be at that file path). Then you could just use access(2), e.g. with code like
#define GZIP_PATH "/bin/gzip" /* per FSH, see www.pathname.com/fhs */
if (access(GZIP_PATH, X_OK)) { perror(GZIP_PATH); exit(EXIT_FAILURE); };
Finally, you don't need to fork a gzip process at all to gzip-compress a file. You could (and should) simply use a library like zlib (which is required by the Linux Standard Base as libz.so.1); you want its gzopen, gzwrite, gzprintf, gzputs, gzclose, etc. functions. That would be faster (no need to fork(2) any external process) and more reliable (no dependency on an external program like gzip; it would work even if fork is not possible because limits have been reached; see setrlimit(2) with RLIMIT_NPROC and the ulimit builtin of bash(1)).
See also Advanced Linux Programming

Can system() return before piped command is finished

I am having trouble using system() from libc on Linux. My code is this:
system( "tar zxvOf some.tar.gz fileToExtract | sed 's/some text to remove//' > output" );
std::string line;
int count = 0;
std::ifstream inputFile( "output" );
while( std::getline( inputFile, line ) )
++count;
I run this snippet repeatedly, and occasionally I find that count == 0 at the end of the run: no lines have been read from the file. Yet when I look at the file system, the file has the contents I would expect (more than zero lines).
My question is should system() return when the entire command passed in has completed or does the presence of the pipe '|' mean system() can return before the part of the command after the pipe is completed?
I have explicitly not used a '&' to background any part of the command to system().
To further clarify I do in practice run the code snippet multiples times in parallel but the output file is a unique filename named after the thread ID and a static integer incremented per call to system(). I'm confident that the file being output to and read is unique for each call to system().
According to the documentation
The system() function shall not return until the child process has terminated.
Perhaps capture the contents of "output" when it fails and see what they are? In addition, checking the return value of system would be a good idea. One scenario is that the shell command you are running is failing and you aren't checking the return value.
system(...) calls the standard shell to execute the command, and the shell itself returns only after it has regained control over the terminal. So if one of the programs is backgrounded, system will return early.
Backgrounding happens by suffixing a command with &, so check whether the string you pass to system(...) contains any &, and if so make sure they're properly quoted from shell processing.
system will only return after its command has completed, and the file output should be readable in full after that. But ...
... multiple instances of your code snippet run in parallel would interfere, because they all use the same file output. If you just want to examine the contents of output and do not need the file itself, I would use popen instead of system. popen allows you to read the output of the pipe via a FILE*.
In case of a full file system, you could also see an empty output file, a condition the popen version would have no trouble with.
To notice errors like a full file system, always check the return codes of your calls (system, popen, ...). If there is an error, the man page will tell you to check errno. The errno number can be converted to human-readable text with strerror, or printed with perror.