I'm writing a program to create a pty, then fork and execute an ssh command with the slave side of the pty as its stdin. The full source code is below.
#include <iostream>
#include <string>
#include <cstdio>
#include <cstdlib>
#include <unistd.h>
#include <fcntl.h>
using namespace std;

int main() {
    int fd = posix_openpt(O_RDWR); // open the pty master
    grantpt(fd);
    unlockpt(fd);
    pid_t pid = fork();
    if (pid == 0) { // child: attach the pty slave to stdin and exec ssh
        freopen(ptsname(fd), "r", stdin);
        execlp("ssh", "ssh", "user@192.168.11.40", NULL);
    } else { // parent: forward lines typed on our stdin to the pty master
        FILE *f = fdopen(fd, "w");
        string buf;
        while (true) {
            getline(cin, buf);
            if (!cin) {
                break;
            }
            fprintf(f, "%s\n", buf.c_str());
        }
    }
}
After executing this program and inputting just echo hello (and a newline), the child command re-sends my input before its own output, thus duplicating my input line:
~ $ echo hello
echo hello #duplication
hello
~ $
I think this is due to the fact that a pty behaves almost the same as a normal terminal. If I add freopen("log.txt", "w", stdout); and input the same command, I get just
echo hello #This is printed because I typed it.
and the contents of log.txt are:
~ $ echo hello #I think this is printed because a pty simulates input.
hello
~ $
How can I avoid the duplication?
Is that even achievable?
I know it must be possible somehow, but I don't know how. In fact, the rlwrap command behaves the same as my program, except that it doesn't duplicate anything:
~/somedir $ rlwrap ssh user@192.168.11.40
~ $ echo hello
hello
~ $
I'm reading the source code of rlwrap now, but haven't yet understood its implementation.
Supplement
As suggested in this question (for me, the question itself rather than the answer was helpful), unsetting the ECHO terminal flag disables the double echoing. In my case, adding this snippet to the slave block solved the problem:
termios terminal_attribute;                  // needs <termios.h>
int fd_slave = open(ptsname(fd), O_RDONLY);  // fd is the pty master from above
tcgetattr(fd_slave, &terminal_attribute);
terminal_attribute.c_lflag &= ~ECHO;         // stop the slave from echoing
tcsetattr(fd_slave, TCSANOW, &terminal_attribute);
It should be noted that this is not what rlwrap does. As far as I have tested, rlwrap <command> never duplicates its input line for any <command>. However, my program echoes twice for some <command>s. For example,
~ $ echo hello
hello #no duplication
~ $ /usr/bin/wolfram
Mathematica 12.0.1 Kernel for Linux ARM (32-bit)
Copyright 1988-2019 Wolfram Research, Inc.
In[1]:= 3 + 4
3 + 4 #duplication (my program makes this while `rlwrap` doesn't)
Out[1]= 7
In[2]:=
Is this because the <command> (ssh, when I run wolfram remotely) re-enables echoing? Anyway, I should keep reading the source code of rlwrap.
As you already observed, after the child has called exec() the terminal flags of the slave side are not under your control anymore, and the child may (and often will) re-enable echo. This means that it is not of much use to change the terminal flags in the child before calling exec.
Both rlwrap and rlfe solve the problem in their own (different) ways:
rlfe keeps the entered line, but removes the echoed input from the child's output before displaying it
rlwrap removes the entered line and lets it be replaced by the echo
Whatever approach you use, you have to know whether your input has been (in rlfe's case) or will be (in rlwrap's case) echoed back. rlwrap, at least, does this by not closing the pty's slave end in the parent process, and then watching its terminal settings (specifically, the ECHO bit in its c_lflag) to know whether the slave will echo or not.
All this is rather cumbersome, of course. The rlfe approach is probably easier, as it doesn't require the use of the readline library, and you could simply strcmp() the received output with the input you just sent (which will only go wrong in the improbable case of a cat command that disables echo on its input).
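For illustration, here is a minimal sketch (not rlwrap's actual code; the function name is made up) of how the parent could keep the slave end open and inspect its ECHO bit before deciding whether to drop an echoed line:
#include <termios.h>

// Minimal sketch: the parent keeps fd_slave open and polls its ECHO bit
// to know whether the child's terminal will echo our input back.
bool slave_will_echo(int fd_slave) {
    struct termios attr;
    if (tcgetattr(fd_slave, &attr) == -1)
        return true;                    // on error, assume echo is on
    return (attr.c_lflag & ECHO) != 0;  // true: expect the line echoed back
}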
Related
What I'm trying to achieve is to open a new terminal from a C/C++ program and run vim in it. I'm doing this by forking and exec'ing "xterm -e vim [fname]". Try as I might, I can't seem to get xterm to understand what I want it to do.
Below is the relevant code segment:
int pid = fork();
if (pid) {
    // parent
    int retstat;
    waitpid(pid, &retstat, 0);
} else {
    // child
    char* ifname_cchararr = (char*)malloc(ifname.length() + 1);
    strcpy(ifname_cchararr, ifname.c_str());
    char* const argv[4] = {"-e", "vim", ifname_cchararr, NULL};
    // std::cout << ifname_cchararr << std::endl;
    execvp("xterm", argv);
}
Running the program results in xterm complaining:
-e : Explicit shell already was /usr/bin/vim
-e : bad command line option "testfile"
I get the feeling I've messed up argv somehow, but I'm confused, because running the following in an xterm window:
xterm -e vim testfile
works perfectly fine.
Please enlighten me!
You forgot to add xterm as the first argument in argv. It may seem a bit weird that you have to add the program name to argv, since you already tell execvp which program you're calling, but that's how it is. For more information on why, see this recently asked question on Unix & Linux: Why does argv include the program name?
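For example, the child branch could look like this (a sketch keeping the question's variable names):
// child: argv[0] must be the program's own name; the real arguments follow.
char* const argv[5] = {
    const_cast<char*>("xterm"),  // argv[0]: the program name
    const_cast<char*>("-e"),
    const_cast<char*>("vim"),
    ifname_cchararr,
    NULL                         // execvp also requires a NULL terminator
};
execvp("xterm", argv);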
I would like to set up a script to continuously parse a specific marker in an XML file.
The script contains the following while loop:
function scan_t()
{
    INPUT_FILE=${1}
    while : ; do
        if [[ -f "$INPUT_FILE" ]]
        then
            ret=`cat ${INPUT_FILE} | grep "<data>" | awk -F"=|>" '{print $2}' | awk -F"=|<" '{print $1}'`
            if [[ "$ret" -ne 0 ]] && [[ -n "$ret" ]]
            then
                ...
            fi
        fi
    done
}
scan_t "/tmp/test.xml"
The line format is:
<data>0</data> or <data>1</data> <data>2</data> ..
Even though the condition if [[ -f "$INPUT_FILE" ]] has been added to the script, I sometimes get:
cat: /tmp/test.xml: No such file or directory.
Indeed, $INPUT_FILE is normally consumed by another process, which is charged to suppress the file after reading it.
This while loop is only used for testing; the cat error itself doesn't matter, but I would like to hide this message because it pollutes the terminal a lot.
If some other process can also read and remove the file before this script sees it, you've designed your system with a race condition. (I assume that "charged to suppress" means "designed to unlink"...)
If it's optional for this script to see every input file, then just redirect stderr to /dev/null (i.e. ignore errors when the race condition bites). If it's not optional, then have this script rename the input file to something else, and have the other process watch for that. Check for that file existing before you do the rename, to make sure you don't overwrite a file the other process hasn't read yet.
Your loop has a horrible design. First, you're busy-waiting (with no sleep at all) on the file coming into existence. Second, you're running 4 programs when the input exists, instead of 1.
The busy-wait can be avoided by using inotifywait to watch the directory for changes. So the if [[ -f $INPUT_FILE ]] loop body only runs after a modification to the directory, rather than as fast as a CPU core can run it.
The second is simpler to address: never cat file | something. Either something file, or something < file if something doesn't take filenames on its command line, or behaves differently. cat is only useful if you have multiple files to concatenate. For reading a file into a shell variable, use foo=$(<file).
I see from comments you've already managed to turn your whole pipeline into a single command. So write
INPUT_FILE=foo
inotifywait -m -e close_write -e moved_to --format %f . |
while IFS= read -r event_file; do
    [[ $event_file == $INPUT_FILE ]] &&
        awk -F '[<,>]' '/data/ {printf "%s ",$3} END {print ""}' "$INPUT_FILE" 2>/dev/null
    # echo "$event_file" &&
    # date
done
# tested and working with the commented-out echo/date commands
Note that I'm waiting for close_write and moved_to, rather than other events, to avoid jumping the gun and reading a file that's not finished being written. Put $INPUT_FILE in its own directory, so you don't get false-positive events waking up your loop for other filenames.
To also implement the rename-to-input-for-next-stage suggestion, you'd put a while [[ -e $INPUT2 ]]; do sleep 0.2; done; mv -n "$INPUT_FILE" "$INPUT2" busy-wait loop after the awk.
An alternative would be to run inotifywait once per loop iteration, but that has the potential for you to get stuck with $INPUT_FILE created before inotifywait started watching. So the producer would be waiting for the consumer to consume, and the consumer wouldn't see the event.
# Race condition with an asynchronous producer, DON'T USE
while inotifywait -qq -e close_write -e moved_to .; do
    [[ -e $INPUT_FILE ]] &&
        awk -F '[<,>]' '/data/ {printf "%s ",$3} END {print ""}' "$INPUT_FILE" 2>/dev/null
done
There doesn't seem to be a way to tell inotifywait the name of a file that doesn't exist yet, even as a filter, so the loop body needs to test for the specific file existing in the directory before using it.
If you don't have inotifywait available, you could just put a sleep into the loop. GNU sleep supports fractional seconds, like sleep 0.5. Busybox probably doesn't. You might want to write a tiny trivial C program anyway, which keeps trying to open(2) the file in a loop that includes a usleep or nanosleep. When open succeeds, redirect stdin from that, and exec your awk program. That way, there's no race possible between a stat and an open.
#include <unistd.h>     // for usleep/dup2
#include <sys/types.h>  // for open
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>      // for perror

void waitloop(const char *path)
{
    // Note: execv needs argv[0] (the program name) and a terminating NULL.
    const char *const awk_args[] = { "awk",
        "-F", "[<,>]",
        "/data/ {printf \"%s \",$3} END {print \"\"}",
        path,
        NULL
    };
    while (42) {
        int fd = open(path, O_RDONLY);
        if (-1 != fd) {
            // if you fork() here, you can avoid the shell loop too.
            dup2(fd, 0); // redirect stdin from fd. In theory should check for errors here, too.
            close(fd);   // and do this in the parent after fork
            execv("/usr/bin/awk", (char *const *)awk_args); // execv's prototype doesn't prevent it from modifying the strings?
        } else if (errno != ENOENT) {
            perror("opening the file");
        } // else ignore ENOENT
        usleep(10000); // 10 milliseconds
    }
}
// optional TODO: error-check *all* the system calls.
This compiles, but I haven't tested it. Looping inside a single process doing open / usleep is much lighter weight than running a whole process to do sleep 0.01 from a shell.
Even better would be to use inotify to watch for directory events to detect the file appearing, instead of usleep. To avoid a race, after setting up the inotify watch, do another check for the file existing, in case it got created after your last check, but before the inotify watch became active.
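A sketch of that ordering with the raw inotify(7) API might look like this (untested; the directory and file names are the caller's):
#include <sys/inotify.h>
#include <unistd.h>
#include <fcntl.h>

// Set up the watch first, then re-check for the file, then block on events.
// This closes the window between "last check" and "watch becomes active".
int wait_for_file(const char *dir, const char *path) {
    int ifd = inotify_init1(0);
    inotify_add_watch(ifd, dir, IN_CLOSE_WRITE | IN_MOVED_TO);
    int fd = open(path, O_RDONLY);     // re-check after the watch is active
    char buf[4096];
    while (fd == -1) {
        read(ifd, buf, sizeof buf);    // block until something happens in dir
        fd = open(path, O_RDONLY);
    }
    close(ifd);
    return fd;                         // caller reads stdin from this
}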
I'm using system to execute a command with its arguments. I don't want to use exec/fork.
When I have unmatched quotes in my command this error appears:
sh: -c: line 0: unexpected EOF while looking for matching `''
sh: -c: line 1: syntax error: unexpected end of file
How do I suppress the shell's error messages? I tried adding >/dev/null 2>&1 at the end of the invalid command, but it doesn't suppress the shell's error message. For background, I'm running user-supplied commands which may or may not be valid. I can't know in advance whether they'll be valid, but I want to suppress the error messages regardless.
Here's an example of code that generates the type of error I'm trying to suppress:
int main()
{
    // This command is meant to be invalid, as I'm trying to suppress the shell syntax error message
    system("date ' >/dev/null 2>&1");
    return 0;
}
Can you help me?
Bear in mind that system forks a process and then executes the command you've provided in a shell. The new process inherits the descriptors from its parent, and it is that new process that writes to its standard error.
So, this code snippet may do what you want:
#include <stdlib.h>
#include <unistd.h>

int main()
{
    int duperr;
    duperr = dup(2);
    close(2); /* close stderr so the new process can't output the error */
    system("date '");
    dup2(duperr, 2);
    close(duperr);
    /* here you can use stderr again */
    write(2, "hello world\n", 12);
    return 0;
}
To silently suppress writes to stderr, you can send them to /dev/null:
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    int devnull_fd, duperr_fd;
    /* get a descriptor to /dev/null (devnull_fd) */
    devnull_fd = open("/dev/null", O_WRONLY | O_APPEND);
    /* save a descriptor "pointing" to the actual stderr (duperr_fd) */
    duperr_fd = dup(STDERR_FILENO);
    /* now STDERR_FILENO "points" to /dev/null */
    dup2(devnull_fd, STDERR_FILENO);
    system("date '");
    /* restore stderr */
    dup2(duperr_fd, STDERR_FILENO);
    close(duperr_fd);
    close(devnull_fd);
    /* here you can use stderr again */
    write(STDERR_FILENO, "hello world\n", 12);
    return 0;
}
Remember to check the return value of the function calls.
The logical problem is that you're not redirecting anything, because the part >/dev/null 2>&1 ends up contained in a single-quoted string.
That string, however, never ends, and therefore bash complains on stderr.
The solution for this specific case is to escape the single quote by preceding it with a backslash, i.e. to call system("date \\' >/dev/null 2>&1"). Note however that bash's exact quoting rules are a small nightmare.
A general workaround I can think of is to save the command in a file, say cmd.txt, and then execute
system("bash < cmd.txt >/dev/null 2>&1");
Maybe this can also be done with bash -c without creating the file, but I simply was not able to make sense of the single-quote escaping rules for -c, and I'm not going to waste neurons on the totally broken bash grammar.
You might want to run a correct command, with the stray quote removed:
system("date >/dev/null 2>&1");
However, what is the point of running that? If (assuming your PATH is common enough) date is indeed /bin/date (see date(1)), its only effect is to produce some output, and you are discarding that by redirecting it to /dev/null. So the only visible effect of your call to system is to spend several million CPU cycles. Of course, in weird cases it could fail (e.g. because /bin/sh does not exist, /bin/date does not exist, fork has failed, etc.).
Maybe you are coding a shell. Then you should parse the command and do the fork & execve yourself.
If you want to get the output of some command like date, better use popen(3) with pclose. In the particular case of date, it is much better to use time(2), localtime(3), and strftime(3); then you don't depend on an external command, and you won't need popen at all.
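For instance, a minimal popen sketch for reading date's output (error handling trimmed):
#include <stdio.h>

int main() {
    FILE *p = popen("date", "r");      // run the command, read its stdout
    if (!p) return 1;
    char line[128];
    if (fgets(line, sizeof line, p))
        printf("date said: %s", line);
    pclose(p);                         // always pair popen with pclose
    return 0;
}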
If you really want to suppress the output of the shell started by system, you could do the fork & execve (of /bin/sh -c) yourself and redirect the standard and error outputs (to a file descriptor opened on /dev/null) with dup2. See also daemon(3) & wordexp(3).
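A minimal sketch of that approach, assuming /bin/sh and /dev/null exist (error checks omitted for brevity):
#include <sys/wait.h>
#include <unistd.h>
#include <fcntl.h>

// Do system()'s job by hand so the shell's output can be silenced before exec.
int quiet_system(const char *cmd) {
    pid_t pid = fork();
    if (pid == 0) {                      // child: silence stdout/stderr, run sh
        int devnull = open("/dev/null", O_WRONLY);
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        close(devnull);
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127);                      // exec failed
    }
    int status;
    waitpid(pid, &status, 0);            // parent: wait as system() would
    return status;
}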
Maybe embedding an interpreter in your application (e.g. lua or guile) would be more sensible (assuming you don't care about user mistakes)
I wrote a C++ program to check whether a process is running or not. The process is launched independently in the background. My program works fine when I run it in the foreground, but it does not work when I schedule it with cron.
int PID= ReadCommanOutput("pidof /root/test/testProg1"); /// also tested with pidof -m
I made a script in /etc/cron.d/myscript to schedule it as follows:
45 15 * * * root /root/ProgramMonitor/./testBkg > /root/ProgramMonitor/OutPut.txt
what could be the reason for this ?
string ReadCommanOutput(string command)
{
    string output = "";
    int its = system((command + " > /root/ProgramMonitor/macinfo.txt").c_str());
    if (its == 0)
    {
        ifstream reader1("/root/ProgramMonitor/macinfo.txt", fstream::in);
        if (!reader1.fail())
        {
            while (!reader1.eof())
            {
                string line;
                getline(reader1, line);
                if (reader1.fail()) // for last read
                    break;
                if (!line.empty())
                {
                    stringstream ss(line.c_str());
                    ss >> output;
                    cout << command << " output = [" << output << "]" << endl;
                    break;
                }
            }
            reader1.close();
            remove("/root/ProgramMonitor/macinfo.txt");
        }
        else
            cout << "/root/ProgramMonitor/macinfo.txt not found !" << endl;
    }
    else
        cout << "ERROR: code = " << its << endl;
    return output;
}
Its output comes out as "ERROR: code = 256".
Thanks in advance.
If you really wanted to pipe(2), fork(2), execve(2) and then read the output of a pidof command, you should at least use popen(3), since ReadCommandOutput is not in the POSIX API; at the very least:
pid_t thepid = 0;
FILE* fpidof = popen("pidof /root/test/testProg1", "r");
if (fpidof) {
    int p = 0;
    if (fscanf(fpidof, "%d", &p) > 0 && p > 0)
        thepid = (pid_t)p;
    pclose(fpidof);
}
BTW, you did not specify what should happen if several processes (or none) are running testProg1; you also need to check the result of pclose.
But you don't need to; actually you'll want to build the pidof command, perhaps using snprintf (and you should be scared of code injection into that command, so quote arguments appropriately). Or you could simply find your program by accessing the proc(5) file system: opendir(3) on "/proc/", then loop with readdir(3), and for every entry whose name is numerical like 1234 (i.e. starts with a digit), readlink(2) its exe entry, e.g. /proc/1234/exe. Don't forget the closedir, and test every syscall.
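A sketch of that /proc scan (untested; it matches on the resolved exe path):
#include <dirent.h>
#include <unistd.h>
#include <sys/types.h>
#include <cctype>
#include <cstdlib>
#include <string>
#include <vector>

// Return the pids of all processes whose /proc/<pid>/exe resolves to exe_path.
std::vector<pid_t> pids_of(const std::string &exe_path) {
    std::vector<pid_t> pids;
    DIR *d = opendir("/proc");
    if (!d) return pids;
    while (struct dirent *e = readdir(d)) {
        if (!isdigit((unsigned char)e->d_name[0])) continue; // pids are numeric
        std::string link = std::string("/proc/") + e->d_name + "/exe";
        char target[4096];
        ssize_t n = readlink(link.c_str(), target, sizeof target - 1);
        if (n <= 0) continue;            // e.g. permission denied
        target[n] = '\0';
        if (exe_path == target) pids.push_back((pid_t)atoi(e->d_name));
    }
    closedir(d);
    return pids;
}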
Please read Advanced Linux Programming
Notice that libraries like Poco or toolkits like Qt (which has a layer QCore without any GUI, and providing QProcess ....) could be useful to you.
As to why your pidof is failing, we can't guess (perhaps a permission issue, or perhaps there is no longer any such process). Try to run it as root in another terminal at least. Test its exit code, and display both its stdout & stderr, at least for debugging purposes.
Also, a better way (assuming that testProg1 is some kind of server application, meant to run in at most one process) might be to define different conventions. Your testProg1 might start by writing its own pid into /var/run/testProg1.pid, and your current application might then read the pid from that file and check, with kill(2) and a 0 signal number, that the process still exists.
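That convention could be checked with a helper like this (a sketch; the pid-file path follows the suggestion above):
#include <sys/types.h>
#include <signal.h>
#include <stdio.h>
#include <errno.h>

// True if the pid recorded in the pid file still belongs to a live process.
bool prog_running(const char *pidfile /* e.g. "/var/run/testProg1.pid" */) {
    FILE *f = fopen(pidfile, "r");
    if (!f) return false;
    long pid = 0;
    int ok = fscanf(f, "%ld", &pid);
    fclose(f);
    if (ok != 1 || pid <= 0) return false;
    // With signal number 0, kill() only checks existence; nothing is sent.
    return kill((pid_t)pid, 0) == 0 || errno == EPERM;
}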
BTW, you could also improve your crontab(5) entry. You could make it run some shell script which uses logger(1) and (for debugging) runs pidof with its output redirected elsewhere. You might also read the mail perhaps sent to root by cron.
Finally, I solved this problem by using the su command.
I have used
ReadCommanOutput("su -c 'pidof /root/test/testProg1' - root");
instead of
ReadCommanOutput("pidof /root/test/testProg1");
I'm running a program and redirecting cout to an outfile, like so:
./program < infile.in > outfile.o
I want to be able to read in an option ('-h' or '--help') from the command line and output a help message to the terminal. Is there a way I can do this but still have the regular cout from the rest of the program go to the outfile?
Would cout be the right object to use for such a thing?
You should use cerr to output your help message to STDERR, which is not included in your redirection to outfile.o.
Given ./program < infile.in > outfile.o:
cout << "This writes to STDOUT, and gets redirected to outfile.";
cerr << "This doesn't get redirected, and displays on screen.";
If, later on, you want to redirect both STDOUT and STDERR, you can do
./program < infile.in &> outfile.o
If you want to redirect only STDERR, but allow STDOUT to display, use
./program < infile.in 2> outfile.o
Bash redirection is more complex than most people realize, and often everything except the simplest form (">") gets overlooked.
If you're on Linux, you can use the pseudo-device /dev/tty to write to the controlling terminal (if any). This works even if stderr is redirected as well as stdout. Other operating systems may provide similar mechanisms.
E.g.
#include <iostream>
#include <ostream>
#include <fstream>

int main()
{
    std::ofstream term("/dev/tty", std::ios_base::out);
    term << "This goes to terminal\n";
    std::cout << "This goes to stdout\n";
    return 0;
}
Will work like this:
$ ./a.out
This goes to stdout
This goes to terminal
$ ./a.out >/dev/null
This goes to terminal
Note that, because the two streams are buffered independently, their relative output ordering is not necessarily preserved when they write to the same device. This can be adjusted by flushing the streams at appropriate times.
~$ cmd | tee log_file to duplicate stdout to a file and the terminal
~$ cmd 2>log_file to print stdout on the terminal and send stderr to a file
You may like to output the help message to stderr. Stderr is generally used for non-normal output and you may consider a usage paragraph to be such output.
One of the things I've done - not saying this is always appropriate - is to write modules that have something like this signature:
void write_out(ostream &o);
And then I can create fstream objects and pass them in, or pass in cout and cerr, whatever I need at the time. This can be helpful in writing logging code where sometimes you want to see on-terminal what is happening, and at other times you just want a logfile.
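A small self-contained example of that pattern ("run.log" is just an example name):
#include <iostream>
#include <fstream>

void write_out(std::ostream &o) {
    o << "something happened\n"; // the module doesn't care where this goes
}

int main() {
    write_out(std::cout);        // on-terminal (or wherever stdout points)
    std::ofstream log("run.log");
    write_out(log);              // the same code writes the logfile
}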
HTH.
You should use cerr instead of cout. Using shell redirection > only redirects stdout (cout), not stderr (cerr).