Building a shell - I/O trouble - C++

I am working on a shell for a systems programming class and have been having some trouble with file redirection. I just got output redirection working, e.g. "ls > a", but when I type a command like "cat < a" into my shell it deletes everything in the file. I suspect the problem stems from the second if statement, "fdin = open(_inputFile,0777)".
If that is the case, a link to a recommended tutorial or other examples would be much appreciated.
On a side note, I included the entire function, but I have not tested the part that creates the pipe yet. I don't believe it works properly either, though that may be due to a mistake in another file.
void Command::execute() {
    if (_numberOfSimpleCommands == 0) {
        prompt();
        return;
    }

    //save input/output
    int defaultin = dup(0);
    int defaultout = dup(1);

    //initial input
    int fdin;
    if (_inputFile) {
        fdin = open(_inputFile, 0777);
    } else {
        //use default input
        fdin = dup(defaultin);
    }

    //execution
    int pid;
    int fdout;
    for (int i = 0; i < _numberOfSimpleCommands; i++) {
        dup2(fdin, 0);
        close(fdin);

        //set output
        if (i == _numberOfSimpleCommands - 1) {
            if (_outFile) {
                fdout = creat(_outFile, 0666);
            } else {
                fdout = dup(defaultout);
            }
        } else {
            int fdpipe[2];
            pipe(fdpipe);
            fdout = fdpipe[0];
            fdin = fdpipe[1];
        }

        dup2(fdout, 1);
        close(fdout);

        //create child
        pid = fork();
        if (pid == 0) {
            execvp(_simpleCommands[0]->_arguments[0], _simpleCommands[0]->_arguments);
            perror("-myshell");
            _exit(1);
        }
    }

    //restore IO defaults
    dup2(defaultin, 0);
    dup2(defaultout, 1);
    close(defaultin);
    close(defaultout);

    if (!_background) {
        waitpid(pid, 0, 0);
    }
}

Your call open(_inputFile, 0777) is incorrect. The second argument to open is supposed to be a bitwise-OR'd combination of flags that specify the access mode and file creation behavior, among other things (O_RDONLY, O_WRONLY, O_CREAT, etc.). Since you're passing 0777, that probably ends up containing flags such as O_CREAT and O_TRUNC, which is what erases _inputFile. You probably want open(_inputFile, O_RDONLY).
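For example, a minimal sketch of the corrected section, assuming the rest of execute() stays as posted and that <fcntl.h> is included for O_RDONLY:

    int fdin;
    if (_inputFile) {
        // open the redirected input read-only; never create or truncate it
        fdin = open(_inputFile, O_RDONLY);
        if (fdin < 0) {
            perror("-myshell: input file");
            return;
        }
    } else {
        // use default input
        fdin = dup(defaultin);
    }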

Related

Do input redirection and capture command output (Custom shell-like program)

I'm writing a custom shell where I'm trying to add support for input/output redirection and pipes, just like a standard shell. I'm stuck at the point where I cannot get input redirection to work, but output redirection works perfectly. My implementation is something like this (only the relevant part); you can assume that (string) input is non-empty.
void execute() {
    ... // stuff before execution and initialization of variables
    int *fds;
    std::string content;
    std::string input = readFromAFile(in_file); // for input redirection
    for (int i = 0; i < commands.size(); i++) {
        fds = subprocess(commands[i]);
        dprintf(fds[1], "%s", input.data()); // write to write-end of pipe
        close(fds[1]);
        content += readFromFD(fds[0]); // read from read-end of pipe
        close(fds[0]);
    }
    ... // stuff after execution
}
int *subprocess(std::string &cmd) {
    std::string s;
    int *fds = new int[2];
    pipe(fds);
    pid_t pid = fork();
    if (pid == -1) {
        std::cerr << "Fork failed.";
    }
    if (pid == 0) {
        dup2(fds[1], STDOUT_FILENO);
        dup2(fds[0], STDIN_FILENO);
        close(fds[1]);
        close(fds[0]);
        system(cmd.data());
        exit(0); // child terminates
    }
    return fds;
}
My idea is that subprocess returns a pipe (fd_in, fd_out), and the parent can write to the write end and read from the read end afterwards. However, when I try an input redirection such as sort < in.txt, the program just hangs. I think there is a deadlock because each side is waiting for the other to write or read; however, the parent closes the write end after writing and only then reads from the read end. How should I handle this case?
When I did a bit of searching, I saw this answer, which is similar to my original thinking except that it mentions creating two pipes. I did not quite understand that part. Why do we need two separate pipes?
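For reference, a minimal sketch of the two-pipe arrangement the linked answer describes: one pipe carries the parent's data to the child's stdin, the other carries the child's stdout back to the parent, so neither direction shares a pipe with the other (the variable names here are illustrative, not from the question):

    int toChild[2], fromChild[2];
    pipe(toChild);                          // parent writes, child reads
    pipe(fromChild);                        // child writes, parent reads
    if (fork() == 0) {
        dup2(toChild[0], STDIN_FILENO);     // child's stdin comes from the parent
        dup2(fromChild[1], STDOUT_FILENO);  // child's stdout goes back to the parent
        close(toChild[0]);   close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        execlp("sort", "sort", (char *)NULL);
        _exit(1);
    }
    close(toChild[0]);                      // parent keeps only toChild[1] and fromChild[0]
    close(fromChild[1]);
    write(toChild[1], input.data(), input.size());
    close(toChild[1]);                      // EOF for the child's stdin
    // ... read the child's output from fromChild[0] until read() returns 0 ...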

Copy/Paste not working on a Linux based shell

After searching a lot on the internet, I am here in search of a solution for the problem below.
Thanks in advance for any help!
I have a Linux-based shell (not bash, but similar) where I am using third-party code, tinyrl (from clish/klish), to read input from the shell.
Below is the function from tinyrl that reads the user input and returns it, which is then used to display it on the shell.
tinyrl_vt100_getchar(const tinyrl_vt100_t *this)
{
    unsigned char c = '\0';
    int istream_fd = -1;
    FILE *sfd = this->istream;

    if (!sfd) return VT100_ERR;
    istream_fd = fileno(sfd);

    /* Just wait for the input if no timeout */
    if (this->timeout <= 0) {
        if ((c = getc(sfd)) == EOF) {
            if (feof(sfd))
                return VT100_EOF;
            else
                return VT100_ERR;
        }
        return c;
    }

    /* Set timeout for the select() */
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(istream_fd, &rfds);

    struct timeval tv;
    tv.tv_sec = this->timeout;
    tv.tv_usec = 0;

    int retval = -1;
    while (((retval = select(istream_fd + 1, &rfds, NULL, NULL, &tv)) < 0) &&
           (EAGAIN == errno));

    /* Error or timeout */
    if (retval < 0)
        return VT100_ERR;
    if (!retval)
        return VT100_TIMEOUT;

    if ((c = getc(sfd)) == EOF) {
        if (feof(sfd))
            return VT100_EOF;
        else
            return VT100_ERR;
    }
    return c;
}
Problem description
When input is given by copy-pasting content, only one character appears, and the remaining content appears on the shell only after pressing any key (an arrow, space, backspace, letter, etc.).
Example:
If we have copied “abc def xyz”, then when it is pasted on the shell only “a” is displayed, and the remaining content is displayed once any key is pressed.
The expectation is that no key press should be required to complete the paste operation.
Solution I have already tried:
Using read() in place of getc(). This actually resolves the copy-paste issue, but it causes another problem: escape-sequence values like "[C", "[D", etc. get printed on the shell when the cursor is moved rapidly through the pasted content.
So this solution is not useful.
Thanks!

Qt GUI app unexpectedly ending

Hi, I am working on Linux and trying to create a GUI app to go with an executable I have made.
For some reason it unexpectedly ends. There is no error message; the Qt console window just says it unexpectedly ended with exit code 0.
Can someone please have a look at it for me?
I will also paste the code here.
void MainWindow::on_pushButton_clicked()
{
    QString stringURL = ui->lineEdit->text();
    ui->labelError->clear();

    if (stringURL.isEmpty() || stringURL.isNull()) {
        ui->labelError->setText("You have not entered a URL.");
        stringURL.clear();
        return;
    }

    std::string cppString = stringURL.toStdString();
    const char* cString = cppString.c_str();
    char* output;

    //These arrays will hold the file id of each end of two pipes
    int fidOut[2];
    int fidIn[2];

    //Create two uni-directional pipes
    int p1 = pipe(fidOut); //populates the array fidOut with read/write fid
    int p2 = pipe(fidIn);  //populates the array fidIn with read/write fid
    if ((p1 == -1) || (p2 == -1)) {
        printf("Error\n");
        return;
    }

    //To make this more readable - I'm going to copy each fileid
    //into a semantically more meaningful name
    int parentRead  = fidIn[0];
    int parentWrite = fidOut[1];
    int childRead   = fidOut[0];
    int childWrite  = fidIn[1];

    //////////////////////////
    //Fork into two processes/
    //////////////////////////
    pid_t processId = fork();

    //Which process am I?
    if (processId == 0) {
        /////////////////////////////////////////////////
        //CHILD PROCESS - inherits file id's from parent/
        /////////////////////////////////////////////////
        ::close(parentRead);  //Don't need these
        ::close(parentWrite); //

        //Map stdin and stdout to pipes
        dup2(childRead, STDIN_FILENO);
        dup2(childWrite, STDOUT_FILENO);

        //Exec - turn child into sort (and inherit file id's)
        execlp("htmlstrip", "htmlstrip", "-n", NULL);
    } else {
        /////////////////
        //PARENT PROCESS/
        /////////////////
        ::close(childRead);  //Don't need this
        ::close(childWrite); //

        //Write data to child process
        //char strMessage[] = cString;
        write(parentWrite, cString, strlen(cString));
        ::close(parentWrite); //this will send an EOF and prompt sort to run

        //Read data back from child
        char charIn;
        while ( read(parentRead, &charIn, 1) > 0 ) {
            output = output + (charIn);
            printf("%s", output);
        }
        ::close(parentRead); //This will prompt the child process to quit
    }
    return;
}
EDIT: DEBUGGING RESULTS
I ran the debugger and this is the error I received:
The inferior stopped because it received a signal from the Operating System.
Signal name : SIGSEGV
Signal meaning : Segmentation fault
You haven't initialized the "output" variable. On the last lines of your code, you do this:
while ( read(parentRead, &charIn, 1) > 0 ) {
    output = output + (charIn);
    printf("%s", output);
}
This will do nasty things: you are adding the byte read from your child process to "output", which is an uninitialized pointer containing garbage, and then printing whatever that pointer happens to point at as a string. You probably want "output" to be a std::string; that way your code makes sense:
std::string output;

/* ... */

while ( read(parentRead, &charIn, 1) > 0 ) {
    output += (charIn);
}
std::cout << output;
Once you have read all the data your child process has generated, you can write it to stdout.
EDIT: since you want to set the contents of "output" to a QPlainTextEdit, you can use QPlainTextEdit::setPlainText:
while ( read(parentRead, &charIn, 1) > 0 ) {
    output += (charIn);
}
plainTextEdit.setPlainText(output.c_str());

Linux: Executing child process with piped stdin/stdout

Using Linux and C++, I would like a function that does the following:
string f(string s)
{
    string r = system("foo < s");
    return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process execution of application "foo", and then I would like to record its standard output to string r and then return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. It is just a transposition, but it shows this code has never been run. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I redirect the I/O only in the child (note that the dup2 calls are only in the child path). This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   // for strlen()
#include <unistd.h>
#include <errno.h>

#define PIPE_READ 0
#define PIPE_WRITE 1

int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
    int aStdinPipe[2];
    int aStdoutPipe[2];
    int nChild;
    char nChar;
    int nResult;

    if (pipe(aStdinPipe) < 0) {
        perror("allocating pipe for child input redirect");
        return -1;
    }
    if (pipe(aStdoutPipe) < 0) {
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        perror("allocating pipe for child output redirect");
        return -1;
    }

    nChild = fork();
    if (0 == nChild) {
        // child continues here

        // redirect stdin
        if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
            exit(errno);
        }
        // redirect stdout
        if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
            exit(errno);
        }
        // redirect stderr
        if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
            exit(errno);
        }

        // all these are for use by parent only
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // run child process image
        // replace this with any exec* function you find easier to use ("man exec")
        nResult = execve(szCommand, aArguments, aEnvironment);

        // if we get here at all, an error occurred, but we are in the child
        // process, so just exit
        exit(nResult);
    } else if (nChild > 0) {
        // parent continues here

        // close unused file descriptors, these are for child only
        close(aStdinPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // Include error check here
        if (NULL != szMessage) {
            write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
        }

        // Just a char by char read here, you can change it accordingly
        while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
            write(STDOUT_FILENO, &nChar, 1);
        }

        // done with these in this example program, you would normally keep these
        // open of course as long as you want to talk to the child
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
    } else {
        // failed to create child
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);
    }
    return nChild;
}
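A hypothetical call, just to illustrate the argument shapes (the command path, argument vector, and environment below are made up; note that createChild() closes the child's stdin pipe only after the read loop, so this example runs a command that does not wait for end-of-file on its stdin):

    char* const argv[] = { (char*)"ls", (char*)"-l", NULL };
    char* const envp[] = { NULL };
    // runs /bin/ls -l with nothing written to its stdin and echoes its output to our stdout
    int child = createChild("/bin/ls", argv, envp, NULL);
    if (child < 0) {
        fprintf(stderr, "failed to start child\n");
    }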
Since you want bidirectional access to the process, you would have to do what popen does behind the scenes explicitly with pipes. I am not sure if any of this will change in C++, but here is a pure C example :
void piped(char *str){
    int wpipefd[2];
    int rpipefd[2];
    int defout, defin;
    defout = dup(stdout);
    defin = dup(stdin);

    if(pipe(wpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(pipe(rpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(dup2(wpipefd[0], 0) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    if(dup2(rpipefd[1], 1) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }

    if(fork() == 0){
        close(defout);
        close(defin);
        close(wpipefd[0]);
        close(wpipefd[1]);
        close(rpipefd[0]);
        close(rpipefd[1]);
        //Call exec here. Use the exec* family of functions according to your need
    }
    else{
        if(dup2(defin, 0) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        if(dup2(defout, 1) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        close(defout);
        close(defin);
        close(wpipefd[1]);
        close(rpipefd[0]);

        //Include error check here
        write(wpipefd[1], str, strlen(str));

        //Just a char by char read here, you can change it accordingly
        while(read(rpipefd[0], &ch, 1) != -1){
            write(stdout, &ch, 1);
        }
    }
}
Effectively you do this:
Create pipes and redirect the stdout and stdin to the ends of the two pipes (note that in Linux, pipe() creates unidirectional pipes, so you need two pipes for your purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe and then start reading whatever the process might dump to the other pipe.
dup() is used to create a duplicate entry in the file descriptor table, while dup2() changes what an existing descriptor points to.
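A tiny sketch of that distinction (the target descriptor number 10 is arbitrary):

    int copy = dup(1);     // duplicate of stdout, placed on the lowest free descriptor number
    dup2(copy, 10);        // make descriptor 10 refer to the same open file, closing 10 first if it was in use
    write(10, "hi\n", 3);  // ends up wherever stdout currently points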
Note: As mentioned by Ammo in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since the exec* call is missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error-handling bugs: the child process is returning after a dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1
   )
{
    exit(errno);
}

// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);

Capturing stdout from a system() command optimally [duplicate]

This question already has answers here:
How do I execute a command and get the output of the command within C++ using POSIX?
I'm trying to start an external application through system() - for example, system("ls"). I would like to capture its output as it happens so I can send it to another function for further processing. What's the best way to do that in C/C++?
From the popen manual:
#include <stdio.h>
FILE *popen(const char *command, const char *type);
int pclose(FILE *stream);
Try the popen() function. It executes a command, like system(), but sends the output to a pipe that you can read as a FILE stream. A pointer to the stream is returned.
FILE *lsofFile_p = popen("lsof", "r");
if (!lsofFile_p)
{
    return -1;
}

char buffer[1024];
char *line_p = fgets(buffer, sizeof(buffer), lsofFile_p); // reads just the first line of output
pclose(lsofFile_p);
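Since the question asks to hand the output to another function as it arrives, here is a hedged sketch that loops over every line instead of reading only the first one; process_line() is a made-up placeholder for whatever further processing is needed:

FILE *out_p = popen("ls", "r");
if (!out_p)
{
    return -1;
}

char buffer[1024];
// fgets() returns NULL at end of output or on error, so this consumes everything the command prints
while (fgets(buffer, sizeof(buffer), out_p) != NULL)
{
    process_line(buffer);   // hypothetical handler for each line of output
}
pclose(out_p);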
EDIT: misread question as wanting to pass output to another program, not another function. popen() is almost certainly what you want.
system() gives you full access to the shell. If you want to continue using it, you can redirect its output to a temporary file, e.g. system("ls > tempfile.txt"), but choosing a secure temporary file is a pain. Or you can even redirect it through another program: system("ls | otherprogram");
Some may recommend the popen() command. This is what you want if you can process the output yourself:
FILE *output = popen("ls", "r");
which will give you a FILE pointer you can read from with the command's output on it.
You can also use the pipe() call to create a connection in combination with fork() to create new processes, dup2() to change the standard input and output of them, exec() to run the new programs, and wait() in the main program to wait for them. This is just setting up the pipeline much like the shell would. See the pipe() man page for details and an example.
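As a rough sketch of that pipe()/fork()/dup2()/exec()/wait() sequence (the command and buffer size are arbitrary, and error checking plus the usual <unistd.h>/<sys/wait.h> includes are omitted):

    int fds[2];
    pipe(fds);                          // fds[0] = read end, fds[1] = write end
    pid_t pid = fork();
    if (pid == 0) {
        dup2(fds[1], STDOUT_FILENO);    // child's stdout goes into the pipe
        close(fds[0]);
        close(fds[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);                     // only reached if exec fails
    }
    close(fds[1]);                      // parent reads the child's output
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof(buf))) > 0) {
        /* hand buf[0..n) to another function here */
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);              // reap the child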
Functions like popen() don't redirect stderr; I wrote popen3() for that purpose.
Here's a bowdlerised version of my popen3():
int popen3(int fd[3], const char **const cmd) {
    int i, e;
    int p[3][2];
    pid_t pid;

    // set all the FDs to invalid
    for(i=0; i<3; i++)
        p[i][0] = p[i][1] = -1;

    // create the pipes
    for(int i=0; i<3; i++)
        if(pipe(p[i]))
            goto error;

    // and fork
    pid = fork();
    if(-1 == pid)
        goto error;

    // in the parent?
    if(pid) {
        // parent
        fd[STDIN_FILENO] = p[STDIN_FILENO][1];
        close(p[STDIN_FILENO][0]);
        fd[STDOUT_FILENO] = p[STDOUT_FILENO][0];
        close(p[STDOUT_FILENO][1]);
        fd[STDERR_FILENO] = p[STDERR_FILENO][0];
        close(p[STDERR_FILENO][1]);
        // success
        return 0;
    } else {
        // child
        dup2(p[STDIN_FILENO][0], STDIN_FILENO);
        close(p[STDIN_FILENO][1]);
        dup2(p[STDOUT_FILENO][1], STDOUT_FILENO);
        close(p[STDOUT_FILENO][0]);
        dup2(p[STDERR_FILENO][1], STDERR_FILENO);
        close(p[STDERR_FILENO][0]);
        // here we try and run it
        execv(*cmd, const_cast<char*const*>(cmd));
        // if we are here, then we failed to launch our program
        perror("Could not launch");
        fprintf(stderr, " \"%s\"\n", *cmd);
        _exit(EXIT_FAILURE);
    }

error:  // target for the goto error statements above
    // preserve original error
    e = errno;
    for(i=0; i<3; i++) {
        close(p[i][0]);
        close(p[i][1]);
    }
    errno = e;
    return -1;
}
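A hypothetical usage sketch for the popen3() above (the command and the read loop are made up for illustration):

    const char *cmd[] = { "/bin/ls", "-l", NULL };
    int fd[3];
    if (popen3(fd, cmd) == 0) {
        close(fd[STDIN_FILENO]);             // nothing to feed to ls
        char buf[4096];
        ssize_t n;
        // fd[STDOUT_FILENO] is the read end of the child's stdout pipe
        while ((n = read(fd[STDOUT_FILENO], buf, sizeof(buf))) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);
        }
        close(fd[STDOUT_FILENO]);
        close(fd[STDERR_FILENO]);
    }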
The most efficient way is to use the stdout file descriptor directly, bypassing the FILE stream:
#include <fcntl.h>    // for O_WRONLY
#include <unistd.h>

pid_t popen2(const char *command, int *infp, int *outfp)
{
    int p_stdin[2], p_stdout[2];
    pid_t pid;

    if (pipe(p_stdin) == -1)
        return -1;

    if (pipe(p_stdout) == -1) {
        close(p_stdin[0]);
        close(p_stdin[1]);
        return -1;
    }

    pid = fork();
    if (pid < 0) {
        close(p_stdin[0]);
        close(p_stdin[1]);
        close(p_stdout[0]);
        close(p_stdout[1]);
        return pid;
    } else if (pid == 0) {
        close(p_stdin[1]);
        dup2(p_stdin[0], 0);
        close(p_stdout[0]);
        dup2(p_stdout[1], 1);
        dup2(::open("/dev/null", O_WRONLY), 2);
        /// Close all other descriptors for the safety sake.
        for (int i = 3; i < 4096; ++i) {
            ::close(i);
        }
        setsid();
        execl("/bin/sh", "sh", "-c", command, NULL);
        _exit(1);
    }

    close(p_stdin[0]);
    close(p_stdout[1]);

    if (infp == NULL) {
        close(p_stdin[1]);
    } else {
        *infp = p_stdin[1];
    }
    if (outfp == NULL) {
        close(p_stdout[0]);
    } else {
        *outfp = p_stdout[0];
    }
    return pid;
}
To read output from the child, use popen2() like this:
int child_stdout = -1;
pid_t child_pid = popen2("ls", 0, &child_stdout);
if (child_pid <= 0) {   // popen2() returns -1 on failure, the child's pid on success
    handle_error();
}
char buff[128];
ssize_t bytes_read = read(child_stdout, buff, sizeof(buff));
To both write and read:
int child_stdin = -1;
int child_stdout = -1;
pid_t child_pid = popen2("grep 123", &child_stdin, &child_stdout);
if (child_pid <= 0) {
    handle_error();
}
const char text[] = "1\n2\n123\n3";
ssize_t bytes_written = write(child_stdin, text, sizeof(text) - 1);
close(child_stdin);   // send EOF so grep can finish and flush its output
char buff[128];
ssize_t bytes_read = read(child_stdout, buff, sizeof(buff));
The functions popen() and pclose() could be what you're looking for.
Take a look at the glibc manual for an example.
In Windows, instead of using system(), use CreateProcess, redirect the output to a pipe and connect to the pipe.
I'm guessing this is also possible in some POSIX way?
Actually, I just checked, and:
popen is problematic, because the process is forked. So if you need to wait for the shell command to execute, then you're in danger of missing it. In my case, my program closed even before the pipe got to do its work.
I ended up using a system() call with the tar command on Linux. The return value from system() was the result of tar.
So: if you need the return value, then not only is there no need to use popen, it probably won't do what you want.
This page: capture_the_output_of_a_child_process_in_c describes the limitations of the popen approach vs. the fork/exec/dup2/STDOUT_FILENO approach.
I'm having problems capturing tshark output with popen.
And I'm guessing that this limitation might be my problem:
It returns a stdio stream as opposed to a raw file descriptor, which
is unsuitable for handling the output asynchronously.
I'll come back to this answer if I have a solution with the other approach.
I'm not entirely certain that it's possible in standard C, as two different processes don't typically share memory space. The simplest way I can think of would be to have the second program redirect its output to a text file (programname > textfile.txt) and then read that text file back in for processing. However, that may not be the best way.
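A minimal sketch of that temporary-file approach (the command and file name are placeholders):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // run the program and capture its output in a file
    if (system("programname > textfile.txt") != 0) {
        return 1;
    }
    // read the captured output back in for processing
    FILE *fp = fopen("textfile.txt", "r");
    if (!fp) {
        return 1;
    }
    char line[1024];
    while (fgets(line, sizeof(line), fp) != NULL) {
        printf("captured: %s", line);  // stand-in for the real processing
    }
    fclose(fp);
    return 0;
}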