Hi, I am working on Linux and I am trying to create a GUI app to go with an executable I have made.
For some reason it unexpectedly ends. There is no error message; the Qt console window just says it unexpectedly ended with exit code 0.
Can someone please have a look at it for me? I will paste the code here.
void MainWindow::on_pushButton_clicked()
{
    QString stringURL = ui->lineEdit->text();
    ui->labelError->clear();

    if (stringURL.isEmpty() || stringURL.isNull()) {
        ui->labelError->setText("You have not entered a URL.");
        stringURL.clear();
        return;
    }

    std::string cppString = stringURL.toStdString();
    const char* cString = cppString.c_str();
    char* output;

    //These arrays will hold the file id of each end of two pipes
    int fidOut[2];
    int fidIn[2];

    //Create two uni-directional pipes
    int p1 = pipe(fidOut);    //populates the array fidOut with read/write fid
    int p2 = pipe(fidIn);     //populates the array fidIn with read/write fid
    if ((p1 == -1) || (p2 == -1)) {
        printf("Error\n");
        return;
    }

    //To make this more readable - I'm going to copy each fileid
    //into a semantically more meaningful name
    int parentRead  = fidIn[0];
    int parentWrite = fidOut[1];
    int childRead   = fidOut[0];
    int childWrite  = fidIn[1];

    //////////////////////////
    //Fork into two processes/
    //////////////////////////
    pid_t processId = fork();

    //Which process am I?
    if (processId == 0) {
        /////////////////////////////////////////////////
        //CHILD PROCESS - inherits file id's from parent/
        /////////////////////////////////////////////////
        ::close(parentRead);    //Don't need these
        ::close(parentWrite);   //

        //Map stdin and stdout to pipes
        dup2(childRead, STDIN_FILENO);
        dup2(childWrite, STDOUT_FILENO);

        //Exec - turn child into sort (and inherit file id's)
        execlp("htmlstrip", "htmlstrip", "-n", NULL);
    } else {
        /////////////////
        //PARENT PROCESS/
        /////////////////
        ::close(childRead);     //Don't need this
        ::close(childWrite);    //

        //Write data to child process
        //char strMessage[] = cString;
        write(parentWrite, cString, strlen(cString));
        ::close(parentWrite);   //this will send an EOF and prompt sort to run

        //Read data back from child
        char charIn;
        while ( read(parentRead, &charIn, 1) > 0 ) {
            output = output + (charIn);
            printf("%s", output);
        }
        ::close(parentRead);    //This will prompt the child process to quit
    }
    return;
}
EDIT: DEBUGGING RESULTS
I ran the debugger and this is the error I received:
The inferior stopped because it received a signal from the Operating System.
Signal name : SIGSEGV
Signal meaning : Segmentation fault
You haven't initialized the "output" variable. On the last lines of your code, you do this:
while ( read(parentRead, &charIn, 1) > 0 ) {
output = output + (charIn);
printf("%s", output);
}
This will do nasty things: you are adding a byte read from your child process to output, which is an uninitialized pointer containing garbage, and then printing whatever that pointer happens to point at as a string. You probably want "output" to be a std::string, so the code would look like this:
std::string output;
/* ... */
while ( read(parentRead, &charIn, 1) > 0 ) {
    output += (charIn);
}
std::cout << output;
Once you have read all the data your child process has generated, you can write it to stdout.
EDIT: Since you want to set the contents of "output" on a QPlainTextEdit, you can use QPlainTextEdit::setPlainText:
while ( read(parentRead, &charIn, 1) > 0 ) {
    output += (charIn);
}
plainTextEdit.setPlainText(output.c_str());
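Side note: setPlainText() takes a QString, so output.c_str() relies on the implicit const char* conversion; an explicit alternative (assuming the widget is accessed through ui, as elsewhere in your code) would be:
// Explicit conversion from std::string to QString (widget name is assumed)
ui->plainTextEdit->setPlainText(QString::fromStdString(output));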
Related
I'm writing a custom shell where I'm trying to add support for input/output redirection and pipes, just like a standard shell. I'm stuck at the point where I cannot do input redirection, while output redirection works perfectly. My implementation is something like this (only the relevant part); you can assume that (string) input is non-empty:
void execute() {
    ... // stuff before execution and initialization of variables
    int *fds;
    std::string content;
    std::string input = readFromAFile(in_file); // for input redirection
    for (int i = 0; i < commands.size(); i++) {
        fds = subprocess(commands[i]);
        dprintf(fds[1], "%s", input.data()); // write to write-end of pipe
        close(fds[1]);
        content += readFromFD(fds[0]); // read from read-end of pipe
        close(fds[0]);
    }
    ... // stuff after execution
}
int *subprocess(std::string &cmd) {
    std::string s;
    int *fds = new int[2];
    pipe(fds);
    pid_t pid = fork();
    if (pid == -1) {
        std::cerr << "Fork failed.";
    }
    if (pid == 0) {
        dup2(fds[1], STDOUT_FILENO);
        dup2(fds[0], STDIN_FILENO);
        close(fds[1]);
        close(fds[0]);
        system(cmd.data());
        exit(0); // child terminates
    }
    return fds;
}
My thought is that subprocess returns a pipe (fd_in, fd_out) and the parent can write to the write end and read from the read end afterwards. However, when I try an input redirection such as sort < in.txt, the program just hangs. I think there is a deadlock where each side is waiting for the other to write or read, but the parent writes to the write end, closes it, and only then reads from the read end. How should I handle this case?
When I did a bit of searching, I saw this answer, which matches my original thinking except that it mentions creating two pipes. I did not quite understand that part. Why do we need two separate pipes?
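For reference, a rough, untested sketch of the two-pipe layout that answer describes, reusing the shape of subprocess above (the child gets the read end of one pipe as stdin and the write end of the other as stdout, so the parent never reads back its own writes):
#include <unistd.h>
#include <stdlib.h>
#include <string>

// Hypothetical two-pipe variant of subprocess(): fds[0] is the parent's read end,
// fds[1] is the parent's write end.
int *subprocess2(std::string &cmd) {
    int *fds = new int[2];
    int toChild[2], fromChild[2];
    pipe(toChild);      // parent writes -> child's stdin
    pipe(fromChild);    // child's stdout -> parent reads
    if (fork() == 0) {
        dup2(toChild[0], STDIN_FILENO);     // child reads stdin from one pipe
        dup2(fromChild[1], STDOUT_FILENO);  // child writes stdout to the other
        close(toChild[0]);   close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        system(cmd.data());
        exit(0);
    }
    close(toChild[0]);      // parent keeps only its own two ends
    close(fromChild[1]);
    fds[0] = fromChild[0];  // parent reads the child's output here
    fds[1] = toChild[1];    // parent writes the child's input here
    return fds;
}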
This is my code for playing a sound file in C++ on Linux:
string str1 = "aplay ";
str1 = str1 + " out.wav" + " & ";
const char *command = str1.c_str();
system(command);
Entire code is available here: Playing sound C++ linux aplay : device or resource busy
I just want to know how to play this in a fork(), as I read that a system() call is too taxing on the CPU, which is certainly the case for me.
Please help
fork will make a copy of your process, so you can easily write:
// fork the current process: beyond this point, you will have 2 processes
int ret = fork();
if (ret == 0) {
    // in child: execute the long command
    system("aplay out.wav");
    // exit the child process
    exit(0);
}
// the child process will never get here
if (ret < 0) {
    perror("fork");
}
Also, you should know that system() does fork + exec + wait for you. Since you don't want your parent process to wait for the child, you can write:
// fork the current process: beyond this point, you will have 2 processes
int ret = fork();
if (ret == 0) {
    // in child: execute the long command
    char program[] = "/usr/bin/aplay";
    char arg1[]    = "out.wav";
    char *args[]   = { program, arg1, NULL };  // argv must be NULL-terminated for execv
    ret = execv(program, args);
    // this point will be reached only if `exec` fails,
    // so if we reach this point, we've got an error
    perror("execv");
    exit(1);
}
// the child process will never get here
if (ret < 0) {
    perror("fork");
}
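One caveat: since the parent never waits, each finished child lingers as a zombie until it is reaped. If you never need the child's exit status, a simple option is to tell the kernel to reap children automatically before you fork:
#include <signal.h>

// Ignoring SIGCHLD tells the kernel to reap finished children automatically,
// so they don't pile up as zombies. Call this once before the first fork().
signal(SIGCHLD, SIG_IGN);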
I'm practicing process management in Linux: how to use system calls and communicate between child and parent processes. I need to implement a pipe that takes the string produced by the child process (the directory listing as a string) and passes it to the parent process, which counts the number of lines in that string and thereby finds the number of files in the directory. The problem I faced is this:
error: initializer fails to determine size of ‘dirFileList’
char dirFileList[] = read(tunnel[0],buf,MAX_BUF)
My code is also below:
#define die(e) do { fprintf(stderr, "%s\n", e); exit(EXIT_FAILURE); } while (0);
#define MAX_BUF 2024

int main()
{
    const char *path = (char *)"/";               /* Root path */
    const char *childCommand = (char *)"ls |";    /* Command to be executed by the child process */
    const char *parentCommand = (char *)"wc -l";  /* Command to be executed by the parent process */
    int i = 0;                                    /* A simple loop counter :) */
    int counter = 0;                              /* Counts the number of lines in the string provided in the child process */
    int dirFileNum;                               /* Keeps the list of files in the directory */
    int tunnel[2];                                /* Defining an array of integer to let the child process store a number and parent process to pick that number */
    pid_t pID = fork();
    char buf[MAX_BUF];                            /* Fork from the main process */

    if (pipe(tunnel) == -1)                       /* Pipe from the parent to the child */
        die("pipe died.");

    if (pID == -1)                                /* Check if the fork result is valid */
    {
        die("fork died.");
    }
    else if (pID == 0)                            /* Check if we are in the child process */
    {
        dup2(tunnel[1], STDOUT_FILENO);           /* Redirect standard output */
        close(tunnel[0]);
        close(tunnel[1]);
        execl(childCommand, path);                /* Execute the child command */
        die("execl died.");
    }
    else                                          /* When we are still in the main process */
    {
        close(tunnel[1]);
        char dirFileList[] = read(tunnel[0],buf,MAX_BUF); /* Read the list of directories provided by the child process */
        for (i; i < strlen(dirFileList); i++)     /* Find the number of lines in the list provided by the child process */
            if (dirFileList[i] == '\n')
                counter++;
        printf("Root contains %d files.", counter); /* Print the result */
        wait(NULL);                               /* Wait until the job is done by the child process */
    }
    return 0;
}
If you'd shown us the whole error message, we'd see it's referring to this line:
char dirFileList[] = read(tunnel[0],buf,MAX_BUF);
You can't declare an array of indeterminate size like that, and read() doesn't return an array anyway. If you read the man page of read(2), you'll see that the return value is
On success, the number of bytes read ...
On error, -1 ...
So you want something like
int bytes_read = read(...);
if (bytes_read < 0) {
    perror("read");
    exit(1);
}
Some additional review (which you didn't ask for, but may be instructive):
Don't cast string literals to char*, especially when you're then assigning to const char* variables.
Instead of just printing a fixed message on error, you can be more informative by calling perror() after a call that has set errno - see my sample above.
die() could be implemented as a function, which would make it easier to debug and safer to use than a macro.
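Putting these points together, a minimal sketch of the parent branch (reusing tunnel, buf and MAX_BUF from your code) that checks the return value of read() and counts newlines only in the bytes actually received could look like this:
/* Read the child's output into buf and count the lines actually received. */
ssize_t bytes_read = read(tunnel[0], buf, MAX_BUF);
if (bytes_read < 0) {
    perror("read");
    exit(EXIT_FAILURE);
}
int counter = 0;
for (ssize_t i = 0; i < bytes_read; i++)
    if (buf[i] == '\n')
        counter++;
printf("Root contains %d files.\n", counter);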
Using Linux and C++, I would like a function that does the following:
string f(string s)
{
string r = system("foo < s");
return r;
}
Obviously the above doesn't work, but you get the idea. I have a string s that I would like to pass as the standard input of a child process running the application "foo", and then I would like to capture its standard output into string r and return it.
What combination of Linux syscalls or POSIX functions should I use?
I'm using Linux 3.0 and do not need the solution to work with older systems.
The code provided by eerpini does not work as written. Note, for example, that the pipe ends that are closed in the parent are used afterwards. Look at
close(wpipefd[1]);
and the subsequent write to that closed descriptor. This is just a transposition, but it shows this code has never been used. Below is a version that I have tested. Unfortunately, I changed the code style, so this was not accepted as an edit of eerpini's code.
The only structural change is that I redirect the I/O only in the child (note that the dup2 calls are only in the child path). This is very important, because otherwise the parent's I/O gets messed up. Thanks to eerpini for the initial answer, which I used in developing this one.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#define PIPE_READ 0
#define PIPE_WRITE 1
int createChild(const char* szCommand, char* const aArguments[], char* const aEnvironment[], const char* szMessage) {
    int aStdinPipe[2];
    int aStdoutPipe[2];
    int nChild;
    char nChar;
    int nResult;

    if (pipe(aStdinPipe) < 0) {
        perror("allocating pipe for child input redirect");
        return -1;
    }
    if (pipe(aStdoutPipe) < 0) {
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        perror("allocating pipe for child output redirect");
        return -1;
    }

    nChild = fork();
    if (0 == nChild) {
        // child continues here

        // redirect stdin
        if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1) {
            exit(errno);
        }

        // redirect stdout
        if (dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1) {
            exit(errno);
        }

        // redirect stderr
        if (dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1) {
            exit(errno);
        }

        // all these are for use by parent only
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // run child process image
        // replace this with any exec* function you find easier to use ("man exec")
        nResult = execve(szCommand, aArguments, aEnvironment);

        // if we get here at all, an error occurred, but we are in the child
        // process, so just exit
        exit(nResult);
    } else if (nChild > 0) {
        // parent continues here

        // close unused file descriptors, these are for child only
        close(aStdinPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);

        // Include error check here
        if (NULL != szMessage) {
            write(aStdinPipe[PIPE_WRITE], szMessage, strlen(szMessage));
        }

        // Just a char by char read here, you can change it accordingly
        while (read(aStdoutPipe[PIPE_READ], &nChar, 1) == 1) {
            write(STDOUT_FILENO, &nChar, 1);
        }

        // done with these in this example program, you would normally keep these
        // open of course as long as you want to talk to the child
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
    } else {
        // failed to create child
        close(aStdinPipe[PIPE_READ]);
        close(aStdinPipe[PIPE_WRITE]);
        close(aStdoutPipe[PIPE_READ]);
        close(aStdoutPipe[PIPE_WRITE]);
    }
    return nChild;
}
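For completeness, a hypothetical usage of createChild might look like this (it assumes head is at /usr/bin/head on your system). It feeds one line to head -n 1, which echoes the line and exits. Note that a child that reads until EOF (such as cat, sort, or wc) would block with createChild as written above, because the parent closes the stdin write end only after the read loop finishes.
int main(void) {
    // argv and envp for the child; both arrays must be NULL-terminated for execve.
    char* const aArguments[]   = { (char*)"head", (char*)"-n", (char*)"1", NULL };
    char* const aEnvironment[] = { NULL };

    // Feed one line to the child's stdin; createChild echoes the child's output to our stdout.
    if (createChild("/usr/bin/head", aArguments, aEnvironment, "hello from the parent\n") < 0) {
        return 1;
    }
    return 0;
}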
Since you want bidirectional access to the process, you would have to do what popen does behind the scenes, explicitly with pipes. I am not sure whether any of this changes in C++, but here is a pure C example:
void piped(char *str){
    int wpipefd[2];
    int rpipefd[2];
    int defout, defin;
    defout = dup(stdout);
    defin = dup(stdin);
    if(pipe(wpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(pipe(rpipefd) < 0){
        perror("Pipe");
        exit(EXIT_FAILURE);
    }
    if(dup2(wpipefd[0], 0) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    if(dup2(rpipefd[1], 1) == -1){
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    if(fork() == 0){
        close(defout);
        close(defin);
        close(wpipefd[0]);
        close(wpipefd[1]);
        close(rpipefd[0]);
        close(rpipefd[1]);
        //Call exec here. Use the exec* family of functions according to your need
    }
    else{
        if(dup2(defin, 0) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        if(dup2(defout, 1) == -1){
            perror("dup2");
            exit(EXIT_FAILURE);
        }
        close(defout);
        close(defin);
        close(wpipefd[1]);
        close(rpipefd[0]);
        //Include error check here
        write(wpipefd[1], str, strlen(str));
        //Just a char by char read here, you can change it accordingly
        while(read(rpipefd[0], &ch, 1) != -1){
            write(stdout, &ch, 1);
        }
    }
}
Effectively you do this:
Create pipes and redirect stdout and stdin to the ends of the two pipes (note that on Linux, pipe() creates unidirectional pipes, so you need two pipes for your purpose).
Exec will now start a new process which has the ends of the pipes for stdin and stdout.
Close the unused descriptors, write the string to the pipe and then start reading whatever the process might dump to the other pipe.
dup() is used to create a duplicate entry in the file descriptor table, while dup2() changes which open file a given descriptor refers to.
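To illustrate the difference, here is a small standalone example: dup() hands back a new descriptor number chosen by the kernel that refers to the same open file, while dup2() forces a specific descriptor number (here, 1 for stdout) to refer to it, which is what makes it useful for saving and restoring stdout:
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void) {
    // Save the current stdout under a new descriptor number chosen by the kernel.
    int saved_stdout = dup(STDOUT_FILENO);

    // Point descriptor 1 (stdout) at a file instead.
    int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    dup2(fd, STDOUT_FILENO);
    close(fd);

    printf("this goes to out.txt\n");
    fflush(stdout);

    // Restore the original stdout and release the saved copy.
    dup2(saved_stdout, STDOUT_FILENO);
    close(saved_stdout);

    printf("this goes back to the terminal\n");
    return 0;
}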
Note: As mentioned by Ammo# in his solution, what I provided above is more or less a template; it will not run if you just try to execute it, since there is clearly an exec* call missing, so the child will terminate almost immediately after the fork().
Ammo's code has some error handling bugs. The child process is returning after dup failure instead of exiting. Perhaps the child dups can be replaced with:
if (dup2(aStdinPipe[PIPE_READ], STDIN_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDOUT_FILENO) == -1 ||
    dup2(aStdoutPipe[PIPE_WRITE], STDERR_FILENO) == -1)
{
    exit(errno);
}
// all these are for use by parent only
close(aStdinPipe[PIPE_READ]);
close(aStdinPipe[PIPE_WRITE]);
close(aStdoutPipe[PIPE_READ]);
close(aStdoutPipe[PIPE_WRITE]);
I'm trying to open a file with the parent and then send it to the child. I want the child to look for a specific word and send the matching line from the text file back to the parent.
With my code right now, I can send the text file to the child, but I can't check the file and send results back to the parent.
int fd[2];
pid_t cpid;
pipe(fd);

if ((cpid = fork()) == -1)
{
    cout << "ERROR" << endl;
    exit(1);
}

// child process
if (cpid == 0)
{
    // don't need the write-side of this
    close(fd[WRITE_FD]);

    std::string s;
    char ch;
    while (read(fd[READ_FD], &ch, 1) > 0)
    {
        if (ch != 0)
            s.push_back(ch);
        else
        {
            //std::cout << s << " "; //'\n'; //print the txt
            while (getline(s, ch, '.'))
            {
                printf("%s\n", toSend.c_str());
            }
            s.clear();
        }
    }
    // finished with read-side
    close(fd[READ_FD]);
}
// parent process
else
{
    // don't need the read-side of this
    close(fd[READ_FD]);

    fstream fileWords ("words.txt");
    string toSend;
    while (fileWords >> toSend)
    {
        // send word including terminator
        write(fd[WRITE_FD], toSend.c_str(), toSend.length()+1);
    }
    // finished with write-side
    close(fd[WRITE_FD]);
    wait(NULL);
}
return EXIT_SUCCESS;
Pipes are intended for unidirectional communication. If you try to use a single pipe for bidirectional communication, it's almost certain that the programs will end up reading their own output back into themselves (or showing similar undesired behavior) rather than successfully communicating with each other. There are two similar approaches that would work for bidirectional communication:
Create two pipes, and give each process the read end of one and the write end of the other. Then there's no ambiguity about where data will end up.
Use a socket instead of a pipe. The socketpair function makes this easy: just call socketpair(AF_UNIX, SOCK_STREAM, 0, fd) in place of pipe(fd). Sockets work just like pipes, but are bidirectional (data written on either FD can be read from the other); a sketch follows below.
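Here is a minimal sketch of the socketpair approach (the word-searching logic from your question is omitted): each process keeps one end, closes the other, and can both write and read on its own end.
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>
#include <string>

int main() {
    int fd[2];
    // Bidirectional: fd[0] and fd[1] are the two ends of one full-duplex channel.
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) == -1) {
        perror("socketpair");
        return 1;
    }

    if (fork() == 0) {
        // child: use fd[1], close the parent's end
        close(fd[0]);
        char buf[256];
        ssize_t n = read(fd[1], buf, sizeof(buf) - 1);   // read request from parent
        if (n > 0) {
            buf[n] = '\0';
            std::string reply = std::string("child saw: ") + buf;
            write(fd[1], reply.c_str(), reply.size());   // reply on the same descriptor
        }
        close(fd[1]);
        return 0;
    }

    // parent: use fd[0], close the child's end
    close(fd[1]);
    const char *msg = "hello";
    write(fd[0], msg, strlen(msg));                      // send to child
    char buf[256];
    ssize_t n = read(fd[0], buf, sizeof(buf) - 1);       // read child's reply
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd[0]);
    wait(NULL);
    return 0;
}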