In gdb, we normally use r > log.txt to redirect stdout. This also runs the program.
Is there a way to redirect stdout (only, not stdin) without actually starting the program?
My intended workflow is:
redirect stdout to log.txt
call func1(a, b, c) # I want the stdout output going to log.txt, without any gdb info, just stdout
Note that the tty command won't work in this case (I want to redirect stdout only).
Your intended workflow will not work: you can't call func1(...) without first running the program.
It appears that what you want is (roughly):
start the program (runs to main).
call func1(...) with its output redirected to a file.
This answer shows how to redirect the output wherever you want at an arbitrary point in program execution.
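Concretely, that could look like the following gdb session (a sketch: the fd number will differ, and it assumes libc's creat, dup2 and fflush symbols are callable in the inferior):

```
(gdb) start                          # run to main and stop there
(gdb) call (int) creat("log.txt", 0644)
$1 = 5                               # the new fd; the number may differ
(gdb) call (int) dup2($1, 1)         # point fd 1 (stdout) at log.txt
(gdb) call (void) func1(a, b, c)     # stdout output now lands in log.txt
(gdb) call (int) fflush(0)           # flush the inferior's stdio buffers
```

If you want to restore the original stdout afterwards, dup(1) it to a spare fd before the dup2 and dup2 it back when done.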
I am trying to debug an application in gdb running on Ubuntu 18.04.
In some parts of the code, I can set breakpoints and successfully debug problems.
But in other parts, triggering a breakpoint causes the process to exit.
Is there some way I can get the debug statements to appear in the gdb console?
I currently use gdb to attach to the process and then debug from that point.
I have code (std::cout) that sends statements to standard out but they are not showing up in the gdb console.
Nor should you expect them to.
When an application is started, its std::cout messages are going to file descriptor 1 (stdout). This can be the terminal window in which the app was started, or a file if the output was redirected. It could also be a pipe or /dev/null.
GDB does not "steal" that output (if it did, it would be harder to debug a program that is the source of input for another program going through a pipe).
Your first task should be to determine where the output is going. On Linux, this is usually as easy as ls -l /proc/$pid/fd/1 (replace $pid with the actual process id of the process you are debugging).
An additional complication is that stdout can be fully buffered (when it goes to a file, pipe, or socket), and may not have been flushed by the time your breakpoint is hit.
P.S. In theory, you can "steal" the output from wherever it's going to your current terminal by running the following GDB commands:
(gdb) print open("/dev/tty", 2, 0) # open new fd in the inferior process
# going to current terminal.
# This will print something, e.g. 5
# Now make stdout go to that newly-opened fd
(gdb) call dup2($whatever_last_command_printed, 1)
but I wouldn't recommend this, as it can interfere with the program in unexpected ways.
My running process handles stdin using getchar(). It works fine when I run it in the foreground. However, if I run it in the background and do echo "a" >> /proc/pid/fd/0, it doesn't work. On my system, /proc/pid/fd/0 is the same as /dev/pts/0, so how do I send input to the process's stdin so that getchar() can see it? I'm working in C++ over ssh.
When you run programs in the background, they still have /dev/pts/XX as their controlling terminal (and stdin), but they are no longer eligible to read from it -- only the shell or the foreground task can do that. If they try, they get a SIGTTIN signal that stops the background process:
myaut@zenbook:~$ cat &
[1] 15250
myaut@zenbook:~$
[1]+  Stopped                 cat
The reasoning for this behavior is simple: multiple programs reading from one source leads to a race condition. I.e., if you typed who am i at the shell, the shell might read who, background task #1 might read am, and task #2 might read i.
The solution is simple -- do not use pseudo-terminals to transfer data between processes:
Use pipes -- unnamed, or named (created with mkfifo). They are as simple to read from as stdin. Modern shells also provide coprocesses, which let you avoid named pipes.
Use UNIX sockets in complex cases
If you still need a pseudo-terminal, create a new one for your program with screen or other terminal emulator.
My task is to write a C program that opens a pipe; one process then executes the ls command and writes the output into the pipe. The other process should read it and display it in the shell.
So, the main problem I'm struggling with is this:
execl("/bin/ls", "/bin/ls", NULL);
How can I redirect the output to the pipe?
Another way I tried was to redirect the output to a file, read the file back, write that to the pipe, and finally read it at the other end (and delete the file).
In a shell that would look like this:
ls > ls_out.txt
But I wasn't able to reproduce that with execl.
Of course my favorite solution remains something like:
execl("/bin/ls", "bin/ls", " > my_pipe", NULL)
Use popen() instead of execl() to read the application output, see http://linux.die.net/man/3/popen
Create a pipe in your program.
Do the fork.
In the child process make the writing end of the pipe the new STDOUT_FILENO. (Read about the dup2 system call)
Execute the program you want to run.
In the parent process read from the reading end of the pipe.
This is how e.g. popen works under the hood.
You seem to focus on exec() a lot; that call is for running a process. The pipe must be set up before-hand, since the process you're going to start will just use whatever stdin/stdout already exists. See fork(), dup() and pipe() for the low-level primitives.
I think this might be what you're looking for?
for i in $(ls -1); do echo "$i"; done
or
ls -1 | xargs echo
?
I have two processes written in C++, piped one after the other. One gives some information to the other's stdin, then they both go on to do something else.
The problem is that the second process hangs inside cin.getline(), even though there's no more data to be exchanged. The solution was for the first process to call fclose(stdout), and that works, except when I use the process wrapped up in a script. Apparently the script's stdout is still open even after the process closes its own copy, which seems fair; but in my case, can I close it? Thanks
Since your program doesn't terminate, you can exec your-program in the script instead of just your-program and save an open file descriptor at the writing end of the pipe (and a bunch of other things).
Alternatively, start your program in the background and exit the script.
You can also close the standard output, but if you do that before you start your program, it won't be able to use the closed file descriptor. So you have to close it while the program is running. This is not exactly trivial. I can think of starting the program in the background, closing the standard output (use exec 1>&- for that) and bringing the program back to the foreground.
I am creating a parent and child with fork() so that the parent can communicate with a shell (/bin/sh) through a pipe.
The problem is:
In the parent I set up select() on the child's output, but it unblocks only when the process finishes! So when I run, say, ps, it's okay; but when I run /bin/sh it produces no output until the shell exits. But I want to read its output!
for (;;) {
    FD_ZERO(&sh);                 /* select() modifies the set, so it */
    FD_SET(PARENT_READ, &sh);     /* must be rebuilt on every iteration */
    select(PARENT_READ + 1, &sh, NULL, NULL, NULL); /* unblocks only when the shell exits! */
    if (FD_ISSET(PARENT_READ, &sh)) {
        while ((n = read(PARENT_READ, buf, 30)) > 0) {
            buf[n] = '\0';        /* terminate at the bytes actually read */
            printf("C: %s\n", buf);
        }
    }
}
I assume the answer lies somewhere in the field of disabling the buffering of pipes?
A lot of programs change their behavior depending on whether or not they think they're talking to a terminal (tty), and the shell definitely does. Also, the default C streams have different buffering: stderr is unbuffered, while stdout is line-buffered if it refers to a tty and fully buffered otherwise -- meaning stdout isn't flushed until the internal buffer is full, the program explicitly flushes it, or the program ends.
To work around this problem, you have to make your program pretend to be a terminal. To do this, you can use your system's pseudo-terminal APIs (try man 7 pty). The end result is a pair of file descriptors that sort-of work like a pipe.
Also, as an aside, when select unblocks, you should read exactly once from the triggered file descriptor. If you read more than once, which is possible with the loop you've got there, you risk blocking again on subsequent reads, unless you've got your FD in non-blocking mode.
However, I have to ask: why do you need to interact with the shell in this way? Is it possible to, say, just run a shell script, or use "/bin/sh -c your_command_here" instead? There are relatively few programs that actually need a real terminal to work correctly - the main ones are programs that prompt for a password, like ssh, su or sudo.