Reading the output streams from a C/C++ coded application - c++

First of all, I'm on Ubuntu 14.04.
So, here's my problem: I'm dealing with a C++ coded application that has a graphical interface (games/music players/etc). This application constantly sends strings to a logger whenever something happens, but those are only visible inside the client.
What I've tried to do already (failures):
strace the application and filter the results (say, if the application showed the message "Hello, user", I would log all the output to a test file and search for "Hello")
ltrace the application
debug the application with gdb
search for debug methods on C/C++ apps
What I've got from this last method is that programs usually log errors and messages through a clog stream. What could I do to retrieve that information?
To summarize: I have a graphical C/C++ application that constantly prints strings in a window inside the client; I want to read those strings, or anything else this application outputs. Any debugging/memory-reading information may also be helpful!
Thanks

std::cout corresponds to the stdout stream;
std::cerr and std::clog correspond to the stderr stream.
By default, content sent to stdout and stderr is displayed in the terminal.
To see it, open a terminal emulator (or a virtual terminal), type the path to the program, and press Enter. You will see everything the program sends to stdout and stderr.
stdout is file descriptor number 1, stderr is number 2.
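To make the mapping concrete, here is a minimal sketch that writes one line to each C++ stream; run it as ./a.out >out.txt 2>err.txt (the file names are just examples) and the cout line lands in out.txt while the cerr and clog lines land in err.txt:
#include <iostream>

int main()
{
    std::cout << "goes to stdout (fd 1)" << std::endl;       // standard output
    std::cerr << "goes to stderr (fd 2)" << std::endl;       // standard error, unbuffered
    std::clog << "also goes to stderr (fd 2)" << std::endl;  // standard error, buffered
    return 0;
}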
The following hints are written for the bash shell (they may work in other shells, but are not guaranteed to). If you're not sure you're using bash, type bash in the terminal and press Enter.
When you want to send a stream's content to a file, after the program's path (but before pressing Enter) add a space and n>filename, where n is the number of the stream you want to redirect (if omitted, 1 = stdout is redirected). Redirected content is not shown in the terminal.
When you simply want to ignore a stream's content completely, redirect the stream to /dev/null.
When you want to send stdout to another program (if that program is a console program, it will see the text as if the user had typed it on the keyboard), after the path type |program_name and_possible_parameters. For example, you can pipe stdout to grep.
grep prints only the lines containing the string passed as an argument (after grep, type a space and the string; if it contains a space, quote it with "" or ''; if it contains ', ", or \, precede that character with a backslash). If the result will appear in the terminal, I recommend adding --color=auto between grep and the argument so grep highlights every occurrence in a different color.
Finally, your command can look like
path_to_your_program |grep --color=auto "argument".
You can use more than one redirection in single command.
Redirections are processed from left to right.
When you want to redirect stderr to the input of another program, first redirect its content to stdout by typing 2>&1 and then use |.

Related

Repeat character is getting skipped while dumping path programmatically in Powershell

I am dumping the characters of a script file path one by one onto the PowerShell console using the SendMessage API.
On the first execution of the program, PowerShell skips a repeated character from the path, which creates an issue.
For example, take "C:\myFolder\abbc\test.ps1".
When I dump the characters of this path onto the PowerShell window, it skips one b from "abbc", so the final path that ends up on the console is "C:\myFolder\abc\test.ps1".
This happens only on the first execution of the application; on subsequent executions it works fine and accepts repeated characters as well.
SendMessage is not the right way to do it. You should use SendInput. See also how to use sendinput function C++ for more information.
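For reference, here is a minimal sketch of sending a single character with SendInput; the helper name sendChar is mine, and error handling plus window-focus management are omitted:
#include <windows.h>

// Hypothetical helper: queue a key-down/key-up pair for one Unicode character.
void sendChar(wchar_t ch)
{
    INPUT in[2] = {};
    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wScan = ch;
    in[0].ki.dwFlags = KEYEVENTF_UNICODE;                     // key down
    in[1] = in[0];
    in[1].ki.dwFlags = KEYEVENTF_UNICODE | KEYEVENTF_KEYUP;   // matching key up
    SendInput(2, in, sizeof(INPUT));                          // queue both events at once
}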

Open the terminal and execute commands via C programming

Does someone know how to open the terminal and execute several commands using a C program?
I have a program in C and another set of commands executed in the terminal. I need to combine them into one program in C.
I'm using Ubuntu 10.04.
Thanks!
Your question may be somewhat misleading.
Because you want to run all the terminal commands from the C code, perhaps you actually have only textual input/output with these commands. If so, you probably do not need a terminal at all.
I use popen when the output of the (terminal) program is a text stream. It is probably the easiest to use. As an example:
...
const char* cmndStr = "ls -lsa";
FILE* pipe = popen(cmndStr, "r");
...
The popen call executes the command in cmndStr, and any text written to the command's (ls -lsa) standard output is redirected into the pipe, which is then available for your C program to read.
popen opens a separate process (but without a terminal to work in, just the pipe)
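A slightly fuller sketch of that pattern (same cmndStr as above; error handling kept minimal) reads the pipe line by line and closes it with pclose:
#include <stdio.h>

int main(void)
{
    const char* cmndStr = "ls -lsa";
    FILE* pipe = popen(cmndStr, "r");        /* run the command, capture its stdout */
    if (!pipe) return 1;
    char line[512];
    while (fgets(line, sizeof line, pipe))   /* read the command's output line by line */
        printf("got: %s", line);
    pclose(pipe);                            /* close the pipe and reap the child */
    return 0;
}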
fork is another way to launch a separate process, with some control over the launched process's standard I/O, but again, I think not a terminal.
On the other hand, if your output is not a simple text stream, maybe you can get by with an output-only dedicated terminal screen to accommodate the special output activity. For instance, when I work with ncurses:
I manually open a terminal in the conventional way, and in the terminal
issue the command "tty" to find out the device name, and
issue a "cd" to set the focus to the working dir.
dmoen@C5:~$ tty
/dev/pts/1
dmoen@C5:~$ cd work
dmoen@C5:~/work$
Then I start my program (in a different tty) and let the program know which device I want it to use for the special output (i.e. /dev/pts/1). I typically use command-line parameters to tell my program which pts or extra terminals to use, but environment variables, pipes, in/out redirection, and other choices exist.
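As a rough illustration of that setup, here the device path (e.g. /dev/pts/1, found with tty) is taken from the command line; the structure and lack of error handling are mine:
#include <stdio.h>

int main(int argc, char* argv[])
{
    if (argc < 2) return 1;                 /* expects e.g. /dev/pts/1 as argv[1] */
    FILE* special = fopen(argv[1], "w");    /* open the other terminal's device */
    if (!special) { perror("fopen"); return 1; }
    fprintf(special, "status: running\n");  /* appears only on that terminal */
    fclose(special);
    return 0;
}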
I have not tried (lately) to launch a terminal (as suggested by smrt28), except in shell. I believe this will work, but I do not see how the output from the terminal command (ls in the example) would be delivered back to your program. popen trivially delivers a text stream.
A long time ago, I used a device called 'pty' which works like a terminal, but I don't remember how to connect it usefully.
There is a set of 'exec' commands ... see man exec. To connect them back to your program, you will probably work with files, or perhaps by redirecting I/O. Too many choices to list here.
And also, maybe you can connect these commands with your C program using shell pipes.
Check "man xterm", parameter -e. Then, in C, you can:
system("xterm -e ls")

How to disable std::cerr?

I have a program which contains a lot of std::cerr output, and it goes directly to my terminal. I am wondering what the difference between std::cerr and std::cout is. And how can I disable std::cerr (I don't want it to output to my screen)?
As others have mentioned, if this is a Unix-like system then 2>/dev/null redirects stderr (2) to the big bit bucket in the sky (/dev/null).
But nobody here has explained what the difference between stderr and stdout is, so I feel obligated to at least touch on the topic.
std::cout is the standard output stream. This is typically where your program should output messages.
std::cerr is the standard error stream. This is usually used for error messages.
As such, if your program "contains lots of cerr" output, then it might be worth taking a look at why so many error messages are being printed, rather than simply hiding the messages. This is assuming, of course, that you don't just happen to have a program that emits lots of non-error output to stderr for some reason.
Assuming this program is executed on a *nix system, one possibility is to redirect stderr to /dev/null.
This old newsgroup post shows how to redirect (the code is too large to post here). You need to use rdbuf(), which returns and accepts a streambuf*.
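In the spirit of that post, a minimal sketch of the rdbuf() approach, redirecting std::cerr into an ofstream opened on /dev/null (the target file is just an example), might look like:
#include <iostream>
#include <fstream>

int main()
{
    std::ofstream nullSink("/dev/null");                          // or any log file
    std::streambuf* oldBuf = std::cerr.rdbuf(nullSink.rdbuf());   // swap in the new buffer

    std::cerr << "this line is discarded\n";

    std::cerr.rdbuf(oldBuf);                                      // restore before nullSink is destroyed
    std::cerr << "this line is visible again\n";
    return 0;
}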
cerr is an object of class ostream that represents the standard error stream. It is associated with the cstdio stream stderr.
By default, most systems have their standard error output set to the console, where text messages are shown, although this can generally be redirected.
Because cerr is an object of class ostream, we can write characters to it either as formatted data using for example the insertion operator (ostream::operator<<) or as unformatted data using the write member function, among others (see ostream).
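For illustration, both forms mentioned above in one tiny program:
#include <iostream>

int main()
{
    std::cerr << "formatted: " << 42 << '\n';   // insertion operator
    const char raw[] = "unformatted\n";
    std::cerr.write(raw, sizeof raw - 1);       // write() member function; length excludes the terminating NUL
    return 0;
}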
2>/dev/null does the trick.
In many systems, including Windows and Unixes, there are two standard output streams: stdout and stderr.
Normally, a program writes to stdout, which can either be displayed on screen, redirected to a file (program > output.txt), or fed as input to another program (program1 | program2). For example, you can search the output of your program with the grep tool by running program | grep searchword.
However, if an error occurs, and you print it to stdout which is redirected, the user won't see it. That's why there is the second output for errors. Also the user usually doesn't want error text to be written to the output file, or be fed to grep.
When running a program, you can redirect its error output to a file with program 2>file. The file can be /dev/null, or &1, which would mean redirect to stdout.

Is stdout Ever Anything Other Than a Console Window?

From http://www.cplusplus.com/reference/iostream/cout/:
By default, most systems have their standard output set to the console, where text messages are shown, although this can generally be redirected.
I've never heard of a system where stdout is anything other than a console window, by default or otherwise. I can see how redirecting it might be beneficial in systems where printing is an expensive operation, but that shouldn't be an issue in modern computers, right?
Of course it could be. I may want to redirect standard out to a text file, another process, a socket, whatever.
By default it is the console, but there are a variety of reasons to redirect it, the most useful (in step with the Unix philosophy) being the redirection of the output of one program to the input of another program. This allows one to create many small, lightweight programs that feed into one another and work as discrete parts of a larger system.
Basically, it's just a simple yet powerful mechanism for sharing data. It is more popular on *nix systems for the reason I mention above, but it applies to Windows as well.
On most systems you can redirect the standard input/output/error to other file descriptors or locations.
For example (on Unix):
./appname > output
Redirects the stdout from appname to a file named output.
./appname 2> errors > output
Redirects stdout to a file named output, and all errors from stderr to a file named errors.
On unix systems you can also have a program open a file descriptor and point it at stdin, such as this:
echo "input" > input
cat input | ./appname
This will cause the program to read from the pipe for stdin.
This is how in unix you can "pipe" various different utilities together to create one larger tool.
find . -type f | ./appname | grep -iv "search"
This will run the find command, and take its output and pipe it into ./appname, then appname's output will be sent to grep's input which then searches for the word "search", displaying just the results that match.
It allows many small utilities to have a very powerful effect.
Think of the >, <, and | like plumbing.
> is like the drain in a sink: it accepts data and stores it where you want to put it. When a shell encounters >, it will open a file.
> file
When the shell sees the above, it opens the file using a standard system call and remembers that file descriptor. In the case above, since there is no input, it creates an empty file and lets you type more commands.
banner Hello
This command writes Hello in really big letters to the console, and will cause it to scroll (I am using Unix here since it is what I know best). The output is simply written to standard out. Using a "sink" (>) we can control where the output goes, so
banner Hello > bannerout
will cause all of the data from banner's standard output to be redirected to the file descriptor the shell has opened and thus be written to a file named bannerout.
Pipes work similarly to > in that they help control where the data flows. Pipes, however, can't write to files; they can only be used to move data from one point to another.
For example, here is water flowing through several substations and waste cleaning:
pump --from lake | treatment --cleanse-water | pump | reservoir | pump > glass
The water flows from the lake, through a pipe to the water treatment plant, from the plant back into a pump that moves it to a reservoir, then it is pumped once more into the municipal water pipes and through your sink into your glass.
Notice that the pipes simply connect all of the outputs together, ultimately it ends up in your glass.
It is the same way with commands and processing them in a shell on Linux. It also follows a path to get to an end result.
Now there is one final thing that I haven't discussed yet: the < input character. It reads from a file and feeds the contents to a program's stdin.
cat < bannerout
Will simply print what was stored in bannerout. This can be used when you have a file you want to process but don't want to run an extra cat command at the front of the chain.
So try this:
echo "Hello" > bannerinput
banner < bannerinput
This will first put the string "Hello" in the file bannerinput, and then when you run banner it will read from the file bannerinput.
I hope this helps you understand how redirection and piping work on Unix (some, if not most, of this applies to Windows as well).
So far all of the answers have been in the context of the thing (shell, whatever) that invokes the program. The program itself can make stdout something other than the terminal. The C standard library provides freopen which lets the programmer redirect stdout in any compliant environment. POSIX provides a number of other mechanisms (popen, fdopen, ...) that gives the programmer even more control. I suspect Windows provides analogous mechanisms.
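For example, a minimal sketch of a program redirecting its own stdout with freopen (the file name out.log is just an example):
#include <stdio.h>

int main(void)
{
    puts("this still reaches the terminal");
    if (freopen("out.log", "w", stdout) == NULL)   /* rebind stdout to a file */
        return 1;
    puts("this ends up in out.log instead");
    return 0;
}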
Any number of things can happen to the three standard file descriptors 0, 1 and 2. Anyone can launch a new process with the file descriptors attached to anything they like.
For instance, GNU screen puts the output into a pipe and allows dynamic reattaching of a session. SSH takes the output and returns it to the other end. And of course all the numerous shell redirectors regularly make use of manipulating the file descriptors.
For a program to have stdout it must be running on a hosted implementation (one with an Operating System), or on a free-standing implementation with extras.
I'm having a hard time imagining such an implementation without a console of some kind, but let's suppose for a moment that a Mars rover has a full OS, is programmed in C (or C++), and doesn't have that console:
/* 2001-07-15: JPL: stdout is the headquarters */
puts("Help. I'm stuck.");
might have sent the message to NASA headquarters.
Both Windows and Linux will redirect stdout to a file if you run the program like this:
my_program > some_file
This is the most common case, but many other types of redirection are possible. On Linux, you can redirect stdout to anything that supports the "file descriptor" interface, such as a pipe, socket, file, and various other things.
One simple example of a case where one might want to redirect stdout is when passing the information to another program. The Unix/Linux command ps generates a list of processes that belong to the current user. If this list was long and you wanted to search for a particular process, you could enter
ps | grep thing
which would redirect the stdout of ps to the stdin of grep thing.

What is the way to separate command line output (processing from user interaction) on Unix?

I'm writing a console application in which user interaction might be necessary (prompts for keyboard input, CLI arguments, etc.), but I want to keep it separate from the result of the processing (which goes to cout, so that it can be piped to some other application).
How can I achieve this, given that I can't just send all interaction with the user to cerr (not everything is an error)?
/dev/tty is the usual way, but it's also possible on most Unix-like operating systems to read from cerr/stderr because the system usually opens the tty once as stdin and dup()s it onto stdout and stderr.
When your stdout is piped somewhere else, the only way to show something on the terminal (apart from maybe things like curses and dialog) is stderr.
If you need user interaction, open /dev/tty; it will be the controlling terminal for the process. Standard error and standard input may be redirected as well.
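A minimal sketch of that approach, prompting on /dev/tty while the pipeable result still goes to std::cout (names are mine and error handling is omitted):
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ofstream ttyOut("/dev/tty");            // the prompt always reaches the terminal
    std::ifstream ttyIn("/dev/tty");             // the answer is read from the terminal

    ttyOut << "Enter a name: " << std::flush;
    std::string name;
    std::getline(ttyIn, name);

    std::cout << "result for " << name << "\n";  // the actual output, safe to pipe elsewhere
    return 0;
}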