I've got the following code:
std::cerr << "Hello" << std::endl;
std::cerr << "World" << std::endl;
...which would normally be great.
But: I'm running it in an Erlang port program, and Erlang has done ... something to the terminal, so that "\n" is no longer converted to CRLF, which means my output appears as...
Hello
     World
What has Erlang done to my terminal? How do I detect it? How do I get std::endl to output \r\n in this case?
Note: it's (probably) not just that Erlang's ingesting stderr from my program and screwing up the line-feeds. If I use plain-ol' printf("Hello\n") in a NIF, I see the same problem.
Don't use ::std::endl in the first place. It doesn't really do what you want. It always outputs '\n' and flushes the stream. Flushing the stream is a huge and unnecessary performance hit most of the time.
And by '\n' I mean exactly that: an actual, literal '\n'. It doesn't do any kind of translation or anything that people think it does. Just a flat '\n' and a flush, even on Windows. The newline translation you get under Windows is handled by the lower-level iostream facilities and is controlled by whether or not the stream is in binary mode.
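To make that concrete, here is a minimal illustration; the first two lines do exactly the same thing:

#include <iostream>

int main()
{
    std::cout << "Hello" << std::endl;     // writes '\n', then flushes the stream
    std::cout << "Hello\n" << std::flush;  // exactly the same effect, spelled out
    std::cout << "Hello\n";                // usually all you need: no flush
    return 0;
}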
What Erlang has done to your terminal is probably put it in CBREAK mode. There is a good question or two on here that have answers that describe the differences between raw, cooked, and cbreak terminal driver modes in Unix.
It might be possible for you to manually set the terminal back to cooked mode. The likely reason Erlang put it in cbreak mode is that in cooked mode you normally only receive input when the user hits Enter, whereas cbreak delivers every character as it's typed.
You can also test for yourself what mode the terminal is in. Linux (and later POSIX revisions) has effectively replaced the three terminal modes with various flags whose combined effect produces behavior very similar to the old terminal modes.
First, you probably want to use isatty(1) (calling isatty on stdout) to see if your output really is a terminal before trying anything else.
Then, you can use tcgetattr to read the current settings for the various bits into a struct termios. This structure contains a member called c_oflag for output mode bits. The relevant bit in this case is probably ONLCR, and maybe (but I suspect not) OPOST.
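Putting those pieces together, here is a minimal sketch (assuming a POSIX system, with error handling kept to a minimum) that checks whether stdout is a terminal and, if the ONLCR translation has been switched off, turns it back on:

#include <termios.h>
#include <unistd.h>

void restore_newline_translation()
{
    if (!isatty(STDOUT_FILENO))
        return;  // not a terminal: nothing to fix

    struct termios tio;
    if (tcgetattr(STDOUT_FILENO, &tio) != 0)
        return;

    if (!(tio.c_oflag & ONLCR) || !(tio.c_oflag & OPOST))
    {
        tio.c_oflag |= OPOST | ONLCR;  // re-enable '\n' -> "\r\n" on output
        tcsetattr(STDOUT_FILENO, TCSANOW, &tio);
    }
}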
Related
I have two loggers in my program: one that I made inside a GUI, and one that is complicated but very well designed and prints to the console. I am trying to get the output from the nice console logger into the rendered one. I have tried everything under the sun to get this to work, but I can't seem to figure it out (due to my lack of understanding of the code of the other logger, spdlog). My conclusion is that capturing the logs directly from what is printed is the best way to do this, but I can't find anyone online asking how to do this. I have seen a few questions, but they just post code as an answer and don't really explain what is going on. My question: is there a way to grab printed statements from the console, and what are the performance issues/complications that come with doing something like this?
For example, if I do std::cout << "hello!" << std::endl; or some printf statement, I want to be able, further down in the code, to grab "hello!".
"My conclusion is that taking the logs directly from what is printed is the best way to do this but I can't find online anyone asking how to do this."
Consoles nowadays are terminal emulators. The original terminals' output went to printers and couldn't be (easily) read back.
An application's stdout and stderr (console) streams are write-only. Moreover, on Windows and Unix/Linux you can pipe your program's console output (either or both of stderr and stdout) into another application with | (pipe), which creates an IPC pipe between the stdout of your application and the stdin of the other one. That IPC pipe is one-way; your application cannot possibly read back from it.
You may be able to get access to the contents of the frame buffer of the Windows console window that cmd.exe controls, but that won't be a verbatim, byte-exact copy of the data you wrote into stdout, because the Windows console interprets escape sequences.
If stdout is redirected into a file, you can re-open that file for reading, but there is no portable way to do so.
In other words, there is no portable way to read console output back.
"I have tried everything under the sun to get this to work but I can't seem to figure it out (due to my lack of understanding of the code from the other logger (spdlog))."
I bet you haven't tried reading the spdlog documentation, in particular the part about a logger with multiple sinks. A sink is an output abstraction whose implementation can write into a file, into memory, or both. What you need is to attach your own sink to spdlog that prints into your UI.
Derive your sink from base_sink and implement abstract member functions:
virtual void sink_it_(const details::log_msg &msg) = 0; to print into the UI, and,
virtual void flush_() = 0; to do nothing.
Then attach an object of your sink class to that spdlog logger.
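Here is a minimal sketch of such a sink; gui_append is a hypothetical stand-in for whatever call your UI actually exposes:

#include <spdlog/spdlog.h>
#include <spdlog/sinks/base_sink.h>
#include <mutex>
#include <string>

void gui_append(const std::string& line);  // hypothetical: your GUI hook goes here

template <typename Mutex>
class gui_sink : public spdlog::sinks::base_sink<Mutex>
{
protected:
    void sink_it_(const spdlog::details::log_msg& msg) override
    {
        // Let spdlog apply its pattern formatter, then hand the text to the UI.
        spdlog::memory_buf_t formatted;
        spdlog::sinks::base_sink<Mutex>::formatter_->format(msg, formatted);
        gui_append(fmt::to_string(formatted));
    }

    void flush_() override {}  // nothing to flush for a GUI widget
};

using gui_sink_mt = gui_sink<std::mutex>;

Attach it with something like spdlog::default_logger()->sinks().push_back(std::make_shared<gui_sink_mt>()); and log messages will go to both the console and your UI.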
I was trying to add colors to some strings that have to be displayed in a terminal, using ANSI escape codes. So far I haven't grasped the whole escape-code thing; I've just been trying things out by copy-pasting some escape codes. Then I saw this answer, which said that a program should verify that it is being executed in a terminal, or else continue without polluting strings with escape codes.
The answer explains to use a *nix function, isatty(), which I found out resides in unistd.h, which in turn wasn't promoted to a cunistd header by the C++ standard, based on my understanding that it wasn't in the C standard in the first place. I tried to search SO again but wasn't able to understand this well. Now I have two questions regarding this:
In what environment (right word?) can a program using ANSI escape codes be executed such that it requires an initial check, given that I'm building for the CLI only?
What would be a proper solution according to the ISO C++ standard for handling this issue? Is using unistd.h fine? Would that conform to modern C++ practices?
Also, is there anything I should read/understand before dealing with ANSI color codes?
On a POSIX system (like Linux or OSX) the isatty function is indeed the correct function to determine if you're outputting to a terminal or not.
Use it like this:
#include <unistd.h>  // for isatty() and STDOUT_FILENO

if (isatty(STDOUT_FILENO))
{
    // Output is a terminal: safe to emit VT100/ANSI control codes
}
else
{
    // Output is not a TTY; it could be a pipe or redirected to a file.
    // Use normal output without control codes.
}
I want to write a game in the Linux terminal (in C/C++), so first I should be able to print the characters I want to it. I tried printf(), but it seems a little inconvenient. I think there should be a character buffer holding the output characters for a terminal. Is there any way to directly manipulate that buffer?
Thanks a lot.
It works in a rather different manner.
A terminal is nothing else but a character device, which means it is practically unbuffered. Despite this, you can still manipulate the screen position with appropriate sequences of characters, called "escape sequences". For example, if you issue the \e[A sequence (0x1B 0x5B 0x41), the cursor goes one line up while leaving the characters intact, while if you issue \e[10;10H (0x1B 0x5B 0x31 0x30 0x3B 0x31 0x30 0x48), your cursor will go to column 10 of row 10 (exactly what you want). After you move the cursor, the next character you write out goes to that position. For further information, look up the ANSI escape code reference.
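As a quick sketch of the idea (assuming a VT100-compatible terminal emulator):

#include <cstdio>

int main()
{
    printf("\x1b[2J");      // clear the screen
    printf("\x1b[10;10H");  // move the cursor to row 10, column 10
    printf("@");            // the next character written lands there
    fflush(stdout);         // push it out to the terminal right away
    return 0;
}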
Another important thing to know about is the dimensions of your terminal. ioctl can inform you about the size of the terminal window:
#include <stdio.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

int main ()
{
    struct winsize ws;

    // Ask the terminal driver for the current window size.
    ioctl (STDOUT_FILENO, TIOCGWINSZ, &ws);
    printf ("Rows: %d, Cols: %d\n", ws.ws_row, ws.ws_col);
    return 0;
}
Note that the technique mentioned above is a solution to send commands to the terminal emulator connected to your pseudo terminal device. That is, the terminal device itself remains unbuffered, the commands are interpreted by the terminal emulator.
You might want to use the setbuf function, which allows you to tell printf which buffer to use. You can use your own buffer and control its contents.
However, this is the wrong approach, for two reasons.
First, it won't save you work compared to printf(), fwrite() and putchar().
Second, and more importantly, even these functions won't help you. From your comment it's clear that you want to manipulate a character on the screen, for example, replace a '.' (empty floor) by a 'D' (Dragon) when that Dragon approaches. You can't do this by manipulating the output buffer of printf(). Once the '.' is displayed, the output buffer has been flushed to the terminal, and manipulating that buffer has no effect. The terminal has received a copy of that buffer and has displayed what the data in the buffer instructed it to display. In order to change what is displayed, you have to send new commands.
And this is exactly what ncurses does for you. It keeps track of the state of the terminal, the current content, the cursor position and all the nasty details, like how to make a character appear bold.
You won't succeed with printf. That's hopeless. You need to learn what ncurses can do for you, and then everything else is easy.
TLDR: use ncurses
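To give a flavor of it, a minimal sketch (link with -lncurses):

#include <ncurses.h>

int main()
{
    initscr();             // take over the terminal
    noecho();              // don't echo typed characters
    curs_set(0);           // hide the cursor
    mvaddch(10, 10, 'D');  // draw the Dragon at row 10, column 10
    refresh();             // push the changes to the screen
    getch();               // wait for a keypress
    endwin();              // restore the terminal
    return 0;
}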
This answer focuses on the why.
Why?
There probably is a way to modify the buffer used by the terminal emulator on your system, given that you have sufficient privileges to write into the respective memory, and maybe even to modify other system resources as required.
As terminals historically have been distinct, isolated, physical devices rather than being conceptually emulated in software, you couldn't access them in any way other than sending them data.
(I mean, you could always print a message locally to instruct a human to take a screwdriver and physically mess around with the physical terminal device, but that's not how people wanted to solve the contemporary issue of "how do I change the cursor position and rewrite characters on my (and potentially any other connected) terminal?".)
As others have pointed out, most physical terminals have (at some point) been built to give special meaning to certain input sequences instead of printing them, which makes them escape sequences in this context (according to how wikipedia.org defines them, that is).
A behavioral convention on how to respond to certain input sequences emerged (presumably for the sake of interoperability, or for reasons of market predominance) and got standardized as the ANSI escape codes.
Those input sequences survived the transition from physical terminal devices to their emulated counterparts, and even though you could probably manipulate the terminal emulator's memory using system calls, libraries such as ncurses allow you to easily make use of said ANSI escape codes in your console application.
Also, using such libraries is the obvious solution to make your application work remotely:
Yes, technically you could ssh into another system (or get access in any other, more obscure way that works for you) and cause system calls or any other event that would interfere with the terminal emulator in the desired way.
But firstly, I doubt most users would want to grant you the privilege to modify their terminal emulator's memory merely to enjoy your text adventure.
Also, interoperability would suffer, as you couldn't easily support that one user who still has a VT100 and insists on using it to play your game.
I've got a program which contains a lot of std::cerr calls; it outputs directly to my terminal. I am wondering what the difference between std::cerr and std::cout is. And how can I disable std::cerr (I don't want its output on my screen)?
As others have mentioned, if this is a Unix-like system then 2>/dev/null redirects stderr (2) to the big bit bucket in the sky (/dev/null).
But nobody here has explained what the difference between stderr and stdout is, so I feel obligated to at least touch on the topic.
std::cout is the standard output stream. This is typically where your program should output messages.
std::cerr is the standard error stream. This is usually used for error messages.
As such, if your program "contains lots of cerr" output, then it might be worth taking a look at why so many error messages are being printed, rather than simply hiding the messages. This is assuming, of course, that you don't just happen to have a program that emits lots of non-error output to stderr for some reason.
Assuming this program is executed on a *nix system, one possibility is to redirect stderr to /dev/null.
This old newsgroup post shows how to redirect (the code is too large to post here). You need to use the streambuf* overload of rdbuf to swap out the stream's buffer.
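A minimal sketch of the idea, silencing std::cerr from inside the program by swapping its stream buffer (the /dev/null path assumes a *nix system):

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream null_stream("/dev/null");       // or any log file
    std::streambuf* old_buf = std::cerr.rdbuf();  // remember the old buffer
    std::cerr.rdbuf(null_stream.rdbuf());         // redirect cerr into it

    std::cerr << "this goes nowhere\n";

    std::cerr.rdbuf(old_buf);                     // restore before exit
    std::cerr << "this is visible again\n";
    return 0;
}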
cerr is an object of class ostream that represents the standard error stream. It is associated with the cstdio stream stderr.
By default, most systems have their standard error output set to the console, where text messages are shown, although this can generally be redirected.
Because cerr is an object of class ostream, we can write characters to it either as formatted data using for example the insertion operator (ostream::operator<<) or as unformatted data using the write member function, among others (see ostream).
2>/dev/null does the trick.
In many systems, including Windows and Unixes, there are two standard output streams: stdout and stderr.
Normally, a program outputs to stdout, which can either be displayed on screen or redirected to a file: program > output.txt, or redirected as input for another program: program1 | program2. For example, you can search the output of your program with the grep tool by running program | grep searchword.
However, if an error occurs and you print it to stdout, which is redirected, the user won't see it. That's why there is a second output stream for errors. Also, the user usually doesn't want the error text to be written to the output file, or to be fed to grep.
When running a program, you can redirect its error output to a file with program 2>file. The file can be /dev/null, or &1, which would mean redirect to stdout.
I am having very weird behaviour with a C++ program: it gives me different results when run with and without redirecting the screen output to a file (reproducible in Cygwin and Linux). I mean, if I take the same executable and run it as ./run, or run it as ./run >out.log, I get different results!
I use std::cout to output to screen, all lines ending with endl; I use ifstream for the input file; I use ofstream for output, all lines ending with endl.
I am using g++ 4.
Any idea what is going on?
UPDATE: I have hard-coded the input data, so ifstream is not used, and the problem persists.
UPDATE 2: Now it's getting interesting. I have probed three variables that are computed initially, and here is what I get with and without redirecting the output to a file:
redirected to file: 0 -0.02 0
direct to screen: 0 -0.02 1.04083e-17
So there's a round-off difference in the code variables with and without redirecting the output!
Now, why would redirecting interfere with an internal computation of the code?
UPDATE 3: If I redirect to /dev/null, I get the same behaviour as outputting directly to the screen, not the behaviour I get when redirecting to a file.
There are a number of effects of running with nohup, but the main one is that stdin and stdout will be redirected to /dev/null. In most cases, this means that stdout will be fully buffered instead of line buffered (it's only line buffered if stdout is a terminal), so output generally won't actually be written until you do an explicit flush.
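You can see that buffering difference with a trivial sketch; run it directly and the first line appears immediately, but redirect it to a file and nothing shows up until the program exits:

#include <cstdio>
#include <unistd.h>

int main()
{
    printf("before sleep\n");  // line-buffered on a terminal: appears at once
    sleep(5);                  // when redirected, it's still in the buffer here
    printf("after sleep\n");
    return 0;                  // buffers are flushed at normal exit
}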
Edit
Further updates make it unlikely that the problem is directly related to the different behavior under nohup. At this point, I would suggest running under valgrind, as the most likely suspect is an uninitialized local variable or heap object. Such a variable will have an unpredictable (but generally repeatable) value that depends on the environment in which the function was invoked -- mostly on what previously called functions have left on the stack, which might well depend on nohup, as you're seeing.
Do you use threads in this application?
I've seen subtly different behaviour in a poorly synchronized threaded application on Linux with/without nohup, although I don't know whether this would have been reproducible with Cygwin.
In my case, I had two initialization threads, but the order in which they completed was (mistakenly) significant. Without nohup one would always complete first, but with nohup the other generally would. I think the underlying cause was differences in I/O buffering.