I've been experiencing a strange, intermittent bug for the last few days.
I have a console application that also opens an SDL window for graphical output, and it continuously runs three threads. The main thread runs the event loop and processes the console input. The second thread uses std::cin.getline to get the console input. This second thread, however, is also responsible for outputting logging information, which can be produced when the user clicks somewhere on the SDL window.
These log messages are sent to a mutex-protected stringstream regularly checked by thread 2. If there are log messages, it deletes the prompt, outputs them, and then prints a new prompt. Because of this it can't afford to block on getline, so this thread spawns the third thread, which peeks cin and signals via an atomic when there's data to be read from the input stream, at which point getline is called and the input is passed to the logic on the main thread.
Here's the bit I haven't quite worked out: about 1 in 30 of these inputs fails because the program doesn't receive exactly the same input as was typed into the terminal. You can see what I mean in the images here: the first line is what was typed and the second is the Lua stack trace caused by receiving different (incorrect) input.
This occurs whether I use rlwrap or not. Is this due to peek and getline hitting the input stream at the same time? (This is possible as the peek loop just looks like:
while (!exitRequested_)
{
    if (std::cin.peek())
        inputAvailable_ = true; // this is atomic
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
}
Any thoughts? I looked at curses briefly, but it looks like quite a lot of effort to use. I've never heard of getline garbling input before. I also printed every string that was received for a while, and they matched what Lua is reporting.
As @davmac suggested, peek appears to have been interfering with getline. My assumption is that this is linked to peek taking a character and then putting it back at the same time as getline reads from the buffer.
Whatever the underlying cause of the issue is, I am >98% sure that the problem has been fixed by implementing the fix davmac suggested.
In several hours of use I have had no issues.
Moral: don't access std::cin concurrently, even if one of the functions doesn't modify the stream.
(Note: the above happened with both g++ and clang++, so I assume it's linked to the way the standard library is commonly implemented.)
As @DavidSchwartz pointed out, concurrent access to streams is explicitly prohibited, which clearly explains why the fix works.
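For illustration, here is a minimal sketch of one way to avoid the concurrent access (this is just my illustration with made-up names, not necessarily the exact fix davmac proposed): a single thread owns std::cin, blocks on getline, and hands complete lines over through a mutex-protected queue, so no two threads ever touch the stream.
#include <atomic>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>

std::mutex lineMutex;
std::queue<std::string> pendingLines;   // complete lines typed by the user
std::atomic<bool> exitRequested{false};

// The ONLY thread that ever touches std::cin.
void inputThread()
{
    std::string line;
    while (!exitRequested && std::getline(std::cin, line))
    {
        std::lock_guard<std::mutex> lock(lineMutex);
        pendingLines.push(line);
    }
}

// Called from whichever thread consumes input; never blocks.
bool tryPopLine(std::string& out)
{
    std::lock_guard<std::mutex> lock(lineMutex);
    if (pendingLines.empty())
        return false;
    out = pendingLines.front();
    pendingLines.pop();
    return true;
}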
I can receive and send data as long as I don't use fd_set(...)/select.
After that, I can't send data to the client. The data is only sent "after" killing the process (pressing Ctrl-C).
For example, if I run this snippet:
http://www.binarytides.com/multiple-socket-connections-fdset-select-linux/
I get the "welcome client-connected message" (line 126), but after the next loop, when the new client has been added via fd_set and select, line 171 should send the received message back to the client, yet I only get it back after killing the process.
Maybe it's because the "OS running the server" thinks the connection is busy and buffers the output. That could also be the reason why killing the process causes the buffer to be sent to the client.
If I use write() instead of send(), the behavior doesn't change.
int count = write(...);
count is fine and no error is reported.
I tried it on two Ubuntu 14.04 systems (one stock LTS install and one built from source).
If you need more source code, I will upload it; I just think the example in the link is well documented and shows the problem.
I have already found a lot of material on this topic, but I can't figure out what I am doing wrong, as all the tutorials and docs do it this way.
Unfortunately, I am not that familiar with C++/Linux and don't know what to investigate next, so any help is appreciated.
Thanks :)
My suspicion is that what you are seeing is not a network problem at all, but rather a buffering problem with your program's stdout stream. In particular, characters your program sends to stdout won't actually become visible in the terminal window until (a) a newline character ('\n') is printed, (b) you manually flush the stream (e.g. via fflush(stdout) or cout.flush()), or (c) the program terminates (as happens when you press Ctrl-C).
So most likely your client program did receive and print the message, but you aren't seeing it because the program is waiting for the newline character before printing anything to the terminal. (It makes sense to do that when a program prints a line of text one small substring at a time, but it can be confusing.)
The easy fix, then (assuming this is indeed the problem), would be to call fflush(stdout) (or printf("\n")) after the printf() that prints the received text. (Or, if you are using C++ streams, call cout.flush() or cout << endl after your cout << theText.)
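For example, a rough sketch of the client's receive-and-print step might look like this (sock stands for your connected socket descriptor; error handling omitted):
#include <cstdio>
#include <sys/socket.h>

// sock is assumed to be the already-connected client socket
void printReply(int sock)
{
    char buf[1024];
    ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);
    if (n > 0)
    {
        buf[n] = '\0';
        printf("%s", buf);   // the received text may not end with '\n'...
        fflush(stdout);      // ...so flush so it shows up in the terminal now
    }
}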
Found the error, thanks to Jeremy Friesner, who mentioned the client. The client reads until "\n" occurs and then parses the message. For testing my C++ server, I had been sending messages without "\n". Thank you.
I am writing a Linux command line application that ultimately leads to data acquisition from a piece of hardware. The nature of the data acquisition is that it will feed data to the program consistently at some defined data rate. Once the user enters RxData (the receive loop), we do not want to stop unless we get a command from the terminal telling it to stop. The problem I foresee is that using getchar() will hang the loop on every iteration of the while loop, because the program will wait for the user to enter input. Am I wrong about this behavior?
On a side note, I know that when working with embedded devices, you can simply check a register to see if the buffer has grown and use that to decide whether or not to read from the buffer. I do not have that luxury in a Linux application (or do I?). Is there some function (let's call it getCharAvailable) that I can run to check whether data has been entered, and THEN signal my program to stop acquiring data?
I can't simply use SIGINT because I need to signal to the hardware to stop data acquisition as well as add a header to the recorded data. There needs to be a signal to stop acquisition.
On Linux (or any other Unix flavour), you can use select to check whether data is available on 2 (or more) file descriptors, sockets, or anything else that can be read. (That is the reason this system call exists...)
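A rough sketch of how that could look for standard input (zero timeout so the call just polls; error handling omitted):
#include <sys/select.h>
#include <unistd.h>

// Returns true if there is something to read on stdin right now.
bool stdinHasData()
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    struct timeval timeout = { 0, 0 };   // do not block, just poll
    return select(STDIN_FILENO + 1, &readfds, NULL, NULL, &timeout) > 0;
}
You would call this once per iteration of the acquisition loop and, when it returns true, read the pending command. Note that a terminal in canonical mode only becomes readable once the user presses Enter.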
Use the ncurses library and call getch in non-delay mode.
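With ncurses that would look roughly like this (the 'q' stop key and the loop structure are just placeholders for your own acquisition logic):
#include <ncurses.h>

void acquisitionLoop()
{
    initscr();
    cbreak();
    nodelay(stdscr, TRUE);        // getch() no longer blocks

    bool acquiring = true;        // placeholder for your own state
    while (acquiring)
    {
        int ch = getch();         // returns ERR if no key is waiting
        if (ch == 'q')            // placeholder stop command
            acquiring = false;
        // ... read the next block of data from the hardware ...
    }

    endwin();
}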
I have a C++ program that takes input from the user on std::cin. At some points it needs to call a function that opens a GUI window with which the user can interact. While this window is open, my application is blocked. I noticed that if the user types anything into my application's window while the other window is open, then nothing happens immediately, but when control returns to my application those keystrokes are all acted upon at once. This is not desirable. I would like for all keystrokes entered while the application is blocked to be ignored; alternatively, a way to discard them all upon the application regaining control, but retaining the capability to react to keystrokes that occur after that.
There are various questions on Stack Overflow that explain how to clear a line of input, but as far as I can tell they tend to assume things like "the unwanted input only lasts until the next newline character". In this case this might not be so, because the user could press enter several times while the application is blocked. I have tried a variety of methods (getline(), get(), readsome(), ...) but they generally seem not to detect when cin is temporarily exhausted. Rather, they wait for the user to continue supplying content for cin. For example, if I use cin.ignore(n), then not only is everything typed while the GUI window was open ignored, but the program keeps waiting afterwards while the user types content until a total of n characters have been typed. That's not what I want - I want to ignore characters based on where in time they occurred, not where in the input stream they occur.
What is the idiom for "exhaust everything that's in cin right now, but then stop looking for more stuff"? I don't know what to search for to solve this.
I saw this question, which might be similar and has an answer, but the answer asks for the use of <termios.h>, which isn't available on Windows.
There is no portable way to achieve what you are trying to do. You basically need to set the input stream to non-blocking state and keep reading as long as there are any characters.
get() and getline() will just block until there is enough input to satisfy the request. readsome() only deals with the stream's internal buffer and is only useful for extracting, without blocking, what has already been read into that buffer.
On POSIX systems you'd just set the O_NONBLOCK flag with fcntl() and keep read()ing from file descriptor 0 until the read returns a value <= 0 (-1 with errno set to EAGAIN/EWOULDBLOCK means there is currently no input, -1 with any other errno is a real error, and 0 means end of file). Since the OS normally buffers input on a console, you'd also need to set the terminal to non-canonical mode (using tcsetattr()). Once you are done you'd probably restore the original settings.
How to do something similar on non-POSIX systems, I don't know.
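On POSIX, a rough sketch of "drain whatever is there right now" could look like this (the original terminal and file-descriptor settings are restored afterwards; error checking omitted):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

void drainStdin()
{
    termios oldTerm;
    tcgetattr(STDIN_FILENO, &oldTerm);

    termios raw = oldTerm;
    raw.c_lflag &= ~(ICANON | ECHO);            // non-canonical: no line buffering
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    int oldFlags = fcntl(STDIN_FILENO, F_GETFL);
    fcntl(STDIN_FILENO, F_SETFL, oldFlags | O_NONBLOCK);

    char buf[256];
    while (read(STDIN_FILENO, buf, sizeof(buf)) > 0)
        ;                                       // throw away everything already typed

    fcntl(STDIN_FILENO, F_SETFL, oldFlags);     // restore blocking reads
    tcsetattr(STDIN_FILENO, TCSANOW, &oldTerm); // restore canonical mode
}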
I'm coming from C and don't have too much programming knowledge, so bear with me if my idea is nonsense.
Right now, I'm trying to write a simple threaded application with double-buffered console output. I've got a thread which resets the cursor position, draws the buffer and then waits n milliseconds:
gotoxy(0, 0);
std::cout << *draw_buffer;
std::this_thread::sleep_for(std::chrono::milliseconds(33));
This works perfectly well. The buffer is filled independently by another thread and also causes no problems.
Now I want the user to be able to feed the application information. However, my drawing thread always puts the cursor back to the start, so the user input and the application output will interfere. I'm aware there are libraries like curses, but I'd prefer to write this myself, if possible. Unfortunately, I haven't found any solution to this. I guess there is no way to have two console cursors moving independently? How else could I approach this problem?
I think you will need to do two things:
Create a mutex that controls which thread is writing to stdout.
Change the input mode so that when you invoke getchar, it returns immediately (rather than waiting for the user to press enter). You can then wait for the other thread to release the mutex, then move the cursor and echo the character the user pressed at the appropriate part of the screen.
You can change the input mode using tcsetattr, although this is from termios, which is for *nix systems. Since you're using Windows, this may not work for you unless you're using Cygwin.
Maybe check this out: What is the Windows equivalent to the capabilities defined in sys/select.h and termios.h
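For the *nix/Cygwin case, a rough sketch of those two pieces (the function names here are only illustrative):
#include <cstdio>
#include <mutex>
#include <string>
#include <termios.h>
#include <unistd.h>

std::mutex consoleMutex;                 // 1. only one thread writes to stdout at a time

void enableImmediateInput()              // 2. getchar() returns on each keypress
{
    termios t;
    tcgetattr(STDIN_FILENO, &t);
    t.c_lflag &= ~(ICANON | ECHO);       // no line buffering, no automatic echo
    tcsetattr(STDIN_FILENO, TCSANOW, &t);
}

void drawFrame(const std::string& buffer)
{
    std::lock_guard<std::mutex> lock(consoleMutex);
    // gotoxy(0, 0);                     // reposition the cursor as in the question
    fputs(buffer.c_str(), stdout);
    fflush(stdout);
}
The input thread would take the same mutex before moving the cursor and echoing the pressed character at its own position on the screen.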
I have a program that creates pipes between two processes. One process constantly monitors the output of the other, and when specific output is encountered it provides input through the other pipe with the write() function. The problem I am having, though, is that the contents of the pipe don't reach the other process's stdin stream until I close() the pipe. I want this program to loop indefinitely and react every time it encounters the output it is looking for. Is there any way to send the input to the other process without closing the pipe?
I have searched a bit and found that named pipes can be reopened after closing them, but I wanted to find out if there was another option since I have already written the code to use unnamed pipes and I haven't yet learned to use named pipes.
Take a look at using fflush.
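Note that fflush only helps if the writing side goes through a stdio FILE* (for example one obtained with fdopen on the pipe's write end) rather than the raw descriptor; in that case it pushes the buffered bytes through immediately. A rough sketch:
#include <cstdio>

// out is a FILE* obtained e.g. with fdopen(pipefd[1], "w")
void sendCommand(FILE* out, const char* cmd)
{
    fputs(cmd, out);
    fflush(out);   // force the buffered bytes into the pipe now
}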
How are you reading the other end? Are you expecting complete strings? You aren't sending terminating NULs in the snippet you posted. Perhaps sending strlen(string)+1 bytes will fix it. Without seeing the code it's hard to tell.
Use fsync. http://pubs.opengroup.org/onlinepubs/007908799/xsh/fsync.html
From http://www.delorie.com/gnu/docs/glibc/libc_239.html:
Once write returns, the data is enqueued to be written and can be read back right away, but it is not necessarily written out to permanent storage immediately. You can use fsync when you need to be sure your data has been permanently stored before continuing. (It is more efficient for the system to batch up consecutive writes and do them all at once when convenient. Normally they will always be written to disk within a minute or less.) Modern systems provide another function fdatasync which guarantees integrity only for the file data and is therefore faster. You can use the O_FSYNC open mode to make write always store the data to disk before returning.