I am using boost::asio::async_read_until to read a message from a socket, using a newline as a delimiter.
e.g. boost::asio::async_read_until(socket, buffer, "\n", .....)
Everything is hunky-dory and works fine.
Is there an alternative way to use this function, or a similar one, that can detect an endless sequence of characters that never contains a newline character?
For example, a malicious user might fire a continuous stream of zeros at my server:
socat /dev/zero TCP4:localhost:55555
I can't be the first person in history to have come across this problem.
You can use the fourth of the four overloads:
http://www.boost.org/doc/libs/1_62_0/doc/html/boost_asio/reference/async_read_until/overload4.html
In your function object, which would normally be an interface to a state machine, you can count characters, detect illegal data streams, or apply whatever other checks you need.
It's probably worth mentioning that if your application is running in a hostile environment (i.e. internet-facing) then you will probably want to use async_read_some to feed your state machine and will want a timer to catch and kill connections that sit and consume resources.
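For instance, a match condition along these lines enforces a maximum line length. This is a minimal sketch: the class name, the limit, and the policy of completing the read when the limit is hit are assumptions, not part of the library.

    #include <boost/asio.hpp>
    #include <cstddef>
    #include <utility>

    typedef boost::asio::buffers_iterator<
        boost::asio::streambuf::const_buffers_type> iterator;

    // Completes on '\n' as before, but also completes once the line
    // exceeds max_length_, so the handler can see there is no trailing
    // newline and drop the connection.
    class match_line_with_limit
    {
    public:
        explicit match_line_with_limit(std::size_t max_length)
            : max_length_(max_length) {}

        std::pair<iterator, bool> operator()(iterator begin, iterator end) const
        {
            std::size_t n = 0;
            for (iterator i = begin; i != end; ++i)
            {
                if (*i == '\n')
                    return std::make_pair(++i, true); // normal delimiter hit
                if (++n >= max_length_)
                    return std::make_pair(i, true);   // line too long: bail out
            }
            return std::make_pair(end, false);        // need more data
        }

    private:
        std::size_t max_length_;
    };

    // Required so the match-condition overload is selected.
    namespace boost { namespace asio {
        template <> struct is_match_condition<match_line_with_limit>
            : public boost::true_type {};
    } }

You would then call boost::asio::async_read_until(socket, buffer, match_line_with_limit(1024), handler) and, in the handler, treat any completed "line" that does not end in '\n' as a protocol violation.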
I am considering using QTextEdit as a console-like I/O element (for serial data).
The problem with this approach is that (user) input and (communication) output are mixed and they might not be synchronous.
To detect new user input, it might be possible to store and compare plainText on certain input events, e.g. when Enter/Return is pressed.
Another approach might be to use the QTextEdit as view only for separately managed input and output buffers. This could also simplify the problem of potentially asynchronous data (device sends characters while user is typing, very unlikely in my case).
However, even merging the two "streams" by per-character timestamps holds potential for conflicts.
Is there a (simple) solution or should I simply use separate and completely independent input/output areas?
Separate I/O areas are the simplest way to proceed if your UI is command-driven and the input is line-oriented.
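A minimal sketch of that layout, assuming Qt widgets; the names and the commented-out serial write are illustrative:

    #include <QApplication>
    #include <QLineEdit>
    #include <QPlainTextEdit>
    #include <QVBoxLayout>
    #include <QWidget>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        QWidget window;
        QVBoxLayout* layout = new QVBoxLayout(&window);

        QPlainTextEdit* output = new QPlainTextEdit;
        output->setReadOnly(true);        // device output only, never edited

        QLineEdit* input = new QLineEdit; // user commands, one line at a time

        layout->addWidget(output);
        layout->addWidget(input);

        // On Enter, hand the line to the device and clear the field.
        QObject::connect(input, &QLineEdit::returnPressed, [=]() {
            output->appendPlainText("> " + input->text());
            // writeToSerialPort(input->text()); // hypothetical device write
            input->clear();
        });

        window.show();
        return app.exec();
    }

Device output is appended to the read-only view whenever it arrives, so it can never collide with what the user is typing.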
Alternatively, the remote device can provide the echo, with no local echo. The remote device then echoes the characters back when it makes sense, maintaining a coherent display.
You can also display a local line editing buffer to provide user feedback in case the remote echo was delayed or unavailable. That buffer would be only for feedback and have no impact on other behavior of the terminal; all keystrokes would be immediately sent to the remote device.
I am writing a Linux command line application that ultimately leads to data acquisition from a piece of hardware. The nature of the data acquisition is that it will feed data to the program consistently at some defined data rate. Once the user enters RxData (the receive loop), we do not want to stop unless we get a command from the terminal telling it to stop. The problem I foresee is that getchar() will hang the loop on every iteration of the while loop, because the program will expect the user to enter input. Am I wrong about this behavior?
On a side note, I know that when working with embedded devices you can simply check a register to see whether the buffer has grown, and use that to decide whether to read from the buffer. I do not have that luxury in a Linux application (or do I?). Is there some function (let's call it getCharAvailable) I can run to check whether data has been input, and THEN signal my program to stop acquiring data?
I can't simply use SIGINT because I need to signal to the hardware to stop data acquisition as well as add a header to the recorded data. There needs to be a signal to stop acquisition.
In Linux (or any other Unix flavour), you can use select to check whether data is available on two (or more) file descriptors, sockets, or anything else that can be read. (That is the reason this system call exists...)
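A minimal sketch, polling standard input with a zero timeout on each pass of the acquisition loop. Note that with the terminal in its default canonical mode the data only becomes readable after the user presses Enter; the 'q' stop command and acquire_sample() are assumptions of this sketch.

    #include <cstdio>
    #include <sys/select.h>
    #include <unistd.h>

    int main()
    {
        for (;;)
        {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(STDIN_FILENO, &readfds);

            struct timeval tv = {0, 0};   // zero timeout: poll, don't block

            if (select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv) > 0 &&
                FD_ISSET(STDIN_FILENO, &readfds))
            {
                int c = getchar();        // input is pending, so this won't hang
                if (c == 'q')
                    break;                // stop the hardware, write the header...
            }

            // acquire_sample();          // hypothetical data-acquisition step
        }
        return 0;
    }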
Use the ncurses library and call getch() in no-delay mode.
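A minimal sketch of that, assuming a 'q' stop key (link with -lncurses):

    #include <curses.h>

    int main()
    {
        initscr();
        cbreak();
        noecho();
        nodelay(stdscr, TRUE);   // getch() now returns ERR instead of blocking

        for (;;)
        {
            int c = getch();
            if (c == 'q')        // stop command from the terminal
                break;
            // acquire_sample(); // hypothetical data-acquisition step
        }

        endwin();
        return 0;
    }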
I have a C++ program that takes input from the user on std::cin. At some points it needs to call a function that opens a GUI window with which the user can interact. While this window is open, my application is blocked. I noticed that if the user types anything into my application's window while the other window is open, then nothing happens immediately, but when control returns to my application those keystrokes are all acted upon at once. This is not desirable. I would like for all keystrokes entered while the application is blocked to be ignored; alternatively, a way to discard them all upon the application regaining control, but retaining the capability to react to keystrokes that occur after that.
There are various questions on Stack Overflow that explain how to clear a line of input, but as far as I can tell they tend to assume things like "the unwanted input only lasts until the next newline character". In this case this might not be so, because the user could press enter several times while the application is blocked. I have tried a variety of methods (getline(), get(), readsome(), ...) but they generally seem not to detect when cin is temporarily exhausted. Rather, they wait for the user to continue supplying content for cin. For example, if I use cin.ignore(n), then not only is everything typed while the GUI window was open ignored, but the program keeps waiting afterwards while the user types content until a total of n characters have been typed. That's not what I want - I want to ignore characters based on where in time they occurred, not where in the input stream they occur.
What is the idiom for "exhaust everything that's in cin right now, but then stop looking for more stuff"? I don't know what to search for to solve this.
I saw this question, which might be similar and has an answer, but the answer asks for the use of <termios.h>, which isn't available on Windows.
There is no portable way to achieve what you are trying to do. You basically need to set the input stream to non-blocking state and keep reading as long as there are any characters.
get() and getline() will just block until there is enough input to satisfy the request. readsome() only deals with the stream's internal buffer and is only useful for non-blockingly extracting what was already read into that buffer.
On POSIX systems you'd just set the O_NONBLOCK flag with fcntl() and keep read()ing from file descriptor 0 until the read returns a value <= 0 (if it is less than 0 there was an error; otherwise there is no input). Since the OS normally buffers input on a console, you'd also need to set the stream to non-canonical mode (using tcsetattr()). Once you are done you'd probably restore the original settings.
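A sketch of that, with an illustrative helper name; it assumes file descriptor 0 is a terminal:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // Discard everything pending on stdin right now, then restore the
    // original terminal and descriptor settings.
    void drain_stdin()
    {
        termios saved;
        tcgetattr(STDIN_FILENO, &saved);

        termios raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);  // non-canonical: no line buffering
        raw.c_cc[VMIN] = 0;
        raw.c_cc[VTIME] = 0;
        tcsetattr(STDIN_FILENO, TCSANOW, &raw);

        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

        char buf[256];
        while (read(STDIN_FILENO, buf, sizeof buf) > 0)
            ;                             // throw away what was already typed

        fcntl(STDIN_FILENO, F_SETFL, flags);      // restore blocking reads
        tcsetattr(STDIN_FILENO, TCSANOW, &saved); // restore canonical mode
    }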
How to do something similar on non-POSIX systems, I don't know.
I have a program that creates pipes between two processes. One process constantly monitors the output of the other and when specific output is encountered it gives input through the other pipe with the write() function. The problem I am having, though is that the contents of the pipe don't go through to the other process's stdin stream until I close() the pipe. I want this program to infinitely loop and react every time it encounters the output it is looking for. Is there any way to send the input to the other process without closing the pipe?
I have searched a bit and found that named pipes can be reopened after closing them, but I wanted to find out if there was another option since I have already written the code to use unnamed pipes and I haven't yet learned to use named pipes.
Take a look at using fflush(). (It applies when you write to the pipe through a stdio FILE*; a raw write() on the descriptor is not buffered this way.)
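A sketch of where fflush() fits, assuming the write end of the pipe (from pipe()) has been wrapped in a stdio stream; the messages are illustrative:

    #include <stdio.h>
    #include <unistd.h>

    // Without the fflush(), each line sits in the stdio buffer until
    // fclose(), which matches the symptom described in the question.
    void talk(int pipe_write_fd)
    {
        FILE* out = fdopen(pipe_write_fd, "w");
        for (int i = 0; i < 3; ++i)
        {
            fprintf(out, "command %d\n", i);
            fflush(out);   // the reader sees this line immediately
            sleep(1);      // simulate waiting for the next trigger
        }
        fclose(out);       // only when the conversation is really over
    }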
How are you reading the other end? Are you expecting complete strings? You aren't sending terminating NULs in the snippet you posted. Perhaps sending strlen(string)+1 bytes will fix it. Without seeing the code it's hard to tell.
Use fsync. http://pubs.opengroup.org/onlinepubs/007908799/xsh/fsync.html
From http://www.delorie.com/gnu/docs/glibc/libc_239.html:
Once write returns, the data is enqueued to be written and can be read back right away, but it is not necessarily written out to permanent storage immediately. You can use fsync when you need to be sure your data has been permanently stored before continuing. (It is more efficient for the system to batch up consecutive writes and do them all at once when convenient. Normally they will always be written to disk within a minute or less.) Modern systems provide another function fdatasync which guarantees integrity only for the file data and is therefore faster. You can use the O_FSYNC open mode to make write always store the data to disk before returning.
I have two applications running on my machine. One is supposed to hand in the work and the other is supposed to do the work. How can I make sure that the first application/process is in a wait state? I can verify via the resources it's consuming, but that does not guarantee it. What tools should I use?
Your two applications should communicate. There are a lot of ways to do that:
Send messages through sockets. This way the two processes can run on different machines if you use normal network sockets instead of local ones.
If you are using C you can use semaphores with semget/semop/semctl (see the sketch after this list). There should be interfaces for that in other languages.
Opening a named pipe blocks until both a reader and a writer are present. You can use that for synchronisation.
Signals are also good for this. In C you send them with kill() and handle them with sigaction().
DBUS can also be used and has bindings for various languages.
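As an illustration of the semaphore option above, a minimal sketch of the waiting side; the key 0x1234 is an arbitrary value both processes agree on:

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>
    #include <sys/types.h>

    int main()
    {
        int sem = semget(0x1234, 1, IPC_CREAT | 0666);
        if (sem < 0) { perror("semget"); return 1; }

        struct sembuf wait_op = {0, -1, 0};  // P(): block until value > 0

        printf("waiting for work...\n");
        semop(sem, &wait_op, 1);             // sleeps here until the submitter
                                             // runs the matching V() below
        printf("got work, processing\n");
        return 0;
    }

    // The submitting process wakes the worker with:
    //     struct sembuf post_op = {0, +1, 0};
    //     semop(sem, &post_op, 1);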
Update: If you can't modify the processing application, then it is harder. You have to rely on some signs that indicate progress. (I am assuming your processing application reads a file, does some processing, then writes the result to an output file.) Do you know the final size the result should be? If so, you need to check the size repeatedly (or whenever it changes).
If you don't know the size but you know how the processing works, you may be able to use that. For example, the processing is done when the output file is closed. You can use strace to see all the system calls, including the close. You can interpose your own close() function via the LD_PRELOAD environment variable (on Windows you would have to replace DLLs). This way you can, in effect, modify the processing program without recompiling it or even having access to its source.
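A sketch of that interposition trick; the library name, build line, and the message printed are illustrative:

    // notify.cpp - build: g++ -shared -fPIC -o libnotify.so notify.cpp -ldl
    // run:                LD_PRELOAD=./libnotify.so ./processing_app
    #include <dlfcn.h>
    #include <stdio.h>

    extern "C" int close(int fd)
    {
        typedef int (*close_fn)(int);
        static close_fn real_close = (close_fn)dlsym(RTLD_NEXT, "close");

        fprintf(stderr, "close(%d) called\n", fd);  // report progress here
        return real_close(fd);                      // then do the real close
    }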
You can use named pipes: the first app will read from the pipe, but it will be empty, so the app will keep waiting (blocked). The second app will write into it when it wants the first one to continue.
Nothing can guarantee that your application is in a waiting state. You have to pass it some work and get back a response. This may or may not be transactional: the application can confirm that it got the message before it starts to process it, or after it was processed (successfully or not). If it is not waiting, passing it a piece of work should fail, whether when trying to write to a TCP/IP socket or by some other means, or because a timeout occurs. This depends on the implementation, what kind of transport you are using, and other requirements.
There is actually a way of figuring out whether a process (thread) is in a blocking state, waiting for data on a socket (or another source), but it requires the client to be on the same computer with the access privileges needed to inspect it, and that makes no sense for anything other than debugging, which you can do with any debugger anyway.
Overall, the idea of making sure the application is waiting for data before trying to pass it that data smells bad. Not to mention the race condition: what if the check succeeded, but when you actually tried to send the data the application was no longer waiting (even if only microseconds later)?