Use TIdCmdTCPServer for binary data? - c++

I have a TCP server in my Windows application based on TIdCmdTCPServer and it has been performing great. Now I have a device that does not send text strings but binary data. I really don't want to rewrite my working code to use TIdTCPServer.
Is there any way I can use my current code and grab the discarded data somewhere, i.e. can I have access to the raw data received without interfering with other connections that use text?
I tried the OnBeforeCommandHandler event, but it already seems to assume the data is a string (i.e. cuts off at the first zero byte).

TIdCmdTCPServer does not stop reading on nul bytes, as you claim. By default, it reads each command by calling TIdIOHandler.ReadLn(), which reads until a (CR+)LF is received. However, TIdIOHandler.DefStringEncoding is set to US-ASCII by default, so that could be causing data loss when binary data is read as a string.
That being said, TIdCmdTCPServer is designed primarily for textual commands. By default, it cannot receive binary commands or binary parameters. However, if the binary data follows a textual command, your TIdCommandHandler.OnCommand event handlers can read the binary data after the command has been received, by simply reading from ASender.Context.Connection.IOHandler as needed.
Otherwise, if that does not suit your needs (because the binary commands are not in a format that would normally trigger an OnCommand event), you will have to derive a new class from TIdCmdTCPServer and have it either:
- override the virtual ReadCommandLine() method to read a binary command and its parameters from the Connection and return that data in a string format of your choosing (you can use Indy's IndyTextEncoding_8bit encoding, or the BytesToStringRaw() function, to help you with that, or use whatever string format you want), and then define a CommandHandler to match that stringified command; or
- override the virtual DoExecute() method, which gives you full control over reading the Connection so you can handle commands however you want. To trigger OnCommand events, call the server's CommandHandlers.HandleCommand() method, passing it string values of your choosing.
Personally, I would not recommend mixing textual and non-textual clients on the same server. They are clearly using different protocols, so you should be using different servers on different ports to handle them separately.

Related

Is it possible to wrap existing TCP/OpenSSL session with `iostream`?

I use custom code to create an SSL connection over the native Berkeley sockets interface. I need to wrap the resulting socket with an iostream so I can use existing C++ algorithms on the socket data.
Is there any easy way to do it without need to implement stream and streambuf from scratch?
I looked into boost::iostreams and boost::asio.
I didn't find any way to wrap an existing OpenSSL session with boost::asio. Or maybe someone knows how to do that?
After boost::asio I concentrated my research on boost::iostreams.
boost::iostreams looks like a good idea; however, its problem is that it uses read buffering. If we need to read just 1 byte from the SSL session, it asks the TCP device to read 4 kilobytes, which results in a timeout. On the other hand, when I set the buffer size to 0, boost::iostreams starts to call the write method for each byte, so when I try to write 10 bytes to the stream, it calls SSL_write 10 times. The TCP device itself cannot use write buffering, because there is no way to forward the flush method to the device, so the application-level protocol may believe the data has been sent to the other peer while it actually remains in the output buffer.
So, we need unbuffered reads and buffered, flushable writes; is that possible with boost::iostreams?
I found solution myself.
First of all, the device must be marked as flushable. Because there is no ready-made template for such a device, you have to inherit from device<dual_use, Ch> and override its category with multiple inheritance:
struct category : device<dual_use, Ch>::category, flushable_tag
Now when you call flush on the stream, it forwards the call to your device.
The next step is to disable the stream's own buffering (i.e., call open with the 2nd and 3rd parameters set to 0).
In this configuration Boost writes each byte of data to the device separately. However, you can implement buffering at the device level and flush that buffer when flush is called.
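The buffering idea can be illustrated without pulling in Boost itself; FakeTransport and BufferedDevice below are hypothetical stand-ins, and the device mirrors the behavior described above: with stream buffering disabled, each byte arrives at the device separately, but only flush() pushes the accumulated buffer to the transport.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical stand-in for an SSL/TCP transport; it counts low-level
// writes so the effect of device-level buffering is visible.
struct FakeTransport {
    std::string sent;
    int write_calls = 0;
    void write(const char* data, std::size_t n) {
        sent.append(data, n);
        ++write_calls;
    }
};

// Device-level write buffering: write() only appends to an internal
// buffer; flush() forwards the whole buffer to the transport at once.
// In the real boost::iostreams version, flush() reaches the device via
// the flushable_tag category described above.
class BufferedDevice {
public:
    explicit BufferedDevice(FakeTransport& t) : transport_(t) {}

    std::size_t write(const char* s, std::size_t n) {
        buffer_.insert(buffer_.end(), s, s + n);
        return n;  // accepted into the buffer, not yet on the wire
    }

    bool flush() {
        if (!buffer_.empty()) {
            transport_.write(buffer_.data(), buffer_.size());
            buffer_.clear();
        }
        return true;
    }

private:
    FakeTransport& transport_;
    std::vector<char> buffer_;
};
```

Even if the stream hands the device ten single-byte writes, the transport sees one write of ten bytes after flush(), which is exactly the "buffered, flushable write" behavior the question asks for.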

QTextEdit for input and output

I am considering using QTextEdit as a console-like I/O element (for serial data).
The problem with this approach is that (user) input and (communication) output are mixed and they might not be synchronous.
To detect new user input, it might be possible to store and compare plainText on certain input events, e.g. when Enter/Return is pressed.
Another approach might be to use the QTextEdit as view only for separately managed input and output buffers. This could also simplify the problem of potentially asynchronous data (device sends characters while user is typing, very unlikely in my case).
However, even merging the two "streams" by per-character timestamps holds potential for conflict.
Is there a (simple) solution or should I simply use separate and completely independent input/output areas?
Separate I/O areas are the simplest way to proceed if your UI is command-driven and the input is line-oriented.
Alternatively, the remote device can provide the echo, with no local echo: it echoes the characters back when it makes sense, to maintain a coherent display.
You can also display a local line-editing buffer to provide user feedback in case the remote echo is delayed or unavailable. That buffer would be for feedback only and have no impact on the terminal's other behavior; all keystrokes would still be sent immediately to the remote device.

Reading a Linux device with fstream

I'm attempting to get feedback from some hardware that is used over USBTMC and SCPI.
I can read and write to the device using /dev/usbtmc0 in a C++ [io]fstream, alternating reads and writes to send and receive messages. Most responses are terminated by a single newline, so it's easy to tell when the end of a response has been received. The simplified code I'm using for that is:
fstream usb;
usb.open("/dev/usbtmc0", fstream::in);
if (usb.good())
{
    string output;
    getline(usb, output);
    usb.close();
    // Do things with output
}
// Additional cleanup code...
There is, however, one thing that is escaping me, and it's defined in the SCPI/IEEE specification as "*LRN?". When sent, the connected device will send back arbitrary data (actual wording from the specification) that can be used to later reprogram the device should it get into a weird state.
The issue with the response to this *LRN? command is that it contains one or more embedded newlines. The entire message is properly terminated with a final newline, but the embedded newlines make the response really tricky to work with. Some hardware prefixes the payload with a length, but some does not.
When reading data from the hardware, the Linux usbtmc kernel driver has a built-in 5-second timeout, so a read call that tries to read past what's available will block until that timeout expires.
Using fstream::eof doesn't seem to return anything useful; the device acts much like a socket. Is there a way I can read all the data on the device without knowing its length or termination, while avoiding the kernel timeout?
The problem with using fstream for this is that fstream has internal buffering; there is no 1:1 correspondence between the driver's file_operations->read calls and fstream operations.
For interacting with device drivers, you really need to use the low-level open, read, write functions from unistd.h and fcntl.h.

About Google's protobuf

I know it can be used to send/receive structured objects to/from a file, but can it be used to send/receive sequences of structured objects over a socket?
http://code.google.com/p/protobuf/
Protocol Buffers is a structured data serialization (and de-serialization) framework. It is only concerned with encoding a selection of pre-defined data types into a data stream. What you do with that stream is up to you. To quote the wiki:
If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer.
So yes, you could use it to send/receive multiple objects via a socket but you have to do some extra work to differentiate each object stream.
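The length-prefix framing the quote describes can be sketched without protobuf itself; the plain strings below stand in for serialized payloads (with real protobuf you would frame msg.SerializeAsString() the same way, or use the library's CodedOutputStream varint helpers).

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append one framed message: a 4-byte length header followed by the
// payload. Writing the length first is what lets the reader find the
// message boundaries in a non-self-delimiting stream.
void write_framed(std::string& stream, const std::string& msg) {
    uint32_t len = static_cast<uint32_t>(msg.size());
    char hdr[4];
    std::memcpy(hdr, &len, 4);  // host byte order; a real protocol
                                // should fix the endianness explicitly
    stream.append(hdr, 4);
    stream.append(msg);
}

// Split a framed stream back into individual messages. Each message
// body would then be handed to the parser (e.g. ParseFromString()).
std::vector<std::string> read_framed(const std::string& stream) {
    std::vector<std::string> msgs;
    std::size_t pos = 0;
    while (pos + 4 <= stream.size()) {
        uint32_t len;
        std::memcpy(&len, stream.data() + pos, 4);
        pos += 4;
        if (pos + len > stream.size())
            break;  // truncated frame; wait for more data from the socket
        msgs.emplace_back(stream, pos, len);
        pos += len;
    }
    return msgs;
}
```

The same framing works whether the bytes travel through a file or a socket; the reader just accumulates bytes until a complete frame is available before parsing.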
I'm not familiar with protobuf, but the documentation says you can create a FileInputStream (which can then be used to create a CodedInputStream) using a file descriptor. If you're on a system that supports BSD sockets, you should presumably be able to give it a socket file descriptor rather than an ordinary one.
Protocol Buffers does not handle any surrounding network/file I/O operations. You might want to consider using Thrift, which includes socket communication libraries and server libraries with the serialization/deserialization.

Can you explain in more detail what the difference is between PIPE_READMODE_MESSAGE and PIPE_READMODE_BYTE?

Though I've gone through the documentation here, it still doesn't make sense to me:
Data is read from the pipe as a stream of messages. This mode can be only used if PIPE_TYPE_MESSAGE is also specified.
In BYTE mode, you are the one who needs to figure out how to separate the data so it can be decoded at the receiving end. In MESSAGE mode, the API does this for you: when you read a message on the other side, you receive the whole block of data (the message).
In both cases, you will still need some header data wrapping your message/data to identify what it is, if you are mixing data types sent through the pipe.
EDIT: The documentation points to a very clear example of Client/Server using this API and the MESSAGE mode between both.
http://msdn.microsoft.com/en-us/library/aa365592%28v=VS.85%29.aspx
http://msdn.microsoft.com/en-us/library/aa365588%28v=VS.85%29.aspx
The difference between the PIPE_TYPE_BYTE and PIPE_TYPE_MESSAGE type modes is explained at http://msdn.microsoft.com/en-us/library/aa365605.aspx:
Type Mode
The type mode of a pipe determines how data is written to a named pipe. Data can be transmitted through a named pipe as either a stream of bytes or as a stream of messages. The pipe server specifies the pipe type when calling CreateNamedPipe to create an instance of a named pipe. The type modes must be the same for all instances of a pipe.
To create a byte-type pipe, specify PIPE_TYPE_BYTE or use the default value. The data is written to the pipe as a stream of bytes, and the system does not differentiate between the bytes written in different write operations.
To create a message-type pipe, specify PIPE_TYPE_MESSAGE. The system treats the bytes written in each write operation to the pipe as a message unit. The system always performs write operations on message-type pipes as if write-through mode were enabled.
If you want to write a data stream over pipes, you should use the PIPE_TYPE_BYTE type mode. Then you can write any data into the pipe buffer with WriteFile and read it on the other side with ReadFile. Exactly how the data is transferred is not important to you; the data from several WriteFile operations may be transferred as one data block.
If you use the PIPE_TYPE_MESSAGE type mode, every write operation triggers a data transfer, because each write to the pipe is interpreted as sending a message. There is also a special function, TransactNamedPipe, which allows you to write a message to and read a message from the named pipe in a single network operation.