I use custom code to create an SSL connection over the native Berkeley sockets interface. I need to wrap the resulting socket with an iostream so that existing algorithms written in C++ can work with the socket's data.
Is there an easy way to do this without having to implement a stream and streambuf from scratch?
I looked into boost::iostreams and boost::asio.
I didn't find any way to wrap an existing OpenSSL session with boost::asio. Or maybe someone knows how to do that?
After boost::asio I concentrated my research on boost::iostreams.
boost::iostreams looks like a good idea; however, its problem is that it uses read buffering. So if we need to read just 1 byte from the SSL session, it asks the TCP device to read 4 kilobytes, which results in a timeout. On the other hand, when I set the buffer size to 0, boost::iostreams starts to call the write method for each byte, so when I try to write 10 bytes to the stream, it calls SSL_write 10 times. The TCP device itself cannot use write buffering, because there is no way to forward the flush method to the device, so the application-level protocol may assume the data has been sent to the other peer while it actually remains in the output buffer.
So we need unbuffered reads and buffered, flushable writes; is that possible with boost::iostreams?
I found the solution myself.
First of all, you have to mark the device as flushable. Because there is no ready-made template for such a device, you have to inherit from device<dual_use, Ch> and override its category with multiple inheritance:
struct category : device<dual_use, Ch>::category, flushable_tag
Now when you call flush on the stream, it will forward the call to your device.
The next step is to disable the stream's own buffering (i.e. call open with the 2nd and 3rd parameters equal to 0).
In this configuration boost will write each byte of data to the device separately. However, you can implement buffering at the device level and flush that buffer when flush is called, as in the sketch below.
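Putting it together, here is a minimal sketch of such a device, assuming an existing OpenSSL SSL* session. Error handling is elided and the buffering policy is only illustrative; follow the answer's advice of device<dual_use, Ch> for the base class.

    #include <boost/iostreams/categories.hpp>
    #include <boost/iostreams/concepts.hpp>
    #include <boost/iostreams/stream.hpp>
    #include <openssl/ssl.h>
    #include <vector>

    namespace io = boost::iostreams;

    class ssl_device : public io::device<io::dual_use, char> {
    public:
        // Re-declare the category so iostreams knows the device is flushable.
        struct category : io::device<io::dual_use, char>::category,
                          io::flushable_tag {};

        explicit ssl_device(SSL* ssl) : ssl_(ssl) {}

        // Unbuffered read: ask SSL for at most n bytes and return what we got.
        std::streamsize read(char* s, std::streamsize n) {
            int r = SSL_read(ssl_, s, static_cast<int>(n));
            return r > 0 ? r : -1;   // -1 signals end-of-stream to iostreams
        }

        // Buffered write: just append to the device-level buffer.
        std::streamsize write(const char* s, std::streamsize n) {
            buffer_.insert(buffer_.end(), s, s + n);
            return n;
        }

        // Called when flush() is invoked on the stream.
        bool flush() {
            if (!buffer_.empty()) {
                if (SSL_write(ssl_, buffer_.data(),
                              static_cast<int>(buffer_.size())) <= 0)
                    return false;
                buffer_.clear();
            }
            return true;
        }

    private:
        SSL* ssl_;
        std::vector<char> buffer_;
    };

    // Usage: disable the stream's own buffering so reads stay unbuffered.
    // io::stream<ssl_device> stream;
    // stream.open(ssl_device(ssl), 0, 0);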
Piggybacking on the topic described here (Using libcurl multi interface for consecutive requests for same "easy" handle), my organization has wrapper classes around select and poll to handle input/output on file descriptors. To stay aligned with our wrapper classes, I would like to get the file descriptor of each easy handle. I'm using the multi interface to work with multiple easy handles in a real-time application.
I understand I can use curl_multi_fdset to get the FD sets. I could loop through an FD set to get the FD numbers. However, I won't know which easy handle is associated with each FD. Additionally, if an FD is opened above the FD_SETSIZE limit, I won't get that FD at all.
Another option I'm considering is to use the curl_easy_getinfo and use the ACTIVESOCKET or LASTSOCKET options. My libcurl is old, so I couldn't use the ACTIVESOCKET for a test. However, a little test I performed using the curl_multi_perform, followed by a curl_easy_getinfo(LASTSOCKET) gave me a result of -1 -- meaning no file descriptor. Easy handle requests were performed on websites such as google.com. I'll try to update my libcurl to a newer version to see if I get a different result with the ACTIVESOCKET.
Is there another way to get the file descriptor from the easy handle?
I would propose you switch over and use the multi_socket API instead, with curl_multi_socket_action being the primary driver.
This API calls back to tell you about each and every socket to wait for; you then wait for those sockets and tell libcurl when something happens on one of them. It lets you incorporate libcurl into your own I/O loop/socket wrapper system pretty easily.
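A hedged sketch of what that glue can look like. The Poller type and its watch/unwatch methods are stand-ins for your own select/poll wrapper, not part of libcurl; note that the socket callback also receives the easy handle that owns the socket, which addresses the original question.

    #include <curl/curl.h>
    #include <cstdio>

    // Stand-in for your organization's select/poll wrapper (an assumption).
    struct Poller {
        void watch(curl_socket_t s, bool rd, bool wr) {
            std::printf("watch fd=%d read=%d write=%d\n", (int)s, rd, wr);
        }
        void unwatch(curl_socket_t s) { std::printf("unwatch fd=%d\n", (int)s); }
    };

    // libcurl calls this for every socket it wants watched or un-watched.
    static int socket_cb(CURL* easy, curl_socket_t s, int what, void* userp, void*)
    {
        (void)easy;  // per-request state can be fetched via CURLINFO_PRIVATE if needed
        Poller* poller = static_cast<Poller*>(userp);
        switch (what) {
        case CURL_POLL_IN:     poller->watch(s, true,  false); break;
        case CURL_POLL_OUT:    poller->watch(s, false, true);  break;
        case CURL_POLL_INOUT:  poller->watch(s, true,  true);  break;
        case CURL_POLL_REMOVE: poller->unwatch(s);             break;
        }
        return 0;
    }

    void setup(CURLM* multi, Poller* poller)
    {
        curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
        curl_multi_setopt(multi, CURLMOPT_SOCKETDATA, poller);
    }

    // Call this from your own I/O loop when the poller reports activity on fd.
    void on_socket_ready(CURLM* multi, curl_socket_t fd, bool readable, bool writable)
    {
        int mask = (readable ? CURL_CSELECT_IN : 0) | (writable ? CURL_CSELECT_OUT : 0);
        int still_running = 0;
        curl_multi_socket_action(multi, fd, mask, &still_running);
    }

In a real application you would also set CURLMOPT_TIMERFUNCTION so libcurl can tell you when to call curl_multi_socket_action with CURL_SOCKET_TIMEOUT on timeouts.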
I am using Libevent library 2.0 for socket communication.
To add data to an evbuffer I am using evbuffer_add. The bufferevent stores the data in its internal buffer and transfers it over the socket using some predefined timeout and watermark settings.
My question is: is there any way to control the data transfer? Can we transfer the data explicitly at any time, after any arbitrary number of bytes has been written?
The idea behind this function is fire-and-forget. However, you can add a callback so that when the send finally happens, you can do some things:
evbuffer_add_cb
This doesn't give you much control, but you can use it for some behaviors, such as reacting when data is appended to or drained from the buffer.
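A minimal sketch of attaching such a callback to a bufferevent's output buffer; "bev" is assumed to be an already-created bufferevent, and the printf is just a placeholder for whatever you want to do when bytes are actually drained to the socket.

    #include <event2/buffer.h>
    #include <event2/bufferevent.h>
    #include <cstdio>

    // Invoked whenever the output evbuffer changes; n_deleted > 0 means bytes
    // were drained from the buffer, i.e. handed to the socket.
    static void output_cb(struct evbuffer*, const struct evbuffer_cb_info* info, void*)
    {
        if (info->n_deleted > 0)
            std::printf("%zu bytes were written to the socket\n", info->n_deleted);
    }

    void watch_output(struct bufferevent* bev)
    {
        evbuffer_add_cb(bufferevent_get_output(bev), output_cb, nullptr);
    }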
Is there a way to check the number of bytes available from a USB device (printer in our case)?
We're using CreateFile, ReadFile, and WriteFile for I/O with our USB device, which works. But we can't figure out how much data is available without actually doing a read. We can't use GetFileSize, as even the documentation says you can't use it for a:
"nonseeking device such as a pipe or a communications device"...
So that doesn't work. Any suggestions? Are we doing our USB I/O incorrectly? Is there a better way to Read/Write to USB?
You first need to open the port in asynchronous mode. To do that, pass the flag FILE_FLAG_OVERLAPPED to CreateFile. Then, when you call ReadFile, pass in a pointer to an OVERLAPPED structure. This starts an asynchronous read and returns immediately instead of blocking; ReadFile fails and GetLastError() reports ERROR_IO_PENDING (or, if the OS already has the data buffered, you might get lucky and get a successful synchronous read -- be prepared to handle that case).
Once the asynchronous I/O has started, you can then periodically check if it has completed with GetOverlappedResult.
This allows you to answer the question "are X bytes of data available?" for a particular value of X (the one passed to ReadFile). 95% of the time, that's good enough, since you're looking for data in a particular format. The other 5% of the time, you'll need to add another layer of abstraction on top, where you keep doing asynchronous reads and store the data in a buffer.
Note that asynchronous I/O is very tricky to get right, and there's a lot of edge cases to consider. Carefully read all of the documentation for these functions to make sure your code is correct.
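A hedged sketch of the pattern described above; the device path is a placeholder, error handling is minimal, and the Sleep stands in for whatever other work your program does between checks.

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // "\\\\.\\YourUsbDevice" is a placeholder path, not a real device name.
        HANDLE h = CreateFileA("\\\\.\\YourUsbDevice", GENERIC_READ | GENERIC_WRITE,
                               0, nullptr, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
        if (h == INVALID_HANDLE_VALUE) return 1;

        char buf[64];
        OVERLAPPED ov = {};
        ov.hEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);

        DWORD got = 0;
        BOOL ok = ReadFile(h, buf, sizeof(buf), nullptr, &ov);  // starts the async read
        if (!ok && GetLastError() == ERROR_IO_PENDING) {
            // Periodically ask "has the read completed yet?" without blocking.
            while (!GetOverlappedResult(h, &ov, &got, FALSE /* don't wait */)) {
                if (GetLastError() != ERROR_IO_INCOMPLETE) break;  // real error
                Sleep(10);  // do other work here instead of sleeping
            }
        } else if (ok) {
            GetOverlappedResult(h, &ov, &got, TRUE);  // completed synchronously
        }
        std::printf("read %lu bytes\n", got);

        CloseHandle(ov.hEvent);
        CloseHandle(h);
        return 0;
    }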
Can you use C#? If so, you can access the USB port using the System.IO.SerialPort class and set up a DataReceived event handler for incoming data. There is a BytesToRead property that tells you how much data is waiting to be read.
All of this must be available in native code; if I can find it, I'll edit this.
EDIT: the best I can find for native code is ReadPrinter. I don't see how to check whether data is there; it will block if there isn't any.
I know it can be used to send/receive structured objects to/from a file,
but can it be used to send/receive sequences of structured objects over a socket?
http://code.google.com/p/protobuf/
Protocol Buffers is a structured data serialization (and de-serialization) framework. It is only concerned with encoding a selection of pre-defined data types into a data stream. What you do with that stream is up to you. To quote the wiki:
"If you want to write multiple messages to a single file or stream, it is up to you to keep track of where one message ends and the next begins. The Protocol Buffer wire format is not self-delimiting, so protocol buffer parsers cannot determine where a message ends on their own. The easiest way to solve this problem is to write the size of each message before you write the message itself. When you read the messages back in, you read the size, then read the bytes into a separate buffer, then parse from that buffer."
So yes, you could use it to send/receive multiple objects via a socket, but you have to do some extra work to delimit each object in the stream.
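A minimal sketch of the length-prefix approach over a connected BSD socket; MyMessage and my_message.pb.h are placeholders for your own generated message type, and error handling is kept to a minimum.

    #include <arpa/inet.h>    // htonl / ntohl
    #include <sys/socket.h>
    #include <cstdint>
    #include <string>

    #include "my_message.pb.h"  // hypothetical generated protobuf header

    // Write a 4-byte big-endian length, then the serialized message.
    bool send_message(int fd, const MyMessage& msg)
    {
        std::string body;
        if (!msg.SerializeToString(&body)) return false;

        uint32_t len = htonl(static_cast<uint32_t>(body.size()));
        if (send(fd, &len, sizeof(len), 0) != sizeof(len)) return false;
        return send(fd, body.data(), body.size(), 0) ==
               static_cast<ssize_t>(body.size());
    }

    // Read the length prefix, then exactly that many bytes, then parse.
    bool recv_message(int fd, MyMessage* msg)
    {
        uint32_t len = 0;
        if (recv(fd, &len, sizeof(len), MSG_WAITALL) != sizeof(len)) return false;
        std::string body(ntohl(len), '\0');
        if (recv(fd, &body[0], body.size(), MSG_WAITALL) !=
            static_cast<ssize_t>(body.size()))
            return false;
        return msg->ParseFromString(body);
    }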
I'm not familiar with protobuf, but the documentation says you can create a FileInputStream (which can then be used to create a CodedInputStream) using a file descriptor. If you're on a system that supports BSD sockets, you should presumably be able to give it a socket file descriptor rather than an ordinary one.
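For reference, a sketch of that approach in the C++ API; MyMessage and its header are placeholders, and it assumes the peer sends exactly one message and then closes the connection, since the wire format is not self-delimiting.

    #include <google/protobuf/io/zero_copy_stream_impl.h>
    #include "my_message.pb.h"   // hypothetical generated header

    // Wrap a connected socket descriptor and parse a single message from it.
    bool read_one_message(int sockfd, MyMessage* msg)
    {
        google::protobuf::io::FileInputStream input(sockfd);
        return msg->ParseFromZeroCopyStream(&input);
    }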
Protocol Buffers does not handle any surrounding network/file I/O operations. You might want to consider using Thrift, which includes socket communication libraries and server libraries with the serialization/deserialization.
I am writing a C++ server-side application called "quote of the day", using the winsock2 library. I want to send the contents of a file back to the client, including newlines, by using the send function. The way I tried it doesn't work. How would I go about doing this?
Reading the file and writing to the socket are 2 distinct operations. Winsock does not have an API for sending a file directly.
As for reading the file, simply make sure you open it in read binary mode if using fopen, or simply use the CreateFile, and ReadFile Win32 API and it will be binary mode by default.
Usually you will read the file in chunks (for example 10KB at a time) and then send each of those chunks over the socket by using send or WSASend. Once you are done, you can close the socket.
On the receiving side, read whatever's available on the socket until the socket is closed. As you read data into a buffer, write the amount read to a file.
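A minimal sketch of the sending side, assuming an already-connected blocking SOCKET and a 10 KB chunk size:

    #include <winsock2.h>
    #include <cstdio>

    bool send_file(SOCKET sock, const char* path)
    {
        FILE* f = std::fopen(path, "rb");   // binary mode keeps newlines intact
        if (!f) return false;

        char chunk[10 * 1024];
        size_t n;
        while ((n = std::fread(chunk, 1, sizeof(chunk), f)) > 0) {
            size_t sent = 0;
            while (sent < n) {              // send() may write fewer bytes than asked
                int r = send(sock, chunk + sent, static_cast<int>(n - sent), 0);
                if (r == SOCKET_ERROR) { std::fclose(f); return false; }
                sent += static_cast<size_t>(r);
            }
        }
        std::fclose(f);
        return true;
    }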
Hmm... Win32 has TransmitFile, which is similar to "sendfile" on Linux.
If that doesn't suit you, you can still use memory mapping (but don't forget to handle files larger than the available virtual address space). You will probably need to use blocking sockets to avoid returning to the application before all data has been consumed. There is also "overlapped" I/O for doing this asynchronously.
I recommend dropping winsock and instead using something more modern such as Boost.Asio:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/tutorial.html
There is also an example on transmitting a file:
http://www.boost.org/doc/libs/1_37_0/doc/html/boost_asio/examples.html