Sending vector over internet using C++

I want to send a vector from one computer to another over the internet. I'm looking for a peer-to-peer solution in C++. I have made a Winsock2 solution, but it can only send char* with the send and recv functions, which I don't think will work for my project.
Is there a way of using JSON with a P2P solution in C++? That is, make a JSON object of the vector and send it over the internet? Or do you know a better solution?
The vector I want to send through internet to another computer looks like this:
std::vector<AVPacket> data;
AVPacket is a struct from FFmpeg, consisting of 14 data members: https://ffmpeg.org/doxygen/trunk/structAVPacket.html. You don't want to convert this to a char*.

You can actually send anything using the send and recv functions. You just have to pass a pointer to the data, typecast that pointer as a char *, and it will work.
However, you can't send a std::vector as-is. Instead, first send its size (otherwise the receiving end won't know how much data to expect), then send the actual data in the vector, i.e. someVector.data() or &someVector[0].
In your case it will be even more complicated, though, as the structures you want to send contain embedded pointers. You can't send pointers over the Internet; it's barely possible to transfer pointers between two processes on the same system. You need to read about serialization, and perhaps the related subject of marshalling.
In short: you can send any kind of data, it doesn't have to be characters, but for the kind of structures you want to send, you have to convert them into a transferable format through serialization.
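As a sketch of that size-first scheme, the vector's contents can be packed into one contiguous buffer before calling send. The helper name serialize_vector and the 32-bit network-order length prefix are my assumptions, not part of the question:

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>
#include <cstring>
#include <vector>

// Pack a vector<int32_t> as [uint32_t count][count * int32_t payload],
// everything in network byte order.
std::vector<char> serialize_vector(const std::vector<int32_t>& v) {
    std::vector<char> buf(sizeof(uint32_t) + v.size() * sizeof(int32_t));
    uint32_t count = htonl(static_cast<uint32_t>(v.size()));
    std::memcpy(buf.data(), &count, sizeof(count));
    char* p = buf.data() + sizeof(count);
    for (int32_t x : v) {
        uint32_t net = htonl(static_cast<uint32_t>(x));
        std::memcpy(p, &net, sizeof(net));
        p += sizeof(net);
    }
    return buf;
}

// Then, with an already-connected socket `sock` (assumed):
//   std::vector<char> buf = serialize_vector(data);
//   send(sock, buf.data(), buf.size(), 0);
```

The receiver reads the 4-byte count first and then knows exactly how many payload bytes to expect. For the AVPacket case this alone is not enough, as noted above: the buffers behind its pointer members must be serialized separately.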

You cannot simply send a vector as-is. Although the vector's own elements are stored contiguously, the data those elements point to lives elsewhere, so the whole object graph is not one big linear chunk of memory.
In addition, as pointed out above, there are pointers in this struct. They point to data in your local memory; those addresses aren't valid on the recipient's side, so the receiver would access invalid memory when trying to follow them.
I guess that in order to achieve what you want, you have to try a completely different approach. Don't get lost trying to send the pointed-to data or similar; rather, try keeping parallel data on both machines.
E.g. both machines load the same video from a file which both share, then use a unique identifier for that video to reference it on both sides.

Related

Should I use a stream or a container when working with network, binary data and serialization?

I am working on a TCP server using Boost Asio, and I got lost choosing the best data type to work with when dealing with byte buffers.
Currently I am using std::vector<char> for everything. One of the reasons is that most examples for Asio use vectors or arrays. I receive data from the network and put it in a buffer vector. Once a packet is available, it is extracted from the buffer and decrypted/decompressed if needed (both operations may produce a larger amount of data). Then multiple messages are extracted from the payload.
I am not happy with this solution because it involves constantly inserting and removing data from vectors, but it does the job.
Now I need to work on data serialization. There is no easy way to read or write arbitrary data types from a char vector, so I ended up implementing a "buffer" that hides a vector inside and allows writing to it (a wrapper around insert) and reading from it (a wrapper around casting). Then I can write uint16 code; buffer >> code; and also add serialization/deserialization methods to other objects while keeping things simple.
The thing is, every time I think about this I feel like I am using the wrong data type as the container for the binary data. My reasons are:
Streams already do a good job as a potentially endless source of data input or output. While behind the scenes this may still involve inserting and removing data, it probably does a better job than a char vector.
Streams already allow reading and writing basic data types, so I don't have to reinvent the wheel.
There is no need to access a specific position in the data; I usually read or write sequentially.
In this case, are streams the best choice, or is there something I am not seeing? And if so, is stringstream the one I should use?
Any reasons to avoid streams and work only with containers?
PS: I cannot use Boost.Serialization or any other existing solution because I don't have control over the network protocol.
Your approach seems fine. You might consider a deque instead of a vector if you're doing a lot of appending at the end and erasing from the front, but if you use circular-buffer logic while iterating then this doesn't matter either.
You could switch to a stream, but then you're completely at the mercy of the standard library, its annoyances/oddities, and the semantics of its formatted extraction routines; if those are insufficient, you have to extract N bytes and do your own reinterpretation anyway, so you're back to square one but with added copying and indirection.
You say you don't need random access, so that's another reason not to care either way. Personally I like to have random access in case I need to resync, seek ahead, or seek behind, or even just to have better capabilities during debugging without suddenly having to refactor all my buffer code.
I don't think there's any more specific answer to this in the general case.
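For what it's worth, the "buffer that hides a vector" described in the question can stay quite small. Here is a minimal sketch of the reading half; the class name ByteReader and its interface are illustrative, not from the question:

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <type_traits>
#include <vector>

// Sequential reader over a received byte buffer.
class ByteReader {
public:
    explicit ByteReader(const std::vector<char>& data) : data_(data) {}

    // Extract any trivially copyable type. memcpy avoids the unaligned-access
    // problems that a reinterpret_cast into the buffer would risk.
    template <typename T>
    ByteReader& operator>>(T& out) {
        static_assert(std::is_trivially_copyable<T>::value, "POD-like types only");
        if (data_.size() - pos_ < sizeof(T))
            throw std::runtime_error("buffer underrun");
        std::memcpy(&out, data_.data() + pos_, sizeof(T));
        pos_ += sizeof(T);
        return *this;
    }

    std::size_t remaining() const { return data_.size() - pos_; }

private:
    const std::vector<char>& data_;
    std::size_t pos_ = 0;
};
```

Note that this reads values in host byte order; any byte-order conversion the fixed protocol requires would be applied after extraction.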

Parser for TCP buffers

I want to implement a protocol to share the data between server and client.
I don't know the correct one. Keeping performance as the main criterion, can anyone suggest the best protocol for parsing the data?
I have one in mind; I don't know its actual name, but it looks like this:
[Header][Message][Header][Message]
The header contains the length of the message, and the header size is fixed.
I have tried this by performing a lot of concatenation and substring operations, which are costly. Can anyone suggest the best implementation for this?
The question is very broad.
On the topic of avoiding buffer/string concatenation, look at Buffer Sequences, described in Boost Asio's "Scatter-Gather" documentation.
For parsing there are two common solutions:
small messages
Receive the data into a buffer, e.g. 64 KB. Then use pointers into that buffer to parse the header and message. Since the messages are small, there can be many messages in the buffer, and you call the parser again as long as there is data left in the buffer. Note that the last message in the buffer might be truncated; in that case you have to keep the partial message and read more data into the buffer. If the partial message sits near the end of the buffer, copying it to the front might be necessary.
large messages
With large messages it makes sense to first only read the header. Then parse the header to get the message size, allocate an appropriate buffer for the message and then read the whole message into it before parsing it.
Note: In both cases you might want to handle overly large messages by either skipping them or terminating the connection with an error. In the first case a message cannot be larger than the buffer and should be a lot smaller; in the second case you don't want to allocate e.g. a gigabyte to buffer a message if messages are supposed to be around 1 MB.
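For the small-messages case, the parse-as-long-as-there-is-data loop might look like this. The [4-byte length][payload] framing and the function names are assumptions for illustration:

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <functional>
#include <vector>

// Parse as many complete [uint32_t len][payload] frames as the buffer holds.
// Returns the number of bytes consumed; the caller keeps the unconsumed tail
// (a truncated frame) and appends the next recv() after it.
std::size_t parse_frames(const char* buf, std::size_t len,
                         const std::function<void(const char*, std::size_t)>& on_msg) {
    std::size_t pos = 0;
    while (len - pos >= sizeof(uint32_t)) {
        uint32_t net_len;
        std::memcpy(&net_len, buf + pos, sizeof(net_len));
        std::size_t msg_len = ntohl(net_len);
        if (len - pos - sizeof(uint32_t) < msg_len)
            break;  // truncated frame: wait for more data
        on_msg(buf + pos + sizeof(uint32_t), msg_len);
        pos += sizeof(uint32_t) + msg_len;
    }
    return pos;
}
```

After the call, the caller memmoves the unconsumed tail to the front of the buffer before reading more data into it.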
For sending messages it's best to first collect all the output. A std::vector can be sufficient, or a rope of strings. Avoid copying the message into a larger buffer over and over; at most copy it once at the end when you have all the pieces. Using writev() to write a list of buffers instead of copying them all into one can also be a solution.
As for the best protocol... what's best? Simply sending the data in binary format is fastest, but it will break when you have different architectures or versions. Something like Google Protocol Buffers can solve that, at the cost of some speed. It all depends on what your needs are.

Can TCP data overlap in the buffer

If I keep sending data to a receiver, is it possible for the sent data to accumulate in the buffer, so that the next read from the buffer also reads data from another send?
I'm using Qt and readAll() to receive data and parse it. The data has some structure in it, so I can tell whether it is already complete and whether it is valid at all, but I'm worried that other data will overlap with it when I call readAll() and so invalidate this supposed-to-be valid data.
If it can happen, how do I prevent/control it? Or is that something the OS/API worries about instead? I'm partly worried because of how the method is named. lol
TCP is a stream-based connection, not a packet-based connection, so you may not assume that what is sent in one call will also be received in one call. You still need some kind of protocol to packetize your stream.
For sending strings, you could use the nul character as a separator, or you could begin with a header which contains a magic number and a length.
According to http://qt-project.org/doc/qt-4.8/qiodevice.html#readAll this function snarfs all the data and returns it as an array. I don't see how the API raises concerns about overlapping data. The array is returned by value, and it represents the entire stream, so what would it even overlap with? Are you worried that the returned object actually has reference semantics (i.e. that it just holds pointers to storage that is reused in other calls to the same function)?
If send and receive buffers overlap in any system, that's a bug, unless special care is taken that the use is completely serialized (i.e. a buffer is somehow used only for sending or only for receiving, without any mixup).
Why don't you use a fixed-length header followed by a variable-length packet, with the header holding the length of the packet?
This way you can avoid worrying about packet boundaries. For example, instead of just sending a string, send the length of the string followed by the string itself. On the receiving end, always read the length first, then read that many bytes of string.
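That length-then-payload scheme can be sketched with plain POSIX sockets (Qt's QDataStream offers similar facilities; the helper names here are mine, not a standard API):

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Read exactly `len` bytes, looping because a stream may deliver fewer per recv().
static bool read_exact(int fd, char* out, std::size_t len) {
    while (len > 0) {
        ssize_t n = recv(fd, out, len, 0);
        if (n <= 0) return false;  // error or peer closed
        out += n;
        len -= static_cast<std::size_t>(n);
    }
    return true;
}

// Length-prefixed framing: [uint32_t len][bytes].
bool send_string(int fd, const std::string& s) {
    uint32_t net = htonl(static_cast<uint32_t>(s.size()));
    return send(fd, &net, sizeof(net), 0) == static_cast<ssize_t>(sizeof(net)) &&
           send(fd, s.data(), s.size(), 0) == static_cast<ssize_t>(s.size());
}

bool recv_string(int fd, std::string& out) {
    uint32_t net;
    if (!read_exact(fd, reinterpret_cast<char*>(&net), sizeof(net))) return false;
    out.resize(ntohl(net));
    return out.empty() || read_exact(fd, &out[0], out.size());
}
```

Because the receiver always knows exactly how many bytes belong to the current message, data from two sends can never be confused with each other.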

Is it possible to send array over network?

I'm using C++ and wondering if I can just send an entire int array over a network (using basic sockets) without doing anything extra, or whether I have to split the data up and send the elements one at a time.
Yes.
An array is laid out sequentially in memory, so you are free to do this. Simply pass in the address of the first element and the amount of data, and you'll send all of it.
You could definitely send an array in one send; however, you might want to do some additional work. There are issues with interpreting it correctly at the receiving end. For example, if the machines have different architectures, you may want to convert the integers to network byte order (e.g., with htonl).
Another thing to keep in mind is the memory layout. If it is a simple array of integers, it is contiguous in memory, and a single send can capture all the data. If, though (and this is probably obvious), the array contains other data, the layout definitely needs consideration. A simple example: if the array held pointers to other data such as character strings, a send of the array would send the pointers (not the data) and would be meaningless to the receiver.
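A sketch of that byte-order step (helper names are mine): convert each element with htonl before the send and back with ntohl after the recv, so machines of different endianness agree on the values:

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>
#include <vector>

// Host-to-network conversion for a whole array, element by element.
std::vector<uint32_t> to_network(const std::vector<uint32_t>& host) {
    std::vector<uint32_t> net(host.size());
    for (std::size_t i = 0; i < host.size(); ++i) net[i] = htonl(host[i]);
    return net;
}

// The inverse, applied by the receiver.
std::vector<uint32_t> to_host(const std::vector<uint32_t>& net) {
    std::vector<uint32_t> host(net.size());
    for (std::size_t i = 0; i < net.size(); ++i) host[i] = ntohl(net[i]);
    return host;
}

// Then, with an assumed connected socket `sock`:
//   auto net = to_network(values);
//   send(sock, net.data(), net.size() * sizeof(uint32_t), 0);
```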

C++ byte stream

For a networked application, the way we have been transmitting dynamic data is by memcpying a struct into a (void*). This poses some problems, like when this is done to a std::string. Strings can be of dynamic length, so how will the other side know when the string ends? An idea I had was to use something similar to Java's DataOutputStream, where I could just pass whatever variables to it and it could then be put into a (void*). If this can't be done, then it's cool. I just don't really like memcpying a struct. Something about it doesn't seem quite right.
Thanks,
Robbie
Nothing wrong with memcpy on a struct, as long as the struct is filled with fixed-size buffers. Put a dynamic variable in there and you have to serialise it differently.
If you have a struct with std::strings in it, create a stream operator and use it to format a buffer. You can then memcpy that buffer to the data transport. If you have Boost, use Boost.Serialization, which does all this for you (that link also has links to alternative serialization libraries).
Note: the usual way to pass a variable-size buffer is to begin by sending the length, then that many bytes of data. Occasionally you see data transferred until a delimiter is received, with fields within that data themselves delimited by another character, e.g. a comma.
I see two parts of this question:
- serialization of data over a network
- how to pass structures into a network stack
To serialize data over a network, you'll need a protocol. It doesn't have to be difficult; for ASCII even a CR/LF as packet end may do. If you use a framework (like MFC), it may provide serialization functions for you; in that case you need to worry about how to send this in packets. A packetization which often works well for me is:
<length><data_type>[data....][checksum]
In this case the checksum is optional, and zero-length data is also possible, for instance if the signal is carried in the data_type itself (e.g. Ack for acknowledgement).
If you're working with memcpy on structures, you'll need to consider that memcpy only makes a shallow copy. A pointer is worthless once transmitted over a network; instead you should transmit the data it points to (i.e. the contents of your string example).
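To make the shallow-vs-deep point concrete, here is a sketch of deep-serializing a hypothetical struct containing a std::string: the string's length and bytes go on the wire instead of its internal pointer. The struct layout and helper names are assumptions for illustration:

```cpp
#include <arpa/inet.h>  // htonl, ntohl
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical message: one fixed-size field plus one dynamic string.
struct Message {
    int32_t id;
    std::string text;
};

// Deep serialization: [id][text length][text bytes], integers in network order.
// memcpying the Message itself would copy a pointer that is useless remotely.
std::vector<char> serialize(const Message& m) {
    std::vector<char> buf;
    uint32_t id_net  = htonl(static_cast<uint32_t>(m.id));
    uint32_t len_net = htonl(static_cast<uint32_t>(m.text.size()));
    buf.insert(buf.end(), reinterpret_cast<char*>(&id_net),
               reinterpret_cast<char*>(&id_net) + sizeof(id_net));
    buf.insert(buf.end(), reinterpret_cast<char*>(&len_net),
               reinterpret_cast<char*>(&len_net) + sizeof(len_net));
    buf.insert(buf.end(), m.text.begin(), m.text.end());
    return buf;
}

Message deserialize(const std::vector<char>& buf) {
    Message m;
    uint32_t id_net, len_net;
    std::memcpy(&id_net, buf.data(), sizeof(id_net));
    std::memcpy(&len_net, buf.data() + sizeof(id_net), sizeof(len_net));
    m.id = static_cast<int32_t>(ntohl(id_net));
    m.text.assign(buf.data() + sizeof(id_net) + sizeof(len_net), ntohl(len_net));
    return m;
}
```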
For sending dynamic data across the network you have the following options.
First option: include the size in the same packet.
void SendData()
{
    int size;
    char payload[256];
    Send(messageType);
    Send(size);
    Send(payload);
}
Second option:
void SendData()
{
    char payload[256];
    Send(messageType);
    Send(payload);
}
In either situation you will be faced with a design choice. In the first example you send the message type, then the payload size, and then the payload itself.
With the second option, you send the message type and then a string that uses the null terminator as a delimiter.
Neither option fully covers the problem you're facing, though. First, if you're building a game, you need to determine what type of protocol you will be using: UDP? TCP? The second problem you will face is the maximum packet size. On top of that, you need a framework in place to calculate the optimum packet size that will not be fragmented and lost along the way. After that comes bandwidth control, regarding how much data you can transmit and receive between the client and server.
For example, the way most games approach this situation is that each packet is identified with the following:
MessageType
MessageSize
CRCCheckSum
MessageID
payload buffer
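One way to express that header layout in code. The field widths here are assumptions, and #pragma pack removes padding so the in-memory size matches the wire size; serializing field by field is the more portable alternative:

```cpp
#include <cstdint>

#pragma pack(push, 1)  // no padding: the struct's bytes match the wire layout
struct PacketHeader {
    uint16_t messageType;
    uint32_t messageSize;  // size of the payload that follows
    uint32_t crcChecksum;
    uint32_t messageId;
};
#pragma pack(pop)

static_assert(sizeof(PacketHeader) == 14, "header must have no padding");
```

Each integer field would still need byte-order conversion (htons/htonl) before sending.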
In situations where you need to send dynamic data, you would send a series of packets, not just one. For example, if you were to send a file across the network, the best option would be TCP, because it is a streaming protocol and it guarantees that the complete stream arrives safely at the other end. UDP, on the other hand, is a packet-based protocol and does not do any checking that all packets arrived at the other end, in order or at all.
So in conclusion:
For dynamic data, send multiple packets, but with a special flag that says more data is to arrive to complete this message.
Keep it simple, and if you're working with C++, don't assume the packet or data will contain a null terminator; check the size against the payload if you decide to use a null terminator.