I have a question for you.
I have this class:
#define DIMBLOCK 128
#ifndef _BLOCCO_
#define _BLOCCO_
class blocco
{
public:
int ID;
char* data;
blocco(int id);
};
#endif
blocco::blocco(int id)
{
ID = id;
data = new char[DIMBLOCK];
}
and the application has a client and a server.
In the main of my server I instantiate an object of this class in this way:
blocco a(1);
After that I open a connection between the client and the server using sockets.
The question is: how can I send this object from the server to the client or vice versa?
Could you help me please?
It's impossible to send objects across a TCP connection in the literal sense. Sockets only know how to transmit and receive a stream of bytes. So what you can do is send a series of bytes across the TCP connection, formatted in such a way that the receiving program knows how to interpret them and create an object that is identical to the one the sending program wanted to send.
That process is called serialization (and deserialization on the receiving side). Serialization isn't built into the C++ language itself, so you'll need some code to do it. It can be done by hand, or using XML, or via Google's Protocol Buffers, or by converting the object to human-readable text and sending the text, or any of a number of other ways.
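For instance, a minimal hand-rolled sketch for the blocco class above (assuming POSIX sockets; send_blocco/recv_blocco are illustrative names, and partial reads/writes and error checking are left out):

#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <cstring>

void send_blocco(int sockfd, const blocco& b)
{
    char buf[4 + DIMBLOCK];
    uint32_t id = htonl(static_cast<uint32_t>(b.ID)); // fixed width, network byte order
    std::memcpy(buf, &id, 4);
    std::memcpy(buf + 4, b.data, DIMBLOCK);            // payload is always DIMBLOCK bytes
    send(sockfd, buf, sizeof(buf), 0);
}

void recv_blocco(int sockfd, blocco& b)
{
    char buf[4 + DIMBLOCK];
    recv(sockfd, buf, sizeof(buf), MSG_WAITALL);       // wait for the whole message
    uint32_t id;
    std::memcpy(&id, buf, 4);
    b.ID = static_cast<int>(ntohl(id));
    std::memcpy(b.data, buf + 4, DIMBLOCK);
}

The receiver ends up with a blocco holding the same ID and the same DIMBLOCK bytes the sender wrote, which is all "sending the object" really means here.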
Have a look here for more info.
You can do this using serialization. This means breaking the object down into pieces so you can send those elements over the socket; then you reconstruct the object at the other end of the connection. In Qt there is the QDataStream class providing such functionality. In combination with a QByteArray you can create a data package which you can send. The idea is simple:
Sender:
QByteArray buffer;
QDataStream out(&buffer, QIODevice::WriteOnly);
out << someData << someMoreData;
Receiver:
QByteArray buffer;
QDataStream in(&buffer, QIODevice::ReadOnly);
in >> someData >> someMoreData;
Now you might want to provide an additional constructor:
class blocco
{
public:
blocco(QDataStream& in){
// construct from QDataStream
}
//or
blocco(int id, char* data){
//from data
}
int ID;
char* data;
blocco(int id);
};
extended example
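As a rough sketch (not from the original answer), blocco itself could be given stream operators, assuming its data member always points to DIMBLOCK bytes:

#include <QDataStream>

QDataStream& operator<<(QDataStream& out, const blocco& b)
{
    out << qint32(b.ID);
    out.writeRawData(b.data, DIMBLOCK);   // fixed-size raw payload
    return out;
}

QDataStream& operator>>(QDataStream& in, blocco& b)
{
    qint32 id = 0;
    in >> id;
    b.ID = id;
    in.readRawData(b.data, DIMBLOCK);     // assumes b.data has already been allocated
    return in;
}

The QByteArray filled on the sender side is what actually travels over the socket; the receiver wraps the received bytes in another QDataStream and reads them back out.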
I don't know how much flak I'll get for this, but I tried this and thought I should share it. I am a beginner at socket programming, so don't get pissed off.
What I did is I created an array of characters which is the size of the class (representing the block of memory at the server side). Then I recv'd the block of memory at the client side and typecast that block of memory as an object, and voila!! I managed to send an object from one end to the other.
Sample code:
blocco *data;
char blockOfData[sizeof(*data)];
if(recv(serverSocket, blockOfData, sizeof(*data), 0) == -1) {
cerr << "Error while receiving!!" << endl;
return 1;
}
data = (blocco *)blockOfData;
Now you can do whatever you want with this data, using it as a pointer to the object. Just remember: do not try to delete/free this pointer, as the memory belongs to the array blockOfData, which is on the stack.
Hope this helps if you wanted to implement something like this.
PS: If you think what I've done is a poor way of coding, please let me know why. I don't know why this is such a bad idea (if it is in fact a bad idea to do this). Thanks!!
Scenario: I would like to exchange InkStrokes between devices running on the same LAN.
Step 1: collect all the strokes drawn on the screen from the client:
// Send strokes to a remote server.
strokesToReplay = inkCanvas->InkPresenter->StrokeContainer->GetStrokes();
const int sz = sizeof(strokesToReplay);
char msgToSend[sz];
memcpy(msgToSend, &strokesToReplay, sz);
send(tcpClient.sockfd, msgToSend, sz, 0);
Step 2: receive the data on the server side:
// tcpServer is an instance of TCPServer, which contains
// a function that calls listen -> accept
// to establish a connection with the client
bytesRecv = tcpServer.recvFromClient();
// Received data is stored in TCPServer::buffer (char buffer[16384])
What I would like to do is cast the data in the buffer into IVectorView.
So that it is possible to iterate InkStroke from it like:
for (InkStroke^ inkStroke : buffer) {
... to do
}
But here is the question: how can I cast char * to IVectorView?
I've tried memcpy() and static_cast.
But since there is no proper memory allocated in the IVectorView, memcpy() will crash the whole program.
static_cast() won't work with IVectorView.
Now I am thinking about copying the data into the clipboard, then calling the API provided by Microsoft that gets data from the clipboard and cast it into strokes automatically.
But I do not know if this works or not...
Is there any advice that you guys can give me?
Thank you.
Let us suppose that a client holds two different big objects (in terms of byte size), serializes them, and then sends the serialized objects to a server over a TCP/IP network connection using boost::asio.
For client side implementation, I'm using boost::asio::write to send binary data (const char*) to the server.
For the server side implementation, I'm using read_some rather than boost::asio::ip::tcp::iostream, with future efficiency improvements in mind. I built the following recv function on the server side. The second parameter std::stringstream &is holds the big received data (>65536 bytes) at the end of the function.
When the client side calls two sequential boost::asio::write in order to send two different binary objects separately, the server side sequentially calls two corresponding recv as well.
However, the first recv call absorbs both of the incoming big data blocks, while the second call receives nothing ;-(.
I am not sure why this happens and how to solve it.
Since each of the two objects has its own (de)serialization function, I'd like to send each one separately. In fact, there are more than 20 objects (not just 2) that have to be sent over the network.
void recv (
boost::asio::ip::tcp::socket &socket,
std::stringstream &is) {
boost::array<char, 65536> buf;
for (;;) {
boost::system::error_code error;
size_t len = socket.read_some(boost::asio::buffer(buf), error);
std::cout << " read "<< len << " bytes" << std::endl; // called multiple times for debugging!
if (error == boost::asio::error::eof)
break;
else if (error)
throw boost::system::system_error(error); // Some other error.
std::stringstream buf_ss;
buf_ss.write(buf.data(), len);
is << buf_ss.str();
}
}
Client main file:
int main () {
... // some 2 different big objects are constructed.
std::stringstream ss1, ss2;
... // serializing bigObj1 -> ss1 and bigObj2-> ss2, where each object is serialized into a string. This is due to the dependency of our using some external library
const char * big_obj_bin1 = reinterpret_cast<const char*>(ss1.str().c_str());
const char * big_obj_bin2 = reinterpret_cast<const char*>(ss2.str().c_str());
boost::system::error_code ignored_error;
boost::asio::write(socket, boost::asio::buffer(big_obj_bin1, ss1.str().size()), ignored_error);
boost::asio::write(socket, boost::asio::buffer(big_obj_bin2, ss2.str().size()), ignored_error);
... // do something
return 0;
}
Server main file:
int main () {
... // socket is generated. (communication established)
std::stringstream ss1, ss2;
recv(socket,ss1); // this guy absorbs all of incoming data
recv(socket,ss2); // this guy receives 0 bytes ;-(
... // deserialization to two bib objects
return 0;
}
recv(socket,ss1); // this guy absorbs all of incoming data
Of course it absorbs everything. You explicitly coded recv to do an infinite loop until eof. That's the end of the stream, which means "whenever the socket is closed on the remote end".
So the essential thing missing from the protocol is framing. The most common ways to address it are:
sending the data length before the data, so the server knows how much to read (a sketch of this follows below)
sending a "special sequence" to delimit frames. In text, a common special delimiter would be '\0'. However, for binary data it is (very) hard to arrive at a delimiter that cannot naturally occur in the payload.
Of course, if you know extra characteristics of your payload you can use that. E.g. if your payload is compressed, you know you won't regularly find a block of 512 identical bytes (they would have been compressed). Alternatively you resort to encoding the binary data in ways that removes the ambiguity. yEnc, Base122 et al. come to mind (see Binary Data in JSON String. Something better than Base64 for inspiration).
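As a minimal sketch of the length-prefix option with boost::asio (the 4-byte big-endian header and the send_frame/recv_frame names are illustrative, and error handling is omitted):

#include <boost/asio.hpp>
#include <cstdint>
#include <string>
#include <vector>

void send_frame(boost::asio::ip::tcp::socket& socket, const std::string& payload)
{
    uint32_t n = static_cast<uint32_t>(payload.size());
    unsigned char header[4] = {
        static_cast<unsigned char>(n >> 24), static_cast<unsigned char>(n >> 16),
        static_cast<unsigned char>(n >> 8),  static_cast<unsigned char>(n) };
    std::vector<boost::asio::const_buffer> bufs;
    bufs.push_back(boost::asio::buffer(header));   // 4-byte length first
    bufs.push_back(boost::asio::buffer(payload));  // then the serialized data
    boost::asio::write(socket, bufs);
}

std::string recv_frame(boost::asio::ip::tcp::socket& socket)
{
    unsigned char header[4];
    boost::asio::read(socket, boost::asio::buffer(header));         // exactly 4 bytes
    uint32_t n = (uint32_t(header[0]) << 24) | (uint32_t(header[1]) << 16)
               | (uint32_t(header[2]) << 8)  |  uint32_t(header[3]);
    std::string payload(n, '\0');
    boost::asio::read(socket, boost::asio::buffer(&payload[0], n)); // exactly n bytes
    return payload;
}

With framing like this in place, the server can call recv_frame once per object, and each call stops at the frame boundary instead of reading until EOF.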
Notes:
Regardless of that, it's clumsy to hand-write the reading loop, and it's unnecessary to do that and then also copy the blocks into a stringstream. If you're doing all that copying anyway, just use boost::asio::[async_]read with boost::asio::streambuf directly.
This is clear UB:
const char * big_obj_bin1 = reinterpret_cast<const char*>(ss1.str().c_str());
const char * big_obj_bin2 = reinterpret_cast<const char*>(ss2.str().c_str());
str() returns a temporary copy of the buffer, which is not only wasteful but also means that the const char* pointers are dangling the moment they have been initialized.
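On the client side, a minimal fix sketch is to keep each serialized string alive in a named variable for as long as its buffer is in use (framing still has to be added on top of this):

std::string payload1 = ss1.str();  // named copies stay alive across the write calls
std::string payload2 = ss2.str();
boost::asio::write(socket, boost::asio::buffer(payload1), ignored_error);
boost::asio::write(socket, boost::asio::buffer(payload2), ignored_error);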
I am currently trying to write the networking part of a little multiplayer game, and I am facing a problem storing my TCP sockets, which in SFML are non-copyable (I am a beginner in C++).
I have three classes: Server, Client (a server-side class used to store all information about a connecting client) and ClientManager, which is in charge of storing all clients, giving them IDs, etc.
ClientManager.h
class ClientManager {
public:
ClientManager();
std::map<int, Net::Client*> getClients();
int attribID();
void addClient(Net::Client *client);
sf::TcpSocket getClientSocket(int id) throw(std::string);
void setClientSocket(int id, sf::TcpSocket);
private:
std::map<int, Net::Client*> m_clients;
std::map<int, sf::TcpSocket> m_clientSockets;
std::vector<int> m_ids;
int m_lastID;
};
What I planned to do originally, when a client connects, is:
void Net::Server::waitForClient() {
while(true) {
if(m_listener.accept(m_tcpSocket) != Socket::Done) {
cout << "Error happened during client connection. skipping. " << endl;
return;
}
int newID = m_clientManager.attribID();
this->m_clientManager.addClient(new Net::Client(newID, m_tcpSocket, m_tcpSocket.getRemoteAddress()));
}
}
So, I add a new Client into ClientManager's list, with its ID, a TcpSocket to send info, and its address.
But, the problem is that SFML's TcpSocket class is not copyable, which means I can't copy it to a Client like this.
I could pass it as a pointer to the original TcpSocket, but what if another client connects? The data the pointer points to will have changed and the program will misbehave. I do not know if this behavior would be the same with smart pointers, but I think so (though I don't master them at all).
Storing them in a std::map or std::vector causes the same problem, as both copy the object. Storing them as pointers (in the map) to the original TcpSocket causes the same problem as before, because the socket will change too, and all the pointers will end up pointing to the same object.
How can I store my sockets without having to use references, pointers or to copy my object ?
Any help will be greatly appreciated :)
It's going to be a real pain to do without pointers. Personally I use smart pointers to manage the sockets themselves (std::vector<std::unique_ptr<sf::TcpSocket>> or similar), along with sf::SocketSelector to manage the actual communication.
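A minimal sketch of that approach (the container and function names here are illustrative, not taken from the original classes):

#include <SFML/Network.hpp>
#include <map>
#include <memory>

std::map<int, std::unique_ptr<sf::TcpSocket>> clientSockets;

void waitForClient(sf::TcpListener& listener, int& lastID)
{
    auto socket = std::make_unique<sf::TcpSocket>();   // each client gets its own socket object
    if (listener.accept(*socket) != sf::Socket::Done)
        return;                                        // accept failed; skip this client
    clientSockets[++lastID] = std::move(socket);       // the map owns it; nothing is copied
}

Because a fresh socket is allocated for every accepted connection, a later client can never overwrite the socket already stored for an earlier one.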
Okay, I actually don't have code as of yet because I'm just picking out a framework for the time being, but I'm still a little baffled about how I want to go about this.
Server side, I wish to have a class where each instance has a socket and various information identifying each connection. Each object will have its own thread for receiving data. I understand how I'll be implementing most of that, but my confusion starts just as I get to the actual transfer of data between server and client. I'll want to have a bunch of different message structs for specific cases (for example CONNECT_MSG, DISCONNECT_MSG, POSTTEXT_MSG, etc.), and then all I have to do is have a char * point at that struct and pass it via the send() function.
But as I think on it, it gets a little complicated at that point. Any of those different message types could be sent, and on the receiving end you will have no idea what you should cast the incoming buffer as. What I was hoping to do is, in the thread of each connection object, have it block until it receives a packet with a message, then dump it into a single queue object managed by the server (mutexes will prevent greediness), and then have the server process each message in FIFO order independent of the connection objects.
I haven't written anything yet, but let me write a little something to illustrate my setup.
#define CONNECT 1000
struct GENERIC_MESSAGE
{
int id;
};
struct CONNECT_MESSAGE : public GENERIC_MESSAGE
{
char m_username[32]; // fixed-size username field (type chosen here for illustration)
};
void Connection::Thread()
{
while(1)
{
char buffer[MAX_BUFFER_SIZE]; // some constant(probably 2048)
recv(m_socket, buffer, MAX_BUFFER_SIZE, 0);
GENERIC_MESSAGE * msg = reinterpret_cast<GENERIC_MESSAGE *> (buffer);
server->QueueMessage(msg);
}
}
void Server::QueueMessage(GENERIC_MESSAGE * msg)
{
messageQueue.push(msg);
}
void Server::Thread()
{
while(1)
{
if(!messageQueue.empty())
ProcessMessages();
else
Sleep(1);
}
}
void Server::ProcessMessages()
{
while(!messageQueue.empty())
{
switch(messageQueue.front()->id)
{
case CONNECT:
{
// the part i REALLY don't like
CONNECT_MESSAGE * msg = static_cast<CONNECT_MESSAGE *>(messageQueue.front() );
// do the rest of the processing on connect
break;
}
// other cases for the other message types
}
messageQueue.pop();
}
}
Now, if you've been following up until now, you realize just how STUPID and fragile this is. It casts to the base class, passes that pointer to a queue, and then just assumes that the pointer is still valid from the other thread, and even that the remaining buffer beyond the pointer, holding the rest of the derived class, will still be valid for casting afterwards; I have yet to find a correct way of doing this. I am wide open for ANY suggestions, either for making this work or for an entirely different messaging design.
Before you write even a line of code, design the protocol that will be used on the wire. Decide what a message will consist of at the byte level. Decide who sends first, whether messages are acknowledged, how receivers identify message boundaries, and so on. Decide how the connection will be kept active (if it will be), which side will close first, and so on. Then write the code around the specification.
Do not tightly associate how you store things in memory with how you send things on the wire. These are two very different things with two very different sets of requirements.
Of course, feel free to adjust the protocol specification as you write the code.
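For example, a hedged sketch of packing one message field by field, so the wire layout is an explicit specification rather than a copy of whatever the compiler laid out in memory (the 2-byte fields and the packConnect name are illustrative):

#include <cstdint>
#include <string>
#include <vector>

std::vector<uint8_t> packConnect(const std::string& username)
{
    std::vector<uint8_t> out;
    auto put16 = [&out](uint16_t v) {                         // big-endian 16-bit field
        out.push_back(static_cast<uint8_t>(v >> 8));
        out.push_back(static_cast<uint8_t>(v));
    };
    put16(1000);                                              // message type: CONNECT
    put16(static_cast<uint16_t>(username.size()));            // payload length
    out.insert(out.end(), username.begin(), username.end());  // payload bytes
    return out;
}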
I find myself constantly running into a situation where I have a set of messages that I need to send over a TCP/IP connection. I have never found a good solution for the design of the message class. I would like to have a message base class that all messages derive from. Since each message will have different fields, this would allow me to access the fields through member variables or methods. Something like...
class message_base
{
public:
message_base();
virtual ~message_base();
unsigned int type;
};
class message_control : public message_base
{
public:
message_control();
virtual ~message_control();
unsigned int action;
};
This way I can create a message_control and access the action member for assigning to and reading from. I can also pass the messages around without writing too much code.
The problem arises when I need to send the messages. If I override the operator<< and operator>> then I can send the messages over one variable at a time. The problem with that solution is that with so many calls to send data, the context switches will slam the processor. Also, the streaming operator ends up in the socket class and not in the message class where I would prefer it lived.
socket& socket::operator<<(message_control& message)
{
sock << message.type;
sock << message.action;
return *this;
}
If I pack the data in a buffer, I get away from C++ and more into the realm of C and find myself making generous use of pointers and the like. And, modifying the code is difficult and error prone. And, the streaming operator is still in the socket class and not the message class.
socket& socket::operator<<(message_control& message)
{
byte* buffer = new byte[sizeof(message.type) + sizeof(message.action)];
memcpy(buffer, &message.type, sizeof(message.type));
memcpy(buffer + sizeof(message.type), &message.action, sizeof(message.action));
sock.send(buffer);
delete [] buffer;
return *this;
}
My last attempt used an intermediate class to handle packing and unpacking the members in a buffer. The messages could implement operator<< and operator>> to the buffer class and then the buffer class is sent to the socket. This works but doesn't feel right.
class socket
{
public:
socket();
~socket();
socket& operator<<(buffer& buff);
};
class buffer
{
public:
buffer() {m_buffer = new byte[initial_size];}
~buffer() {delete [] m_buffer;}
buffer& operator<<(unsigned int value);
private:
byte* m_buffer;
};
void message_control::serialize(buffer& buff)
{
buff << type;
buff << action;
}
I can't help but feel there is an elegant solution to this problem. I can't find any design patterns that match what I am trying to accomplish. Has anyone experienced this problem and come up with a good design that doesn't make you feel like you would be better off with good old pointers and an array of bytes?
Update
I failed to mention in my original post that I am most often dealing with very well-defined wire protocols. That is why I typically need to roll my own solution and can't use any of the wonderful toolkits available for messaging over a network connection.
"The problem with that solution is that with so many calls to send data, the context switches will slam the processor. Also, the streaming operator ends up the the socket class and not in the message class where I would prefer it lived."
The solution to the second problem is to define operator<< as a non-member function in the namespace which contains the message class, instead of as a member function of the socket class. ADL will find it.
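A small sketch of that, reusing the socket and message_control names from the question (the namespace and the redeclared struct are just to make the example self-contained):

namespace messages {

struct message_control { unsigned int type; unsigned int action; };

// Free function declared next to the message type; when you write
// `mySocket << msg`, argument-dependent lookup finds it via message_control.
socket& operator<<(socket& sock, const message_control& message)
{
    sock << message.type;
    sock << message.action;
    return sock;
}

} // namespace messages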
The solution to the first problem is to buffer the data within your process, and then flush at the end of each message. If Nagle buffering isn't preventing context switches, then you might be able to achieve this by messing with the socket, I don't know. What you certainly can do, though, is prepare each message before sending in a more C++-ish way. Replace:
sock << message.type;
sock << message.action;
with:
stringstream ss;
ss << message.type;
ss << message.action;
sock << ss.str();
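This assumes the socket class has an operator<< for std::string that issues a single send per message; a possible sketch (the internal sock member is assumed from the earlier snippets):

socket& socket::operator<<(const std::string& data)
{
    sock.send(data.data(), data.size());  // one send call for the whole message
    return *this;
}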