I'm implementing a server for a Rock Paper Scissors protocol. So far things are going really well, except for one snag I'm trying to get past. As a total overview of the program:
A client connects to the server.
A second client connects to the server.
The server receives input from client 1.
The server receives input from client 2.
The server determines a winner then sends back winning information to each client.
I've wrapped the sockets in a class to allow an easier API for working with them. The part that I'm having trouble with is here:
char playerOneRequest;
char playerTwoRequest;

int playerOneLength = mPlayerOne->receive(&playerOneRequest, BUFFER_SIZE);
cerr << "After player one\n";
cout << "Received '" << playerOneRequest << "' from player one.\n";

int playerTwoLength = mPlayerTwo->receive(&playerTwoRequest, BUFFER_SIZE);
cerr << "After player two\n";
cerr << "Received '" << playerTwoRequest << "' from player two.\n";

char playerOne = toupper(playerOneRequest);
char playerTwo = toupper(playerTwoRequest);
I've been using DDD to debug it and I've discovered a problem. Let's imagine that player one sent an 'R' and player two sent an 'S'. After the first receive, playerOneRequest is 'R'. After the second receive, which reads from player two (a separate TCP stream), playerTwoRequest is 'S', but at this point playerOneRequest now equals '\r'.
I can't figure out why this is the case. All my code is available in this Gist.
What value is BUFFER_SIZE? You only have room for one character in playerOneRequest, but it appears you may read more than one character from receive. Any additional bytes are written past the end of that one-character variable and clobber whatever happens to sit next to it on the stack. That matches your symptom: if each client sends something like "S\r\n", the 'S' lands in playerTwoRequest and the trailing '\r' overwrites the adjacent playerOneRequest.
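One possible fix, sketched under the assumption that your wrapper's receive(buf, len) behaves like recv() (fills up to len bytes and returns the count), is to read into a buffer that really is BUFFER_SIZE bytes long and keep only the first byte:

// Sketch only: assumes mPlayerOne->receive(buf, len) fills up to len bytes
// and returns the number of bytes actually received (or <= 0 on error/close).
char playerOneBuffer[BUFFER_SIZE] = {0};
int playerOneLength = mPlayerOne->receive(playerOneBuffer, BUFFER_SIZE);
if (playerOneLength <= 0) {
    // handle disconnect or error here
}
char playerOneRequest = playerOneBuffer[0];   // 'R', 'P', or 'S'
// Any trailing '\r' or '\n' from a line-based client stays in the buffer
// instead of overwriting neighbouring variables.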
Hi, I am a little confused. My program uses TCP to transfer messages over the network, which is, in my opinion, irrelevant to my question.
std::stringstream tmp(buf);
if (tmp.str().find("\r\n") == std::string::npos ) {
std::cout << " doesnt have ending char" << std::endl;
}else{
std::cout << " position of ending char " << tmp.str().find("\r\n") << std::endl;
}
When a message is read from the client, it is pushed into the stringstream. Then I try to find the terminating characters; unfortunately, find("\r\n") always returns the length of the string, even though "\r\n" is not (as far as I can tell) contained in buf.
I am using telnet to test it; is it possible that this behavior is caused by telnet?
This is output of my terminal:
Connected to localhost.
Escape character is '^]'.
200 LOGIN
asdadsgdgsd
and this is output from the program:
3001 PORT NUM
sent message: asdadsgdgsd END
position of ending char 11
The string entered in the console, then sent and received, is
"asdadsgdgsd\r\n"
The letters occupy indices 0 through 10, so the '\r' sits at index 11 and the '\n' at index 12.
So the value 11 is correct: find("\r\n") is returning the position of the terminator, not the length of the string.
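If the goal is to work with the payload alone, a small sketch (assuming buf holds the received, null-terminated bytes) could strip the terminator before using it:

// Sketch: remove a trailing "\r\n" before using the payload.
// Assumes `buf` is the null-terminated data received from the client.
std::string line(buf);
std::string::size_type pos = line.find("\r\n");
if (pos != std::string::npos) {
    line.erase(pos);   // "asdadsgdgsd\r\n" -> "asdadsgdgsd"
}
std::cout << "payload: '" << line << "'" << std::endl;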
After dealing with a very strange error in a C++ program I was writing, I decided to write the following test code, confirming my suspicion. In the original program, calling send() and this_thread::sleep_for() (with any amount of time) in a loop 16 times caused send to fail with a SIGPIPE signal. In this example, however, it fails after 4 iterations.
I have a server running on port 25565 bound to localhost. The original program was designed to communicate with this server. I'm using the same one in this test code because it doesn't terminate connections early.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstring>
#include <iostream>
#include <thread>
#include <chrono>

using namespace std;

int main()
{
    struct sockaddr_in sa;
    memset(sa.sin_zero, 0, 8);
    sa.sin_family = AF_INET;
    inet_pton(AF_INET, "127.0.0.1", &(sa.sin_addr));
    sa.sin_port = htons(25565);

    cout << "mark 1" << endl;
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    connect(sock, (struct sockaddr *) &sa, sizeof(sa));
    cout << "mark 2" << endl;

    for (int i = 0; i < 16; i++)
    {
        cout << "mark 3" << endl;
        cout << "sent " << send(sock, &i, 1, 0) << " byte" << endl;
        cout << "errno == " << errno << endl;
        cout << "i == " << i << endl;
        this_thread::sleep_for(chrono::milliseconds(2));
    }
    return 0;
}
Running it in GDB is how I discovered it was emitting SIGPIPE. Here is the output of that: http://pastebin.com/gXg2Y6g1
In another test, I called this_thread::sleep_for() 16 times in a loop, THEN called send() once. This did NOT produce the same error. It ran without issue.
In yet another test, I commented out the thread sleeping line, and it ran all the way through just fine. I did this in both the original program and the above test code.
These results make me believe it's not a case of the server closing the connection, even though that's usually what SIGPIPE means (why did it run fine when there was no call to this_thread::sleep_for()?).
Any ideas as to what could be causing this? I've been messing around with it for a week and have gotten no further.
Running this on my machine prints up to mark 3 once, as I expected it to. The fact that it does run several times on your end tells me that you have a server listening on port 25565, which you have not included in this question.
Your problem is that you are not testing to see whether the server (which you have told us nothing about) closed the connection. When it does, your process gets a SIGPIPE. Since you do not handle that signal, your process quits.
What you can do in order to fix this:
Start checking return values of functions. It wouldn't have helped in this particular case, but you ignore potential errors from both connect and send. I'm hoping this is because of minimizing the program, but it is worth mentioning.
Handle the signal. If you prefer to handle server closes from the main flow of your code, you can either register a handler that ignores the signal, or pass the flag MSG_NOSIGNAL to send. In both cases, send will then return -1 with errno set to EPIPE.
RTFM. Seriously. A simple man send and a search for SIGPIPE would give you this answer.
As for why the server closed, I cannot answer this without knowing what server it is and what protocol it is running. No, don't answer that question in the comments. It is irrelevant to this question. The simple truth of the matter is that a server you are talking to might close the connection at any time, and your code must be able to deal with that.
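For illustration, here is a sketch of the second option applied to the test program above, using MSG_NOSIGNAL (Linux-specific; on other platforms you would ignore SIGPIPE or use SO_NOSIGPIPE instead) so a server-side close shows up as an EPIPE error instead of killing the process:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstdio>
#include <iostream>
#include <thread>
#include <chrono>

int main()
{
    sockaddr_in sa{};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(25565);
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1 || connect(sock, (sockaddr *) &sa, sizeof(sa)) == -1) {
        std::perror("connect");
        return 1;
    }

    for (int i = 0; i < 16; i++) {
        ssize_t n = send(sock, &i, 1, MSG_NOSIGNAL);   // no SIGPIPE, just -1/EPIPE
        if (n == -1) {
            if (errno == EPIPE)
                std::cerr << "server closed the connection" << std::endl;
            else
                std::perror("send");
            break;   // stop the loop instead of dying on a signal
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(2));
    }
    return 0;
}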
Recently I've been messing around with some sockets by trying to make a client/server program. So far I have been successful, but it seems I've hit a roadblock. For some quick background information, I made a server that can accept a connection, and once everything is set up and a connection to a client is made, this block of code begins to execute:
while(1){
    read(newsockfd, &inbuffer, 256);
    std::cout << "Message from client " << inet_ntoa(cli_addr.sin_addr) << " : ";
    for(int i = 0; i < sizeof(inbuffer); i++){
        std::cout << inbuffer[i];
    }
    std::cout << std::endl;
}
Now the client simply, when executed, connects to the server and writes to the socket, and then exits. So since one message was sent, this loop should only run once, and then wait for another message if what I read was correct.
But what ends up happening is that this loop continues over and over, printing the same message over and over. From what I read (on this site and others) about the read() function, after it is called once it waits for another message to be received. I may be making a stupid mistake here, but is there any way I can have this read() function wait for a new message, instead of using the same old message over and over? Or is there another function that could replace read() to do what I want it to?
Thanks for any help.
You don't check the return value of read. So if the other end closes the connection or there's an error, you'll just loop forever outputting whatever happened to be in the buffer. You probably want:
while(1){
    int msglen = read(newsockfd, &inbuffer, 256);
    if (msglen <= 0) break;
    std::cout << "Data from client " << inet_ntoa(cli_addr.sin_addr) << " : ";
    for(int i = 0; i < msglen; i++){
        std::cout << inbuffer[i];
    }
    std::cout << std::endl;
}
Notice that I changed the word "message" to "data". Here's why:
So since one message was sent, this loop should only run once, and then wait for another message if what I read was correct.
This is incorrect. The code above does not have any concept of a "message", and TCP does not preserve application message boundaries. So not only is this wrong, there's no way it could be correct, because the word "message" has no meaning that could possibly apply in this context. TCP does not "glue together" the bytes that happened to be passed in a single call to a sending function.
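If the application really needs message semantics, a common approach (just a sketch, not part of the question's code) is to frame each message, for example with a fixed-size length prefix, and loop until that many bytes have arrived:

#include <unistd.h>
#include <arpa/inet.h>   // ntohl
#include <cstddef>
#include <cstdint>

// Sketch: read exactly `len` bytes from a blocking TCP socket.  A loop is
// needed because read() may return fewer bytes than were asked for.
bool readAll(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)                  // 0 = peer closed, -1 = error
            return false;
        got += static_cast<size_t>(n);
    }
    return true;
}

// Usage idea: read a 4-byte big-endian length, then a payload of that size.
// uint32_t netLen;
// if (readAll(newsockfd, reinterpret_cast<char *>(&netLen), sizeof netLen)) {
//     uint32_t msgLen = ntohl(netLen);
//     // ... then readAll(newsockfd, payloadBuffer, msgLen) ...
// }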
I am trying to send a Google Protocol Buffer serialized string across an HTTP connection and receive it back (unmodified), at which point I will deserialize it. My problem seems to be with the SerializeToString method, which takes my string and seems to add newline characters (and maybe other whitespace) to the serialized string. In the example below, I serialize the string "camera_stuff" and get back a QString with newlines at the front. I have tried other strings with the same result, only with different whitespace and newlines added. This causes problems for my deserializing operation: the whitespace is not captured in the HTTP request, so the response containing the serialized string from the server cannot be successfully decoded. I can partially decode it if I guess the whitespace in the serialized string. How can I solve this? Please see the following code - thanks.
I have a protocol buffer .proto file that looks like:
message myInfo {
    required string data = 1;
    required int32 number = 2;
}
After running the protoc compiler, I construct it in Qt like this:
// Now Construct our Protocol Buffer Data to Send
myInfo testData;
testData.set_data("camera_stuff");
testData.set_number(123456789);
I serialize my data to a string like this:
// Serialize the protocol buffer to a string
std::string serializedProtocolData; // Create a standard string to serialize the protocol buffer contents to
testData.SerializeToString(&serializedProtocolData); // Serialize the contents to the string
QString serializedProtocolDataAsQString = QString::fromStdString(serializedProtocolData);
And then I print it out like this:
// Print what we are sending
qDebug() << "Sending Serialized String: " << serializedProtocolDataAsQString;
qDebug() << "Sending Serialized String (ASCII): " << serializedProtocolDataAsQString.toAscii();
qDebug() << "Sending Serialized String (UTF8): " << serializedProtocolDataAsQString.toUtf8();
qDebug() << "Sending Serialized Protocol Buffer";
qDebug() << "Data Number: " << QString::fromStdString(myInfo.data());
qDebug() << "Number: " << (int)myInfo.number();
When I send my data as part of an HTTP multipart message, I see those print statements like this (notice the newlines in the printouts!):
Composing multipart message...
Sending Serialized String: "
camera_stuffï:"
Sending Serialized String (ASCII): "
camera_stuffï:"
Sending Serialized String (UTF8): "
camera_stuffÂÂï:"
Sending Serialized Protocol Buffer
Data: "camera_stuff"
Number: 123456789
Length of Protocol Buffer serialized message: 22
Loading complete...
The client deserializes the message like this:
// Now deserialize the protocol buffer
string = "\n\n" + string; // Notice that if I don't add the newlines I get nothing!
myInfo protocolBuffer;
protocolBuffer.ParseFromString(string.toStdString().c_str());
std::cout << "DATA: " << protocolBuffer.model() << std::endl;
std::cout << "NUMBER: " << protocolBuffer.serial() << std::endl;
qDebug() << "Received Serialized String: " << string;
qDebug() << "Received Deserialized Protocol Buffer";
qDebug() << "Data: " << QString::fromStdString(protocolBuffer.data());
qDebug() << "Number: " << (int)protocolBuffer.number();
The server gives it back without doing anything to the serialized string and the client prints the following:
RESPONSE: "camera_stuffï:"
DATA: camera_stu
NUMBER: 0
Received Serialized String: "
camera_stuffï:"
Received Deserialized Protocol Buffer
Number: "camera_stu"
Number: 0
So you see the issue: I cannot guess the whitespace, so I cannot reliably deserialize my string. Any thoughts?
A serialized protobuf cannot be treated as a C string because it probably has embedded NULs in it. It's a binary protocol which uses every possible octet value and can only be sent over an 8-bit clean connection. It's also not a valid UTF-8 sequence, and cannot be serialized and deserialized as Unicode. So QString is also not a valid way of storing a serialized protobuf, and I suspect that might be causing you problems as well.
You can use std::string and QByteArray. I strongly suggest you avoid anything else. In particular, this is wrong:
protocolBuffer.ParseFromString(string.toStdString().c_str());
because it will truncate the protobuf at the first NUL. (Your test message doesn't have any, but this will bite you sooner or later.)
As for sending the message over HTTP, you need to be able to ensure that all bytes in the message are sent as-is, which also means that you need to send the length explicitly. You didn't include the code which actually transmits and receives the message, so I can't comment on it (and I don't know the Qt HTTP library well enough in any event), but the fact that leading 0x0A bytes are being stripped from the front of the message suggests that you are missing something. Make sure you set the content type in the message part correctly (not text, for example).
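To make that concrete, here is a hedged sketch of the byte-safe route (testData and myInfo are the names from the question; the generated header name and the HTTP details are assumptions, since that code was not shown):

// Sketch only: keep the serialized protobuf as raw bytes end to end.
#include <QByteArray>
#include "myinfo.pb.h"   // assumed name of the generated header

std::string raw;
testData.SerializeToString(&raw);                        // may contain NULs and 0x0A bytes
QByteArray payload(raw.data(), static_cast<int>(raw.size()));

// When building the HTTP part, send `payload` as-is with an explicit length,
// e.g. Content-Type: application/octet-stream and Content-Length: payload.size().

// On the receiving end, parse from the full byte range, never via a C string:
myInfo parsed;
bool ok = parsed.ParseFromArray(payload.constData(), payload.size());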
First of all, I want to thank HostileFork for helping me explain my problem. Thank you!
I'm trying to build a client and a server that exchange their data through a binary protocol.
My problem is that I want to send a class from a Qt client to a Boost server. My header (one integer holding the size of my class) is written to the socket. When I read the header on the server side, I can't get the right integer (instead I get a big number like -13050660). I think the problem comes from the deserialization on the server, but I'm not sure.
This is the technique my Qt client code uses to write the number 10 onto a socket:
QByteArray paquet;
QDataStream out(&paquet, QIODevice::WriteOnly);
out << (quint32) 0;
out.device()->seek(0);
out << (quint32) (10);
cout << "Writing " << sizeof(quint32) << " bytes to socket." << endl;
Then I try to read it on a server process, which uses boost's async_read():
this->Iheader.resize(size, '\0'); // Iheader is a vector of char
async_read(
    this->socket,
    buffer(this->Iheader),
    bind(
        &Client::endRead,
        cli,
        placeholders::error,
        placeholders::bytes_transferred)
);
Here's the function that operates on the string result:
#ifdef WIN32
#define MYINT INT32
#include <Windows.h>
#else
#define MYINT int
#endif

void Client::endRead(const error_code& error, size_t nbytes)
{
    if (!error && nbytes == sizeof(MYINT)) {
        cout << "Read " << sizeof(MYINT) << " bytes from a socket." << endl;
        istringstream stream(this->connection->getIheader(nbytes));
        stream >> this->Isize;
        cout << "Integer value read was " << this->Isize << endl;
    } else {
        cout << "Could not read " << sizeof(MYINT) << " bytes." << endl;
    }
}
I do get a 32-bit signed integer (4 bytes), but it is not ten; instead it is something like -1163005939. Anyone have any ideas why this is not working?
The server and the client are both launched on Windows 7 Pro, 64-bit.
You're welcome...and thanks for following my suggestions on editing the question, and for making the requisite effort to pinpoint the problem more clearly. So now I can tell you what's wrong. :)
The behavior of << and >> is different on QDataStream than on standard C++ iostreams. In the world of classes like std::stringstream these operators are called "inserters"/"extractors" and are intended for dealing with information formatted as text. If you want to read a certain number of bytes into a memory address, what you'll want is:
http://www.cplusplus.com/reference/iostream/istream/read/
(Note that if you wish to read binary data out of something that is not a stringstream, you need to be using ios::binary to keep it from messing with line ending conversions)
QDataStream doesn't follow that convention...it's a good helper for binary data. Nothing wrong with that...since abstractly speaking the << and >> operators are available in the language to be overloaded to do whatever you want within your own class hierarchies. Qt was free to define its own semantics for its own streams, and they did.
Do heed the advice given by #vitakot about (if possible) using the same methodology for both input and output. Also heed my warning about byte-ordering issues that start to come up if you aren't careful.
(Good news is that if you are using QDataStream it finesses this issue by taking care of it for you.)
Be aware that in your code as written, your stringstream is making a copy of the buffer in order to read from it. I'm not experienced with boost::asio or the best practices of async_read, but I'm sure there are better ways you might dig around and find.
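As an illustration, here is a sketch of reading the header without text extraction, assuming the four bytes in Iheader are exactly what QDataStream wrote (big-endian by default):

// Sketch: interpret the 4 raw header bytes as a big-endian 32-bit integer.
// Assumes this->Iheader now holds exactly 4 bytes.
#include <cstdint>

const unsigned char *p =
    reinterpret_cast<const unsigned char *>(this->Iheader.data());
uint32_t headerValue = (static_cast<uint32_t>(p[0]) << 24) |
                       (static_cast<uint32_t>(p[1]) << 16) |
                       (static_cast<uint32_t>(p[2]) << 8)  |
                        static_cast<uint32_t>(p[3]);
this->Isize = headerValue;   // 10 for the example in the question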
HostileFork is right, from the information we have it is not possible to isolate a bug in your code.
However, I would suggest using Boost serialization in your Qt client as well. There is no reason not to combine the Boost and Qt libraries. Otherwise you will have to deal with a lot of trouble when sending more complicated classes over the network...