I am reading an image URL sent from a Java client to a C++ server over sockets. The server stops reading through recv() when it detects a null character in the char buffer[], as I do in the following code:
void * SocketServer::clientController(void *obj)
{
    // Retrieve client connection information
    dataSocket *data = (dataSocket*) obj;

    // Receive data from a client step by step and append data in String message
    string message;
    int bytes = 0;

    do
    {
        char buffer[12] = {0};
        bytes = recv(data->descriptor, buffer, 12, 0);

        if (bytes > 0) // Build message
        {
            message.append(buffer, bytes);
            cout << "Message: " << message << endl;
        }
        else // Error when receiving it
            cout << "Error receiving image URL" << endl;

        // Check if we are finished reading the image link
        unsigned int i = 0;
        bool finished = false;
        while (i < sizeof(buffer) / sizeof(buffer[0]) && !finished)
        {
            finished = buffer[i] == '\0';
            i++;
        }

        if (finished)
            break;
    }
    while (bytes > 0);

    cout << message << endl;

    close(data->descriptor);
    pthread_exit(NULL);
}
Is there a better and more elegant way to do this?
I have read about sending the size of the URL first, but I do not know exactly how to stop recv() with it. I guess it is done by counting the bytes received until the size of the URL is reached; at that moment, we are finished reading.
Another approach could be closing the Java socket so that recv() returns -1 and the loop finishes. However, since my Java client waits for a response from the C++ server, closing the socket and then reopening it does not seem a suitable option.
Thank you,
Héctor
Apart from the fact that your buffer has an unusual size (one typically chooses a power of 2, so 8, 16, 32, ...) and looks a little small for your purpose, your approach seems fine to me:
I assume that your Java client sends a null-terminated string and then waits, i.e. in particular it does not send any further data. So after you have received the 0 character there won't be any more data to receive anyway, so there is no need to handle explicitly something that recv does implicitly (recv normally returns only the data available, even if that is less than the buffer could hold).
Be aware that you initialized buffer with 0, so if you check the entire buffer (instead of the range [buffer, buffer + bytes)), you might detect a false positive (if you receive fewer than 12 characters in the first iteration)! Detection of the 0 character can be done more elegantly, though:
if (std::find(buffer, buffer + bytes, '\0') != buffer + bytes)
{
    // found the 0 character!
    break;
}
Related
So I have created a Java server and a C++ client.
The Java server sends a message with a PrintWriter to the C++ client to execute a command (the data transfer is correct, no problems with that).
I'm using strcmp() to check whether the string that the client received with recv() is the string I want, but when I try to check it, it doesn't work. I've tried printing out the line with the received buffer and I don't see any problems.
Here is the code that receives and checks the buffer (C++; ignore some values because this is a small piece of the code):
char buffer[1024];

if (recv(s, buffer, sizeof(buffer), 0) == SOCKET_ERROR) {
    cout << "Error CR#001" << endl;
    return -1;
}

if (strcmp(buffer, "||clear") == 0) {
    system("cls");
    return 1;
}
In C++ you can use std::string for the buffer:
const size_t MAX_BYTES = 1024;
ssize_t noreceivedbytes;
std::string buffer(MAX_BYTES, '\0'); // give the string a real size; writing into reserve()d-only storage is undefined behavior

noreceivedbytes = recv(s, &buffer[0], MAX_BYTES, 0);
if (noreceivedbytes <= 0) {
    cout << "Error CR#001" << endl;
    return -1;
}
buffer.resize(noreceivedbytes); // keep exactly the bytes that were received

if (buffer == "||clear") {
    system("cls");
    return 1;
}
Safer C solution for completeness:
#define MAX_BYTES 1024

ssize_t noreceivedbytes;
char buffer[MAX_BYTES];

noreceivedbytes = recv(s, buffer, MAX_BYTES - 1, 0);
if (noreceivedbytes <= 0) {
    cout << "Error CR#001" << endl;
    return -1;
}
buffer[noreceivedbytes] = '\0';

if (strcmp(buffer, "||clear") == 0) {
    system("cls");
    return 1;
}
Please note:
This answer only gets you to the tip of the iceberg. There are many more things that can go wrong when dealing with sockets (as mentioned by others in the comments).
recv() doesn't guarantee that the whole chunk of data sent from the server will be read completely. You could easily end up with partial strings like "||cle" or "||c".
The least thing you'll need to do is to receive the bytes from the socket in a loop, until you have something at hand you can reasonably parse and match.
The simplest way to do so is to define a very primitive protocol that precedes the payload data with its size (take care of endianness problems when converting the size back to an integer value from the received bytes).
Having that at hand, you'll know exactly how many bytes you have to read until you have the payload chunk completed, such it can be parsed and compared reasonably.
Explaining all of that in detail would lead too far for an answer here. There are whole books written about the topic (I'd recommend Stevens, "Unix Network Programming"). A minimal sketch of the size-prefix idea follows below.
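For illustration only, here is one way such a size-prefixed receive could look. This is a sketch under assumptions, not part of the original answer: it assumes a 4-byte length sent in network byte order (if the Java side writes it with DataOutputStream.writeInt, it already arrives big-endian), and recv_exact / recv_message are hypothetical helpers, not library functions. On Windows the socket declarations come from winsock2.h instead of the POSIX headers used here.

#include <sys/socket.h>  // recv (POSIX)
#include <arpa/inet.h>   // ntohl
#include <cstdint>
#include <string>

// Read exactly "expected" bytes; returns false on error or a closed connection.
static bool recv_exact(int s, char *out, size_t expected)
{
    size_t total = 0;
    while (total < expected) {
        ssize_t n = recv(s, out + total, expected - total, 0);
        if (n <= 0)
            return false;
        total += static_cast<size_t>(n);
    }
    return true;
}

// Receive one length-prefixed message into "payload".
static bool recv_message(int s, std::string &payload)
{
    uint32_t netlen = 0;
    if (!recv_exact(s, reinterpret_cast<char *>(&netlen), sizeof(netlen)))
        return false;
    uint32_t len = ntohl(netlen); // convert from network byte order to host order
    payload.assign(len, '\0');
    return len == 0 || recv_exact(s, &payload[0], len);
}

With that in place, the received payload can be compared against "||clear" without worrying about partial reads.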
Edit (solution)
I followed the advice of debugging with -fsanitize=address and valgrind. I only used -fsanitize (which I had never heard of before) and found out what the problem was: there was a leftover explicit destructor call in another function, so the object was being destroyed twice. The memory was completely corrupted at that point.
Thanks a lot for the help and the other recommendations too.
I'm writing code in C++ to talk to CouchDB using sockets (CouchDB is a database by Apache that has an HTTP API). I've created a whole class to deal with it, and it's basically a socket client that connects and closes.
One of my functions sends an HTTP request and then reads the response and works with it. It works fine on the first call, but fails when I call it a second time.
Where it fails is inconsistent: sometimes it's a SEGFAULT inside one of the string functions, other times it's a SIGABRT at the return. I've marked the lines where it crashed with ->.
And the worst part is that it only fails when it runs for the "second" time, which is actually the 10th call. Explanation: when the class is instantiated a socket is created, sendRequest is called 8 times (all work, always), and I close the socket. Then I have another class that controls a socket server; it receives commands and creates a remote user object that executes the command, and that remote user command then calls the CouchDB class to manipulate the DB. The first time a command is requested it works, but the second time it fails and crashes the program.
Extra info: on the short int httpcode line, the gdb trace shows a crash in substr; on the SIGABRT crash, the trace shows a problem in free().
I've already debugged this many times and made some changes to where and how the string and the buffer are instantiated, and I'm lost. Does anyone know why it would work fine many times but crash on a subsequent call?
CouchDB::response CouchDB::sendRequest(std::string req_method, std::string req_doc, std::string msg)
{
    std::string responseBody;
    char buffer[1024];

    // zero message buffer
    memset(buffer, 0, sizeof(buffer));

    std::ostringstream smsg;
    smsg << req_method << " /" << req_doc << " HTTP/1.1\r\n"
         << "Host: " << user_agent << "\r\n"
         << "Accept: application/json\r\n"
         << "Content-Length: " << msg.size() << "\r\n"
         << (msg.size() > 0 ? "Content-Type: application/json\r\n" : "")
         << "\r\n"
         << msg;

    /*std::cout << "========== Request ==========\n"
              << smsg.str() << std::endl;*/

    if (sendData((void*)smsg.str().c_str(), smsg.str().size())) {
        perror("#CouchDB::sendRequest, Error writing to socket");
        std::cerr << "#CouchDB::sendRequest, Make sure CouchDB is running in " << user_agent << std::endl;
        return {-1, "ERROR"};
    }

    // response
    int len = recv(socketfd, buffer, sizeof(buffer), 0);
    if (len < 0) {
        perror("#CouchDB::sendRequest, Error reading socket");
        return {-1, "ERROR"};
    }
    else if (len == 0) {
        std::cerr << "#CouchDB::sendRequest, Connection closed by server\n";
        return {-1, "ERROR"};
    }

    responseBody.assign(buffer);

    // HTTP code is the second thing after the protocol name and version
->  short int httpcode = std::stoi(responseBody.substr(responseBody.find(" ") + 1));
    bool chunked = responseBody.find("Transfer-Encoding: chunked") != std::string::npos;

    /*std::cout << "========= Response =========\n"
              << responseBody << std::endl;*/

    // body starts after two CRLF
    responseBody = responseBody.substr(responseBody.find("\r\n\r\n") + 4);

    // chunked means that the response comes in multiple packets
    // we must keep reading the socket until the server tells us it's over, or an error happen
    if (chunked) {
        std::string chunkBody;
        unsigned long size = 1;
        while (size > 0) {
            while (responseBody.length() > 0) {
                // chunked requests start with the size of the chunk in HEX
                size = std::stoi(responseBody, 0, 16);
                // the chunk is on the next line
                size_t chunkStart = responseBody.find("\r\n") + 2;
                chunkBody += responseBody.substr(chunkStart, size);
                // next chunk might be in this same request, if so, there must have something after the next CRLF
                responseBody = responseBody.substr(chunkStart + size + 2);
            }
            if (size > 0) {
                len = recv(socketfd, buffer, sizeof(buffer), 0);
                if (len < 0) {
                    perror("#CouchDB::sendRequest:chunked, Error reading socket");
                    return {-1, "ERROR"};
                }
                else if (len == 0) {
                    std::cerr << "#CouchDB::sendRequest:chunked, Connection closed by server\n";
                    return {-1, "ERROR"};
                }
                responseBody.assign(buffer);
            }
        }
        // move created body from chunks to responseBody
->      responseBody = chunkBody;
    }

    return {httpcode, responseBody};
}
The function that calls the above and that sometimes gets the SIGABRT:
bool CouchDB::find(Database::db db_type, std::string keyValue, std::string &value)
{
    if (!createSocket()) {
        return false;
    }

    std::ostringstream doc;
    std::ostringstream json;
    doc << db_name << db_names[db_type] << "/_find";
    json << "{\"selector\":{" << keyValue << "},\"limit\":1,\"use_index\":\"index\"}";

->  CouchDB::response status = sendRequest("POST", doc.str(), json.str());
    close(socketfd);

    if (status.httpcode == 200) {
        value = status.body;
        return true;
    }
    return false;
}
Some bits that you might have questions about:
CouchDB::response is a struct {httpcode: int, body: std::string}
CouchDB::db is an enum to choose different databases
sendData just keeps sending the raw bytes until all of them have been sent
Your int len = recv(socketfd, buffer, sizeof(buffer), 0); might be overwriting the last '\0' in your buffer. One might be tempted to use sizeof(buffer) - 1, but this would be wrong, as you might be getting null bytes in your stream. So do this instead: responseBody.assign(buffer, len);. Only do this, of course, after you've made sure len >= 0, which you do in your error checks.
You have to do that every place where you call recv. Though, why you're using recv instead of read is beyond me, since you aren't using any of the flags.
Also, your buffer memset is pointless if you do it my way. You should also declare your buffer right before you use it. I had to read through half your function to figure out if you did anything with it. Though, of course, you do end up using it a second time.
Heck, since your error handling is basically identical in both cases, I would just make a function that did it. Don't repeat yourself.
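For illustration, such a helper might look like the sketch below. This is not from the original answer: readChunk is a hypothetical name, and it simply bundles one recv() call with the error handling and the length-aware assign described above (it relies on the socket and <iostream>/<cstdio> headers the question's code already uses).

// Wraps a single recv() call plus the shared error handling.
// Returns the number of bytes read, or -1 on error / closed connection.
ssize_t readChunk(int fd, char *buffer, size_t bufsize, const char *who, std::string &out)
{
    ssize_t len = recv(fd, buffer, bufsize, 0);
    if (len < 0) {
        perror(who);
        return -1;
    }
    if (len == 0) {
        std::cerr << who << ", Connection closed by server\n";
        return -1;
    }
    out.assign(buffer, len); // length-aware assign; does not rely on a trailing '\0'
    return len;
}

Both call sites in sendRequest could then collapse to something like:

if (readChunk(socketfd, buffer, sizeof(buffer), "#CouchDB::sendRequest", responseBody) < 0)
    return {-1, "ERROR"};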
Lastly, you play fast and loose with the result of find. You might not actually find what you're looking for and might get string::npos back instead, and that'd also cause you interesting problems.
Another thing: try -fsanitize=address (or some of the other sanitizer options documented for your compiler) if you're using gcc or clang, and/or run the program under valgrind. Your memory error may be far from the code that's crashing; those tools might help you get close to it.
And a very last note: your logic is messed up. You have to separate the reading of data from the parsing and keep a separate state machine for each. There is no guarantee that your first read gets the entire HTTP header, no matter how big that read is. And there is no guarantee that your header is less than a certain size either.
You have to keep reading until you've either read more than you're willing to accept for a header and treat it as an error, or until you get the CR LF CR LF at the end of the header (a sketch follows below).
Those last bits won't cause your code to crash, but will cause you to get spurious errors, especially in certain traffic scenarios, which means that they will likely not show up in testing.
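A minimal sketch of that kind of header read, for illustration only (readHttpHeader is a hypothetical helper, not part of your class; it assumes a blocking socket and the same socket/string headers your code already includes):

// Keep reading until the end-of-header marker "\r\n\r\n" has been seen,
// or until an upper bound is exceeded (then treat it as an error).
bool readHttpHeader(int fd, std::string &data, size_t maxHeader = 64 * 1024)
{
    char buffer[1024];
    while (data.find("\r\n\r\n") == std::string::npos) {
        if (data.size() > maxHeader)
            return false;                 // header unreasonably large
        ssize_t len = recv(fd, buffer, sizeof(buffer), 0);
        if (len <= 0)
            return false;                 // error or connection closed
        data.append(buffer, len);         // length-aware append
    }
    return true;                          // data holds the full header (and possibly part of the body)
}

Parsing the status code, the Content-Length / chunked framing, and the body would then operate on data, independent of how many recv() calls it took to fill it.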
I'm developing a program with a TCP connection, which consists of a server and multiple clients.
Server side
write(sockFd,message,strlen(message));
write(sockFd,message2,strlen(message2));
Client side
char msg[256];
bzero(msg, 256);
n = read (listenFd, msg, 255);
cout << "Msg1 " << msg << endl;
bzero(msg, 256);
n = read (listenFd, msg, 255);
cout << "Msg2 " << msg << endl;
The problem is that after the server write()s the two messages, the first read() on the client(s) may read both messages from the server.
For example, the output is Msg1 content-of-msg1content-of-msg2. Why can this happen?
TCP is a streaming protocol, so data comes as a stream; this means that each read will read as much data as there is in the buffer (or as much as you requested), regardless of how it was written.
So if you want to break the data down into packets or datagrams, you will need to do so with your own session-based protocol. One way of doing this is prefixing the outgoing data with 4 bytes of length. This way the reader can first read the length and thus know how big the rest of the datagram is.
It is because of the stream nature of socket writes and reads. You must implement your own technique for separating messages.
Some variants are:
Worse: implement your own message-end symbol.
Better: before the message, send the size of that message (measured in bytes). Then first read the size, and then read the number of bytes given by that size.
And use C++11; it's 2015, don't write old C-style code.
For example (error checking omitted for simplicity of the example):
Server side:
// choose this type once and never change it after first release!
using message_size_t = uint64_t;

void write_message(int sockfd, const std::string & message)
{
    message_size_t message_size{ static_cast<message_size_t>(message.size()) };
    // check error here:
    write(sockfd, &message_size, sizeof(message_size));
    // and here:
    write(sockfd, message.data(), message_size);
}

// use:
write_message(sockFd, message);
write_message(sockFd, message2);
Client side:
void read_bytes_internal(int sockfd, void * where, size_t size)
{
    auto dest = static_cast<char *>(where);
    auto remaining = size;
    while (remaining > 0) {
        // check error here (just_read <= 0 means error or closed connection):
        auto just_read = recv(sockfd, dest, remaining, 0);
        dest += just_read;
        remaining -= just_read;
    }
}

std::string read_message(int sockfd)
{
    message_size_t message_size{ 0 };
    read_bytes_internal(sockfd, &message_size, sizeof(message_size));
    std::string result(message_size, '\0');
    read_bytes_internal(sockfd, &result[0], message_size);
    return result;
}
// use:
cout << "Msg1 " << read_message(listenFd) << endl;
cout << "Msg2 " << read_message(listenFd) << endl;
(The code may contain small errors; I wrote it right here in the Stack Overflow answer window and didn't syntax-check it in an IDE.)
I'm currently working on a multiplayer game using sockets and I encountered some problems at the log-in.
Here's the server function - thread that deals with incoming messages from a user:
void Server::ClientThread(SOCKET Connection)
{
    char *buffer = new char[256];
    while (true)
    {
        ZeroMemory(buffer, 256);
        recv(Connection, buffer, 256, 0);
        cout << buffer << endl;

        if (strcmp(buffer, "StartLogIn"))
        {
            char* UserName = new char[256];
            ZeroMemory(UserName, 256);
            recv(Connection, UserName, 256, 0);

            char* Password = new char[256];
            ZeroMemory(Password, 256);
            recv(Connection, Password, 256, 0);

            cout << UserName << "-" << Password << " + " << endl;
            if (memcmp(UserName, "taigi100", sizeof(UserName)))
            {
                cout << "SMB Logged in";
            }
            else
                cout << "Wrong UserName";
        }

        int error = send(Connection, "0", 1, 0);
        // error = WSAGetLastError();
        if (error == SOCKET_ERROR)
        {
            cout << "SMB D/Ced";
            ExitThread(0);
        }
    }
}
And here is the function that sends the data from the client to the server:
if (LogInButton->isPressed())
{
    send(Srv->getsConnect(), "StartLogIn", 256, 0);

    const wchar_t* Usern = UserName->getText();
    const wchar_t* Passn = Password->getText();

    stringc aux = "";
    aux += Usern;
    char* User = (char*)aux.c_str();

    stringc aux2 = "";
    aux2 += Passn;
    char* Pass = (char*)aux2.c_str();

    if (strlen(User) > 0 && strlen(Pass) > 0)
    {
        send(Srv->getsConnect(), User, 256, 0);
        send(Srv->getsConnect(), Pass, 256, 0);
    }
}
I'm going to try to explain this as simply as possible. The first recv call in the while(true) loop on the server side initially receives "StartLogIn", but it does not enter the if until the next iteration of the loop. Because it loops again, the buffer changes to "taigi100" (a username I use) and only then does it enter the if, even though it shouldn't.
A way to fix this would be to make a send-recv handshake so that nothing else is sent until the previous message has been acknowledged.
I want to know if there are any other fast ways of solving this problem and why such weird behaviour happens.
Well, it's full of bugs.
Your overuse of new[]. OK, not a bug as such, but you are not deleting any of these allocations, and you could use either local stack buffer space or std::vector<char>.
You need to always check the result of any call to recv as you are not guaranteed to receive the number of bytes you are expecting. The number you specify is the size of the buffer, not the number of bytes you are expecting to get.
strcmp returns 0 if the strings match and non-zero if they do not (negative or positive depending on whether the first string compares less or greater). But it appears you are using non-zero to mean equal.
Not sure what stringc is. Some kind of conversion from wide string to string? In any case, I think send is const-correct so there is no need to cast the constness away.
The 3rd parameter of send is the number of bytes you are sending, not the capacity of your buffer. The user name and password are probably not 256 bytes. You do need to send them as a "packet", though, so the receiver knows what it is getting and when it has received a full packet, e.g. send a string like "User=vandamon\0". (And you need to check the return value of send too.) A small sketch of that follows below.
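For illustration only (a sketch, not taken from your code; the NUL-terminated "key=value" form is just one possible convention, and User is the char* built on your client side):

// Build a small "packet" and send exactly its bytes, including the terminating NUL
std::string packet = std::string("User=") + User;
packet.push_back('\0');
int sent = send(Srv->getsConnect(), packet.data(), static_cast<int>(packet.size()), 0);
if (sent == SOCKET_ERROR)
{
    // handle the error (e.g. via WSAGetLastError())
}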
Because send() and recv() calls may not match up, two very good habits to get into are (1) precede all variable-length data with a fixed-size length, and (2) only send the bare minimum needed.
So your initial send() call would be written as follows:
char const * const StartLogin = "StartLogIn";
short const StartLoginLength = static_cast<short>(strlen(StartLogin));
send(Srv->getsConnect(), reinterpret_cast<const char *>(&StartLoginLength), sizeof(short), 0);
send(Srv->getsConnect(), StartLogin, StartLoginLength, 0);
The corresponding receive code would then have to read two bytes and guarantee that it got them by checking the return value from recv() and retrying if not enough was received. Then it would loop a second time reading exactly that many bytes into a buffer.
int guaranteedRecv(SOCKET s, char *buffer, int expected)
{
    int totalReceived = 0;
    int received;

    while (totalReceived < expected)
    {
        received = recv(s, &buffer[totalReceived], expected - totalReceived, 0);
        if (received <= 0)
        {
            // Handle errors
            return -1;
        }
        totalReceived += received;
    }
    return totalReceived;
}
Note that this assumes a blocking socket. A non-blocking socket will return -1 if no data is available, and errno / WSAGetLastError() will say *WOULDBLOCK. If you want to go that route you'll have to handle this case specifically and find some way to block until data is available. Either that, or busy-wait for data by repeatedly calling recv(). UGH.
Anyway, you call this first with the address of a short (via reinterpret_cast<char *>) and expected == sizeof(short). Then you allocate enough space and call it a second time to get the payload. Beware of the lack of trailing NUL characters, unless you explicitly send them, which my code doesn't. A sketch of that sequence is below.
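For illustration, a minimal sketch of that receive sequence (not part of the original answer; it reuses guaranteedRecv from above and a std::vector instead of a raw new[], which needs <vector> and <string>):

// First read the fixed-size length prefix, then exactly that many payload bytes
short payloadLength = 0;
if (guaranteedRecv(Connection, reinterpret_cast<char *>(&payloadLength), sizeof(short)) < 0)
    return; // handle the error appropriately

std::vector<char> payload(payloadLength);
if (guaranteedRecv(Connection, payload.data(), payloadLength) < 0)
    return; // handle the error appropriately

// No trailing NUL was sent, so build the string from the exact byte count
std::string message(payload.begin(), payload.end());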
This question has been asked a number of times, I have noticed, but none of the solutions seem to apply to me. Before I continue, I will post a little bit of code:
// Await the response and stream it to the buffer, with a physical limit of 1024 ASCII characters
stringstream input;
char buffer[4096*2];

while (recv(sock, buffer, sizeof(buffer) - 1, MSG_WAITALL) > 0)
    input << buffer;
input << '\0';
// Close the TCP connection
close(sock);
freehostent(hostInfo);
And here is my request:
string data;
{
    stringstream bodyStream;
    bodyStream
        << "POST /api/translation/translate HTTP/1.1\n"
        << "Host: elfdict.com\n"
        << "Content-Type: application/x-www-form-urlencoded\n"
        << "Content-Length: " << (5 + m_word.length())
        << "\n\nterm=" << m_word;
    data = bodyStream.str();
}
cout << "Sending HTTP request: " << endl << data << endl;
I am very new to this sort of programming (and to Stack Overflow; I prefer to slog it out and bang my head against a wall until I solve problems myself, but I'm lost here!) and would really appreciate help working out why it takes so long. I've looked into setting the socket up so that it is non-blocking, but had issues getting that to work as expected. Maybe people here could point me in the right direction, if the non-blocking route is the way I need to go.
I have seen that a lot of people prefer to use libraries but I want to learn to do this!
I'm also new to programming on the Mac and to working with sockets. Probably not the best first project, but I've started now, so I wish to continue :) Any help would be nice!
Thank you in advance!
The reason it takes a long time to receive is that you tell the system to wait until it has received all the data you asked for, i.e. 8k bytes, or until there is an error on the connection or it is closed. This is what the MSG_WAITALL flag does.
One solution to this is to make the socket non-blocking, and do a continuous read in a loop until we get an error or the connection is closed.
How to make a socket non-blocking differs depending on the platform: on Windows it is done with the ioctlsocket function, on Linux and similar systems it is done with the fcntl function:
int flags = fcntl(sock, F_GETFL, 0);
flags |= O_NONBLOCK;
fcntl(sock, F_SETFL, flags);
Then you read from the socket like this:
std::stringstream input; // note: an istringstream cannot be written to, so use stringstream (or ostringstream)

for (;;)
{
    char buffer[256];
    ssize_t recvsize;

    recvsize = recv(sock, buffer, sizeof(buffer) - 1, 0);
    if (recvsize == -1)
    {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            break;    // An error
        else
            continue; // No more data at the moment
    }
    else if (recvsize == 0)
        break;        // Connection closed

    // Terminate buffer
    buffer[recvsize] = '\0';

    // Append to input
    input << buffer;
}
The problem with the above loop is that if no data is ever received, it will loop forever.
However, you have a much more serious problem in your code: You receive into a buffer, and then you append it to the stringstream, but you do not terminate the buffer. You do not need to terminate the string in the stream, it's done automatically, but you do need to terminate the buffer.
This can be solved like this:
int rc;
while ((rc = recv(sock, buffer, sizeof(buffer) - 1, MSG_WAITALL)) > 0)
{
    buffer[rc] = '\0';
    input << buffer;
}
The problem here happens because you are specifying the MSG_WAITALL flag. It forces recv to remain blocked until all of the requested bytes are received (sizeof(buffer) - 1 in your case, while the message being sent by the other party is obviously smaller), or until an error occurs, in which case it returns -1 with the errno variable set appropriately.
I think a preferable option would be to call recv without any flags in a loop until the socket on the other end is closed (recv returns 0) or some separator is received.
However, you should be careful with input << buffer, because recv might return only a small portion of data (for example, 20 bytes) on each iteration, so you should put exactly that amount of data into the string stream. The number of bytes received is returned by recv; a sketch is below.
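For illustration, a minimal sketch of that idea (no flags, length-aware append; it assumes the sock, buffer and input names from the code above):

ssize_t n;
while ((n = recv(sock, buffer, sizeof(buffer), 0)) > 0)
{
    // Append exactly the n bytes that were received; no manual terminator needed
    input.write(buffer, n);
}
if (n == 0)
{
    // Connection closed by the peer; input now holds the full response
}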