I am playing around with freeboard.io and trying to make a widget that pulls JSON data from a URL [TBD]. My original data source is an iMX6-based Wandboard running Linux that is connected to the internet. I want to write a C++ program on the Wandboard that opens a socket to [TBD] and sends UDP packets containing, for example, my sensor data. My JSON data structure is like this:
{
"sensor_a": 1100,
"sensor_b": 247,
"sensor_c": 0
}
Can you help me put my JSON data structure into an IP packet using C++ on Ubuntu Linux? I know how to serialize the data structure as ASCII, for example, and build a buffer to stuff into an IP packet, but I'm wondering if there is a standard way to do this for cloud services, or whether it will differ between Azure and AWS. Is some type of header info needed to "put" the data?
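For the UDP part of the question, here is a minimal sketch of sending the serialized JSON as a datagram payload on Linux. The kernel adds the UDP and IP headers for you; the address 192.0.2.1 and port 8080 below are placeholders for the [TBD] endpoint, not real values:
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    std::string json = "{\"sensor_a\": 1100, \"sensor_b\": 247, \"sensor_c\": 0}";

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(8080);                     // placeholder port
    inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr); // placeholder address

    // The UDP/IP headers are added by the kernel; the JSON text is the payload.
    sendto(fd, json.data(), json.size(), 0,
           reinterpret_cast<sockaddr *>(&dest), sizeof dest);
    close(fd);
}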
This is a very simple problem, and like all simple problems it needs no external libraries for serializing. As @Galik said above, your problem is how to send a string from client to server. Additionally, for your case you need a JSON parser on the server (any C or C++ parser from the JSON page will do; I use gason because it's fast and simple).
In TCP/IP socket programming you have to make the other side know how many bytes (characters in your case) to read.
I faced a similar case: sending JSON over the web.
Here's the example, a JSON "message":
https://github.com/pedro-vicente/lib_netsockets/blob/master/examples/json_message.cc
In this case, the size of the message is carried in a header with this format:
nbr_bytes#json_string
where "json_string" is the JSON text, "nbr_bytes" is the number of characters "json_string" has and "#" is a separator character.
how does the server parse this?
By reading 1 character at a time until the "#" separator is found, then converting that string into a number;
then make the socket API read "nbr_bytes" characters and exit
example
100#{json_txt....}
in this case "json_txt" has 100 characters
here's the code for the parser
std::string read_response(socket_t &socket)
{
int recv_size; // size in bytes received or -1 on error
size_t size_json = 0; //in bytes
std::string str_header;
std::string str;
//parse header one character at a time, looking for the separator #
//assume the size header is fewer than 20 digits
for (size_t idx = 0; idx < 20; idx++)
{
char c;
if ((recv_size = recv(socket.m_socket_fd, &c, 1, 0)) == -1)
{
std::cout << "recv error: " << strerror(errno) << std::endl;
return str;
}
if (c == '#')
{
break;
}
else
{
str_header += c;
}
}
//get size
size_json = static_cast<size_t>(atoi(str_header.c_str()));
//read from socket with known size
char *buf = new char[size_json];
if (socket.read_all(buf, size_json) < 0)
{
std::cout << "recv error: " << strerror(errno) << std::endl;
delete[] buf; //avoid leaking the buffer on the error path
return str;
}
std::string str_json(buf, size_json);
delete[] buf;
return str_json;
}
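For the sending side, here is a minimal sketch of how a message could be framed in that nbr_bytes#json_string format, using plain POSIX sockets rather than the library linked above (send_all and send_message are illustrative helper names, not an established API):
#include <string>
#include <sys/socket.h>

// Loop until every byte has been written, since send() may
// accept fewer bytes than requested.
bool send_all(int fd, const char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = send(fd, buf, len, 0);
        if (n <= 0)
            return false; // error or connection closed
        buf += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Frame the JSON text as nbr_bytes#json_string and send it.
bool send_message(int fd, const std::string &json)
{
    std::string framed = std::to_string(json.size()) + "#" + json;
    return send_all(fd, framed.data(), framed.size());
}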
Related
So I have created a Java server and a C++ client.
The Java server sends a message with a PrintWriter to the C++ client to execute a command (the data transfer is correct, no problems with that).
I'm using strcmp() to check whether the string the client received with recv() is the string I want, but when I try to check it, it doesn't work. I've tried printing out the received buffer and I don't see any problems.
Here is the code that receives and checks the buffer (C++; ignore some values because this is a small piece of the code):
char buffer[1024];
if (recv(s, buffer, sizeof(buffer), 0) == SOCKET_ERROR) {
cout << "Error CR#001" << endl;
return -1;
}
if (strcmp(buffer, "||clear") == 0) {
system("cls");
return 1;
}
In C++ you can use std::string for the buffer:
const ssize_t MAX_BYTES = 1024;
ssize_t noreceivedbytes;
std::string buffer(MAX_BYTES, '\0'); // sized (not merely reserved) so recv may write into it
noreceivedbytes = recv(s, &buffer[0], MAX_BYTES, 0);
if (noreceivedbytes <= 0) {
cout << "Error CR#001" << endl;
return -1;
}
buffer.resize(noreceivedbytes); // trim to the bytes actually received
if (buffer == "||clear") {
system("cls");
return 1;
}
Safer C solution for completeness:
#define MAX_BYTES 1024
ssize_t noreceivedbytes;
char buffer[MAX_BYTES];
noreceivedbytes = recv(s, buffer, MAX_BYTES - 1, 0);
if (noreceivedbytes <= 0) {
cout << "Error CR#001" << endl;
return -1;
}
buffer[noreceivedbytes] = '\0'; /* terminate so strcmp is safe */
if (strcmp(buffer, "||clear") == 0) {
system("cls");
return 1;
}
Please note:
This answer brings you only to the tip of the iceberg. There are many more things that could go wrong when dealing with sockets (as mentioned by others in comments).
recv() doesn't guarantee that the whole chunk of data sent from the server will be read completely. You could easily end up with partial strings like "||cle" or "||c".
The least thing you'll need to do is receive the bytes from the socket in a loop, until you have something at hand that you can reasonably parse and match.
The simplest way to do so is to define a very primitive protocol which precedes the payload data with its size (take care of endianness problems when converting the size, sent as an integer value, from the received data); see the sketch after these notes.
Having that at hand, you'll know exactly how many bytes you have to read until the payload chunk is complete, so that it can be parsed and compared reasonably.
How to do all that in detail would lead too far to be answered here. There are whole books written about the topic (I'd recommend Stevens, "Unix Network Programming").
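A minimal sketch of such a primitive protocol on the receiving side, assuming a 4-byte size prefix in network byte order (read_fully and recv_message are illustrative names, not an established API):
#include <arpa/inet.h> // ntohl: convert the length from network byte order
#include <cstdint>
#include <string>
#include <sys/socket.h>

// Loop until exactly len bytes have been received.
static bool read_fully(int fd, char *buf, size_t len)
{
    while (len > 0)
    {
        ssize_t n = recv(fd, buf, len, 0);
        if (n <= 0)
            return false; // error or connection closed
        buf += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Read one length-prefixed message: 4 bytes of size, then the payload.
bool recv_message(int fd, std::string &out)
{
    uint32_t netlen;
    if (!read_fully(fd, reinterpret_cast<char *>(&netlen), sizeof netlen))
        return false;
    uint32_t len = ntohl(netlen); // fix up endianness before using the size
    out.resize(len);
    return len == 0 || read_fully(fd, &out[0], len);
}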
Edit (solution)
I followed the advice of debugging with -fsanitize=address and valgrind. I only needed -fsanitize (which I had never heard of before) and found out what the problem was: there was a leftover call to a destructor in another function and the object was being destroyed twice. The memory was completely jeopardized at that point.
Thanks a lot for the help and the other recommendations too.
I'm writing code in C++ to talk to CouchDB using sockets (CouchDB is a database by Apache that has an HTTP API). I've created a whole class to deal with it; it's basically a socket client that connects and closes.
One of my functions sends an HTTP request and then reads the response and works with it. It works fine on the first call, but fails when I call it a second time.
But it's inconsistent where it fails: sometimes it's a SEGFAULT inside the function in one of the string functions, other times it's a SIGABRT in the return. I've marked the lines where it crashed with ->.
And the worst part is that it only fails when it runs for the "second" time, which is actually the 10th time. Explanation: when the class is instantiated a socket is created, sendRequest is called 8 times (all work, always), and I close the socket. Then I have another class that controls a socket server, which receives commands and creates a remote user object that executes the command; the remote user command then calls the CouchDB class to manipulate the DB. The first time a command is requested it works, but the second fails and crashes the program.
Extra info: on the short int httpcode line, the gdb trace shows a crash in substr; on the SIGABRT crash, the trace shows a problem in free().
I've already debugged many times and made some changes as to where and how to instantiate the string and the buffer, and I'm lost. Does anyone know why it would work fine many times but crash on a subsequent call?
CouchDB::response CouchDB::sendRequest(std::string req_method, std::string req_doc, std::string msg)
{
std::string responseBody;
char buffer[1024];
// zero message buffer
memset(buffer, 0, sizeof(buffer));
std::ostringstream smsg;
smsg << req_method << " /" << req_doc << " HTTP/1.1\r\n"
<< "Host: " << user_agent << "\r\n"
<< "Accept: application/json\r\n"
<< "Content-Length: " << msg.size() << "\r\n"
<< (msg.size() > 0 ? "Content-Type: application/json\r\n" : "")
<< "\r\n"
<< msg;
/*std::cout << "========== Request ==========\n"
<< smsg.str() << std::endl;*/
if (sendData((void*)smsg.str().c_str(), smsg.str().size())) {
perror("#CouchDB::sendRequest, Error writing to socket");
std::cerr << "#CouchDB::sendRequest, Make sure CouchDB is running in " << user_agent << std::endl;
return {-1, "ERROR"};
}
// response
int len = recv(socketfd, buffer, sizeof(buffer), 0);
if (len < 0) {
perror("#CouchDB::sendRequest, Error reading socket");
return {-1, "ERROR"};
}
else if (len == 0) {
std::cerr << "#CouchDB::sendRequest, Connection closed by server\n";
return {-1, "ERROR"};
}
responseBody.assign(buffer);
// HTTP code is the second thing after the protocol name and version
-> short int httpcode = std::stoi(responseBody.substr(responseBody.find(" ") + 1));
bool chunked = responseBody.find("Transfer-Encoding: chunked") != std::string::npos;
/*std::cout << "========= Response =========\n"
<< responseBody << std::endl;*/
// body starts after two CRLF
responseBody = responseBody.substr(responseBody.find("\r\n\r\n") + 4);
// chunked means that the response comes in multiple packets
// we must keep reading the socket until the server tells us it's over, or an error happen
if (chunked) {
std::string chunkBody;
unsigned long size = 1;
while (size > 0) {
while (responseBody.length() > 0) {
// chunked requests start with the size of the chunk in HEX
size = std::stoi(responseBody, 0, 16);
// the chunk is on the next line
size_t chunkStart = responseBody.find("\r\n") + 2;
chunkBody += responseBody.substr(chunkStart, size);
// next chunk might be in this same request, if so, there must have something after the next CRLF
responseBody = responseBody.substr(chunkStart + size + 2);
}
if (size > 0) {
len = recv(socketfd, buffer, sizeof(buffer), 0);
if (len < 0) {
perror("#CouchDB::sendRequest:chunked, Error reading socket");
return {-1, "ERROR"};
}
else if (len == 0) {
std::cerr << "#CouchDB::sendRequest:chunked, Connection closed by server\n";
return {-1, "ERROR"};
}
responseBody.assign(buffer);
}
}
// move created body from chunks to responseBody
-> responseBody = chunkBody;
}
return {httpcode, responseBody};
}
The function that calls the above, and that sometimes raises the SIGABRT:
bool CouchDB::find(Database::db db_type, std::string keyValue, std::string &value)
{
if (!createSocket()) {
return false;
}
std::ostringstream doc;
std::ostringstream json;
doc << db_name << db_names[db_type] << "/_find";
json << "{\"selector\":{" << keyValue << "},\"limit\":1,\"use_index\":\"index\"}";
-> CouchDB::response status = sendRequest("POST", doc.str(), json.str());
close(socketfd);
if (status.httpcode == 200) {
value = status.body;
return true;
}
return false;
}
Some bits that you might have questions about:
CouchDB::response is a struct {httpcode: int, body: std::string}
CouchDB::db is an enum to choose among different databases
sendData just sends bytes, looping until all bytes are sent
Your int len = recv(socketfd, buffer, sizeof(buffer), 0); might be overwriting the last '\0' in your buffer. One might be tempted to use sizeof(buffer) - 1, but this would be wrong, as you might be getting null bytes in your stream. So do this instead: responseBody.assign(buffer, len);. Only do this, of course, after you've made sure len >= 0, which you do in your error checks.
You have to do that in every place where you call recv. Though why you're using recv instead of read is beyond me, since you aren't using any of the flags.
Also, your buffer memset is pointless if you do it my way. You should also declare your buffer right before you use it. I had to read through half your function to figure out whether you did anything with it. Though, of course, you do end up using it a second time.
Heck, since your error handling is basically identical in both cases, I would just make a function that does it. Don't repeat yourself.
Lastly, you play fast and loose with the result of find. You might not actually find what you're looking for and might get string::npos back instead, and that'd also cause you interesting problems.
Another thing: try -fsanitize=address (or some of the other sanitize options documented in the GCC and Clang manuals) if you're using gcc or clang, and/or run it under valgrind. Your memory error may be far from the code that's crashing; those tools might help you get close to it.
And, a very last note. Your logic is totally messed up. You have to separate out your data reading from your parsing and keep a different state machine for each. There is no guarantee that your first read gets the entire HTTP header, no matter how big that read is. And there is no guarantee that your header is less than a certain size either.
You have to keep reading until you've either read more than you're willing to accept for the header and consider it an error, or until you get the CR LF CR LF at the end of the header; a sketch combining this with the error-handling helper suggested above follows below.
Those last bits won't cause your code to crash, but they will cause spurious errors, especially in certain traffic scenarios, which means they will likely not show up in testing.
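Here is a rough sketch of those two suggestions: a shared error-checking helper (per the "don't repeat yourself" point) plus a loop that reads until the header terminator arrives. The names and the size limit are illustrative, not from the question's code:
#include <cstdio>
#include <iostream>
#include <string>
#include <sys/socket.h>

// Shared helper: one recv with uniform error reporting.
static ssize_t recv_checked(int fd, char *buf, size_t len, const char *where)
{
    ssize_t n = recv(fd, buf, len, 0);
    if (n < 0)
        perror(where);
    else if (n == 0)
        std::cerr << where << ": connection closed by server\n";
    return n;
}

// Keep reading until the whole header (terminated by CR LF CR LF) has
// arrived, or give up once we've read more than we're willing to accept.
static bool read_header(int fd, std::string &data, size_t max_header = 64 * 1024)
{
    char buf[1024];
    while (data.find("\r\n\r\n") == std::string::npos)
    {
        if (data.size() > max_header)
            return false; // header too large, treat it as an error
        ssize_t n = recv_checked(fd, buf, sizeof buf, "read_header");
        if (n <= 0)
            return false;
        data.append(buf, static_cast<size_t>(n)); // append exactly n bytes
    }
    return true;
}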
I am reading an image URL sent from a Java client to a C++ server over sockets. The server stops reading through recv() when it detects a null character in the char buffer[], as I do in the following code:
void * SocketServer::clientController(void *obj)
{
// Retrieve client connection information
dataSocket *data = (dataSocket*) obj;
// Receive data from a client step by step and append data in String message
string message;
int bytes = 0;
do
{
char buffer[12] = {0};
bytes = recv(data->descriptor, buffer, 12, 0);
if (bytes > 0) // Build message
{
message.append(buffer, bytes);
cout << "Message: " << message << endl;
}
else // Error when receiving it
cout << "Error receiving image URL" << endl;
// Check if we are finished reading the image link
unsigned int i = 0;
bool finished = false;
while (i < sizeof(buffer) / sizeof(buffer[0]) && !finished)
{
finished = buffer[i] == '\0';
i++;
}
if (finished)
break;
}
while (bytes > 0);
cout << message << endl;
close(data->descriptor);
pthread_exit(NULL);
}
Is there a better and more elegant way to do this?
I read about sending the size of the URL first, but I do not know exactly how to stop recv() with it. I guess it is done by counting the bytes received until the size of the URL is reached; at that moment, we should be finished reading.
Another approach could be closing the Java socket so that recv() will return 0 and the loop will finish. However, considering my Java client waits for a response from the C++ server, closing the socket and then reopening it does not seem a suitable option.
Thank you,
Héctor
Apart from the facts that your buffer has an unusual size (one typically chooses a power of two, e.g. 8, 16, 32, ...) and that it looks a little small for your intent, your approach seems fine to me:
I assume that your Java client sends a null-terminated string and then waits, i.e. it does not send any further data. So after you have received the 0 character, there won't be any more data to receive anyway, so there is no need to handle explicitly something that recv does implicitly (recv normally returns only the data available, even if that is less than the buffer could hold).
Be aware that you initialized buffer with 0, so if you check the entire buffer (instead of the range [buffer, buffer + bytes)), you might detect a false positive whenever you receive fewer than 12 characters in an iteration! Detection of the 0 character can be done more elegantly, though:
if (std::find(buffer, buffer + bytes, '\0') != buffer + bytes)
{
// found the 0 character!
break;
}
(std::find comes from the <algorithm> header.)
I'm developing a program with a TCP connection, which consists of a server and multiple clients.
Server side
write(sockFd,message,strlen(message));
write(sockFd,message2,strlen(message2));
Client side
char msg[256];
bzero(msg, 256);
n = read (listenFd, msg, 255);
cout << "Msg1 " << msg << endl;
bzero(msg, 256);
n = read (listenFd, msg, 255);
cout << "Msg2 " << msg << endl;
The problem is that after the server write()s the two messages, the first read() on the client(s) may read both messages from the server.
For example, the output is Msg1 content-of-msg1content-of-msg2. Why can this happen?
TCP is a streaming protocol, so data comes as a stream; this means that each read will read as much data as there is in the buffer (or as much as you requested).
So if you want to break the data down into packets or datagrams, you will need to do so with your own session-based protocol. One way of doing this is prefixing the outgoing data with 4 bytes of length. This way the reader can first read the length and thereby know how big the rest of the datagram is.
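For instance, a minimal sketch of the sending side of that scheme (a 4-byte length in network byte order, then the payload; partial-write handling is omitted here for brevity):
#include <arpa/inet.h> // htonl: convert the length to network byte order
#include <cstdint>
#include <string>
#include <unistd.h>

// Prefix the payload with a 4-byte big-endian length, then send the payload.
void write_prefixed(int fd, const std::string &payload)
{
    uint32_t len = htonl(static_cast<uint32_t>(payload.size()));
    write(fd, &len, sizeof len);               // 4-byte length prefix
    write(fd, payload.data(), payload.size()); // then the payload itself
}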
It is because of the stream nature of socket writes and reads. You must implement your own technique to distinguish messages.
Some variants are:
Worse: implement your own end-of-message symbol.
Better: before the message, send the size of that message (measured in bytes). On the receiving side, first read the size, then read the number of bytes given by that size.
And use C++11; it's 2015, don't write in the old C style.
For example (error checking omitted for simplicity of the example):
Server side:
// choose this type once and never change it after first release!
using message_size_t = uint64_t;
void write_message(int sockfd, const std::string & message)
{
message_size_t message_size{ static_cast<message_size_t>(message.size()) };
// check error here:
write(sockfd, &message_size, sizeof(message_size));
// and here:
write(sockfd, message.data(), message_size);
}
// use:
write_message(sockFd, message);
write_message(sockFd, message2);
Client side:
void read_bytes_internal(int sockfd, void * where, size_t size)
{
auto ptr = static_cast<char *>(where);
auto remaining = size;
while (remaining > 0) {
// check error here (recv returns -1 on error and 0 on a closed connection)
auto just_read = recv(sockfd, ptr, remaining, 0);
ptr += just_read;
remaining -= just_read;
}
}
std::string read_message(int sockfd)
{
message_size_t message_size{ 0 };
read_bytes_internal(sockfd, &message_size, sizeof(message_size));
std::string result(message_size, '\0'); // note: parentheses (fill constructor), not braces
read_bytes_internal(sockfd, &result[0], message_size);
return result;
}
// use:
cout << "Msg1 " << read_message(listenFd) << endl;
cout << "Msg2 " << read_message(listenFd) << endl;
(The code may contain some small errors; I wrote it right here in the Stack Overflow answer window and didn't syntax-check it in an IDE.)
I have a relatively simple web server I have written in C++. It works fine for serving text/html pages, but the way it is written it seems unable to send binary data, and I really need to be able to send images.
I have been searching and searching but can't find an answer specific to this question that is written in real C++ (fstream as opposed to using file pointers etc.), and whilst this kind of thing is necessarily low level and may well require handling bytes in a C-style array, I would like the code to be as C++ as possible.
I have tried a few methods; this is what I currently have:
int sendFile(const Server* serv, const ssocks::Response& response, int fd)
{
// some other stuff to do with headers etc. ........ then:
// open file
std::ifstream fileHandle;
fileHandle.open(serv->mBase + WWW_D + resource.c_str(), std::ios::binary);
if(!fileHandle.is_open())
{
// error handling code
return -1;
}
// send file
ssize_t buffer_size = 2048;
char buffer[buffer_size];
while(!fileHandle.eof())
{
fileHandle.read(buffer, buffer_size);
status = serv->mSock.doSend(buffer, fd);
if (status == -1)
{
std::cerr << "Error: socket error, sending file\n";
return -1;
}
}
return 0;
}
And then elsewhere:
int TcpSocket::doSend(const char* message, int fd) const
{
if (fd == 0)
{
fd = mFiledes;
}
ssize_t bytesSent = send(fd, message, strlen(message), 0);
if (bytesSent < 1)
{
return -1;
}
return 0;
}
As I say, the problem is that when the client requests an image it won't work. I get on std::cerr: "Error: socket error, sending file"
EDIT: I got it working using the advice in the answer I accepted. For completeness, and to help those finding this post, I am also posting the final working code.
For sending I decided to use a std::vector rather than a char array, primarily because I feel it is a more C++ approach and it makes it clear that the data is not a string. This is probably not necessary, but a matter of taste. I then counted the bytes read from the stream and passed that over to the send function like this:
// send file
std::vector<char> buffer(SEND_BUFFER);
while(!fileHandle.eof())
{
fileHandle.read(&buffer[0], SEND_BUFFER);
status = serv->mSock.doSend(&buffer[0], fd, fileHandle.gcount());
if (status == -1)
{
std::cerr << "Error: socket error, sending file\n";
return -1;
}
}
Then the actual send function was adapted like this:
int TcpSocket::doSend(const char* message, int fd, size_t size) const
{
if (fd == 0)
{
fd = mFiledes;
}
ssize_t bytesSent = send(fd, message, size, 0);
if (bytesSent < 1)
{
return -1;
}
return 0;
}
The first thing you should change is the while (!fileHandle.eof()) loop, because that will not work as you expect it to; in fact it will iterate once too many times, because the eof flag isn't set until after you try to read from beyond the end of the file. Instead do e.g. while (fileHandle.read(...)).
The second thing you should do is check how many bytes were actually read from the file, and only send that amount of bytes, as sketched below.
Lastly, you read binary data, not text, so you can't use strlen on the data you read from the file.
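Putting the first two points together, the read loop could look something like this sketch (gcount() reports how many bytes the last read actually extracted; doSend here is assumed to be the three-argument version from the question's edit above):
char buffer[2048];
while (fileHandle.read(buffer, sizeof buffer) || fileHandle.gcount() > 0)
{
    // send exactly the bytes read; the last chunk is usually shorter
    std::streamsize n = fileHandle.gcount();
    if (serv->mSock.doSend(buffer, fd, static_cast<size_t>(n)) == -1)
        return -1;
}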
A little explanation of the binary file problem: as you should hopefully know, C-style strings (the ones you use strlen to get the length of) are terminated by a zero character '\0' (in short, a zero byte). Random binary data can contain lots of zero bytes anywhere inside it; there, a zero is a valid byte and doesn't have any special meaning.
When you use strlen to get the length of binary data there are two possible problems:
There's a zero byte in the middle of the data. This will cause strlen to terminate early and return the wrong length.
There's no zero byte in the data. That will cause strlen to go beyond the end of the buffer to look for the zero byte, leading to undefined behavior.
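To make the first case concrete, here is a tiny illustration (the byte values are made up for the example):
#include <cstring>
#include <iostream>

int main()
{
    // 8 bytes of "binary" data with an embedded zero byte
    const char data[] = { 'G', 'I', 'F', '\0', 0x0D, 0x0A, 0x1A, 0x0A };
    // strlen stops at the first zero byte: it reports 3, not 8
    std::cout << std::strlen(data) << '\n';
}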