UDP client socket reads less data than sent - C++

I am facing a very strange problem. I have a server application that runs a UDP socket and waits for incoming data. As soon as it gets a command, it begins to send back a stream. Just for testing, I limited the server to sending a single piece of data 8000 bytes long. I don't provide the server code since it works as expected: it receives the command and sends data back, which I can see in Wireshark. My problem is on the client side.
The issue: I instantiate a non-blocking client UDP socket and send "Hello" to the server, which responds with 8000 bytes of data. I try to read the data in a loop in chunks of 1024 bytes, but only one chunk ever gets read; every subsequent recv returns -1 indefinitely. If I ask recv for 8000 bytes, I read all of them successfully; if I ask for 8100 bytes, I read the 8000 bytes that were sent. In other words, only one call to recv succeeds, and all subsequent calls return an error even though (I thought) not all the data had been read yet.
Here is the simplified code:
class ClienSocket
{
public:
void Init()
{
m_poll = {};
m_poll.fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
if(m_poll.fd == -1)
{
throw std::runtime_error(GetLastError());
}
int optval = 1;
setsockopt(m_poll.fd, SOL_SOCKET, SO_REUSEADDR, static_cast<const void *>(&optval), sizeof(int));
int on = 1;
if(ioctl(m_poll.fd, FIONBIO, &on) < 0)
{
throw std::runtime_error(std::string("failed to set the client socket non-blocking: ") + strerror(errno));
}
}
void Run()
{
m_servaddr.sin_family = AF_INET;
m_servaddr.sin_addr.s_addr = inet_addr(m_address.c_str());
m_servaddr.sin_port = htons(static_cast<uint16_t>(m_port));
m_poll.events = POLLIN;
m_running = true;
if(pthread_create(&m_readThread, nullptr, &ClienSocket::ReadThreadWrapper, this) != 0)
{
m_running = false;
throw std::runtime_error(std::string("thread creating error");
}
}
void Write(const char *data, size_t size)
{
sendto(m_poll.fd, data, size, MSG_NOSIGNAL, reinterpret_cast<const struct sockaddr *>(&(m_servaddr)), sizeof(sockaddr_in));
}
static void *ReadThreadWrapper(void *ptr)
{
ClienSocket *instance = static_cast<ClienSocket *>(ptr);
if(instance != nullptr)
{
return instance->ReadThreadFunc();
}
return nullptr;
}
void *ReadThreadFunc()
{
while(m_running)
{
int retval = poll(&m_poll, 1, 1000);
if(retval > 0)
{
if(m_poll.revents == POLLIN)
{
bool readMore = true;
do
{
ssize_t readBytes = recv(m_poll.fd, m_readBuffer, READ_BUFFER_SIZE, 0);
std::cout << readBytes << ", " << errno << std::endl;
if (readBytes < 0)
{
if (errno != EWOULDBLOCK)
{
throw std::runtime_error(std::string("socket error");
}
}
else if(readBytes == 0)
{
readMore = false;
}
else
{
ProcessData(m_readBuffer, readBytes);
}
}
while(readMore == true);
}
}
}
return nullptr;
}
void Wait()
{
if(m_running)
{
pthread_join(m_readThread, nullptr);
}
}
void ProcessData(const char *data, size_t length)
{
std::cout << length << std::endl;
}
private:
bool m_running = false;
int m_port = 3335;
std::string m_address = "192.168.5.1";
struct sockaddr_in m_servaddr;
pollfd m_poll = {};
pthread_t m_readThread;
static constexpr size_t READ_BUFFER_SIZE = 1024;
char m_readBuffer[READ_BUFFER_SIZE];
};
The testcase:
ClienSocket client;
client.Init();
client.Run();
client.Write("hello", 5);
client.Wait();
According to Wireshark, 8000 bytes were sent.
Target system: Ubuntu 22.04
The output:
1024, 0
-1, 11
-1, 11
-1, 11
-1, 11
-1, 11
...

I'm trying to read data in a loop in chunks of 1024 bytes.
That will not work with UDP, because UDP is message-oriented rather than stream-oriented like TCP.
In UDP there is a 1:1 relationship between sends and reads. If the UDP server sends a single 8000-byte message, the client must receive the entire message in a single read; it cannot receive it across multiple reads, as you are attempting to do.
If the buffer you are reading into is too small to hold the entire message, the excess bytes are discarded and cannot be recovered. On Linux the read succeeds but returns only what fits in the buffer (your first recv returning 1024 bytes); on some platforms the read instead fails with an EMSGSIZE error.
That is why your subsequent reads fail with an EWOULDBLOCK/EAGAIN error code: there is no data available to read until the server sends a new message.
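For illustration, here is a minimal sketch (not the poster's code) of reading one whole datagram per recv() call; the fd variable stands in for m_poll.fd, and the 65535-byte buffer is simply the largest possible UDP payload. On Linux, passing MSG_TRUNC makes recv() report the full datagram length even when it did not fit, so truncation can be detected:
constexpr size_t MAX_DATAGRAM = 65535;
char buffer[MAX_DATAGRAM];
// One recv() consumes exactly one datagram (or nothing, on a non-blocking socket).
ssize_t n = recv(fd, buffer, sizeof(buffer), MSG_TRUNC);
if (n < 0)
{
    if (errno != EWOULDBLOCK && errno != EAGAIN)
        throw std::runtime_error(std::string("socket error: ") + strerror(errno));
    // no datagram waiting right now
}
else if (static_cast<size_t>(n) > sizeof(buffer))
{
    // the datagram was larger than the buffer and the excess was discarded
}
else
{
    ProcessData(buffer, static_cast<size_t>(n));
}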

Related

C++ TCP server: FD_ISSET() does not always work in a basic server

This is my implementation of a soon-to-be HTTP server.
void Webserv::resetFdSets()
{
int fildes;
FD_ZERO(&to_read);
FD_ZERO(&to_write);
max_fd = -1;
Server_map::iterator it;
for (it = servers.begin(); it != servers.end(); ++it) //set server sockets to be read
{
fildes = it->second.getFd();
FD_SET(fildes, &to_read);
if (fildes > max_fd)
max_fd = fildes;
}
std::list<int>::iterator iter;
for (iter = accepted.begin(); iter != accepted.end(); ++iter) // set client sockets if any
{
fildes = (*iter);
FD_SET(fildes, &to_read);
FD_SET(fildes, &to_write);
if (fildes > max_fd)
max_fd = fildes;
}
}
accept()
void Webserv::acceptConnections()
{
int client_fd;
sockaddr_in cli_addr;
socklen_t cli_len = sizeof(sockaddr_in);
Server_map::iterator it;
for (it = servers.begin(); it != servers.end(); ++it)
{
ft_bzero(&cli_addr, sizeof(cli_addr));
if (FD_ISSET(it->second.getFd(), &to_read)) // if theres data in server socket
{
client_fd = accept(it->second.getFd(), reinterpret_cast<sockaddr*>(&cli_addr), &cli_len);
if (client_fd > 0) // accept and add client
{
fcntl(client_fd, F_SETFL, O_NONBLOCK);
accepted.push_back(client_fd);
}
else
throw (std::runtime_error("accept() failed"));
}
}
}
recv()
void Webserv::checkReadSockets()
{
char buf[4096];
std::string raw_request;
ssize_t bytes;
static int connections;
std::list<int>::iterator it;
for (it = accepted.begin(); it != accepted.end(); ++it)
{
if (FD_ISSET(*it, &to_read))
{
++connections;
std::cout << "Connection counter : " << connections << std::endl;
while ((bytes = recv(*it, buf, sizeof(buf) - 1, MSG_DONTWAIT)) > 0)
{
buf[bytes] = '\0';
raw_request += buf;
}
std::cout << raw_request << std::endl;
}
}
}
send()
void Webserv::checkWriteSockets()
{
char buf[8096];
char http_success[] = {
"HTTP/1.1 200 OK\r\n"
"Server: This op server\r\n"
"Content-Length: 580\r\n"
"Content-Type: text/html\r\n"
"Connection: Closed\r\n\r\n"
};
std::list<int>::iterator it = accepted.begin();
while (it != accepted.end())
{
int cliSock = (*it);
if (FD_ISSET(cliSock, &to_write))
{
int response_fd = open("hello.html", O_RDONLY);
int bytes = read(response_fd, buf, sizeof(buf));
send(cliSock, http_success, sizeof(http_success) - 1, MSG_DONTWAIT); // header
send(cliSock, buf, bytes, MSG_DONTWAIT); // hello world
close(cliSock);
close(response_fd);
it = accepted.erase(it);
}
else
++it;
}
}
main loop:
void Webserv::operate()
{
int rc;
while (true)
{
resetFdSets(); // add server and client fd for select()
if ((rc = select(max_fd + 1, &to_read, &to_write, NULL, NULL)) > 0) // if any of the ports are active
{
acceptConnections(); // create new client sockets with accept()
checkReadSockets(); // parse and route client requests recv() ...
checkWriteSockets(); // send() response and delete client
}
else if (rc < 0)
{
std::cerr << "select() failed" << std::endl;
exit(1);
}
}
}
The same code highlighted + Webserv class definition on pastebin: https://pastebin.com/9B8uYumF
For now the algorithm is this:
reset the fd_sets for reading and writing.
if any of the file descriptors trigger select(), check whether the server FDs are set, accept connections, and store them in a list of ints.
if any of the client FDs are FD_ISSET() for reading, read their request and print it.
finally, if any of the client FDs are ready for writing, dump an HTML page to them and close the connection.
While checking all the client descriptors, FD_ISSET() returns false 80% of the time when it shouldn't, so I cannot get client requests consistently. Strangely enough, it returns true for the write fd_set much more often. Here's a demo with 2 out of 10 successful reads using the above code: https://imgur.com/a/nzc21LV
I know that new clients are always accepted, because the if condition in acceptConnections() is always entered. In other words, the listening FDs are initialized correctly.
edit: Here's how it's initialized:
void Server::init()
{
if ((listen_fd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
{
throw (std::runtime_error("socket() failed"));
}
fcntl(listen_fd, F_SETFL, O_NONBLOCK);
int yes = 1;
setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int));
if (bind(listen_fd, reinterpret_cast<sockaddr*>(&server_addr), sizeof(server_addr)))
{
throw (std::runtime_error("bind() failed"));
}
if (listen(listen_fd, 0))
throw (std::runtime_error("listen() failed"));
}

Sending files over TCP sockets C++ | Windows [duplicate]

This question already has answers here:
C: send file to socket
(4 answers)
Closed 2 years ago.
I want to send files over TCP sockets in C++ on Windows. Everything works fine, except that I can't send big files this way. I understand that TCP, like any protocol, has its limitations; for example, I can't send more than 64KB per packet. My method works for small files (tested up to 12KB), but I would like to send LARGE files, like an ISO image of Ubuntu or Windows, which are certainly bigger than a dozen fully packed packets.
Server
int filesize = 0;
int err = recv(conn, (char*)&filesize, sizeof(filesize), 0);
if (err <= 0)
{
printf("recv: %d\n", WSAGetLastError());
clean(conn);
}
printf("recv %d bytes [OK]\n", err);
char* buffer = new char[filesize];
ZeroMemory(buffer, filesize);
err = recv(conn, buffer, filesize, MSG_WAITALL);
if (err <= 0)
{
printf("recv: %d\n", WSAGetLastError());
clean(conn);
}
printf("recv %d bytes [OK]\n", err);
ofstream file("a.txt", ios::binary);
file.write(buffer, filesize);
delete[] buffer;
file.close();
Client
ifstream file("a.txt", ios::binary);
file.seekg(0, ios::end);
int size = file.tellg();
file.seekg(0, ios::beg);
char* buffer = new char[size];
file.read(buffer, size);
file.close();
int* fsize = &size;
int err = send(client, (char*)fsize, sizeof(int), 0);
if (err <= 0)
{
printf("send: %d\n", WSAGetLastError());
}
printf("send %d bytes [OK]\n", err);
err = send(client, buffer, size, 0);
if (err <= 0)
{
printf("send: %d\n", WSAGetLastError());
}
printf("send %d bytes [OK]\n", err);
delete[] buffer;
All values for both sides are initialised, and error handling is done well; if I had a problem there, I would have said so. I decided to use MSG_WAITALL because I guess it is suitable for this case. Please correct my code for receiving/sending and, if possible, refactor it. It would be nicer if it came with explanations, so that everybody could learn to code better, thanks)))
The one main point that should be taken away from the comments below your question is that send and recv are fickle. Just because you write send(buffer with 100 bytes) doesn't mean it's going to send 100 bytes. It could send 25 bytes, or 99 bytes, or fail completely. It's up to you to take the return value and compute what still needs to be sent.
The same goes for recv. If you write recv(buffer with 100 bytes) because you are expecting 100 bytes, it could grab only 25 bytes, or 99 bytes, or fail completely. Again, it's up to you to use that return value and compute what still needs to be received.
File I/O is completely different. If you want to write 100 bytes to a file, those 100 bytes are guaranteed to be written if the method doesn't fail. So folks who have worked with file I/O and then move to socket I/O usually end up confused about why things aren't sending or receiving correctly.
One of the trickier parts to socket programming is knowing how much data you will need to receive. You covered that by sending the length of the file first. The server will know to read in that value, then continue reading until that value is satisfied.
Some protocols, like HTTP, will use delimiters (in HTTP's case \r\n\r\n) to signal when a packet of data has ended. So, as a socket programmer, you would recv on a loop until those 4 bytes are read.
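As a rough sketch of that delimiter-driven approach (not part of the file-transfer example below, and assuming the usual Winsock headers plus <string> are included), you keep appending whatever recv() returns until the terminator shows up:
std::string ReadUntilHeaderEnd(SOCKET s) {
    std::string data;
    char buf[512];
    // Keep reading until the HTTP header terminator arrives.
    while (data.find("\r\n\r\n") == std::string::npos) {
        const int n = recv(s, buf, sizeof(buf), 0);
        if (n <= 0) { break; }      // error or connection closed
        data.append(buf, n);        // append only the bytes actually received
    }
    return data;                    // headers (and possibly the start of a body)
}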
I put together an example of how you could accomplish sending and receiving a large file (it will handle files up to 9,223,372,036,854,775,807 bytes in length). This isn't pure C++; I cheated in places because of lack of time, and I used some Windows-only constructs for the same reason.
So let's take a look at it:
int64_t GetFileSize(const std::string& fileName) {
// no idea how to get filesizes > 2.1 GB in a C++ kind-of way.
// I will cheat and use Microsoft's C-style file API
FILE* f;
if (fopen_s(&f, fileName.c_str(), "rb") != 0) {
return -1;
}
_fseeki64(f, 0, SEEK_END);
const int64_t len = _ftelli64(f);
fclose(f);
return len;
}
///
/// Receives data into buffer until bufferSize value is met
///
int RecvBuffer(SOCKET s, char* buffer, int bufferSize, int chunkSize = 4 * 1024) {
int i = 0;
while (i < bufferSize) {
const int l = recv(s, &buffer[i], __min(chunkSize, bufferSize - i), 0);
if (l < 0) { return l; } // this is an error
i += l;
}
return i;
}
///
/// Sends data in buffer until bufferSize value is met
///
int SendBuffer(SOCKET s, const char* buffer, int bufferSize, int chunkSize = 4 * 1024) {
int i = 0;
while (i < bufferSize) {
const int l = send(s, &buffer[i], __min(chunkSize, bufferSize - i), 0);
if (l < 0) { return l; } // this is an error
i += l;
}
return i;
}
//
// Sends a file
// returns size of file if success
// returns -1 if file couldn't be opened for input
// returns -2 if couldn't send file length properly
// returns -3 if file couldn't be sent properly
//
int64_t SendFile(SOCKET s, const std::string& fileName, int chunkSize = 64 * 1024) {
const int64_t fileSize = GetFileSize(fileName);
if (fileSize < 0) { return -1; }
std::ifstream file(fileName, std::ifstream::binary);
if (file.fail()) { return -1; }
if (SendBuffer(s, reinterpret_cast<const char*>(&fileSize),
sizeof(fileSize)) != sizeof(fileSize)) {
return -2;
}
char* buffer = new char[chunkSize];
bool errored = false;
int64_t i = fileSize;
while (i != 0) {
const int64_t ssize = __min(i, (int64_t)chunkSize);
if (!file.read(buffer, ssize)) { errored = true; break; }
const int l = SendBuffer(s, buffer, (int)ssize);
if (l < 0) { errored = true; break; }
i -= l;
}
delete[] buffer;
file.close();
return errored ? -3 : fileSize;
}
//
// Receives a file
// returns size of file if success
// returns -1 if file couldn't be opened for output
// returns -2 if couldn't receive file length properly
// returns -3 if couldn't receive file properly
//
int64_t RecvFile(SOCKET s, const std::string& fileName, int chunkSize = 64 * 1024) {
std::ofstream file(fileName, std::ofstream::binary);
if (file.fail()) { return -1; }
int64_t fileSize;
if (RecvBuffer(s, reinterpret_cast<char*>(&fileSize),
sizeof(fileSize)) != sizeof(fileSize)) {
return -2;
}
char* buffer = new char[chunkSize];
bool errored = false;
int64_t i = fileSize;
while (i != 0) {
const int r = RecvBuffer(s, buffer, (int)__min(i, (int64_t)chunkSize));
if ((r < 0) || !file.write(buffer, r)) { errored = true; break; }
i -= r;
}
delete[] buffer;
file.close();
return errored ? -3 : fileSize;
}
Sending and Receiving Buffers
At the top we have two methods that work with buffers in memory. You can pass them any buffer at any size (stay reasonable here), and those methods will send and receive until all the bytes passed in have been transmitted.
This does what I was talking about above. It takes the buffer and loops until all the bytes have been successfully sent or received. After these methods complete, you are guaranteed that all data is transmitted (as long as the return value is zero or positive).
You can define a "chunk size" which is the default size of the chunks of data the methods will use to send or receive data. I am sure these can be optimized by using more suitable values than what they are currently set at, but I don't know what those values are. It's safe to leave them at the default. I don't think that with the speed of today's computers you will notice too much of a difference if you change it to something else.
Sending and Receiving Files
The code for doing files is almost identical in nature to the buffer code. Same idea, except now we can assume that if the return value is greater than zero from the buffer methods then it was successful. So the code is a little simpler. I use a chunk size of 64KB... for no special reason. This time the chunk size determines how much data is read from the file I/O operations, not the sockets I/O.
Test Server and Client
Just to be complete, I used this code below to test this with a 5.3 GB file I have on disk. I basically just re-wrote Microsoft's client/server examples in a very slimmed down way.
#pragma comment(lib, "Ws2_32.lib")
#include <iostream>
#include <winsock2.h>
#include <ws2tcpip.h>
#include <fstream>
DWORD __stdcall ClientProc(LPVOID param) {
struct addrinfo hints = { 0 }, * result, * ptr;
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
if (getaddrinfo("127.0.0.1", "9001", &hints, &result) != 0) {
return ~0;
}
SOCKET client = INVALID_SOCKET;
for (ptr = result; ptr != NULL; ptr = ptr->ai_next) {
client = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
if (client == INVALID_SOCKET) {
// TODO: failed (don't just return, cleanup)
}
if (connect(client, ptr->ai_addr, (int)ptr->ai_addrlen) == SOCKET_ERROR) {
closesocket(client);
client = INVALID_SOCKET;
continue;
}
break;
}
freeaddrinfo(result);
if (client == INVALID_SOCKET) {
std::cout << "Couldn't create client socket" << std::endl;
return ~1;
}
int64_t rc = SendFile(client, "D:\\hugefiletosend.bin");
if (rc < 0) {
std::cout << "Failed to send file: " << rc << std::endl;
}
closesocket(client);
return 0;
}
int main()
{
WSADATA wsaData;
WSAStartup(MAKEWORD(2, 2), &wsaData);
{
struct addrinfo hints = { 0 };
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
hints.ai_protocol = IPPROTO_TCP;
hints.ai_flags = AI_PASSIVE;
struct addrinfo* result = NULL;
if (0 != getaddrinfo(NULL, "9001", &hints, &result)) {
// TODO: failed (don't just return, clean up)
}
SOCKET server = socket(result->ai_family, result->ai_socktype, result->ai_protocol);
if (server == INVALID_SOCKET) {
// TODO: failed (don't just return, clean up)
}
if (bind(server, result->ai_addr, (int)result->ai_addrlen) == SOCKET_ERROR) {
// TODO: failed (don't just return, clean up)
}
freeaddrinfo(result);
if (listen(server, SOMAXCONN) == SOCKET_ERROR) {
// TODO: failed (don't just return, clean up)
}
// start a client on another thread
HANDLE hClientThread = CreateThread(NULL, 0, ClientProc, NULL, 0, 0);
SOCKET client = accept(server, NULL, NULL);
const int64_t rc = RecvFile(client, "D:\\thetransmittedfile.bin");
if (rc < 0) {
std::cout << "Failed to recv file: " << rc << std::endl;
}
closesocket(client);
closesocket(server);
WaitForSingleObject(hClientThread, INFINITE);
CloseHandle(hClientThread);
}
WSACleanup();
return 0;
}

Winsock receiving random letters trough recv()

I am trying to make a Winsock chat. I want to send packets that sit between two "tags", kind of like "^^^TAG^^^ packet data ^^^TAG^^^".
The problem is that the client apps I am using, including the client app I wrote myself, either send the message wrong or my server app receives the data wrong.
Here is what I mean:
Using Hercules TCP Client
Using my own Client
I am aware of why it is split; that's what my tag idea is for. But if you compare what I sent with what I got, you will see there are added and replaced letters. At one point I even got the words I sent followed by "================================" and then other random Unicode characters, but I could not reproduce it for a screenshot.
Since most of the TCP clients I got off the internet didn't work, I assume the problem is with how I receive the packets rather than how I and the other programs are sending them.
My code:
Here's a rewritten, simplified version of my code:
struct client_info
{
SOCKET sock;
const char* ip;
int port;
};
struct server_info
{
SOCKET sock;
const char* ip;
int port;
std::vector<client_info> clients;
int client_count;
HANDLE connection_handler;
HANDLE recv_handler;
};
struct param_info
{
void* server_info_pointer;
};
class my_server
{
public:
my_server(const char* ip, int port)
{
this->m_info.ip = ip;
this->m_info.port = port;
this->start();
this->client_handler();
this->recv_packet();
}
~my_server(void)
{
}
private:
server_info m_info;
bool start(void)
{
WSADATA lpWsaData = decltype(lpWsaData){};
WSAStartup(MAKEWORD(2, 2), &lpWsaData);
this->m_info.sock = socket(AF_INET, SOCK_STREAM, 0);
sockaddr_in lpAddr = decltype(lpAddr){};
lpAddr.sin_family = AF_INET;
lpAddr.sin_addr.S_un.S_addr = inet_addr(this->m_info.ip);
lpAddr.sin_port = htons(this->m_info.port);
char chOption = 1;
setsockopt(this->m_info.sock, SOL_SOCKET, SO_REUSEADDR, &chOption, sizeof(chOption));
setsockopt(this->m_info.sock, IPPROTO_TCP, TCP_NODELAY, &chOption, sizeof(chOption));
if (!bind(this->m_info.sock, reinterpret_cast<sockaddr*>(&lpAddr), sizeof(lpAddr)))
{
return true;
}
closesocket(this->m_info.sock);
WSACleanup();
return false;
}
bool client_handler(void)
{
param_info pi = param_info{};
pi.server_info_pointer = &this->m_info;
if (this->m_info.connection_handler = CreateThread(nullptr, 0, reinterpret_cast<LPTHREAD_START_ROUTINE>
(this->client_handler_internal), &pi, 0, nullptr))
{
return true;
}
return false;
}
static void client_handler_internal(void* param)
{
auto pi = reinterpret_cast<param_info*>(param);
if (!listen(reinterpret_cast<server_info*>(pi->server_info_pointer)->sock, SOMAXCONN))
{
client_info ci = client_info{};
sockaddr_in lpAddr;
int dAddrSize = sizeof(lpAddr);
while (ci.sock = accept(reinterpret_cast<server_info*>(pi->server_info_pointer)->sock, reinterpret_cast<sockaddr*>(&lpAddr), &dAddrSize))
{
ci.ip = inet_ntoa(lpAddr.sin_addr);
ci.port = htons(lpAddr.sin_port);
printf("%s:%d joined!\n", ci.ip, ci.port);
reinterpret_cast<server_info*>(pi->server_info_pointer)->clients.push_back(ci);
memset(&ci, 0, sizeof(ci));
Sleep(100);
}
}
return;
}
auto __forceinline recv_packet(void) -> bool
{
param_info pi = param_info{};
pi.server_info_pointer = &this->m_info;
if (this->m_info.recv_handler = CreateThread(nullptr, 0, reinterpret_cast<LPTHREAD_START_ROUTINE>
(this->recv_packet_internal), &pi, 0, nullptr))
{
return true;
}
return false;
}
static void recv_packet_internal(void* param)
{
auto pi = reinterpret_cast<param_info*>(param);
for (;;)
{
for (int i = 0; i < reinterpret_cast<server_info*>(pi->server_info_pointer)->clients.size(); ++i)
{
char * lpBuffer = new char[64];
if (0 < recv(reinterpret_cast<server_info*>(pi->server_info_pointer)->clients.at(i).sock, lpBuffer, sizeof(lpBuffer), 0))
{
std::string lpNewBuffer = lpBuffer;
printf("%s\n", lpNewBuffer.c_str());
}
memset(lpBuffer, 0, sizeof(lpBuffer));
}
Sleep(50);
}
return;
}
};
if (0 < recv(reinterpret_cast<server_info*>(pi->server_info_pointer)->clients.at(i).sock, lpBuffer, sizeof(lpBuffer), 0))
You ignore the return value of recv, so your code has no idea how many bytes it received. Also, see below for why sizeof(lpBuffer) is wrong here.
memset(lpBuffer, 0, sizeof(lpBuffer));
Since lpBuffer is a char *, this zeroes sizeof(char *) bytes, which is not right. Only use sizeof when you need the size of a type. Also, why are you zeroing a buffer you already used and will never use again?
std::string lpNewBuffer = lpBuffer;
You should have used the return value from recv here to know how many bytes lpNewBuffer should be.
Don't treat things as strings if they're not strings. Store the return value of recv so you know how many bytes you received.
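A minimal corrected sketch of the receive path (variable names are placeholders, not the original member names): size the memset and recv against an actual array, and build the string from the byte count recv reports:
char buffer[64];
// sizeof(buffer) is correct here because buffer is an array, not a pointer.
int received = recv(clientSock, buffer, sizeof(buffer), 0);
if (received > 0)
{
    std::string packet(buffer, received);   // exactly the bytes that arrived
    printf("%s\n", packet.c_str());
}
else if (received == 0)
{
    // the peer closed the connection; remove this client
}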

does this c++ code have memory leaks?

I'm trying to understand this Libevent C++ code I got from this page.
I'm a bit confused: am I correct in thinking that this code might have memory leaks?
It seems like the ConnectionData pointer is created in the on_connect() callback, but delete is only called on a bad read or after the write completes.
What if a connection was accept()ed but there were no reads or writes? Does that pointer just stay in the daemon's memory?
#include <event.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <iostream>
// Read/write buffer max length
static const size_t MAX_BUF = 512;
typedef struct {
struct event ev;
char buf[MAX_BUF];
size_t offset;
size_t size;
} ConnectionData;
void on_connect(int fd, short event, void *arg);
void client_read(int fd, short event, void *arg);
void client_write(int fd, short event, void *arg);
int main(int argc, char **argv)
{
// Check arguments
if (argc < 3) {
std::cout << "Run with options: <ip address> <port>" << std::endl;
return 1;
}
// Create server socket
int server_sock = socket(AF_INET, SOCK_STREAM, 0);
if (server_sock == -1) {
std::cerr << "Failed to create socket" << std::endl;
return 1;
}
sockaddr_in sa;
int on = 1;
char * ip_addr = argv[1];
short port = atoi(argv[2]);
sa.sin_family = AF_INET;
sa.sin_port = htons(port);
sa.sin_addr.s_addr = inet_addr(ip_addr);
// Set option SO_REUSEADDR to reuse same host:port in a short time
if (setsockopt(server_sock, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) == -1) {
std::cerr << "Failed to set option SO_REUSEADDR" << std::endl;
return 1;
}
// Bind server socket to ip:port
if (bind(server_sock, reinterpret_cast<const sockaddr*>(&sa), sizeof(sa)) == -1) {
std::cerr << "Failed to bind server socket" << std::endl;
return 1;
}
// Make server to listen
if (listen(server_sock, 10) == -1) {
std::cerr << "Failed to make server listen" << std::endl;
return 1;
}
// Init events
struct event evserver_sock;
// Initialize
event_init();
// Set connection callback (on_connect()) to read event on server socket
event_set(&evserver_sock, server_sock, EV_READ, on_connect, &evserver_sock);
// Add server event without timeout
event_add(&evserver_sock, NULL);
// Dispatch events
event_dispatch();
return 0;
}
// Handle new connection {{{
void on_connect(int fd, short event, void *arg)
{
sockaddr_in client_addr;
socklen_t len = 0;
// Accept incoming connection
int sock = accept(fd, reinterpret_cast<sockaddr*>(&client_addr), &len);
if (sock < 1) {
return;
}
// Set read callback to client socket
ConnectionData * data = new ConnectionData;
event_set(&data->ev, sock, EV_READ, client_read, data);
// Reschedule server event
event_add(reinterpret_cast<struct event*>(arg), NULL);
// Schedule client event
event_add(&data->ev, NULL);
}
//}}}
// Handle client request {{{
void client_read(int fd, short event, void *arg)
{
ConnectionData * data = reinterpret_cast<ConnectionData*>(arg);
if (!data) {
close(fd);
return;
}
int len = read(fd, data->buf, MAX_BUF - 1);
if (len < 1) {
close(fd);
delete data;
return;
}
data->buf[len] = 0;
data->size = len;
data->offset = 0;
// Set write callback to client socket
event_set(&data->ev, fd, EV_WRITE, client_write, data);
// Schedule client event
event_add(&data->ev, NULL);
}
//}}}
// Handle client response {{{
void client_write(int fd, short event, void *arg)
{
ConnectionData * data = reinterpret_cast<ConnectionData*>(arg);
if (!data) {
close(fd);
return;
}
// Send data to client
int len = write(fd, data->buf + data->offset, data->size - data->offset);
if (len < data->size - data->offset) {
// Failed to send rest data, need to reschedule
data->offset += len;
event_set(&data->ev, fd, EV_WRITE, client_write, data);
// Schedule client event
event_add(&data->ev, NULL);
}
close(fd);
delete data;
}
//}}}
The documentation for event_set says that the only valid event types are EV_READ or EV_WRITE, but the callback will be invoked with EV_TIMEOUT, EV_SIGNAL, EV_READ, or EV_WRITE. The documentation is not clear, but I expect the read callback will be invoked when the socket is closed by the client. I expect the delete in the failure branch in client_read will handle this situation.
Note that that is only the case if the client sends a FIN or RST packet. A client could establish a connection and leave it open forever. For this reason, this code should be modified to have a timeout (perhaps via event_once) and require the client send a message within that timeout.
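A minimal sketch of that suggestion, assuming the libevent 1.x API already used above (the 30-second value is an arbitrary placeholder): pass a timeval to event_add() so the read event also fires with EV_TIMEOUT, and free the connection there:
void on_connect(int fd, short event, void *arg)
{
    sockaddr_in client_addr;
    socklen_t len = sizeof(client_addr);
    int sock = accept(fd, reinterpret_cast<sockaddr*>(&client_addr), &len);
    if (sock < 0) {
        return;
    }
    ConnectionData * data = new ConnectionData;
    event_set(&data->ev, sock, EV_READ, client_read, data);
    // Reschedule server event
    event_add(reinterpret_cast<struct event*>(arg), NULL);
    // Schedule client event with a timeout instead of waiting forever
    struct timeval timeout = {30, 0};
    event_add(&data->ev, &timeout);
}
void client_read(int fd, short event, void *arg)
{
    ConnectionData * data = reinterpret_cast<ConnectionData*>(arg);
    if (!data) {
        close(fd);
        return;
    }
    if (event & EV_TIMEOUT) {
        // idle client: release the connection instead of leaking it
        close(fd);
        delete data;
        return;
    }
    // ... the original read handling continues here ...
}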

Nonblocking sockets even if not explicitly setting them as nonblocking

I have a TCP application written in C++ in which a client and a server exchange data. I instantiated a socket, believing that it would be blocking by default; on the contrary, after the server waits for a client, the client calls the recv function without waiting for data. This is the code in which I initialize the socket for the client.
int TCPreceiver::initialize(char* address, int port)
{
sock = socket (AF_INET, SOCK_STREAM, 0);
cout << "Socket: " << sock << endl;
sockaddr_in target;
target.sin_family = AF_INET;
target.sin_port = htons (port);
target.sin_addr.s_addr = inet_addr(address);
int fails=0;
while (connect(sock, (sockaddr*) &target, sizeof(target)) == -1)
{
fails++;
if (fails==10)
{
close(sock);
cout << "Error with connection to the server, try again"<< endl;
exit(-1);
}
}
cout << "Client connected (control channel)" << endl;
unsigned char text[10]; //Request message
//fill text[]
if(send(sock, (char*)text, 10, 0)==-1)
{
printf("send() failed with error code : %d" , -1);
exit(EXIT_FAILURE);
}
return 0;
}
I've tried adding this code:
int opts;
opts = fcntl(sock,F_GETFL);
if (opts < 0) {
perror("fcntl(F_GETFL)");
exit(0);
}
opts = (opts & (~O_NONBLOCK));
if (fcntl(sock,F_SETFL,opts) < 0) {
perror("fcntl(F_SETFL)");
exit(0);
}
but it still doesn't work: if I call recv(), the application doesn't block (and recv() always returns 0). Here is the function where I call recv():
void TCPreceiver::receive(char* text, int& dim)
{
int ret;
ret = recv(sock, text, dim, 0);
dim=ret;
if(ret == -1){
printf("recv() failed with error (%d)\n", ret);
//system("PAUSE");
exit(1);
}
}
Where am I wrong?
recv() returning zero indicates either (1) you passed a zero length, which is just a programming error that I won't discuss further here, or (2) end of stream: the peer has closed the connection. This isn't a non-blocking situation, this is the end of the connection. You must close the socket and stop using it. It will never return anything but zero ever again.
See the man pages.
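A minimal sketch of the fix (reusing the receive() signature from the question; the comments are my own assumptions): branch on the three possible recv() outcomes and stop using the socket once it returns zero:
void TCPreceiver::receive(char* text, int& dim)
{
    ssize_t ret = recv(sock, text, dim, 0);
    if (ret > 0)
    {
        dim = static_cast<int>(ret);   // got ret bytes of data
    }
    else if (ret == 0)
    {
        dim = 0;                       // peer closed the connection
        close(sock);                   // stop using the socket from now on
    }
    else
    {
        perror("recv() failed");
        exit(1);
    }
}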