C++ network programming in Linux: Server Questions

I am learning network programming with C/C++ and have created a TCP server that is supposed to respond in specific ways to messages from a client. To do this I created a class that the server passes each message to; the class returns a string to report back to the client.
Here is my problem: sometimes it reports the correct string back, but other times it just repeats what I sent to the message handler, even though nowhere in the code do I return what was passed in. So I am wondering: am I handling the incoming message correctly?
Secondly, I am unsure how to keep a connection open in a while loop to continually pass messages back and forth. You can see how I did it in the code below, but I am pretty sure this is incorrect; any help on this would be great. Thanks!
if (!fork())
{   // this is the child process
    close(sockfd); // child doesn't need the listener
    while ((numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0)) > 0)
    {
        //numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0);
        buf[numbytes-1] = '\0';
        const char* temp = ash.handleMessage(buf).c_str();
        int size_of_temp = ash.handleMessage(buf).length();
        send(new_fd, temp, size_of_temp, 0);
        //send(new_fd, temp, size_of_temp+1, 0);
    }
} //end if
Please excuse my ghetto code

If you're learning about sockets you should also know that you can't assume that what you send() as a "complete message" will be delivered as a complete message.
If you send() a large chunk of data from your client, you might need multiple recv() calls on the server (or vice versa) to read it all. This is a big difference from how files usually work.
If you're designing your own protocol you can opt to also send the message's length, like [LEN][message]. An easy example: if the strings you send are limited to 256 bytes, you can start by send()ing a short representing the string's length.
Or, easier, decide that you use line feeds (newline, \n) to terminate messages. Then the protocol would look like
"msg1\nmsg2\n"
Then you would have to recv() and append the data until you get a newline, as in the sketch below. This is all I can muster right now; there are a lot of great examples on the internet, but I would recommend getting the source of some "real" program and looking at how it handles its networking.
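A rough sketch of that receive-and-append loop over a plain POSIX socket (the function name, the chunk size, and the pending buffer are placeholders for illustration):

#include <string>
#include <sys/socket.h>

// Append incoming bytes to `pending` until it contains a '\n', then hand back
// one complete message (without the newline). Returns false if the peer closed
// the connection or an error occurred before a full line arrived.
bool recv_line(int sock, std::string& pending, std::string& msg)
{
    char chunk[512];
    for (;;) {
        std::string::size_type nl = pending.find('\n');
        if (nl != std::string::npos) {
            msg = pending.substr(0, nl);   // one complete message
            pending.erase(0, nl + 1);      // keep any bytes of the next message
            return true;
        }
        ssize_t n = recv(sock, chunk, sizeof(chunk), 0);
        if (n <= 0)
            return false;                  // connection closed, or recv error
        pending.append(chunk, n);          // may be a partial message; keep appending
    }
}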

You are calling handleMessage twice. You didn't post the code, but it looks like you're returning a string. It might be better to do:
string temp = ash.handleMessage(buf);
int size_of_temp = temp.length();
This would avoid repeating whatever work handleMessage does. It also fixes a subtler bug: in your version handleMessage returns a temporary string, and the pointer you take with c_str() is already dangling by the time send() runs, so the buffer you send can contain stale or unrelated data.
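Putting that together with the framing caveat from the other answer, the child's loop might look roughly like this (a sketch built from the code in the question; it still assumes one recv() delivers exactly one message, which is not guaranteed):

if (!fork())
{   // child process: talk to this client
    close(sockfd);  // child doesn't need the listener
    while ((numbytes = recv(new_fd, buf, MAXDATASIZE-1, 0)) > 0)
    {
        buf[numbytes] = '\0';                        // NUL-terminate what was received
        std::string reply = ash.handleMessage(buf);  // call it once; the string stays alive
        send(new_fd, reply.c_str(), reply.length(), 0);
    }
    close(new_fd);
    exit(0);
}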


I need help figuring out tcp sockets (clsocket)

I am having trouble figuring out sockets. I am just asking the server for data at a position (glm::i64vec4) and expecting a response, but the position is way off when I get the response, and the data for that position reflects that (i.e. my voxel game makes a kind of cool-looking but useless mess).
It's probably just me not understanding sockets at all, or maybe something weird with this library.
One thought I had is that it might have something to do with mismatching blocking and non-blocking sockets on the server and client,
but when I switched the server to blocking (and put each client in a separate thread from the other clients and from the accepting loop) it made no difference.
If I'm doing something really stupid, please tell me; I know next to nothing about sockets.
Here is some code that probably looks horrible.
Server Code
std::deque<CActiveSocket*> clients;
CPassiveSocket socket;
socket.Initialize();
socket.SetNonblocking(); //I'm doing this so i don't need multiple threads for clients
socket.Listen("0.0.0.0", port);
while (1){
    {
        CActiveSocket* c;
        if ((c = socket.Accept()) != NULL){
            clients.emplace_back(c);
        }
    }
    for (CActiveSocket*& c : clients){
        c->Receive(sizeof(glm::i64vec4));
        if (c->GetBytesReceived() == sizeof(glm::i64vec4)){
            chkpkt chk;
            chk.pos = *(glm::i64vec4*)c->GetData();
            LOOP3D(chksize+2){
                chk.data(i,j,k).val = chk.pos.y*chksize+j;
                chk.data(i,j,k).id = 0;
            }
            while (c->Send((uint8*)&chk, sizeof(chkpkt)) != sizeof(chkpkt)){}
        }
    }
}
Client Code
//v is a glm::i64vec4
//fsock is set to Blocking
if (fsock.Send((uint8*)&v, sizeof(glm::i64vec4)))
    if (fsock.Receive(sizeof(chkpkt))){
        tthread::lock_guard<tthread::fast_mutex> lock(wld->filemut);
        wld->ichks[v] = (*(chkpkt*)fsock.GetData()).data;
        // I tried using the position I get back from the server as the key here (instead of v),
        // but that made it so nothing loaded; I checked, and the chunk position never lines up
        // with what I sent.
    }
Without your complete application code it's unrealistic to suggest corrections to particular lines.
But it seems like you are using this library. It doesn't matter much if not, because in network programming sockets behave oddly enough that some problems are fairly universal. So here are a few suggestions for the socket portion of your project:
It suffices to have BLOCKING sockets.
Most of the time a socket read behaves somewhat unexpectedly: it might not receive the requested number of bytes in a single call. Because of this, you need to call read repeatedly until you have read everything you asked for. For a complete and robust solution you can refer to Stevens's readn routine ([Ref.1], page 122).
If you are using exactly the library mentioned above, you can see that your fsock.Receive eventually calls recv. And recv is just a variant of read [Ref.2], so the solution is the same for both. This pattern might help:
while(fsock.Receive(sizeof(chkpkt))>0)
{
// ...
}
Ref.1: https://mathcs.clarku.edu/~jbreecher/cs280/UNIX%20Network%20Programming(Volume1,3rd).pdf
Ref.2: https://man7.org/linux/man-pages/man2/recv.2.html#DESCRIPTION
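For reference, the readn idea from [Ref.1] applied to a plain socket descriptor looks roughly like this (a sketch of the pattern, not part of the clsocket library):

#include <sys/socket.h>
#include <cstddef>

// Keep calling recv() until exactly `len` bytes have been read, the peer
// closes the connection, or an error occurs. Returns the number of bytes
// actually read, or -1 on error.
ssize_t recv_all(int fd, void* buf, size_t len)
{
    char* p = static_cast<char*>(buf);
    size_t left = len;
    while (left > 0) {
        ssize_t n = recv(fd, p, left, 0);
        if (n < 0)
            return -1;   // error; the caller can inspect errno
        if (n == 0)
            break;       // connection closed by the peer
        p += n;
        left -= n;
    }
    return static_cast<ssize_t>(len - left);
}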

When using send() to send data from a text file from client to server through TCP stream, how do I send all the data but only 4 bytes at a time?

Below is an excerpt from my client.cpp file:
//Variables previously declared
char buffer[1024];
char sendbuffer[100];
int sockfd, b;
//Opens specified file
FILE *fp = fopen(argv[3], "rb");
while ((b = fread(sendbuffer, 1, sizeof(sendbuffer), fp)) > 0)
{
    send(sockfd, sendbuffer, b, 0);
}
I am new to client-server programming, and I'm far from being extremely proficient in C++.
When I use the code above, it successfully sends the data, but it obviously isn't sending it 4 bytes at a time.
If I modified the line containing send() as shown below without making other necessary changes, I'm certain that it would be incorrect.
send(sockfd, sendbuffer, 4, 0);
It's also a pain to debug because when I make a change to the code, I have to continuously simulate a client-server interaction, which takes time to set up.
What would be the most efficient way to send this text file data 4 bytes at a time?
Also, can anyone suggest a tool or method for quickly debugging client-server programs?
Let me know if more information is needed. Thanks
Well, you can try to send 4 bytes at a time and it will probably work, but you have no control over how many bytes a stream socket will actually send. You have to check the return value, as in the sketch below.
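For example, building on the variables from the question (fp and sockfd; chunk, n, sent, and s are new names for illustration), a loop that reads 4 bytes at a time and checks the return value of each send() might look roughly like this (a sketch, not a tested drop-in):

char chunk[4];
int n;
while ((n = fread(chunk, 1, sizeof(chunk), fp)) > 0)
{
    int sent = 0;
    while (sent < n)                     // a single send() may write fewer than n bytes
    {
        int s = send(sockfd, chunk + sent, n - sent, 0);
        if (s < 0)
        {
            perror("send");
            return 1;                    // adjust to however the surrounding code handles errors
        }
        sent += s;
    }
}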
I do not think you need to debug the program at all. Logging is better because it does not introduce time delays like debugging does, and time is money in the networking world.

Sending data via socket aborts unexpectedly

I am trying to send data via a TCP socket to a server. The idea behind it is a really simple chat program.
The string I am trying to send looks like the following:
1:2:e9e633097ab9ceb3e48ec3f70ee2beba41d05d5420efee5da85f97d97005727587fda33ef4ff2322088f4c79e8133cc9cd9f3512f4d3a303cbdb5bc585415a00:2:xc_[z kxc_[z kxc_[z kxc_[==
As you can see, there are a few unprintable characters, which I don't think are a problem here.
To send this data I am using the following code snippet.
bool tcp_client::send_data(string data)
{
    if (send(sock, data.c_str(), strlen(data.c_str()), 0) < 0)
    {
        perror("Send failed : ");
        return false;
    }
    return true;
}
After a few minutes of trying things out, I found that data.c_str() cuts my string off.
The result is:
1:2:e9e633097ab9ceb3e48ec3f70ee2beba41d05d5420efee5da85f97d97005727587fda33ef4ff2322088f4c79e8133cc9cd9f3512f4d3a303cbdb5bc585415a00:2:xc_[z
I think there is some kind of null byte inside my string which is a problem for the c_str() function.
Is there a way to send the whole string as I mentioned above without cutting it off?
Thanks.
Is there a way to send the whole string as I mentioned above without cutting it off?
What about:
send(sock , data.c_str(), data.size() , 0);
There are only two sane ways to send arbitrary data (such as an array of characters) over stream sockets:
On the server: close the socket after the data has been sent (as in FTP, HTTP 0.9, etc.). On the client: read in a loop until the socket is closed.
On the server: prefix the data with a fixed-length size (nowadays people usually use 64-bit integers for the size; watch out for endianness). On the client: read the size first (in a loop!), then read the data until size bytes have been read (in a loop).
Everything else is going to backfire sooner or later.
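A minimal sketch of the second approach, assuming a hypothetical send_all helper that keeps calling send() until everything is written, and a 64-bit length prefix serialized in network (big-endian) byte order:

#include <cstdint>
#include <string>
#include <sys/socket.h>

// Keep calling send() until all `len` bytes have gone out; false on error.
static bool send_all(int sock, const void* data, size_t len)
{
    const char* p = static_cast<const char*>(data);
    while (len > 0) {
        ssize_t n = send(sock, p, len, 0);
        if (n <= 0)
            return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Send the message as [8-byte big-endian length][payload].
bool send_message(int sock, const std::string& data)
{
    uint64_t len = data.size();
    unsigned char hdr[8];
    for (int i = 0; i < 8; ++i)          // serialize the length most-significant byte first
        hdr[i] = static_cast<unsigned char>(len >> (8 * (7 - i)));
    return send_all(sock, hdr, sizeof(hdr)) &&
           send_all(sock, data.data(), data.size());
}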

Good way to start communicating with a client?

The way my game and server work is like this:
I send messages that are encoded in a format I created. It starts with 'p' followed by an integer for the message length then the message.
ex: p3m15
The message is 3 bytes long. And it corresponds to message 15.
The message is then parsed and so forth.
It is designed to handle TCP potentially delivering only 1 byte at a time (since a TCP read can return as little as a single byte).
This message protocol I created is extremely lightweight and works great which is why I use it over something like JSON or other ones.
My main concern is, how should the client and the server start talking?
The server expects clients to send messages in my format. The game will always do this.
The problem I ran into was when I tested my server on port 1720: there was BitTorrent traffic, and my server was picking it up. This was causing all kinds of random 'clients' to connect to my server and send random garbage.
To 'solve' this, I made it so that the first thing a client must send me is the string "Hello Server".
If the first byte ever sent is != 'H' or if they have sent me > 12 bytes and it's != "Hello Server" then I immediately disconnect them.
This is working great. I'm just wondering if I'm doing something a bit naive or if there are more standard ways to deal with:
-Clients starting communication with server
-Clients passing Hello Server check, but somewhere along the line I get an invalid message. I can assume that my app will never send an invalid message. If it did, it would be a bug. Right now if I detect an invalid message then I disconnect the client.
I noticed BitTorrent was sending '!!BitTorrent Protocol' before each message. Should I do something like that?
Any advice on this and making it safer and more secure would be very helpful.
Thanks
Perhaps use a magic number field embedded in your message:
struct Message
{
    ...
    unsigned magic_number = 0xbadbeef3;
    ...
};
So the first thing you do after receiving something is check whether the magic_number field is 0xbadbeef3.
Typically, I design protocols with a header something like this:
typedef struct {
    uint32_t signature;
    uint32_t length;
    uint32_t message_num;
} header_t;

typedef struct {
    uint32_t foo;
} message13_t;

Sending a message:

message13_t msg;
msg.foo = 0xDEADBEEF;

header_t hdr;
hdr.signature = 0x4F4C494D; // "MILO"
hdr.length = sizeof(message13_t);
hdr.message_num = 13;

// Send the header
send(s, &hdr, sizeof(hdr), 0);
// Send the message data
send(s, &msg, sizeof(msg), 0);

Receiving a message:

header_t hdr;
char* buf;

// Read the header - all messages always have this
recv(s, &hdr, sizeof(hdr), 0);

// allocate a buffer for the rest of the message
buf = malloc(hdr.length);

// Read the rest of the message
recv(s, buf, hdr.length, 0);
This code is obviously devoid of error-checking or making sure all data has been sent/received.
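One more detail the example leaves out is byte order: if the two ends can differ in endianness, the multi-byte header fields normally go through htonl()/ntohl(). A minimal sketch on top of the header above:

#include <arpa/inet.h>   // htonl/ntohl (Winsock declares the same functions)

// Before sending: put the header fields into network byte order.
hdr.signature   = htonl(0x4F4C494D);
hdr.length      = htonl(sizeof(message13_t));
hdr.message_num = htonl(13);

// After receiving: convert back before trusting the values.
uint32_t body_length = ntohl(hdr.length);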

C++ non blocking socket select send too slow?

I have a program that maintains a list of "streaming" sockets. These sockets are configured to be non-blocking sockets.
Currently, I use a list to store these streaming sockets. I have some data that I need to send to all these streaming sockets, so I use an iterator to loop through the list and call the send_TCP_NB function below for each one.
The issue is that my own program buffer, which stores the data before it is passed to this send_TCP_NB function, slowly decreases in free size, indicating that sending is slower than the rate at which data is put into the program buffer. Data is put into the program buffer at about 1000 items per second, and each item is quite small, about 100 bytes.
Hence, I am not sure whether my send_TCP_NB function is working efficiently or correctly.
int send_TCP_NB(int cs, char data[], int data_length) {
    bool sent = false;
    FD_ZERO(&write_flags);    // initialize the writer socket set
    FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
    int status;
    int err;
    struct timeval waitd;     // set the time limit for waiting
    waitd.tv_sec = 0;
    waitd.tv_usec = 1000;
    err = select(cs+1, NULL, &write_flags, NULL, &waitd);
    if (err == 0)
    {
        // time limit expired
        printf("Time limit expired!\n");
        return 0; // send failed
    }
    else
    {
        while (!sent)
        {
            if (FD_ISSET(cs, &write_flags))
            {
                FD_CLR(cs, &write_flags);
                status = send(cs, data, data_length, 0);
                sent = true;
            }
        }
        int nError = WSAGetLastError();
        if (nError != WSAEWOULDBLOCK && nError != 0)
        {
            printf("Error sending non blocking data\n");
            return 0;
        }
        else
        {
            if (nError == WSAEWOULDBLOCK)
            {
                printf("%d\n", nError);
            }
            return 1;
        }
    }
}
One thing that would help is if you thought out exactly what this function is supposed to do. What it actually does is probably not what you wanted, and has some bad features.
The major features of what it does that I've noticed are:
Modify some global state
Wait (up to 1 millisecond) for the write buffer to have some empty space
Abort if the buffer is still full
Send 1 or more bytes on the socket (ignoring how much was sent)
If there was an error (including the send decided it would have blocked despite the earlier check), obtain its value. Otherwise, obtain a random error value
Possibly print something to screen, depending on the value obtained
Return 0 or 1, depending on the error value.
Comments on these points:
Why is write_flags global?
Did you really intend to block in this function?
This is probably fine
Surely you care how much of the data was sent?
I do not see anything in the documentation that suggests that this will be zero if send succeeds
If you cleared up what the actual intent of this function was, it would probably be much easier to ensure that this function actually fulfills that intent.
That said
I have some data that I need to send to all these streaming sockets
What precisely is your need?
If your need is that the data must be sent before proceeding, then using a non-blocking write is inappropriate*, since you're going to have to wait until you can write the data anyways.
If your need is that the data must be sent sometime in the future, then your solution is missing a very critical piece: you need to create a buffer for each socket which holds the data that needs to be sent, and then you periodically need to invoke a function that checks the sockets to try writing whatever it can. If you spawn a new thread for this latter purpose, this is the sort of thing select is very useful for, since you can make that new thread block until it is able to write something. However, if you don't spawn a new thread and just periodically invoke a function from the main thread to check, then you don't need to bother. (just write what you can to everything, even if it's zero bytes)
*: At least, it is a very premature optimization. There are some edge cases where you could get slightly more performance by using the non-blocking writes intelligently, but if you don't understand what those edge cases are and how the non-blocking writes would help, then guessing at it is unlikely to get good results.
EDIT: as another answer implied, this is something the operating system is good at anyways. Rather than try to write your own code to manage this, if you find your socket buffers filling up, then make the system buffers larger. And if they're still filling up, you should really give serious thought to the idea that your program needs to block anyways, so that it stops sending data faster than the other end can handle it. i.e. just use ordinary blocking sends for all of your data.
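For example, asking the OS for a larger per-socket send buffer is a single setsockopt() call (a minimal sketch using the cs descriptor from the question; the OS may clamp or round the value you request, and on Linux large values also require raising the system-wide limits as root):

int sndbuf = 1 << 20;   // request roughly 1 MB of kernel send buffer
if (setsockopt(cs, SOL_SOCKET, SO_SNDBUF, (const char*)&sndbuf, sizeof(sndbuf)) != 0)
{
    // setsockopt failed; the socket keeps its previous buffer size
}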
Some general advice:
Keep in mind you are multiplying data. So if you get 1 MB/s in, you output N MB/s with N clients. Are you sure your network card can take it? It gets worse with smaller packets, since you get more overhead in general. You may want to consider broadcasting.
You are using non-blocking sockets, but you block while they are not free. If you want to be non-blocking, it is better to discard the packet immediately if the socket is not ready.
What would be better is to "select" more than one socket at once. Do everything that you are doing, but for all the sockets that are available. You'll write to each "ready" socket, then repeat again while there are sockets that are not ready. This way, you'll proceed with the sockets that are available first, and then, with some luck, the busy sockets will become available themselves.
The while (!sent) loop is useless and probably buggy. Since you are checking only one socket, FD_ISSET will always be true. It is wrong to check FD_ISSET again after an FD_CLR.
Keep in mind that your OS has internal buffers for the sockets and that there are ways to extend them (not easy on Linux, though; to get large values you need to do some configuration as root).
There are some socket libraries that will probably work better than what you can implement in a reasonable time (boost::asio and zmq for the ones I know).
If you need to implement it yourself, (i.e. because for instance zmq has its own packet format), consider using a threadpool library.
EDIT:
Sleeping 1 millisecond is probably a bad idea. Your thread will probably get descheduled and it will take much more than that before you get some CPU time again.
This is just a horrible way to do things. The select serves no purpose but to waste time. If the send is non-blocking, it can mangle data on a partial send. If it's blocking, you still waste arbitrarily much time waiting for one receiver.
You need to pick a sensible I/O strategy. Here is one: Set all sockets non-blocking. When you need to send data to a socket, just call write. If all the data writes, lovely. If not, save the portion of data that wasn't sent for later and add the socket to your write set. When you have nothing else to do, call select. If you get a hit on any socket in your write set, write as many bytes as you can from what you saved. If you write all of them, remove that socket from the write set.
(If you need to write to a socket that's already in your write set, just add the new data to the saved data to be sent. You may need to close the connection if too much data gets buffered.)
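A rough sketch of that strategy (the names pending, queue_send, and flush_socket are made up for illustration; distinguishing EWOULDBLOCK from real errors, and removing closed sockets, is left out):

#include <map>
#include <string>
#include <sys/socket.h>

std::map<int, std::string> pending;   // one unsent-output buffer per socket

// Try to send immediately; whatever doesn't fit is queued for later.
void queue_send(int sock, const char* data, size_t len)
{
    std::string& buf = pending[sock];
    if (buf.empty()) {                            // nothing queued, try right away
        ssize_t n = send(sock, data, len, 0);
        if (n > 0) {
            data += n;
            len -= static_cast<size_t>(n);
        }
    }
    buf.append(data, len);   // the unsent tail (empty if everything went out);
                             // a non-empty buffer means: add sock to the select() write set
}

// Called when select() reports the socket writable: flush as much as possible.
void flush_socket(int sock)
{
    std::string& buf = pending[sock];
    if (buf.empty())
        return;
    ssize_t n = send(sock, buf.data(), buf.size(), 0);
    if (n > 0)
        buf.erase(0, static_cast<size_t>(n));
    // once buf is empty again, remove sock from the write set
}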
A better idea might be to use a library that already does all these things. Boost::asio is a good one.
You are calling select() before calling send(). Do it the other way around. Call select() only if send() reports WSAEWOULDBLOCK, eg:
int send_TCP_NB(int cs, char data[], int data_length)
{
    int status;
    int err;
    struct timeval waitd;
    char *data_ptr = data;

    while (data_length > 0)
    {
        status = send(cs, data_ptr, data_length, 0);
        if (status > 0)
        {
            data_ptr += status;
            data_length -= status;
            continue;
        }

        err = WSAGetLastError();
        if (err != WSAEWOULDBLOCK)
        {
            printf("Error sending non blocking data\n");
            return 0; // send failed
        }

        FD_ZERO(&write_flags);
        FD_SET(cs, &write_flags); // set the write notification for the socket based on the current state of the buffer
        waitd.tv_sec = 0;
        waitd.tv_usec = 1000;

        status = select(cs+1, NULL, &write_flags, NULL, &waitd);
        if (status > 0)
            continue;

        if (status == 0)
            printf("Time limit expired!\n");
        else
            printf("Error waiting for time limit!\n");

        return 0; // send failed
    }
    return 1;
}