I have been trying to create a program that sends multiple packets via sendto to different IP addresses, but after exactly 1238 calls to sendto I get the error "Sendto: Invalid argument" (printed by perror).
Edit: An hour later, the number of calls to sendto before the error is exactly 1231, and it stays that way on every run. After I added some code that prints something to the screen, it went back to 1238 calls per run until the error; after deleting that code it became 1241, and about an hour later it is 1231 again.
If I take the IP addresses down (making the aliases offline), the packets are sent correctly without an error, but the program gets stuck for a moment after roughly every 500 sendto calls.
This error only happens when those IP addresses are not on the same server; when they are on the same server (aliases), sendto works correctly.
Also, the error doesn't appear when sending to the same IP address multiple times instead of to multiple different IP addresses.
I have tried different fixes that I found on Google: playing with the settings in the sysctl.conf file, raising the send buffer, somaxconn, the backlog, and other things. When I raised the send buffer, I also raised the buffer in the application itself.
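(For reference, "raising the buffer in the application itself" means a setsockopt(SO_SNDBUF) call on the sending socket; a minimal sketch, with an example size rather than the exact value I used:)
// Sketch: raise the socket send buffer from the application side
// (the 1 MB value is only an example).
int sndbuf = 1024 * 1024;
if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
    perror("setsockopt SO_SNDBUF");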
Here is the sample code I have written:
http://pastebin.com/FCn0ALzn
And the code that gives the error:
for (size_t i = 0; i < ips.size(); i++)
{
    cout << i << ") Sending message to: " << ips[i] << endl;
    server.sin_addr.s_addr = inet_addr(ips[i].c_str());
    n = sendto(sock, buffer, strlen(buffer), 0, (const struct sockaddr *)&server, length);
    if (n < 0)
    {
        perror("Sendto");
        return;
    }
}
I have managed to fix this issue by clearing the IP addresses from the ARP cache. After every 500 calls to sendto, the program sleeps for a few milliseconds and then clears the IP addresses that were just processed from the ARP cache, using the shell command arp -d [ip], like this (a sketch of how this fits into the send loop follows the helper):
// Clear ARP cache
void clearIpArp(char* ip)
{
    char arp[100] = {0};
    sprintf(arp, "arp -d %s", ip);
    system(arp);
}
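Roughly how this fits into the send loop above (a sketch: it reuses the sock, server, buffer, length and ips variables from the snippets, and the batch size and sleep duration are the values described):
// Sketch: after every 500 sends, pause briefly and flush the ARP entries
// of the addresses processed in this batch. usleep() needs <unistd.h>.
for (size_t i = 0; i < ips.size(); i++)
{
    cout << i << ") Sending message to: " << ips[i] << endl;
    server.sin_addr.s_addr = inet_addr(ips[i].c_str());
    n = sendto(sock, buffer, strlen(buffer), 0, (const struct sockaddr *)&server, length);
    if (n < 0)
    {
        perror("Sendto");
        return;
    }

    if ((i + 1) % 500 == 0)
    {
        usleep(10 * 1000); // sleep a few milliseconds
        // clear the ARP entries for the 500 addresses just processed
        for (size_t j = i + 1 - 500; j <= i; j++)
            clearIpArp(const_cast<char*>(ips[j].c_str()));
    }
}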
Good afternoon all,
I have been making a UDP server for gathering metrics on my Windows server (SNMP isn't accurate on Windows as it doesn't have 64-bit counters). The server runs on the Windows machine and the client runs on a Linux monitoring box.
I have set it up running as a service and it is working great, except that every once in a while a UDP packet from the Linux machine is not received. I am using the following bit of code to receive UDP packets:
bytes_received = recvfrom(serverSocket, serverBuf, serverBufLen, 0, (SOCKADDR*)&SenderAddr, &SenderAddrSize);
The socket is set to time out every 15 seconds (so that any service control requests, like stop, can be executed). What I think is happening is one of the following:
The UDP packet is arriving in between the 15 second timeout and the moment the server starts listening again.
The packet is arriving a fraction of a second after another UDP packet (for a different metric) has arrived, and the server has gone on to start up a process to send a packet back, so it isn't at the recvfrom yet.
(I am basing both of those on the assumption that it is only waiting for a packet while it is at recvfrom.)
I could possibly move over to TCP to solve this issue, but since the information is time sensitive, I would prefer to stay with UDP for its speed.
Is there any way to queue up incoming packets and have them be processed, or would I be better off looking at TCP instead?
I ended up coming up with the idea of re-sending the UDP packet if the first one doesn't get a response within 2 seconds. Works a treat so far.
Edit:
As requested, here is the code:
std::string returnMsg;
returnMsg = "CRITICAL - No packet recieved back.";
int i = 0;
while (returnMsg == "CRITICAL - No packet recieved back.") {
    if (i == 5) {
        std::cout << "CRITICAL - No packet recieved back." << "\n";
        return 2;
    }
    //std::cout << "Try " << i << "\n";
    // Now lets send the message
    send_message(args[2], message.c_str());
    // Now lets wait for response
    returnMsg = recieve_message();
    i++;
}
The recieve_message function returns "CRITICAL - No packet recieved back" when the timeout occurs.
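The recieve_message implementation isn't shown; below is only a sketch of how such a timeout could be implemented on the Linux side with SO_RCVTIMEO. The socket variable, buffer size and 2 second value are assumptions, not taken from the actual program.
// Sketch only: a recieve_message-style helper that returns the sentinel
// string when recvfrom times out. "sock" is assumed to be an already
// created UDP socket; the 2 second timeout matches the retry interval above.
#include <sys/socket.h>
#include <sys/time.h>
#include <string>

std::string recieve_message_sketch(int sock)
{
    struct timeval tv;
    tv.tv_sec = 2;
    tv.tv_usec = 0;
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[1024] = {0};
    ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, NULL, NULL);
    if (n < 0)                     // timed out (EAGAIN/EWOULDBLOCK) or failed
        return "CRITICAL - No packet recieved back.";
    return std::string(buf, n);
}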
I am new to OpenSSL programming. Anyway, I have written an OpenSSL server and client in C (also tested in C++). When the two connect they successfully handshake and can read from and write to each other successfully. I currently have it set up such that the client only reads from the stream and writes into a buffer, like so:
while((rc = SSL_read(ssl, buffer, sizeof(buffer))) > 0){
fprintf(stdout,"%s\n", buffer);
}
Likewise, my server is set up such that it constantly writes to the stream from a buffer, like so:
while ((rc = SSL_write(ssl, buffer, sizeof(buffer))) > 0) {
fprintf(stdout, "Sent message.\n");
}
fprintf(stdout, "Done sending.\n");
And this works. If I abruptly end the client with ^C (Ctrl-C), the server finishes and prints "Done sending." However, if I put a delay any longer than about 10000 nanoseconds between every SSL_write (within the server's writing while loop), I get unexpected behaviour when the client disconnects, whether abruptly (using ^C) or normally via a counter and break. To clarify, before the client disconnects the server is able to SSL_write and the client is able to SSL_read normally with any duration of delay (I haven't tried anything past a minute).
This issue means that a client connection can effectively crash the server thread, as demonstrated by the server not printing "Done sending." after the client disconnects when delays longer than 10000 nanoseconds are used. I do not want the server to be able to crash because of an abrupt disconnect in a session. To be clear, the server crashes inside the call to SSL_write; the call never returns.
Things I have tried to solve this issue:
Watching for changes in these return values: SSL_want(ssl), SSL_get_error(ssl, 0), ERR_get_error(), and SSL_get_shutdown(ssl). In some test code I printed all of these out prior to calling SSL_write, and none of them changed their values before the crash (the usual checking pattern is sketched after this list).
Clearing the error queue prior to every SSL_write using ERR_clear_error()
Seeing if anything is printed with ERR_print_errors_fp(stderr) - nada
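For reference, the usual pattern is to check the return value of the failing call itself and pass it to SSL_get_error; a minimal sketch (using the ssl and buffer variables from the snippets above), not a fix for the crash:
// Sketch: map SSL_write's own return value through SSL_get_error;
// ERR_print_errors_fp drains OpenSSL's error queue.
ERR_clear_error();
int ret = SSL_write(ssl, buffer, sizeof(buffer));
if (ret <= 0) {
    int err = SSL_get_error(ssl, ret);
    fprintf(stderr, "SSL_write failed, SSL_get_error = %d\n", err);
    ERR_print_errors_fp(stderr);
    // SSL_ERROR_SYSCALL or SSL_ERROR_SSL means the connection is unusable
}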
I have used the following delay methods:
// Method 1: empty busy-wait loop
for(long i = 0; i < (long) 99999999; i++){}

// Method 2: nanosleep
struct timespec tim, tim2;
tim.tv_sec = 0;
tim.tv_nsec = 10000L; // 10000 ns = 10 microseconds
nanosleep(&tim, &tim2);

// Method 3: sleep
sleep(1);
I personally think it would be ludicrous to be required to write to the socket within a hundredth of a millisecond just so that the server doesn't crash on client disconnect.
Is this actually expected behavior? Am I doing something wrong or am I forgetting something? What should I do to circumvent this issue?
Any help or advice would be appreciated.
I'm trying to create an application where multiple instances run on the same machine and communicate with each other via UDP on the same port.
I have read many threads on Stack Overflow saying that this should be possible.
However, when I open the connection from each application instance, I can see that each instance sends a message, but only the first instance (or, if the first is closed, the second, and so on) receives that message.
I'm using ACE library for the communication. Excerpt from code:
ACE_SOCK_Dgram_Mcast dgram;
ACE_INET_Addr *listenAddress = new ACE_INET_Addr(12345, ACE_LOCALHOST);
dgram.open(*listenAddress);
ACE_INET_Addr peer_address;
char buffer[1024];
dgram.send(buffer, 256);
while (true)
{
    if (dgram.recv(buffer, 256, peer_address, 0, &receiveLoopTimeout) != -1)
    {
        std::cout << "Received" << std::endl;
    }
}
I also found out that if I call "dgram.join(*listenAddress)" then I get an error, code ENODEV, from the first instance of the app.
I'm not sure I understand what you are trying to do... send a message multicast so multiple receivers get it, or allow multiple processes to receive on the same UDP port unicast... I'm guessing the former.
You're using the ACE_SOCK_Dgram_Mcast class but with unicast addressing and operations. So only one instance will receive that message.
Check the ACE_wrappers/tests/Multicast_Test.cpp for examples of how to send and receive multicast.
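For illustration only, an untested sketch of the multicast variant with ACE; the group address is an arbitrary example (it must be a real multicast address, which ACE_LOCALHOST is not, hence the ENODEV from join):
// Sketch: every instance joins the same multicast group and therefore
// receives what any instance sends. 239.255.0.1 is just an example group
// from the administratively scoped range, not from the original code.
#include "ace/SOCK_Dgram_Mcast.h"
#include "ace/INET_Addr.h"
#include <iostream>

int run_instance()
{
    ACE_SOCK_Dgram_Mcast dgram;
    ACE_INET_Addr group_addr(12345, "239.255.0.1");

    if (dgram.join(group_addr) == -1)   // opens the socket and subscribes to the group
    {
        std::cerr << "join failed" << std::endl;
        return -1;
    }

    char buffer[1024] = {0};
    dgram.send(buffer, 256);            // sent to the group, so all instances see it

    ACE_INET_Addr peer_address;
    while (dgram.recv(buffer, 256, peer_address) != -1)
    {
        std::cout << "Received" << std::endl;
    }
    return 0;
}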
I am dealing with a problem where, after sending data successfully, I receive the first response from the client, but not the second one, which should come after the client fills in his details and submits.
Do you have any idea why this happens?
Here is my code:
sock->listenAndAccept();
string url="HTTP/1.1 302 Found \r\nContent-Type: text/html; charset=utf8 \r\nContent- Length:279\r\n\r\n<!DOCTYPE html><html><head><title>Creating an HTML Element</title></head><body><form name=\"input\" action=\"login.html\" method=\"get\">user name: <input type=\"text\" name=\"user\"><br>password: <input type=\"text\" name=\"password\"><input type=\"submit\" value=\"Submit\"></form></body></html>";
sock->send(url.data(),url.length());
char buffer[1000];
sock->recv(buffer, 1000);
cout<<buffer<<endl;
sock->recv(buffer, 1000);
cout<<buffer<<endl;
listen and accept function:
TCPSocket* TCPSocket::listenAndAccept(){
    int rc = listen(socket_fd, 1);
    if (rc < 0){
        return NULL;
    }
    size_t len = sizeof(peerAddr);
    bzero((char *) &peerAddr, sizeof(peerAddr));
    int connect_sock = accept(socket_fd, (struct sockaddr *)&peerAddr, (unsigned int *)&len);
    return new TCPSocket(connect_sock, serverAddr, peerAddr);
}
recv function:
int TCPSocket::recv(char* buffer, int length){
    return read(socket_fd, buffer, length);
}
TCP is a stream-oriented protocol. It might be that you have read all the messages in the first recv. Check the size of the received data and see if it matches the expected output.
Always always always (I can't say that often enough) check the return value of recv. recv will read up to the amount you have requested. If you're certain the amount you've requested is on its way then you must go into a loop around recv buffering incoming data until you've received what you expect to receive.
This kind of bug tends to sit there lurking unseen while you test on your local machine using the very fast localhost interface and then surfaces as soon as you start running the client and server on different hosts.
When you move on from your test code to actual code then you must also deal with zero length responses (client closed the socket) and error codes (<0 response).
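A minimal sketch of that kind of loop (the function name and signature are illustrative, not taken from the code above):
// Sketch of a "read exactly this many bytes" helper.
#include <sys/socket.h>

// Returns 1 on success, 0 if the peer closed before everything arrived,
// -1 on error (check errno).
int recv_all(int fd, char *buffer, size_t expected)
{
    size_t got = 0;
    while (got < expected) {
        ssize_t n = recv(fd, buffer + got, expected - got, 0);
        if (n == 0)  return 0;   // peer closed the connection
        if (n < 0)   return -1;  // error
        got += (size_t)n;
    }
    return 1;
}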
Finally, please post your client code. There may be bugs there as well.
I am developing a client-server application (TCP) in Linux using C++. This application is in charge of testing the network performance.
The connection between the client and server is established only once, and then data is transmitted/received using write()/read() with our own protocol.
When the data exceeds 40 KB I receive just a part of it, and only once (i.e. I receive about 48 KB).
Please find below the relevant part of the code:
while (1) {
    servMtx.lock();
    ...
    serv_bytes = (byte *) malloc(size_bytes);
    n = read(newsockfd, serv_bytes, size_bytes);
    if (n != (int)size_bytes ) {
        std::cerr << "No enough data available for msg. Received just: " << n << std::endl;
        continue;
    }
    receivedBytes += n + size_header_bytes + sizeof(ssize_t);
    ....
}
I increased the kernel buffer size to become 1MB using:
int buffsize = 1024*1024;
setsockopt(newsockfd, SOL_SOCKET, SO_RCVBUF, &buffsize, sizeof(buffsize));
and modified sysctl variables too:
sysctl -w net.core.rmem_max=8388608;
sysctl -w net.core.wmem_max=8388608;
as mentioned in How to recive more than 65000 bytes in C++ socket using recv(), but nothing changed. I also tried changing the packet size, to no avail.
You should read or recv in several chunks (in general; if you are unlucky, the "several" becomes "one"). So you need to manage your buffering and keep (and use) the count of received bytes.
So at some point, you'll code
int nbrecv = recv(s, buffer + off, bufsize, 0);
if (nbrecv > 0) { off += nbrecv; bufsize -= nbrecv; }
and you probably should do that in your event loop (often around poll(2)...). And it does happen that nbrecv is a lot less than bufsize, and you should handle that common case.
TCP does not guarantee that you'll get all the bytes in the same recv! It can depend on external factors (routing, network hardware, ...); it is a stream-oriented protocol, not a message/packet one. If your application wants messages, it should buffer the input and chunk that input into messages according to the content. Look at HTTP or SMTP: their messages have a well-defined boundary, given by header information (Content-Length: in HTTP) or by an ending convention (a line with a single . in SMTP).
Please read carefully read(2), recv(2), socket(7), tcp(7), some sockets tutorial, Advanced Linux Programming.