I'm trying to close a socket on Windows with closesocket(), but it takes 20 seconds to complete. At first I thought it was the linger interval, although I was not setting anything with setsockopt for SO_LINGER, so I added this code:
linger lobj;
lobj.l_onoff = 1;
lobj.l_linger = 0;
sz = sizeof(lobj);
setsockopt(s_, SOL_SOCKET, SO_LINGER, (char *) &lobj, sz);
but it still does not help.
Any ideas? I just want to close the connection; it doesn't matter whether it's graceful or abortive, I just want it closed as soon as possible.
P.S. It takes exactly 20 seconds.
lobj.l_onoff = 1;
lobj.l_linger = 0;
sz = sizeof(lobj);
setsockopt(s_, SOL_SOCKET, SO_LINGER, (char *) &lobj, sz);
lobj.l_onoff = -1;
lobj.l_linger = -1;
getsockopt(s_, SOL_SOCKET, SO_LINGER, (char *) &lobj, &sz);
log << "Option 1:" << lobj.l_linger << ".\n";
log << "Option 2:" << lobj.l_onoff << ".\n";
closesocket(s_);
This code prints option 1 = 0 and option 2 = 1, so the option really is being set correctly.
Also, observing in Wireshark, it sends an RST at the beginning of the whole delay.
Plus, closesocket() returns 0.
P.S. I have set SO_REUSEADDR; could that be causing it?
If you can't post the code you can't ask the question here. Those are the rules.
However the only way closesocket() can take any measurable time at all is if:
there is a lot of pending outgoing data, and
you have set a positive SO_LINGER timeout.
You can only get a delay of 20 seconds by setting a positive SO_LINGER timeout of >= 20 seconds and having a lot of pending outgoing data and probably a peer that isn't reading.
If you had left SO_LINGER strictly alone (which you should), or set l_onoff to 0, or to 1 with a zero linger timeout, closesocket() would be asynchronous and return immediately; in the zero-timeout case it would also reset the connection.
Ergo either you haven't done what you claimed or your observations are faulty. Possibly it is the final send() which is blocking.
It would be interesting to know whether this closesocket() call returned -1 and if so what the error value was.
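For reference, a minimal Winsock sketch of the zero-timeout (abortive) case, with error handling omitted:

#include <winsock2.h>

// Force an immediate, abortive close: closesocket() should return at once,
// discard any unsent data, and send an RST to the peer.
void abortive_close(SOCKET s)
{
    linger lo;
    lo.l_onoff  = 1;   // turn SO_LINGER on
    lo.l_linger = 0;   // zero timeout selects the abortive close
    setsockopt(s, SOL_SOCKET, SO_LINGER,
               reinterpret_cast<const char*>(&lo), sizeof(lo));
    closesocket(s);
}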
How to synchronise read_handler calls of sock.async_read_some to a specific frequency, while reading streams of 812 bytes (which are streamed at a 125 Hz frequency)?
I have a problem related to reading a stream from a robot. I am very new to Boost.Asio and have very little background on this concept. Here is a sample block from my code. What read_handler does is process the data coming from the robot. This loop should execute every 8 ms, which is my sampling time, and by the time it starts to execute, the reading of the data stream from the robot should be complete. When I look at the robot's stream, data arrives every 8 ms, so the robot data is OK. But the execution of read_handler is somehow problematic: for instance, one loop starts at time = 0, the second at time = 2, the third at time = 16, the fourth at time = 18, and the fifth again at time = 32. So the trigger time shifts between consecutive calls, but by the third call it synchronizes again to a multiple of 8 ms.
What I need is for read_handler to trigger every 8 ms (when the data arrives), but it only hits this sampling time once every two calls (a total of 16 ms). This is crucial, since I am making computations and later feeding a command back to the robot (a control system). This code segment is not detailed with sending commands etc.; it only contains very basic data processing.
So, what might be causing these variations between calls, and how can I fix it?
I searched through the net and Stack Overflow, but I couldn't find another timing issue like the one I'm facing.
void read_handler(const boost::system::error_code &ec, std::size_t bytes_transferred)
{
if (!ec)
{
thisLoopStart = clock();
loopInstant[iterationNum]=diffclock(startTime, endLoopTime);
std::cout << "Byte transfered: " << bytes_transferred << std::endl;
printf("Byte transfered: %d", bytes_transferred);
printf("Byte transfered: %d", bytes_transferred);
printf("Byte transfered: %d\n", bytes_transferred);
//std::cout << std::string(buffer.data(), bytes_transferred) << std::endl;
char myTempDoubleTime[8];
for (int j = 0; j<1; j++)
{
for (int i = 0; i < 8; i++ )
{
myTempDoubleTime[7-i]=buffer[4+i+8*j]; //636
}
memcpy(&controllerTime[iterationNum], myTempDoubleTime, 8);
}
endLoopTime = clock();
thisLoopDuration = diffclock(thisLoopStart, endLoopTime);
loopTimes[iterationNum] = thisLoopDuration;
if (iterationNum++>500)
{//io_service.~io_service();
//io_service.reset();
//io_service.run();
exitThread = 1;
printf("Program terminates...\n");
GlobalObjects::myFTSystem->StopAcquisition();
for(int i=1;i<iterationNum;i++)
fprintf(LoopDurations, "%f\t%f\t%f\n", loopTimes[i], controllerTime[i], loopInstant[i]);
fclose(LoopDurations);
closeConnectionToServer();
printf("Connection is closed...\n");
io_service.stop();
}
sock.async_read_some(boost::asio::buffer(buffer), read_handler);
}
}
If the incoming stream timing is controlled by the robot itself, then you shouldn't be worrying about trying to read specifically at such and such time. If you're expecting a burst of 812 bytes from the robot every X seconds, simply keep async_reading from your client socket. boost::asio will invoke your callback as soon as the read is complete.
As for your mysterious delay, try explicitly stating the size of your buffer in your call to async_read_some like so:
sock.async_read_some(boost::asio::buffer(buffer, 812), read_handler);
If you're sure you're always transmitting enough data to fill such a buffer, then this should cause your callback to be invoked consistently, because the buffer supplied to boost::asio is full. If this doesn't solve your problem, then do as sehe suggested and implement a deadline_timer to get finer time-based control over your asynchronous ops.
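For completeness, a rough sketch of what that timer-driven approach could look like; the handler name on_tick is made up and this is only an illustration:

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

boost::asio::io_service io_service;
boost::asio::deadline_timer timer(io_service, boost::posix_time::milliseconds(8));

void on_tick(const boost::system::error_code& ec)
{
    if (ec) return;   // timer cancelled or io_service stopped
    // ... process the most recently received 812-byte sample here ...
    timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(8)); // fixed rate, no drift
    timer.async_wait(&on_tick);
}

int main()
{
    timer.async_wait(&on_tick);
    io_service.run();   // dispatches the timer callbacks (and any other async handlers)
}

Rescheduling from the previous expiry rather than from "now" keeps the 8 ms grid even if one handler runs a little late.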
Edit
You should also be checking bytes_transferred in your read handler to make sure you've done a complete read from the robot; right now you're just printing it. You could have an incomplete read, in which case you should immediately read from the socket again until you're sure you've consumed all of the data you're expecting. Otherwise you'll end up acting on incomplete data, most likely failing there, and then starting another ::async_read assuming you're beginning a clean new read when you're really reading old data you ignored and left on the socket, fragmenting your reads.
This could explain why you're seeing inconsistent times that are both shorter and longer than your expected interval. Explicitly specifying the buffer size and checking the bytes_transferred value passed to the handler will guarantee that you catch such a problem. Also look at the docs for the completion_condition types you can pass to Boost.Asio, such as boost::asio::transfer_exactly(num_bytes); the free function boost::asio::async_read accepts one for asynchronous reads.
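For example, a composed read that only invokes the handler once the full 812-byte sample has arrived might look like this (a sketch only, reusing the sock, buffer and read_handler names from the question):

// Keep reading until exactly 812 bytes are in the buffer, then call
// read_handler with bytes_transferred == 812.
boost::asio::async_read(
    sock,
    boost::asio::buffer(buffer, 812),
    boost::asio::transfer_exactly(812),
    read_handler);

The handler would then re-arm the read the same way, instead of calling async_read_some.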
I'm having some strange things happen with my program, and I'm not sure what I should be doing. This is a pseudocode version of what my code looks like so far:
Server:
//Set up Server sockets
int maximum;
// Collect the maximum
cout << "\nEnter a Maximum:";
cin >> maximum;
cout << "\n";
int *array = new int[maximum + 1];
memset(array, 0, sizeof(array));
while(array[0] < anInt){
//receive the array from the client
if(recv(newsockfd, array, maximum, 0) < 0){
perror("ERROR receiving from socket");
}
mathFunction(array); //A function that alters the contents of array
array[0]++;
//If array[0] isn't too big
if(array[0] < anInt){
// Send the array to the client
if(send(newsockfd, array, maximum, 0) < 0){
perror("ERROR sending to socket");
}
}
}
Client:
//Set up Client sockets
//The maximum was already sent over earlier
int *array = new int[maximum + 1];
while(array[0] < anInt){
//receive the array from the server
if(recv(sockfd, array, maximum, 0) < 0){
perror("ERROR receiving from socket");
}
mathFunction(array); //A function that alters the contents of array
array[0]++;
if(send(sockfd, array, maximum, 0) < 0){
perror("ERROR sending to socket");
}
}
My problem is that I keep getting a "Connection reset by peer" error, which leads to a Segmentation Fault, crashing my program. Also, when playing around with the 3rd argument of the send/recv functions (currently set as maximum), my program acts differently. It will actually work perfectly if the user enters a maximum of 100, but anything more than that screws it up.
I know this is a long shot, but can anyone see something that I'm doing wrong?
First of all, the code you posted has a logical error:
The server first receives data from the client, does something with it, and then sends its result back to the client.
On the other side, the client also receives data from the server, does something with it, and then sends it back.
That is a deadlock: neither side sends anything first, so both block in recv() and the communication never begins.
Besides that logical error, you have some C++ errors:
1) memset(array, 0, sizeof(array)) only zero-initializes sizeof(int*) bytes of your array, not the entire array, because sizeof(array) is always sizeof(int*) here. If you want to zero-initialize the entire array (and I think you do), you should call:
memset(array, 0, (maximum + 1) * sizeof(int));
or even better:
std::fill( array, array + maximum + 1, 0 );
And in C++ it is much better to use classes like std::vector instead of raw pointers:
std::vector<int> array( maximum + 1 ); // automatically initialize to 0
2) Your array's type is int*, and send/recv count their input in bytes, so if you want to send/recv the entire array you need something like:
send(sockfd, (char*)array, maximum * sizeof(int), 0);
3) You should check the return value of send/recv, especially recv, since it may receive less data than requested in each call. For example, you might send 8K of data and recv might only receive the first 1K, with the rest remaining in the network buffer; you should call it repeatedly until you have read everything you expect.
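Putting these points together, the client's buffer handling might look roughly like this (a sketch only, with error handling abbreviated):

std::vector<int> array(maximum + 1);                 // zero-initialized, no memset needed
const size_t nbytes = array.size() * sizeof(int);    // send/recv count bytes, not ints

if (send(sockfd, (const char*)&array[0], nbytes, 0) < 0)
    perror("ERROR sending to socket");
// recv() must likewise be called in a loop until all nbytes have arrived.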
One thing that seems obviously incorrect is:
mathFunction(array);
doesn't tell mathFunction() how many elements are in the array. In fact, you throw away this information when you call recv() by not storing it anywhere (all your code does is check to see if it's less than zero, but doesn't use it if it is positive). When calling recv(), your code must be prepared to receive any number of bytes from 1 through maximum. If you don't get all the bytes you ask for, then you need to call recv() again to get more.
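For illustration, one common way to do that on a TCP stream when the expected size is known (the helper name recv_all is made up, and error handling is minimal):

#include <sys/socket.h>

// Keep calling recv() until len bytes have arrived, the peer closes the
// connection, or an error occurs.
bool recv_all(int fd, char* buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return false;        // 0 = connection closed, -1 = error
        got += (size_t)n;
    }
    return true;
}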
I'm writing a chat program, and my receive function sometimes does not wait at all. Here is the receiving code; the important parts are basically the first half, but I've added the whole function just in case. (Edit: the commenting is for myself, not notes to you guys reading! Sorry!)
ReceiveStatus Server::Receive(PacketInternal*& packetInternalOut)
{
fd_set fds ;
int n ;
struct timeval tv ;
// Set up the file descriptor set.
FD_ZERO(&fds) ;
FD_SET(*p_socket, &fds) ;
// Set up the struct timeval for the timeout.
tv.tv_sec = NETWORKTIMEOUTSEC ;
tv.tv_usec = NETWORKTIMEOUTUSEC ;
// Wait until timeout or data received.
n = select ( *p_socket, &fds, NULL, NULL, &tv ) ;
if ( n == 0)
{
return ReceiveStatus::ReceiveTimeout;
}
else if( n == -1 )
{
return ReceiveStatus::ReceiveSocketError;
}
//need to make this more flexible so it can support others
sockaddr_in fromAddr;
int flags = 0;
int fromLength = sizeof(fromAddr);
char dataIn[TOTALPACKETSIZE];
int bytesIn = recvfrom(*p_socket, dataIn, TOTALPACKETSIZE, flags, (SOCKADDR*)&fromAddr, &fromLength);
// Convert fromAddr into ip, port
if(bytesIn == SOCKET_ERROR)
{
return ReceiveStatus::ReceiveSocketError;
}
if(bytesIn > 0)
{
memcpy(packetInternalOut,dataIn,bytesIn);
return ReceiveStatus::ReceiveSuccessful;
}
else
{
return ReceiveStatus::ReceiveEmpty;
}
}
Is there anything that could affect whether or not this works? My chat program can be either a server or a client; they both use this same code. The server, when waiting for a connection, sits on select() for 100 seconds, since NETWORKTIMEOUTSEC = 100. But in the chat program, whenever I want to send a message, I first send a transfer request and then wait for an acknowledgement (for the acknowledgement packet, I need to call Receive again). This is the step that does not wait: my ReceiveAck function calls Receive(), and Receive just runs straight through the entire code. I can test this by creating a client with no server. If I send a message when there is no server, it should wait 100 seconds for an acknowledgement and then time out. But instead, as soon as I hit enter, it says it timed out.
I can't work out what would make it skip this step. I have debugged my chat program in both its server and client states; the values of tv and fds are the same in both, yet the server will wait and the client won't...
The first parameter to select() must be one greater than the highest-numbered descriptor in any of the sets. So you need:
n = select ( *p_socket + 1, &fds, NULL, NULL, &tv ) ;
Select also returns early (i.e. without any of the sockets having data present) when your application is hit by a signal. So if your app uses a lot of usleep() and friends in a different thread, you might be in for a surprise.
select() should always be used in a loop. You must check its return value for three conditions:
-1 (an error), which you must evaluate to determine whether it is fatal. EINTR is an example of a non-fatal error.
Zero, in which case some indeterminate amount of time has passed and, if you care how long it has been, you need to check the time separately.
A positive value, in which case you should check all of the flagged descriptors and act on them.
In all cases, you should check whether any other conditions exist that might make you want to exit the loop, such as how much time has actually passed.
Note that the first parameter to select() should generally be the constant FD_SETSIZE. There is little to be gained in setting it to anything else.
Also note that just because you received a datagram doesn't mean you received the datagram you wanted. You need a way to check that you did not get some random datagram that happened to be floating around on the network (it happens). Along those lines, make sure TOTALPACKETSIZE is 65536, because that's theoretically (approximately) how big a random packet could be.
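A rough skeleton of that loop, slotted into the question's Receive() and reusing its names (POSIX-style error handling shown; with Winsock you would check WSAGetLastError() rather than errno):

for (;;) {
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(*p_socket, &fds);

    struct timeval tv;
    tv.tv_sec  = NETWORKTIMEOUTSEC;
    tv.tv_usec = NETWORKTIMEOUTUSEC;

    int n = select(*p_socket + 1, &fds, NULL, NULL, &tv);
    if (n == -1) {
        if (errno == EINTR)          // interrupted by a signal: not fatal, wait again
            continue;
        return ReceiveStatus::ReceiveSocketError;
    }
    if (n == 0) {
        // timeout: check elapsed time here and decide whether to keep waiting
        return ReceiveStatus::ReceiveTimeout;
    }
    if (FD_ISSET(*p_socket, &fds))
        break;                       // data is ready; fall through to recvfrom()
}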
So basically I'm making an MMO server in C++ that runs on Linux. It works fine at first, but after maybe 40 seconds with 50 clients it will completely pause. When I debug it, I find that the last frame it's on before it stops responding is syscall(), at which point it disappears into the kernel. Once it disappears into the kernel it never even returns a value... it's completely baffling.
The 50 clients each send 23 bytes every 250 milliseconds. These 23 bytes are then broadcast to all of the other 49 clients. This process begins to slow down and eventually comes to a complete halt where the kernel never returns from the syscall for the send() command. What are some possible reasons here? This is truly driving me nuts!
One option I found is Nagle's algorithm, which forces delays. I've tried toggling it, but it still happens.
Edit: The program is stuck here, specifically in the send, which in turn calls syscall():
bool EpollManager::s_send(int curFD, unsigned char buf[], int bufLen, int flag)
// Meant to counteract partial sends
{
int sendRetVal = 0;
int bytesSent = 0;
while(bytesSent != bufLen)
{
print_buffer(buf, bufLen);
sendRetVal = send(curFD, buf + bytesSent, bufLen - bytesSent, flag);
cout << sendRetVal << " ";
if(sendRetVal == -1)
{
perror("Sending failed");
return false;
}
else
bytesSent += sendRetVal;
}
return true;
}
Also this is the method which calls the s_send.
void EpollManager::broadcast(unsigned char msg[], int bytesRead, int sender)
{
for(iMap = connections.begin(); iMap != connections.end(); iMap++)
{
if(sender != iMap->first)
{
if(s_send(iMap->first, msg, bytesRead, 0)) // MSG_NOSIGNAL
{
if(debug)
{
print_buffer(msg, bytesRead);
cout << "sent on file descriptor " << iMap->first << '\n';
}
}
}
}
if(connections.find(sender) != connections.end())
connections[sender]->reset_batch();
}
And to clarify, connections is an instance of Boost's unordered_map. The data that the program chokes on is not unique in any way either: it has been broadcast successfully to other file descriptors, but the program chokes on an (at least seemingly) random one.
TCP congestion control, together with Nagle's algorithm and a full send buffer (the SO_SNDBUF socket option), will cause send() and similar operations to block.
The lazy way around this is to dedicate a separate thread to each socket, but that does not scale far. On Linux you should use non-blocking sockets with poll() or similar; on Windows you would investigate I/O completion ports. Look at middleware libraries to simplify this: libevent is a popular cross-platform example that recently gained Windows IOCP support, or alternatively Boost.Asio for C++.
A useful article to read on IO scalability would be The C10K problem.
Note that you really do not want to disable Nagle's algorithm on Internet traffic; even on a LAN you might see major problems without some form of congestion feedback.
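For illustration, a minimal sketch of the non-blocking approach on Linux (the function name and the one-second poll timeout are arbitrary):

#include <sys/socket.h>
#include <fcntl.h>
#include <poll.h>
#include <errno.h>

// Send without letting one slow client stall the whole server: the socket is
// non-blocking, and we wait with poll() when the kernel send buffer is full.
bool send_nonblocking(int fd, const unsigned char* buf, size_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);   // normally done once, when the socket is accepted
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, MSG_NOSIGNAL);
        if (n > 0) {
            sent += (size_t)n;
        } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            struct pollfd pfd = { fd, POLLOUT, 0 };
            poll(&pfd, 1, 1000);       // wait up to 1 s for buffer space, then retry
        } else {
            return false;              // real error
        }
    }
    return true;
}

In a real epoll-based server you would normally queue the unsent bytes and wait for EPOLLOUT on that descriptor instead of polling inline; the point of the sketch is simply that a blocking send() must never be issued from the loop that serves every client.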
The kernel keeps a finite buffer for sending data. If the receiver isn't receiving, that buffer will fill up and the sender will block. Could that be the problem?
I am implementing a Go Back N protocol for a networking class. I am using WaitForSingleObject to know when the socket on my receiver thread has data inside it:
int result = WaitForSingleObject(dataReady, INFINITE);
For Go-Back-N, I have to send multiple packets to the receiver at once, manipulate the data, and then send an ACK packet back to the sender. I have a variable expectedSeq that I increment each time I send an ACK so that I know if a packet arrives out of order.
However, when the first packet arrives, my debugger tells me that expectedSeq has been incremented, but when the next packet is being manipulated, expectedSeq still has its original value.
Does anyone have any idea why this is occurring? If I put in an if statement such as
if(recvHeader->seq == expectedSeq+1)
the second packet registers properly and sends an ACK. Clearly this will not work for any number of packets higher than 2, though.
I even tried wrapping the entire section (including the original WaitForSingleObject) in a semaphore in an attempt to make everything wait until after the variable was incremented, but this didn't work either.
Thanks for your help!
Eric
Per Request: more code!
WaitForSingleObject(semaphore, INFINITE);
int result = WaitForSingleObject(dataReady, timeout);
if(result == WAIT_TIMEOUT)
rp->m->printf("Receiver:\tThe packet was lost on the network.\n");
else {
int bytes = recvfrom(sock, recv_buf, MAX_PKT_SIZE, 0, 0, 0);
if(bytes > 0) {
rp->m->printf("Receiver:\tPacket Received\n");
if(recvHeader->syn == 1 && recvHeader->win > 0)
windowSize = recvHeader->win;
//FORMER BUG: (recvHeader->syn == 1 ? expectedSeq = recvHeader->seq : expectedSeq = 0);
if(recvHeader->syn)
expectedSeq = recvHeader->seq;
switch(rp->protocol) {
case RDT3:
...
break;
case GBN:
if(recvHeader->seq == expectedSeq) {
GBNlastACK = expectedACK;
//Setup sendHeader for the protocol
sendHeader->ack = recvHeader->seq;
...
sendto(sock, send_buf, sizeof(send_buf), 0, (struct sockaddr*) &send_addr, sizeof(struct sockaddr_in));
if(sendHeader->syn == 0) { //make sure its not the first SYN connection packet
WaitForSingleObject(mutex, INFINITE);
expectedSeq++;
ReleaseMutex(mutex);
if(recvHeader->fin) {
fin = true;
rp->m->printf("Receiver:\tFin packet has been received. SendingOK\n");
}
}
}
break;
}//end switch
}
Exactly how and when do you increment expectedSeq? There may be a memory barrier issue involved, so you might need to access expectedSeq inside a critical section (or protected by some other synchronization object) or use Interlocked APIs to access the variable.
For example, the compiler might be caching the value of expectedSeq in a register, so synchronization APIs might be necessary to prevent that from happening in critical areas of the code. Note that using the volatile keyword may seem to help, but it is probably not entirely sufficient on its own (though it might be with MSVC, since Microsoft's compiler uses full memory barriers when dealing with volatile objects).
I think you'll need to post more code showing exactly how you're handling expectedSeq.
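For example, a minimal sketch of the Interlocked approach (the helper names are made up):

#include <windows.h>

// Shared between the receiver thread and the rest of the protocol code.
static volatile LONG expectedSeq = 0;

// Call after an in-order packet has been ACKed.
void BumpExpectedSeq()
{
    InterlockedIncrement(&expectedSeq);                     // atomic increment with a full memory barrier
}

// Call wherever the current value is compared against an incoming seq.
LONG ReadExpectedSeq()
{
    return InterlockedCompareExchange(&expectedSeq, 0, 0);  // atomic read of the current value
}

Protecting every access with the existing mutex would also work; the Interlocked functions are just a lighter-weight way to get the same atomicity and ordering for a single integer.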
As I was entering my code here (hand-typing, since my code was on another computer), I realized a very stupid bug in how I was setting the initial value of expectedSeq: I was resetting it to 0 on every pass through the packet loop.
You have to love the code that comes out when you are coding until 5 am!