I am implementing a Go-Back-N protocol for a networking class. I am using WaitForSingleObject to know when the socket on my receiver thread has data in it:
int result = WaitForSingleObject(dataReady, INFINITE);
For Go-Back-N, multiple packets are sent to the receiver at once; the receiver handles the data and then sends an ACK packet back to the sender. I have a variable expectedSEQ that I increment each time I send an ACK, so that I can tell when a packet arrives out of order.
However, when the first packet arrives, my debugger tells me that expectedSEQ has been incremented, but when the next packet is being processed, expectedSEQ still has its original value.
Does anyone have any idea why this is occurring? If I put in an if statement like this:
if(recvHeader->seq == expectedSeq+1)
the second packet registers properly and sends an ACK. Clearly this will not work for more than two packets, though.
I even tried wrapping the entire section (including the original WaitForSingleObject) in a semaphore, in an attempt to make everything wait until after the variable was incremented, but this didn't work either.
Thanks for your help!
Eric
Per Request: more code!
WaitForSingleObject(semaphore, INFINITE);
int result = WaitForSingleObject(dataReady, timeout);
if(result == WAIT_TIMEOUT)
    rp->m->printf("Receiver:\tThe packet was lost on the network.\n");
else {
    int bytes = recvfrom(sock, recv_buf, MAX_PKT_SIZE, 0, 0, 0);
    if(bytes > 0) {
        rp->m->printf("Receiver:\tPacket Received\n");
        if(recvHeader->syn == 1 && recvHeader->win > 0)
            windowSize = recvHeader->win;
        //FORMER BUG: (recvHeader->syn == 1 ? expectedSeq = recvHeader->seq : expectedSeq = 0);
        if(recvHeader->syn)
            expectedSeq = recvHeader->seq;
        switch(rp->protocol) {
        case RDT3:
            ...
            break;
        case GBN:
            if(recvHeader->seq == expectedSeq) {
                GBNlastACK = expectedACK;
                //Setup sendHeader for the protocol
                sendHeader->ack = recvHeader->seq;
                ...
                sendto(sock, send_buf, sizeof(send_buf), 0, (struct sockaddr*) &send_addr, sizeof(struct sockaddr_in));
                if(sendHeader->syn == 0) { //make sure it's not the first SYN connection packet
                    WaitForSingleObject(mutex, INFINITE);
                    expectedSeq++;
                    ReleaseMutex(mutex);
                    if(recvHeader->fin) {
                        fin = true;
                        rp->m->printf("Receiver:\tFin packet has been received. Sending OK\n");
                    }
                }
            }
            break;
        }//end switch
    }
}
Exactly how and when do you increment expectedSeq? There may be a memory-barrier issue involved, so you might need to access expectedSeq inside a critical section (or protected by some other synchronization object), or use the Interlocked APIs to access the variable.
For example, the compiler might be caching the value of expectedSeq in a register, so synchronization APIs might be necessary to prevent that from happening at critical areas of the code. Note that using the volatile keyword may seem to help, but it's probably not entirely sufficient on its own (though it might be with MSVC, since Microsoft's compiler uses full memory barriers when dealing with volatile objects).
I think you'll need to post more code showing exactly how you're handling expectedSeq.
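For instance, a minimal sketch of the Interlocked approach (assuming expectedSeq is a shared LONG; the function names here are illustrative, not from the question's code):

#include <windows.h>

volatile LONG expectedSeq = 0;  // shared between the receiver threads

// Instead of expectedSeq++ under a mutex, an atomic increment:
void advance_expected_seq() {
    InterlockedIncrement(&expectedSeq);  // atomic read-modify-write with a full barrier
}

// Reading the current value from another thread:
LONG read_expected_seq() {
    // A compare-exchange with identical exchange/comparand values is an atomic read.
    return InterlockedCompareExchange(&expectedSeq, 0, 0);
}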
As I was entering my code (hand-typing it, since my code was on another computer), I realized a very stupid bug in how I was setting the original value of expectedSeq: I was resetting it to 0 on every pass through the packet-handling loop.
Have to love the code that comes out when you are coding until 5 am!
Related
So this is the first time I'm actually asking a question in here, although I have been using this site for ages!
My problem is a bit tricky. I'm trying to develop a client-server application for sending large files, using UDP with my own error checking and flow control. I've developed a fully functioning server and client: the client requests a specific file, and the server starts sending it. The file is read in parts into a buffer, to avoid having to read small bits of the file every time a packet is sent, thus saving processing time. Packets consist of 1400 bytes of actual data plus a header of 28 bytes (sequence number, ACK number, checksum, etc.).
So I had the basics down: a simple stop-and-wait protocol. Send a packet and receive an ACK before sending the next packet.
To be able to implement a smarter flow-control algorithm, starting with just some windowing, I have to run the sending part and the ACK-receiving part in two different threads. Here's where I ran into problems. This is my first time working with threads, so please bear with me.
My problem is that the file written from the packets on the client side is corrupt. When testing with a small JPG file, the file is corrupt only 50% of the time; when testing with an MP4 file, it's always corrupt! So I guess the threads somehow rearrange the order in which the packets are sent? I use sequence numbers, so the problem must occur before the sequence number is assigned to each packet...
I know for sure that the part where I split up the file is correct, and also where I reassemble it on the client side, since I tested both before trying to implement the threading. It should also be noted that I copied the exact sending part of the code into the sending thread, and it also worked perfectly before being put into a thread. This is why I'm posting only the threading part of my code, since this is clearly what is creating the problem (and the entire project would take up a lot of space).
My sending thread code:
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condition_var = PTHREAD_COND_INITIALIZER;

static void *send_thread(void *) {
    if (file.is_open()) {
        while (!file.reachedEnd()) {
            pthread_mutex_lock(&mutex);
            if (seq <= upperwindow) {
                int blocksize = file.getNextBlocksize();
                senddata = new unsigned char[blocksize + 28];
                Packet to_send;
                to_send.data = new char[blocksize];
                to_send.sequenceNumber = seq;
                to_send.ackNumber = 0;
                to_send.type = 55; // DATA
                file.readBlock(*to_send.data);
                createPacket(senddata, to_send, blocksize + 28);
                if (server.sendToClient(reinterpret_cast<char*>(senddata), blocksize + 28) == -1)
                    perror("sending failed");
                incrementSequenceNumber(seq);
                /* free memory */
                delete [] to_send.data;
                delete [] senddata;
            }
            pthread_mutex_unlock(&mutex);
        }
        pthread_exit(NULL);
    } else {
        perror("file opening failed!");
        pthread_exit(NULL);
    }
}
My receiving ack thread code:
static void *wait_for_ack_thread(void *) {
    while (!file.reachedEnd()) {
        Packet ack;
        if (server.receiveFromClient(reinterpret_cast<char*>(receivedata), 28) == -1) {
            perror("error receiving ack");
        } else {
            getPacket(receivedata, ack, 28);
            pthread_mutex_lock(&mutex);
            incrementSequenceNumber(upperwindow);
            pthread_mutex_unlock(&mutex);
        }
    }
    pthread_exit(NULL);
}
All comments are very much appreciated! :)
EDIT:
Added code of the readBlock function:
void readBlock(char & in) {
    memcpy(&in, buffer + block_position, blocksize);
    block_position = block_position + blocksize;
    if (block_position == buffersize) {
        buf_position++;
        if (buf_position == buf_reads) {
            buffersize = filesize % buffersize;
        }
        fillBuffer();
        block_position = 0;
    }
    if (blocksize < MAX_DATA_SIZE) {
        reached_end = true;
        return;
    }
    if ((buffersize - block_position) < MAX_DATA_SIZE) {
        blocksize = buffersize % blocksize;
    }
}
Create an array that represents the status of the communication.
0 means unsent (or sent, and the receiver reported an error). 1 means sending. 2 means sent and ACK received.
Allocate this array, and guard access to it with a mutex.
The sending thread keeps two pointers into the array -- "has been sent up to" and "should send next". These are owned by the sending thread.
The ack thread simply gets ACK packets, locks the array, and performs the state transition.
The sending thread locks the array and checks whether it can advance the "has been sent up to" pointer (or whether it should resend old data). If it notices an error, it moves the "should send next" pointer back to point at it.
It then sees whether it should send something next. If so, it marks that entry as "being sent", unlocks the array, and sends it.
If the sending thread did no work and found nothing to do, it goes to sleep on a timeout, possibly with a "kick awake" from the ack thread.
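A minimal sketch of this bookkeeping, using pthreads to match the code above (the array size and function names are assumptions for illustration):

#include <pthread.h>

enum SlotState { UNSENT = 0, SENDING = 1, ACKED = 2 };

const int TOTAL_PACKETS = 1024;                  // assumed; derive from the file size
SlotState status[TOTAL_PACKETS];                 // guarded by status_mutex
pthread_mutex_t status_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t kick = PTHREAD_COND_INITIALIZER;  // ack thread "kicks" the sender awake

// Ack thread: lock the array, do the state transition, wake the sender.
void on_ack(int seq, bool ok) {
    pthread_mutex_lock(&status_mutex);
    status[seq] = ok ? ACKED : UNSENT;           // an error makes the slot resendable
    pthread_cond_signal(&kick);
    pthread_mutex_unlock(&status_mutex);
}

// Sender thread: claim the next unsent slot under the lock, send outside it.
// Returns -1 when there is nothing to do (the caller then sleeps on `kick`).
int claim_next_slot() {
    pthread_mutex_lock(&status_mutex);
    int next = -1;
    for (int i = 0; i < TOTAL_PACKETS; ++i) {
        if (status[i] == UNSENT) { next = i; break; }
    }
    if (next >= 0)
        status[next] = SENDING;                  // mark "being sent" before unlocking
    pthread_mutex_unlock(&status_mutex);
    return next;
}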
Now, note that the client can get the packets sent by this in the wrong order, unless you limit it to having 1 packet in transit.
The connection status array does not have to be a literal array, but it is easier if you start with that and optimize later.
On the receiving end, you have to pay attention to the sequence number, as the packets can get there out of sequence. To test this, write a server that sends the packets in the wrong order on purpose, and ensure that the client manages to stitch it together properly.
I have a socket program which acts as both client and server.
It initiates a connection on an input port and reads data from it. In a real-time scenario, it reads data on the input port and sends it (record by record) out on the output port.
The problem is that while sending data to the output port, CPU usage increases to 50%, which is not permissible.
while(1)
{
    if(IsInputDataAvail == 1) //check if data is available on input port
    {
        //condition to avoid duplications while sending
        if(LastRecordSent < LastRecordRecvd)
        {
            record_time temprt;
            list<record_time> BufferList;
            list<record_time>::iterator j;
            list<record_time>::iterator i;
            // Storing into a temp list
            for(i = L.begin(); i != L.end(); ++i)
            {
                if((i->recordId > LastRecordSent) && (i->recordId <= LastRecordRecvd))
                {
                    temprt.listrec = i->listrec;
                    temprt.recordId = i->recordId;
                    temprt.timestamp = i->timestamp;
                    BufferList.push_back(temprt);
                }
            }
            //Sending to output port
            for(j = BufferList.begin(); j != BufferList.end(); ++j)
            {
                LastRecordSent = j->recordId;
                std::string newlistrecord = j->listrec;
                newlistrecord.append("\n");
                char* newrecord = new char[newlistrecord.size() + 1];
                strcpy(newrecord, newlistrecord.c_str());
                if(s.OutputClientAvail() == 1) //check if output client is available
                {
                    int ret = s.SendBytes(newrecord, strlen(newrecord));
                    if(ret < 0)
                    {
                        log1.AddLogFormatFatal("Nice Send Thread : Nice Client Disconnected");
                        --connected;
                        return;
                    }
                }
                else
                {
                    log1.AddLogFormatFatal("Nice Send Thread : Nice Client Timedout..connection closed");
                    --connected; //if output client not available disconnect after a timeout
                    return;
                }
            }
        }
    }
    // Sleep(100); if we include sleep here CPU usage is less, but to send data in real time I need to remove this sleep.
} //end of while loop
If I remove the Sleep(), CPU usage goes very high while sending data to the output port.
Are there any ways to maintain real-time data transfer while reducing CPU usage? Please suggest.
There are two potential CPU sinks in the listed code. First, the outer loop:
while (1)
{
    if (IsInputDataAvail == 1)
    {
        // Not run most of the time
    }
    // Sleep(100);
}
Given that the Sleep call significantly reduces your CPU usage, this spin-loop is the most likely culprit. It looks like IsInputDataAvail is a variable set by another thread (though it could be a preprocessor macro), which would mean that almost all of that CPU is being used to run this one comparison instruction and a couple of jumps.
The way to reclaim that wasted power is to block until input is available. Your reading thread probably does so already, so you just need some sort of semaphore to communicate between the two, with a system call to block the output thread. Where available, the ideal option would be sem_wait() in the output thread, right at the top of your loop, and sem_post() in the input thread, where it currently sets IsInputDataAvail. If that's not possible, the self-pipe trick might work in its place.
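A minimal sketch of that handoff with POSIX semaphores (the function names are placeholders for where the code currently sets and tests IsInputDataAvail):

#include <semaphore.h>

sem_t data_avail;  // initialize once at startup: sem_init(&data_avail, 0, 0);

// Input thread: call where IsInputDataAvail is currently set to 1.
void signal_data_ready() {
    sem_post(&data_avail);  // wakes the output thread
}

// Output thread: call at the top of the while loop, replacing the spin.
void wait_for_data() {
    sem_wait(&data_avail);  // blocks without consuming CPU until posted
}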
The second potential CPU sink is in s.SendBytes(). If a positive result indicates that the record was fully sent, then that method must be using a loop. It probably uses a blocking call to write the record; if it doesn't, then it could be rewritten to do so.
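If it does need rewriting, the usual shape of such a loop is something like this (a sketch assuming a blocking socket and the POSIX send() call):

#include <sys/socket.h>

// Keep calling send() until the whole record is on the wire.
ssize_t send_all(int sock, const char* buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n <= 0)
            return -1;  // error or connection closed
        sent += n;      // partial write: advance past what was sent and retry
    }
    return (ssize_t)sent;
}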
Alternatively, you could rewrite half the application to use select(), poll(), or a similar method to merge reading and writing into the same thread, but that's far too much work if your program is already mostly complete.
if(IsInputDataAvail==1)//check if data is available on input port
Get rid of that. Just read from the input port; it will block until data is available. This is where most of your CPU time is going. However, there are other problems:
std::string newlistrecord = j->listrec;
Here you are copying data.
newlistrecord.append("\n");
char* newrecord= new char [newlistrecord.size()+1];
strcpy (newrecord, newlistrecord.c_str());
Here you are copying the same data again. You are also dynamically allocating memory, and leaking it.
if ( s.OutputClientAvail() == 1) //check if output client is available
I don't know what this does but you should delete it. The following send is the time to check for errors. Don't try to guess the future.
int ret = s.SendBytes(newrecord,strlen(newrecord));
Here you are recomputing the length of a string whose length you probably already knew back when you set j->listrec. It would be much more efficient to just call s.SendBytes() directly with j->listrec and then again with "\n" than to do all this; TCP will coalesce the data anyway.
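A sketch of the streamlined inner loop, assuming listrec is a std::string and that SendBytes accepts a const char* buffer (both suggested, but not guaranteed, by the code above):

// No temporary string, no new[], no strlen(): send the record, then the newline.
for (j = BufferList.begin(); j != BufferList.end(); ++j) {
    LastRecordSent = j->recordId;
    if (s.SendBytes(j->listrec.c_str(), j->listrec.size()) < 0 ||
        s.SendBytes("\n", 1) < 0) {
        log1.AddLogFormatFatal("Nice Send Thread : Nice Client Disconnected");
        --connected;
        return;
    }
}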
I seem to have a problem with select.
while(!sendqueue.empty())
{
    if(!atms_connection.connected)
    {
        //print error message
        goto RECONNECT;
    }
    //select new
    FD_ZERO(&wfds);
    FD_SET(atms_connection.socket, &wfds);
    tv.tv_sec = 1;
    tv.tv_usec = 0;
    retval = select(atms_connection.socket + 1, NULL, &wfds, NULL, &tv);
    if (retval == -1) {
        printf("Select failed\n");
        break;
    }
    else if (retval) {
        printf("Sent a Message.\n");
    }
    else {
        //printf("retval value is %d\n", retval);
        printf("Server buffer is full, try again...\n");
        break;
    }
    n = write(atms_connection.socket, sendqueue.front().c_str(), sendqueue.front().length());
}
The function belongs to a thread which, when it acquires the lock, drains a queue by calling select() and writing to the socket in a loop until the queue is empty.
The first time the thread gets the lock, select() works fine; but the second time it gets the lock and enters the while loop, select() always returns 0.
For the record, it used to work fine a while ago, and I haven't changed that code since then.
select() returns 0 when it times out. The following line sets the timeout to 1 second:
tv.tv_sec = 1;
Usually a socket is ready for writing, and select() returns immediately. However, if there is no room in the socket's output buffer for the new data, select() won't flag the socket as ready for writing.
For example, this condition can happen when the other side of the connection is not calling recv/read: the amount of unacknowledged data grows, and the buffer eventually becomes full. Since the timeout is fairly small, select() then returns frequently with a return value of 0.
In addition to what has already been mentioned, the select() API itself only tells you whether a descriptor is ready; it can be used for reading or writing depending on which set the descriptor is placed in.
In your case, the "Sent a Message." print gives the illusion that the data has actually been written, which is not the case. You still have to make a write call on the descriptor that select() reported as ready.
select() does not magically read or write a buffer by itself. If you are a listening server, you call accept() or read() after select() returns successfully. If you are a client and intend to write (as appears to be your case), you need to make an explicit write() or send() call.
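Putting both points together, a sketch of the corrected loop shape (assuming sendqueue is a std::queue<std::string>, which the .front()/.empty() calls suggest):

#include <sys/select.h>
#include <unistd.h>
#include <cstdio>
#include <string>
#include <queue>

void drain_queue(int sock, std::queue<std::string>& sendqueue) {
    while (!sendqueue.empty()) {
        fd_set wfds;
        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        struct timeval tv = {1, 0};    // 1-second timeout, as before
        int retval = select(sock + 1, NULL, &wfds, NULL, &tv);
        if (retval <= 0)
            break;                     // error, or the output buffer is still full
        const std::string& msg = sendqueue.front();
        if (write(sock, msg.c_str(), msg.length()) < 0)
            break;
        printf("Sent a message.\n");   // print only after the write succeeds
        sendqueue.pop();               // drop the record we just sent
    }
}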
I'm writing a chat program, and my receive function sometimes does not wait at all. Here is the receiving code; the important parts are basically the first half, but I've added the whole function just in case. (Edit: the commenting is for myself, not notes to you guys reading! Sorry!)
ReceiveStatus Server::Receive(PacketInternal*& packetInternalOut)
{
    fd_set fds;
    int n;
    struct timeval tv;

    // Set up the file descriptor set.
    FD_ZERO(&fds);
    FD_SET(*p_socket, &fds);

    // Set up the struct timeval for the timeout.
    tv.tv_sec = NETWORKTIMEOUTSEC;
    tv.tv_usec = NETWORKTIMEOUTUSEC;

    // Wait until timeout or data received.
    n = select(*p_socket, &fds, NULL, NULL, &tv);
    if (n == 0)
    {
        return ReceiveStatus::ReceiveTimeout;
    }
    else if (n == -1)
    {
        return ReceiveStatus::ReceiveSocketError;
    }

    //need to make this more flexible so it can support others
    sockaddr_in fromAddr;
    int flags = 0;
    int fromLength = sizeof(fromAddr);
    char dataIn[TOTALPACKETSIZE];
    int bytesIn = recvfrom(*p_socket, dataIn, TOTALPACKETSIZE, flags, (SOCKADDR*)&fromAddr, &fromLength);

    // Convert fromAddr into ip, port
    if (bytesIn == SOCKET_ERROR)
    {
        return ReceiveStatus::ReceiveSocketError;
    }
    if (bytesIn > 0)
    {
        memcpy(packetInternalOut, dataIn, bytesIn);
        return ReceiveStatus::ReceiveSuccessful;
    }
    else
    {
        return ReceiveStatus::ReceiveEmpty;
    }
}
Is there anything that could affect whether this works or not? My chat program can be either a server or a client, and both use this same code. The server, when waiting for a connection, sits on select() for 100 seconds, since NETWORKTIMEOUTSEC = 100. But in the chat program, whenever I want to send a message, I first send a transfer request and then wait for an acknowledgement (for the acknowledgement packet, I need to call Receive() again). This is the step that does not wait: my ReceiveAck function calls Receive(), and Receive() just runs straight through the entire function. I can test this by creating a client with no server. If I send a message when there is no server, it should wait 100 seconds for an acknowledgement and then time out. Instead, as soon as I hit enter, it says it timed out.
I can't work out what would be making it skip this step. I have debugged my chat program in both its server and client states. The values of tv and fds are the same in both, yet the server will wait and the client won't...
The first parameter to select() must be one greater than the largest socket descriptor in any of the sets. So you need:
n = select ( *p_socket + 1, &fds, NULL, NULL, &tv ) ;
Select also returns early (i.e. without any of the sockets having data present) when your application is hit by a signal. So if your app uses a lot of usleep() and friends in a different thread, you might be in for a surprise.
select() should always be used in a loop. You must check its return value for three conditions:
-1 (an error), which you must evaluate to determine if it is fatal. EINTR is an example of a non-fatal error.
a zero, in which case some indeterminate amount of time has passed and, if you care about how long it's been, you need to check the time separately.
A positive value, in which case you should check all of the flagged descriptors and act on them.
In all cases, you should check whether any other conditions exist which might make you want to exit the loop, such as how much time has actually passed.
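A minimal sketch of such a loop (POSIX; simplified in that a careful version would recompute the remaining time after each retry):

#include <sys/select.h>
#include <cerrno>

// Returns 1 if sock is readable, 0 on timeout, -1 on a fatal error.
int wait_readable(int sock, struct timeval timeout) {
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(sock, &fds);
        struct timeval tv = timeout;  // select() may modify tv on some systems
        int n = select(sock + 1, &fds, NULL, NULL, &tv);
        if (n == -1) {
            if (errno == EINTR)
                continue;             // interrupted by a signal: not fatal, retry
            return -1;                // fatal error
        }
        if (n == 0)
            return 0;                 // timed out; check the clock if it matters
        return 1;                     // descriptor is ready
    }
}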
Note that the first parameter to select() should generally be the constant FD_SETSIZE. There is little to be gained in setting it to anything else.
Also note that just because you received a datagram doesn't mean you received the datagram you wanted. You need a way to check that you did not get some random datagram that happened to be floating around on the network (it happens). Along those lines, make sure TOTALPACKETSIZE is 65536, because that's theoretically (approximately) how big a random packet could be.
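One sketch of such a sanity check (the magic value and header layout are invented for illustration; they are not from the question's code):

#include <cstdint>
#include <cstring>

// Hypothetical header: a magic marker lets the receiver reject stray datagrams.
struct PacketHeader {
    uint32_t magic;     // e.g. 0x43484154 ("CHAT")
    uint32_t sequence;
};

bool looks_like_ours(const char* data, int len) {
    if (len < (int)sizeof(PacketHeader))
        return false;                     // too short to be one of our packets
    PacketHeader h;
    std::memcpy(&h, data, sizeof h);      // copy out to avoid unaligned access
    return h.magic == 0x43484154;         // drop anything without our marker
}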
I've implemented a simple socket wrapper class. It includes a non-blocking function:
void Socket::set_non_blocking(const bool b) {
    mNonBlocking = b; // class member for reference elsewhere
    int opts = fcntl(m_sock, F_GETFL);
    if (opts < 0) return;
    if (b)
        opts |= O_NONBLOCK;
    else
        opts &= ~O_NONBLOCK;
    fcntl(m_sock, F_SETFL, opts);
}
The class also contains a simple receive function:
int Socket::recv(std::string& s) const {
    char buffer[MAXRECV + 1];
    s = "";
    memset(buffer, 0, MAXRECV + 1);
    int status = ::recv(m_sock, buffer, MAXRECV, 0);
    if (status == -1) {
        if (!mNonBlocking)
            std::cout << "Socket, error receiving data\n";
        return 0;
    } else if (status == 0) {
        return 0;
    } else {
        s = buffer;
        return status;
    }
}
In practice, there seems to be a ~15ms delay when Socket::recv() is called. Is this delay avoidable? I've seen some non-blocking examples that use select(), but don't understand how that might help.
It depends on how you're using the sockets. If you have multiple sockets and you loop over all of them checking for data, that may account for the delay.
With non-blocking recv you are depending on data already being there. If your application needs to use more than one socket, you will have to constantly poll each socket in turn to find out whether any of them has data available.
This is bad for system resources because it means your application is constantly running even when there is nothing to do.
You can avoid that with select(). You basically set up your sockets, add them to a group, and select() on the group. When anything happens on any of the selected sockets, select() returns, specifying what happened and on which socket.
For some code showing how to use select(), look at Beej's Guide to Network Programming.
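As an illustration, a sketch of one select() pass over a group of sockets (handle_readable is a placeholder for your per-socket read logic, not part of the wrapper class above):

#include <sys/select.h>
#include <algorithm>
#include <vector>

void handle_readable(int sock);   // placeholder: read/process one ready socket

// Check a whole group of sockets with a single blocking call.
void poll_group(const std::vector<int>& socks, struct timeval tv) {
    fd_set fds;
    FD_ZERO(&fds);
    int maxfd = -1;
    for (int s : socks) {          // add every socket to the set
        FD_SET(s, &fds);
        maxfd = std::max(maxfd, s);
    }
    if (select(maxfd + 1, &fds, NULL, NULL, &tv) <= 0)
        return;                    // timeout or error: nothing is ready
    for (int s : socks)
        if (FD_ISSET(s, &fds))     // this socket has data waiting
            handle_readable(s);
}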
select() will let you specify a timeout and can test whether the socket is ready to be read from, so you can use something smaller than 15 ms. Incidentally, you need to be careful with that code you have: if the data on the wire can contain embedded NULs, s won't contain all the read data. You should use something like s.assign(buffer, status); instead.
In addition to stefanB's point, I see that you are zeroing out your buffer every time. Why bother? recv() returns how many bytes were actually read. Just zero out the one byte after them (buffer[status] = '\0').
How big is MAXRECV? It might just be that you incur a page fault on the stack growth. Others have already mentioned that zeroing out the receive buffer is completely unnecessary. You also take a memory-allocation and copy hit when you create a std::string out of the received character data.