We're working with a C++ WebRTC data channels library, and in our test application, after sending a number of small packets that together amount to about 256 kB, the usrsctp_sendv() call returns -1 with errno set to EWOULDBLOCK/EAGAIN ("Resource temporarily unavailable"). We believe this is because we're hitting usrsctp's send buffer limit, which is 256 kB by default. We've tried adding sleep delays of various lengths between send calls, hoping the buffer would drain, but nothing works.
The receiving side (a JS web page) does indeed receive all the bytes that we've sent up until the error occurs. It's also worth noting that this only happens when we try to send data from the C++ application to the JS side, and not the other way around. We've looked through Mozilla's data channels implementation, but can't draw any conclusions about what the issue could be.
It is hard to answer such a question straight away. I would start by looking at Wireshark traces to see whether your remote side (the JS page) actually acknowledges the data you send (i.e. whether SACK chunks are sent back) and what receive window (a_rwnd) is reported in those SACKs. It is possible that the issue is not on your side at all: you may be getting EWOULDBLOCK simply because the sending-side SCTP stack cannot flush data from its buffers while it is still awaiting delivery confirmation from the remote end.
Please provide more details about your case and, if possible, sample code for your JS page.
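For what it's worth, a minimal sketch of how the sending side could treat EWOULDBLOCK as "buffer full, back off and retry" rather than as a fatal error; the function name, retry budget, and delay are all hypothetical. Note that if the remote end never acknowledges the outstanding data, as described above, no amount of retrying will help:

```cpp
#include <usrsctp.h>
#include <cerrno>
#include <chrono>
#include <thread>

// Retry wrapper around usrsctp_sendv() for a connected usrsctp socket.
bool send_with_retry(struct socket* sock, const void* data, size_t len) {
    for (int attempt = 0; attempt < 100; ++attempt) {
        ssize_t n = usrsctp_sendv(sock, data, len,
                                  nullptr, 0,        // no destination address
                                  nullptr, 0,        // no ancillary send info
                                  SCTP_SENDV_NOINFO, 0);
        if (n >= 0)
            return true;                             // queued in the send buffer
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return false;                            // a real error, give up
        // Buffer full: give the stack time to receive SACKs and drain.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    return false;                                    // buffer never drained
}
```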
Related
I am creating a very simple server that accepts an HTTP request from a browser (Safari) and sends back a dummy HTTP response such as a "Hello World" message.
My program blocks in recv() because it doesn't know whether the client (browser) has finished sending the HTTP request, and recv() is a blocking function. (A very typical question.)
The most popular answer I found is to send the length of the message before sending the message.
This solution is good, but it doesn't work for me because I have no control over what the client sends. And as far as I know, the browser does not send any message length before the real message.
The second most popular answer is to use asynchronous I/O such as select() or poll(). But personally, I don't think it is really a good strategy: once I have received the whole request from the client, I naturally want to move on to handling it. Why would I keep spending time and resources waiting for something that will never come, even if the wait is no longer blocking? (Creating threads raises a similar question.)
The solution I came up with is to check whether the size of the message received equals the buffer size. For example, say I set recvBufferSize to 32 and the total size of the request message is 70. Then I will receive three packets of size 32, 32, and 6 respectively.
I can tell that the client has finished sending the request because the last packet's size is not equal to recvBufferSize (32).
However, as you can see, this breaks when the request message's size is an exact multiple of 32 (64, 96, 128, ...).
Other approaches might involve setting a timeout, but I am not sure whether they are any good.
Also, I want to build the whole thing myself, so I am not interested in libraries such as ZeroMQ or Boost.Asio.
Can someone comment on my approach or suggest better ways to solve the problem? Thanks a lot!
If you're implementing the HTTP protocol you need to study the HTTP RFCs. There are several different ways you can know the request length, starting with the Content-Length header, and the combined lengths of the chunks if the client is using chunked transfer encoding.
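For illustration, a rough sketch of the Content-Length approach on a blocking POSIX socket; the function name and buffer size are made up, and chunked transfer encoding, case-insensitive header matching, and error handling are all omitted:

```cpp
#include <sys/socket.h>
#include <cstdlib>
#include <string>

// Read one HTTP request: headers first, then exactly Content-Length body bytes.
std::string read_http_request(int fd) {
    std::string req;
    char buf[4096];

    // 1. Accumulate until the end-of-headers marker "\r\n\r\n".
    while (req.find("\r\n\r\n") == std::string::npos) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) return req;              // connection closed or error
        req.append(buf, n);
    }

    // 2. Parse Content-Length (0 if absent, e.g. for a plain GET).
    size_t body_len = 0;
    size_t pos = req.find("Content-Length:");
    if (pos != std::string::npos)
        body_len = std::strtoul(req.c_str() + pos + 15, nullptr, 10);

    // 3. Keep reading until the complete body has arrived.
    size_t header_end = req.find("\r\n\r\n") + 4;
    while (req.size() < header_end + body_len) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;
        req.append(buf, n);
    }
    return req;
}
```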
I'm experiencing a frustrating behaviour of Windows sockets that I can't find any info on, so I thought I'd try here.
My problem is as follows:
I have a C++ application that serves as a device driver, communicating with a serial device connected through a serial-to-TCP/IP converter.
The serial protocol requires a lot of single-byte messages to be exchanged between the device and my software. I noticed that these small messages are only sent about 3 times after startup, after which they are no longer actually transmitted (checked with Wireshark). All the while, send() keeps returning > 0, indicating that the message has been copied to its send buffer.
I'm using blocking sockets.
I discovered this issue because this particular driver eventually has to drop its connection when the send buffer fills up completely (select() fails because of this after about 5 hours, and it happens much sooner when I reduce the SO_SNDBUF size).
I also noticed that when I call send() with messages of 2 bytes or larger, transmission never fails.
Any input would be very much appreciated, I am out of ideas how to fix this.
This is a rare case where you should set TCP_NODELAY so that the sends are transmitted individually, not coalesced. But I think you have another problem as well: are you sure you're reading everything that's being sent back, and acting on it properly? It sounds like an application protocol problem to me.
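For reference, a minimal sketch of disabling Nagle's algorithm on a Windows socket; it assumes Winsock is already initialized and sock is a connected SOCKET:

```cpp
#include <winsock2.h>
#include <ws2tcpip.h>

// Disable Nagle's algorithm so small writes go out immediately.
bool disable_nagle(SOCKET sock) {
    int flag = 1;  // 1 = send segments as soon as possible, no coalescing
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      reinterpret_cast<const char*>(&flag),
                      sizeof(flag)) == 0;
}
```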
I am developing a viewer application in which the server captures an image, performs some image processing operations, and the result needs to be shown at the client end on an HTML5 canvas. The server I've written is in VC++ and uses http://www.codeproject.com/Articles/371188/A-Cplusplus-Websocket-server-for-realtime-interact.
So far I've implemented the needed functionality; now all I need to do is optimize. The reference was a chat application meant to send strings, so data was being encoded into a 7-bit format, which causes overhead. I need binary transfer capability, so I modified the encoding and framing (the opcode is now 130 for binary messages instead of 129), and I can say the server part is alright. I've observed the outgoing frames, and they follow the protocol. The problem I'm facing is on the client side.
Whenever the client receives an incoming message whose bytes are all within the 0 to 127 range, onMessage() is called and I can successfully decode the message. However, the introduction of even a single byte greater than 127 causes the client to call onClose(). The connection gets closed and I am unable to find the cause. Please help me out.
PS: I'm using Chrome 22.0 and Firefox 17.0.
It looks like your problem is related to how you assemble your frames. Since you have an established connection that terminates just as the onmessage event is about to fire, I assume it is frame related.
What if you inspect Network -> WebSocket -> Frames for your connection in Google Chrome's developer tools? What does it say?
It may be out of scope for you, but I'm one of the developers of the XSockets.NET (C#) framework, which has binary support. If you are interested, there is an example that I happened to publish just recently; it can be found at https://github.com/MagnusThor/XSockets.Binary.Controller.Example
How did you observe the outgoing frame, and what were the header bytes you observed? It sounds like you may not actually be setting the binary opcode successfully, and this is triggering UTF-8 validation in the browser, which fails.
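To make the opcode point concrete, here is a small sketch of how the first two header bytes of an unfragmented, unmasked (server-to-client) frame are built for payloads under 126 bytes; the function is hypothetical, not taken from the CodeProject server:

```cpp
#include <cstdint>
#include <vector>

// Build a tiny server-to-client WebSocket frame (payload < 126 bytes).
// If the first byte on the wire is not 0x82 (130) for binary frames, the
// browser validates the payload as UTF-8 and closes the connection on failure.
std::vector<uint8_t> make_small_frame(const uint8_t* payload, uint8_t len,
                                      bool binary) {
    std::vector<uint8_t> frame;
    frame.push_back(binary ? 0x82 : 0x81);  // FIN=1, opcode 0x2 binary / 0x1 text
    frame.push_back(len);                   // mask bit clear, 7-bit length
    frame.insert(frame.end(), payload, payload + len);
    return frame;
}
```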
I have an application that compresses data and sends it via a socket, and the received data is written out on a remote machine. During recovery, this data is decompressed and retrieved. Compression/decompression is done using zlib. But during decompression I face the following problem randomly:
zlib inflate() fails with error Z_DATA_ERROR for binary files like .xls, .qbw, etc.
The application compresses data in blocks of, say, 1024 bytes in a loop as data is read from the file, and decompresses the same way. From forum posts, I found that one reason for Z_DATA_ERROR is data corruption. For now, to avoid this problem, we have introduced a CRC check comparing the compressed data as sent against what is received.
Any thoughts on why this happens would be really appreciated (it occurs randomly, and for the same file it works on another attempt). Is it because of incorrect handling of zlib inflate() and deflate()?
Note: if needed, I will post the exact code snippet for further analysis!
Thanks...Udhai
You didn't mention whether the socket was TCP or UDP, but based on the blocking and checksumming, I'm going out on a limb and guessing it's UDP.
If you're sending the compressed packets over UDP, they could be received out of order on the other end, or the packets could be lost in transit.
Getting things like out-of-order delivery and lost packets right ends up being a lot of work, all of which is fixed by using TCP: you get a simple pipe that guarantees the data arrives in order and as expected.
Also, I'd make sure that the code on the receiving side is simple, and that it receives into buffers allocated on the heap and not on the stack (I've seen many a bug triggered by this).
Again, this is just an educated guess based on the detail of the question.
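Whatever the transport, inflate() must see exactly the bytes that deflate() produced, block for block. A minimal sketch of one common fix, assuming TCP and a hypothetical 4-byte length prefix per compressed block; recv() returning fewer bytes than requested is a frequent cause of "random" Z_DATA_ERRORs:

```cpp
#include <sys/socket.h>
#include <arpa/inet.h>
#include <cstdint>
#include <vector>

// Read exactly len bytes, looping over short reads.
static bool recv_all(int fd, void* buf, size_t len) {
    char* p = static_cast<char*>(buf);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;       // connection closed or error
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// Receive one length-prefixed compressed block, intact, before inflating.
bool recv_compressed_block(int fd, std::vector<uint8_t>& out) {
    uint32_t netlen = 0;
    if (!recv_all(fd, &netlen, sizeof(netlen))) return false;
    uint32_t len = ntohl(netlen);       // 4-byte big-endian length prefix
    out.resize(len);                    // heap-allocated, as advised above
    return recv_all(fd, out.data(), len);
}
```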
I remade this post because my title choice was horrible, sorry about that. My new post can be found here: "After sending a lot, my send() call causes my program to stall completely. How is this possible?"
Thank you very much everyone. The problem was that the clients are actually bots and they never read from the connections. (Feels foolish)
TCP_NODELAY might help the latency of small packets from sender to receiver, but the description you gave points in a different direction. I can imagine the following:
You are sending more data than the receiver actually consumes. This eventually overflows the sender's buffer (SO_SNDBUF) and causes the server process to appear "stuck" in the send(2) system call. At that point the kernel is waiting for the other end to acknowledge some of the outstanding data, but the receiver isn't expecting any, so it never calls recv(2).
There are probably other explanations, but it's hard to tell without seeing the code.
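One cheap way to observe this condition is to test the socket for writability with a timeout before sending: if the timeout keeps expiring, the send buffer is full because the peer is not reading. A rough sketch, assuming a hypothetical connected TCP socket fd:

```cpp
#include <sys/select.h>
#include <sys/socket.h>
#include <cstdio>

// Returns true once fd is writable; logs a warning on each timeout.
bool wait_writable(int fd, int timeout_sec) {
    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(fd, &wfds);
    struct timeval tv = {timeout_sec, 0};
    int r = select(fd + 1, nullptr, &wfds, nullptr, &tv);
    if (r == 0)
        std::fprintf(stderr, "send buffer full for %ds; peer not reading?\n",
                     timeout_sec);
    return r > 0;                        // writable again: buffer has drained
}
```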
If send() is blocking on a TCP socket, it indicates that the send buffer is full, which in turn indicates that the peer on the other end of the connection isn't reading data fast enough. Maybe that client is completely stuck and not calling recv() often enough.
Nagle's wouldn't cause "disappearing into the kernel", which is why disabling it doesn't help you. Nagle's will just buffer data for a little while, but will eventually send it without any prompting from the user.
There is some other culprit.
Edit for the updated question.
You must make sure that the client is receiving all of the sent data, and that it is receiving it quickly. Have each client write to a log or something to verify.
For example, if a client is waiting for the server to accept its 23-byte update, then it might not be receiving the data. That can cause the server's send buffer to fill up, which would cause degradation and eventual deadlock.
If this is indeed the culprit, the solution would be some form of asynchronous communication, such as Boost's Asio library.
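For illustration, a minimal sketch of what that might look like with Boost.Asio; the send_update() helper is hypothetical, and the buffer must outlive the asynchronous operation (hence the shared_ptr captured by the handler):

```cpp
#include <boost/asio.hpp>
#include <memory>
#include <string>

// Queue an update asynchronously so a slow client cannot stall the server.
void send_update(boost::asio::ip::tcp::socket& socket, std::string update) {
    auto buf = std::make_shared<std::string>(std::move(update));
    boost::asio::async_write(
        socket, boost::asio::buffer(*buf),
        [buf](const boost::system::error_code& ec, std::size_t /*bytes*/) {
            if (ec) { /* client stalled or disconnected: drop it */ }
        });
}
```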