Cloudfront: HTTP/2 compliance issue - amazon-web-services

I've resorted to Stack Overflow because AWS doesn't provide technical support for free-tier accounts.
Someone reported an issue using httpx, the Ruby HTTP client library I maintain: https://gitlab.com/honeyryderchuck/httpx/issues/64
The report came after a recent upgrade to improve HTTP/2 spec compliance in the parser. Although the library now passes h2spec, there seem to be legitimate issues when requesting from CloudFront, due to a part of the spec they don't appear to comply with: when a flow-control window over 2 ** 31 - 1 is advertised, the sender must not allow it and must respond with a flow-control error.
Is this correct?

sbordet's answer is not fully correct.
He is right that the flow-control window can't exceed 2^31-1 bytes and that the initial flow-control window size is 65535 bytes. However, the claim that CloudFront sends a wrong value of 65536 is incorrect, as any endpoint is allowed to modify the default initial window size, as stated in RFC 7540 Section 6.9.2:
Both endpoints can adjust the initial window size for new streams by including a value for SETTINGS_INITIAL_WINDOW_SIZE in the SETTINGS frame that forms part of the connection preface.
Note that this setting applies only to new streams, not to the connection flow-control window. The connection flow-control window can be changed only through WINDOW_UPDATE frames, as the next line of the RFC says:
The connection flow-control window can only be changed using WINDOW_UPDATE frames.
So after CloudFront sets SETTINGS_INITIAL_WINDOW_SIZE to 65536 bytes, the connection flow-control window is still at 65535 bytes, so the subsequent WINDOW_UPDATE of 2147418112 bytes increases it to 2^31-1 bytes (a valid value according to the RFC), not to 2^31 bytes.
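For the record, a quick way to double-check that arithmetic (a minimal C++ sketch; the constants are the ones quoted above and the 2^31-1 limit comes from RFC 7540 Section 6.9.1):

    // Sketch: redo the flow-control bookkeeping described above.
    // SETTINGS_INITIAL_WINDOW_SIZE only affects per-stream windows; the
    // connection window starts at the RFC 7540 default of 65535 and only
    // moves via WINDOW_UPDATE.
    #include <cstdint>
    #include <iostream>

    int main()
    {
        const int64_t kMaxWindow      = (1LL << 31) - 1; // 2^31 - 1 (RFC 7540 6.9.1)
        const int64_t kDefaultConnWin = 65535;           // connection window default
        const int64_t kWindowUpdate   = 2147418112;      // WINDOW_UPDATE sent by CloudFront

        const int64_t connWindow = kDefaultConnWin + kWindowUpdate;
        // 65535 + 2147418112 = 2147483647 = 2^31 - 1: legal, so no FLOW_CONTROL_ERROR.
        std::cout << connWindow << " <= " << kMaxWindow << " : "
                  << (connWindow <= kMaxWindow) << '\n';
        return 0;
    }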

You are correct that the flow control window cannot exceed 2^31-1, as indicated in the specification.
The initial flow control window is 65535, not 65536 as sent by Cloudfront, so the subsequent enlargement of the flow control window by 2147418112 yields 2^31, which is one byte too big for the flow control window.
Your client correctly sends a GOAWAY with error FLOW_CONTROL_ERROR.

Related

usrsctp send buffer does not free itself

We're working with a C++ WebRTC data channels library, and in our test application, after sending a few small packets amounting to about 256 kB in total, the usrsctp_sendv() call returns -1 (with errno set to EWOULDBLOCK/EAGAIN, which means "Resource temporarily unavailable"). We believe this is because we're hitting usrsctp's send buffer limit, which is 256 kB by default. We've tried adding sleep delays between each send call hoping it would clear that buffer, but nothing works.
The receiving side (a JS web page) does indeed receive all the bytes that we've sent up until the error occurs. It's also worth noting that this only happens when we try to send data from the C++ application to the JS side and not the other way around. We tried looking around Mozilla's data channels implementation, but can't seem to draw any conclusions about what the issue could be.
It is hard to answer such a question straight away. I would start by looking at Wireshark traces to see whether your remote side (the JS page) actually acknowledges the data you send (i.e. whether SACK chunks are sent back) and what receiver window value (a_rwnd) is reported in those SACKs. It might be that the issue is not on your side: you may be getting EWOULDBLOCK simply because the sending-side SCTP cannot flush the data from its buffers while it is still awaiting delivery confirmation from the remote end.
Please provide more details about your case and, if possible, sample code for your JS page.

Simulating Keep Alive Signal

I am working on connecting an embedded circuit board to a PC via TCP.
The board contains a chip which, sadly, doesn't generate any interrupt on receiving data. But it does generate an interrupt on receiving a "Keep-Alive" signal.
Currently I have to poll for data.
Instead, I am thinking that I will send data from the PC and then a Keep-Alive signal. Whenever a Keep-Alive is received, I will read the data too.
I do understand that this might generate false alarms, but it's better than continuous polling.
I observed a Keep-Alive packet in Wireshark; it has one byte of data, and it is "00".
Then I tried to send a TCP packet with its data set to "00":
I can see that only the flags section is different.
I've got two questions:
(Broadly) How do I manually send a Keep-Alive signal?
How do I change that flag setting? (The flags in send and sendto are different.)
Update:
I have tried raw sockets, but that didn't help me, or I missed something. I just changed the flag to ACK in the raw socket header.
RFC 1122 section 4.2.3.6 might be worth reading.
It states that keepalive is an optional feature of the TCP implementation. It also states that keepalive signals should be limited to at most one every two hours. So manually emitting one from your application isn't a desired feature in general.
Furthermore, it describes details of the implementation, in particular pointing out the sequence number involved. This is one difference visible in your screenshots which you apparently failed to notice: the real keepalive packet has a very high relative sequence number, which is simply the unsigned representation of -1. To reproduce this with raw sockets, I think you'd have to somehow get your hands on the current TCP sequence number of the existing connection. I haven't worked enough with raw sockets to know the details of how to do this.
The supported way to have the system send keepalives periodically is the SO_KEEPALIVE socket option. But that won't be of much use for emitting such a signal at a specific moment in time, I think.
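For completeness, a minimal sketch (Winsock, untested) of the periodic variant: SO_KEEPALIVE turns keepalives on, and the SIO_KEEPALIVE_VALS ioctl shortens the default timers. The timer values below are made-up examples, and this still won't let you fire a single keepalive at an arbitrary moment:

    // Sketch: enable TCP keepalive on a connected Winsock socket and shorten
    // the timers. The stack, not the application, decides when probes go out.
    #include <winsock2.h>
    #include <mstcpip.h>   // struct tcp_keepalive, SIO_KEEPALIVE_VALS
    #pragma comment(lib, "ws2_32.lib")

    bool enable_keepalive(SOCKET s)
    {
        BOOL on = TRUE;
        if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE,
                       reinterpret_cast<const char*>(&on), sizeof(on)) != 0)
            return false;

        // Example values: first probe after 10 s of idle, then every 1 s.
        // Both fields are in milliseconds.
        tcp_keepalive ka = {};
        ka.onoff             = 1;
        ka.keepalivetime     = 10 * 1000;
        ka.keepaliveinterval = 1 * 1000;

        DWORD bytes = 0;
        return WSAIoctl(s, SIO_KEEPALIVE_VALS, &ka, sizeof(ka),
                        nullptr, 0, &bytes, nullptr, nullptr) == 0;
    }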

Unable to send binary data over WebSockets

I am developing a viewer application in which the server captures an image, performs some image processing operations, and the result needs to be shown at the client end on an HTML5 canvas. The server I've written is in VC++ and uses http://www.codeproject.com/Articles/371188/A-Cplusplus-Websocket-server-for-realtime-interact.
So far I've implemented the needed functionality. Now all I need to do is optimization. The reference was a chat application meant to send strings, so I was encoding data into a 7-bit format, which causes overhead. I need binary data transfer capability, so I modified the encoding and framing (the opcode is now 130 for binary messages instead of 129), and I can say that the server part is alright. I've observed the outgoing frame, and it follows the protocol. I'm facing a problem on the client side.
Whenever the client receives the incoming message, if all the bytes are within limits (0 to 127) it calls onMessage() and I can successfully decode the incoming message. However, even a single character > 127 causes the client to call onClose(). The connection gets closed and I am unable to find the cause. Please help me out.
PS: I'm using chrome 22.0 and Firefox 17.0
It looks like your problem is related to how you assemble your frames. As you have an established connection that terminates when the onmessage event is about to fire, I assume that it is frame related.
What if you study Network -> WebSocket -> Frames for your connection in Google Chrome? What does it say?
It may be out of scope for you, but I'm one of the developers of the XSockets.NET (C#) framework, which has binary support. If you are interested, there is an example I happened to publish just recently; it can be found at https://github.com/MagnusThor/XSockets.Binary.Controller.Example
How did you observe the outgoing frame, and what were the header bytes that you observed? It sounds like you may not actually be setting the binary opcode successfully, and this is triggering UTF-8 validation in the browser, which fails.
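To illustrate what the framing should look like, here is a rough sketch (not based on the CodeProject server's code) of an unmasked, unfragmented server-to-client binary frame per RFC 6455. The first byte has to be 0x82 (FIN set, opcode 0x2 = binary); 0x81 declares the payload to be UTF-8 text, which is exactly what makes the browser run the UTF-8 validation that closes your connection:

    // Sketch: build the header + payload for one binary WebSocket frame.
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> make_binary_frame(const uint8_t* payload, uint64_t len)
    {
        std::vector<uint8_t> frame;
        frame.push_back(0x82);                       // FIN=1, opcode=0x2 (binary)

        if (len <= 125) {
            frame.push_back(static_cast<uint8_t>(len));
        } else if (len <= 0xFFFF) {
            frame.push_back(126);                    // 16-bit extended length follows
            frame.push_back(static_cast<uint8_t>(len >> 8));
            frame.push_back(static_cast<uint8_t>(len & 0xFF));
        } else {
            frame.push_back(127);                    // 64-bit extended length follows
            for (int shift = 56; shift >= 0; shift -= 8)
                frame.push_back(static_cast<uint8_t>((len >> shift) & 0xFF));
        }

        // Server-to-client frames are not masked; append the raw bytes as-is.
        frame.insert(frame.end(), payload, payload + len);
        return frame;
    }

On the client side, setting the WebSocket's binaryType to "arraybuffer" makes the received binary message straightforward to consume in onmessage.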

Increase the TCP receive window for a specific socket

How can I increase the TCP receive window for a specific socket?
I know how to do so for all sockets by setting the registry key TcpWindowSize,
but how do I do that for a specific one?
According to Microsoft's documentation, the way is:
Calling the Windows Sockets function setsockopt, which sets the receive window on a per-socket basis.
But the setsockopt documentation says this about SO_RCVBUF:
Specifies the total per-socket buffer space reserved for receives. This is unrelated to SO_MAX_MSG_SIZE and does not necessarily correspond to the size of the TCP receive window.
So is it possible? How?
Thanks.
SO_MAX_MSG_SIZE is for UDP. Here's what MSDN says:
SO_MAX_MSG_SIZE - Returns the maximum outbound message size for message-oriented sockets supported by the protocol. Has no meaning for stream-oriented sockets.
It's also not settable.
For TCP just use SO_(SND|RCV)BUF.
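As a rough sketch of what that looks like in Winsock (the function name and the read-back step are my own, not from MSDN):

    // Sketch (untested): set the receive buffer for a single socket, ideally
    // before connect() so it can influence the advertised window. This is
    // per-socket and does not touch the system-wide TcpWindowSize key.
    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    int set_receive_buffer(SOCKET s, int requested)
    {
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<const char*>(&requested),
                       sizeof(requested)) != 0)
            return -1;

        // Read back what the stack actually applied; it may differ from the request.
        int actual = 0;
        int len = sizeof(actual);
        if (getsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<char*>(&actual), &len) != 0)
            return -1;
        return actual;
    }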
I am fairly sure that SO_RCVBUF is what you want. The first link says that SO_RCVBUF has the highest priority for determining the TCP window size, over and above anything set on the system. The way I read it, all the second part is saying is that the SO_RCVBUF size does not have to match the system receive window size. In other words, it can be a different size that you set.
You need to be careful tuning this and testing the results. Windows Vista and above have an adaptive window-size auto-tuning feature which specifically tunes the window size to work well both on LANs and on long fat networks such as 3G and high-loss networks. Setting the window size yourself overrides this, so Windows can no longer tune the window size automatically. This may hurt your performance should you ever need to run over a particularly high-latency network such as a cellular network.

timing of reads from serial port on windows

I'm trying to implement a protocol over a serial port on a Windows (XP) machine.
The problem is that message synchronization in the protocol is done via a gap in the messages, i.e., an x-millisecond gap between sent bytes signifies a new message.
Now, I don't know if it is even possible to accurately detect this gap.
I'm using the win32/serport.h API to read in one of the many threads of our server. Data from the serial port gets buffered, so if there is enough (and there will be enough) latency in our software, I will get multiple messages from the port buffer in one sequence of reads.
Is there a way of reading from the serial port so that I can detect gaps between when particular bytes were received?
If you want more control over a Windows serial port, you will have to write your own driver.
The problem I see is that Windows may be executing other tasks or programs (such as virus checking) which will cause timing issues for your application. Your application will not know when it has been swapped out for another application.
If possible, I suggest your program timestamp the end of the last message. When the next message arrives, another timestamp is taken. The difference between timestamps may help in detecting new messages.
I highly suggest changing the protocol so that timing is not a factor.
I've had to do something similar in the past. Although the protocol in question did not use any delimiter bytes, it did have a CRC and a few fixed-value bytes at certain positions, so I could speculatively decode the message to determine whether it was a complete individual message.
It always amazes me when I encounter these protocols that have no context information in them.
Look for CRC fields, length fields, type fields with a corresponding indication of the expected message length, or any other fixed-offset fields with predictable values that could help you determine when you have a single complete message.
Another approach might be to use the CreateFile, ReadFile, and WriteFile API functions. There are settings you can change using the SetCommTimeouts function that allow you to halt the I/O operation when a certain time gap is encountered.
Doing that along with some speculative decoding could be your best bet.
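A minimal sketch of that idea (Win32, untested; the function name and gapMs parameter are placeholders). With only ReadIntervalTimeout set, ReadFile blocks until at least one byte arrives and then completes as soon as the line has been quiet for that many milliseconds between bytes, which maps fairly directly onto the protocol's inter-message gap:

    // Sketch: make ReadFile return when the inter-byte gap exceeds gapMs.
    #include <windows.h>

    bool configure_gap_timeout(HANDLE hPort, DWORD gapMs)
    {
        COMMTIMEOUTS timeouts = {};
        timeouts.ReadIntervalTimeout        = gapMs; // max gap between two received bytes
        timeouts.ReadTotalTimeoutConstant   = 0;     // no overall deadline
        timeouts.ReadTotalTimeoutMultiplier = 0;
        return SetCommTimeouts(hPort, &timeouts) != FALSE;
    }

    // hPort would come from CreateFile("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE, ...);
    // each ReadFile call then hands back whatever arrived before the gap.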
It sounds odd that there is no data format delineating a "message" from the device. Every serial port device I've worked with has had some form of header describing the data it transmitted.
Just throwing this out there, but could you use the Win32 asynchronous ReadFileEx() and WriteFileEx() system calls? They allow you to attach a callback function, and then you might be able to manage a timer within the callback. The timer would only give you a rough estimate, however.
If you need to write your own driver, the Windows Driver Kit has a sample that shows how to write a serial port driver. I can't imagine that you'll be able to override the Windows serial port bus driver (the driver that directly controls the serial port on your Windows machine), but you might be able to write a driver that sits on top of the bus driver.
I thought so. You all grew up with the web; I didn't, though I was present at the birth. Let me guess: the one byte is 1 (SOH) or 2 (STX)? IMHO that is enough. You just need to think outside the box.
Say you receive a message_delimiter followed by 4 (as the length) and then 4 bytes of data. A valid message is not just those 6 bytes:
message_delimiter - 1 byte
4 - length - 1 byte
(4 data bytes) - 4 bytes
A valid message is always bounded by the message_delimiter, so it would look like:
message_delimiter - 1 byte
4 - length - 1 byte
(4 data bytes) - 4 bytes
message_delimiter - 1 byte
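If it helps, here is a rough sketch of parsing that delimiter-bounded layout out of a receive buffer; the delimiter value (SOH), container type, and function name are assumptions for illustration only:

    // Sketch: pull one [delim][len][data...][delim] frame out of a byte buffer.
    #include <cstdint>
    #include <deque>
    #include <vector>

    constexpr uint8_t kDelimiter = 0x01;   // assumption: SOH as message_delimiter

    bool extract_message(std::deque<uint8_t>& buf, std::vector<uint8_t>& msg)
    {
        // Discard noise until the buffer starts at a delimiter.
        while (!buf.empty() && buf.front() != kDelimiter)
            buf.pop_front();

        if (buf.size() < 2)
            return false;                          // need at least delim + length

        const uint8_t len   = buf[1];
        const size_t  total = 1 + 1 + len + 1;     // delim + len + data + closing delim
        if (buf.size() < total)
            return false;                          // frame not complete yet

        if (buf[total - 1] != kDelimiter) {        // malformed frame: resync
            buf.pop_front();
            return false;
        }

        msg.assign(buf.begin() + 2, buf.begin() + 2 + len);
        buf.erase(buf.begin(), buf.begin() + total);
        return true;
    }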