Unable to send binary data over WebSockets - c++

I am developing a viewer application in which the server captures an image, performs some image processing operations, and the result needs to be shown at the client end on an HTML5 canvas. The server I've written is in VC++ and is based on http://www.codeproject.com/Articles/371188/A-Cplusplus-Websocket-server-for-realtime-interact.
So far I've implemented the needed functionality; now all that's left is optimization. The reference was a chat application meant to send strings, so I was encoding data into a 7-bit format, which causes overhead. I need binary data transfer, so I modified the encoding and framing (the opcode is now 130 for binary messages instead of 129), and as far as I can tell the server part is fine: I've observed the outgoing frame and it follows the protocol. The problem is on the client side.
Whenever the client receives an incoming message whose bytes are all within the 0 to 127 range, it fires onmessage and I can successfully decode the payload. However, even a single byte greater than 127 causes the client to fire onclose instead; the connection gets closed and I cannot find the cause. Please help me out.
PS: I'm using Chrome 22.0 and Firefox 17.0.

It looks like your problem is related to how you assemble your frames. Since you have an established connection that terminates right when the onmessage event is about to fire, I assume it is frame related.
What do you see if you inspect Network -> WebSocket -> Frames for your connection in Google Chrome's developer tools? What does it say?
This may be out of scope for you, but I'm one of the developers of the XSockets.NET (C#) framework, which has binary support. If you are interested, there is an example I happened to publish just recently; it can be found at https://github.com/MagnusThor/XSockets.Binary.Controller.Example

How did you observe the outgoing frame, and what header bytes did you see? It sounds like you may not actually be setting the binary opcode successfully, so the browser still treats the frame as text, and its UTF-8 validation fails and closes the connection.
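For comparison, here is a minimal sketch (not taken from the CodeProject server; it assumes an unmasked server-to-client frame with a payload shorter than 64 KB) of what the first bytes of a binary frame should contain:

// Sketch only: build an unmasked server-to-client binary frame (payload < 64 KB).
// 0x82 = FIN bit + opcode 0x2 (binary), i.e. the 130 mentioned in the question.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> makeBinaryFrame(const std::uint8_t* payload, std::size_t len)
{
    std::vector<std::uint8_t> frame;
    frame.push_back(0x82);                                    // FIN + binary opcode
    if (len <= 125) {
        frame.push_back(static_cast<std::uint8_t>(len));      // mask bit 0, 7-bit length
    } else {
        frame.push_back(126);                                 // 16-bit extended length follows
        frame.push_back(static_cast<std::uint8_t>((len >> 8) & 0xFF));
        frame.push_back(static_cast<std::uint8_t>(len & 0xFF));
    }
    frame.insert(frame.end(), payload, payload + len);        // raw bytes, no text encoding
    return frame;
}

On the browser side, also remember that a binary message is delivered as a Blob by default; set the socket's binaryType to "arraybuffer" before messages start arriving if you want to handle it as an ArrayBuffer in onmessage.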

Related

usrsctp send buffer does not free itself

We're working with a C++ WebRTC data channels library, and in our test application, after sending a few small packets that together amount to about 256 kB, the usrsctp_sendv() call returns -1 (with errno set to EWOULDBLOCK/EAGAIN, i.e. "Resource temporarily unavailable"). We believe this is because we're hitting usrsctp's send buffer limit, which is 256 kB by default. We've tried adding several sleep delays between each send call hoping that would clear the buffer, but nothing works.
The receiving side (a JS web page) does indeed receive all the bytes that we've sent up until the point it errors out. It's also worth noting that this only happens when we try to send data from the C++ application to the JS side, and not the other way around. We tried looking through Mozilla's data channels implementation, but can't seem to draw any conclusions about what the issue could be.
It is hard to answer such a question straight away. I would start by looking at Wireshark traces to see whether the remote side (the JS page) actually acknowledges the data you send (i.e. whether SACK chunks are sent back) and what receive window value (a_rwnd) is reported in those SACKs. It is quite possible that the issue is not on your side: you may be getting EWOULDBLOCK simply because the sending SCTP stack cannot flush data from its buffers while it is still awaiting delivery confirmation from the remote end.
Please provide more details about your case, and if possible provide sample code for your JS page.
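If the traces show the data is being SACKed and a_rwnd stays healthy, so the buffer really is draining, one common pattern is to treat EWOULDBLOCK as a transient condition and retry instead of failing. A rough sketch under those assumptions (the socket handle, the sctp_sendv_spa info and the backoff values are placeholders, not from the original code):

// Sketch only: retry usrsctp_sendv() while the local send buffer is full.
#include <usrsctp.h>
#include <cerrno>
#include <chrono>
#include <cstddef>
#include <thread>

bool sendWithRetry(struct socket* sock, const void* data, std::size_t len,
                   struct sctp_sendv_spa* spa, int maxAttempts = 50)
{
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        ssize_t sent = usrsctp_sendv(sock, data, len, nullptr, 0,
                                     spa, sizeof(*spa), SCTP_SENDV_SPA, 0);
        if (sent >= 0)
            return true;                                   // accepted into the send buffer
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return false;                                  // a real error, give up
        // Buffer full: give the stack a moment to receive SACKs and drain, then retry.
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
    return false;
}

If this never succeeds no matter how long you wait, that supports the theory above: the remote side is not acknowledging the data, so the local SCTP stack cannot free its buffers and no amount of local sleeping will help.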

SIEM streaming over TCP, getting multiple messages put into one event

Adjusting question:
SIEM is a management system that takes syslog and other types of log messages and lets an admin search, combine, and report on logs in ways that help them better understand what is going on. I am working with Splunk and sending syslog (CEF) formatted messages to it. When I send two messages to Splunk, they appear in the same event, as seen here:
<1286>Sep 16, 2014 2:07:38 PM dbrLnxRv CEF:0|MyCompany|MyApp|2.0|Malicious|6|FileName eicar.cab dname=www.csm-testcenter.org dst=10.204.64.137 dpt=8080 prot=HTTP src=10.204.82.168 spt=49809 suser="" xAuthenticatedUser="" requestMethod=GET requestClientApplication="" reason=0-1492-EICARFile.Detection_Test.Web.RTSS request=http://www.csm-testcenter.org/download/archives/cab/eicar.cab AnalysisType="" ThreatName=EICARFile ThreatReason=0-1492-EICARFile.Detection_Test.Web.RTSS Category=128 Direction=inbound Manual=1 TicketNumber=0 FileType=unknown FileHash=654ec5ae29c1718501af794822663da40aec51fc FileSize=168 Status=completed SessionId=79421 TransactionId=5
<1286>Sep 16, 2014 2:07:39 PM dbrLnxRv CEF:0|MyCompany|MyApp|2.0|Malicious|6|FileName eicar.cab dname=www.csm-testcenter.org dst=85.214.28.69 dpt=80 prot=HTTP src=10.204.64.137 spt=40378 suser="" xAuthenticatedUser="" requestMethod=GET requestClientApplication="" reason=0-1492-EICARFile.Detection_Test.Web.RTSS request=http://www.csm-testcenter.org/download/archives/cab/eicar.cab AnalysisType="" ThreatName=EICARFile ThreatReason=0-1492-EICARFile.Detection_Test.Web.RTSS Category=128 Direction=inbound Manual=1 TicketNumber=0 FileType=unknown FileHash=654ec5ae29c1718501af794822663da40aec51fc FileSize=168 Status=completed SessionId=79432 TransactionId=3
My question is: how can I make them appear as separate events?
I currently have a CR/LF between each message (verified by looking at the TCP transaction in Wireshark). I tried adding a NUL byte too; it did not make a difference.
I know I am not down to the millisecond in the date/time field; is that an issue?
Is there a message ID I am missing that will force Splunk to separate the messages?
Other ideas?
(When sending via UDP, each event appears in its own message.)
I also tried disabling the Nagle algorithm; still the same issue.
I created a custom C++ app to send SIEM messages from my data source to Splunk. If I send six SIEM messages over a socket at one time, with each message separated by a CR/LF (I also tried adding a NUL between the messages), Splunk puts them all into one single event. What should I send to make the messages end up in separate events? I've looked everywhere for a spec on the SIEM protocol and have not found any documents describing the actual wire format.
TCP is a 'stream' protocol, not a message-oriented one. It does not maintain message boundaries: what one side sends in a single call is not guaranteed to be read in the same chunks at the other end. It is up to the applications above TCP to interpret the bytes and form 'messages'.
UDP, on the other hand, maintains message boundaries: one sendto of X bytes translates to one recvfrom of X bytes, though UDP does not guarantee that the message will reach the receiver at all.
That is exactly what you are witnessing: multiple sends coalesced into a single receive over TCP, and one datagram per event over UDP.
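To make that concrete, the sending application has to put an explicit delimiter after every event and let the receiver split the stream on it. A minimal POSIX-style sketch (sendEvent and fd are placeholder names, not taken from the original application):

// Sketch only: terminate every CEF event with CR/LF so the receiver
// (Splunk, here) can split the TCP byte stream back into individual events.
#include <string>
#include <sys/socket.h>
#include <sys/types.h>

bool sendEvent(int fd, const std::string& cefEvent)
{
    std::string framed = cefEvent + "\r\n";          // one event per line
    const char* data = framed.data();
    size_t remaining = framed.size();
    while (remaining > 0) {                          // send() may accept fewer bytes than asked for
        ssize_t sent = send(fd, data, remaining, 0);
        if (sent <= 0)
            return false;
        data += sent;
        remaining -= static_cast<size_t>(sent);
    }
    return true;
}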
Got it working
The protocol uses a basic \r\n to terminate each message, which I had tried in the past. The real trick lies in the Splunk configuration: you need to create a config file called props.conf and include the following line:
SHOULD_LINEMERGE=false
Then everything works fine.
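For reference, the full props.conf stanza ends up looking something like the following, assuming the TCP input is assigned a sourcetype named cef_tcp (an example name, not from the original setup); the LINE_BREAKER shown here is just Splunk's default:

[cef_tcp]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

With line merging off, every CR/LF-terminated line becomes its own event.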

Winsock send() issue with single byte transmissions

I'm experiencing a frustrating behaviour of Windows sockets that I can't find any info on, so I thought I'd try here.
My problem is as follows:
I have a C++ application that serves as a device driver, communicating with a serial device connected through a serial-to-TCP/IP converter.
The serial protocol requires a lot of single-byte messages to be exchanged between the device and my software. I noticed that these small messages are only actually transmitted about three times after startup, after which they no longer appear on the wire (checked with Wireshark). All the while, send() keeps returning > 0, indicating that the message has been copied to the socket's send buffer.
I'm using blocking sockets.
I discovered this issue because this particular driver eventually has to drop its connection when the send buffer fills up completely (select() fails because of this after about 5 hours, but it happens much sooner when I reduce the SO_SNDBUF size).
I checked, and when I call send() with messages of 2 bytes or larger, transmission never fails.
Any input would be very much appreciated, I am out of ideas how to fix this.
This is a rare case when you should set TCP_NODELAY so that the sends are written individually, not coalesced. But I think you have another problem as well. Are you sure you're reading everything that's being sent back? And acting on it properly? It sounds like an application protocol problem to me.
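For reference, disabling Nagle is a single setsockopt call on the connected socket. A minimal Winsock sketch (the wrapper name is just illustrative):

// Disable Nagle's algorithm so small writes are transmitted immediately
// instead of being held back and coalesced with later data.
#include <winsock2.h>

bool disableNagle(SOCKET sock)
{
    BOOL flag = TRUE;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
                      reinterpret_cast<const char*>(&flag), sizeof(flag)) == 0;
}

Note that this only addresses the coalescing; if the peer (or the converter) stops acknowledging data, the send buffer will still fill up regardless of this option.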

Regarding writing with TCP/IP in Symbian

void CSocket::WriteSocket()
{
    TBuf8<2> KData (_L8("60"));
    //RBuf8 KData;
    RBuf8 aQuery;
    aQuery.CleanupClosePushL();
    aQuery.CreateL(100);
    // <?xml version="1.0"?><AM><P/><A><CE/></A></AM>
    _LIT8(KData1,"61");
    //_LIT8(KData2,"20");
    _LIT8(KData3,"A");
    _LIT8(KData4,"<?xml version=\"1.0\"?><AM><P/><A><CE/></A></AM>");
    TBuf8<100> buff,buff1;
    buff.Copy(_L("<?xml version=\"1.0\"?><AM><P/><A><CE/></A></AM>"));
    TInt len=buff.Length();
    buff1.AppendNum(len);
    aQuery.Append(KData1);
    aQuery.Append(buff1);
    // aQuery.Append(KData2);
    aQuery.Append(KData3);
    aQuery.Append(buff);
    //iSocket.Send(KData,KExpeditedDataOpt,iStatus);
    iSocket.Write((TDesC8)aQuery,iStatus);
    User::WaitForRequest(iStatus);
}
I am using this code on Symbian to communicate with a server written in Java.
The issue is that the data is not reaching the server, even though it shows that the device has successfully connected. What am I doing wrong in this code? Is TDes8 compatible with plain text in Java?
TDes8 is just bytes, which is fine if the Java end is trying to read ASCII, but not so fine if the Java end is expecting 16-bit Unicode (which is what Java's char is). But if you're reading the socket at the other end, then even if it wasn't compatible you'd still see some data, just not what you were expecting. And the protocol by which you're communicating with the server should specify the charset, regardless of what language the server is implemented in.
Otherwise:
Does it work on the emulator?
Have you checked iStatus for errors? Normally if a socket connects you can write to it, but you never know.
Is the server reporting that the socket is connected? It's possible you've connected to the wrong host or the wrong port.
If the server did read some data, but then failed for some reason, would you know? I guess I'm asking whether you're debugging the other end too. If not then it's possible your data isn't in the right format and is being ignored at the other end. I think you're sending 6146A<?xml version="1.0"?><AM><P/><A><CE/></A></AM>. Some tracing and/or packet sniffing will tell you whether that's true.
A variable starting with "a" is usually a parameter: is this your real code? If not, then the thing about the malformed data applies, and your caller might be giving you the wrong thing.
You might want to PopAndDestroy aQuery before return, although that doesn't affect this issue.
Your function should be called WriteSocketL, since it can leave.
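Putting a few of those points together, the function might end up looking roughly like this (a sketch only, keeping the same protocol bytes; not tested on a device):

// Sketch: a leaving function name, the RBuf8 popped and destroyed before
// returning, and the unused locals removed. Protocol bytes are unchanged.
void CSocket::WriteSocketL()
{
    _LIT8(KXml, "<?xml version=\"1.0\"?><AM><P/><A><CE/></A></AM>");
    RBuf8 query;
    query.CleanupClosePushL();
    query.CreateL(100);
    query.Append(_L8("61"));           // message type
    query.AppendNum(KXml().Length());  // payload length, "46" for this XML
    query.Append(_L8("A"));
    query.Append(KXml);
    iSocket.Write(query, iStatus);     // an RBuf8 is already a TDesC8, no cast needed
    User::WaitForRequest(iStatus);
    CleanupStack::PopAndDestroy(&query);
}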
Cross-posting my reply from the duplicate question:
Your mobile network operator could be blocking any non-HTTP traffic.
Your server could need to receive more data before returning it all.
I'm also particularly concerned about your use of Java characters/strings, considering I would expect a low-level Java socket on the server to put incoming network data into a byte[], not a String. If your server is using something like a call to a readLine() method, you may need to add a carriage return character to the data your client sends.

"Specified network name is no longer available" in Httplistener

I have built a simple web service that uses HttpListener to receive and send requests. Occasionally, the service fails with "Specified network name is no longer available". The exception appears to be thrown when I write to the output stream of the HttpListenerResponse.
Here is the error:
ListenerCallback() Error: The specified network name is no longer available at System.Net.HttpResponseStream.Write(Byte[] buffer, Int32 offset, Int32 size)
and here is the guilty portion of the code. responseString is the data being sent back to the client:
buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
response.ContentLength64 = buffer.Length;
output = response.OutputStream;
output.Write(buffer, 0, buffer.Length);
It doesn't always seem to be a huge buffer; two examples are 3,816 bytes and 142,619 bytes, and those errors were thrown about 30 seconds apart. I would not think that my single client application could be overloading HttpListener; the client does occasionally send/receive data in bursts, with several exchanges happening one after another.
Google searches mostly show this as a common IT problem that appears when there are network issues, and most of the help is directed toward sysadmins diagnosing a problem with an app rather than developers tracking down a bug. My app has been tested on different machines, networks, etc., and I don't think it's simply a network configuration problem.
What may be the cause of this problem?
I'm getting this too, when a ContentLength64 is specified and KeepAlive is false. It seems as though the client is inspecting the Content-Length header (which, by all possible accounts, is set correctly, since I get an exception with any other value) and then saying "Whelp I'm done KTHXBYE" and closing the connection a little bit before the underlying HttpListenerResponse stream was expecting it to. For now, I'm just catching the exception and moving on.
I've only gotten this particular exception once so far when using HttpListener.
It occurred when I resumed execution after my application had been sitting at a breakpoint for a while.
Perhaps there is some sort of internal timeout involved? Your application sends data in bursts, which means it's probably completely inactive a lot of the time. Did the exception occur immediately after a period of inactivity?
Same problem here, but other threads suggest ignoring the Exception.
C# problem with HttpListener
Maybe that's not the right thing to do.
In my case, I find that whenever the client closes the web page before it has fully loaded, I get that exception. What I do is just add a try/catch block and print something when the exception happens; in other words, I just ignore the exception.
The problem occurs when you're trying to respond to an invalid request. Take a look at this. I found out that the only way to solve this problem is:
listener = new HttpListener();
listener.IgnoreWriteExceptions = true;
Just set IgnoreWriteExceptions to true after instantiating your listener and the errors are gone.
Update:
For a deeper explanation: HTTP is built on top of TCP, which works with streams that each peer writes data into. Either peer can close the connection. When the client sends a request to your HttpListener there is a TCP handshake, then the server processes the data and responds to the client by writing into the connection's stream. If you try to write into a stream that the remote peer has already closed, the exception "Specified network name is no longer available" occurs.