GDB - establish communication between gdb and OCD Daemon

I am writing an OCD daemon for an architecture that is not yet supported by the existing ones. For now I am trying to establish remote communication between GDB <-> My_OCD_Daemon, and this is where the problems start. Right after I request a connection to my daemon with "target remote tcp:IP:PORT", gdb starts sending a bunch of requests; here are a few of them:
Sending packet: $Hg0#df...Ack
Packet received:
Sending packet: $qxtn#cb...Ack
Packet received: XOCD
...
Sending packet: $qxtocdversion#99...Ack
Packet received: 6000
Sending packet: $p2b0#34...Ack
Reply contains invalid hex digit 79
Fetching next packet
...
For most of them it is enough if I reply with just '+', which acknowledges successful reception. However, there are commands like $p2b0#34 (a 'p' packet, i.e. a request to read register 0x2b0) that expect a sane value back.
So, is there a way to skip this never-ending chain of requests from GDB and make it wait for user input?
How should such an init/handshake procedure look?
Thanks.

Okay, so it looks like we cannot "bypass" or "skip" this initial stage of gdb. It is used to configure the gdb session and should be handled with care. Passing odd values to gdb will result in odd behaviour during the debugging session.
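To answer the "how should the handshake look" part concretely: every Remote Serial Protocol packet has the form $<payload>#<checksum>, where the checksum is the modulo-256 sum of the payload bytes, sent as two lowercase hex digits. The stub acks each packet with '+' and then sends a framed reply; a query it does not support should get the empty packet "$#00" rather than silence. A minimal sketch of the framing (send_all is a hypothetical stand-in for the daemon's socket write):

#include <cstdio>
#include <string>

// Build one RSP frame: "$<payload>#<two lowercase hex digits>".
static std::string rsp_frame(const std::string& payload) {
    unsigned sum = 0;
    for (unsigned char c : payload)
        sum = (sum + c) & 0xffu;            // checksum = byte sum mod 256
    char tail[4];
    std::snprintf(tail, sizeof tail, "#%02x", sum);
    return "$" + payload + tail;
}

// Example replies to the packets shown above:
// send_all(fd, "+");                      // ack reception of $p2b0#34
// send_all(fd, rsp_frame(""));            // "$#00": packet not supported
// send_all(fd, rsp_frame("78563412"));    // or a register value, hex bytes in target order

The "Reply contains invalid hex digit 79" error above likely means the daemon answered the 'p' (read register) request with something that is not pure hex; gdb appears to print the offending character code, and decimal 79 is 'O'.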

Related

QTcpSocket: Setting LowDelayOption seems to have no effect?

I have a Qt GUI application that uses QTcpSocket to send and receive TCP packets to and from a server. So far I've had success making the TCP socket connections (there are 2 separate socket connections because there are 2 different message sets; same IP address for both, but 2 different port numbers) and sending and receiving packets. Most of the messages that my application sends are kicked off via a push-button on the GUI's main window (one message is sent periodically using a QTimer that expires every 1667ms).
The server has a FIFO (128 messages deep) and sends a specific message to my application that communicates when the FIFO is 1/2 full, 3/4 full, and full. It's tedious to test this functionality by just mashing the send button on the GUI so I had the idea of loading a .csv file that could be pre-filled (the message has several different configurable parameters) with what I want to send. Each line gets read and turned into a message and sent on the TCP socket.
From my main window I open up a QFileDialog when a push-button on the GUI is clicked. Then when a .csv file is navigated to and selected the function reads the .csv file one line at a time, pulls out all the individual parameters, fills the message with the parameters, and then sends it out to the socket. Each message is 28 bytes. It repeats this until there are no lines left in the .csv file.
What I am noticing on Wireshark is that instead of sending a bunch of individual TCP packets they are all being put together and sent as one large TCP packet.
When I first tested this out I did not know about the LowDelayOption, so when I found the information about it in the documentation for QAbstractSocket I thought "Aha! That must be it! The solution to my problem!" But when I added it to my code it did not seem to have any effect at all; the data is still being sent as one large TCP packet. For each socket, I am calling setSocketOption to set the LowDelayOption to 1 in the slot function that receives the connected() signal from the socket. I thought maybe the setSocketOption call wasn't working, so I checked by calling socketOption to get the value of the LowDelayOption, and it is 1.
Is there something else I need to be doing? Am I doing something wrong?
Thanks for your time and your help. If it matters I am developing this on Windows and I am using Qt 5.9.1
... send and receive TCP packets to and from a server.
From this I am getting the vibe that your application relies on a certain amount of data - 'a packet' - being received in a single receive call.
You can't really rely on that. Data you send over TCP can be fragmented on the way. Also, at the receiving end, the TCP implementation may put multiple packets received from the network into the receiving socket's buffer before you have read the first one, and you have no way of telling what kind of fragments they were originally sent in.
So you should just treat TCP as a pipe through which bytes of data flow with some unknown and potentially variable delay. That variable delay causes data to be received in bigger or smaller chunks at random.
If you want to have a packet structure, you should add a packet header containing at least the packet length to the data you transmit.
I hope this helps.
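A sketch of that framing idea in Qt (sendMessage and tryReadMessage are illustrative names, not Qt API): prefix each message with a 4-byte length, and on the receiving side consume nothing until a complete message has arrived.

#include <QByteArray>
#include <QDataStream>
#include <QTcpSocket>

// Write one length-prefixed message: a big-endian quint32 byte count,
// then the payload. The bytes may still be coalesced on the wire, but
// that no longer matters because the reader reassembles by length.
void sendMessage(QTcpSocket* sock, const QByteArray& payload) {
    QByteArray frame;
    QDataStream out(&frame, QIODevice::WriteOnly);
    out << quint32(payload.size());
    frame.append(payload);
    sock->write(frame);
}

// Call from a slot connected to readyRead(). Returns true once a whole
// message was extracted into 'payload', false if more bytes are needed.
bool tryReadMessage(QTcpSocket* sock, QByteArray& payload) {
    if (sock->bytesAvailable() < qint64(sizeof(quint32)))
        return false;
    QDataStream header(sock->peek(sizeof(quint32)));   // look, don't consume
    quint32 len = 0;
    header >> len;
    if (sock->bytesAvailable() < qint64(sizeof(quint32)) + len)
        return false;                                  // frame not complete yet
    sock->read(sizeof(quint32));                       // discard the header
    payload = sock->read(len);                         // complete by the check above
    return true;
}

Since a single readyRead() can deliver several frames, the slot should loop on tryReadMessage until it returns false.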
From QTcpSocket documentation:
TCP (Transmission Control Protocol) is a reliable, stream-oriented, connection-oriented transport protocol. It is especially well suited for continuous transmission of data.
Stream-oriented means that there is nothing like the datagrams of UDP sockets.
There is only stream of data, and you never know in what parts it will be sent.
The TCP protocol gives you only reliability; you have to provide message extraction on your own, e.g. send the message length before each message, or use QDataStream (check the Fortune server and Fortune client examples).
LowDelayOption from QAbstractSocket::SocketOption
Try to optimize the socket for low latency. For a QTcpSocket this would set the TCP_NODELAY option and disable Nagle's algorithm. Set this to 1 to enable.
It is the equivalent of setsockopt with the TCP_NODELAY option.
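For reference, a sketch of that equivalence using the plain sockets API (error handling omitted; note that socketDescriptor() is only valid once the socket is connected):

#include <QTcpSocket>
#ifdef Q_OS_WIN
#  include <winsock2.h>
#else
#  include <netinet/in.h>
#  include <netinet/tcp.h>
#  include <sys/socket.h>
#endif

// Does the same as sock->setSocketOption(QAbstractSocket::LowDelayOption, 1):
// disables Nagle's algorithm on the native descriptor.
void enableNoDelay(QTcpSocket* sock) {
    int flag = 1;
    setsockopt(sock->socketDescriptor(), IPPROTO_TCP, TCP_NODELAY,
               reinterpret_cast<const char*>(&flag), sizeof flag);
}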
First thing is:
The TCP_NODELAY option is specific to TCP/IP service providers.
And it doesn't work for me either :)
MSDN says they do not recommend disabling Nagle's algorithm:
It is highly recommended that TCP/IP service providers enable the Nagle Algorithm by default, and for the vast majority of application protocols the Nagle Algorithm can deliver significant performance enhancements. However, for some applications this algorithm can impede performance, and TCP_NODELAY can be used to turn it off. These are applications where many small messages are sent, and the time delays between the messages are maintained. Application writers should not set TCP_NODELAY unless the impact of doing so is well-understood and desired because setting TCP_NODELAY can have a significant negative impact on network and application performance.
The question is: Do you really need to send your messages as fast as possible?
If yes, consider using QUdpSocket. Maybe tell us more about the messages that you are sending.

C++/Qt: QTcpSocket won't write after reading

I am creating a network client application that sends requests to a server using a QTcpSocket and expects responses in return. No higher protocol involved (HTTP, etc.), they just exchange somewhat simple custom strings.
In order to test, I have created a TCP server in Python that listens on a socket and logs the strings it receives and those it sends back.
I can send the first request OK and get the expected response. However, when I send the second request, it does not seem to get written to the network.
I have attached debug slots to the QTcpSocket's notification signals, such as bytesWritten(...), connected(), error(), stateChanged(...), etc. and I see the connection being established, the first request sent, the first response processed, the number of bytes written - it all adds up...
Only the second request never seems to get sent :-(
After attempting to send it, the socket sends an error(RemoteHostClosedError) signal followed by ClosingState and UnconnectedState state change signals.
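For reference, that instrumentation might look something like this (a sketch; the slots here just log, and the cast is needed because error() is overloaded in Qt 5):

#include <QDebug>
#include <QTcpSocket>

// Attach logging lambdas to the socket's notification signals.
void attachDebugSlots(QTcpSocket* sock) {
    QObject::connect(sock, &QTcpSocket::connected,
                     [] { qDebug() << "connected"; });
    QObject::connect(sock, &QTcpSocket::bytesWritten,
                     [](qint64 n) { qDebug() << "bytesWritten:" << n; });
    QObject::connect(sock, &QTcpSocket::stateChanged,
                     [](QAbstractSocket::SocketState s) { qDebug() << "state:" << s; });
    QObject::connect(sock,
                     static_cast<void (QAbstractSocket::*)(QAbstractSocket::SocketError)>(&QAbstractSocket::error),
                     [](QAbstractSocket::SocketError e) { qDebug() << "error:" << e; });
}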
Before I go any deeper into this, a couple of (probably really basic) questions:
do I need to "clear" the underlying socket in any way after reading?
is it possible / probable that not reading all the data the server has sent me prevents me from writing?
why does the server close the connection? Does it always do that so quickly, or could that be a sign that something is not right? I tried setting the LowDelay and KeepAlive socket options, but that didn't change anything. I've also checked the socket's state() and isValid() and they're good - although the latter also returns true when unconnected...
In an earlier version of the application, I closed and re-opened the connection before sending a request. This worked OK. I would prefer keeping the connection open though. Is that not a reasonable approach? What is the 'canonical' way to implement TCP network communication? Just read/write, or re-open every time?
Does the way I read from the socket have any impact on how I can write to it? Most sample code uses readAll(...) to get all available data; I read piece by piece as I need it, and << to a QTextStream when writing...
Could this possibly be a bug in the Qt event loop ? I have observed that the output in the Qt Creator console created with QDebug() << ... almost always gets cut short, i.e. just stops. Sometimes some more output is printed when I shut down the application.
This is with the latest Qt 5.4.1 on Mac OS X 10.8, but the issue also occurs on Windows 7.
Update after the first answer and comments:
The test server is dead simple and was taken from the official Python SocketServer.TCPServer Example:
import SocketServer

class MyTCPHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline().strip()
        print "RX [%s]: %s" % (self.client_address[0], request)
        response = self.processRequest(request)
        print "TX [%s]: %s" % (self.client_address[0], response)
        self.wfile.write(response)

    def processRequest(self, message):
        if message == 'request type 01':
            return 'response type 01'
        elif message == 'request type 02':
            return 'response type 02'

if __name__ == "__main__":
    server = SocketServer.TCPServer(('localhost', 12345), MyTCPHandler)
    server.serve_forever()
The output I get is
RX [127.0.0.1]: request type 01
TX [127.0.0.1]: response type 01
Also, nothing happens when I re-send any message after this - which is not surprising as the socket was closed. Guess I'll have to figure out why it is closed...
Next update:
I've captured the network traffic using Wireshark and while all the network stuff doesn't really tell me a lot, I do see the first request and the response. Right after the client [ACK]nowledges the response, the server sends a Connection finish (FIN). I don't see the second request anywhere.
Last update:
I have posted a follow-up question at Python: SocketServer closes TCP connection unexpectedly.
Only the second request never seems to get sent :-(
I highly recommend running a program like Wireshark and seeing what packets are actually getting sent and received across the network. (As it is, you can't know for sure whether the bug is on the client side or in the server, and that is the first thing you need to figure out.)
do I need to "clear" the underlying socket in any way after reading?
No.
is it possible / probable that not reading all the data the server has sent me prevents me from writing?
No.
why does the server close the connection?
It's impossible to say without looking at the server's code.
Does it always do that so quickly or could that be a sign that something is not right?
Again, this would depend on how the server was written.
This worked ok. I would prefer keeping the connection open though. Is that not a reasonable approach?
Keeping the connection open is definitely a reasonable approach.
What is the 'canonical' way to implement TCP network communication? Just read/write or re-open every time?
Neither one is canonical; it depends on what you are attempting to accomplish.
Does the way I read from the socket have any impact on how I can write to it?
No.
Could this possibly be a bug in the Qt event loop ?
That's extremely unlikely. The Qt code has been used for years by tens of thousands of programs, so any bug that serious would almost certainly have been found and fixed long ago. It's much more likely that either there is a bug in your client, or a bug in your server, or a mismatch between how you expect some API call to behave and how it actually behaves.

C++ Reading UDP packets [duplicate]

I have a Java app on Linux which opens a UDP socket and waits for messages.
After a couple of hours under heavy load, there is packet loss, i.e. the packets are received by the kernel but not by my app (we see the lost packets in a sniffer, we see UDP packets lost in netstat, and we don't see those packets in our app logs).
We tried enlarging the socket buffers, but this didn't help - we started losing packets later than before, but that's it.
For debugging, I want to know how full the OS UDP buffer is at any given moment. I googled, but didn't find anything. Can you help me?
P.S. Guys, I'm aware that UDP is unreliable. However - my computer receives all UDP messages, while my app is unable to consume some of them. I want to optimize my app to the max, that's the reason for the question. Thanks.
UDP is a perfectly viable protocol. It is the same old case of the right tool for the right job!
If you have a program that waits for UDP datagrams and then goes off to process them before returning to wait for another, then your elapsed processing time needs to always be shorter than the worst-case interval between arriving datagrams. If it is not, the UDP socket receive queue will begin to fill.
This can be tolerated for short bursts. The queue does exactly what it is supposed to do – queue datagrams until you are ready. But if the average arrival rate regularly causes a backlog in the queue, it is time to redesign your program. There are two main choices here: reduce the elapsed processing time via crafty programming techniques, and/or multi-thread your program. Load balancing across multiple instances of your program may also be employed.
As mentioned, on Linux you can examine the proc filesystem to get status about what UDP is up to. For example, if I cat the /proc/net/udp node, I get something like this:
$ cat /proc/net/udp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode ref pointer drops
40: 00000000:0202 00000000:0000 07 00000000:00000000 00:00000000 00000000 0 0 3466 2 ffff88013abc8340 0
67: 00000000:231D 00000000:0000 07 00000000:0001E4C8 00:00000000 00000000 1006 0 16940862 2 ffff88013abc9040 2237
122: 00000000:30D4 00000000:0000 07 00000000:00000000 00:00000000 00000000 1006 0 912865 2 ffff88013abc8d00 0
From this, I can see that a socket owned by user id 1006 is listening on port 0x231D (8989), and that the receive queue is at about 128KB. As 128KB is the max size on my system, this tells me my program is woefully weak at keeping up with the arriving datagrams. There have been 2237 drops so far, meaning the UDP layer cannot put any more datagrams into the socket queue, and must drop them.
You could watch your program's behaviour over time e.g. using:
watch -d 'cat /proc/net/udp|grep 00000000:231D'
Note also that the netstat command does about the same thing: netstat -c --udp -an
My solution for my weenie program will be to multi-thread.
Cheers!
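To automate that check, here is a rough sketch (illustrative, not hardened code) that pulls the rx_queue byte count and the drop counter for a given local port out of /proc/net/udp, using the column layout shown in the listing above:

#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    const unsigned port = 8989;             // 0x231D, the port from the listing
    std::ifstream udp("/proc/net/udp");
    std::string line;
    std::getline(udp, line);                // skip the header row
    while (std::getline(udp, line)) {
        std::istringstream row(line);
        std::string sl, local, rem, st, queues, field, drops;
        row >> sl >> local >> rem >> st >> queues;
        // local_address is hex "IP:PORT"; tx_queue:rx_queue are hex too
        unsigned p = std::stoul(local.substr(local.find(':') + 1), nullptr, 16);
        if (p != port) continue;
        unsigned long rx = std::stoul(queues.substr(queues.find(':') + 1), nullptr, 16);
        while (row >> field) drops = field; // drops is the last column
        std::cout << "rx_queue=" << rx << " bytes, drops=" << drops << "\n";
    }
}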
Linux provides the files /proc/net/udp and /proc/net/udp6, which lists all open UDP sockets (for IPv4 and IPv6, respectively). In both of them, the columns tx_queue and rx_queue show the outgoing and incoming queues in bytes.
If everything is working as expected, you usually will not see any value different from zero in those two columns: as soon as your application generates packets they are sent through the network, and as soon as those packets arrive from the network your application will wake up and receive them (the recv call immediately returns). You may see the rx_queue go up if your application has the socket open but is not invoking recv to receive the data, or if it is not processing such data fast enough.
rx_queue will tell you the queue length at any given instant, but it will not tell you how full the queue has been, i.e. the highwater mark. There is no way to constantly monitor this value, and no way to get it programmatically (see How do I get amount of queued data for UDP socket?).
The only way I can imagine monitoring the queue length is to move the queue into your own program. In other words, start two threads -- one is reading the socket as fast as it can and dumping the datagrams into your queue; and the other one is your program pulling from this queue and processing the packets. This of course assumes that you can assure each thread is on a separate CPU. Now you can monitor the length of your own queue and keep track of the highwater mark.
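A minimal sketch of that arrangement (names and port are illustrative): the reader thread drains the socket into an application-level queue, the worker consumes it, and the queue length plus its high-water mark are now yours to observe.

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::condition_variable cv;
std::deque<std::vector<char>> q;            // our own receive queue
std::atomic<size_t> highwater{0};           // deepest the queue has been

void reader(int fd) {                       // drain the socket as fast as possible
    char buf[65536];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, 0);
        if (n < 0) break;
        std::lock_guard<std::mutex> lk(m);
        q.emplace_back(buf, buf + n);
        if (q.size() > highwater) highwater = q.size();
        cv.notify_one();
    }
}

void worker() {                             // process at its own pace
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !q.empty(); });
        std::vector<char> datagram = std::move(q.front());
        q.pop_front();
        lk.unlock();
        // ... slow per-datagram work happens here, outside the lock ...
    }
}

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8989);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    std::thread r(reader, fd), w(worker);
    r.join(); w.join();
    close(fd);
}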
The process is simple:
If desired, pause the application process.
Open the UDP socket. You can snag it from the running process using /proc/<PID>/fd if necessary. Or you can add this code to the application itself and send it a signal -- it will already have the socket open, of course.
Call recvmsg in a tight loop as quickly as possible.
Count how many packets/bytes you got.
This will discard any datagrams currently buffered, but if that breaks your application, your application was already broken.
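Sketched out (using recv with MSG_DONTWAIT rather than recvmsg, for brevity), the measuring loop might look like this:

#include <cstdio>
#include <sys/socket.h>

// Drain whatever is currently queued on the UDP socket and count it.
// MSG_DONTWAIT makes recv fail with EWOULDBLOCK once the buffer is empty.
void drain_and_count(int fd) {
    char buf[65536];
    long packets = 0, bytes = 0;
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof buf, MSG_DONTWAIT);
        if (n < 0) break;                   // queue is empty (or a real error)
        ++packets;
        bytes += n;
    }
    std::printf("drained %ld packets, %ld bytes\n", packets, bytes);
}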

Forced server-side socket close without SO_LINGER > 0 can lose data, right?

I'm writing a cross-platform client application that uses sockets, written in C++. I'm having problems where the server is doing a hard close on the socket when it's done sending me info.
I've been reading other posts on this topic, and I'm not so much interested in the rights or wrongs of this approach, but it seems the server is either explicitly setting SO_LINGER=0, or that's the default behavior on that system (not sure, it's a Linux box).
I can see (in Wireshark) that the data sent to me is followed within milliseconds by an RST, indicating a hard close by the server. I personally don't agree with this approach, as it should be up to the client to shut down the socket.
The server team are saying there's nothing wrong with that approach (doing a hard close rather than a shutdown); it's typical on servers, to avoid accumulating TIME_WAIT sockets. On Windows my select() returns indicating there's something to read (while I haven't read any of this "in transit" data yet).
However, because of the quick arrival of the RST, on Windows recv() returns -1 and I'm seeing a 10054 for the error code (connection reset by peer). This wouldn't be too bad if I could at least get the data that was sent, but it seems that once my client's socket stack sees the RST any unread bytes are no longer made available to me.
On Linux (client), there's no problem. It seems the TCP stack is behaving slightly differently, in that I can read the outstanding bytes before the RST is honoured. I'm having trouble convincing the server guys they have a bug, given that it works for a Linux client.
First off, am I correct? Is this a server-side issue? I can't see that the client end is doing anything wrong, so it must be right?
It seems the server team are adamant that they want to perform the close, and they don't want to have TIME_WAITs, so I was going to push for them to add an SO_LINGER of, say, 2 seconds. Does that sound like it will solve my problem? From what I understand, this will stop the server from sending out an RST so soon after sending data, and should give me a chance to read the outstanding bytes.
Found a definitive answer to my own question:
"...Upon reception of RST segment, the receiving side will immediately abort the connection. This statement has more implications than just meaning that you will not be able to receive or send any more data to/from this connection. It also implies that any unread data still in the TCP reception buffer will be lost..." It cites the book "TCP/IP Internetworking Volume II". I don't have that book, so I can only take his word for it. Doesn't seems to discard data on Linux, only Windows...
Olivier Langlois's blog
The side-effect of fiddling with SO_LINGER to force a reset is that all pending data is lost. The fact that you don't receive it is all the proof you need that the server team is wrong to do this.
RFC 793 cited below says 'this command [ABORT] causes all pending SENDs and RECEIVEs to be aborted, ... and a special RESET message to be sent to the TCP on the other side of the connection.' See also W.R. Stevens, TCP/IP Illustrated, Vol. 1, p. 287: 'Aborting a connection provides two features to the application: (1) any queued data is thrown away and the reset is sent immediately, and (2) the receiver of the RST can tell that the other end did an abort instead of a normal close'. There is similar wording, along with an extract from the BSD code that implements it, in Vol. 2.
The TIME_WAIT state only occurs on a socket which sends a FIN before it has received one: see RFC 793. So the server should be waiting for a FIN from the client, with a suitable timeout, rather than resetting. This will also permit the client to do connection pooling.
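For reference, the two close behaviours under discussion look like this with the plain sockets API (a sketch; error handling omitted):

#include <sys/socket.h>
#include <unistd.h>

void hard_close(int fd) {
    // l_onoff=1, l_linger=0: close() aborts the connection with an RST and
    // throws away unacknowledged data - the behaviour the server exhibits.
    linger lin{1, 0};
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof lin);
    close(fd);
}

void graceful_close(int fd) {
    // Send a FIN, then wait for the peer to finish and close its end.
    shutdown(fd, SHUT_WR);
    char buf[4096];
    while (read(fd, buf, sizeof buf) > 0) {}    // drain until the peer's FIN
    close(fd);
}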

Confusion about UDP/IP and sendto/recvfrom return values

I'm working with UDP sockets in C++ for the first time, and I'm not sure I understand how they work. I know that sendto/recvfrom and send/recv normally return the number of bytes actually sent or received. I've heard this value can be arbitrarily small (but at least 1), and depends on how much data is in the socket's buffer (when reading) or how much free space is left in the buffer (when writing).
If sendto and recvfrom only guarantee that 1 byte will be sent or received at a time, and datagrams can be received out of order, how can any UDP protocol remain coherent? Doesn't this imply that the bytes in a message can be arbitrarily shuffled when I receive them? Is there a way to guarantee that a message gets sent or received all at once?
It's a little stronger than that. UDP delivers a full packet per receive call; a datagram is never split across reads, so your receive buffer has to be big enough to hold all the data sent in the packet. But there's also a size limit: if you want to send a lot of data, you have to break it into packets and be able to reassemble them yourself. There is also no guaranteed delivery, so you have to check to make sure everything comes through.
But since you can implement all of TCP with UDP, it has to be possible.
Usually, what you do with UDP is make small packets that are discrete.
Metaphorically, think of UDP like sending postcards and TCP like making a phone call. When you send a postcard, you have no guarantee of delivery, so you need to do something like have an acknowledgement come back. With a phone call, you know the connection exists, and you hear the answers right away.
Actually you can send a UDP datagram of 0 bytes length. All that gets sent is the IP and UDP headers. The UDP recvfrom() on the other side will return with a length of 0. Unlike TCP this does not mean that the peer closed the connection because with UDP there is no "connection".
No. With sendto you send out whole packets, which can contain as little as a single byte.
If you send 10 bytes as a single sendto call, these 10 bytes get sent into a single packet, which will be received coherent as you would expect.
Of course, if you decide to send those 10 bytes one by one, each of them with a sendto call, then indeed you send and receive 10 different packets (each one containing 1 byte), and they could be in arbitrary order.
It's similar to sending a book via the postal service. You can package the book as a whole into a single box, or tear out every page and send each one as an individual letter. In the first case, the package is bulkier but you receive the book as a single, ordered entity. In the latter, each package is very light, but good luck reading that ;)
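A self-contained sketch of that difference over the loopback interface (port number is illustrative): each sendto produces one datagram, and each recvfrom returns exactly one datagram, never a mix.

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(rx, reinterpret_cast<sockaddr*>(&addr), sizeof addr);

    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(tx, "hello", 5, 0, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    sendto(tx, "world", 5, 0, reinterpret_cast<sockaddr*>(&addr), sizeof addr);

    char buf[64];
    for (int i = 0; i < 2; ++i) {
        ssize_t n = recvfrom(rx, buf, sizeof buf, 0, nullptr, nullptr);
        std::printf("datagram %d: %zd bytes: %.*s\n", i, n, int(n), buf);
        // prints 5 bytes each time - never 10, never 1
    }
    close(tx); close(rx);
}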
I have a client program that uses a blocking select (NULL timeout parameter) in a thread dedicated to waiting for incoming data on a UDP socket. Even though it is blocking, the select would sometimes return with an indication that the single read descriptor was "ready". A subsequent recvfrom returned 0.
After some experimentation, I have found that on Windows at least, sending a UDP packet to a port on a host that's not expecting it can result in a subsequent recvfrom getting 0 bytes. I suspect some kind of rejection notice might be coming from the other end. I now use this as a reminder that I've forgotten to start the process on the server that looks for the client's incoming traffic.
BTW, if I instead "sendto" a valid but unused IP address, then the select does not return a ready status and blocks as expected. I've also found that blocking vs. non-blocking sockets makes no difference.