I’m using Wireshark to monitor telnet server/client transmissions. Occasionally the server's incoming text buffer pulls in multiple instances of the same incoming data string like so:
*1890000000000900000000A00000000B000000000064/\r*1890000000000900000000A00000000B000000000064/\r*1890000000000900000000A00000000B000000000064/\r*1890000000000900000000A00000000B000000000064/\r
When this happens Wireshark says they are retransmissions. TCP wouldn't be responsible for the duplicated data, would it? Should I focus more on the client code as the source of the duplication?
I might add that this application is communicating over 2.4 GHz Wi-Fi with 23 access points. It is a very congested network.
Does Wireshark show any unusually colored entries just before the perceived retransmission? If Wireshark says the packet is a retransmission, you can assume the server (via TCP) is re-sending it. You can verify this by checking whether the client failed to send an ACK for that packet, either initially or subsequently.
I have built a UDP server in C++ and I have a couple of questions about it.
Goal:
I have incoming TCP traffic and I need to forward it as UDP traffic. My own UDP server then processes this UDP data.
The size of the TCP packets can vary.
Details:
In my example I have a TCP packet that consists of a total of 2000 bytes (4 random bytes, 1995 'a' (0x61) bytes and the last byte being 'b' (0x62)).
My UDP server has a buffer (recvfrom buffer) with a size larger than 2000 bytes.
My MTU size is 1500 everywhere.
My server is receiving this packet correctly. In my UDP server I can see the received packet has a length of 2000 and if I check the last byte buffer[1999], it prints 'b' (0x62), which is correct. But if I open tcpdump -i eth0 I see only one UDP packet: 09:06:01.143207 IP 192.168.1.1.5472 > 192.168.1.2.9000: UDP, bad length 2004 > 1472.
With the tcpdump -i eth0 -X command, I see the data of the packet, but only ~1472 bytes, which does not include the 'b' (0x62) byte.
The ethtool -k eth0 command prints udp-fragmentation-offload: off.
So my questions are:
Why do I only see one packet and not two (fragmented part 1 and 2)?
Why don't I see the 'b' (0x62) byte in the tcpdump output?
In my C++ server, what buffer size is best to use? I currently have it at 65535 because the incoming TCP packets can be any size (a simplified sketch of my receive loop follows this list).
What will happen if the size exceeds 65535 bytes? Will I have to implement my own fragmentation scheme before sending the TCP data as UDP?
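For reference, the receive side described above looks roughly like this (simplified sketch, not my actual code; POSIX sockets, port 9000 taken from the tcpdump output):

    // Simplified sketch of the UDP receive loop (blocking socket, IPv4).
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <vector>

    int main() {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                 // port from the tcpdump output
        if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        // 65535 is the largest possible UDP datagram, so nothing gets truncated.
        std::vector<char> buffer(65535);
        for (;;) {
            sockaddr_in peer{};
            socklen_t peerLen = sizeof(peer);
            ssize_t n = recvfrom(fd, buffer.data(), buffer.size(), 0,
                                 reinterpret_cast<sockaddr*>(&peer), &peerLen);
            if (n < 0) { perror("recvfrom"); break; }
            if (n > 0) {
                // recvfrom returns one complete (already reassembled) datagram per call
                std::printf("received %zd bytes, last byte 0x%02x\n",
                            n, static_cast<unsigned char>(buffer[n - 1]));
            }
        }
        close(fd);
    }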
OK, circumstances are more complicated than they appear from the question; extracted from your comments, the following information is available:
Some client sends data to a server – both not modifiable – via TCP.
Between the two resides a firewall, though, which only allows uni-directional communication towards the server, via UDP.
You now intend to implement a proxy consisting of two servers residing in between and tunneling TCP data via UDP.
The server not being able to reply backwards is not a problem either.
My personal approach would then be as follows:
Let the proxy servers be entirely data-unaware! Let the outbound receiver accept (recv or recvfrom, depending on whether a single client or multiple clients are involved) chunks of data that still fit into UDP packets and simply forward them as they are.
Apply some means to ensure that lost data is at least detected, or better, can be reconstructed. As confirmation or re-request messages are impossible due to the firewall limitation, the only way to increase reliability is via redundancy, though.
Configure the final target server to listen on loopback only.
Let the inbound proxy connect to the target server via TCP and as long as no (non-recoverable) errors occur just forward any incoming data as is.
To be able to detect lost messages I'd at the very least prepend a packet counter to any UDP message sent. If two subsequent messages do not carry consecutive counter values, then a message has been lost in between.
As no communication backwards is possible, the only way to increase reliability is unconditional redundancy, trading away some of your transfer rate – e.g. by sending every message more than once and ignoring surplus duplicates on the receiving side.
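A minimal sketch of those last two points, assuming POSIX sockets and a UDP socket that is already connect()ed to the inbound proxy (the duplication factor of 2 and the helper names are just illustrative):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Outbound proxy: prepend a 4-byte counter (network byte order) and send
    // every datagram twice - unconditional redundancy, no feedback channel needed.
    void send_with_counter(int sock, const char* data, size_t len) {
        static uint32_t counter = 0;
        std::vector<char> packet(4 + len);
        uint32_t net = htonl(counter++);
        std::memcpy(packet.data(), &net, 4);
        std::memcpy(packet.data() + 4, data, len);
        for (int i = 0; i < 2; ++i)
            send(sock, packet.data(), packet.size(), 0);
    }

    // Inbound proxy: drop surplus duplicates, detect gaps.
    // Returns the payload length, or -1 if the packet should be ignored.
    ssize_t handle_packet(const char* packet, size_t len, uint32_t& expected) {
        if (len < 4) return -1;                      // malformed
        uint32_t counter;
        std::memcpy(&counter, packet, 4);
        counter = ntohl(counter);
        if (counter < expected) return -1;           // duplicate of something already seen
        if (counter > expected) {
            // gap: messages 'expected' .. counter-1 were lost - log it here
        }
        expected = counter + 1;
        return static_cast<ssize_t>(len - 4);        // payload starts at packet + 4
    }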
A more elaborate approach might distribute redundant data over several packets such that a missing one can be reconstructed from the remaining ones – maybe similar to what RAID level 5 does. Admittedly, you need to be pretty committed to try that...
A final question is what the routing looks like. With UDP there is no guarantee that packets are received in the same order as they are sent. If there's really only one fixed route available from the outbound proxy to the inbound one via the firewall, then packets shouldn't overtake one another – you might still want to at least log the inbound UDP packets to a file so you can monitor them and, if errors occur, apply appropriate means (buffering packets and re-ordering them if need be).
The size of the TCP packets can vary.
While there is no code shown, the sentence above and your description suggest that you are working with wrong assumptions about how TCP works.
Contrary to UDP, TCP is not a message-based protocol but a byte stream. This especially means that it does not guarantee that a single send at the sender will be matched by a single recv at the recipient. Thus, even if the send is done with 2000 bytes, it might still be that the first recv only gets 1400 bytes while another recv gets the rest – regardless of whether everything would fit into the socket buffer at once.
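Reading a known amount of data therefore needs a loop around recv, roughly like this (sketch for a blocking POSIX socket):

    #include <sys/socket.h>
    #include <sys/types.h>

    // Keep calling recv() until exactly 'len' bytes have arrived.
    // Returns false if the peer closed the connection or an error occurred.
    bool recv_all(int sock, char* buf, size_t len) {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(sock, buf + got, len - got, 0);
            if (n <= 0)                   // 0 = connection closed, <0 = error
                return false;
            got += static_cast<size_t>(n);
        }
        return true;
    }

For the 2000-byte example that would be recv_all(sock, buffer, 2000); in practice you would prefix each message with its length so the receiver knows how much to read.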
I am writing a UDP client-server app in C++ (I've done this lots of times before in many languages over the past 15 years), but somehow this one is not working correctly.
I cannot post the actual code or a minimal reproducible app at the moment, but I am willing to pay for live help if anyone is available to help solve this quickly via screen sharing.
I think this is a peculiarity of C++ sockets and the way I am using them in this specific app, which is quite complex.
Basically, the issue is that the packets sent from the server to the client are not received by the client, but only when said client is behind a separate NAT.
When both are on the same local network and using their local IPs, everything works as expected.
Here is what I am doing:
Client sendto(...) packets through UDP to the server using a specific server host and port 12345 (and keeps sending these non-stop)
On another thread, client bind(...) on port 12345 and "0.0.0.0" and tries to poll() and recvfrom() in a loop (poll always returns 0 here when the client is behind a separate NAT)
Server bind() on port 12345 and "0.0.0.0" then poll() and recvfrom() in a loop
Upon receiving the first UDP message from a client, it starts a thread for sending UDP messages back to the client on a new socket, using the sockaddr_in that it got from the recvfrom() to pass in the sendto() commands.
Result: The server perfectly receives ALL messages from all clients and sends all messages back to all clients, but any client that is not on the same NAT never receives any messages (poll() always returns 0).
As far as I understand it, when the client sends a UDP message to the server on a specific remote port (12345 in this case), it will punch a hole in its NAT so that it can receive messages back from the remote server on that port...
I tested five different client network configurations:
Local network with the server, using local IP addresses (WORKS)
Local network with the server while client is using a VPN thus going through a remote NAT (DOES NOT WORK)
Local network with the server but client is using the WAN ip address to connect to the server (DOES NOT WORK)
Client at an actual remote network from a friend's connection, behind a router (DOES NOT WORK)
Client going through a wifi hotspot created using my phone (DOES NOT WORK)
For all tests above, the server was correctly receiving all communications from clients.
I also tried forcing the port to 12345 for the sendto() instead of using the sockaddr_in as set from recvfrom(), same issue.
Am I doing anything wrong?
If you want to help but need to see actual code, I can do that live with screen sharing and I will pay for the help.
Thanks.
Also, if anyone can point me to a great site where I can pay for VERY QUICK help, please let me know. I don't even bother searching Google because I really want actual advice from people who have tried these services, not ads trying to rip me off...
Only the original receiver socket is allowed to reply to the client, because it's the client's request that opens the port in the NAT. So either use the same socket in the server to receive and reply, or get the port that the second server socket was bound to and transfer it in an initial message through the original server port, so that the client can send to it and punch the hole.
Creating two half-duplex sockets when a socket is a full-duplex communication object looks so strange that I'd go with the first option.
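A rough sketch of the first option (POSIX-style sockets, port 12345 as in the question; error handling omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in local{};
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);   // "0.0.0.0"
        local.sin_port = htons(12345);               // port from the question
        bind(sock, reinterpret_cast<sockaddr*>(&local), sizeof(local));

        char buf[2048];
        for (;;) {
            sockaddr_in client{};
            socklen_t clientLen = sizeof(client);
            ssize_t n = recvfrom(sock, buf, sizeof(buf), 0,
                                 reinterpret_cast<sockaddr*>(&client), &clientLen);
            if (n < 0) break;
            // Reply on the SAME socket, to the exact address:port that recvfrom
            // reported - that is the client's public mapping in its NAT.
            sendto(sock, buf, static_cast<size_t>(n), 0,
                   reinterpret_cast<sockaddr*>(&client), clientLen);
        }
        close(sock);
    }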
I need to do some validation testing of a new feed handler I have made. I have some pcap data that I captured from the production network and I would like to have my development feed handler connect to the "replay" of this data and compare the results.
My pcap:
I have a prod application that connects to a data feed via a TCP connection to an external server; let's assume this is 123.456.789.1:1234. This external server then sends data to my application; there is almost no client-to-server communication, the server just sends the client data until the client drops. I have a pcap of all the packets sent to and from port 1234. I got this pcap by mirroring the production port (SPAN) on the switch and attaching tcpdump to an interface plugged into the mirrored network port. When I look at the pcap in Wireshark it has all the data I would expect.
My problem:
I am in no way a network engineer and I am unsure how I can use this pcap to test my application. What I would like to do is "replay" this stream from the pcap and connect to it with my development application to validate that the data is being handled the same way it was on the prod connection.
I would like to somehow "replay" the data sent from 123.456.789.1:1234 on 127.0.0.1:1234 and then connect to 127.0.0.1:1234 with my dev application. I looked at tcpreplay, but from the documentation I cannot figure out if it can do this; I get the feeling that it does not handle the TCP session data. I could do this if it were a UDP stream, but tcpreplay cannot act as the external server. Did I read this wrong, or is there another tool that will let me do this?
thanks!
You may want to use netcat if you just want to throw some data back at your tool, and you don't care about what the tool sends.
You would do this by extracting the raw data sent to your tool from the pcap file (this tool may be helpful) and then piping that into netcat.
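If netcat turns out to be awkward, a trivial TCP "replay" server does the same job: listen on 127.0.0.1:1234, wait for the dev feed handler to connect, and stream the extracted bytes at it. A rough sketch in C++ (feed.bin is just a placeholder name for the extracted server-side payload; error handling mostly omitted):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <fstream>
    #include <vector>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);

        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(1234);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        bind(srv, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(srv, 1);

        int client = accept(srv, nullptr, nullptr);  // wait for the dev feed handler

        std::ifstream feed("feed.bin", std::ios::binary);
        std::vector<char> chunk(4096);
        while (feed.read(chunk.data(), static_cast<std::streamsize>(chunk.size()))
               || feed.gcount() > 0) {
            std::streamsize left = feed.gcount();
            const char* p = chunk.data();
            while (left > 0) {                       // handle partial send()s
                ssize_t n = send(client, p, static_cast<size_t>(left), 0);
                if (n <= 0) return 1;
                p += n;
                left -= n;
            }
        }
        close(client);
        close(srv);
    }

This only replays the payload bytes in order, not the original packet timing or segmentation, which is usually enough for validating a feed handler.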
Is there a way that, using WinPcap, I can read a packet and if that packet is going to a certain domain name, I can block the packet with a custom HTML page? Like, if I wanted to go to myspace and my program saw that, is there a way I could return HTML saying "Site Blocked"?
WinPcap provides the capability to send packets, so you can use it to terminate a TCP session: prepare a packet with the IP addresses and port numbers of the offending packet and with the FIN flag set, then send the packet to the server so that it closes the session, and send it (with modification) to the client to stop it waiting for incoming packets.
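A bare skeleton of what that looks like with the WinPcap API; the actual header contents (MACs, IPs, ports, sequence numbers, the FIN/ACK flags and the recomputed checksums) have to be copied and adapted from the captured offending packet and are omitted here:

    #include <pcap.h>

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        // On Windows the adapter name looks like "\\Device\\NPF_{...}";
        // use pcap_findalldevs() to enumerate the real names.
        pcap_t* handle = pcap_open_live("eth0", 65536, 1, 1000, errbuf);
        if (!handle) return 1;

        unsigned char packet[54] = {0};  // 14 Ethernet + 20 IP + 20 TCP header bytes
        // ... fill in the headers from the offending packet, set the FIN (and ACK)
        //     bit in the TCP flags byte at offset 14 + 20 + 13, recompute the IP
        //     and TCP checksums ...

        if (pcap_sendpacket(handle, packet, sizeof(packet)) != 0)
            return 1;                    // pcap_geterr(handle) describes the failure

        pcap_close(handle);
    }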
I've seen it asked elsewhere but no one answers it to my satisfaction: how can I receive and send raw packets?
By "raw packets", I mean where I have to generate all the headers and data, so that the bytes are completely arbitrary, and I am not restricted in any way. This is why Microsofts RAW sockets won't work, because you can't send TCP or UDP packets with incorrect source addresses.
I know you can send packets like I want to with WinPCAP but you cannot receive raw information with it, which I also need to do.
First of all decide what protocol layer you want to test malformed data on:
Ethernet
If you want to generate and receive invalid Ethernet frames with a wrong Ethernet checksum, you are more or less out of luck, as the checksumming is often done in hardware, and in the cases where it isn't, the NIC driver performs the checksumming and there's no way around that, at least on Windows. NetBSD does provide that option for most of its drivers that do Ethernet checksumming in the OS driver, though.
The alternative is to buy specialized hardware (e.g. cards from Napatech; you might find cheaper ones, though), which provides an API for sending and receiving Ethernet frames however invalid you want.
Be aware that when you send invalid Ethernet frames, the receiving end or a router in between will just throw the frames away; they will never reach the application nor the OS IP layer. You'll be testing the NIC or NIC driver on the receiving end.
IP
If all you want is to send/receive invalid IP packets, WinPcap lets you do this. Generate the packets, set up WinPcap to capture packets, and use WinPcap to send.
Be aware that for packets with an invalid IP checksum or other invalid fields, the TCP/IP stack the receiving application runs on will just throw the IP packets away, as will any IP/layer-3 router in between the sender and receiver. They will not reach the application. If you're generating valid IP packets, you'll also need to generate valid UDP or implement a TCP session with valid TCP packets yourself in order for the application to process them; otherwise they'll also be thrown away by the TCP/IP stack.
You'll be testing the lower part of the TCP/IP stack on the receiving end.
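The capture/send setup mentioned above needs very little code; a minimal capture skeleton looks like this ("eth0" is a placeholder adapter name, enumerate the real ones with pcap_findalldevs):

    #include <pcap.h>
    #include <cstdio>

    int main() {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t* handle = pcap_open_live("eth0", 65536, 1, 1000, errbuf);
        if (!handle) { std::fprintf(stderr, "%s\n", errbuf); return 1; }

        pcap_pkthdr* header;
        const u_char* data;
        int res;
        while ((res = pcap_next_ex(handle, &header, &data)) >= 0) {
            if (res == 0) continue;                  // read timeout, no packet
            std::printf("captured %u bytes\n", header->caplen);
            // inspect the Ethernet/IP headers here to verify what actually arrives
        }
        pcap_close(handle);
    }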
TCP/UDP
This is not that different from sending/receiving invalid IP packets. You can do all this with WinPcap, and routers will not throw the packets away as long as the Ethernet/IP headers are OK. An application will not receive these packets, though; they'll be thrown away by the TCP/IP stack.
You'll be testing the upper part of the TCP/IP stack on the receiving end.
Application Layer
This is the (sane) way of actually testing the application (unless your "application" actually is a TCP/IP stack, or lower). You send/receive data as any application would using sockets, but generate malformed application data as you want. The application will receive this data; it is not thrown away by lower protocol layers.
One particular kind of TCP test can be hard to do, though: varying the TCP segments sent, e.g. if you want to test that an application correctly interprets the TCP data as a stream (say, you want to send the string "hello" in 5 segments and somehow cause the receiving application to read() the characters one by one). If you don't need speed, you can usually get that behaviour by inserting pauses in the sending and turning off Nagle's algorithm (TCP_NODELAY) and/or tuning the NIC MTU.
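For example, with Berkeley sockets (sketch; sock is assumed to be an already-connected TCP socket):

    #include <netinet/in.h>
    #include <netinet/tcp.h>   // TCP_NODELAY
    #include <sys/socket.h>
    #include <unistd.h>

    // Send "hello" in 5 one-byte segments: Nagle disabled plus a pause between
    // sends makes it very likely the receiver sees the bytes in separate reads.
    void send_in_segments(int sock) {
        int one = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));

        const char* msg = "hello";
        for (int i = 0; i < 5; ++i) {
            send(sock, msg + i, 1, 0);   // one byte per segment
            usleep(200 * 1000);          // 200 ms pause
        }
    }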
Remember that any muckery with lower-level protocols in a TCP stream, e.g. causing one of the packets to have an invalid/different IP source address, just gets thrown away by the lower-level layers.
You'll be testing an application running on top of TCP/UDP(or any other IP protocol).
Alternatives
Switch to another OS, where you can at least use raw sockets without the restrictions of recent Windows versions.
Implement a transparent drop-insert solution based on the "Ethernet" or "IP" alternative above, i.e. you have your normal client application and your normal server application, you break a cable in between them, and you insert your box with 2 NICs where you programmatically alter bytes of the frames received and send them back out again on the other NIC. This will also allow you to easily introduce packet delays into the system. Linux's netfilter already has this capability, which you can easily build on top of, often with just configuration or scripting.
If you can alter the receiving application you want to test, have it read data from something else such as a file or pipe and feed it random bytes/packets as you wish.
Hybrid model, mainly for TCP application testing, but also useful for e.g. testing UDP ICMP responses. Set up a TCP connection using sockets. Send your invalid application data using sockets. Introduce random malformed packets (much easier than programming with raw sockets to set up a TCP session and then introduce lower-layer errors). Send malformed IP or UDP/TCP packets, or perhaps ICMP packets, using WinPcap, but share the addresses/ports between the socket code and the WinPcap code so you get them right and the receiving application actually sees the packets.
Check out NS/2