TCP packet drop (ns-3) - C++

I am new to the ns-3 network simulator and want to know how to get the number of packet drops in a TCP connection. I know of the following command:
devices.Get (1)->TraceConnectWithoutContext ("PhyRxDrop", MakeBoundCallback (&RxDrop, stream));
But this is helpful only for a single TCP connection over a p2p link. In my topology there is a single p2p connection, but two applications use TCP over that same p2p link, and I would like to know the number of dropped packets individually for each TCP connection. I have researched online quite a lot but was unable to find any resources. Kindly point me to some resources, or give the class name I can use to detect connection-specific TCP packet losses.
As it stands, the command above combines the packet losses of both connections and outputs them to the stream, because both run over the same p2p link.

The usage of RxDrop tells me you're referring to fourth.cc from the ns-3 tutorial. Connecting to the PhyRxDrop trace source results in the registered callback being invoked for every dropped packet; ns-3 does not provide a packet filter that would restrict the callback to only some packets.
However, you can determine which connection a dropped packet belongs to: strip the packet's headers and inspect the port numbers. Remember that every TCP connection is identified by a unique 4-tuple: (source IP, source port, destination IP, destination port).
static void
RxDrop(Ptr<const Packet> packet)
{
    /*
     * Copy the packet, since its headers need to be removed
     * to be inspected. Alternatively, remove the headers from
     * the original and add them back afterwards.
     */
    Ptr<Packet> copy = packet->Copy();

    // Headers must be removed in the order they appear in the packet.
    PppHeader pppHeader;
    copy->RemoveHeader(pppHeader);
    Ipv4Header ipHeader;
    copy->RemoveHeader(ipHeader);
    TcpHeader tcpHeader;
    copy->RemoveHeader(tcpHeader);

    std::cout << "Source IP: ";
    ipHeader.GetSource().Print(std::cout);
    std::cout << std::endl;
    std::cout << "Source Port: " << tcpHeader.GetSourcePort() << std::endl;
    std::cout << "Destination IP: ";
    ipHeader.GetDestination().Print(std::cout);
    std::cout << std::endl;
    std::cout << "Destination Port: " << tcpHeader.GetDestinationPort() << std::endl;
}
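To get per-connection counts rather than per-packet prints, one option is to accumulate drops in a map keyed by the port pair; a minimal sketch (the map and callback names are illustrative, not part of the original answer; since both applications share the same pair of hosts, the ports alone distinguish the two connections):
#include <map>
#include <utility>

// Illustrative global counter: (source port, destination port) -> drop count.
static std::map<std::pair<uint16_t, uint16_t>, uint32_t> g_dropsPerConnection;

static void
RxDropCount(Ptr<const Packet> packet)
{
    Ptr<Packet> copy = packet->Copy();
    PppHeader pppHeader;
    copy->RemoveHeader(pppHeader);
    Ipv4Header ipHeader;
    copy->RemoveHeader(ipHeader);
    TcpHeader tcpHeader;
    copy->RemoveHeader(tcpHeader);
    // One counter per TCP connection on this link.
    ++g_dropsPerConnection[std::make_pair(tcpHeader.GetSourcePort(),
                                          tcpHeader.GetDestinationPort())];
}

// Hooked up the same way as in the question:
// devices.Get(1)->TraceConnectWithoutContext("PhyRxDrop", MakeCallback(&RxDropCount));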

Related

How to close a TCP server socket correctly

Can anyone explain what I am doing wrong with my TCP server termination?
In my program (a single instance), I start another program which starts a TCP server. The TCP server is only allowed to listen for one connection.
After a connection is established between the client and my server, a few messages are exchanged. As soon as the message protocol has run through, I want to terminate the server socket, reset my internal states and close my sub-program.
After a few seconds, it should be possible to open my sub-program again.
If so, I open the socket again... The same network device, the same IP address and the same port as before are used...
My problem: my sub-program crashes when run the second time.
With netstat I analyzed my socket and found that it stays in the LAST_ACK state.
It can take more than 60 seconds (a timeout?) before the socket is finally closed.
For closing the socket, I used the following code:
if (0 == shutdown(socketDescriptor_, SHUT_RDWR)) { // shutdown() returns 0 on success
    std::cout << "Read/write of socket deactivated" << std::endl;
}
if (0 == close(socketDescriptor_)) {               // close() returns 0 on success
    std::cout << "Socket is destroyed" << std::endl;
    socketDescriptor_ = -1;
}
Any ideas? Thanks for your help!
Kind regards,
Matthias
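One common way to make the second bind succeed while the previous socket instance is still draining in LAST_ACK/TIME_WAIT is to set SO_REUSEADDR before bind. A minimal sketch, assuming a POSIX listener (the descriptor name and port value are illustrative):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <iostream>

int listenFd = socket(AF_INET, SOCK_STREAM, 0);
int yes = 1;
// Allow bind() to reuse the address while the kernel is still
// tearing down the old socket instance.
if (0 != setsockopt(listenFd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes))) {
    std::cout << "setsockopt(SO_REUSEADDR) failed" << std::endl;
}
sockaddr_in addr{};
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = INADDR_ANY;
addr.sin_port = htons(50000); // illustrative port
if (0 != bind(listenFd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr))) {
    std::cout << "bind failed" << std::endl;
}
Note that LAST_ACK specifically means the peer never acknowledged the final FIN, so it is also worth checking that the client closes its end cleanly.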

How to Check if a Client is Still Connected in C++

I want to check, using sockets, whether a client is still connected to the server. I saw that the recv function gives me the status of the client, but it is not working as I expect (sometimes the client did not disconnect, yet recv thought it had).
I got this code:
if (recv(client->getSocket(), rcmsg, 1024, 0) <= 0)
{
    bool found = false;
    for (i = 0; i < this->clients.size(); i++)
    {
        if (*(this->clients[i]) == *client)
        {
            found = true;
            break;
        }
    }
    if (found)
        this->clients.erase(this->clients.begin() + i);
    closesocket(client->getSocket());
    std::cout << "disconnected: socket = " << client->getSocket() << ", ip = " << inet_ntoa(addr.sin_addr) << endl;
}
Is there another solution?
Thanks in advance.
It's important to remember where TCP/IP came from.
It's a communications protocol stack that's designed to withstand an all-out nuclear war.
That means, telegraph poles being vapourised, radio links intermittent or no longer there, telephone exchanges no longer being there... and it still had to work.
In this context, define "connected".
A TCP "connection" is merely two distinct hosts believing that they're connected, and somehow packets are routed to the hosts eventually by the remaining routers on the network.
This is why there is a whole host of protocols we never even think about, for example RIP (Routing Information Protocol), whose job is to discover the remaining links after the bombardment.
There is really no such thing as "connected". There is simply the time elapsed since you received a packet from the remote host. That's it.
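One practical consequence: if you want the stack to notice a silently vanished peer instead of waiting for your next send to fail, enable TCP keepalive. A minimal sketch, assuming Winsock to match the closesocket call above and reusing the question's client->getSocket() accessor (note the default keepalive interval is hours unless tuned):
#include <winsock2.h>
#include <iostream>

BOOL keepAlive = TRUE;
// With SO_KEEPALIVE, the stack periodically probes an idle peer;
// a dead peer eventually makes recv() return an error instead of blocking.
if (SOCKET_ERROR == setsockopt(client->getSocket(), SOL_SOCKET, SO_KEEPALIVE,
                               reinterpret_cast<const char*>(&keepAlive),
                               sizeof(keepAlive))) {
    std::cout << "setsockopt(SO_KEEPALIVE) failed" << std::endl;
}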

QTcpSocket and Specifying Client Outgoing Network Device

Windows 8.1 user here, using Qt 5.3. Trying to learn network programming (please bear with me). Let's say I have two network devices on my machine. One is assigned the IP 192.168.1.2, and the other 192.168.1.3. The first device has priority.
My goal is to create a QTcpServer on 192.168.1.2 and a QTcpSocket client on 192.168.1.3. The way I envision this would work is the data packets from the client will start at 192.168.1.3 (on some port), travel to the router, then to the server at 192.168.1.2 (on some port). Ok, hopefully this sounds reasonable.
Here's the problem: I can't find a working way to specify an outgoing address with QTcpSocket. There appears to be a bind method, but it doesn't do much. Each time I send from the client, the data travels on the default device at 192.168.1.2.
socket = new QTcpSocket(this);
qDebug() << socket->localAddress(); // shows "0"
qDebug() << socket->localPort(); // shows "0"
socket->bind(QHostAddress("192.168.1.3"), 50000);
qDebug() << socket->localAddress(); // shows "0"
qDebug() << socket->localPort(); // shows "50000"
//socket->setLocalAddress(QHostAddress("192.168.1.4")); // error, says it's protected
//socket->setLocalPort("50000"); // error, says it's protected
//qDebug() << socket->localAddress();
//qDebug() << socket->localPort();
socket->connectToHost("google.com", 80); // network manager shows data on 192.168.1.2
Any ideas?
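For what it's worth, QAbstractSocket::bind returns false on failure and must be called before connectToHost. A minimal sketch that checks the result (a hedged suggestion, not a confirmed fix: even with a bound source address, when both NICs sit on the same subnet the OS routing table may still pick the higher-priority device for the outgoing frames):
QTcpSocket *socket = new QTcpSocket(this);
// bind() returns false on failure; an unbound socket lets the OS
// choose the source address, which would explain traffic on 192.168.1.2.
if (!socket->bind(QHostAddress("192.168.1.3"), 50000)) {
    qDebug() << "bind failed:" << socket->errorString();
}
socket->connectToHost("google.com", 80);
qDebug() << "local address after connect:" << socket->localAddress();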

UDP bind failures

{Windows 7, MinGW 4.8, boost 1.55}
I'm having some problems with UDP binds. I have a client that broadcasts datagrams to listeners listening on a specific port, and the client itself binds to a port in case the listeners want to communicate something back.
The port on which the client needs to bind is X and the servers are listening on Y.
Problem:
If I simulate a client crash (e.g., by causing a segmentation fault by dereferencing a nullptr) after binding the UDP socket to the port, then once the client application is no longer running (no longer listed in Windows Task Manager), netstat -ano | find "X" still shows that someone is bound to port X at IP address 0.0.0.0 (the client had specified the IP address as "any address"). The PID cannot be found in Windows Task Manager. However, in the TCPView application I can see that a <non-existent> process is still bound to port X.
On starting the client subsequently (without making it crash this time), I get two behaviors:
<1> On some machines the client is unable to bind the socket again (although the reuse_address option is set to true) and the error message is: An attempt was made to access a socket in a way forbidden by its access permissions.
<2> On other machines the client binds successfully, but the read handler is never called and the client does not receive any datagrams on port X, although the servers are unicasting to the client on port X. In fact, <2> happens even when launching multiple instances of the client on the same machine with none of the clients deliberately crashed and left as zombie processes: only the first one gets datagrams.
Here is how client socket is set up:
if(!m_udpSocket.is_open())
{
    m_udpSocket.open(m_localEndpoint.protocol(), errorCode); // m_localEndpoint is address 0.0.0.0 and port X
    if(errorCode)
    {
        std::cerr << "Unable to open socket: " << errorCode.message() << std::endl;
    }
    else
    {
        m_udpSocket.set_option(boost::asio::socket_base::reuse_address(true), errorCode);
        if(errorCode)
        {
            std::cerr << "Reuse address option set failure. " << errorCode.message() << std::endl;
        }
        m_udpSocket.set_option(boost::asio::socket_base::broadcast(true), errorCode);
        if(errorCode)
        {
            std::cerr << "Socket cannot send broadcast. " << errorCode.message() << std::endl;
        }
        else
        {
            m_udpSocket.bind(m_localEndpoint, errorCode);
            if(errorCode)
            {
                std::cerr << "Socket cannot bind...!! " << errorCode.message() << std::endl;
            }
        }
    }
}
Can you explain why I get <1> and <2>, and what I can do to avoid them and make the socket bind even if some other process is bound to that port? I need to support Windows, Linux and macOS.
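A hedged observation on <2>: when several sockets bind the same unicast UDP port via reuse_address, the stack typically delivers each unicast datagram to only one of them, so address reuse is really intended for multicast/broadcast receivers. One workaround sketch, reusing m_udpSocket and errorCode from the code above, is to bind each client to an ephemeral port and advertise that port to the servers:
// Bind to port 0 and let the OS pick a free ephemeral port.
boost::asio::ip::udp::endpoint ephemeralEndpoint(boost::asio::ip::address_v4::any(), 0);
m_udpSocket.bind(ephemeralEndpoint, errorCode);
if(!errorCode)
{
    // Tell the servers to reply to this port instead of the fixed port X.
    std::cout << "Bound to port " << m_udpSocket.local_endpoint().port() << std::endl;
}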

boost::asio IPv4 address and UDP comms

Problem Solved - See bottom for solution notes
I'm trying to build a simple app to test an ethernet-capable microcontroller. All I want to do is send and receive small UDP packets. The code uses boost::asio for the networking and is incredibly simple. For debugging I moved all the initialisation out of the constructors so I could check each step. Here's the body of my code:
boost::system::error_code myError;
boost::asio::ip::address_v4 targetIP;
targetIP.from_string("10.1.1.75", myError); // Configure output IP address. HACKHACK--Hardcoded for Debugging
std::cout << "GetIP - " << myError.message() << std::endl;
std::cout << "IP: " << targetIP << std::endl;
boost::asio::ip::udp::endpoint myEndpoint; // Create endpoint on specified IP.
myEndpoint.address(targetIP);
myEndpoint.port(0x1000);
std::cout << "Endpoint IP: " << myEndpoint.address().to_string() << std::endl;
std::cout << "Endpoint Port: " << myEndpoint.port() << std::endl;
boost::asio::io_service io_service; // Create socket and IO service, bind socket to endpoint.
udp::socket socket(io_service);
socket.open( myEndpoint.protocol(), myError );
std::cout << "Open - " << myError.message() << std::endl;
socket.bind( myEndpoint, myError );
std::cout << "Bind - " << myError.message() << std::endl;
char myMessage[] = "UDP Hello World!"; // Send basic string, enable socket level debugging.
socket.send(boost::asio::buffer(myMessage, sizeof(myMessage)), boost::asio::socket_base::debug(true), myError);
std::cout << "Send - " << myError.message() << std::endl;
boost::array<char, 128> recv_buf; // Receive something (hopefully an echo from the uP)
udp::endpoint sender_endpoint;
size_t len = socket.receive_from( boost::asio::buffer(recv_buf), sender_endpoint );
std::cout.write(recv_buf.data(), len);
The snag happens right at the beginning. The address_v4 doesn't want to accept the IP that I'm passing into it. The output of this app is:
GetIP - The operation completed successfully
IP: 0.0.0.0
Endpoint IP: 0.0.0.0
Endpoint Port: 4096
Open - The operation completed successfully
Bind - The operation completed successfully
Send - A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
I'm assuming the send error is a result of the address_v4 not getting set correctly, but I can't think of any reason for that to happen.
For those playing along at home, my PC has dual ethernet cards, one of which has been DHCP'd 10.1.1.7, so the target IP should be reachable without any routing. I'm using Boost 1.46.1 on 32-bit Win7 and MSVS 10. It also fails when I try an IP of 127.0.0.1; correct me if I'm wrong, but that should work for loopback in this context?
Edit with Updates:
So thanks to the earlier answers I've gotten the IP address into my address_v4, and I'm no longer trying to bind when I meant to use connect. The significantly changed section of code is the TX, which now looks like:
socket.open( targetEndpoint.protocol(), myError );
std::cout << "Open - " << myError.message() << std::endl;
char myMessage[] = "UDP Hello World!"; // Send basic string, enable socket level debugging.
socket.send_to(boost::asio::buffer(myMessage, sizeof(myMessage)), targetEndpoint, boost::asio::socket_base::debug(true), myError);
std::cout << "Send - " << myError.message() << std::endl;
(I renamed myEndpoint to targetEndpoint to help reduce confusion.....)
I now get the error while trying to send:
The attempted operation is not supported for the type of object referenced
I would give my firstborn for an informative error message at this point! The error is consistent regardless of which target port I use. The only thing I can think of is that I need to set the source port somewhere, but I don't see how to do that in any of the boost::asio documentation.
Final Resolution
I have managed to make this work, so I'm going to post the gotchas that I found in a nice neat list for anyone else who stumbles across this answer with similar problems. I think the main issue I had was that none of the boost examples ever show how to connect to a specified IP; they all use a resolver. That made the examples a lot harder for me to understand.
- When using the from_string call to convert a text IP, use the syntax from the first answer below rather than my syntax above!
- When setting up the UDP socket, order of operations is crucial! If you don't want to do it in the constructor you need to:
  1. Open the socket using the required protocol.
  2. Bind the socket to a local endpoint which specifies the source UDP port number.
  3. Connect the socket to the remote endpoint which specifies the destination IP and port number.
  Attempting to bind after the connect will cause the bind to fail. The transmission will operate just fine, but your packets will be sent from an arbitrary port number.
- Use a send method to actually transmit. Do not attempt to enable debugging data with boost::asio::socket_base::debug(true)! All this flag seems to do is cause error messages within an otherwise functional send!
I'd also like to share that my most valuable debugging tool in this entire exercise was Wireshark. Maybe it's only because I'm used to having a CRO or Protocol Analyser when I'm working on comms like this, but I found being able to see the bytes-on-wire display helped me sort out a whole bucketload of stuff that I would otherwise never have tracked down.
Cheers for your help on the IP issues and helping me realise the difference between connect and bind.
The problem you are currently seeing appears to be your usage of this line:
targetIP.from_string("10.1.1.75", myError);
boost::asio::ip::address::from_string is a static function that returns a constructed ip::address object. Change the call to look like this:
targetIP = boost::asio::ip::address::from_string("10.1.1.75", myError);
And your IP address should be populated properly.
Off the top of my head: you are trying to bind the socket to an endpoint with address 10.1.1.75, but that seems to be a remote endpoint. I would assume you want to bind it locally and use send_to, as this is UDP.
In this line there is an error:
targetIP = boost::asio::ip::address::from_string("10.1.1.75", myError);
You should put:
targetIP = boost::asio::ip::address_v4::from_string("10.1.1.75", myError);
and then targetIP has the right value!
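Putting the resolution notes together, a minimal end-to-end sketch of the working order of operations (open, bind, connect, send), using the question's target address and message; the local port 50000 is illustrative:
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::system::error_code ec;
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service);

    boost::asio::ip::udp::endpoint targetEndpoint(
        boost::asio::ip::address_v4::from_string("10.1.1.75", ec), 0x1000);

    socket.open(targetEndpoint.protocol(), ec);  // 1. open with the required protocol
    socket.bind(boost::asio::ip::udp::endpoint(  // 2. bind the local source port first
        boost::asio::ip::udp::v4(), 50000), ec);
    socket.connect(targetEndpoint, ec);          // 3. then connect to the remote endpoint

    char myMessage[] = "UDP Hello World!";
    socket.send(boost::asio::buffer(myMessage, sizeof(myMessage)), 0, ec); // 4. plain send, no debug flag
    std::cout << "Send - " << ec.message() << std::endl;
    return 0;
}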