Communicating Through Protocol Layers With INET Packet - C++

I'm having trouble with packet reception at the receiver side.
Please help me find a way to do this.
At the SENDER side, I encapsulate the data packet (which comes from UdpBasicApp through the Udp protocol) when it arrives at the network layer, as follows:
void Sim::encapsulate(Packet *packet) {
    cModule *iftModule = findModuleByPath("SensorNetwork.sink.interfaceTable");
    IInterfaceTable *ift = check_and_cast<IInterfaceTable *>(iftModule);
    auto *ie = ift->findFirstNonLoopbackInterface();
    mySinkMacAddr = ie->getMacAddress();
    mySinkNetwAddr = ie->getNetworkAddress();
    interfaceId = ie->getInterfaceId();
    // Set source and destination MAC and network addresses.
    packet->addTagIfAbsent<MacAddressReq>()->setSrcAddress(myMacAddr);
    packet->addTagIfAbsent<MacAddressReq>()->setDestAddress(mySinkMacAddr);
    packet->addTagIfAbsent<L3AddressReq>()->setSrcAddress(myNetwAddr);
    packet->addTagIfAbsent<L3AddressReq>()->setDestAddress(mySinkNetwAddr);
    packet->addTagIfAbsent<InterfaceReq>()->setInterfaceId(interfaceId);
    // Attach the protocol tags to the packet going down.
    packet->addTagIfAbsent<PacketProtocolTag>()->setProtocol(&getProtocol());
    packet->addTagIfAbsent<DispatchProtocolInd>()->setProtocol(&getProtocol());
}
At the RECEIVER side, I try to get the network address of the SENDER as follows:
auto l3 = packet->addTagIfAbsent<L3AddressReq>()->getSrcAddress();
EV_DEBUG << "THE SOURCE NETWORK ADDRESS IS : " << l3 << endl;
When I print l3, the output is:
DEBUG: THE SOURCE NETWORK ADDRESS IS : <none>
What is wrong?
How can I access the sender's network address through the received packet?
Many thanks in advance, I would be grateful for any help.

Request tags are things that you add to a packet to send information down to lower OSI layers. On the receiving end, protocol layers annotate the packet with indication tags so that upper OSI layers can read that information if needed. You are adding an empty request tag to an incoming packet, so no wonder it is empty. What you need is to get the L3AddressInd tag from the packet and extract the source address from it:
L3Address srcAddr = packet->getTag<L3AddressInd>()->getSrcAddress();
or
MacAddress srcAddr = packet->getTag<MacAddressInd>()->getSrcAddress();
depending on how the packet was received.
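For example, at the point where the packet arrives from the lower layer you could read the indication tags like this (a minimal sketch; the module and method names are just illustrative, not taken from your code):
void Sim::decapsulate(Packet *packet)   // hypothetical receiver-side handler
{
    // Indication tags are attached by the lower layers on the way up.
    L3Address srcNetwAddr = packet->getTag<L3AddressInd>()->getSrcAddress();
    MacAddress srcMacAddr = packet->getTag<MacAddressInd>()->getSrcAddress();
    EV_DEBUG << "THE SOURCE NETWORK ADDRESS IS : " << srcNetwAddr << endl;
    EV_DEBUG << "THE SOURCE MAC ADDRESS IS : " << srcMacAddr << endl;
}
Note that getTag<>() raises an error if the tag is missing; if you are not sure a given layer attached it, findTag<>() returns nullptr instead, so you can check before reading.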

Related

Scapy DHCP-Discover results in malformed packet

I am new to networking, and I have found scapy a great way to learn different protocols.
I am trying to send a DHCPDISCOVER packet; however, in Wireshark it comes out as a malformed packet.
Here is the code I use to construct the packet (my MAC address has been excluded and replaced with "[my MAC address]"):
ethernet = Ether(dst='ff:ff:ff:ff:ff:ff', src="[my MAC address]", type=0x800)
ip = IP(src='0.0.0.0', dst='255.255.255.255')
udp = UDP(sport=68, dport=67)
fam, hw = get_if_raw_hwaddr("Wi-Fi")
bootp = BOOTP(chaddr=hw, ciaddr='0.0.0.0', xid=0x01020304, flags=1)
dhcp = DHCP(options=[("message-type", "discover"), "end"])
packet = ethernet / ip / udp / bootp / dhcp
scap.send(packet, iface="Wi-Fi")
This is the Wireshark result for the packet:
14 2.065968 ASUSTekC_a5:fa:7a Broadcast IPX 300 [Malformed Packet]
Thanks!
If you're going to specify layer 2, you need to use the *p variants of the send/receive functions instead:
scap.sendp(packet, iface="Wi-Fi")
I have to admit I haven't gotten around to looking into exactly why this otherwise results in a malformed packet, but I assume send() adds a layer 2 header to the packet for you, resulting in two such layers in the final frame.

Accessing NS2 packet header to get received data in a wireless network

I have an encryption protocol implemented in NS2 (elliptic curve, to be exact). I built a wireless network in Tcl. The .cc code encrypts the message it gets from the Tcl script, stores the encrypted data in a packet header, and sends it to another node. The other node is supposed to retrieve this data and decrypt it back to the original plaintext.
My problem is that when the second node receives the wireless packet and accesses the header to retrieve the encrypted data (so it can perform the decryption), the received data is modified.
For example:
=> If I want to send the message "abc" (the message is converted to integers for encryption), the first node encrypts the message and stores it in the packet header. When I print the encrypted data in the header before sending, I get "508550885090".
=> When the second node receives the packet in the recv function, the header content has changed to "107780505601550881290".
How can I extract the original sent data from the wireless packet header?
Here is the recv part from the code that accesses the header:
void Security_packetAgent::recv(Packet *pkt, Handler *) {
    // Access the IP header of the received packet:
    hdr_ip *hdrip = hdr_ip::access(pkt);
    // Access the security packet header of the received packet:
    hdr_security_packet *hdr = hdr_security_packet::access(pkt);
    // code for storing the data and handling it
}
Here hdr_security_packet is defined in the header file for the encryption protocol, and the received encrypted data is stored in hdr.
Any help is appreciated.
Thanks in advance.

How to beat delays in UDP client

I'm trying to write a UDP client app which receives control packets (52-104 bytes long) from a server, split into datagrams of 1-4 bytes each (why this isn't sent as one larger packet is a mystery to me...).
I created a thread, and in this thread I used a typical recvfrom example from MS. I append the received data from the small buffer to a string to recreate the packet (if the packet gets too long, the string is cleared).
My problem is the latency:
The inbound packets change, but the data in the buffer and the string does not change for a minute or more. I tried using a circular buffer instead of a string, but it had no effect on the latency.
So, what am I doing wrong, and how do I properly receive a UDP packet that arrives split into small pieces?
I don't have the original sender code, so I'm attaching part of my sender emulator. As you can see, the original data string (mSendString) is sliced into four-byte packets and sent to the network. When the data string changes on the sender side, the data on the receiver side does not change in an acceptable time; it changes a few minutes later.
UdpClient mSendClient = new UdpClient();
string mSendString = "head,data,data,data,data,data,data,data,chksumm\n"; // Control string

public static void SendCallback(IAsyncResult ar)
{
    UdpClient u = (UdpClient)ar.AsyncState;
    mMsgSent = true;
}

public void Send()
{
    while (!mThreadStop)
    {
        if (!mSendStop)
        {
            for (int i = 0; i < mSendString.Length; i += 4)
            {
                Byte[] sendBytes = new Byte[4];
                Encoding.ASCII.GetBytes(mSendString, i, 4, sendBytes, 0);
                mSendClient.BeginSend(sendBytes, 1, mEndPoint, new AsyncCallback(SendCallback), mSendClient);
            }
        }
        Thread.Sleep(100);
    }
}
I was wrong about a few points when I asked this question:
First, the terminology: the string was chopped/sliced/divided into four-byte packets, not fragmented.
Second, I thought that too many small UDP packets were the cause of the latency in my app, but when I ran my UDP receive code separately from the rest of the app code, I found that the receive code works without latency.
So it seems to be a threading problem, not a UDP socket problem.

get SOCK_RAW frames with different rate [duplicate]

I'm writing code to send raw Ethernet frames between two Linux boxes. To test this, I just want to get a simple client-send and server-receive working.
I have the client correctly making packets (I can see them using a packet sniffer).
On the server side I initialize the socket like so:
fd = socket(PF_PACKET, SOCK_RAW, htons(MY_ETH_PROTOCOL));
where MY_ETH_PROTOCOL is a 2-byte constant I use as an EtherType so I don't pick up extraneous network traffic.
When I bind this socket to my interface, I must pass the protocol again in the sockaddr_ll struct:
socket_address.sll_protocol = htons(MY_ETH_PROTOCOL);
If I compile and run the code like this, it fails: my server does not see the packet. However, if I change the code like so:
socket_address.sll_protocol = htons(ETH_P_ALL);
then the server can see the packet sent from the client (as well as many other packets), so I have to check each received packet to see whether it matches MY_ETH_PROTOCOL.
But I don't want my server to see traffic that isn't sent with the specified protocol, so this isn't a solution. How do I do this?
I have resolved the issue.
According to http://linuxreviews.org/dictionary/Ethernet/, referring to the 2-byte field following the MAC addresses:
"values of that field between 64 and 1522 indicated the use of the new 802.3 Ethernet format with a length field, while values of 1536 decimal (0600 hexadecimal) and greater indicated the use of the original DIX or Ethernet II frame format with an EtherType sub-protocol identifier."
So I have to make sure my EtherType is >= 0x0600.
According to http://standards.ieee.org/regauth/ethertype/eth.txt use of 0x88b5 and 0x88b6 is "available for public use for prototype and vendor-specific protocol development." So this is what I am going to use as an ethertype. I shouldn't need any further filtering as the kernel should make sure to only pick up ethernet frames with the right destination MAC address and using that protocol.
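As an illustration, a rough, untested sketch of the receive side using that EtherType (0x88b5) could look like this on Linux; the interface name "eth0" is just an example, and the socket needs root or CAP_NET_RAW:
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

#define MY_ETH_PROTOCOL 0x88b5  // >= 0x0600, from the "public use" EtherType range

int main() {
    // Create a raw packet socket that only delivers frames with our EtherType.
    int fd = socket(PF_PACKET, SOCK_RAW, htons(MY_ETH_PROTOCOL));
    if (fd < 0) { perror("socket"); return 1; }

    // Bind it to one interface so we only see frames arriving there.
    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(MY_ETH_PROTOCOL);
    sll.sll_ifindex = if_nametoindex("eth0");   // adjust to your interface
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) { perror("bind"); return 1; }

    // Receive one full frame (Ethernet header included with SOCK_RAW).
    unsigned char frame[ETH_FRAME_LEN];
    ssize_t n = recv(fd, frame, sizeof(frame), 0);
    if (n < 0) perror("recv");
    else printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}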
I've worked around this problem in the past by using a packet filter.
Hand Waving (untested pseudocode)
struct bpf_insn my_filter[] = {
    ...
};

s = socket(PF_PACKET, SOCK_DGRAM, htons(protocol));

struct sock_fprog pf;
pf.filter = my_filter;
pf.len = my_filter_len;
setsockopt(s, SOL_SOCKET, SO_ATTACH_FILTER, &pf, sizeof(pf));

struct sockaddr_ll sll;
memset(&sll, 0, sizeof(sll));
sll.sll_family = PF_PACKET;
sll.sll_protocol = htons(protocol);
sll.sll_ifindex = if_nametoindex("eth0");
bind(s, (struct sockaddr *)&sll, sizeof(sll));
Error checking and getting the packet filter right is left as an exercise for the reader...
Depending on your application, an alternative that may be easier to get working is libpcap.
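For what it's worth, a rough, untested libpcap sketch that filters on the hypothetical EtherType 0x88b5 from the accepted answer might look like this ("eth0" is again just an example interface):
#include <pcap/pcap.h>
#include <cstdio>

int main() {
    char errbuf[PCAP_ERRBUF_SIZE];

    // Open the interface for live capture (promiscuous mode off, 1 s read timeout).
    pcap_t *handle = pcap_open_live("eth0", 65535, 0, 1000, errbuf);
    if (!handle) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Compile and install a filter that only matches our EtherType.
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "ether proto 0x88b5", 1, PCAP_NETMASK_UNKNOWN) < 0 ||
        pcap_setfilter(handle, &prog) < 0) {
        fprintf(stderr, "pcap filter: %s\n", pcap_geterr(handle));
        return 1;
    }

    // Grab one matching frame.
    struct pcap_pkthdr *hdr;
    const u_char *data;
    if (pcap_next_ex(handle, &hdr, &data) == 1)
        printf("captured %u bytes\n", hdr->caplen);

    pcap_close(handle);
    return 0;
}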

How do I receive raw, layer 2 packets in C/C++?

How do I receive layer 2 packets in POSIXy C++? The packets only have src and dst MAC address, type/length, and custom formatted data. They're not TCP or UDP or IP or IGMP or ARP or whatever - they're a home-brewed format given unto me by the Hardware guys.
My socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW) never returns from its recvfrom().
I can send fine, I just can't receive no matter what options I fling at the network stack.
(Platform is VxWorks, but I can translate POSIX or Linux or whatever...)
receive code (current incarnation):
int s;
if ((s = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW)) < 0) {
    printf("socket create error.");
    return -1;
}

struct ifreq _ifr;
strncpy(_ifr.ifr_name, "lltemac0", strlen("lltemac0"));
ioctl(s, IP_SIOCGIFINDEX, &_ifr);

struct sockaddr_ll _sockAttrib;
memset(&_sockAttrib, 0, sizeof(_sockAttrib));
_sockAttrib.sll_len      = sizeof(_sockAttrib);
_sockAttrib.sll_family   = AF_PACKET;
_sockAttrib.sll_protocol = IFT_ETHER;
_sockAttrib.sll_ifindex  = _ifr.ifr_ifindex;
_sockAttrib.sll_hatype   = 0xFFFF;
_sockAttrib.sll_pkttype  = PACKET_HOST;
_sockAttrib.sll_halen    = 6;
_sockAttrib.sll_addr[0]  = 0x00;
_sockAttrib.sll_addr[1]  = 0x02;
_sockAttrib.sll_addr[2]  = 0x03;
_sockAttrib.sll_addr[3]  = 0x12;
_sockAttrib.sll_addr[4]  = 0x34;
_sockAttrib.sll_addr[5]  = 0x56;
int _sockAttribLen = sizeof(_sockAttrib);

char packet[64];
memset(packet, 0, sizeof(packet));

if (recvfrom(s, (char *)packet, sizeof(packet), 0,
             (struct sockaddr *)&_sockAttrib, &_sockAttribLen) < 0)
{
    printf("packet receive error.");
}
// code never reaches here
I think the way to do this is to write your own Network Service that binds to the MUX layer in the VxWorks network stack. This is reasonably well documented in the VxWorks Network Programmer's Guide and something I have done a number of times.
A custom Network Service can be configured to see all layer 2 packets received on a network interface using the MUX_PROTO_SNARF service type, which is how Wind River's own WDB protocol works, or packets with a specific protocol type.
It is also possible to add a socket interface to your custom Network Service by writing a custom socket back-end that sits between the Network Service and the socket API. This is not required if you are happy to do the application processing in the Network Service.
You haven't said which version of VxWorks you are using, but I think the above holds for VxWorks 5.5.x and 6.x.
Have you tried setting the socket protocol to htons(ETH_P_ALL) as prescribed in packet(7)? What you're doing doesn't have much to do with IP (although IPPROTO_RAW may be some wildcard value, dunno)
I think this is going to be a bit tougher problem to solve than you expect. Given that it's not IP at all (or apparently any other protocol anything will recognize), I don't think you'll be able to solve your problem(s) entirely with user-level code. On Linux, I think you'd need to write your own device agnostic interface driver (probably using NAPI). Getting it to work under VxWorks will almost certainly be non-trivial (more like a complete rewrite from the ground-up than what most people would think of as a port).
Have you tried confirming via Wireshark that a packet has actually been sent from the other end?
Also, for debugging, ask your hardware guys if they have a debug pin (you can attach to a logic analyzer) that they can assert when it receives a packet. Just to make sure that the hardware is getting the packets fine.
First, specify the protocol as ETH_P_ALL so that your interface receives all packets. Then set your socket to promiscuous mode, and bind your raw socket to an interface before you perform a receive.
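On Linux (the question is about VxWorks, but following packet(7) as suggested above), a rough, untested sketch of that recipe might look like this; "eth0" is only an example interface, and AF_PACKET sockets need root or CAP_NET_RAW:
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main() {
    // ETH_P_ALL delivers every frame seen on the interface, regardless of EtherType.
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    int ifindex = if_nametoindex("eth0");   // adjust to your interface

    // Put the interface into promiscuous mode for this socket.
    struct packet_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.mr_ifindex = ifindex;
    mreq.mr_type = PACKET_MR_PROMISC;
    if (setsockopt(s, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0)
        perror("setsockopt(PACKET_MR_PROMISC)");

    // Bind the raw socket to the interface before receiving.
    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex = ifindex;
    if (bind(s, (struct sockaddr *)&sll, sizeof(sll)) < 0) { perror("bind"); return 1; }

    // Receive one full frame; bytes 12-13 hold the EtherType/length field.
    unsigned char frame[ETH_FRAME_LEN];
    ssize_t n = recv(s, frame, sizeof(frame), 0);
    if (n >= 0)
        printf("received %zd bytes, EtherType 0x%02x%02x\n", n, frame[12], frame[13]);

    close(s);
    return 0;
}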