How to retrieve eth0 id programmatically - c++

When I run /sbin/ip addr show on my Linux machine, I get output like the following:
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:00:21:02:16:6b brd ff:ff:ff:ff:ff:ff ... and so on
How do I retrieve the interface index 3 shown above for eth0 programmatically in C or C++?

Use the if_nametoindex() function.
#include <net/if.h>   /* if_nametoindex() */
#include <stdio.h>    /* perror() */

unsigned int idx = if_nametoindex("eth0");
if (idx == 0) {
    perror("if_nametoindex");
}

The standard Linux C library facility for enumerating network interfaces is getifaddrs(). It returns a linked list of the existing interfaces, which you can walk until you find the one you are looking for.
Other than that, it is not completely clear why you need that number (I am getting an XY-problem hunch); I'm saying that because the OS distinguishes interfaces by name.
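As a minimal sketch (untested; it assumes you only need the interface names and their kernel indices), you could walk the getifaddrs() list and look each index up with if_nametoindex():
#include <ifaddrs.h>
#include <net/if.h>
#include <stdio.h>

int main(void) {
    struct ifaddrs *ifa_list, *ifa;
    if (getifaddrs(&ifa_list) == -1) {
        perror("getifaddrs");
        return 1;
    }
    /* An interface may appear several times: one entry per address family. */
    for (ifa = ifa_list; ifa != NULL; ifa = ifa->ifa_next) {
        printf("%u: %s\n", if_nametoindex(ifa->ifa_name), ifa->ifa_name);
    }
    freeifaddrs(ifa_list);
    return 0;
}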

Related

DEV_TX_OFFLOAD_VXLAN_TNL_TSO Offload Testing - DPDK

I am working on Mellanox ConnectX-5 cards and using DPDK 20.11 with CentOS 8 (4.18.0-147.5.1.el8_1.x86_64).
I want to test the DEV_TX_OFFLOAD_VXLAN_TNL_TSO offload, and what I want to ask is what the packet structure should be (I am using scapy) that I send to the DPDK application so that this offload comes into action and performs segmentation (since it is VXLAN_TNL_TSO).
I am modifying the dpdk-ip_fragmentation example and have added DEV_TX_OFFLOAD_IP_TNL_TSO inside port_conf:
static struct rte_eth_conf port_conf = {
    .rxmode = {
        .max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
        .split_hdr_size = 0,
        .offloads = (DEV_RX_OFFLOAD_CHECKSUM |
                     DEV_RX_OFFLOAD_SCATTER |
                     DEV_RX_OFFLOAD_JUMBO_FRAME),
    },
    .txmode = {
        .mq_mode = ETH_MQ_TX_NONE,
        .offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
                     DEV_TX_OFFLOAD_VXLAN_TNL_TSO),
    },
};
And I set the following ol_flags:
ol_flags |= (PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TUNNEL_VXLAN );
In short, to test this offload it would be great if someone could help me with two things:
What should the packet structure be that I send (using scapy), such that the offload comes into action?
What settings are required in the DPDK example application? (It is not necessary to use the ip_fragmentation example; any other example would be fine too.)
Note: based on a 3-hour debug session, it was clarified that the title and question as shared are incorrect. The question has therefore been re-edited to reflect the actual requirement: how to enable a DPDK port with TCP TSO offloads for tunnelled VXLAN packets.
Answer to the first question (what the scapy settings should be for sending a packet to the DPDK DUT for TSO and receiving segmented traffic):
Disable all TSO-related offloads on the scapy interface using ethtool -K [scapy interface] rx off tx off tso off gso off gro off lro off.
Set the MTU so you can send larger frames, e.g. 9000.
Make sure to send large frames as payload, but smaller than the interface MTU.
Run tcpdump for ingress traffic with the directional flag: tcpdump -eni [scapy interface] -Q in.
Answer to the second question (required settings in the DPDK example application):
The dpdk-testpmd application can enable HW and SW TSO offloads based on NIC support.
The next best application is tep_termination, but it requires a vhost interface (VM) or DPDK vhost to achieve the same.
Since the requirement is targeted at a generic application like skeleton or l2fwd, one can enable it as follows:
Ensure you use DPDK 20.11 LTS (to get the latest and best support for tunnel TSO).
In the application, check the tx_offload capability with rte_eth_dev_info_get().
Cross-check for HW TSO support for tunnelled (VXLAN) packets.
If the TSO needs to be done for a UDP payload, check for UDP TSO support in HW.
Configure the NIC with no multi-segment, jumbo frames enabled, and a max frame length > 9000 bytes.
Receive the packet via rx_burst and ensure the packet is IPv4, UDP, VXLAN (tunnel) with nb_segs equal to 1.
Modify the mbuf to set l2_len, l3_len and l4_len (see the sketch after the next paragraph).
Mark the packet's ol_flags with PKT_TX_IPV4 | PKT_TX_TUNNEL_VXLAN | PKT_TX_TUNNEL_IP; for a UDP inner payload use PKT_TX_TUNNEL_UDP.
Then set the segment size as the DPDK MTU (1500 by default) - l3_len - l4_len.
This enables a PMD that supports HW TSO offload to update the appropriate descriptor fields so that the given payload is transmitted as multiple packets. For our test case, a 9000-byte packet sent from scapy is converted into 7 * 1500-byte packets; this can be observed in tcpdump.
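A minimal sketch of the per-packet mbuf preparation described above (untested; it assumes an outer Ethernet/IPv4/UDP/VXLAN encapsulation with a TCP inner payload and uses the DPDK 20.11 flag and field names; the mtu parameter is a placeholder):
#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_tcp.h>
#include <rte_vxlan.h>

/* Prepare one received VXLAN-encapsulated TCP packet for HW tunnel TSO. */
static void prepare_vxlan_tcp_tso(struct rte_mbuf *m, uint16_t mtu)
{
    /* outer headers */
    m->outer_l2_len = sizeof(struct rte_ether_hdr);
    m->outer_l3_len = sizeof(struct rte_ipv4_hdr);
    /* l2_len spans from the end of the outer L3 header to the start of the
     * inner L3 header: outer UDP + VXLAN + inner Ethernet */
    m->l2_len = sizeof(struct rte_udp_hdr) + sizeof(struct rte_vxlan_hdr) +
                sizeof(struct rte_ether_hdr);
    /* inner headers */
    m->l3_len = sizeof(struct rte_ipv4_hdr);
    m->l4_len = sizeof(struct rte_tcp_hdr);

    m->ol_flags |= PKT_TX_OUTER_IPV4 | PKT_TX_IPV4 | PKT_TX_IP_CKSUM |
                   PKT_TX_TUNNEL_VXLAN | PKT_TX_TCP_CKSUM | PKT_TX_TCP_SEG;

    /* segment size: MTU minus the inner L3 and L4 header lengths */
    m->tso_segsz = mtu - m->l3_len - m->l4_len;
}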
Note:
The reference code is present in tep_termination and testpmd.
If there is no HW offload, the SW library rte_gso is available.
For HW offload, all PMDs as of today require the mbuf to be a single contiguous non-external buffer, so make sure to create the mbuf pool (mempool) with a sufficient element size for receiving large packets.

No DPDK packet fragmentation supported in Mellanox ConnectX-3?

Hello Stackoverflow Experts,
I am using DPDK on a Mellanox NIC, but am struggling with applying packet fragmentation in a DPDK application.
sungho#c3n24:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27500 Family
[ConnectX-3]
The DPDK applications (l3fwd, ip-fragmentation, ip-assemble) did not recognize the received packet as having an IPv4 header.
At first I crafted my own packets when sending IPv4 headers, so I assumed that I was crafting the packets in a wrong way.
So I used DPDK-pktgen, but the DPDK applications (l3fwd, ip-fragmentation, ip-assemble) still did not recognize the IPv4 header.
As a last resort, I tested dpdk-testpmd and found this in the status info:
********************* Infos for port 1 *********************
MAC address: E4:1D:2D:D9:CB:81
Driver name: net_mlx4
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip on
filter on
qinq(extend) off
No flow type is supported.
Max possible RX queues: 65408
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 65408
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd> show port
According to the DPDK documentation, the supported flow types should be listed in the port info of port 1, but mine shows that no flow type is supported.
The example below shows what should be displayed under flow types:
Supported flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
port
vxlan
geneve
nvgre
So, does my NIC (Mellanox ConnectX-3) not support DPDK IP fragmentation? Or is there additional configuration that needs to be done before trying out packet fragmentation?
-- [EDIT]
So I have checked the packets from DPDK-pktgen and the packets received by the DPDK application.
The packets that I receive are exactly the ones I sent from the application (I get the correct data).
The problem begins at this code:
struct rte_mbuf *pkt
RTE_ETH_IS_IPV4_HDR(pkt->packet_type)
This determines whether the packet is IPv4 or not, and the value of pkt->packet_type is zero for both DPDK-pktgen and the DPDK application. If pkt->packet_type is zero, the DPDK application treats the packet as NOT having an IPv4 header.
So this basic type check fails from the start, and what I believe is that either the DPDK sample is wrong or the NIC cannot support IPv4 for some reason.
The data I receive has a pattern: at the beginning I receive the correct message, but after that the sequence of packets has different data between the MAC address and the data offset.
So what I assume is that they are interpreting the data differently and getting the wrong result.
I am pretty sure any NIC, including the Mellanox ConnectX-3, must support IP fragments.
The flow type you are referring to is for the Flow Director, i.e. mapping specific flows to specific RX queues. Even if your NIC does not support the Flow Director, it does not matter for IP fragmentation.
I guess there is an error in the setup or in the app. You wrote:
the dpdk application did not recognized the received packet as the ipv4 header.
I would look into this more closely. Try to dump those packets with dpdk-pdump, or even by simply dumping the received packets to the console with rte_pktmbuf_dump().
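For example, a minimal sketch of dumping each received packet to stdout inside the RX loop (untested; pkts_burst and nb_rx are assumed to come from rte_eth_rx_burst()):
#include <stdio.h>
#include <rte_mbuf.h>

static void dump_burst(struct rte_mbuf **pkts_burst, uint16_t nb_rx)
{
    for (uint16_t i = 0; i < nb_rx; i++) {
        /* dump the mbuf metadata plus the first 64 bytes of packet data */
        rte_pktmbuf_dump(stdout, pkts_burst[i], 64);
    }
}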
If you still suspect the NIC, the best option would be to temporarily substitute it with another brand or a virtual device, just to confirm it is indeed the NIC.
EDIT:
Have a look at mlx4_ptype_table: for fragmented IPv4 packets it should return packet_type set to RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_FRAG.
Please note this functionality was added in DPDK 17.11.
I suggest you dump pkt->packet_type to the console to make sure it is indeed zero. Also make sure you have the latest libmlx4 installed.
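A quick sketch of printing the packet type in the RX path (untested; the burst variable names are assumptions):
#include <stdio.h>
#include <inttypes.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

static void print_ptypes(struct rte_mbuf **pkts_burst, uint16_t nb_rx)
{
    for (uint16_t i = 0; i < nb_rx; i++) {
        struct rte_mbuf *pkt = pkts_burst[i];
        printf("packet_type=0x%08" PRIx32 " is_ipv4=%d\n",
               pkt->packet_type, RTE_ETH_IS_IPV4_HDR(pkt->packet_type) != 0);
    }
}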

Get CAN bitrate

I want to read the currently configured CAN bitrate of my socketcan socket in C++.
I can see the bitrate with ip -det link show can0:
9: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 16 qdisc pfifo_fast state UP mode DEFAULT group default qlen 10
link/can promiscuity 0
can state ERROR-ACTIVE restart-ms 100
bitrate 1000000 sample-point 0.750
tq 125 prop-seg 2 phase-seg1 3 phase-seg2 2 sjw 1
pcan_usb: tseg1 1..16 tseg2 1..8 sjw 1..4 brp 1..64 brp-inc 1
clock 8000000
The bitrate was set via /etc/network/interfaces, but the user could manually change it.
libsocketcan seems to only support setting the bitrate, but not reading it.
The code of iproute2 that produces the output above uses rtnetlink.
How could I use libnetlink to read the corresponding attribute? Or is there another way of reading the current bitrate?
For now, I went with interpreting the output of a system call to ip -det link show can0 | grep bitrate | awk '{print $2}', which is ugly but works.
Surely there is a more elegant solution?
You can use can_get_bittiming() to get the set bitrate.
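A minimal sketch using libsocketcan (untested; link with -lsocketcan):
#include <stdio.h>
#include <libsocketcan.h>

int main(void) {
    struct can_bittiming bt;
    /* queries the kernel via netlink for the current bit timing of can0 */
    if (can_get_bittiming("can0", &bt) < 0) {
        fprintf(stderr, "can_get_bittiming failed\n");
        return 1;
    }
    printf("bitrate: %u bit/s, sample point: %u\n", bt.bitrate, bt.sample_point);
    return 0;
}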

know a usb device's endpoint

Is there a bash command, a program, or a libusb function (although I did not find one) that tells me which are the OUT and IN endpoints of a USB device?
For example, bNumEndpoints of libusb_interface_descriptor (from the libusb-1.0 library) shows me my USB drive has 3 endpoints, but how can I know their endpoint numbers?
After you have claimed the device, run this (where $ represents the shell prompt):
$ sudo lsusb -v -d 16c0:05df
Where 16c0:05df are your vendor and product ids separated by a colon. (If you don't know these, type lsusb and figure out which device is yours by unplugging and re-running lsusb)
If you get confused use the lsusb man page:
http://linux.die.net/man/8/lsusb
Then once your descriptor output comes up, find the lines labeled bEndpointAddress; the hex code following each one is the address of that specific endpoint.
I finally found the answer in libusb-1.0. It was actually not a function but a struct field:
uint8_t libusb_endpoint_descriptor::bEndpointAddress
The address of the endpoint described by this descriptor. Bits 0:3 are the endpoint number, bits 4:6 are reserved, and bit 7 indicates the direction; see libusb_endpoint_direction.
For each interface of the USB drive, I just had to write these lines to display the available endpoints:
cout << "Number of endpoints: " << (int)interdesc->bNumEndpoints << endl;
for (int k = 0; k < (int)interdesc->bNumEndpoints; k++) {
    epdesc = &interdesc->endpoint[k];
    cout << "Descriptor Type: " << (int)epdesc->bDescriptorType << endl;
    cout << "EP Address: " << (int)epdesc->bEndpointAddress << endl;
}
Where epdesc is the libusb_endpoint_descriptor and interdesc is the libusb_interface_descriptor.
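To tell IN from OUT, you can test bit 7 of bEndpointAddress against the direction mask from libusb. A sketch (untested) using the same epdesc as above:
#include <stdio.h>
#include <libusb-1.0/libusb.h>

static void print_endpoint_direction(const struct libusb_endpoint_descriptor *epdesc)
{
    /* bit 7 set (LIBUSB_ENDPOINT_IN) means device-to-host (IN),
     * clear means host-to-device (OUT) */
    if ((epdesc->bEndpointAddress & LIBUSB_ENDPOINT_DIR_MASK) == LIBUSB_ENDPOINT_IN)
        printf("endpoint 0x%02x: IN\n", epdesc->bEndpointAddress);
    else
        printf("endpoint 0x%02x: OUT\n", epdesc->bEndpointAddress);
}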

How to set the don't fragment (DF) flag on a socket?

I am trying to set the DF (don't fragment flag) for sending packets using UDP.
Looking at Richard Stevens' book Unix Network Programming, Volume 1: The Sockets Networking API, I am unable to find how to set this.
I suspect that I would do it with setsockopt() but can't find it in the table on page 193.
Please suggest how this is done.
You do it with the setsockopt() call, by using the IP_DONTFRAG option:
int val = 1;
setsockopt(sd, IPPROTO_IP, IP_DONTFRAG, &val, sizeof(val));
For Linux, it appears you have to use the IP_MTU_DISCOVER option with the value IP_PMTUDISC_DO (or IP_PMTUDISC_DONT to turn it off):
int val = IP_PMTUDISC_DO;
setsockopt(sd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
I haven't tested this, just looked in the header files and a bit of a web search so you'll need to test it.
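A minimal end-to-end sketch on Linux (untested; it creates a UDP socket and requests path-MTU discovery, which sets DF on outgoing datagrams):
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>   /* IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_* */

int main(void) {
    int sd = socket(AF_INET, SOCK_DGRAM, 0);
    if (sd == -1) {
        perror("socket");
        return 1;
    }
    int val = IP_PMTUDISC_DO;   /* always set DF; use IP_PMTUDISC_DONT to clear it */
    if (setsockopt(sd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val)) == -1) {
        perror("setsockopt(IP_MTU_DISCOVER)");
        return 1;
    }
    /* ... sendto() as usual; datagrams larger than the path MTU now fail with EMSGSIZE ... */
    return 0;
}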
As to whether there's another way the DF flag could be set:
I find nowhere in my program where the "force DF flag" is set, yet tcpdump suggests it is. Is there any other way this could get set?
The documentation for IP_MTU_DISCOVER describes it as follows:
IP_MTU_DISCOVER: Sets or receives the Path MTU Discovery setting for a socket. When enabled, Linux will perform Path MTU Discovery as defined in RFC 1191 on this socket. The don't fragment flag is set on all outgoing datagrams. The system-wide default is controlled by the ip_no_pmtu_disc sysctl for SOCK_STREAM sockets, and disabled on all others. For non SOCK_STREAM sockets it is the user's responsibility to packetize the data in MTU sized chunks and to do the retransmits if necessary. The kernel will reject packets that are bigger than the known path MTU if this flag is set (with EMSGSIZE).
This looks to me like you can set the system-wide default using sysctl:
sysctl ip_no_pmtu_disc
returns "error: "ip_no_pmtu_disc" is an unknown key" on my system but it may be set on yours. Other than that, I'm not aware of anything else (other than setsockopt() as previously mentioned) that can affect the setting.
If you are working in userland with the intention of bypassing the kernel network stack, and are thus building your own packets and headers and handing them to a custom kernel module, there is a better option than setsockopt().
You can actually set the DF flag just like any other field of struct iphdr, defined in linux/ip.h. The 3-bit IP flags are in fact part of the frag_off (Fragment Offset) member of the structure.
When you think about it, it makes sense to group those two things, as the flags are fragmentation related. According to RFC 791, the section describing the IP header structure states that Fragment Offset is 13 bits long and that there are three 1-bit flags. The frag_off member is of type __be16, which can hold 13 + 3 bits.
Long story short, here's a solution:
struct iphdr ip = {0};        /* zero the header before setting any fields */
ip.frag_off |= htons(IP_DF);  /* set the Don't Fragment bit */
Here we are setting exactly the DF bit, using the IP_DF mask designed for that particular purpose.
IP_DF is defined in net/ip.h (kernel headers, of course), whereas struct iphdr is defined in linux/ip.h.
I agree with paxdiablo's answer.
setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val))
where val is one of:
#define IP_PMTUDISC_DONT 0 /* Never send DF frames. */
#define IP_PMTUDISC_WANT 1 /* Use per route hints. */
#define IP_PMTUDISC_DO 2 /* Always DF. */
#define IP_PMTUDISC_PROBE 3 /* Ignore dst pmtu. */
ip_no_pmtu_disc in kernel source:
if (ipv4_config.no_pmtu_disc)
    inet->pmtudisc = IP_PMTUDISC_DONT;
else
    inet->pmtudisc = IP_PMTUDISC_WANT;