I use range <portlist> src|dst ip <SMMI> <ipaddr> as described in the pktgen documentation, and start all to send out packets, but it doesn't work. Am I missing some steps?
DPDK Pktgen does not dynamically create random packets; instead, it can be programmed with the range option for a desired start, minimum, maximum and increment. To generate packets over a range of destination MAC addresses, use the following commands in the pktgen CLI:
enable all range
page range
range all dst mac start 00:00:00:00:00:01
range all dst mac min 00:00:00:00:00:01
range all dst mac max ff:ff:ff:ff:ff:ff
range all dst mac inc 00:00:00:00:00:01
As per the pktgen range options, one can vary the MAC, IP, VLAN, MPLS and GRE fields this way.
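The question asks about an IP range rather than a MAC range; assuming the same start/min/max/inc pattern applies (it matches the range <portlist> src|dst ip <SMMI> <ipaddr> syntax quoted in the question), the equivalent destination-IP commands would look like the following, with placeholder addresses:
range all dst ip start 192.168.1.1
range all dst ip min 192.168.1.1
range all dst ip max 192.168.1.254
range all dst ip inc 0.0.0.1
followed by enable all range (as above) and start all.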
note: tested on pktgen-20.03.0
I couldn't find a way to determine the maximum enclave size that can be created using the SGX SDK. Is there any way of fetching these capabilities? This is especially useful in cloud environments where you can create virtual machines with EPC sections and you don't know the actual usable size of the provisioned EPC.
The only option I found to get the size of the EPC section is to filter dmesg for the output of the SGX driver:
[ 2.451815] intel_sgx: EPC section 0x240000000-0x2bfffffff
If we convert the start and end of the section to decimal and subtract the start from the end, we get a value in bytes, which we can convert to gibibytes or mebibytes.
Here are the calculations for this example and the result in gibibytes:
python3 -c 'print((0x2bfffffff - 0x240000000) / 1024 ** 3)'
1.9999999990686774
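For completeness, here is the same calculation as a small C program. The end address reported by the driver appears to be inclusive (it ends in ...fff), so adding one byte gives exactly 2 GiB; the addresses are copied from the dmesg line above.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* EPC section boundaries from the dmesg output above (end is inclusive). */
    uint64_t start = 0x240000000ULL;
    uint64_t end   = 0x2bfffffffULL;
    uint64_t bytes = end - start + 1;

    printf("%llu bytes = %.2f GiB\n",
           (unsigned long long)bytes, bytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}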
I am trying to build a multi-RX-queue DPDK program, using RSS to split the incoming traffic into RX queues on a single port. A Mellanox ConnectX-5 and DPDK version 19.11 are used for this purpose. It works fine when I use IP-over-Ethernet packets as input. However, when the packets contain IP over MPLS over Ethernet, RSS does not seem to work. As a result, packets belonging to different flows (with different src and dst IPs and ports) over MPLS are all sent to the same RX queue.
My queries are:
Is there any parameter/techniques in DPDK to distribute MPLS packets to multiple RX queues?
Is there any way to strip off MPLS tags (between Eth and IP) in hardware, something like hw_vlan_strip?
My port configuration is:
const struct rte_eth_conf default_port_conf = {
	.rxmode = {
		.hw_vlan_strip = 0,   /* VLAN strip disabled. */
		.header_split = 0,    /* Header split disabled. */
		.hw_ip_checksum = 0,  /* IP checksum offload disabled. */
		.hw_strip_crc = 0,    /* CRC stripping by hardware disabled. */
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,
			.rss_key_len = 0,
			.rss_hf = ETH_RSS_IP,
		},
	},
};
MPLS pop and RSS on MPLS can be activated via RTE_FLOW for NIC PMDs that support them. But the Mellanox mlx5 PMD supports only RTE_FLOW_ACTION_TYPE_OF_POP_VLAN and RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, and the only tunneled formats it supports are MPLSoGRE and MPLSoUDP. Hence popping MPLS in hardware via the PMD is not possible with the mlx5 PMD in DPDK 19.11 LTS.
For any PMD, RSS is reserved for the outer/inner IP addresses along with the TCP/UDP/SCTP port numbers. Hence, to spread packets with different MPLS labels across multiple RX queues, RSS has to be emulated for MPLS. This can be achieved by again using RTE_FLOW, with RTE_FLOW_ITEM_TYPE_MPLS as the pattern item and RTE_FLOW_ACTION_TYPE_QUEUE as the action. Using the mask/range fields, one can set patterns that split the 2^20 possible MPLS label values across the number of RX queues. Hence the recommendation is to use RTE_FLOW_ITEM_TYPE_MPLS with RTE_FLOW_ACTION_TYPE_QUEUE, but note there is no IP/port RSS hashing in that case.
To test this you can either
use DPDK testpmd and set the flow rules, or
make use of an RTE_FLOW code snippet from the rte_flow link (a minimal sketch follows below).
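A minimal sketch of such a rule, assuming the DPDK 19.11 rte_flow API; the port ID, queue index and label pattern/mask are placeholders, and whether a PMD actually accepts an ETH/MPLS pattern depends on its rte_flow support (as noted above for mlx5).

#include <string.h>
#include <stdint.h>
#include <rte_flow.h>

/* Steer packets whose MPLS label matches (label_bits under label_mask) to queue_idx. */
static struct rte_flow *
mpls_label_to_queue(uint16_t port_id, uint16_t queue_idx,
                    const uint8_t label_bits[3], const uint8_t label_mask[3])
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_mpls mpls_spec = { 0 };
    struct rte_flow_item_mpls mpls_mask = { 0 };
    struct rte_flow_action_queue queue = { .index = queue_idx };
    struct rte_flow_error err;

    /* label_tc_s packs label (20 bits), TC (3 bits) and bottom-of-stack (1 bit). */
    memcpy(mpls_spec.label_tc_s, label_bits, sizeof(mpls_spec.label_tc_s));
    memcpy(mpls_mask.label_tc_s, label_mask, sizeof(mpls_mask.label_tc_s));

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* match any Ethernet header */
        { .type = RTE_FLOW_ITEM_TYPE_MPLS, .spec = &mpls_spec, .mask = &mpls_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_validate(port_id, &attr, pattern, actions, &err) != 0)
        return NULL;
    return rte_flow_create(port_id, &attr, pattern, actions, &err);
}

Creating one such rule per RX queue, each with a different label pattern, gives the 2^20 / number-of-queues split described above.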
note: for MPLS pop I highly recommend using ptypes to identify the packet metadata and an RX callback to modify the packet header.
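A minimal sketch of that RX-callback idea, assuming a single MPLS label right after the Ethernet header and an IPv4 payload; a real implementation should check the ptype/bottom-of-stack information instead of hard-coding these assumptions, and the registration call shown in the trailing comment uses placeholder port/queue IDs.

#include <string.h>
#include <rte_common.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

#define ETHER_TYPE_MPLS 0x8847

static uint16_t
pop_mpls_cb(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
            uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
    uint16_t i;

    RTE_SET_USED(port);
    RTE_SET_USED(queue);
    RTE_SET_USED(max_pkts);
    RTE_SET_USED(user_param);

    for (i = 0; i < nb_pkts; i++) {
        struct rte_ether_hdr *eth =
            rte_pktmbuf_mtod(pkts[i], struct rte_ether_hdr *);

        if (eth->ether_type != rte_cpu_to_be_16(ETHER_TYPE_MPLS))
            continue;

        /* Shift the Ethernet header 4 bytes forward over the label,
         * then drop the now-unused leading 4 bytes of the mbuf. */
        memmove((char *)eth + 4, eth, sizeof(*eth));
        eth = (struct rte_ether_hdr *)rte_pktmbuf_adj(pkts[i], 4);
        eth->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
    }
    return nb_pkts;
}

/* Registration, e.g. right after rte_eth_rx_queue_setup():
 * rte_eth_add_rx_callback(port_id, queue_id, pop_mpls_cb, NULL);
 */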
Hello Stackoverflow Experts,
I am using DPDK on a Mellanox NIC, but am struggling to apply packet fragmentation in a DPDK application.
sungho#c3n24:~$ lspci | grep Mellanox
81:00.0 Ethernet controller: Mellanox Technologies MT27500 Family
[ConnectX-3]
The DPDK applications (l3fwd, ip-fragmentation, ip-assemble) did not recognize the received packets as having an IPv4 header.
At first I crafted my own packets with IPv4 headers, so I assumed I was crafting the packets in a wrong way.
So I used DPDK-pktgen, but the DPDK applications (l3fwd, ip-fragmentation, ip-assemble) still did not recognize the IPv4 header.
As a last resort, I tested dpdk-testpmd and found this in the status info:
********************* Infos for port 1 *********************
MAC address: E4:1D:2D:D9:CB:81
Driver name: net_mlx4
Connect to socket: 1
memory allocation on the socket: 1
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip on
filter on
qinq(extend) off
No flow type is supported.
Max possible RX queues: 65408
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Max possible TX queues: 65408
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
testpmd> show port
According to the DPDK documentation, the port info should list the supported flow types, but mine shows that no flow type is supported.
Below is an example of what should be displayed under supported flow types:
Supported flow types:
ipv4-frag
ipv4-tcp
ipv4-udp
ipv4-sctp
ipv4-other
ipv6-frag
ipv6-tcp
ipv6-udp
ipv6-sctp
ipv6-other
l2_payload
port
vxlan
geneve
nvgre
So does my NIC (Mellanox ConnectX-3) not support DPDK IP fragmentation? Or is there additional configuration that needs to be done before trying out packet fragmentation?
-- [EDIT]
So I have compared the packets sent from DPDK-pktgen with the packets received by the DPDK application.
The packets that I receive are the exact ones that I sent from the application (I get the correct data).
The problem begins at this code:
struct rte_mbuf *pkt
RTE_ETH_IS_IPV4_HDR(pkt->packet_type)
This determines whether the packet is IPv4 or not, and the value of pkt->packet_type is zero for packets from both DPDK-pktgen and the DPDK application. If pkt->packet_type is zero, the DPDK application treats the packet as not having an IPv4 header, so this basic type check fails from the start.
So what I believe is that either the DPDK sample is wrong or the NIC cannot support IPv4 for some reason.
The data I receive has a pattern: at the beginning I receive the correct message, but after that the sequence of packets has different data between the MAC address and the data offset.
So what I assume is that they are interpreting the data differently and getting the wrong result.
I am pretty sure any NIC, including the Mellanox ConnectX-3, MUST support IP fragments.
The flow type you are referring to is for the Flow Director, i.e. mapping specific flows to specific RX queues. Even if your NIC does not support the Flow Director, it does not matter for IP fragmentation.
I guess there is an error in the setup or in the app. You wrote:
the DPDK application did not recognize the received packets as having an IPv4 header.
I would look into this more closely. Try to dump those packets with dpdk-pdump, or even by simply dumping the received packets on the console with rte_pktmbuf_dump().
If you still suspect the NIC, the best option would be to temporarily substitute it with another brand or with a virtual device, just to confirm it is indeed the NIC.
EDIT:
Have a look at mlx4_ptype_table: for fragmented IPv4 packets it should return packet_type set to RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_FRAG.
Please note the functionality was added in DPDK 17.11.
I suggest you dump pkt->packet_type to the console to make sure it is indeed zero. Also make sure you have the latest libmlx4 installed.
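A minimal debugging sketch along those lines; the function name, the burst size and the 64-byte dump length are arbitrary choices. It prints pkt->packet_type for each received mbuf and dumps it with rte_pktmbuf_dump() so the reported ptype can be compared with the raw bytes.

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
debug_rx_burst(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *pkts[32];
    uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

    for (uint16_t i = 0; i < nb_rx; i++) {
        printf("packet_type=0x%08" PRIx32 " is_ipv4=%d\n",
               pkts[i]->packet_type,
               RTE_ETH_IS_IPV4_HDR(pkts[i]->packet_type) ? 1 : 0);
        rte_pktmbuf_dump(stdout, pkts[i], 64); /* mbuf metadata + first 64 bytes */
        rte_pktmbuf_free(pkts[i]);
    }
}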
I'm using pcap to capture IP (both v4 and v6) packets on my router. It works just fine, but I've noticed that sometimes the EtherType of an Ethernet frame (LINKTYPE_ETHERNET) or a Linux cooked capture encapsulation (LINKTYPE_LINUX_SLL) does not correctly indicate the version of the IP packet it contains.
I was expecting that if I get a frame whose EtherType is 0x0800 (ETHERTYPE_IP) then it should contain an IPv4 packet with version == 4, and if I get a frame whose EtherType is 0x86DD (ETHERTYPE_IPV6) then it should contain an IPv6 packet with version == 6.
Most of the time the above is true, but sometimes it's not. I get a frame whose EtherType is ETHERTYPE_IP but somehow it contains an IPv6 packet, or a frame whose EtherType is ETHERTYPE_IPV6 but it contains an IPv4 packet.
I seem to have heard of "IPv4 over IPv6" and "IPv6 over IPv4", but I don't know exactly how they work or whether they apply to my problem; otherwise I'm not sure what's causing this inconsistency.
EDIT
I think my actual question is whether such behavior is normal. If so, should I simply ignore the EtherType field and just check the version field in the IP header to determine whether it's IPv4 or IPv6?
According to my (limited) understanding, both IPv4 and IPv6 can appear after an IPv4 (0x0800) EtherType. This is related to transmitting IPv4 packets over IPv6: when the EtherType is 0x0800 and the IP header is version 6, the address in the IP header is an IPv4 address mapped into IPv6.
One example that shows this is the Linux UDP receive code, which checks for EtherType 0x0800 and then converts the IPv4 source address to a v4-mapped IPv6 address using ipv6_addr_set_v4mapped.
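If you do decide to fall back to the version nibble, here is a minimal sketch of that check. It assumes a plain LINKTYPE_ETHERNET capture with no VLAN tag, i.e. a fixed 14-byte link-layer header; for LINKTYPE_LINUX_SLL the header length differs.

#include <stdint.h>
#include <stddef.h>

/* Returns 4 or 6 based on the first nibble of the IP header, or 0 otherwise. */
static int
ip_version_of_frame(const uint8_t *frame, size_t caplen)
{
    const size_t eth_hdr_len = 14;  /* untagged Ethernet header */

    if (caplen <= eth_hdr_len)
        return 0;

    uint8_t version = frame[eth_hdr_len] >> 4;
    return (version == 4 || version == 6) ? version : 0;
}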
Suppose I have a scenario like the one below:
There are about 225 computers with the following ranges of IP addresses and hostnames:
PC-LAB IP ADDRESS RANGE HOSTNAME RANGE
PC-LAB1 10.11.2.01 - 10.11.2.30 ccl1pc01 - ccl1pc30
PC-LAB2 10.11.3.01 - 10.11.3.30 ccl2pc01 - ccl2pc30
PC-LAB3 10.11.4.01 - 10.11.4.45 ccl3pc01 - ccl3pc45
PC-LAB4 10.11.5.01 - 10.11.5.50 ccl4pc01 - ccl4pc50
PC-LAB5 10.13.6.01 - 10.13.6.65 ccl5pc01 – ccl5pc65
I want to write a program (in C/C++) that will take the above IP address and hostname ranges as input and create two separate files, one with the 225 IP addresses and another with the 225 hostnames.
The program will then figure out which of these machines are up and which are down, and create two more files, one containing the hostnames and IP addresses of the systems which are UP and another for those which are DOWN.
E.g.
FILE1.down
Hostname IP address
ccl1pc10 10.8.2.10
ccl5pc25 10.11.5.25
Note: if any Ubuntu command simplifies this work, we can use that in the program for sure!
Look at fping
Run this command, sit back and relax:
fping ccl{1..6}pc{01..60}
This will print a list of:
All hosts that have a DNS name/IP record and are up
All hosts that have a DNS name/IP record but are down
All hosts that are not in the DNS (I guess you can ignore these)
Regards
Check out Nmap. You might need to create a small wrapper to handle the input and output in the format you want, but it should do exactly what you need.
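A minimal sketch of such a wrapper in C, here using the plain ping command rather than Nmap; the input/output file names, the one-probe/one-second timeout, and writing only hostnames (not the resolved IP addresses) are simplifications for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *hosts = fopen("hostnames.txt", "r");
    FILE *up = fopen("FILE1.up", "w");
    FILE *down = fopen("FILE1.down", "w");
    char host[128], cmd[256];

    if (!hosts || !up || !down) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    while (fgets(host, sizeof(host), hosts)) {
        host[strcspn(host, "\r\n")] = '\0';  /* strip trailing newline */
        if (host[0] == '\0')
            continue;

        /* -c 1: one probe, -W 1: one-second timeout; output is discarded. */
        snprintf(cmd, sizeof(cmd), "ping -c 1 -W 1 %s > /dev/null 2>&1", host);
        fprintf(system(cmd) == 0 ? up : down, "%s\n", host);
    }

    fclose(hosts);
    fclose(up);
    fclose(down);
    return EXIT_SUCCESS;
}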
Is this homework, i.e. do you have to do it programmatically in C?
Otherwise there are dozens of existing monitoring frameworks, with several already packaged in Ubuntu: Munin, Nagios, Zabbix, ...