How can the DPDK ACL library handle more than 16 (RTE_ACL_MAX_CATEGORIES) categories? - dpdk

I'm using DPDK 20.08 and I need to copy and distribute packets to many targets, which involves the multi-category parallel classification feature of the DPDK ACL library. But according to the documentation and source code, one can only set up to RTE_ACL_MAX_CATEGORIES categories, which is 16. If I want more than 16, do I have to use at least two ACL contexts and look them up one by one?

Question> I need to copy and distribute packets to many targets
Caution> If the original content of the Ethernet frame is not modified at all, then one can use mbuf->ref_cnt to circumvent the overhead of an mbuf copy. But if the packet content will be different on each interface (such as the dst MAC, VLAN, or IP headers), then one has to duplicate the packet. Also note that if mbuf->ref_cnt is used, one cannot use the NIC's MBUF_FAST_FREE offload (DEV_TX_OFFLOAD_MBUF_FAST_FREE) either.
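To make the caution above concrete, here is a minimal sketch of the fan-out decision, assuming the DPDK 20.08 mbuf API; the mempool arguments and send_to_target() are placeholders for whatever your application provides, not DPDK APIs:

#include <stdint.h>
#include <rte_mbuf.h>

/* hypothetical TX helper provided by the application */
void send_to_target(struct rte_mbuf *m, uint16_t target);

/* Fan one received packet out to n_targets TX paths. */
static void
fan_out(struct rte_mbuf *pkt, struct rte_mempool *clone_pool,
        struct rte_mempool *copy_pool, uint16_t n_targets, int headers_differ)
{
    for (uint16_t i = 0; i < n_targets; i++) {
        struct rte_mbuf *out;

        if (!headers_differ)
            /* Content identical on every target: bump ref_cnt via an
             * indirect mbuf instead of copying the payload. */
            out = rte_pktmbuf_clone(pkt, clone_pool);
        else
            /* dst MAC / VLAN / IP differ per target: deep copy so each
             * frame can be rewritten independently. */
            out = rte_pktmbuf_copy(pkt, copy_pool, 0, UINT32_MAX);

        if (out != NULL)
            send_to_target(out, i);
    }
    rte_pktmbuf_free(pkt); /* drop the original reference */
}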
Solution 1:
Depending upon the SIMD support of the hardware (CPU), one can use RTE_ACL_CLASSIFY_AVX512X32. This allows 32-flow parallel lookups to be processed on CPUs such as Skylake, Cascade Lake, and Ice Lake; refer to Section 53.1.3 for more details.
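As a sketch (note that, as far as I recall, the AVX-512 classify methods ship with DPDK 20.11 and newer, not 20.08), the algorithm can be requested per context; if the CPU or build lacks AVX-512 support, the call fails and you can fall back to the default:

#include <rte_acl.h>

static int
use_avx512_classify(struct rte_acl_ctx *ctx)
{
    /* Ask for the 32-flow AVX-512 method; fall back if unsupported. */
    int ret = rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_AVX512X32);
    if (ret != 0)
        ret = rte_acl_set_ctx_classify(ctx, RTE_ACL_CLASSIFY_DEFAULT);
    return ret;
}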
Solution 2 (If there is no support for SIMD AVX512):
Is it worth running the full 16- or 32-flow lookup, then copying the mbuf to multiple targets and modifying the contents for each interface?
My suggestion is to use HW (NIC or ASIC) or SW (an RX bridge driver) to mirror the ingress traffic and send it to separate RX queues. By using separate CPU threads, smaller ACL tables can be used.
Solution 3 (If there is no support for SIMD AVX512):
One can start by modifying #define RTE_ACL_MAX_CATEGORIES 16 to #define RTE_ACL_MAX_CATEGORIES 32 in lib/acl/rte_acl.h as a starting point of reference. There will be many other areas, such as category masks, which will also require modification and support. Personally I do not recommend this option, as it creates an out-of-tree dependency and requires spending more time debugging the other issues.
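For completeness, if you stay on an unmodified DPDK, the two-context approach mentioned in the question is the practical route: split the rules across two contexts of 16 categories each and classify the same burst twice. A rough sketch, assuming both contexts have already been built with rte_acl_build():

#include <rte_acl.h>

/* results_lo holds categories 0-15, results_hi categories 16-31;
 * each results array must hold nb_pkts * RTE_ACL_MAX_CATEGORIES entries. */
static int
classify_32_categories(struct rte_acl_ctx *ctx_lo, struct rte_acl_ctx *ctx_hi,
                       const uint8_t **data, uint32_t nb_pkts,
                       uint32_t *results_lo, uint32_t *results_hi)
{
    int ret = rte_acl_classify(ctx_lo, data, results_lo, nb_pkts,
                               RTE_ACL_MAX_CATEGORIES);
    if (ret != 0)
        return ret;
    /* Second pass over the same input buffers. */
    return rte_acl_classify(ctx_hi, data, results_hi, nb_pkts,
                            RTE_ACL_MAX_CATEGORIES);
}

The price is roughly doubling the lookup cost per burst, which is why the mirrored-queue split in Solution 2 may scale better.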

Related

Proper DPDK device and port initialization for transmission

While writing a simple DPDK packet generator I noticed some additional initialization steps that are required for reliable and successful packet transmission:
calling rte_eth_link_get() or rte_eth_timesync_enable() after rte_eth_dev_start()
waiting 2 seconds before sending the first packet with rte_eth_tx_burst()
So these steps are necessary when I use the ixgbe DPDK vfio driver with an Intel X553 NIC.
When I'm using the AF_PACKET DPDK driver, it works without those extra steps.
Is this a bug or a feature?
Is there a better way than waiting 2 seconds before the first send?
Why is the wait required with the ixgbe driver? Is this a limitation of that NIC or the involved switch (Mikrotik CRS326 with Marvell chipset)?
Is there a more idiomatic function to call than rte_eth_link_get() in order to complete the device initialization for transmission?
Is there some way to keep the VFIO NIC initialized (while keeping its link up) to avoid re-initializing it over and over again during the usual edit/compile/test cycle? (i.e. to speed up that cycle ...)
Additional information: When I connect the NIC to a mirrored port (which is configured via Mikrotik's mirror-source/mirror-target ethernet switch settings) and the sleep(2) is removed then I see the first packet transmitted to the mirror target but not to the primary destination. Thus, it seems like the sleep is necessary to give the switch some time after the link is up (after the dpdk program start) to completely initialize its forwarding table or something like that?
Waiting just 1 second before the first transmission works less reliably, i.e. the packet reaches the receiver only every other time.
My device/port initialization procedure implements the following setup sequence:
rte_eth_dev_count_avail()
rte_eth_dev_is_valid_port()
rte_eth_dev_info_get()
rte_eth_dev_adjust_nb_rx_tx_desc()
rte_eth_dev_configure(port_id, 0 /* rxrings */, 1 /* txrings */, &port_conf)
rte_eth_tx_queue_setup()
rte_eth_dev_start()
rte_eth_macaddr_get()
rte_eth_link_get() // <-- REQUIRED!
rte_eth_dev_get_mtu()
Without rte_eth_link_get() (or rte_eth_timesync_enable()) the first transmitted packet doesn't even show up on the mirrored port.
The above functions (and rte_eth_tx_burst()) complete successfully with or without rte_eth_link_get()/sleep(2) being present. In particular, the MAC address and MTU read back have the expected values (MTU -> 1500), and rte_eth_tx_burst() returns 1 for a burst of one UDP packet.
The returned link status is: Link up at 1 Gbps FDX Autoneg
The fact that rte_eth_link_get() can be replaced with rte_eth_timesync_enable() probably can be explained by the latter calling ixgbe_start_timecounters() which calls rte_eth_linkstatus_get() which is also called by rte_eth_link_get().
I've checked the DPDK examples and most of them don't call rte_eth_link_get() before sending something. There is also no sleep after device initialization.
I'm using DPDK 20.11.2.
Even more information - to answer the comments:
I'm running this on Fedora 33 (5.13.12-100.fc33.x86_64).
Ethtool reports: firmware-version: 0x80000877
I had called rte_eth_timesync_enable() in order to work with transmit timestamps. However, during debugging I removed it to arrive at a minimal reproducer. At that point I noticed that removing it actually made things worse (i.e. no packet transmitted over the mirror port). I thus investigated what part of that function might make the difference and found rte_eth_link_get(), which has similar side effects.
When switching to AF_PACKET I'm using the stock ixgbe kernel driver, i.e. ixgbe with default settings on a device that is initialized by networkd (dhcp enabled).
My expectation was that when rte_eth_dev_start() terminates that the link is up and the device is ready for transmission.
However, it would be nice, I guess, if one could avoid resetting the device after program restarts. I don't know if DPDK supports this.
Regarding delays: I've just tested the following: rte_eth_link_get() can be omitted if I increase the sleep to 6 seconds, whereas a call to rte_eth_link_get() takes 3.3 s. So yeah, it's probably just helping due to the additional delay.
The difference between the two attempted approaches
In order to use af_packet PMD, you first bind the device in question to the kernel driver. At this point, a kernel network interface is spawned for that device. This interface typically has the link active by default. If not, you typically run ip link set dev <interface> up. When you launch your DPDK application, af_packet driver does not (re-)configure the link. It just unconditionally reports the link to be up on device start (see https://github.com/DPDK/dpdk/blob/main/drivers/net/af_packet/rte_eth_af_packet.c#L266) and vice versa when doing device stop. Link update operation is also no-op in this driver (see https://github.com/DPDK/dpdk/blob/main/drivers/net/af_packet/rte_eth_af_packet.c#L416).
In fact, with af_packet approach, the link is already active at the time you launch the application. Hence no need to await the link.
With the VFIO approach, the device in question has its link down, and it's the responsibility of the corresponding PMD to activate it. Hence the need to test the link status in the application.
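A common pattern (not specific to ixgbe) is to poll the link after rte_eth_dev_start() instead of a fixed sleep(2). A minimal sketch follows, assuming DPDK 20.11 where rte_eth_link_get_nowait() returns int; timeout and poll interval are arbitrary choices, and any switch-side learning delay would still come on top of this:

#include <string.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>

static int
wait_for_link_up(uint16_t port_id, unsigned int timeout_ms)
{
    const unsigned int poll_ms = 100;
    struct rte_eth_link link;

    for (unsigned int waited = 0; waited < timeout_ms; waited += poll_ms) {
        memset(&link, 0, sizeof(link));
        if (rte_eth_link_get_nowait(port_id, &link) < 0)
            return -1;                     /* port gone / query failed */
        if (link.link_status == ETH_LINK_UP)
            return 0;                      /* safe to call rte_eth_tx_burst() */
        rte_delay_ms(poll_ms);
    }
    return -1;                             /* link still down after timeout */
}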
Is it possible to avoid waiting on application restarts?
Long story short, yes. Awaiting link status is not the only problem with application restarts. You effectively re-initialise EAL as a whole when you restart, and that procedure is also eye-wateringly time consuming. In order to cope with that, you should probably check out multi-process support available in DPDK (see https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html).
This requires that you re-implement your application to have its control logic in one process (the so-called primary process) and the Rx/Tx datapath logic in another one (the secondary process). This way, you can keep the first one running all the time and restart the second one whenever you need to change the Rx/Tx logic / re-compile. Restarting the secondary process will re-attach to the existing EAL instance every time. Hence no PMD restart is involved, and there is no more need to wait.
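A bare-bones sketch of that split (only the EAL calls are DPDK; the setup/datapath helpers are placeholders): the same binary is started once normally as the primary, and then repeatedly with --proc-type=secondary during the edit/compile/test cycle:

#include <rte_eal.h>
#include <rte_ethdev.h>

/* hypothetical application helpers */
void setup_port(uint16_t port_id);         /* dev_configure/queue_setup/start */
void run_control_loop(void);               /* keeps the primary alive */
void run_tx_rx_datapath(uint16_t port_id); /* tx/rx burst loop */

int
main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
        setup_port(0);         /* done once; the port and link stay up */
        run_control_loop();
    } else {
        run_tx_rx_datapath(0); /* port already started by the primary */
    }

    rte_eal_cleanup();
    return 0;
}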
Based on the interaction via comments, the real question is summarized as I'm just asking myself if it's possible to keep the link-up between invocations of a DPDK program (when using a vfio device) to avoid dealing with the relatively long wait times before the first transmit comes through. IOW, is it somehow possible to skip the device reset when starting the program for the 2nd time?
The short answer is no for the packet-generator program between restarts, because any physical NIC bound with uio_pci_generic|igb_uio|vfio-pci, which uses the PCIe config space for both PF (IXGBE for X553) and VF (IXGBE_VF for X553), requires a PCIe reset & configuration. But when using the AF_PACKET DPDK PMD (on top of the ixgbe kernel driver), this is a virtual device that does not do any PCIe reset and directly sets dev->data->dev_link.link_status = ETH_LINK_UP; in its eth_dev_start function.
For the second part, "Is the delay for the first TX packets expected?"
[Answer] No; a few factors contribute to the delay in the first packet transmission:
Switch software & firmware (PF only)
Switch port auto-negotiation or fixed speed (PF only)
X553 software and firmware (PF and VF)
Auto-negotiation enabled or disabled (PF and VF)
Link medium: SFP (fiber), DAC (Direct Attached Copper), or RJ-45 (cat5|cat6) connection (PF only)
PF driver version for the NIC (X553 ixgbe) (PF and VF)
As per the Intel driver documentation, software-generated layer-2 frames, like IEEE 802.3x (link flow control), IEEE 802.1Qbb (priority-based flow control), and others of this type (VF only)
Note: since the issue is mentioned for VF ports only (and not PF ports), my assumptions are:
The TX packet uses the SRC MAC address of the VF to avoid the MAC spoof check on the ASIC
All SR-IOV enabled ports are configured for VLAN tagging from the administrative interface on the PF to avoid flooding traffic to the VF
The PF driver is updated to avoid old driver issues such as (VF reset causes the PF link to reset). This can be identified by checking dmesg
Steps to isolate whether the problem is the NIC:
Check whether (X553) PF DPDK has the same delay as VF DPDK (isolate whether it is a PF or VF problem).
Cross-connect 2 NICs (X553) on the same system, then compare Linux vs DPDK link-up events (to check whether it is a NIC problem or a PCIe link issue).
Disable auto-negotiation for DPDK X553 and compare PF vs VF in DPDK.
Steps to isolate whether the problem is the switch:
Disable auto-negotiation on the switch, set a fixed full-duplex 1 Gbps speed, and check the behaviour.
[EDIT-1] I agree with the workaround suggested by #stackinside using the DPDK primary-secondary process concept, where the primary is responsible for link and port bring-up while the secondary is used for the actual RX and TX bursts.

dpdk-19.11, ixgbe PMD gets imissed at up to 8 Mpps (decreased by 3 Mpps), does anyone have a solution or know the reason?

Using DPDK 17.02 with my custom application, I see imissed at 11 Mpps. Using DPDK 19.11 it has reduced to 8 Mpps. Are there compile flags or code changes in the ixgbe PMD that have reduced this?
New updates:
The application architecture is rx cores (3) --> worker cores (16) --> tx cores (2).
When I increased the tx cores to 3, it could reach 10 Mpps, but increasing to 4 did not have any further effect.
imissed stands for packets missed in HW due to too few poll cycles by the RX thread. The lower the value, the better. Hence with DPDK 19.11 the impact is reduced.
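You can read the counter yourself per port to confirm where the drops happen; a small sketch using the generic stats API:

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_imissed(uint16_t port_id)
{
    struct rte_eth_stats stats;

    /* imissed = packets dropped by the HW because RX descriptors were
     * not replenished fast enough by the polling thread. */
    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.imissed);
}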
Reasons why imissed is lower with 19.11 can be:
better compiler flags
better code optimization
assuming you are using static libraries, the code logic might fit better into the instruction cache
Note: you should really run a profiler and use objdump to decipher this.
Reasons for imissed in your application could be the following:
CPU frequency is in power-save and not performance mode (the impact of not disabling C-states in the BIOS).
Additional work or sleeps added in the RX thread loop prevent rx_burst from being invoked frequently enough.
RSS is enabled, but the traffic sent to the DPDK port falls on one queue only. With too many RX queues (such as 16), this increases the delay in picking packets from the relevant queue.
If the RX thread is feeding worker cores based on flow ID, there could be retries in ring-full scenarios, leading to lost rx_burst cycles.
If testpmd, examples/skeleton, or examples/l2fwd does not show imissed, then here are my suggestions to debug your application:
Try sending RX-TX from a single core.
Set the power governor from powersave to performance.
Ensure you are running the core threads on isolated cores.

How to make TensorFlow use more available CPU

How can I fully utilize each of my EC2 cores?
I'm using a c4.4xlarge AWS Ubuntu EC2 instance and TensorFlow to build a large convolutional neural network. nproc says that my EC2 instance has 16 cores. When I run my convnet training code, the top utility says that I'm only using 400% CPU. I was expecting it to use 1600% CPU because of the 16 cores. The AWS EC2 monitoring tab confirms that I'm only using 25% of my CPU capacity. This is a huge network, and on my new Mac Pro it consumes about 600% CPU and takes a few hours to build, so I don't think the reason is because my network is too small.
I believe the line below ultimately determines CPU usage:
sess = tf.InteractiveSession(config=tf.ConfigProto())
I admit I don't fully understand the relationship between threads and cores, but I tried increasing the number of cores. It had the same effect as the line above: still 400% CPU.
NUM_THREADS = 16
sess = tf.InteractiveSession(config=tf.ConfigProto(intra_op_parallelism_threads=NUM_THREADS))
EDIT:
htop shows that I am actually using all 16 of my EC2 cores, but each core is only at about 25%.
top shows that my total CPU % is around 400%, but occasionally it will shoot up to 1300% and then almost immediately go back down to ~400%. This makes me think there could be a deadlock problem
Several things you can try:
Increase the number of threads
You already tried changing the intra_op_parallelism_threads. Depending on your network it can also make sense to increase the inter_op_parallelism_threads. From the doc:
inter_op_parallelism_threads: Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number.
intra_op_parallelism_threads: The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number.
(Side note: The values from the configuration file referenced above are not the actual default values tensorflow uses but just example values. You can see the actual default configuration by manually inspecting the object returned by tf.ConfigProto().)
TensorFlow uses 0 for both of the above options, meaning it tries to choose appropriate values itself. I don't think TensorFlow picked poor values that caused your problem, but you can try out different values for the above options to be on the safe side.
Extract traces to see how well your code parallelizes
Have a look at
tensorflow code optimization strategy
It gives you a trace showing which threads are active over time. In that trace you can see that the actual computation happens on far fewer threads than are available. This could also be the case for your network. I marked potential synchronization points; there you can see that all threads are active for a short moment, which is potentially the reason for the sporadic peaks in CPU utilization that you experience.
Miscellaneous
Make sure you are not running out of memory (htop)
Make sure you are not doing a lot of I/O or something similar

Simulate network conditions with a C/C++ Socket

I'm looking for a way to add network emulation to a socket.
The basic solution would be some way to add bandwidth limitation to a connection.
The ideal solution for me would:
Support advanced network properties (latency, packet-loss)
Open-source
Have a similar API as standard sockets (or wraps around them)
Work on both Windows and Linux
Support IPv4 and IPv6
I saw a few options that work at the system level, or even as a proxy (Dummynet, WANem, neten, etc.), but those won't work for me, because I want to be able to emulate each socket individually (for example, open one socket with modem emulation and one with 3G emulation). Basically, I want to know how these tools do it.
EDIT: I need to embed this functionality in my own product, therefore using an extra box or a third-party tool that needs manual configuration is not acceptable. I want to write code that does the same thing as those tools do, and my question is how to do it.
Epilogue: In hindsight, my question was a bit misleading. Apparently, there is no way to do what I wanted directly on the socket. There are two options:
Add delays to send/receive operations (based on #PaulCoccoli's answer):
By adding a delay before sending and receiving, you can get a very crude network simulation (a constant delay for latency; delaying sends so as not to send more than X bytes per second for bandwidth); a sketch of this idea is shown below.
Paul's answer and comment were great inspiration for me, so I award him the bounty.
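A minimal sketch of option 1 for POSIX sockets (Windows would use Sleep() instead of nanosleep()); the per-second window accounting is deliberately crude, matching the description above, and uses one global budget rather than a per-socket one:

#include <sys/socket.h>
#include <sys/types.h>
#include <time.h>

/* Never push more than bytes_per_sec in any one-second window; a fixed
 * latency could be added with one more nanosleep() before sending. */
static ssize_t
throttled_send(int fd, const void *buf, size_t len, size_t bytes_per_sec)
{
    static size_t sent_this_window;
    static time_t window_start;

    time_t now = time(NULL);
    if (now != window_start) {              /* a new one-second window */
        window_start = now;
        sent_this_window = 0;
    }
    if (sent_this_window + len > bytes_per_sec) {
        struct timespec ts = { .tv_sec = 1, .tv_nsec = 0 };
        nanosleep(&ts, NULL);               /* budget exhausted: wait */
        window_start = time(NULL);
        sent_this_window = 0;
    }
    ssize_t n = send(fd, buf, len, 0);
    if (n > 0)
        sent_this_window += (size_t)n;
    return n;
}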
Add the network simulation logic as a proxy (based on #m0she's and others' answers):
Either send the request through the proxy, or use the proxy to intercept the requests, then add the desired simulation. However, it makes more sense to use a ready-made solution instead of writing your own proxy implementation - from what I've seen, Dummynet is probably the best choice (this is what webpagetest.org does). Other options are in the answers below; I'll also add DonsProxy.
This is the better way to do it, so I'm accepting this answer.
You can compile a proxy into your software that would do that.
It can be some implementation of a full-fledged SOCKS proxy (like this) or, probably better, something simpler that only serves your purpose (and doesn't require prefixing your communication with the destination and other SOCKS overhead).
That code could run as a separate process or a thread within your process.
Adding throttling to a proxy shouldn't be too hard. You can:
delay forwarding of data if it passes some bandwidth limit
add latency by adding a timer before read/write operations on buffers.
If you're working with a connection-based protocol (like TCP), it would be senseless to drop packets, but with a datagram-based protocol (UDP) dropping would also be simple to implement.
The connection creation API would be a bit different from normal posix/winsock (unless you do some macro or other magic), but everything else (send/recv/select/close/etc..) is the same.
If you're building this into your product, then you should implement a layer of abstraction over the sockets API so you can select your own implementation at run time. Alternatively, you can implement wrappers of each socket function and select whether to call your own version or the system's version.
As for adding latency, you could have your implementation of the sockets API spin off a thread. In that thread, have a priority queue ordered by time (i.e. this background thread does a very basic discrete event simulation). Each "packet" you send or receive could be enqueued along with a delivery time. Each delivery time should have some amount of delay added. I would use some kind of random number generator with a Gaussian distribution.
The background thread would also have to simulate the other side of the connection, though it sounds like you may have already implemented that part?
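A rough sketch of that background-thread idea (POSIX threads; for brevity the pending list is scanned on a 1 ms tick rather than kept as a proper time-ordered priority queue, and the jitter is uniform rather than Gaussian):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

struct delayed_pkt {                       /* one queued "packet" */
    struct delayed_pkt *next;
    struct timespec deliver_at;            /* earliest release time */
    int fd;
    size_t len;
    unsigned char data[2048];
};

static struct delayed_pkt *pending;
static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called from the send() wrapper: queue the data with mean_ms +/- jitter_ms
 * of artificial latency instead of sending it immediately. */
static void
enqueue_delayed(int fd, const void *buf, size_t len, long mean_ms, long jitter_ms)
{
    struct delayed_pkt *p = calloc(1, sizeof(*p));
    long d = mean_ms + (jitter_ms ? rand() % (2 * jitter_ms + 1) - jitter_ms : 0);

    if (p == NULL)
        return;
    clock_gettime(CLOCK_MONOTONIC, &p->deliver_at);
    p->deliver_at.tv_sec  += d / 1000;
    p->deliver_at.tv_nsec += (d % 1000) * 1000000L;
    if (p->deliver_at.tv_nsec >= 1000000000L) {
        p->deliver_at.tv_sec++;
        p->deliver_at.tv_nsec -= 1000000000L;
    }
    p->fd = fd;
    p->len = len < sizeof(p->data) ? len : sizeof(p->data);
    memcpy(p->data, buf, p->len);

    pthread_mutex_lock(&pending_lock);
    p->next = pending;
    pending = p;
    pthread_mutex_unlock(&pending_lock);
}

/* Background thread: every millisecond, release packets whose time is up. */
static void *
delivery_thread(void *arg)
{
    (void)arg;
    for (;;) {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);

        pthread_mutex_lock(&pending_lock);
        for (struct delayed_pkt **pp = &pending; *pp != NULL; ) {
            struct delayed_pkt *p = *pp;
            if (p->deliver_at.tv_sec < now.tv_sec ||
                (p->deliver_at.tv_sec == now.tv_sec &&
                 p->deliver_at.tv_nsec <= now.tv_nsec)) {
                *pp = p->next;
                send(p->fd, p->data, p->len, 0);   /* finally hits the wire */
                free(p);
            } else {
                pp = &p->next;
            }
        }
        pthread_mutex_unlock(&pending_lock);
        usleep(1000);
    }
    return NULL;
}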
I only know of Network Link Conditioner for Mac OS X Lion. You have to be a Mac developer to download it, so I cannot put a download link here - only the description from 9to5mac.com: http://9to5mac.com/2011/08/10/new-in-os-x-lion-network-link-conditioner-utility-lets-you-simulate-internet-and-bandwidth-conditions/
This answer might be a partial solution for you when using linux:
Simulate delayed and dropped packets on Linux. It refers to a kernel module called netem, which can simulate all kinds of network problems.
If you want to work with TCP connections, having "packet loss" could be problematic since a lot of error handling (like recovering lost packets) is done in the kernel. Simulating this in a cross-platform way could be hard.
You usually add a network device to your network that throttles bandwidth or latency on a port-by-port basis. You can then achieve what you want just by connecting to the port allocated to the particular type of crappy network you want to test, with no code changes or modifications required.
The easiest way to do this is to add iptables rules to a Linux server acting as a proxy.
If you want it to work without the separate device, try trickle, a software package that throttles your network on your client PC (or a similar tool for Windows).
You may want to check out WANem http://wanem.sourceforge.net/ . WANem is open source and licensed under the GNU General Public License.
WANem allows the application development team to setup a transparent application gateway which can be used to simulate WAN characteristics like Network delay, Packet loss, Packet corruption, Disconnections, Packet re-ordering, Jitter, etc.
I think you could use a tool like Network Simulator. It's free, for Windows.
The only thing to do is to set up your program to use the right ports (and the settings for the network, of course).
If you want a software only solution that you control, you will have to implement it yourself. I know of no such existing package.
While a wrapper layer over a socket may give you the ability to introduce delay, it won't be sufficient to introduce loss or out-of-order delivery. In order to simulate those behaviours, you actually need to intercept the data in transit between the two TCP stacks.
The approach I would recommend is to use a tunneling device (say tunX). Routes should be set so the client believes the way to the server is through tunX. Additional code (perhaps running in a different thread) would promiscuously intercept traffic on tunX, and perform your augmented behavior, before forwarding packets over the true physical interface that will get the traffic to your server. The reverse would happen for packets arriving from the server on the physical interface. Those packets would be intercepted by the client code, behavior augmented, before forwarding through tunX.
However, since you are testing client software, I am unclear as to why you would want to embed this code in your released software, unless the software itself is a WAN simulating client.

Can I write Ethernet based network programs in C++?

I would like to write a program and run it on two machines, and send some data from one machine to another in an Ethernet frame.
Typically, application data sits at layer 7 of the OSI model. Is there anything like a kernel or API restriction that would stop me from writing a program in which I can specify a destination MAC address and have some data sent to that MAC as the Ethernet payload? And could I then write a program to listen for incoming frames, grab the frames from a specified source MAC address, and extract the payload of data from the frame?
(So I don't want any other overhead like IP or TCP/UDP headers, I don't want to go higher than layer 2).
Can this be done in C++, or must all communication happen at the IP layer, and can this be done on Ubuntu? Extra love for pointing to or providing examples! :D
My problem is that I'm obviously new to network programming in C++, and as far as I know, if I want to communicate across a network I have to use a socket() call or similar, which works at the IP layer. So can I write a C++ program that works at OSI layer 2? Are there APIs for this? Does the Linux kernel even allow it?
As you already mentioned sockets, you probably just want to use a raw socket. Maybe this page with C example code is of some help.
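As a starting point, here is a minimal sketch of sending one raw Ethernet frame with an AF_PACKET socket on Linux (compiles as C or C++); 0x88B5 is one of the IEEE "local experimental" EtherTypes, chosen here arbitrarily, and this needs root or CAP_NET_RAW:

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
send_raw_frame(const char *ifname, const unsigned char dst_mac[6],
               const unsigned char src_mac[6],
               const void *payload, size_t payload_len)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0)
        return -1;

    /* Address the frame to the outgoing interface and destination MAC. */
    struct sockaddr_ll addr;
    memset(&addr, 0, sizeof(addr));
    addr.sll_family  = AF_PACKET;
    addr.sll_ifindex = if_nametoindex(ifname);
    addr.sll_halen   = ETH_ALEN;
    memcpy(addr.sll_addr, dst_mac, ETH_ALEN);

    /* Build the Ethernet header + payload ourselves. */
    unsigned char frame[ETH_FRAME_LEN];
    struct ethhdr *eh = (struct ethhdr *)frame;
    memcpy(eh->h_dest,   dst_mac, ETH_ALEN);
    memcpy(eh->h_source, src_mac, ETH_ALEN);
    eh->h_proto = htons(0x88B5);
    if (payload_len > ETH_FRAME_LEN - sizeof(*eh))
        payload_len = ETH_FRAME_LEN - sizeof(*eh);
    memcpy(frame + sizeof(*eh), payload, payload_len);

    ssize_t n = sendto(fd, frame, sizeof(*eh) + payload_len, 0,
                       (struct sockaddr *)&addr, sizeof(addr));
    close(fd);
    return n < 0 ? -1 : 0;
}

The receiving side would bind() a similar socket to the interface and recvfrom() whole frames, filtering on the source MAC and EtherType itself.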
In case you are looking for an idea for a program only using Ethernet while still being useful:
Wake-on-LAN in its original form is quite simple. Note, however, that most current implementations actually send UDP packets (exploiting the fact that the receiver does not parse packet headers etc., but just looks for a string in the packet's payload).
Also the use of raw sockets is usually restricted to privileged users. You might need to either
call your program as root
or have it owned by root and setuid bit set
or set the capability for creating raw socket using setcap CAP_NET_RAW+ep /path/to/your/program-file
The last option gives more fine grained privileges (just raw sockets, not write access to your whole file system etc.) than the other two. It is still less widely known however, since it is "only" supported from kernel 2.6.24 on (which came with Ubuntu 8.04).
Yes, actually Linux has a very nice feature that makes it easy to deal with layer-2 packets. You can use a TAP device, which allows your userspace program to read/write Ethernet traffic through the kernel.
http://www.kernel.org/pub/linux/kernel/people/marcelo/linux-2.4/Documentation/networking/tuntap.txt
http://en.wikipedia.org/wiki/TUN/TAP
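For reference, the classic open-and-attach sequence from that tuntap.txt document looks roughly like this (C, also usable from C++); the interface still has to be brought up separately, e.g. with ip link set tap0 up, and you need root or CAP_NET_ADMIN:

#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Create/attach a TAP interface named `name` and return a file
 * descriptor for reading and writing raw Ethernet frames. */
static int
open_tap(const char *name)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI;     /* TAP = layer 2, no extra header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    return fd;                               /* read()/write() whole frames */
}

After that, each read() returns one complete Ethernet frame written into the TAP interface by the kernel, and each write() injects one frame back, so your program sits directly at layer 2.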