OpenOnload ZeroCopy Receive Queue Always Full - C++

We are using OpenOnload with the zero-copy feature (for multicast operations) to receive and parse multicast data at the network level.
Our code (shown below) runs on many servers without any problem. However, we recently got a new server and installed the same operating system (Ubuntu 18.04) and the same Onload version (7.1.2.141), yet when we run our code on this server the UDP receive queue never empties; it is always full and we are not able to receive and parse the multicast data. I'm also sharing the network configuration below along with our code. Does anyone have any idea about this problem?
Code:
int onload_zc_recv(int fd, onload_zc_recv_args *args);

onload_zc_callback_rc zc_recv_callback(onload_zc_recv_args *args, int flags) {
    return clients[((zc_user_info*)(args->user_ptr))->id]->ZCRecvCB(args, flags);
}

onload_zc_callback_rc ItchClient::ZCRecvCB(onload_zc_recv_args *args, int flags) {
    for (uint32_t i = 0; i < args->msg.msghdr.msg_iovlen; ++i) {
        if (args->msg.iov[i].iov_len > 0) {
            // Our application logic is here
        }
    }
    return ONLOAD_ZC_TERMINATE;
}
onload_zc_recv_args args;
memset(&args.msg, 0, sizeof(args.msg));
args.cb = &zc_recv_callback;
while (!clientStopped.load(std::memory_order_relaxed)) {
    rc = onload_zc_recv(connection.getConnectionSocket(), &args);
}
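For completeness, here is a simplified sketch of how args->user_ptr would be wired up for the callback above, together with basic error logging on the return value. The zc_user_info fields and clientId are illustrative placeholders, not our exact production code.

// Sketch only: everything except the Onload API itself is a placeholder.
zc_user_info info;
info.id = clientId;                      // index into the clients[] array used by zc_recv_callback

onload_zc_recv_args args;
memset(&args, 0, sizeof(args));
args.cb = &zc_recv_callback;
args.user_ptr = &info;                   // the callback casts this back to zc_user_info*

while (!clientStopped.load(std::memory_order_relaxed)) {
    int rc = onload_zc_recv(connection.getConnectionSocket(), &args);
    if (rc < 0)
        fprintf(stderr, "onload_zc_recv failed: %s\n", strerror(-rc));
}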
Network configuration (we are binding to the ens1f0np0 interface):
br-80983172fc5d: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.19.0.1 netmask 255.255.0.0 broadcast 172.19.255.255
ether 02:42:73:86:7c:19 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-a85649ccece2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:e7ff:fed4:6560 prefixlen 64 scopeid 0x20<link>
ether 02:42:e7:d4:65:60 txqueuelen 0 (Ethernet)
RX packets 3492 bytes 894736 (894.7 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3670 bytes 353542 (353.5 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:a7:6c:f9:da txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens10f0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 68:05:ca:f3:9c:a2 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xb8500000-b85fffff
ens10f1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 68:05:ca:f3:9c:a3 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xb8400000-b84fffff
ens10f2: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 68:05:ca:f3:9c:a4 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xb8300000-b83fffff
ens10f3: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 68:05:ca:f3:9c:a5 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xb8200000-b82fffff
ens1f0np0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.46.54.133 netmask 255.255.255.224 broadcast 10.46.54.159
inet6 fe80::20f:53ff:fe9a:ef00 prefixlen 64 scopeid 0x20<link>
ether 00:0f:53:9a:ef:00 txqueuelen 1000 (Ethernet)
RX packets 220301 bytes 50255774 (50.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1792 bytes 236826 (236.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 18
ens1f1np1: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 00:0f:53:9a:ef:01 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device interrupt 19
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 9835 bytes 1610054 (1.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 9835 bytes 1610054 (1.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethe21f54f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::b488:c1ff:fee1:4029 prefixlen 64 scopeid 0x20<link>
ether b6:88:c1:e1:40:29 txqueuelen 0 (Ethernet)
RX packets 3492 bytes 943624 (943.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3685 bytes 354688 (354.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
This is our system log (which shows no errors):
Mar 11 13:20:38 a1hft kernel: [ 1987.912719] oo:HftSrvProd[7]: Using Cloud Onload 7.1.2.141 [5,hft-udp-p7]
Mar 11 13:20:38 a1hft kernel: [ 1987.912720] oo:HftSrvProd[7]: Copyright 2019-2021 Xilinx, 2006-2019 Solarflare Communications, 2002-2005 Level 5 Networks
I've also compared all the configuration with our currently working servers, but wasn't able to find anything different. Do you have any idea what may cause this problem?

I would advise contacting support-nic@xilinx.com - they are a helpful bunch of people. It is best to register, as this ensures a customer ticket is created.

Related

rte_eth_tx_burst cannot send packets out

A DPDK application generates a few ARP request packets and calls rte_eth_tx_burst to send them out. Some of the packets are not received by the peer NIC port (this can be confirmed by using Wireshark to capture the packets on the peer NIC), and dpdk-proc-info shows no error counts. But if the app sleeps for 10 s before calling rte_eth_tx_burst, all the packets get sent.
example code:
main() {
    port_init();
    sleep(10);
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
System setup: Ubuntu 20.04.2 LTS, dpdk-stable-20.11.3, I350 Gigabit Network Connection 1521, igb_uio driver
root@k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --xstats
EAL: No legacy callbacks, legacy socket not created
###### NIC extended statistics for port 0 #########
####################################################
rx_good_packets: 10
tx_good_packets: 32
rx_good_bytes: 1203
tx_good_bytes: 1920
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 0
rx_q0_bytes: 0
rx_q0_errors: 0
tx_q0_packets: 0
tx_q0_bytes: 0
rx_crc_errors: 0
rx_align_errors: 0
rx_symbol_errors: 0
rx_missed_packets: 0
tx_single_collision_packets: 0
tx_multiple_collision_packets: 0
tx_excessive_collision_packets: 0
tx_late_collisions: 0
tx_total_collisions: 0
tx_deferred_packets: 0
tx_no_carrier_sense_packets: 0
rx_carrier_ext_errors: 0
rx_length_errors: 0
rx_xon_packets: 0
tx_xon_packets: 0
rx_xoff_packets: 0
tx_xoff_packets: 0
rx_flow_control_unsupported_packets: 0
rx_size_64_packets: 4
rx_size_65_to_127_packets: 3
rx_size_128_to_255_packets: 3
rx_size_256_to_511_packets: 0
rx_size_512_to_1023_packets: 0
rx_size_1024_to_max_packets: 0
rx_broadcast_packets: 0
rx_multicast_packets: 10
rx_undersize_errors: 0
rx_fragment_errors: 0
rx_oversize_errors: 0
rx_jabber_errors: 0
rx_management_packets: 0
rx_management_dropped: 0
tx_management_packets: 0
rx_total_packets: 10
tx_total_packets: 32
rx_total_bytes: 1203
tx_total_bytes: 1920
tx_size_64_packets: 32
tx_size_65_to_127_packets: 0
tx_size_128_to_255_packets: 0
tx_size_256_to_511_packets: 0
tx_size_512_to_1023_packets: 0
tx_size_1023_to_max_packets: 0
tx_multicast_packets: 0
tx_broadcast_packets: 32
tx_tso_packets: 0
tx_tso_errors: 0
rx_sent_to_host_packets: 0
tx_sent_by_host_packets: 0
rx_code_violation_packets: 0
interrupt_assert_count: 0
####################################################
root@k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --stats
EAL: No legacy callbacks, legacy socket not created
######################## NIC statistics for port 0 ########################
RX-packets: 5 RX-errors: 0 RX-bytes: 785
RX-nombuf: 0
TX-packets: 32 TX-errors: 0 TX-bytes: 1920
Stats reg 0 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 1 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 2 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 3 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 4 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 5 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 6 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 7 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 8 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 9 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 10 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 11 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 12 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 13 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 14 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 15 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 0 TX-packets: 0 TX-bytes: 0
Stats reg 1 TX-packets: 0 TX-bytes: 0
Stats reg 2 TX-packets: 0 TX-bytes: 0
Stats reg 3 TX-packets: 0 TX-bytes: 0
Stats reg 4 TX-packets: 0 TX-bytes: 0
Stats reg 5 TX-packets: 0 TX-bytes: 0
Stats reg 6 TX-packets: 0 TX-bytes: 0
Stats reg 7 TX-packets: 0 TX-bytes: 0
Stats reg 8 TX-packets: 0 TX-bytes: 0
Stats reg 9 TX-packets: 0 TX-bytes: 0
Stats reg 10 TX-packets: 0 TX-bytes: 0
Stats reg 11 TX-packets: 0 TX-bytes: 0
Stats reg 12 TX-packets: 0 TX-bytes: 0
Stats reg 13 TX-packets: 0 TX-bytes: 0
Stats reg 14 TX-packets: 0 TX-bytes: 0
Stats reg 15 TX-packets: 0 TX-bytes: 0
############################################################################
Update:
Thanks for your response. I modified the code:
main() {
    uint32_t port_mask = 0x1;
    port_init();
    check_all_ports_link_status(port_mask);
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
and got the following log output:
Checking link status...............................
done
Port0 Link Up. Speed 1000 Mbps - full-duplex
I think the NIC should have initialized completely by then, but the peer NIC port still misses a lot of packets.
In most working cases the physical NIC enumerates duplex (full/half), speed (1, 10, 25, 40, 50, 100, 200) and auto-negotiation (auto/disable) within 1 second. Anything exceeding 2 or 3 seconds is a sign that the connected machine or switch cannot negotiate duplex, speed or auto-negotiation. Hence the recommendations are:
update the driver and firmware on both sides if the interfaces are NICs
test a different connection cable, as link sense might not be reaching properly
in the case of a hub or switch, try fixing the speed and auto-negotiation
I do not recommend changing from full duplex to half duplex (as it could be a cable or SFI issue).
As a temporary workaround you can use rte_eth_link_get, which is documented as possibly needing to wait up to 9 seconds for the link.
Note: an easy way to test whether it is a cable issue is to run DPDK on both ends and check the time required for the link to come up.
Modified Code Snippet:
main() {
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        do {
            retval = rte_eth_link_get_nowait(portid, &link);
            if (retval < 0) {
                printf("Failed link get (port %u): %s\n",
                       portid, rte_strerror(-retval));
                return retval;
            } else if (link.link_status)
                break;
            printf("Waiting for Link up on port %"PRIu16"\n", portid);
            sleep(1);
        } while (!link.link_status);
    }
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
or
main() {
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        ret = rte_eth_link_get(portid, &link);   /* may wait for the link to come up */
        if (ret < 0) {
            printf("Port %u link get failed: err=%d\n", portid, ret);
            continue;
        }
        gen_pkt(mbuf);
        rte_eth_tx_burst(mbuf);
    }
}
It's no surprise that packets can't be sent until the physical link comes up. That takes some time, and one can use the rte_eth_link_get() API to automate the waiting.

pktgen cannot send packets in an OVS-DPDK scenario

The test setup is: pktgen sends packets to the vhost-user1 port, OVS forwards them to vhost-user2, and testpmd receives them from vhost-user2.
The problem is: pktgen cannot send any packets, and testpmd receives no packets either; I don't know what the problem is.
I need some help, thanks in advance!
OVS: 2.9.0
DPDK: 17.11.6
pktgen: 3.4.4
OVS setup:
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
export PATH=$PATH:/usr/local/share/openvswitch/scripts
rm /usr/local/etc/openvswitch/conf.db
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true other_config:dpdk-lcore=0x2 other_config:dpdk-socket-mem="1024,0"
ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8
ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
ovs-vsctl add-port ovs-br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user3 -- set Interface vhost-user3 type=dpdkvhostuser
sudo ovs-ofctl del-flows ovs-br0
sudo ovs-ofctl add-flow ovs-br0 in_port=2,dl_type=0x800,idle_timeout=0,action=output:3
sudo ovs-ofctl add-flow ovs-br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:2
sudo ovs-ofctl add-flow ovs-br0 in_port=1,dl_type=0x800,idle_timeout=0,action=output:4
sudo ovs-ofctl add-flow ovs-br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:1
run pktgen:
root@k8s:/home/haosp/OVS_DPDK/pktgen-3.4.4# pktgen -c 0xf --master-lcore 0 -n 1 --socket-mem 512,0 --file-prefix pktgen --no-pci \
> --vdev 'net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0' \
> --vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/usr/local/var/run/openvswitch/vhost-user1' \
> -- -P -m "1.[0-1]"
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 4 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
Lua 5.3.4 Copyright (C) 1994-2017 Lua.org, PUC-Rio
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
>>> Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048
=== port to lcore mapping table (# lcores 4) ===
lcore: 0 1 2 3 Total
port 0: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
port 1: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
Total : ( 0: 0) ( 2: 2) ( 0: 0) ( 0: 0)
Display and Timer on lcore 0, rx:tx counts per port/lcore
Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 2048
Lcore:
1, RX-TX
RX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0)
TX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0)
Port :
0, nb_lcores 1, private 0x5635a661d3a0, lcores: 1
1, nb_lcores 1, private 0x5635a661ff70, lcores: 1
** Default Info (net_virtio_user0, if_index:0) **
max_rx_queues : 1, max_tx_queues : 1
max_mac_addrs : 64, max_hash_mac_addrs: 0, max_vmdq_pools: 0
rx_offload_capa: 28, tx_offload_capa : 0, reta_size : 0, flow_type_rss_offloads:0000000000000000
vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
** RX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, Drop Enable : 0, Deferred Start : 0
** TX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, RS Thresh : 0, Deferred Start : 0, TXQ Flags:00000f00
Create: Default RX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Set RX queue stats mapping pid 0, q 0, lcore 1
Create: Default TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Range TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Sequence TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Special TX 0:0 - Memory used (MBUFs 64 x (size 2176 + Hdr 128)) + 192 = 145 KB headroom 128 2176
Port memory used = 147601 KB
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 00:00:00:00:00:05
** Default Info (net_virtio_user1, if_index:0) **
max_rx_queues : 1, max_tx_queues : 1
max_mac_addrs : 64, max_hash_mac_addrs: 0, max_vmdq_pools: 0
rx_offload_capa: 28, tx_offload_capa : 0, reta_size : 0, flow_type_rss_offloads:0000000000000000
vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
** RX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, Drop Enable : 0, Deferred Start : 0
** TX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, RS Thresh : 0, Deferred Start : 0, TXQ Flags:00000f00
Create: Default RX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Set RX queue stats mapping pid 1, q 0, lcore 1
Create: Default TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Range TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Sequence TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Special TX 1:0 - Memory used (MBUFs 64 x (size 2176 + Hdr 128)) + 192 = 145 KB headroom 128 2176
Port memory used = 147601 KB
Initialize Port 1 -- TxQ 1, RxQ 1, Src MAC 00:00:00:00:00:01
Total memory used = 295202 KB
Port 0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed
Port 1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed
=== Display processing on lcore 0
WARNING: Nothing to do on lcore 2: exiting
WARNING: Nothing to do on lcore 3: exiting
RX/TX processing lcore: 1 rx: 2 tx: 2
For RX found 2 port(s) for lcore 1
For TX found 2 port(s) for lcore 1
Pktgen:/>set 0 dst mac 00:00:00:00:00:03
Pktgen:/>set all rate 10
Pktgen:/>set 0 count 10000
Pktgen:/>set 1 count 20000
Pktgen:/>str
| Flags:Port : P--------------:0 P--------------:1 0/0
Link State : P--------------:0 P--------------:1 ----TotalRate----
Pkts/s Max/Rx : <UP-10000-FD> <UP-10000-FD> 0/0
Max/Tx : 0/0 0/0 0/0
MBits/s Rx/Tx : 256/0 256/0 512/0
Broadcast : 0/0 0/0 0/0
Multicast : 0 0
64 Bytes : 0 0
65-127 : 0 0
128-255 : 0 0
256-511 : 0 0
512-1023 : 0 0
1024-1518 : 0 0
Runts/Jumbos : 0 0
Errors Rx/Tx : 0/0 0/0
Total Rx Pkts : 0/0 0/0
Tx Pkts : 0 0
Rx MBs : 256 256
Tx MBs : 0 0
ARP/ICMP Pkts : 0 0
Tx Count/% Rate : 0/0 0/0
Pattern Type : abcd... abcd...
Tx Count/% Rate : 10000 /10% 20000 /10%--------------------
PktSize/Tx Burst : 64 / 64 64 / 64
Src/Dest Port : 1234 / 5678 1234 / 5678--------------------
Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
802.1p CoS : 0 0--------------------
ToS Value: : 0 0
- DSCP value : 0 0--------------------
- IPP value : 0 0
Dst IP Address : 192.168.1.1 192.168.0.1--------------------
Src IP Address : 192.168.0.1/24 192.168.1.1/24
Dst MAC Address : 00:00:00:00:00:03 00:00:00:00:00:05--------------------
Src MAC Address : 00:00:00:00:00:05 00:00:00:00:00:01
VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0--------------------
Pktgen:/> str
-- Pktgen Ver: 3.4.4 (DPDK 17.11.6) Powered by DPDK --------------------------
Pktgen:/>
run testpmd:
./testpmd -c 0xf -n 1 --socket-mem 512,0 --file-prefix testpmd --no-pci \
--vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' \
--vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/usr/local/var/run/openvswitch/vhost-user3' \
-- -i -a --burst=64 --txd=2048 --rxd=2048 --coremask=0x4
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 1 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:02
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:03
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=64
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=2048 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=2048 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port info
Bad arguments
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
OVS dump-flow show:
root@k8s:/home/haosp# ovs-ofctl dump-flows ovs-br0
cookie=0x0, duration=77519.972s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user1" actions=output:"vhost-user2"
cookie=0x0, duration=77519.965s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user2" actions=output:"vhost-user1"
cookie=0x0, duration=77519.959s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user0" actions=output:"vhost-user3"
cookie=0x0, duration=77518.955s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user3" actions=output:"vhost-user0"
ovs-ofctl dump-ports ovs-br0 show:
root@k8s:/home/haosp# ovs-ofctl dump-ports ovs-br0
OFPST_PORT reply (xid=0x2): 5 ports
port "vhost-user3": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=6, errs=?, coll=?
port "vhost-user1": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port "vhost-user0": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port "vhost-user2": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port LOCAL: rx pkts=50, bytes=3732, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
ovs-ofctl show ovs-br0
root@k8s:/home/haosp# ovs-ofctl show ovs-br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ca4f2b8e6b4b
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(vhost-user0): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
2(vhost-user1): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
3(vhost-user2): addr:00:00:00:00:00:00
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
4(vhost-user3): addr:00:00:00:00:00:00
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(ovs-br0): addr:ca:4f:2b:8e:6b:4b
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
ovs-vsctl show
root@k8s:/home/haosp# ovs-vsctl show
635ba448-91a0-4c8c-b6ca-4b9513064d7f
Bridge "ovs-br0"
Port "vhost-user2"
Interface "vhost-user2"
type: dpdkvhostuser
Port "ovs-br0"
Interface "ovs-br0"
type: internal
Port "vhost-user0"
Interface "vhost-user0"
type: dpdkvhostuser
Port "vhost-user3"
Interface "vhost-user3"
type: dpdkvhostuser
Port "vhost-user1"
Interface "vhost-user1"
type: dpdkvhostuser
It seems that pktgen cannot send packets, and the OVS statistics show no packets received either.
I have no idea yet; it has me confused.
If the goal is to transfer packets between pktgen and testpmd connected by OVS-DPDK, one has to use a net_vhost and virtio_user pair.
DPDK Pktgen (net_vhost) <==> OVS-DPDK port-1 (virtio_user) {Rule to forward} OVS-DPDK port-2 (virtio_user) <==> DPDK Pktgen (net_vhost)
In the current setup, you will have to make the following changes (see the example command lines after this list):
start DPDK pktgen by changing --vdev net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0 to --vdev net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user0
start DPDK testpmd by changing --vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' to --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user2'
then start OVS-DPDK with --vdev=virtio_user0,path=/usr/local/var/run/openvswitch/vhost-user0 and --vdev=virtio_user1,path=/usr/local/var/run/openvswitch/vhost-user2
add rules to allow port-to-port forwarding between pktgen and testpmd
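Put together, the command lines described above would look roughly like this (socket paths and options copied from the question; the second net_vhost device on each side is an assumption for the two-port setup, so treat this purely as a sketch, not a verified configuration):

pktgen -c 0xf --master-lcore 0 -n 1 --socket-mem 512,0 --file-prefix pktgen --no-pci \
    --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user0' \
    --vdev 'net_vhost1,iface=/usr/local/var/run/openvswitch/vhost-user1' \
    -- -P -m "1.[0-1]"

./testpmd -c 0xf -n 1 --socket-mem 512,0 --file-prefix testpmd --no-pci \
    --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user2' \
    --vdev 'net_vhost1,iface=/usr/local/var/run/openvswitch/vhost-user3' \
    -- -i -a --burst=64 --txd=2048 --rxd=2048 --coremask=0x4

With net_vhost on the pktgen/testpmd side, OVS is then the one opening the virtio_user side of each socket, as described in the point above.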
Note:
please update the command lines accordingly for multiple ports.
a screenshot with the pktgen and l2fwd setup was shared below

Thermal printer status with libusb on Linux ARM

I know there are a lot of questions about printer status...
I have a Citizen CT-S310 II, and I have managed all the code for writing characters over USB without problems using libusb_bulk_transfer (Text, Bold, Center, CR, CUT_PAPER, etc.):
#define ENDPOINT_OUT 0x02
#define ENDPOINT_IN 0x81
struct libusb_device_handle *_handle;
[detach kernel driver...]
[claim interface...]
[etc ...]
r = libusb_bulk_transfer(device_handle, ENDPOINT_OUT, Mydata, out_len, &transferred, 1000);
Now I need to receive data from the printer to check its status. My first idea was to send the POS command from the doc with the same "bulk_transfer":
1D (hex) 72 (hex) n
n => 1 (send the paper sensor status)
and retrieve the value with a "bulk_transfer" on the ENDPOINT_IN endpoint; the doc says there are 8 bytes to receive:
bit 0,1 => paper found by paper near-end sensor 00H
bit 0,1 => paper not found by paper near-end sensor 03H
bit 1,2 => paper found by paper-end sensor 00H
bit 1,2 => paper not found by paper-end sensor 0CH
[...]
so two "bulk_transfer", one for send command status (ENDPOINT_OUT) and one for receive the result (ENDPOINT_IN), but i have allways an USB ERROR ("bulk_transfer" in read = -1)
Maybe USB doesn't work like this? So my second idea was to use the USB printer-class request via "control_transfer":
int r = 0;
int out_len = 1;
unsigned char* _udata = NULL;
uint8_t bmRequestType = LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_CLASS | LIBUSB_RECIPIENT_INTERFACE;
uint8_t bRequest = LIBUSB_REQUEST_GET_STATUS;
uint16_t wValue = 0;  // the value field for the setup packet (?)
uint16_t wIndex = 0;  // the printer interface number (the index field for the setup packet)
r = libusb_control_transfer(device_handle, bmRequestType, bRequest, wValue, wIndex, _udata, out_len, USB_TIMEOUT);
I don't know exactly how to fill in all the parameters; I know it depends on my device, but the libusb doc is not very explicit.
What exactly is "wValue"?
What exactly is "wIndex"? The interface number?
The LIBUSB_ENDPOINT_IN parameter is 0x80 by default, but my printer uses 0x81; do I have to change this default endpoint?
Bus 001 Device 004: ID 1d90:2060
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x1d90
idProduct 0x2060
bcdDevice 0.02
iManufacturer 1 CITIZEN
iProduct 2 Thermal Printer
iSerial 3 00000000
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 32
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 0mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 7 Printer
bInterfaceSubClass 1 Printer
bInterfaceProtocol 2 Bidirectional
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Device Status: 0x0001
Self Powered
The response of "control_transfer" in my case is always 0, with paper or without. How do I send a correct "control_transfer" to request the status of my printer?
Any help solving this problem is welcome!
Finally resolved!
The value of LIBUSB_REQUEST_GET_STATUS is 0x00, but for a printer the status request is 0x01.
To check the status of the printer with libusb-1.0:
unsigned char _udata = 0;  // receives the 1-byte port status
int out_len = 1;
uint8_t bmRequestType = LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_CLASS | LIBUSB_RECIPIENT_INTERFACE;
uint8_t bRequest = 0x01;   // printer-class GET_PORT_STATUS, not LIBUSB_REQUEST_GET_STATUS
uint16_t wValue = 0;
uint16_t wIndex = 0;       // interface number
r = libusb_control_transfer(device_handle, bmRequestType, bRequest, wValue, wIndex, &_udata, out_len, USB_TIMEOUT);

How can I get the interface list in C++ on Linux?

I want to get the list of network interfaces in C++ on Linux because my program needs to bring a link up or down, but I don't know how to get hold of the interface in order to modify it.
The system call you are looking for is getifaddrs. There is a brief example program on the man page.
Within each ifaddrs entry there is a bit-flag field, ifa_flags, with which you can test whether the interface is up or down.
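A minimal sketch of that approach, just listing each interface name and whether it is currently up (note that the same interface can appear several times, once per address family):

#include <ifaddrs.h>
#include <net/if.h>
#include <cstdio>

int main() {
    struct ifaddrs *ifap = nullptr;
    if (getifaddrs(&ifap) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (struct ifaddrs *ifa = ifap; ifa != nullptr; ifa = ifa->ifa_next)
        std::printf("%s is %s\n", ifa->ifa_name,
                    (ifa->ifa_flags & IFF_UP) ? "up" : "down");
    freeifaddrs(ifap);
    return 0;
}

To actually bring a link up or down you would then open a socket and toggle IFF_UP on that interface with the SIOCGIFFLAGS/SIOCSIFFLAGS ioctls.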
Read from /proc/net/dev
Full description in man proc:
/proc/net/dev
The dev pseudo-file contains network device status information. This gives the number of received and sent packets,
the number of errors and collisions and other basic statistics. These are used by the ifconfig(8) program to report
device status. The format is:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
lo: 2776770 11307 0 0 0 0 0 0 2776770 11307 0 0 0 0 0 0
eth0: 1215645 2751 0 0 0 0 0 0 1782404 4324 0 0 0 427 0 0
ppp0: 1622270 5552 1 0 0 0 0 0 354130 5669 0 0 0 0 0 0
tap0: 7714 81 0 0 0 0 0 0 7714 81 0 0 0 0 0 0
This is a text file and each interface is a line... it should be easy to parse.
For example (no error checking... just printing the interface names)
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("/proc/net/dev");
    int c = 0;
    std::string line;
    for (; std::getline(in, line); c++) {
        if (c < 2) continue;                               // skip header
        std::size_t start = line.find_first_not_of(" ");   // skip leading spaces
        std::size_t end = line.find_first_of(":", start);  // look for the ":"
        std::cout << line.substr(start, end - start) << std::endl;
    }
}

Sending a control signal?

I am working on making a client and a server on Windows in C++.
The design I decided on is:
the server just tells the client what it has to render, depending on the client's messages.
Tiles, objects, pictures, lines, rectangles, circles... can be drawn on the client side,
and the client just receives a command from the server and renders it.
For example, the server sends a message like "draw picture.png srcX srcY width height destX destY"
(picture.png already exists on the client side),
then the client just parses the string and does what I want.
But I want to send a control signal as well, like the one below:
"for(y = 0; y<30; y++){ for(x = 0; x<30; x++) { draw tile.png 0 0 16 16 x*16 y*16 }}"
I realize that sending a function is not a good idea
(thanks for all the replies).
Is there any good idea to solve this problem?
sending
"draw tile.png 0 0 16 16 0 0"
"draw tile.png 0 0 16 16 0 16"
"draw tile.png 0 0 16 16 0 32"
"draw tile.png 0 0 16 16 0 48"
"draw tile.png 0 0 16 16 0 64"
"draw tile.png 0 0 16 16 0 96"
"draw tile.png 0 0 16 16 0 112"
"draw tile.png 0 0 16 16 0 128"
"draw tile.png 0 0 16 16 0 132"
...doing that 30*30 times would be overkill.
I am searching for an efficient way to tell the client what it has to draw.
Drawing is not limited to tiles and objects; it may also include commands to draw effect pictures at any coordinates.
Thanks for reading.
Well, if you don't want to send and execute scripts, try to find a simple solution. For example, the message format can be defined as:
draw file name srcX srcY width height destX destY [srcX srcY width height destX destY ...]
Some optimization may be applied; for example, you can pass only the difference between the previous and the current image:
draw tile.png 0 0 16 16 0 0 (5 16)
That means: increase member #5 of the previous packet by 16.
I know this is quite primitive, but it is simple to implement.
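A rough sketch of how the client could parse such a batched message (the format is the one proposed above; the struct and function names are just placeholders, and the file name is assumed to contain no spaces):

#include <sstream>
#include <string>
#include <vector>

struct DrawCmd { int srcX, srcY, w, h, destX, destY; };

// Parses: draw <file> srcX srcY w h destX destY [srcX srcY w h destX destY ...]
bool ParseDrawMessage(const std::string& msg, std::string& file, std::vector<DrawCmd>& cmds) {
    std::istringstream in(msg);
    std::string verb;
    if (!(in >> verb >> file) || verb != "draw")
        return false;
    DrawCmd c;
    while (in >> c.srcX >> c.srcY >> c.w >> c.h >> c.destX >> c.destY)
        cmds.push_back(c);                 // one entry per repeated coordinate group
    return !cmds.empty();
}

With this, the 30*30 tile grid becomes a single "draw tile.png ..." message carrying 900 coordinate groups instead of 900 separate messages.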
Define a language, implement the parser on the client, and send the commands as pure text.
You'll have to implement the reverse parser on the server to send optimized messages too.