Status thermal printer with libusb on linux ARM - c++

I know there are a lot of questions about printer status...
I have a Citizen CT-S310 II and have managed all the code for writing characters over USB without problems using libusb_bulk_transfer (text, bold, center, CR, CUT_PAPER, etc.):
#define ENDPOINT_OUT 0x02
#define ENDPOINT_IN 0x81
struct libusb_device_handle *_handle;
[detach kernel driver...]
[claim interface...]
[etc ...]
r = libusb_bulk_transfer(device_handle, ENDPOINT_OUT, Mydata, out_len, &transferred, 1000);
Now I need to receive data from the printer to check its status. My first idea was to send the ESC/POS command from the documentation with the same bulk_transfer:
1D 72 n (hex)
n = 1 (send the paper sensor status)
and retrieve the value with a bulk_transfer on the ENDPOINT_IN endpoint. The documentation says there are 8 bits to receive:
bits 0,1 => paper found by paper near-end sensor: 00H / paper not found: 03H
bits 2,3 => paper found by paper-end sensor: 00H / paper not found: 0CH
[...]
So, two bulk_transfers: one to send the status command (ENDPOINT_OUT) and one to receive the result (ENDPOINT_IN). But the read bulk_transfer always fails with a USB error (returns -1).
Maybe USB doesn't work like this? My second idea was to use the control_transfer function as defined for the USB printer class:
int r = 0;
int out_len = 1;
unsigned char* _udata = NULL;
uint8_t bmRequestType = LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_CLASS | LIBUSB_RECIPIENT_INTERFACE;
uint8_t bRequest = LIBUSB_REQUEST_GET_STATUS;
uint16_t wValue = 0; // the value field for the setup packet (?????)
uint16_t wIndex = 0; // N° interface printer (the index field for the setup packet)
r = libusb_control_transfer(device_handle, bmRequestType,bRequest,wValue, wIndex,_udata,out_len, USB_TIMEOUT);
I don't know exactly how to fill in all the parameters; I know they depend on my device, but the libusb doc is not very explicit.
What exactly is wValue?
What exactly is wIndex? The interface number?
LIBUSB_ENDPOINT_IN is 0x80 by default, but my printer uses 0x81. Do I have to change this default endpoint?
Bus 001 Device 004: ID 1d90:2060
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 2.00
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 64
idVendor 0x1d90
idProduct 0x2060
bcdDevice 0.02
iManufacturer 1 CITIZEN
iProduct 2 Thermal Printer
iSerial 3 00000000
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 32
bNumInterfaces 1
bConfigurationValue 1
iConfiguration 0
bmAttributes 0xc0
Self Powered
MaxPower 0mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 2
bInterfaceClass 7 Printer
bInterfaceSubClass 1 Printer
bInterfaceProtocol 2 Bidirectional
iInterface 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x02 EP 2 OUT
bmAttributes 2
Transfer Type Bulk
Synch Type None
Usage Type Data
wMaxPacketSize 0x0040 1x 64 bytes
bInterval 0
Device Status: 0x0001
Self Powered
The response of control_transfer in my case is always 0, with or without paper. How do I send a correct control_transfer to request the status of my printer?
Any help solving this problem is welcome!

Finally resolved!
The value of LIBUSB_REQUEST_GET_STATUS is 0x00, but for a printer the class-specific status request is 0x01.
To check the status of a printer with libusb-1.0:
unsigned char _udata = 0; // 1-byte status buffer
uint8_t bmRequestType = LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_CLASS | LIBUSB_RECIPIENT_INTERFACE;
uint8_t bRequest = 0x01; // not LIBUSB_REQUEST_GET_STATUS (0x00)
uint16_t wValue = 0;
uint16_t wIndex = 0; // interface number
r = libusb_control_transfer(device_handle, bmRequestType, bRequest, wValue, wIndex, &_udata, 1, USB_TIMEOUT);

Related

rte_eth_tx_burst can not send packet out

A DPDK application generates a few ARP request packets and calls rte_eth_tx_burst to send them out. Some packets are not received by the peer NIC port (confirmed by capturing on the peer NIC with Wireshark), yet dpdk-proc-info shows no error count. But if the app sleeps for 10 s before calling rte_eth_tx_burst, all the packets are sent.
example codes:
main() {
    port_init();
    sleep(10);
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
System setup: Ubuntu 20.04.2 LTS, dpdk-stable-20.11.3, I350 Gigabit Network Connection 1521, igb_uio driver
root@k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --xstats
EAL: No legacy callbacks, legacy socket not created
###### NIC extended statistics for port 0 #########
####################################################
rx_good_packets: 10
tx_good_packets: 32
rx_good_bytes: 1203
tx_good_bytes: 1920
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 0
rx_q0_bytes: 0
rx_q0_errors: 0
tx_q0_packets: 0
tx_q0_bytes: 0
rx_crc_errors: 0
rx_align_errors: 0
rx_symbol_errors: 0
rx_missed_packets: 0
tx_single_collision_packets: 0
tx_multiple_collision_packets: 0
tx_excessive_collision_packets: 0
tx_late_collisions: 0
tx_total_collisions: 0
tx_deferred_packets: 0
tx_no_carrier_sense_packets: 0
rx_carrier_ext_errors: 0
rx_length_errors: 0
rx_xon_packets: 0
tx_xon_packets: 0
rx_xoff_packets: 0
tx_xoff_packets: 0
rx_flow_control_unsupported_packets: 0
rx_size_64_packets: 4
rx_size_65_to_127_packets: 3
rx_size_128_to_255_packets: 3
rx_size_256_to_511_packets: 0
rx_size_512_to_1023_packets: 0
rx_size_1024_to_max_packets: 0
rx_broadcast_packets: 0
rx_multicast_packets: 10
rx_undersize_errors: 0
rx_fragment_errors: 0
rx_oversize_errors: 0
rx_jabber_errors: 0
rx_management_packets: 0
rx_management_dropped: 0
tx_management_packets: 0
rx_total_packets: 10
tx_total_packets: 32
rx_total_bytes: 1203
tx_total_bytes: 1920
tx_size_64_packets: 32
tx_size_65_to_127_packets: 0
tx_size_128_to_255_packets: 0
tx_size_256_to_511_packets: 0
tx_size_512_to_1023_packets: 0
tx_size_1023_to_max_packets: 0
tx_multicast_packets: 0
tx_broadcast_packets: 32
tx_tso_packets: 0
tx_tso_errors: 0
rx_sent_to_host_packets: 0
tx_sent_by_host_packets: 0
rx_code_violation_packets: 0
interrupt_assert_count: 0
####################################################
root@k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --stats
EAL: No legacy callbacks, legacy socket not created
######################## NIC statistics for port 0 ########################
RX-packets: 5 RX-errors: 0 RX-bytes: 785
RX-nombuf: 0
TX-packets: 32 TX-errors: 0 TX-bytes: 1920
Stats reg 0 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 1 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 2 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 3 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 4 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 5 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 6 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 7 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 8 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 9 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 10 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 11 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 12 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 13 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 14 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 15 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 0 TX-packets: 0 TX-bytes: 0
Stats reg 1 TX-packets: 0 TX-bytes: 0
Stats reg 2 TX-packets: 0 TX-bytes: 0
Stats reg 3 TX-packets: 0 TX-bytes: 0
Stats reg 4 TX-packets: 0 TX-bytes: 0
Stats reg 5 TX-packets: 0 TX-bytes: 0
Stats reg 6 TX-packets: 0 TX-bytes: 0
Stats reg 7 TX-packets: 0 TX-bytes: 0
Stats reg 8 TX-packets: 0 TX-bytes: 0
Stats reg 9 TX-packets: 0 TX-bytes: 0
Stats reg 10 TX-packets: 0 TX-bytes: 0
Stats reg 11 TX-packets: 0 TX-bytes: 0
Stats reg 12 TX-packets: 0 TX-bytes: 0
Stats reg 13 TX-packets: 0 TX-bytes: 0
Stats reg 14 TX-packets: 0 TX-bytes: 0
Stats reg 15 TX-packets: 0 TX-bytes: 0
############################################################################
update:
Thanks for your response, I modified the codes:
main(){
uint32_t port_mask = 0x1;
port_init();
check_all_ports_link_status(port_mask);
gen_pkt(mbuf);
rte_eth_tx_burst(mbuf);
}
got the print logs:
Checking link status...............................
done
Port0 Link Up. Speed 1000 Mbps - full-duplex
I think the NIC should have initialized completely, but the peer NIC port still misses a lot of packets.
In most working cases the physical NIC negotiates duplex (full/half), speed (1, 10, 25, 40, 50, 100, 200), and auto-negotiation within 1 second. Anything exceeding 2 or 3 seconds is a sign that the connected machine or switch is unable to negotiate duplex, speed, or auto-negotiation. Hence the recommendations are:
update the driver and firmware on both sides if the interfaces are NICs
test a different connection cable, as link sense might not be propagating properly
in the case of a hub or switch, try fixing speed and auto-negotiation.
I do not recommend changing from full duplex to half duplex (as it could be a cable or SFI issue).
As a temporary workaround you can use rte_eth_link_get, which is documented as possibly needing to wait up to 9 seconds.
Note: an easy way to test whether it is a cable issue is to run DPDK on both ends and check the time required for the link to come up.
Modified Code Snippet:
main() {
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        do {
            retval = rte_eth_link_get_nowait(portid, &link);
            if (retval < 0) {
                printf("Failed link get (port %u): %s\n",
                       portid, rte_strerror(-retval));
                return retval;
            } else if (link.link_status)
                break;
            printf("Waiting for Link up on port %"PRIu16"\n", portid);
            sleep(1);
        } while (!link.link_status);
    }
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
or
main() {
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        ret = rte_eth_link_get(portid, &link);
        if (ret < 0) {
            printf("Port %u link get failed: err=%d\n", portid, ret);
            continue;
        }
    }
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
It's no surprise that packets can't be sent until the physical link comes up. That takes some time, and one can use the rte_eth_link_get() API to automate the waiting.

How to decode AAC network audio stream using ffmpeg

I implemented a network video player (like VLC) using ffmpeg, but it cannot decode the AAC audio stream received from an IP camera. It can decode other audio streams like G.711, G.726, etc. I set the codec ID to AV_CODEC_ID_AAC and set the channels and sample rate of the AVCodecContext, but avcodec_decode_audio4 fails with an error code of INVALID_DATA. I checked previously asked questions and tried to add extradata to the AVCodecContext using the media-format-specific parameter config=1408: I set the extradata to the two bytes 0x14 and 0x08, but that also did not work. I appreciate any help, thanks.
IP CAMERA SDP:
a=rtpmap:96 mpeg4-generic/16000/1
a=fmtp:96 streamtype=5; profile-level-id=5; mode=AAC-hbr; config=1408; SizeLength=13; IndexLength=3; IndexDeltaLength=3
AVCodec* decoder = avcodec_find_decoder((::AVCodecID)id);//set as AV_CODEC_ID_AAC
AVCodecContext* decoderContext = avcodec_alloc_context3(decoder);
char* test = (char*)System::Runtime::InteropServices::Marshal::StringToHGlobalAnsi("1408").ToPointer();
unsigned int length;
uint8_t* extradata = parseGeneralConfigStr(test, length);//it is set as 0x14 and 0x08
decoderContext->channels = number_of_channels; //set as 1
decoderContext->sample_rate = sample_rate; //set as 16000
decoderContext->channel_layout = AV_CH_LAYOUT_MONO;
decoderContext->codec_type = AVMEDIA_TYPE_AUDIO;
decoderContext->extradata = (uint8_t*)av_malloc(AV_INPUT_BUFFER_PADDING_SIZE + length);
memcpy(decoderContext->extradata, extradata, length);
memset(decoderContext->extradata+ length, 0, AV_INPUT_BUFFER_PADDING_SIZE);
Did you check the data for INVALID_DATA?
You can check it according to the RFC:
RFC 3640 (3.2 RTP Payload Structure)
An AAC payload can be separated like below:
AU-Header | Size Info | ADTS | Data
Example payload 00 10 0c 00 ff f1 60 40 30 01 7c 01 30 35 ac
According to the configs that you shared:
AU-size (SizeLength=13)
AU-Index / AU-Index-delta (IndexLength=3/IndexDeltaLength=3)
The length in bits of the AU-Header is 13 (SizeLength) + 3 (IndexLength/IndexDeltaLength) = 16.
AU-Header: 00 10
You should use the AU-size (SizeLength) value for the Size Info.
AU-size: indicates the size in octets of the associated Access Unit in the Access Unit Data Section in the same RTP packet.
The first 13 (SizeLength) bits, 0000000000010, equal 2, so read 2 octets for the size info.
Size Info 0c 00
ADTS ff f1 60 40 30 01 7c
ADTS Parser
ID MPEG-4
MPEG Layer 0
CRC checksum absent 1
Profile Low Complexity profile (AAC LC)
Sampling frequency 16000
Private bit 0
Channel configuration 1
Original/copy 0
Home 0
Copyright identification bit 0
Copyright identification start 0
AAC frame length 384
ADTS buffer fullness 95
No raw data blocks in frame 0
Data starts with 01 30 35 ac.

pktgen cannot send packet in ovs dpdk scenario

The test setup is: pktgen sends packets to the vhost-user1 port, OVS forwards them to vhost-user2, and testpmd receives them from vhost-user2.
The problem is: pktgen cannot send any packets, and testpmd receives no packets either. I don't know what the problem is.
I need some help, thanks in advance!
OVS: 2.9.0
DPDK: 17.11.6
pktgen: 3.4.4
OVS setup:
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
export PATH=$PATH:/usr/local/share/openvswitch/scripts
rm /usr/local/etc/openvswitch/conf.db
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true other_config:dpdk-lcore=0x2 other_config:dpdk-socket-mem="1024,0"
ovs-vswitchd unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x8
ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
ovs-vsctl add-port ovs-br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
ovs-vsctl add-port ovs-br0 vhost-user3 -- set Interface vhost-user3 type=dpdkvhostuser
sudo ovs-ofctl del-flows ovs-br0
sudo ovs-ofctl add-flow ovs-br0 in_port=2,dl_type=0x800,idle_timeout=0,action=output:3
sudo ovs-ofctl add-flow ovs-br0 in_port=3,dl_type=0x800,idle_timeout=0,action=output:2
sudo ovs-ofctl add-flow ovs-br0 in_port=1,dl_type=0x800,idle_timeout=0,action=output:4
sudo ovs-ofctl add-flow ovs-br0 in_port=4,dl_type=0x800,idle_timeout=0,action=output:1
run pktgen:
root@k8s:/home/haosp/OVS_DPDK/pktgen-3.4.4# pktgen -c 0xf --master-lcore 0 -n 1 --socket-mem 512,0 --file-prefix pktgen --no-pci \
> --vdev 'net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0' \
> --vdev 'net_virtio_user1,mac=00:00:00:00:00:01,path=/usr/local/var/run/openvswitch/vhost-user1' \
> -- -P -m "1.[0-1]"
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 4 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
Lua 5.3.4 Copyright (C) 1994-2017 Lua.org, PUC-Rio
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
>>> Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048
=== port to lcore mapping table (# lcores 4) ===
lcore: 0 1 2 3 Total
port 0: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
port 1: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
Total : ( 0: 0) ( 2: 2) ( 0: 0) ( 0: 0)
Display and Timer on lcore 0, rx:tx counts per port/lcore
Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 2048
Lcore:
1, RX-TX
RX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0)
TX_cnt( 2): (pid= 0:qid= 0) (pid= 1:qid= 0)
Port :
0, nb_lcores 1, private 0x5635a661d3a0, lcores: 1
1, nb_lcores 1, private 0x5635a661ff70, lcores: 1
** Default Info (net_virtio_user0, if_index:0) **
max_rx_queues : 1, max_tx_queues : 1
max_mac_addrs : 64, max_hash_mac_addrs: 0, max_vmdq_pools: 0
rx_offload_capa: 28, tx_offload_capa : 0, reta_size : 0, flow_type_rss_offloads:0000000000000000
vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
** RX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, Drop Enable : 0, Deferred Start : 0
** TX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, RS Thresh : 0, Deferred Start : 0, TXQ Flags:00000f00
Create: Default RX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Set RX queue stats mapping pid 0, q 0, lcore 1
Create: Default TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Range TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Sequence TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Special TX 0:0 - Memory used (MBUFs 64 x (size 2176 + Hdr 128)) + 192 = 145 KB headroom 128 2176
Port memory used = 147601 KB
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 00:00:00:00:00:05
** Default Info (net_virtio_user1, if_index:0) **
max_rx_queues : 1, max_tx_queues : 1
max_mac_addrs : 64, max_hash_mac_addrs: 0, max_vmdq_pools: 0
rx_offload_capa: 28, tx_offload_capa : 0, reta_size : 0, flow_type_rss_offloads:0000000000000000
vmdq_queue_base: 0, vmdq_queue_num : 0, vmdq_pool_base: 0
** RX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, Drop Enable : 0, Deferred Start : 0
** TX Conf **
pthresh : 0, hthresh : 0, wthresh : 0
Free Thresh : 0, RS Thresh : 0, Deferred Start : 0, TXQ Flags:00000f00
Create: Default RX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Set RX queue stats mapping pid 1, q 0, lcore 1
Create: Default TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Range TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Sequence TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB headroom 128 2176
Create: Special TX 1:0 - Memory used (MBUFs 64 x (size 2176 + Hdr 128)) + 192 = 145 KB headroom 128 2176
Port memory used = 147601 KB
Initialize Port 1 -- TxQ 1, RxQ 1, Src MAC 00:00:00:00:00:01
Total memory used = 295202 KB
Port 0: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed
Port 1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
!ERROR!: Could not read enough random data for PRNG seed
=== Display processing on lcore 0
WARNING: Nothing to do on lcore 2: exiting
WARNING: Nothing to do on lcore 3: exiting
RX/TX processing lcore: 1 rx: 2 tx: 2
For RX found 2 port(s) for lcore 1
For TX found 2 port(s) for lcore 1
Pktgen:/>set 0 dst mac 00:00:00:00:00:03
Pktgen:/>set all rate 10
Pktgen:/>set 0 count 10000
Pktgen:/>set 1 count 20000
Pktgen:/>str
| Flags:Port : P--------------:0 P--------------:1 0/0
Link State : P--------------:0 P--------------:1 ----TotalRate----
Pkts/s Max/Rx : <UP-10000-FD> <UP-10000-FD> 0/0
Max/Tx : 0/0 0/0 0/0
MBits/s Rx/Tx : 256/0 256/0 512/0
Broadcast : 0/0 0/0 0/0
Multicast : 0 0
64 Bytes : 0 0
65-127 : 0 0
128-255 : 0 0
256-511 : 0 0
512-1023 : 0 0
1024-1518 : 0 0
Runts/Jumbos : 0 0
Errors Rx/Tx : 0/0 0/0
Total Rx Pkts : 0/0 0/0
Tx Pkts : 0 0
Rx MBs : 256 256
Tx MBs : 0 0
ARP/ICMP Pkts : 0 0
Tx Count/% Rate : 0/0 0/0
Pattern Type : abcd... abcd...
Tx Count/% Rate : 10000 /10% 20000 /10%--------------------
PktSize/Tx Burst : 64 / 64 64 / 64
Src/Dest Port : 1234 / 5678 1234 / 5678--------------------
Pkt Type:VLAN ID : IPv4 / TCP:0001 IPv4 / TCP:0001
802.1p CoS : 0 0--------------------
ToS Value: : 0 0
- DSCP value : 0 0--------------------
- IPP value : 0 0
Dst IP Address : 192.168.1.1 192.168.0.1--------------------
Src IP Address : 192.168.0.1/24 192.168.1.1/24
Dst MAC Address : 00:00:00:00:00:03 00:00:00:00:00:05--------------------
Src MAC Address : 00:00:00:00:00:05 00:00:00:00:00:01
VendID/PCI Addr : 0000:0000/00:00.0 0000:0000/00:00.0--------------------
Pktgen:/> str
-- Pktgen Ver: 3.4.4 (DPDK 17.11.6) Powered by DPDK --------------------------
Pktgen:/>
run testpmd:
./testpmd -c 0xf -n 1 --socket-mem 512,0 --file-prefix testpmd --no-pci \
--vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' \
--vdev 'net_virtio_user3,mac=00:00:00:00:00:03,path=/usr/local/var/run/openvswitch/vhost-user3' \
-- -i -a --burst=64 --txd=2048 --rxd=2048 --coremask=0x4
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 1 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
update_memory_region(): Too many memory regions
Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:02
Configuring Port 1 (socket 0)
Port 1: 00:00:00:00:00:03
Checking link statuses...
Done
Start automatic packet forwarding
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=64
nb forwarding cores=1 - nb forwarding ports=2
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=2048 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=2048 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=2048 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
testpmd> show port info
Bad arguments
testpmd> show port stats all
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-missed: 0 RX-bytes: 0
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
Throughput (since last show)
Rx-pps: 0
Tx-pps: 0
############################################################################
OVS dump-flow show:
root@k8s:/home/haosp# ovs-ofctl dump-flows ovs-br0
cookie=0x0, duration=77519.972s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user1" actions=output:"vhost-user2"
cookie=0x0, duration=77519.965s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user2" actions=output:"vhost-user1"
cookie=0x0, duration=77519.959s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user0" actions=output:"vhost-user3"
cookie=0x0, duration=77518.955s, table=0, n_packets=0, n_bytes=0, ip,in_port="vhost-user3" actions=output:"vhost-user0"
ovs-ofctl dump-ports ovs-br0 show:
root@k8s:/home/haosp# ovs-ofctl dump-ports ovs-br0
OFPST_PORT reply (xid=0x2): 5 ports
port "vhost-user3": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=6, errs=?, coll=?
port "vhost-user1": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port "vhost-user0": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port "vhost-user2": rx pkts=0, bytes=0, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=0, bytes=0, drop=8, errs=?, coll=?
port LOCAL: rx pkts=50, bytes=3732, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
ovs-ofctl show ovs-br0
root@k8s:/home/haosp# ovs-ofctl show ovs-br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000ca4f2b8e6b4b
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(vhost-user0): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
2(vhost-user1): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
3(vhost-user2): addr:00:00:00:00:00:00
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
4(vhost-user3): addr:00:00:00:00:00:00
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(ovs-br0): addr:ca:4f:2b:8e:6b:4b
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
ovs-vsctl show
root@k8s:/home/haosp# ovs-vsctl show
635ba448-91a0-4c8c-b6ca-4b9513064d7f
Bridge "ovs-br0"
Port "vhost-user2"
Interface "vhost-user2"
type: dpdkvhostuser
Port "ovs-br0"
Interface "ovs-br0"
type: internal
Port "vhost-user0"
Interface "vhost-user0"
type: dpdkvhostuser
Port "vhost-user3"
Interface "vhost-user3"
type: dpdkvhostuser
Port "vhost-user1"
Interface "vhost-user1"
type: dpdkvhostuser
It seems that pktgen cannot send packets, and the OVS statistics show no packets received either.
I have no idea yet; it confuses me.
If the goal is to have packet transfer between pktgen and testpmd connected by OVS-DPDK, one has to use a net_vhost and virtio_user pair.
DPDK pktgen (net_vhost) <==> OVS-DPDK port 1 (virtio_user) {rule to forward} OVS-DPDK port 2 (virtio_user) <==> DPDK testpmd (net_vhost)
In the current setup, you will have to make the following changes:
start DPDK pktgen, changing --vdev net_virtio_user0,mac=00:00:00:00:00:05,path=/usr/local/var/run/openvswitch/vhost-user0 to --vdev net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user0
start DPDK testpmd, changing --vdev 'net_virtio_user2,mac=00:00:00:00:00:02,path=/usr/local/var/run/openvswitch/vhost-user2' to --vdev 'net_vhost0,iface=/usr/local/var/run/openvswitch/vhost-user2'
then start OVS-DPDK with --vdev=virtio_user0,path=/usr/local/var/run/openvswitch/vhost-user0 and --vdev=virtio_user1,path=/usr/local/var/run/openvswitch/vhost-user2
add rules to allow port-to-port forwarding between pktgen and testpmd
Note:
please update the command line for multiple ports.
A screenshot of the pktgen and l2fwd setup is shared below.

How does flow classify example in DPDK works?

I want to test the flow classify example in DPDK 20.08, and I'm trying to modify the given ACL rules file to match all TCP packets.
#file format:
#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority
#
2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0
9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1
9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2
9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3
6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4
6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5
6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6
6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7
6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8
#error rules
#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9
Should I add the rule 0.0.0.0/0 0.0.0.0/0 0 : 0x0000 0 : 0x0000 6/0xff 0? I tried, but there are still no packets matching.
ps:
This is the file I'm using.
#file format:
#src_ip/masklen dst_ip/masklen src_port : mask dst_port : mask proto/mask priority
#
2.2.2.3/24 2.2.2.7/24 32 : 0xffff 33 : 0xffff 17/0xff 0
9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 17/0xff 1
9.9.9.3/24 9.9.9.7/24 32 : 0xffff 33 : 0xffff 6/0xff 2
9.9.8.3/24 9.9.8.7/24 32 : 0xffff 33 : 0xffff 6/0xff 3
6.7.8.9/24 2.3.4.5/24 32 : 0x0000 33 : 0x0000 132/0xff 4
6.7.8.9/32 192.168.0.36/32 10 : 0xffff 11 : 0xffff 6/0xfe 5
6.7.8.9/24 192.168.0.36/24 10 : 0xffff 11 : 0xffff 6/0xfe 6
6.7.8.9/16 192.168.0.36/16 10 : 0xffff 11 : 0xffff 6/0xfe 7
#6.7.8.9/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 8
0.0.0.0/0 0.0.0.0/0 0 : 0x0000 0 : 0x0000 6/0xff 8
#error rules
#9.8.7.6/8 192.168.0.36/8 10 : 0xffff 11 : 0xffff 6/0xfe 9
I ran again, and it goes like:
rule [0] query failed ret [-22]
rule [1] query failed ret [-22]
rule [2] query failed ret [-22]
rule [3] query failed ret [-22]
rule [4] query failed ret [-22]
rule [5] query failed ret [-22]
rule [6] query failed ret [-22]
rule [7] query failed ret [-22]
rule[8] count=2
proto = 6
Segmentation fault
I don't know what is causing the segmentation fault.
The command is sudo ./build/flow_classify -l 101 --log-level=pmd,8 -- --rule_ipv4="./ipv4_rules_file_pass.txt" > ~/flow_classify_log and I didn't change the source code.
I'm using a two-port 82599 NIC. I'm putting the log file down below; it contains the output before the segmentation fault:
flow_classify log
Sometimes it processes normally in the first iteration, and sometimes it doesn't.
update 1-3:
I modified the code to stop packet forwarding and free every packet received, to check whether the forwarding procedure was causing the problem.
in main function:
/* if (nb_ports < 2 || (nb_ports & 1))
rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n"); */
if (nb_ports < 1)
rte_exit(EXIT_FAILURE, "Error: no port avaliable\n");
in lcore_main function:
//in lcore_main function
/* Send burst of TX packets, to second port of pair. */
/* const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0,
bufs, nb_rx); */
const uint16_t nb_tx = 0;
/* Free any unsent packets. */
if (unlikely(nb_tx < nb_rx)) {
uint16_t buf;
for (buf = nb_tx; buf < nb_rx; buf++)
rte_pktmbuf_free(bufs[buf]);
}
And this is the new log, but I don't think there is any difference. I'm using only one of the two ports on a single 82599ES NIC. Maybe the faulty classification rule I added is causing the problem, because it ran okay with the default rule settings.
Flow classify requires:
a minimum of 2 ports, always an even number;
flow entries populated in the valid format.
Entry in rules:
2.2.2.3/0 2.2.2.7/0 32 : 0xffff 33 : 0xffff 17/0xff 0
2.2.2.3/0 2.2.2.7/0 32 : 0x0 33 : 0x0 6/0xff 1
Packet send: ipv4-TCP
Log from flow-classify:
rule [0] query failed ret [-22] -- UDP lookup failed
rule[1] count=32 -- TCP lookup success
proto = 6
Virtual NIC: ./build/flow_classify -c 8 --no-pci --vdev=net_tap0 --vdev=net_tap1 -- --rule_ipv4="ipv4_rules_file.txt"
Physical NIC: ./build/flow_classify -c 8 -- --rule_ipv4="ipv4_rules_file.txt"
Hence the issue faced at your end is because of:
incorrect configuration
using only 1 port

How do I get the partition offset in OS X with C/C++?

I want to create my own volume ID using the drive serial + partition offset + partition size, but I need to know how to get the partition information on OS X. I have unsuccessfully tried the following:
int fd;
if ((fd = open("/dev/disk0s1", O_RDONLY|O_NONBLOCK)) >= 0) {
    struct hd_geometry geom;
    if (ioctl(fd, 0x0301, &geom) == 0) { // 0x0301 is HDIO_GETGEO
        printf("Index = %u\n", geom.start);
    }
    close(fd);
}
But even if that were to succeed, it is a flawed solution, since as noted: hd_geometry.start is an unsigned long and "will not contain a meaningful value for disks over 219 Gb in size." Furthermore, I believe that it requires administrative rights, which is also bad. Is there any other way of doing this?
Okay, last point first. Requiring admin rights is necessary because you are trying to read a raw disk; you could, for example, potentially seek to a block where a private crypto key is written and read it as an unprivileged user, and then where would we be?
Second, /dev/disk0s1 is just a partition, and it's the block-device version of it. You need to read the character-device version of the whole disk, which would be /dev/rdisk0.
Third, HDIO_GETGEO is a Linux kernel ioctl (especially consider its 0x0301 value); you are not going to get far on Darwin with this. Have a look at <sys/disk.h> for the related disk ioctls. I think DKIOCGETFEATURES / DKIOCGETPHYSICALBLOCKSIZE etc. should get you going.
If you have trouble with these concepts I HIGHLY recommend doing this development in a virtual machine that you can clobber because you do NOT want to accidentally use an IOCTL which will screw up your disks.
Addendum (possibly the answer)
GUID Partition Table
So you are working on Mac OS X / Darwin; We'll assume GUID Partition Table
LBA == Logical Block Addressing ... ; 1 block = 512 bytes
LBA 0 - Master Boot Record (also contains the legacy partition table)
LBA 1 - GUID Partition Table (standard for OS X)
LBA 2 - first 4 entries
LBA 3 - 33 - next 124 entries making for a total of 128 entries
LBA 34 - Partition 1
You can grab the second block and start tracing the information
Have a read at http://en.wikipedia.org/wiki/GUID_Partition_Table
It's quite well defined. GPT uses little-endian byte order for integer values (see the examples at the bottom of the Wikipedia page).
Suggestion for testing
Make a copy so that you are not screwing with the actual disks:
dd if=/dev/rdisk0 of=fakedisk count=33
This will create a copy of the first 33 blocks of the disk.
Use fakedisk to test your program out.
MBR
In case your disk uses MBR, apply the same concepts as for GPT;
http://en.wikipedia.org/wiki/Master_Boot_Record
has an excellent description of the sectors.
Using dtruss fdisk -d /dev/rdisk0 dump to get hints
Tracing the fdisk dump with dtruss shows that fdisk uses the approach described above.
dtruss fdisk -d /dev/rdisk0
SYSCALL(args) = return
open("/dev/dtracehelper\0", 0x2, 0x7FFF5CFDD5C0) = 3 0
__sysctl(0x7FFF5CFDD084, 0x2, 0x7FFF5CFDD070) = 0 0
bsdthread_register(0x7FFF8BCA41D4, 0x7FFF8BCA41C4, 0x2000) = 0 0
[[ .... content edited ... ]]
open("/dev/rdisk0\0", 0x0, 0x7FFF5CFDDD7A) = 3 0
fstat64(0x3, 0x7FFF5CFDDA10, 0x0) = 0 0
fstat64(0x3, 0x7FFF5CFDDAC8, 0x0) = 0 0
ioctl(0x3, 0x40086419, 0x7FFF5CFDDB60) = 0 0
ioctl(0x3, 0x40046418, 0x7FFF5CFDDB5C) = 0 0
close(0x3) = 0 0
open("/dev/rdisk0\0", 0x0, 0x0) = 3 0
fstat64(0x3, 0x7FFF5CFDDAD0, 0x0) = 0 0
open("/dev/rdisk0\0", 0x0, 0x0) = 4 0
fstat64(0x4, 0x7FFF5CFDDA80, 0x0) = 0 0
lseek(0x4, 0x0, 0x0) = 0 0
issetugid(0x102C22000, 0x3, 0x7FFF5CFDDC00) = 0 0
geteuid(0x102C22000, 0x3, 0x0) = 0 0
[[ tracing data suppressed ]]
read(0x4, "\0", 0x200) = 512 0
close(0x4) = 0 0
getrlimit(0x1008, 0x7FFF5CFDCFA8, 0x7FFF8BD0D470) = 0 0
fstat64(0x1, 0x7FFF5CFDCEF8, 0x7FFF5CFDCFBC) = 0 0
ioctl(0x1, 0x4004667A, 0x7FFF5CFDCF94) = 0 0
write_nocancel(0x1, "1,625142447,0xEE,-,1023,254,63,1023,254,63\n\0", 0x2B) = 43 0
write_nocancel(0x1, "0,0,0x00,-,0,0,0,0,0,0\n\0", 0x17) = 23 0
write_nocancel(0x1, "0,0,0x00,-,0,0,0,0,0,0\n\0", 0x17) = 23 0
write_nocancel(0x1, "0,0,0x00,-,0,0,0,0,0,0\n\0", 0x17) = 23 0
close(0x3) = 0 0
deciphering ioctls
How did I figure out that it was these ioctls that are used? The dtruss dump is:
ioctl(0x3, 0x40086419, 0x7FFF5CFDDB60) = 0 0
ioctl(0x3, 0x40046418, 0x7FFF5CFDDB5C) = 0 0
and 0x40046418 corresponds to DKIOCGETBLOCKSIZE (while 0x40086419 is DKIOCGETBLOCKCOUNT).
This is gleaned by tracing back disk.h (and noting that _IOR expands to _IOC in ioccom.h) and that the last 8 bits correspond to the second number in the IOCTL constant define.
#define DKIOCGETBLOCKSIZE _IOR('d', 24, uint32_t)
in 0x40046418 the trailing 18 (hex) == 24 (dec)
So now we note that fdisk performs DKIOCGETBLOCKCOUNT and DKIOCGETBLOCKSIZE to get the physical extents, because technically you should use the result of those to figure out LBA offsets (see deciphering ioctls above).
The actual read in fdisk
This is how fdisk is doing it:
open("/dev/rdisk0\0", 0x0, 0x0) = 4 0
read(0x4, "\0", 0x200) = 512 0
close(0x4) = 0 0
You can follow suit; make sure you replace the 0x200 with the actual block size.
Also, if you're going to use the dd command above to make a copy, use the block size as it comes out here.
Have you tried the DKIOCGETPHYSICALEXTENT ioctl? It fills in a dk_physical_extent_t structure that includes a 64-bit offset and a 64-bit length.