rte_eth_tx_burst cannot send packets out - dpdk

A DPDK application generates a few ARP request packets and calls rte_eth_tx_burst() to send them out, but some packets are not received by the peer NIC port (confirmed by capturing on the peer NIC with Wireshark), and dpdk-proc-info shows no error counts. However, if the app sleeps for 10 seconds before calling rte_eth_tx_burst(), all packets get through.
Example code:
main() {
    port_init();
    sleep(10);
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
System setup: Ubuntu 20.04.2 LTS, dpdk-stable-20.11.3, I350 Gigabit Network Connection 1521, igb_uio driver
root#k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --xstats
EAL: No legacy callbacks, legacy socket not created
###### NIC extended statistics for port 0 #########
####################################################
rx_good_packets: 10
tx_good_packets: 32
rx_good_bytes: 1203
tx_good_bytes: 1920
rx_missed_errors: 0
rx_errors: 0
tx_errors: 0
rx_mbuf_allocation_errors: 0
rx_q0_packets: 0
rx_q0_bytes: 0
rx_q0_errors: 0
tx_q0_packets: 0
tx_q0_bytes: 0
rx_crc_errors: 0
rx_align_errors: 0
rx_symbol_errors: 0
rx_missed_packets: 0
tx_single_collision_packets: 0
tx_multiple_collision_packets: 0
tx_excessive_collision_packets: 0
tx_late_collisions: 0
tx_total_collisions: 0
tx_deferred_packets: 0
tx_no_carrier_sense_packets: 0
rx_carrier_ext_errors: 0
rx_length_errors: 0
rx_xon_packets: 0
tx_xon_packets: 0
rx_xoff_packets: 0
tx_xoff_packets: 0
rx_flow_control_unsupported_packets: 0
rx_size_64_packets: 4
rx_size_65_to_127_packets: 3
rx_size_128_to_255_packets: 3
rx_size_256_to_511_packets: 0
rx_size_512_to_1023_packets: 0
rx_size_1024_to_max_packets: 0
rx_broadcast_packets: 0
rx_multicast_packets: 10
rx_undersize_errors: 0
rx_fragment_errors: 0
rx_oversize_errors: 0
rx_jabber_errors: 0
rx_management_packets: 0
rx_management_dropped: 0
tx_management_packets: 0
rx_total_packets: 10
tx_total_packets: 32
rx_total_bytes: 1203
tx_total_bytes: 1920
tx_size_64_packets: 32
tx_size_65_to_127_packets: 0
tx_size_128_to_255_packets: 0
tx_size_256_to_511_packets: 0
tx_size_512_to_1023_packets: 0
tx_size_1023_to_max_packets: 0
tx_multicast_packets: 0
tx_broadcast_packets: 32
tx_tso_packets: 0
tx_tso_errors: 0
rx_sent_to_host_packets: 0
tx_sent_by_host_packets: 0
rx_code_violation_packets: 0
interrupt_assert_count: 0
####################################################
root#k8s-node:/home/dpdk-stable-20.11.3/build/app# ./dpdk-proc-info -- --stats
EAL: No legacy callbacks, legacy socket not created
######################## NIC statistics for port 0 ########################
RX-packets: 5 RX-errors: 0 RX-bytes: 785
RX-nombuf: 0
TX-packets: 32 TX-errors: 0 TX-bytes: 1920
Stats reg 0 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 1 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 2 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 3 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 4 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 5 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 6 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 7 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 8 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 9 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 10 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 11 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 12 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 13 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 14 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 15 RX-packets: 0 RX-errors: 0 RX-bytes: 0
Stats reg 0 TX-packets: 0 TX-bytes: 0
Stats reg 1 TX-packets: 0 TX-bytes: 0
Stats reg 2 TX-packets: 0 TX-bytes: 0
Stats reg 3 TX-packets: 0 TX-bytes: 0
Stats reg 4 TX-packets: 0 TX-bytes: 0
Stats reg 5 TX-packets: 0 TX-bytes: 0
Stats reg 6 TX-packets: 0 TX-bytes: 0
Stats reg 7 TX-packets: 0 TX-bytes: 0
Stats reg 8 TX-packets: 0 TX-bytes: 0
Stats reg 9 TX-packets: 0 TX-bytes: 0
Stats reg 10 TX-packets: 0 TX-bytes: 0
Stats reg 11 TX-packets: 0 TX-bytes: 0
Stats reg 12 TX-packets: 0 TX-bytes: 0
Stats reg 13 TX-packets: 0 TX-bytes: 0
Stats reg 14 TX-packets: 0 TX-bytes: 0
Stats reg 15 TX-packets: 0 TX-bytes: 0
############################################################################
Update:
Thanks for your response. I modified the code:
main() {
    uint32_t port_mask = 0x1;
    port_init();
    check_all_ports_link_status(port_mask);
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
and got these logs:
Checking link status...............................
done
Port0 Link Up. Speed 1000 Mbps - full-duplex
I think the NIC should have initialized completely by this point, but the peer NIC port still misses a lot of packets.

In most working cases the physical NIC is enumerated for duplex (full/half) and speed (1, 10, 25, 40, 50, 100, 200), and auto-negotiation completes, within 1 second. Anything exceeding 2 or 3 seconds is a sign that the connected machine or switch is unable to negotiate duplex, speed, or auto-negotiation. Hence the recommendations are:
- update the driver and firmware on both sides if the interfaces are NICs
- test a different connection cable, as link sense might not be reaching properly
- in the case of a hub or switch, try fixing the speed and auto-negotiation.
I do not recommend changing from full duplex to half duplex (as it could be a cable or SFI issue).
As a temporary workaround you can use rte_eth_link_get, whose documentation states it might need to wait up to 9 seconds for the link to come up.
Note: an easy way to test whether it is a cable issue is to run DPDK on both ends and check the time required for the link to come up.
Modified Code Snippet:
main() {
    int retval;
    uint16_t portid;
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        do {
            retval = rte_eth_link_get_nowait(portid, &link);
            if (retval < 0) {
                printf("Failed link get (port %u): %s\n",
                       portid, rte_strerror(-retval));
                return retval;
            } else if (link.link_status)
                break;
            printf("Waiting for Link up on port %"PRIu16"\n", portid);
            sleep(1);
        } while (!link.link_status);
    }
    gen_pkt(mbuf);
    rte_eth_tx_burst(mbuf);
}
or
main() {
    int ret;
    uint16_t portid;
    port_init();
    RTE_ETH_FOREACH_DEV(portid) {
        struct rte_eth_link link;
        memset(&link, 0, sizeof(link));
        /* rte_eth_link_get() may internally wait for the link to come up */
        ret = rte_eth_link_get(portid, &link);
        if (ret < 0) {
            printf("Port %u link get failed: err=%d\n", portid, ret);
            continue;
        }
        gen_pkt(mbuf);
        rte_eth_tx_burst(mbuf);
    }
}

It's no surprise that packets can't be sent until the physical link comes up. That takes some time, and the rte_eth_link_get() API can be used to automate the wait.

Related

dax contains() not showing True but should be

I'm stumped here. I have this measure.
Contains = IF(CONTAINS(Flags, Flags[FlagsConcat], SELECTEDVALUE(SlicerFlags[FlagNames])),1,-1)
The concatenated column looks something like this:
Flags
id    Bco Nat Gur Ga An Sim Oak Ort  FlagsConcat
1826  0   0   0   0  0  0   1   1    Oakpoint,Orthoselect
1784  0   0   0   0  0  0   1   1    Oakpoint,Orthoselect
1503  0   0   0   1  0  0   0   0    Guardian
1502  0   0   0   1  0  0   0   0    Guardian
1500  0   0   0   1  0  0   0   0    Guardian
1499  0   0   0   1  0  0   0   0    Guardian
1326  0   0   0   1  0  0   0   0    Guardian
925   0   0   0   1  0  0   0   0    Guardian
and here are the values I am grabbing with SELECTEDVALUE():
FlagNames
Benco
National
Guardian-Simply Clear
Guardian
Angel Align
Simply Clear
Oakpoint
Orthoselect
None
If I select Guardian in the SlicerFlags table then I get a return value of 1, but if I select either Oakpoint or Orthoselect then I get -1, even though there are 2 rows in the table that have either word in the FlagsConcat column. I tried putting spaces before and after the comma but that made no difference. Does anyone know why the contains() function isn't showing true when looking for Oakpoint or Orthoselect? Thanks in advance.
@Jeroen's code is correct:
Contains = COUNTROWS(FILTER(Flags, CONTAINSSTRING(Flags[FlagsConcat], SELECTEDVALUE(SlicerFlags[FlagNames]))))+0
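The fix works because CONTAINS tests for an exact cell match, while CONTAINSSTRING does a substring search; "Guardian" happens to be a whole cell value, but "Oakpoint" only ever appears inside "Oakpoint,Orthoselect". A small Python sketch (on made-up data mirroring FlagsConcat) illustrates the same distinction:

```python
# Hypothetical rows mirroring the FlagsConcat column above.
flags_concat = ["Oakpoint,Orthoselect", "Guardian", "Guardian"]

def contains_exact(values, needle):
    # DAX CONTAINS-style: some cell must equal the needle exactly.
    return any(v == needle for v in values)

def contains_substring(values, needle):
    # DAX CONTAINSSTRING-style: the needle may appear inside a cell.
    return any(needle in v for v in values)

print(contains_exact(flags_concat, "Guardian"))      # True: a cell is exactly "Guardian"
print(contains_exact(flags_concat, "Oakpoint"))      # False: no cell equals "Oakpoint"
print(contains_substring(flags_concat, "Oakpoint"))  # True: substring match succeeds
```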

Using SAS EG to get percentages - again

Hopefully this is a relatively easy question for someone to be able to help me with. I am a newbie to SAS (and programming).
I have a dataset that has numerous variables each measuring time spent at different activities, with approx. 18,000 unique entries.
I need to get the percentage that each of these variables contribute to the total amount of time spent. I know how to do this just not how to make SAS do it.
Here is a screenshot of some of the variables along with the total at the far right. Please let me know if you need anything else.
DomPazz and momo1644 both contributed useful solutions which are helping my understanding of SAS. However, both solutions worked on a row-by-row basis, which suggests I was unclear about what I am actually trying to achieve: I want a total for each of the variables, and then the percentage that each total contributes to the overall total. If I were doing this 'by hand' it would be v1_total / overall_total * 100.
You need to calculate the row totals, calculate all column totals, then derive the percentages. The previous answers were calculating the row totals only.
SAS Code:
data have;
input surfm wskim Outdoorm dancem fishm snkrm tenpm Leisurem totadmin;
datalines;
0 0 0 0 0 0 0 0 180
0 0 0 0 0 0 0 0 180
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 98.75
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 30
0 0 0 0 0 0 0 0 30
0 0 0 0 120 0 0 120 750
0 0 0 0 0 0 0 0 30
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1291.25
0 0 0 0 0 0 0 0 370
0 0 0 15 0 0 0 15 160
;
RUN;
/*Calculate Total time per Row*/
data row_totals;
set have;
total= surfm + wskim + Outdoorm + dancem + fishm + snkrm + tenpm + Leisurem + totadmin;
run;
/*Calculate Percentages*/
proc sql;
create table want as
select
sum(surfm)/sum(total) format=percent10.2 as surfm,
sum(wskim)/sum(total) format=percent10.2 as wskim,
sum(Outdoorm)/sum(total) format=percent10.2 as Outdoorm,
sum(dancem)/sum(total) format=percent10.2 as dancem,
sum(fishm)/sum(total) format=percent10.2 as fishm,
sum(snkrm)/sum(total) format=percent10.2 as snkrm,
sum(tenpm)/sum(total) format=percent10.2 as tenpm,
sum(Leisurem)/sum(total) format=percent10.2 as Leisurem,
sum(totadmin)/sum(total) format=percent10.2 as totadmin,
sum(total) as total
from row_totals
;
quit;
Output:
surfm=0.00% wskim=0.00% Outdoorm=0.00% dancem=0.44% fishm=3.54% snkrm=0.00% tenpm=0.00% Leisurem=3.98% totadmin=92.04% total=3390
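For comparison, a rough pandas equivalent of the PROC SQL step (each column's sum divided by the grand total of the row totals) might look like this; it uses only three of the columns from the example data to stay short:

```python
import pandas as pd

# A few rows mirroring the SAS example data (subset of columns).
df = pd.DataFrame({
    "fishm":    [0, 120, 0],
    "Leisurem": [0, 120, 15],
    "totadmin": [180, 750, 160],
})

# Row totals, then each column's sum as a share of the overall total.
df["total"] = df.sum(axis=1)
pct = df.drop(columns="total").sum() / df["total"].sum() * 100
print(pct.round(2))
```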

time series sliding window with occurrence counts

I am trying to get a count between two timestamped values:
for example:
time letter
1 A
4 B
5 C
9 C
18 B
30 A
30 B
I am dividing the time range into windows of size (1 + 30) / 30 ≈ 1;
then I want to know how many A, B, and C occurrences fall in each time window of size 1:
timeseries A B C
1 1 0 0
2 0 0 0
...
30 1 1 0
This should give me a table of 30 rows and 3 columns (A, B, C) of occurrence counts.
The problem is that the breakdown takes too long, because the code iterates through the whole master table for every window to slice the data, even though the data is already sorted.
master = mytable
minimum = master.timestamp.min()
maximum = master.timestamp.max()
window = (minimum + maximum) / maximum
wstart = minimum
wend = minimum + window
concurrent_tasks = []
while wstart <= maximum:
    As = Bs = Cs = 0
    for d, row in master.iterrows():
        ttime = row.timestamp
        if wstart <= ttime < wend:
            if row.channel == 'A':
                As += 1
            elif row.channel == 'B':
                Bs += 1
            elif row.channel == 'C':
                Cs += 1
    concurrent_tasks.append([wstart, As, Bs, Cs])  # window start (original used an undefined m_id)
    wstart += window
    wend += window
Could you help me make this perform better? I want to use a map function and prevent Python from re-scanning the whole table on every iteration.
This is part of a big-data job and it takes days to finish.
Thank you.
There is a faster approach - pd.get_dummies():
In [116]: pd.get_dummies(df.set_index('time')['letter'])
Out[116]:
A B C
time
1 1 0 0
4 0 1 0
5 0 0 1
9 0 0 1
18 0 1 0
30 1 0 0
30 0 1 0
If you want to "compress" (group) it by time:
In [146]: pd.get_dummies(df.set_index('time')['letter']).groupby(level=0).sum()
Out[146]:
A B C
time
1 1 0 0
4 0 1 0
5 0 0 1
9 0 0 1
18 0 1 0
30 1 1 0
or using sklearn.feature_extraction.text.CountVectorizer:
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer(token_pattern=r"\b\w+\b", stop_words=None)
r = pd.SparseDataFrame(cv.fit_transform(df.groupby('time')['letter'].agg(' '.join)),
index=df['time'].unique(),
columns=df['letter'].unique(),
default_fill_value=0)
Result:
In [143]: r
Out[143]:
A B C
1 1 0 0
4 0 1 0
5 0 0 1
9 0 0 1
18 0 1 0
30 1 1 0
If we want to list all times from 1 to 30:
In [153]: r.reindex(np.arange(r.index.min(), r.index.max()+1)).fillna(0).astype(np.int8)
Out[153]:
A B C
1 1 0 0
2 0 0 0
3 0 0 0
4 0 1 0
5 0 0 1
6 0 0 0
7 0 0 0
8 0 0 0
9 0 0 1
10 0 0 0
11 0 0 0
12 0 0 0
13 0 0 0
14 0 0 0
15 0 0 0
16 0 0 0
17 0 0 0
18 0 1 0
19 0 0 0
20 0 0 0
21 0 0 0
22 0 0 0
23 0 0 0
24 0 0 0
25 0 0 0
26 0 0 0
27 0 0 0
28 0 0 0
29 0 0 0
30 1 1 0
or using Pandas approach:
In [159]: pd.get_dummies(df.set_index('time')['letter']) \
...: .groupby(level=0) \
...: .sum() \
...: .reindex(np.arange(r.index.min(), r.index.max()+1), fill_value=0)
...:
Out[159]:
A B C
time
1 1 0 0
2 0 0 0
3 0 0 0
4 0 1 0
5 0 0 1
6 0 0 0
7 0 0 0
8 0 0 0
9 0 0 1
10 0 0 0
... .. .. ..
21 0 0 0
22 0 0 0
23 0 0 0
24 0 0 0
25 0 0 0
26 0 0 0
27 0 0 0
28 0 0 0
29 0 0 0
30 1 1 0
[30 rows x 3 columns]
UPDATE:
Timing:
In [163]: df = pd.concat([df] * 10**4, ignore_index=True)
In [164]: %timeit pd.get_dummies(df.set_index('time')['letter'])
100 loops, best of 3: 10.9 ms per loop
In [165]: %timeit df.set_index('time').letter.str.get_dummies()
1 loop, best of 3: 914 ms per loop
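As a further alternative (not in the original answer), pd.crosstab counts letter occurrences per time value directly, and a reindex fills in the missing times 1..30 with zeros:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"time": [1, 4, 5, 9, 18, 30, 30],
                   "letter": list("ABCCBAB")})

# Count each letter per time value, then pad the index to the full range 1..30.
counts = pd.crosstab(df["time"], df["letter"])
counts = counts.reindex(np.arange(1, 31), fill_value=0)
print(counts.loc[[1, 2, 30]])
```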

pandas: count the occurrence of month of years

I have a dataframe (df_m) with a large number of rows, shown below. I want to plot the number of occurrences of each month across the years 2010-2017 for the date_m column, since the year range of date_m is 2010-2017.
db num date_a date_m date_c zip_b zip_a
0 old HKK10032 2010-07-14 2010-07-26 NaT NaN NaN
1 old HKK10109 2011-07-14 2011-09-15 NaT NaN NaN
2 old HNN10167 2012-07-15 2012-08-09 NaT 177-003 NaN
3 old HKK10190 2013-07-15 2013-09-02 NaT NaN NaN
4 old HKK10251 2014-07-16 2014-05-02 NaT NaN NaN
5 old HKK10253 2015-07-16 2015-05-01 NaT NaN NaN
6 old HNN10275 2017-07-16 2017-07-18 2010-07-18 1070062 NaN
7 old HKK10282 2017-07-16 2017-08-16 NaT NaN NaN
............................................................
Firstly, I extract the number of occurrences of each month (1-12) for every year (2010-2017), but there is an error in my code:
lst_all = []
for i in range(2010, 2018):
    lst_num = [sum(df_m.date_move.dt.month == j & df_m.date_move.dt.year == i) for j in range(1, 13)]
    lst_all.append(lst_num)
print lst_all
You need to add () around the conditions, because & binds more tightly than ==:
lst_all = []
for i in range(2010, 2018):
    lst_num = [((df_m.date_m.dt.month == j) & (df_m.date_m.dt.year == i)).sum() for j in range(1, 13)]
    lst_all.append(lst_num)
Then get:
df1 = pd.DataFrame(lst_all, index=range(2010, 2018), columns=range(1, 13))
print (df1)
1 2 3 4 5 6 7 8 9 10 11 12
2010 0 0 0 0 0 0 1 0 0 0 0 0
2011 0 0 0 0 0 0 0 0 1 0 0 0
2012 0 0 0 0 0 0 0 1 0 0 0 0
2013 0 0 0 0 0 0 0 0 1 0 0 0
2014 0 0 0 0 1 0 0 0 0 0 0 0
2015 0 0 0 0 1 0 0 0 0 0 0 0
2016 0 0 0 0 0 0 0 0 0 0 0 0
2017 0 0 0 0 0 0 1 1 0 0 0 0
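As a side note, the same year-by-month table can be built without loops via pd.crosstab; this sketch uses made-up dates matching the sample rows above:

```python
import pandas as pd

dates = pd.to_datetime(["2010-07-26", "2011-09-15", "2012-08-09",
                        "2014-05-02", "2017-07-18", "2017-08-16"])
df_m = pd.DataFrame({"date_m": dates})

# Cross-tabulate year against month, then pad missing years/months with zeros.
tab = pd.crosstab(df_m["date_m"].dt.year, df_m["date_m"].dt.month)
tab = tab.reindex(index=range(2010, 2018), columns=range(1, 13), fill_value=0)
print(tab)
```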

False Acceptance Rate and False Rejection Rate calculation using a n*n confusion matrix

FAR and FRR are used to express the results of biometric devices. Below is the confusion matrix produced in Weka from biometric data. I couldn't find any resources explaining the procedure to calculate FAR and FRR from an n*n confusion matrix. Any help explaining the procedure would be greatly appreciated. Thanks in advance!
Weka also gives these values, TP Rate, FP Rate, Precision, Recall, F-Measure and ROC Area. Please suggest if the required values can be calculated using these.
=== Confusion Matrix ===
a b c d e f g h i j k l m n o <-- classified as
1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 | a = user1
0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 | b = user2
0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 | c = user3
0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 | d = user4
0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 | e = user5
0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 | f = user6
0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 | g = user7
0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 | h = user9
1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 | i = user10
0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 | j = user11
0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 | k = user14
0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 | l = user15
0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 | m = user16
0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 | n = user17
0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 | o = user19
The accepted answer by user "chl" here has a reference to the biometrics literature: https://stats.stackexchange.com/questions/3489/calculating-false-acceptance-rate-for-a-gaussian-distribution-of-scores .
He says:
[the ROC curve] is a plot of TAR (= 1 - FRR, where FRR is the false rejection rate) against the false
acceptance rate (FAR).
However, the ROC curve is commonly plotted as TP Rate as a function of FP Rate.
So it seems you can use TP Rate and FP Rate.
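One common interpretation (my own sketch, not from the Weka output): treat each user in turn as the "genuine" class. FRR for that user is the fraction of their genuine samples that were rejected (classified as someone else), and FAR is the fraction of impostor samples (all other users' rows) accepted as that user. From an n*n confusion matrix this is a per-class one-vs-rest computation:

```python
import numpy as np

# Toy 3x3 confusion matrix: rows = true user, columns = predicted user.
cm = np.array([[2, 0, 0],
               [1, 1, 0],
               [0, 0, 2]])

total = cm.sum()
for i in range(cm.shape[0]):
    tp = cm[i, i]                # genuine attempts accepted as user i
    fn = cm[i, :].sum() - tp     # genuine attempts rejected -> FRR numerator
    fp = cm[:, i].sum() - tp     # impostor attempts accepted -> FAR numerator
    tn = total - tp - fn - fp
    frr = fn / (fn + tp)
    far = fp / (fp + tn)
    print(f"user{i + 1}: FAR={far:.3f} FRR={frr:.3f}")
```

An overall FAR/FRR is then often reported as the average of the per-user values.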