Is it possible to set a link speed for a port? - dpdk

I would like to know if it's possible to start links with link speeds of my choosing. I believe I saw a topic not unlike this one on the website before, but I cannot find it anymore.
To my understanding, one would need to set the "link_speeds" field in the rte_eth_conf struct to some value other than RTE_ETH_LINK_SPEED_AUTONEG, but regardless of what I choose, checking the link with rte_eth_link_get always reports 1000 Mbps and autoneg.
I also tried to look into the code of rte_eth_dev_configure to see how it does what it does, but it doesn't seem to take the link_speeds field into account. Keep in mind, I'm rather new to all this.
Anyway, I have yet to attempt this outside of my VM, so perhaps it's due to the e1000 driver, but if I had to guess it is something else entirely.
I also saw on some website that interfaces had forced speeds and duplex (https://www.ibm.com/support/pages/forced-speedduplex-interface-settings-not-working-xgs-firmware-53), though that post is a few years old, so I'm still hoping things are different now.
Thanks in advance.

You're on the right track with setting link_speeds in rte_eth_conf before it's used in the call to rte_eth_dev_configure. Note that the documentation says you must set the RTE_ETH_LINK_SPEED_FIXED bit of link_speeds in addition to setting one link speed. Perhaps that's the step you're missing?
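A minimal sketch of that configuration might look like this (assuming one RX and one TX queue and DPDK 21.11+ constant names; port_id is assumed to be a valid port and most error handling is omitted):

#include <string.h>
#include <rte_ethdev.h>

struct rte_eth_conf port_conf;
memset(&port_conf, 0, sizeof(port_conf));
/* Request a fixed 1 Gbps link: exactly one speed bit, plus the FIXED
 * bit to disable autonegotiation. */
port_conf.link_speeds = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED;
if (rte_eth_dev_configure(port_id, 1 /* rxq */, 1 /* txq */, &port_conf) < 0)
    rte_exit(EXIT_FAILURE, "rte_eth_dev_configure failed\n");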
The link_speeds field is used in the drivers/net/* files and not in rte_eth_dev_configure directly, because it is a hardware-specific request. For instance, looking at drivers/net/e1000/igb_ethdev.c, I see link_speeds being used in eth_igb_start.
To investigate speed capabilities of a device, you'd make a call to rte_eth_dev_info_get, e.g. rte_eth_dev_info_get(port_id, &dev_info) and look at dev_info.speed_capa.
test-pmd prints them nicely:
static void
device_infos_display_speeds(uint32_t speed_capa)
{
    printf("\n\tDevice speed capability:");
    if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
        printf(" Autonegotiate (all speeds)");
    if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
        printf(" Disable autonegotiate (fixed speed) ");
    if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
        printf(" 10 Mbps half-duplex ");
    if (speed_capa & RTE_ETH_LINK_SPEED_10M)
        printf(" 10 Mbps full-duplex ");
    if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
        printf(" 100 Mbps half-duplex ");
    if (speed_capa & RTE_ETH_LINK_SPEED_100M)
        printf(" 100 Mbps full-duplex ");
    if (speed_capa & RTE_ETH_LINK_SPEED_1G)
        printf(" 1 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
        printf(" 2.5 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_5G)
        printf(" 5 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_10G)
        printf(" 10 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_20G)
        printf(" 20 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_25G)
        printf(" 25 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_40G)
        printf(" 40 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_50G)
        printf(" 50 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_56G)
        printf(" 56 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_100G)
        printf(" 100 Gbps ");
    if (speed_capa & RTE_ETH_LINK_SPEED_200G)
        printf(" 200 Gbps ");
}
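A hedged usage sketch of the above (assuming port_id refers to a valid, probed port):

struct rte_eth_dev_info dev_info;
if (rte_eth_dev_info_get(port_id, &dev_info) == 0)
    device_infos_display_speeds(dev_info.speed_capa);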

Related

Best way to debug OpenCL Kernel

I have the following OpenCL kernel that I want to debug.
I have put some printf calls in it, but those are not useful because work items are scheduled randomly and the values printed are not always right.
How can I make the work items in my kernel execute serially for debugging purposes?
Following is the code:
__kernel
void SampleKernel( __global float4* gVtx, __global float4* gColor,
                   __global float4* gDst,
                   const int cNvtx,
                   const int4 cRes )
{
    printf("nVertex : %d ", cNvtx);
    for(int i = 0; i < 1; i += 4)
    {
        printf(" %f ", gVtx[0].x);
        printf(" %f ", gVtx[0].y);
        printf(" %f ", gVtx[0].z);
        printf(" %f ", gVtx[0].w);
    }
}
I have also tried putting barrier(CLK_LOCAL_MEM_FENCE | CLK_GLOBAL_MEM_FENCE); calls before and after the printf, but it was not useful.
Can anybody please suggest a way I can serialize work-item execution so I can print and debug the kernel? Or some other, better way to debug an OpenCL kernel? I am using an RX 580 AMD GPU.
Some suggestions:
You can use the global id and group id to control which work item prints, and when you print, also print the work-item and group id. This significantly reduces the volume of printed output and gives you more control over the information you need.
Another tip: try to group multiple prints into a single one where possible. For instance, it is not a good debugging method to print like this:
printf(" %f ", gVtx[0].x);
printf(" %f ", gVtx[0].y);
printf(" %f ", gVtx[0].z);
printf(" %f ", gVtx[0].w);
It is better to print them all at once, to avoid the output being interleaved with prints from other work items.
With the above two tips, debugging kernels should be easier to handle.
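Putting both tips together, a sketch of the kernel above might look like this (the guard value 0 is arbitrary; pick whichever work item you want to inspect):

__kernel
void SampleKernel( __global float4* gVtx, __global float4* gColor,
                   __global float4* gDst,
                   const int cNvtx,
                   const int4 cRes )
{
    const size_t gid = get_global_id(0);
    // Only one work item prints, and all four components go out in a
    // single printf so other work items cannot interleave the output.
    if (gid == 0) {
        printf("gid %u nVertex: %d v0: %f %f %f %f\n",
               (unsigned)gid, cNvtx,
               gVtx[0].x, gVtx[0].y, gVtx[0].z, gVtx[0].w);
    }
}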

Delay between SPI.write() calls

I'm using a RedBearLabs Blend V2 to communicate with a SPI peripheral. Everything is functioning correctly at this point and I am receiving the expected data using SPI.write() and catching the return value. The SPI bus is running at 1 MHz in mode 0.
The image below shows the SPI bus on a scope. Trace 1 is SCLK, 2 is MISO, 3 is MOSI and 4 is CS. As you can see, I have 4 bursts of SCLK while CS is low, each 8 bits long, with a delay of approximately 20 µs between bursts. Ideally, I need to eliminate this 20 µs delay entirely and have a single SCLK burst of 32 cycles.
The code below is how I currently achieve what is seen in the scope grab.
int16_t MAX1300::sampleChannel(uint8_t channel, uint8_t inputMode, uint8_t adcMode)
{
    int16_t sample;
    int8_t hi = 0;
    int8_t lo = 0;
    MAX1300::InputWord_u word;

    word.bits.start = 0b1;
    if (inputMode == MAX1300::SINGLE_ENDED) {
        word.bits.select = (Channels_se_e)channel;
    } else {
        word.bits.select = (Channels_dif_e)channel;
    }
    word.bits.payload = 0b0000;

    if (adcMode == MAX1300::EXT_CLK) {
        m_cs = 0;
        m_spiBus.write(word.all);
        m_spiBus.write(7);
        hi = m_spiBus.write(0);
        lo = m_spiBus.write(0);
        m_cs = 1;
    }
    sample = ((int16_t)hi << 8) | lo;
    return sample;
}
So far I have tried setting SPI.format(16, 0) with the intention of having 2 SCLK bursts of 16 cycles, however the SPI bus no longer functions if I do this. The same happens if I use SPI.transfer() with 32-bit buffers - no SPI bus.
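For reference, the 16-bit attempt was roughly the sketch below (same member names as the code above; reconstructed, since I didn't keep the exact code):

m_spiBus.format(16, 0);                      // 16-bit frames, mode 0
m_cs = 0;
m_spiBus.write(((int)word.all << 8) | 0x07); // command word and the 7 in one frame
sample = (int16_t)m_spiBus.write(0x0000);    // clock out the 16-bit result
m_cs = 1;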
I am able to increase the frequency of the bus, thus reducing the delay between each SCLK burst; however, this is not really a suitable solution given the end application for this device.
What am I doing wrong here, or is what I am attempting to do just not possible with this hardware/firmware combination?
Thanks, Adam

Reading to UART stream - Data chunked

I'm reading from a stream connected to a UART serial port via the half-duplex RS-485 protocol at 9600 bps, data: 8 bits, stop: 1 bit, no parity, with an embedded device.
I know that the system I'm connected to sends binary commands between 2 and 10 bytes long at an interval of 20 ms.
I access the stream with the following flags:
uart0_filestream = open(COM_UART_DEV, O_RDWR | O_NOCTTY | O_NDELAY);
However, it frequently happens that the 10-byte commands are chunked in half, causing a checksum error in my application. I need to poll every 20 ms, and the only solution I found for this was to increase the sleep time between polls, which I don't want.
Is there a flag or a smart method that I can use to make sure the transmission is complete before reading the content of the stream buffer?
Okay, I found a solution that's OK for my needs. Since I can't know for sure that all the data will be there when I read the stream, and I don't want to increase my poll interval, I instead increased the poll rate, as @sawdust suggested:
unsigned char *pStartBuffer = pRxBuffer;
if (uart0_filestream != -1)
{
    int rx_length = 0, rx = 0, elapsed = 0;
    bool bCommand = false;
    while (rx_length <= 10) // && elapsed <= 10
    {
        /* 'length' is the size of the caller's buffer and 'sleep_rx' is a
         * struct timespec of 2 ms, both defined elsewhere. */
        rx = read(uart0_filestream, (void *)pRxBuffer, length);
        if (rx > 0)
        {
            rx_length += rx;
            pRxBuffer += rx;
            if (checksum(pStartBuffer, rx_length) == true)
            {
                bCommand = true;
                break;
            }
        }
        nanosleep(&sleep_rx, NULL);
        // elapsed += 2;
    }
}
I initially set the poll rate to 8 ms. Since I know that the longest command I can receive is 10 bytes, I read until the checksum is valid or until 10 bytes have been read, sleeping an extra 2 ms between polls. This performs very well for now.
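For completeness, the closest thing to a "flag" for this is the termios VMIN/VTIME mechanism, which makes read() itself block until enough bytes have arrived or the line has gone idle. A sketch (this assumes the port is opened in blocking mode, i.e. without O_NDELAY, since VMIN/VTIME are ignored on a non-blocking descriptor):

#include <termios.h>

struct termios tio;
tcgetattr(uart0_filestream, &tio);
cfmakeraw(&tio);       /* VMIN/VTIME apply in non-canonical mode */
tio.c_cc[VMIN]  = 2;   /* block until at least 2 bytes (shortest command) */
tio.c_cc[VTIME] = 1;   /* then return once the line is idle for 100 ms    */
tcsetattr(uart0_filestream, TCSANOW, &tio);

Note that VTIME's granularity is tenths of a second; with commands arriving every 20 ms, a single read could merge consecutive commands, so the checksum loop above still seems safer here.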

My program was "Killed"

Probably by the kernel, as suggested in this question. I would like to see why I was killed, something like the function where the assassination took place. :)
Moreover, is there anything I can do to allow my program to execute normally?
Chronicle
My program executes properly. However, we encountered a big data set, 1,000,000 x 960 floats, and my laptop at home couldn't take it (it gave an std::bad_alloc).
Now I am in the lab, at a desktop with 9.8 GiB of memory and a 3.00 GHz × 4 processor, which is more than twice the memory my laptop at home has.
At home, the data set could not be loaded into the std::vector where the data is stored. Here in the lab that was accomplished, and the program continued with building a data structure.
That was the last time I heard from it:
Start building...
Killed
The desktop in the lab runs Debian 8. My program runs as expected for a subset of the data set, in particular 100,000 x 960 floats.
EDIT
strace output is finally available:
...
brk..
brk(0x352435000) = 0x352414000
mmap(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x7f09c1563000
munmap(0x7f09c1563000, 44683264) = 0
munmap(0x7f09c8000000, 22425600) = 0
mprotect(0x7f09c4000000, 135168, PROT_READ|PROT_WRITE) = 0
...
mprotect(0x7f09c6360000, 8003584, PROT_READ|PROT_WRITE) = 0
+++ killed by SIGKILL +++
So this tells us I am out of memory, I guess.
In C++, a float is a single-precision (32-bit) floating-point number:
http://en.wikipedia.org/wiki/Single-precision_floating-point_format
which means that you are allocating (without overhead) 1,000,000 × 960 × 4 = 3,840,000,000 bytes of data,
or roughly 3.58 GiB.
Let's safely assume that the header of the vector is nothing compared to the data, and continue with this number.
This is a huge amount of data to build up; Linux may assume that this is just a memory leak and protect itself by killing the application:
https://unix.stackexchange.com/questions/136291/will-linux-start-killing-my-processes-without-asking-me-if-memory-gets-short
(When the OOM killer fires, it logs the victim to the kernel log, so dmesg should show why the process was killed.)
I don't think this is an overcommit problem, since you are actually utilizing nearly half the memory in a single application.
But perhaps... consider this just for fun: are you building a 32-bit application?
You are getting close to the 2^32 (4 GiB) address space that can be addressed by your program if it's a 32-bit build.
So in case you have another large vector allocated... bum bum bum.
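A quick sanity check for that theory (a minimal standalone sketch, not from the original post):

#include <cstdio>

int main() {
    // 8 on a 64-bit build, 4 on a 32-bit build
    std::printf("pointer size: %zu bytes\n", sizeof(void *));
    // raw payload of 1,000,000 x 960 floats, ignoring vector overhead
    std::printf("payload: %llu bytes\n", 1000000ULL * 960ULL * sizeof(float));
    return 0;
}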
First, install the signal handler, for example (note that SIGKILL itself can be neither caught nor handled, so this only helps for catchable signals such as SIGTERM or SIGINT):

#include <signal.h>
#include <string.h>

static void signal_handler_action(int sig, siginfo_t *siginfo, void *context);

static bool installSignalHandler(int sigNumber)
{
    struct sigaction action;
    memset(&action, 0, sizeof(action));
    action.sa_flags = SA_SIGINFO;
    action.sa_sigaction = signal_handler_action;
    return !sigaction(sigNumber, &action, NULL);
}
Call it:
installSignalHandler(SIGINT);
installSignalHandler(SIGTERM);
And the following handler will be executed:
static void signal_handler_action(int sig, siginfo_t *siginfo, void *context)
{
    switch (sig) {
    case SIGHUP:
        break;
    case SIGUSR1:
        break;
    case SIGTERM:
        break;
    case SIGINT:
        break;
    case SIGPIPE:
        return;
    }
}
Take a look at the siginfo_t structure for the data you want, e.g.:

printf("Continue. Signo: %d - code: %d - value: %d - errno: %d - pid: %ld - uid: %ld - addr %p - status %d - band %ld\n",
       siginfo->si_signo, siginfo->si_code, siginfo->si_value.sival_int, siginfo->si_errno,
       (long)siginfo->si_pid, (long)siginfo->si_uid, siginfo->si_addr,
       siginfo->si_status, (long)siginfo->si_band);

Is there any C/C++ library to connect with a remote NTP server? [closed]

I'm writing C++ software that needs to synchronize the system clock with a remote NTP server.
For now, I'm using the "system" command to call the console "ntpdate" command.
But I think that is an ugly way to do it.
Do you know of any library that lets me connect to a remote NTP server?
Thanks.
This works in C++ (adapted from C). It will get you UTC time from pool.ntp.br (you MUST use the IP address). If you manage to work out how to get daylight saving time (horário de verão), please advise. I can get it from the UFRJ PADS servers, but UFRJ is unreliable, what with them being on strike half the year...
/* This code will query an NTP server for the local time and display
 * it. It is intended to show how to use an NTP server as a time
 * source for a simple network-connected device.
 * This is the C version. The original was in Perl.
 *
 * For better clock management see the official NTP info at:
 * http://www.eecis.udel.edu/~ntp/
 *
 * written by Tim Hogard (thogard@abnormal.com)
 * Thu Sep 26 13:35:41 EAST 2002
 * Converted to C Fri Feb 21 21:42:49 EAST 2003
 * this code is in the public domain.
 * it can be found here http://www.abnormal.com/~thogard/ntp/
 */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <time.h>
#include <string.h>
#include <unistd.h>
#include <iostream>

void ntpdate();

int main() {
    ntpdate();
    return 0;
}

void ntpdate() {
    //char *hostname = (char *)"pool.ntp.br";
    char *hostname = (char *)"200.20.186.76"; // must be an IP address
    int portno = 123;               // NTP is port 123
    int i;
    /* The packet we send: all zeros except for a one in the protocol
     * version field. msg[0] in binary is 00 001 000; the packet is
     * 48 bytes long in total. */
    unsigned char msg[48] = { 010, 0, 0, 0, 0, 0, 0, 0, 0 };
    uint32_t buf[12];               // the 48-byte reply, as 12 32-bit words
    struct protoent *proto;
    struct sockaddr_in server_addr;
    int s;                          // socket
    time_t tmit;                    // the transmit timestamp

    // open a UDP socket
    proto = getprotobyname("udp");
    s = socket(PF_INET, SOCK_DGRAM, proto->p_proto);
    perror("socket");

    // fill in the server address (hostname must already be an IP address)
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = inet_addr(hostname);
    server_addr.sin_port = htons(portno);

    // send the request
    printf("sending data..\n");
    i = sendto(s, msg, sizeof(msg), 0,
               (struct sockaddr *)&server_addr, sizeof(server_addr));
    perror("sendto");

    // get the reply: 12 32-bit words in network byte order
    struct sockaddr saddr;
    socklen_t saddr_l = sizeof(saddr);
    i = recvfrom(s, buf, sizeof(buf), 0, &saddr, &saddr_l);
    perror("recvfr:");

    /* The seconds of the transmit timestamp are the 10th 32-bit word
     * of the reply. tmit is the time in seconds, not accounting for
     * network delays, which should be way less than a second if this
     * is a local NTP server. */
    tmit = ntohl(buf[10]);

    /* Convert to Unix time: NTP counts seconds since 0000 UT on
     * 1 January 1900, Unix time counts seconds since 0000 UT on
     * 1 January 1970. Leap seconds are only an issue in the last
     * second of June and December; if you don't try to set the clock
     * they can be ignored, but they are important to people who
     * coordinate times with GPS clock sources. */
    tmit -= 2208988800U;

    /* ctime takes care of timezone issues for both north and south of
     * the equator and places that do Summer time / Daylight Saving Time. */
    std::cout << "time is " << ctime(&tmit) << std::endl;

    // compare to system time
    time_t now = time(0);
    std::cout << "System time is " << (now - tmit) << " seconds off" << std::endl;
    close(s);
}
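For what it's worth, this builds on Linux with plain g++ and no extra libraries (something like g++ ntp.cpp -o ntp, file name hypothetical). Note it only prints the server time and the offset; it does not set the system clock.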
Would it not be a better solution to just have ntpd running on said system to ensure the clock is correct, instead of having your software manually issue a sync and possibly cause issues with other applications that don't enjoy sudden time jumps, especially backwards?
That being said, there is libntp, I believe.
I'll drop in more things as Google finds them for me.