Frequency measurement with 8051 microcontroller - concurrency

I simply want to continuously calculate the frequency of a sine signal using a comparator input (on the falling edges). The target frequency is about ~122 Hz and my implementation works most of the time, but sometimes it calculates a wrong frequency of about ~61 Hz (which cannot be right; I verified the signal with an oscilloscope).
It seems my implementation has a weakness, perhaps in the form of a race condition or misuse of the timer, since it uses concurrent interrupt service routines and manually starts and stops the timer.
I also think the bug is related to the measured frequency of about ~122 Hz, because the timer overflow rate is almost exactly the same:
One timer overflow = 1 / ((1/8 MHz) * 2^16) = 8 MHz / 65536 = 122.0703125 Hz
I am using an 8051 microcontroller (Silicon Labs C8051F121) with the following code:
// defines
#define PERIOD_TIMER_FREQ 8000000.0 // Timer 3 runs at 8MHz
#define TMR3_PAGE 0x01 /* TIMER 3 */
#define CP1F_VECTOR 12 /* comparator 1 falling edge */
#define TF3_VECTOR 14 /* timer3 reload timer */
sfr TMR3CN = 0xC8; /* TIMER 3 CONTROL */
sfr TMR3L = 0xCC; /* TIMER 3 LOW BYTE */
sfr TMR3H = 0xCD; /* TIMER 3 HIGH BYTE */
// global variables
volatile unsigned int xdata timer3_overflow_tmp; // temporary counter for the current period
volatile unsigned int xdata timer3_lastValue; // snapshot of the last timer value
volatile unsigned int xdata timer3_overflow; // current overflow counter, used in the main routine
volatile unsigned int xdata tempVar; // temporary variable
volatile unsigned long int xdata period; // the calculated period
volatile float xdata period_in_SI; // calculated period in seconds
volatile float xdata frequency; // calculated frequency in Hertz
// Comparator 1 ISR has priority "high": EIP1 = 0x40
void comp1_falling_isr (void) interrupt CP1F_VECTOR
{
    SFRPAGE = TMR3_PAGE;
    TMR3CN &= 0xfb; // stop timer 3
    timer3_lastValue = (unsigned int) TMR3H;
    timer3_lastValue <<= 8;
    timer3_lastValue |= (unsigned int) TMR3L;
    // check if a timer 3 overflow is pending
    if (TMR3CN & 0x80)
    {
        timer3_overflow_tmp++; // increment overflow counter
        TMR3CN &= 0x7f; // Clear overflow flag. This will also clear a pending interrupt request.
    }
    timer3_overflow = timer3_overflow_tmp;
    // Reset all the timer 3 values to zero
    TMR3H = 0;
    TMR3L = 0;
    timer3_overflow_tmp = 0;
    TMR3CN |= 0x04; // restart timer 3
}
// Timer 3 ISR has priority "low", which means it can be interrupted by the
// comparator ISR: EIP2 = 0x00
// Timer 3 runs at 8 MHz in 16-bit auto-reload mode
void timer3_isr(void) interrupt TF3_VECTOR using 2
{
    SFRPAGE = TMR3_PAGE;
    timer3_overflow_tmp++;
    TMR3CN &= 0x7f; // Clear overflow flag. This will also clear a pending interrupt request.
}
void main(void)
{
    for(;;)
    {
        ...
calcFrequencyLabel: // this goto label is a kind of synchronization mechanism
                    // and is used to prevent race conditions caused by the ISRs,
                    // which would invalidate the currently copied timer values
        tempVar = timer3_lastValue;
        period = (unsigned long int)timer3_overflow;
        period <<= 16;
        period |= (unsigned long int)timer3_lastValue;
        // If both values are not equal, a race condition has occurred.
        // Therefore the current timer values are invalid and need to be dropped.
        if (tempVar != timer3_lastValue)
            goto calcFrequencyLabel;
        // Calculate the period in seconds
        period_in_SI = (float) period / PERIOD_TIMER_FREQ;
        // Calculate the frequency in Hertz
        frequency = 1 / period_in_SI; // Should always be stable at about ~122 Hz
        ...
    }
}
Can someone please help me to find the bug in my implementation?

I can't pinpoint the particular bug, but you have some problems in this code.
The main problem is that the 8051 was not a PC, but rather the most horrible 8-bit MCU to ever become mainstream. This means that you should desperately avoid things like 32-bit integers and floating point. If you disassemble this code you'll see what I mean.
There is absolutely no reason why you need to use floating point here. And the 32-bit variables could probably be avoided too. You should use uint8_t whenever possible and avoid unsigned int too. Your C code shouldn't need to know the time in seconds or the frequency in Hz, but just care about the number of timer cycles.
You have multiple race condition bugs. Your goto hack in main is a dirty solution - instead you should prevent the race condition from happening in the first place. And you have another race condition between the ISRs with timer3_overflow_tmp.
Every variable shared between an ISR and main, or between two ISRs with different priorities, must be protected against race conditions. This means that you must either ensure atomic access or use some manner of guard mechanism. In this case, you could probably just use a "poor man's mutex" bool flag. The other alternative is to change to an 8-bit variable and write the code accessing it in inline assembler. Generally, you cannot have atomic access to an unsigned int on an 8-bit core.
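A minimal sketch of the "poor man's mutex" approach, built on top of your existing ISR (the Keil C51 bit type is assumed; bit access is atomic on the 8051, which is what makes the handshake safe):

volatile bit snap_ready = 0;                  /* set by the ISR, cleared by main */
volatile unsigned int xdata snap_count;       /* captured timer value            */
volatile unsigned int xdata snap_overflows;   /* captured overflow count         */

/* Inside comp1_falling_isr, after the overflow check and before
   timer3_overflow_tmp is cleared: */
if (!snap_ready)                              /* main has consumed the last snapshot */
{
    snap_count     = timer3_lastValue;
    snap_overflows = timer3_overflow_tmp;
    snap_ready     = 1;                       /* publish only after both words are written */
}

/* In main: */
if (snap_ready)
{
    period  = (unsigned long int) snap_overflows << 16;
    period |= snap_count;                     /* safe: the ISR leaves snap_* alone while snap_ready is set */
    snap_ready = 0;                           /* hand the snapshot buffer back to the ISR  */
    /* work with 'period' in timer cycles here - no goto retry loop needed */
}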

With a slow edge, as you would have for a low-frequency sine, and insufficient hysteresis on the input (the default being none), it would only take a little noise for a rising edge to look like a falling edge and result in half the frequency.
The code fragment does not include the setting of CPT1CN, where the hysteresis is configured. For your signal you probably need to max it out, and ensure that the peak-to-peak noise on your signal is less than 30 mV.
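A hedged sketch of what that could look like (the SFR page constant is a placeholder and the exact hysteresis field values should be verified against the C8051F12x datasheet; this is an illustration, not verified code):

/* Enable comparator 1 with maximum programmable hysteresis.
   CPT1CN is assumed to come from the vendor register header, and
   CP1_PAGE is a placeholder for the SFR page that holds it. */
void comparator1_init (void)
{
    SFRPAGE  = CP1_PAGE;   /* placeholder - check which SFR page holds CPT1CN */
    CPT1CN   = 0x80;       /* CP1EN = 1: comparator 1 enabled                 */
    CPT1CN  |= 0x0F;       /* CP1HYP[1:0] = 11, CP1HYN[1:0] = 11:
                              maximum positive and negative hysteresis        */
}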

Related

How to measure elapsed time on Cortex-M4

I am using a Cortex-M4 on an SoC and I want to measure the time a certain function takes.
Googling it, I found two methods.
Method 1 - using DWT_CYCCNT
REGISTER(DEMCR_ADDR) |= 1 << 24;            // TRCENA: enable the trace block
REGISTER(DWT_CTRL) |= 1;                    // CYCCNTENA: start the cycle counter
startTime = REGISTER(DWT_CYCCNT);
// doing work
elapsedTime = REGISTER(DWT_CYCCNT) - startTime;
REGISTER(DWT_CTRL) &= ~1;                   // stop the cycle counter
Method 2 - using SysTick
//init
SysTick->LOAD = SysTick_LOAD_RELOAD_Msk;    /* set reload register = MAX COUNT */
SysTick->VAL = 0UL;                         /* clear the SysTick counter value */
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                SysTick_CTRL_ENABLE_Msk;    /* enable the SysTick timer        */
startTime = SysTick->VAL;
//do some work
elapsedTime = startTime - SysTick->VAL;     /* SysTick counts down             */
SysTick->LOAD = SysTick_LOAD_RELOAD_Msk;    /* set reload register = MAX COUNT */
SysTick->VAL = 0UL;                         /* clear the SysTick counter value */
SysTick->CTRL = 0UL;                        /* stop the timer                  */
I wonder what the advantages and disadvantages of these two methods are.
I have used both these methods in different projects.
In either case, you might use one of these because the other was already used for something else. If your RTOS wants the systick, use the debug counter. If your debugger wants the debug counter, use the systick.
The main disadvantage of the systick is that it only has 24 bits, whereas the debug counter has 32.
The main disadvantage of the debug counter is that it is not available on every part (the systick is optional too, but hardly any silicon vendors take it out).
Enabling the whole debug block just for a counter also wastes a little bit of power, which you might care about if you are running from batteries.
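If you settle on the cycle counter, a minimal CMSIS-style sketch could look like this (assuming the core_cm4.h definitions, normally pulled in via your vendor's device header; note that if you use SysTick instead, it counts down, so the elapsed time is start - SysTick->VAL):

#include <stdint.h>
#include "core_cm4.h"                               /* CMSIS core definitions (assumed)  */

/* Measure the elapsed core cycles around a function using the DWT cycle counter. */
static uint32_t measure_cycles(void (*fn)(void))
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable the trace/debug block      */
    DWT->CYCCNT = 0;                                /* reset the counter                 */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;           /* start counting                    */

    uint32_t start = DWT->CYCCNT;
    fn();                                           /* the work to be timed              */
    uint32_t elapsed = DWT->CYCCNT - start;         /* unsigned subtraction handles wrap */

    DWT->CTRL &= ~DWT_CTRL_CYCCNTENA_Msk;           /* stop counting                     */
    return elapsed;
}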

C++ reading off ADC at pace of CLK frequency

I am using a hyperspectral sensor from Hamamatsu (C12880MA). So far I have my firmware done to fit the ATmega328P; the timings etc. are working great.
But now I have bumped into some issues regarding reading the measurements off the ADC.
According to my clock signal generation (at a variable f between 0.5 and 5 MHz), I need to read off the ADC values at exact "flag" values.
I am using the internal TC1 timer for clock generation. At each toggle of the CLK signal, I increment a flag via an ISR to time the other signals / restart the program.
Now to the problem: I know (see the datasheet for reference) that the VIDEO signal of the sensor will appear between the flags "ADC_Start" and "ADC_End" at the pace of the generated CLK. I have to read the values through the internal Arduino ADC at exactly the correct flag, to later match them with the correct underlying wavelengths.
Here are the essential snippets of the code:
volatile uint16_t flag = 0;
uint16_t data[288]; // define 1d matrix for data storage
uint16_t ST_Start = 2;          // start flag for the program
uint16_t ST_End = 134;          // end flag for the ST signal
uint16_t ADC_Start = 310;       // start flag for the ADC readout
uint16_t ADC_End = 886;         // end flag for the ADC readout
uint16_t index;                 // index used for data storage
volatile uint16_t End = 1000;   // end flag for program restart and clearing cache
The ISR handles incrementing the flag on every CLK toggle (timer compare match):
ISR(TIMER1_COMPA_vect){
  if (flag <= End){
    flag = flag + 1;
  } else {
    flag = 0;
    //delayMicroseconds(2000);
  }
}
This is my readData function. Unfortunately I have not found the correct way to read the ADC values at the exact flags needed. Say I need to read values at flags 300, 302, 304, ..., 900 (every second flag, within a set interval, ADC_Start to ADC_End).
void readData() {
  if (flag >= ADC_Start && flag <= ADC_End){
    for (uint16_t i = ADC_Start; i < ADC_End; i = i + 2)
    {
      index = i - ADC_Start;
      data[index] = analogRead(VID);
    }
  } else { }
}
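For illustration, a hedged sketch of one way to take exactly one conversion per even flag value (names taken from the snippets above; not verified against the sensor timing):

// Take at most one conversion per even flag value inside the readout window.
uint16_t lastFlag = 0;                     // flag value we already sampled at

void pollData() {
  uint16_t f;
  noInterrupts();                          // 'flag' is 16 bit and updated in the ISR, so copy it atomically
  f = flag;
  interrupts();

  if (f >= ADC_Start && f < ADC_End &&     // '<' keeps the index inside data[288]
      ((f - ADC_Start) & 1) == 0 &&        // every second flag only
      f != lastFlag) {
    data[(f - ADC_Start) >> 1] = analogRead(VID);
    lastFlag = f;
  }
}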
Here's the data sheet: https://www.hamamatsu.com/resources/pdf/ssd/c12880ma_kacc1226e.pdf
See p.7 for timings.
Thanks!

timeGetTime() start variable is bigger than end variable

I am using timeGetTime() to limit the framerate to 60 frames per second. The way I intend to do that is to get the time it takes to render said 60 frames and then use Sleep to wait out the remainder of the second. But for some reason timeGetTime() is returning a much bigger number the first time I call it than when I call it after the 60 frames are rendered.
Here is the code:
Header
#ifndef __TesteMapa_h_
#define __TesteMapa_h_

#include "BaseApplication.h"
#include "Mundo.h"

class TesteMapa : public BaseApplication{
public:
    TesteMapa();
    virtual ~TesteMapa();
protected:
    virtual void createScene();
    virtual bool frameRenderingQueued(const Ogre::FrameEvent& evt);
    virtual bool frameEnded(const Ogre::FrameEvent& evt);
    virtual bool keyPressed(const OIS::KeyEvent &evt);
    virtual bool keyReleased(const OIS::KeyEvent &evt);
private:
    Mundo mundo = Mundo(3,3,3);
    short altura, largura, passos, balanca, framesNoSegundo = 0;
    Ogre::SceneNode *noSol, *noSolFilho, *noCamera;
    DWORD inicioSegundo = 0, finala; // inicioSegundo is the start variable and finala the ending variable
};
#endif
CPP relevant function
bool TesteMapa::frameEnded(const Ogre::FrameEvent& evt){
    framesNoSegundo++;
    if (inicioSegundo == 0)
        inicioSegundo = timeGetTime();
    else{
        if (framesNoSegundo == 60){
            finala = timeGetTime(); // getting this just to see the value being returned
            Sleep(1000UL - (timeGetTime() - inicioSegundo));
            inicioSegundo = 0;
            framesNoSegundo = 0;
        }
    }
    return true;
}
I am using timeBeginPeriod(1) and timeEndPeriod(1) in the main function.
Without even reading the complete question, the following:
using timeGetTime()
to limit the framerate to 60 frames per second
...
Sleep for the remainder of the second
can be answered with a firm "You are doing it wrong". In other words, stop here, and take a different approach.
Neither does timeGetTime have the necessary precision (not even if you use timeBeginPeriod(1)), nor does Sleep have the required precision, nor does Sleep give any guarantees about the maximum duration, nor are the semantics of Sleep even remotely close to what you expect, nor is sleeping to limit the frame rate a correct approach.
Also, calculating the remainder of the second will inevitably introduce a systematic error that will accumulate over time.
The one and only correct approach to limit frame rate is to use vertical sync.
If you need to otherwise limit a simulation to a particular rate, using a waitable timer is the correct approach. That will still be subject to the scheduler's precision, but it will avoid accumulating systematic errors, and priority boost will at least give a de-facto soft realtime guarantee.
In order to understand why what you are doing is (aside from precision and accumulating errors) conceptually wrong to begin with, consider two things:
Different timers, even if they run at apparently the same frequency, will diverge (thus, using any timer other than the vsync interrupt is wrong to limit frame rate). Watch cars at a red traffic light for a real-life analogy. Their blinkers will always be out of sync.
Sleep makes the current thread "not ready" to run, and eventually, some time after the specified time has passed, makes the thread "ready" again. That doesn't mean that the thread will run at that time again. Indeed, it doesn't necessarily mean that the thread will run at all in any finite amount of time.
Resolution is commonly around 16ms (1ms if you adjust the scheduler's granularity, which is an antipattern -- some recent architectures support 0.5ms by using the undocumented Nt API), which is way too coarse for something on the 1/60 second scale.
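If you do need a fixed-rate step outside of vsync, a minimal sketch of the waitable-timer approach could look like this (error handling omitted; the 16 ms period stands in for 1/60 s and is still limited by the scheduler's millisecond granularity):

#include <windows.h>

int main(void)
{
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);  /* auto-reset timer */
    LARGE_INTEGER dueTime;
    dueTime.QuadPart = -166667LL;            /* first fire after ~16.7 ms (100 ns units, relative) */

    /* The period argument is in whole milliseconds, so 16 ms is the closest we can ask for. */
    SetWaitableTimer(hTimer, &dueTime, 16, NULL, NULL, FALSE);

    for (int frame = 0; frame < 600; ++frame)
    {
        WaitForSingleObject(hTimer, INFINITE);   /* blocks until the next period elapses */
        /* render / simulation step goes here */
    }

    CancelWaitableTimer(hTimer);
    CloseHandle(hTimer);
    return 0;
}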
If you're using Visual Studio 2013 or older, std::chrono uses the 64 Hz ticker (15.625 ms per tick), which is slow. VS 2015 is supposed to fix this. You can use QueryPerformanceCounter instead. Here is example code that runs at a fixed frequency with no drift, since delays are based off an original reading of the counter. dwLateStep is a debugging aid that gets incremented if one or more steps took too long. The code is Windows XP compatible, where Sleep(1) can take up to 2 ms, which is why the code only does a sleep if there is 2 ms or more of time to delay.
typedef unsigned long long UI64; /* unsigned 64 bit int */
#define FREQ 60 /* frequency */
DWORD dwLateStep; /* late step count */
LARGE_INTEGER liPerfFreq; /* 64 bit frequency */
LARGE_INTEGER liPerfTemp; /* used for query */
UI64 uFreq = FREQ; /* thread frequency */
UI64 uOrig; /* original tick */
UI64 uWait; /* tick rate / freq */
UI64 uRem = 0; /* tick rate % freq */
UI64 uPrev; /* previous tick based on original tick */
UI64 uDelta; /* current tick - previous */
UI64 u2ms; /* 2ms of ticks */
UI64 i;
/* ... */ /* wait for some event to start thread */
QueryPerformanceFrequency(&liPerfFreq);
u2ms = ((UI64)(liPerfFreq.QuadPart)+499) / ((UI64)500);
timeBeginPeriod(1); /* set period to 1ms */
Sleep(128); /* wait for it to stabilize */
QueryPerformanceCounter(&liPerfTemp);
uOrig = uPrev = liPerfTemp.QuadPart;
for(i = 0; i < (uFreq*30); i++){
    /* update uWait and uRem based on uRem */
    uWait = ((UI64)(liPerfFreq.QuadPart) + uRem) / uFreq;
    uRem  = ((UI64)(liPerfFreq.QuadPart) + uRem) % uFreq;
    /* wait for uWait ticks */
    while(1){
        QueryPerformanceCounter((PLARGE_INTEGER)&liPerfTemp);
        uDelta = (UI64)(liPerfTemp.QuadPart - uPrev);
        if(uDelta >= uWait)
            break;
        if((uWait - uDelta) > u2ms)
            Sleep(1);
    }
    if(uDelta >= (uWait*2))
        dwLateStep += 1;
    uPrev += uWait;
    /* fixed frequency code goes here */
    /* along with some type of break when done */
}
timeEndPeriod(1); /* restore period */

Using 4 16bit timers for 400hz PWM

I'm dealing with an Arduino Mega based quadcopter and trying to generate 400 Hz PWM for each of the 4 motors. I've found an interesting solution where the 4 ATmega2560 16-bit timers are used to control 4 ESCs with PWM, so it can reach a 400 Hz refresh rate. 700 to 2000 µs are the normal pulse widths ESCs deal with.
1 sec / REFRESH_INTERVAL = 1 / 0.0025 = 400 Hz.
This is from the servo.h lib:
#define MIN_PULSE_WIDTH 700 // the shortest pulse sent to a servo
#define MAX_PULSE_WIDTH 2000 // the longest pulse sent to a servo
#define DEFAULT_PULSE_WIDTH 1000 // default pulse width when servo is attached
#define REFRESH_INTERVAL 2500 // minimum time to refresh servos in microseconds
#define SERVOS_PER_TIMER 1 // the maximum number of servos controlled by one timer
#define MAX_SERVOS (_Nbr_16timers * SERVOS_PER_TIMER)
The problem is that to make this work, each PWM output should be controlled by one 16-bit timer. Otherwise, say, 2 ESCs on 1 timer would give 200 Hz. So all of the 16-bit timers are busy controlling the 4 ESCs, but I still need to read the input PPM from the receiver. To do so I would need at least one more 16-bit timer, which I don't have anymore. There is still one 8-bit timer free, but it can only count 0..255, while the normal numbers ESCs operate with are 1000..2000 and so on.
So what would happen if I used the same 16-bit timer for both PWM and PPM reading? Would it work? Would it decrease speed drastically? I have the Arduino working in pair with a Raspberry Pi which handles data filtering, debugging, and so on; would it be better to move the PPM reading to the Raspberry Pi?
To answer one of your questions:
So what would happen if I used the same 16-bit timer for both PWM and PPM
reading? Would it work?
Yes. When your pin change interrupt fires you may just read the current TCNT value to find out how long it has been since the last one. This will not in any way interfere with the timer's hardware PWM operation.
Would it decrease speed drastically?
No. PWM is done by dedicated hardware, software operations running at the same time will not affect its speed and neither will any ISRs you may have activated for the corresponding timer. Hence, you can let the timer generate the PWM as desired and still use it to a) read the current counter value from it and b) have an output compare and/or overflow ISR hooked to it to create a software-extended timer.
Edit in response to your comment:
Note that the actual value in the TCNT register is the current timer (tick) count at any moment, irrespective of whether PWM is active or not. Also, the Timer Overflow interrupt (TOV) can be used in any mode. These two properties allow you to build a software-extended timer for arbitrary other time measurement tasks via the following steps:
Install and activate a timer overflow interrupt for the timer/counter you want to use. In the ISR you basically just increment a (volatile!) global variable (timer1OvfCount for example), which effectively counts timer overflows and thus extends the actual timer range. The current absolute tick count can then be calculated as timer1OvfCount * topTimerValue + TCNTx.
When an event occurs, e.g. a rising edge on one pin, in the handling routine (e.g. pin-change ISR) you read the current timer/counter (TCNT) value and timer1OvfCount and store these values in another global variable (e.g. startTimestamp), effectively starting your time measurement.
When the second event occurs, e.g. a falling edge on one pin, in the handling routine (e.g. pin-change ISR) you read the current timer/counter (TCNT) value and timer1OvfCount again. Now you have the timestamp of the start of the signal in startTimestamp and the timestamp of the end of the signal in another variable. The difference between these two timestamps is exactly the duration of the pulse you're after.
Two points to consider though:
When using phase-correct PWM modes the timer will alternate between counting up and down successively. This makes finding the actual number of ticks passed since the last TOV interrupt a little more complicated.
There may be a race condition between one piece of code first reading TCNT and then reading timer1OvfCount, and the TOV ISR. This can be countered by disabling interrupts, then reading TCNT, then reading timer1OvfCount, and then checking the TOV interrupt flag; if the flag is set, there's a pending, un-handled overflow interrupt -> enable interrupts and repeat.
However, I'm pretty sure there are a couple of library functions around to maintain software-extended timer/counters that do all the timer-handling for you.
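For reference, a minimal sketch of the software-extended timer and the race-free read described above (assuming Timer1 on the ATmega2560 counting up with TOP = 0xFFFF; it re-reads the counter instead of looping, which amounts to the same thing):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint16_t timer1OvfCount = 0;      /* counts Timer1 overflows */

ISR(TIMER1_OVF_vect)
{
    timer1OvfCount++;                      /* extend the 16-bit timer in software */
}

void ticks_init(void)
{
    TIMSK1 |= (1 << TOIE1);                /* enable the Timer1 overflow interrupt */
    sei();
}

/* Return a 32-bit tick count built from the overflow counter and TCNT1,
   handling the race against a just-pending, not yet serviced overflow. */
uint32_t ticks32(void)
{
    uint8_t  sreg = SREG;
    uint16_t cnt, ovf;

    cli();                                 /* freeze the counter/overflow pair      */
    cnt = TCNT1;
    ovf = timer1OvfCount;
    if (TIFR1 & (1 << TOV1))               /* overflow happened but ISR has not run */
    {
        cnt = TCNT1;                       /* re-read: the counter has wrapped      */
        ovf++;                             /* account for the un-handled overflow   */
    }
    SREG = sreg;                           /* restore the global interrupt flag     */

    return ((uint32_t)ovf << 16) | cnt;
}

Calling ticks32() at the rising and falling edge of the PPM input (e.g. from a pin-change ISR) and subtracting the two results gives the pulse width in timer ticks.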
What is the unit of 700 and 2000? I guess µs. You have not explained much in your question, but I gather that you need pulses with a 25 ms period, in which 700 µs of on-time may correspond to 0 degrees and 2000 µs to 180 degrees. The pulse input of each servo may be attached to any GPIO of the AVR, and these GPIOs provide the PWM signal to the servos, so I guess you could even control all of the motors with only one timer, with this kind of code:
Suppose you have a timer that generates an interrupt every 50 µs.
Now, if you want 700 µs for motor 1, 800 µs for motor 2, 900 µs for motor 3 and 1000 µs for motor 4, then just do this:
#define CYCLE_PERIOD 500     // 25 msec = 50 usec * 500

unsigned short motor1 = 14;  // 700 usec = 50 x 14
unsigned short motor2 = 16;  // 800 usec
unsigned short motor3 = 18;  // 900 usec
unsigned short motor4 = 20;  // 1000 usec

unsigned char motor1_high_flag = 1;
unsigned char motor2_high_flag = 1;
unsigned char motor3_high_flag = 1;
unsigned char motor4_high_flag = 1;

// initial pin states (PA.x is shorthand for the port bit driving each motor)
PA.0 = 1;   // IO for motor1
PA.1 = 1;   // IO for motor2
PA.2 = 1;   // IO for motor3
PA.3 = 1;   // IO for motor4

void timer_interrupt_at_50usec()
{
    motor1--; motor2--; motor3--; motor4--;

    if(!motor1)
    {
        if(motor1_high_flag)
        {
            motor1_high_flag = 0;
            PA.0 = 0;
            motor1 = CYCLE_PERIOD - 14;  // off time = period minus on time
        }
        else
        {
            motor1_high_flag = 1;
            PA.0 = 1;
            motor1 = 14;                 // dummy; if you want to change the duty time, update this in main
        }
    }
    if(!motor2)
    {
        if(motor2_high_flag)
        {
            motor2_high_flag = 0;
            PA.1 = 0;
            motor2 = CYCLE_PERIOD - 16;
        }
        else
        {
            motor2_high_flag = 1;
            PA.1 = 1;
            motor2 = 16;
        }
    }
    if(!motor3)
    {
        if(motor3_high_flag)
        {
            motor3_high_flag = 0;
            PA.2 = 0;
            motor3 = CYCLE_PERIOD - 18;
        }
        else
        {
            motor3_high_flag = 1;
            PA.2 = 1;
            motor3 = 18;
        }
    }
    if(!motor4)
    {
        if(motor4_high_flag)
        {
            motor4_high_flag = 0;
            PA.3 = 0;
            motor4 = CYCLE_PERIOD - 20;
        }
        else
        {
            motor4_high_flag = 1;
            PA.3 = 1;
            motor4 = 20;
        }
    }
}
And tell me, what is an ESC?

16-bit timer in AVR CTC mode

I'm trying to set up a 16-bit timer in CTC mode on an Arduino Uno board (ATmega328, 16 MHz). So I searched the Internet and came up with something like this:
unsigned long Time = 0;

int main (void)
{
    Serial.begin(9600);
    cli();
    TCCR1A = 0;
    TCCR1B = 0;
    TCNT1 = 0;
    OCR1A = 15999;                        // Compare value
    TCCR1B |= (1 << WGM12) | (1 << CS10); // CTC mode, prescaler = 1
    TIMSK1 |= (1 << OCIE1A);              // Enable timer compare interrupt
    sei();

    while(1) {
        Serial.println(TCNT1);
    }
    return 0;
}

ISR(TIMER1_COMPA_vect)
{
    Time++;
    Serial.println(Time);
}
I'm trying to achieve a frequency of 1 kHz, so I'll be able to create intervals which are a couple of milliseconds long.
That's why I chose the compare value to be 15999 (so 16000 - 1) and the prescaler to be 1, so I get (at least what I believe to be the right calculation):
Frequency = 16,000,000 Hz / 16,000 = 1,000 Hz = 1 kHz
The problem now is that, even though Serial.println(TCNT1) shows me numbers counting up to 16000, back to zero, up to 16000, back to zero, ..., Serial.println(Time) just counts up to 8 and then stops, although TCNT1 keeps counting.
I thought about some kind of overflow somewhere, but I could not think of where; the only thing I came up with is that the compare value might be too big, which is, as I see it, obviously not the case since 2^16 - 1 = 65,535 > 15,999.
If I, for instance, set the prescaler to, say, 64 and leave the compare value unchanged, Time counts up as expected. So I'm wondering: why does the ISR stop getting called at a value of 8, but works when increasing the prescaler?
I'm not sure, but depending on the version of Arduino you use, the println call may be blocking. If you call it from your ISR faster than it can complete, the stack will overflow.
If you want higher resolution timing, maybe try differencing the micros() result in your loop(). You should cycle through loop() far faster than once per millisecond.
If you want to do something once per millisecond, capture a start time with micros(), and then subtract it from the current micros() value in a conditional in your loop() function. When you see more than 1000, do the task...
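A minimal sketch of that pattern, assuming the standard Arduino micros() API:

unsigned long lastTick = 0;            // timestamp of the last 1 ms step, in microseconds
unsigned long Time = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long now = micros();
  if (now - lastTick >= 1000UL) {      // one millisecond has elapsed
    lastTick += 1000UL;                // advance by exactly 1 ms to avoid drift
    Time++;
    if (Time % 1000 == 0) {
      Serial.println(Time);            // print once per second, outside any ISR
    }
  }
}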
It seems like the resolution of the timer was too high for my Arduino Uno (16 MHz). Choosing a lower resolution (i.e. a higher compare value) fixed the issue for me.