Accelerometer vibration detection using a threshold - c++

I've programmed the accelerometer to detect vibrations by setting a reasonable min/max threshold on the raw data from all three axes. I need it to only count how many times it detects vibration; however, because of the way the threshold is implemented, I used a delay of about 1 s to prevent multiple miscounts. That works, but it interferes with the ultrasonic module (HC-SR04) when it needs to read distance values synchronously with the accelerometer. I would like some feedback on this.

As far as I understand, you are using Arduino's delay() function. That is a bad idea, as it blocks all the rest of your application; well, you noticed that already...
A better approach is to simply check whether enough time has elapsed, e.g. using the millis() function:
static bool isDelay = false;
static unsigned long timestamp;

if (detect())
{
    isDelay = true;
    timestamp = millis();
}
if (isDelay && millis() - timestamp > 1000)
{
    isDelay = false;
}
if (!isDelay)
{
    // actions to be taken...
}
Always compute the difference between the current time and the stored timestamp: the millis() counter eventually overflows, but the result of the unsigned subtraction is unaffected, so you are safe...
You can skip the isDelay variable entirely if you are sure the relevant event always occurs at least once per overflow period (around 50 days):
static unsigned long timestamp = millis() - 1000;

if (detect())
{
    timestamp = millis();
}
if (millis() - timestamp > 1000)
{
    // actions to be taken...
}
In both variants the variables are static, on the assumption that this code lives in Arduino's loop() function (or one called from loop()). Also, prefer replacing the magic number 1000 with a named constant and you're fine...
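Putting this together with the ultrasonic sensor, the loop might look roughly like the sketch below. This is only an illustration: detect() and readDistanceCm() are hypothetical stand-ins for your accelerometer and HC-SR04 code.

const unsigned long DEBOUNCE_MS = 1000;  // replaces the magic number

unsigned int vibrationCount = 0;

void loop() {
    static unsigned long lastVibration = millis() - DEBOUNCE_MS;
    // Count a vibration once, then ignore further detections for DEBOUNCE_MS.
    if (detect() && millis() - lastVibration >= DEBOUNCE_MS) {
        vibrationCount++;
        lastVibration = millis();
    }
    // The distance read is never blocked by a delay() now.
    float distance = readDistanceCm();
    // ... use vibrationCount and distance ...
}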

Inconsistent chrono::high_resolution_clock delay

I'm trying to implement a MIDI-like clocked sample player.
There is a timer which increments a pulse counter; every 480 pulses make a quarter note, so the pulse period is 1041667 ns at 120 beats per minute.
The timer is not sleep-based and runs in a separate thread, but the delay time seems inconsistent: the period between samples played in a test file fluctuates by about +-20 ms (on some occasions the period is OK and steady; I can't work out what this effect depends on).
I've ruled out the audio backend: I've tried OpenAL as well as SDL_mixer.
void Timer_class::sleep_ns(uint64_t ns){
    auto start = std::chrono::high_resolution_clock::now();
    bool sleep = true;
    while (sleep)
    {
        auto now = std::chrono::high_resolution_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(now - start);
        if (elapsed.count() >= ns) {
            TestTime = elapsed.count();
            sleep = false;
            //break;
        }
    }
}
void Timer_class::Runner(void){
    // this runs as a thread
    while (1){
        sleep_ns(BPMns);
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1); // period of this event fluctuates severely
        }
    }
}
void Player_class::PlayFile(int FileNumber){
#ifdef AUDIO_SDL_MIXER
    if (Mix_PlayChannel(-1, WaveData[FileNumber], 0) == -1) {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
#endif // AUDIO_SDL_MIXER
}
Am I doing something wrong in terms of approach? Is there a better way to implement a timer of this kind?
Deviation higher than 4-5 ms is too much in the case of audio.
I see a large error and a small error. The large error is that your code assumes that the main processing in Runner consistently takes zero time:
if (Run) Transport.IncPlaybackMarker(); // marker increment
if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
    Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
    Player.PlayFile(1); // period of this event fluctuates severely
}
That is, you're "sleeping" for the time you want your loop iteration to take, and then you're doing processing on top of that.
The small error is presuming that you can represent your ideal loop iteration time with an integral number of nanoseconds. This error is so small that it doesn't really matter. However I amuse myself by showing people how they can get rid of this error too. :-)
First let's correct the small error by exactly representing the idealized loop iteration time:
using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;
I know nothing of music, but I'm guessing the above code is right because if you convert iteration_time{1} to nanoseconds, you get approximately 1041667ns. iteration_time{1} is intended to be the precise amount of time you want each iteration of your loop in Timer_class::Runner to take.
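If you want to convince yourself of that conversion, a quick standalone check (my addition, not part of the original answer) could be:

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <ratio>

using quarterPeriod = std::ratio<1, 2>;
using iterationPeriod = std::ratio_divide<quarterPeriod, std::ratio<480>>;
using iteration_time = std::chrono::duration<std::int64_t, iterationPeriod>;

int main()
{
    // duration_cast truncates 1041666.67 ns down to an integral count.
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(iteration_time{1});
    std::printf("%lld ns\n", static_cast<long long>(ns.count()));  // prints 1041666 ns
}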
To correct the large error, you need to sleep until a time_point, as opposed to sleeping for a duration. Here's a generic utility to help you do that:
template <class Clock, class Duration>
void
delay_until(std::chrono::time_point<Clock, Duration> tp)
{
    while (Clock::now() < tp)
        ;
}
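As written, delay_until busy-waits for the whole interval. If burning a core matters, one common variation (my assumption, not something this answer proposes) is to sleep for most of the interval and spin only near the deadline:

#include <chrono>
#include <thread>

template <class Clock, class Duration>
void delay_until_hybrid(std::chrono::time_point<Clock, Duration> tp)
{
    // Sleep coarsely until ~1 ms before the deadline...
    std::this_thread::sleep_until(tp - std::chrono::milliseconds(1));
    // ...then spin for the final stretch to keep precision.
    while (Clock::now() < tp)
        ;
}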
Now if you code Timer_class::Runner to use delay_until instead of sleep_ns, I think you'll get better results:
void
Timer_class::Runner()
{
    auto next_start = std::chrono::steady_clock::now() + iteration_time{1};
    while (true)
    {
        if (Run) Transport.IncPlaybackMarker(); // marker increment
        if (Transport.GetPlaybackMarker() == Transport.GetPlaybackEnd()){ // check if the timer has reached the end, which is 480 pulses
            Transport.SetPlaybackMarker(Transport.GetPlaybackStart());
            Player.PlayFile(1);
        }
        delay_until(next_start);
        next_start += iteration_time{1};
    }
}
I ended up using Howard Hinnant's version of the delay and reducing the buffer size in openal-soft; that's what made a huge difference. Fluctuation is now about +-5 ms for 1/16th notes at 120 BPM (125 ms period) and +-1 ms for quarter notes. It leaves a lot to be desired, but I guess it's okay.

How to make DC motor's RPM come to its maximum value (analog 255) SLOWLY

It is homework and I have completely NO idea; my teacher says you need just while, analogWrite and a counter. I have a DC motor, a transistor and a 9V battery.
I know my code does NOTHING useful; it's just an example.
int analogPin = 3;
int count = 0;

void setup()
{
    pinMode(analogPin, OUTPUT);
}

void loop() {
    while (count < 30){
        analogWrite(analogPin, 255);
        delay(20000);
        count++;
    }
}
You need to use the counter value as your analogue output value:
void loop()
{
    if( count < 256 )
    {
        analogWrite( analogPin, count ) ;
        delay( 20000 );
        count++ ;
    }
}
Note that you do not need a while loop; the Arduino framework already calls loop() iteratively (the clue is in the name). However, if your teacher thinks you need one and is expecting one, you may need to use one just for the marks. Alternatively, discuss it with your teacher and explain why it is unnecessary.
In fact the delay too is arguably bad practice - it is unhelpful in applications where the loop() must do other things while controlling the motor. The following allows other code to run whilst controlling the motor:
unsigned long delay_start = 0 ;

void loop()
{
    if( count < 256 &&
        millis() - delay_start >= 20000ul )
    {
        analogWrite( analogPin, count ) ;
        count++ ;
        delay_start = millis() ;
    }

    // Do other stuff here
}
Because the loop() now never blocks on the delay() function, you can have code that does other things, such as reading switch inputs, and it can react to them instantly, whereas in your solution such inputs could be ignored for up to 20 seconds!
A typical DC motor will not start moving at very low values, so you may want to start count somewhat higher than zero to account for the "dead-band". A true analogue voltage is also generally a poor way to drive a DC motor at varying speed; PWM (which is in fact what analogWrite() produces on most Arduino pins) is more efficient and allows the motor to run at lower speeds. With an analogue signal at low levels (lower than for PWM), your motor will not move and will just get warm and drain your battery.
For test purposes, reduce the delay time; you don't want to sit there for an hour and 25 minutes just to find the code does not work! Set it to, say, 500 ms, start it, and time how long it takes before the motor starts to move. If that is, say, 30 seconds, then you know the motor starts to move when count is about 60; in that case that is a better starting value than zero. Then you can increase your delay back to 20 seconds if you wish, though a DC power supply might be better than a battery; I'm not sure it will last that long.
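For reference, a complete sketch combining the non-blocking ramp with a short test delay might look like this (the pin number comes from the question; the step time and starting count are assumptions to be tuned as described above):

const int analogPin = 3;            // PWM-capable pin, as in the question
const unsigned long STEP_MS = 500;  // short step for testing; raise to 20000 later

int count = 60;                     // hypothetical dead-band offset; tune as described
unsigned long delay_start = 0;

void setup()
{
    pinMode(analogPin, OUTPUT);
}

void loop()
{
    // Advance the ramp one step whenever STEP_MS has elapsed, without blocking.
    if (count < 256 && millis() - delay_start >= STEP_MS)
    {
        analogWrite(analogPin, count);
        count++;
        delay_start = millis();
    }
    // Do other stuff here
}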

C++ buzzer to play piano notes for an Arduino

unsigned long t;
boolean isHigh;

#define BUZZER_PIN 3

void setup() {
    // put your setup code here, to run once:
    pinMode(BUZZER_PIN, OUTPUT);
    isHigh = false;
    t = micros();
}

void loop() {
    playNote('c');
}

void playNote(char note) {
    unsigned long timeToWait;
    unsigned long timeToPlayTheNote = millis();
    while (timeToPlayTheNote - millis() < 1000) {
        if (note == 'c') {
            timeToWait = 1911;
        }
        if (micros() - t > timeToWait) {
            if (!isHigh) {
                digitalWrite(BUZZER_PIN, HIGH);
                isHigh = true;
            } else {
                digitalWrite(BUZZER_PIN, LOW);
                isHigh = false;
            }
            t = micros();
        }
    }
}
I don't know why this won't work. I used to toggle the pin every 1,000 microseconds to play a frequency, but is there any way to make this simpler as well? Also, with this method I have to do (1/f)/2, then convert that value from seconds to microseconds, and use that as the value for timeToWait.
Initialization of `timeToWait` should obviously be outside of the loop.
An array could be used for the timing data (see the sketch after the enum example below).
`t` should probably be initialized inside `playNote`.
Alternatively, you might use an enum for the delay associated with a note:
enum class notes
{
    C = 1911
};
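As for the array suggestion, a small lookup table of half-periods in microseconds might look like this; only the value for C comes from the question, the others are hypothetical placeholders:

// Half-periods in µs for micros()-based toggling; index 0 = C, 1 = D, 2 = E.
// Only C (1911) comes from the question; D and E are hypothetical.
const unsigned long halfPeriods[] = {
    1911,  // C (~261.6 Hz)
    1703,  // D (~293.7 Hz)
    1517   // E (~329.6 Hz)
};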
Well, all these suggestions assume that you don't want to compensate for drifting offsets.
Buzzers have a fixed frequency; they don't work like speakers at all. You will get better results with a real speaker. Don't forget to put a capacitor in series with it so the speaker sees an AC signal: you can fry a speaker quite easily if you feed it a DC signal.
For best results, you should use 2 x 47uF to 100uF electrolytic capacitors back to back, with the negative poles joined together, one positive to the 'duino and the other positive pole connected to the speaker. With higher capacitance, you'll get more bass.
Why don't you use PWM at 50% duty (128) and change the PWM frequency to generate the sound? You could use the Timer1 or Timer3 library for that. Letting the hardware do the work would be more precise and would free your application for other tasks, such as reading a keyboard.
https://playground.arduino.cc/Code/Timer1
Setting the PWM at 0% with an analogWrite() would cut the sound.
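A minimal sketch of that idea using the TimerOne library might look like this (pin 9 on an Uno; the note periods are my assumptions, computed as 1/f in microseconds):

#include <TimerOne.h>

const int SPEAKER_PIN = 9;  // Timer1 PWM output on an Uno

void setup() {
    Timer1.initialize(3822);       // period in µs for middle C (~261.6 Hz)
    Timer1.pwm(SPEAKER_PIN, 512);  // 50% duty (range is 0-1023)
}

void loop() {
    Timer1.pwm(SPEAKER_PIN, 512, 3822);  // C (~261.6 Hz)
    delay(500);
    Timer1.pwm(SPEAKER_PIN, 512, 3405);  // D (~293.7 Hz)
    delay(500);
    Timer1.setPwmDuty(SPEAKER_PIN, 0);   // 0% duty silences the output
    delay(500);
}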

Why did Chromium implement Time::Now? What is the benefit?

The code segment below comes from Chromium. Why is it implemented this way?
// Initialize initial_ticks and initial_time
void InitializeClock() {
    initial_ticks = TimeTicks::Now();
    // Initialize initial_time
    initial_time = CurrentWallclockMicroseconds();
}

// static
Time Time::Now() {
    if (initial_time == 0)
        InitializeClock();
    // We implement time using the high-resolution timers so that we can get
    // timeouts which are smaller than 10-15ms. If we just used
    // CurrentWallclockMicroseconds(), we'd have the less-granular timer.
    //
    // To make this work, we initialize the clock (initial_time) and the
    // counter (initial_ctr). To compute the initial time, we can check
    // the number of ticks that have elapsed, and compute the delta.
    //
    // To avoid any drift, we periodically resync the counters to the system
    // clock.
    while (true) {
        TimeTicks ticks = TimeTicks::Now();
        // Calculate the time elapsed since we started our timer
        TimeDelta elapsed = ticks - initial_ticks;
        // Check if enough time has elapsed that we need to resync the clock.
        if (elapsed.InMilliseconds() > kMaxMillisecondsToAvoidDrift) {
            InitializeClock();
            continue;
        }
        return Time(elapsed + Time(initial_time));
    }
}
I assume your answer lies in the comment of the code you pasted:
// We implement time using the high-resolution timers so that we can get
// timeouts which are smaller than 10-15ms. If we just used
// CurrentWallclockMicroseconds(), we'd have the less-granular timer.
So Now() gives a time value of high resolution, which is beneficial when you need finer resolution than 10-15 ms, as they state in the comment. For instance, if you want to reschedule a task every 100 ns, you need the higher resolution; or if you want to measure the execution time of something, 10-15 ms is an eternity.
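A rough sketch of the same technique in portable C++, combining a monotonic tick source with the wall clock and resyncing periodically (the class name and the resync threshold are my assumptions, not Chromium's):

#include <chrono>

// Derive wall-clock time from a fast monotonic clock, resyncing
// periodically so the derived time cannot drift too far.
class HybridClock {
public:
    std::chrono::system_clock::time_point now() {
        using namespace std::chrono;
        auto elapsed = steady_clock::now() - initial_ticks_;
        if (elapsed > seconds(60)) {  // resync threshold (assumption)
            initialize();
            elapsed = steady_clock::duration::zero();
        }
        return initial_time_ + duration_cast<system_clock::duration>(elapsed);
    }

private:
    void initialize() {
        initial_ticks_ = std::chrono::steady_clock::now();
        initial_time_ = std::chrono::system_clock::now();
    }

    std::chrono::steady_clock::time_point initial_ticks_ = std::chrono::steady_clock::now();
    std::chrono::system_clock::time_point initial_time_ = std::chrono::system_clock::now();
};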

Running code every x seconds, no matter how long execution within loop takes

I'm trying to make an LED blink to the beat of a certain song. The song has exactly 125 bpm.
The code that I wrote seems to work at first, but the longer it runs, the bigger the difference in time between the LED flashes and the next beat. The LED seems to blink a tiny bit too slowly.
I think that happens because lastBlink depends on the blink that happened right before it to stay in sync, instead of syncing to one fixed initial value...
unsigned int bpm = 125;
int flashDuration = 10;
unsigned int lastBlink = 0;

for(;;) {
    if (getTickCount() >= lastBlink + 1000/(bpm/60)) {
        lastBlink = getTickCount();
        printf("Blink!\r\n");
        RS232_SendByte(cport_nr, 4); // LED ON
        delay(flashDuration);
        RS232_SendByte(cport_nr, 0); // LED OFF
    }
}
Add the period to lastBlink instead of rereading getTickCount(), which might have advanced past the exact beat you wanted to wait for:
lastBlink += 1000/(bpm/60);
Busy-waiting is bad, it spins the CPU for no good reason, and under most OS's it will lead to your process being punished -- the OS will notice that it is using up lots of CPU time and dynamically lower its priority so that other, less-greedy programs get first dibs on CPU time. It's much better to sleep until the appointed time(s) instead.
The trick is to dynamically calculate the amount of time to sleep until the next time to blink, based on the current system-clock time. (Simply delaying by a fixed amount of time means you will inevitably drift, since each iteration of your loop takes a non-zero and somewhat indeterminate time to execute).
Example code (tested under MacOS/X, probably also compiles under Linux, but can be adapted for just about any OS with some changes) follows:
#include <stdio.h>
#include <time.h>      // for clock_gettime() when USE_POSIX_MONOTONIC_CLOCK is defined
#include <unistd.h>
#include <sys/times.h>

// unit conversion code, just to make the conversion more obvious and self-documenting
static unsigned long long SecondsToMillis(unsigned long secs) {return secs*1000;}
static unsigned long long MillisToMicros(unsigned long ms) {return ms*1000;}
static unsigned long long NanosToMillis(unsigned long nanos) {return nanos/1000000;}

// Returns the current absolute time, in milliseconds, based on the appropriate high-resolution clock
static unsigned long long getCurrentTimeMillis()
{
#if defined(USE_POSIX_MONOTONIC_CLOCK)
    // Nicer new-style version using clock_gettime() and the monotonic clock
    struct timespec ts;
    return (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) ? (SecondsToMillis(ts.tv_sec)+NanosToMillis(ts.tv_nsec)) : 0;
#else
    // old-school POSIX version using times()
    static clock_t _ticksPerSecond = 0;
    if (_ticksPerSecond <= 0) _ticksPerSecond = sysconf(_SC_CLK_TCK);
    struct tms junk; clock_t newTicks = (clock_t) times(&junk);
    return (_ticksPerSecond > 0) ? (SecondsToMillis((unsigned long long)newTicks)/_ticksPerSecond) : 0;
#endif
}
int main(int, char **)
{
    const unsigned int bpm = 125;
    const unsigned int flashDurationMillis = 10;
    const unsigned int millisBetweenBlinks = SecondsToMillis(60)/bpm;
    printf("Milliseconds between blinks: %u\n", millisBetweenBlinks);

    unsigned long long nextBlinkTimeMillis = getCurrentTimeMillis();
    for(;;) {
        long long millisToSleepFor = nextBlinkTimeMillis - getCurrentTimeMillis();
        if (millisToSleepFor > 0) usleep(MillisToMicros(millisToSleepFor));

        printf("Blink!\r\n");
        //RS232_SendByte(cport_nr, 4); //LED ON
        usleep(MillisToMicros(flashDurationMillis));
        //RS232_SendByte(cport_nr, 0); //LED OFF

        nextBlinkTimeMillis += millisBetweenBlinks;
    }
}
I think the drift problem may be rooted in your using relative time delays by sleeping for a fixed duration rather than sleeping until an absolute point in time. The problem is threads don't always wake up precisely on time due to scheduling issues.
Something like this solution may work for you:
// for readability
using clock = std::chrono::steady_clock;

unsigned int bpm = 125;
int flashDuration = 10;

// time for entire cycle
clock::duration total_wait = std::chrono::milliseconds(1000 * 60 / bpm);

// time for LED off part of cycle
clock::duration off_wait = std::chrono::milliseconds(1000 - flashDuration);

// time for LED on part of cycle
clock::duration on_wait = total_wait - off_wait;

// when is next change ready?
clock::time_point ready = clock::now();

for(;;)
{
    // wait for time to turn light on
    std::this_thread::sleep_until(ready);
    RS232_SendByte(cport_nr, 4); // LED ON

    // reset timer for off
    ready += on_wait;

    // wait for time to turn light off
    std::this_thread::sleep_until(ready);
    RS232_SendByte(cport_nr, 0); // LED OFF

    // reset timer for on
    ready += off_wait;
}
If your problem is drifting out of sync rather than latency I would suggest measuring time from a given start instead of from the last blink.
start = now()
blinks = 0
period = 60 / bpm
while true
    if 0 < ((now() - start) - blinks * period)
        ledon()
        sleep(blinklength)
        ledoff()
        blinks++
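A runnable C++ rendering of that pseudocode might look like the sketch below (my translation; the LED calls are commented stand-ins for the RS232 calls in the question):

#include <chrono>
#include <thread>

int main()
{
    using clock = std::chrono::steady_clock;
    const unsigned bpm = 125;
    const auto period = std::chrono::milliseconds(60000 / bpm);  // 480 ms per beat
    const auto blinkLength = std::chrono::milliseconds(10);

    const auto start = clock::now();
    unsigned long blinks = 0;
    while (true) {
        // Each blink is scheduled relative to the fixed start, so error cannot accumulate.
        std::this_thread::sleep_until(start + blinks * period);
        //RS232_SendByte(cport_nr, 4); // LED ON
        std::this_thread::sleep_for(blinkLength);
        //RS232_SendByte(cport_nr, 0); // LED OFF
        ++blinks;
    }
}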
Since you didn't specify C++98/03, I'm assuming at least C++11, and thus <chrono> is available. This so far is consistent with Galik's answer. However I would set it up so as to use <chrono>'s conversion abilities more precisely, and without having to manually enter conversion factors, except to describe "beats / minute", or actually in this answer, the inverse: "minutes / beat".
using namespace std;
using namespace std::chrono;

using mpb = duration<int, ratio_divide<minutes::period, ratio<125>>>;
constexpr auto flashDuration = 10ms;

auto beginBlink = steady_clock::now() + mpb{0};
while (true)
{
    RS232_SendByte(cport_nr, 4); //LED ON
    this_thread::sleep_until(beginBlink + flashDuration);
    RS232_SendByte(cport_nr, 0); //LED OFF

    beginBlink += mpb{1};
    this_thread::sleep_until(beginBlink);
}
The first thing to do is specify the duration of a beat, which is "minutes/125". This is what mpb does. I've used minutes::period as a stand in for 60, just in an attempt to improve readability and reduce the number of magic numbers.
Assuming C++14, I can give flashDuration real units (milliseconds). In C++11 this would need to be spelled with this more verbose syntax:
constexpr auto flashDuration = milliseconds{10};
And then the loop: This is very similar in design to Galik's answer, but here I only increment the time to start the blink once per iteration, and each time, by precisely 60/125 seconds.
By delaying until a specified time_point, as opposed to a specific duration, one ensures that there is no round off accumulation as time progresses. And by working in units which exactly describe your required duration interval, there is also no round off error in terms of computing the start time of the next interval.
No need to traffic in milliseconds. And no need to compute how long one needs to delay. Only the need to symbolically compute the start time of each iteration.
Um...
Sorry to pick on Galik's answer, which I believe is the second best answer next to mine, but it exhibits a bug which my answer not only doesn't have, but is designed to prevent. I didn't notice it until I dug into it with a calculator, and it is subtle enough that testing might miss it.
In Galik's answer:
total_wait = 480ms; // this is exactly correct
off_wait = 990ms; // likely a design flaw
on_wait = -510ms; // certainly a mistake
And the total time that an iteration takes is on_wait + off_wait, which is exactly 480ms, the same as total_wait, making debugging very challenging: the blink period measures correct even though the LED-on phase has collapsed to nothing.
In contrast, my answer increments ready (beginBlink) only once per iteration, and by exactly 480ms.
My answer is more likely to be right for the simple reason that it delegates more of its computation to the <chrono> library. And in this particular case, that probability paid off.
Avoid manual conversions. Instead let the <chrono> library do them for you. Manual conversions introduce the possibility for error.
You should count the time spent in the process and subtract it from the flashDuration value.
The most obvious issue is that you're losing precision when you divide bpm/60. This always yields an integer (2) instead of 2.08333333...
Calling getTickCount() twice could also lead to some drift.