I'm working on a smart irrigation system built on an ESP8266 microcontroller, coded in the Arduino IDE. Basically, I want to send temperature data to my database every 4 minutes, check the sensors every 10 minutes, or more generally execute some code after a certain time, but I don't want to use delay(), since other code in the same loop should keep executing.
So, is there a way to let the rest of the code keep running and, when the timer expires, execute the send to the database?
Thanks
The Arduino Library Manager has a library called NTPClient, which provides, as you might guess, an NTP client. Using this with the system clock lets you keep reasonable time without a battery-backed RTC.
Once you know what time it is, scheduling events becomes trivial. This isn't a fully working example, but should give you the idea. The docs have more information. The NTPClient also works quite well with the AceTime library for time zone and other management.
#include <NTPClient.h>
#include <WiFiUdp.h>
#include <ESP8266WiFi.h>
#include <AceTime.h>
#include <ctime>

using namespace ace_time;
using namespace ace_time::clock;
using namespace std;

WiFiUDP ntpUDP;
const long utcOffsetInSeconds = 0;

// Get UTC from NTP - let AceTime figure out the local time.
// 3600 s = sync time once per hour, which is sufficient.
// Don't hammer pool.ntp.org!
NTPClient timeClient(ntpUDP, "pool.ntp.org", utcOffsetInSeconds, 3600);

static BasicZoneProcessor estProcessor;
static SystemClockLoop systemClock(nullptr /*reference*/, nullptr /*backup*/);

const char* ssid = "your SSID";
const char* password = "your wifi password";

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, password);
  Serial.println("Connecting to wifi...");
  while (WiFi.status() != WL_CONNECTED) {
    Serial.println('.');
    delay(500);
  }
  systemClock.setup();
  timeClient.begin();
}

int clockUpdateCounter = 50000;

void loop() {
  // Update the system clock from the NTP client periodically.
  // This counter-based check could be more elegant; it depends on your loop
  // speed, and you could use the real time for this too. Kept simple here.
  if (++clockUpdateCounter > 50000) {
    clockUpdateCounter = 0;
    timeClient.update();
    // It doesn't matter how often you call timeClient.update(); you can put
    // it in the main loop() if you want. We supplied 3600 s as the refresh
    // interval in the constructor, and internally update() checks whether
    // enough time has passed before making a request to the NTP server.
    auto estTz = TimeZone::forZoneInfo(&zonedb::kZoneAmerica_Toronto, &estProcessor);
    auto estTime = ZonedDateTime::forUnixSeconds(timeClient.getEpochTime(), estTz);
    // Using the system clock is optional, but has a few conveniences. Here we
    // just make sure the systemClock remains sync'd with the NTP server; it
    // doesn't matter how often you do this, but once an hour is plenty.
    systemClock.setNow(estTime.toEpochSeconds());
  }
  systemClock.loop();
  delay(30);
}
The above keeps the system clock in sync with the NTP server. You can then use the system clock to fetch the time and use it for all kinds of timing purposes, e.g.:
// get Time
acetime_t nowT = systemClock.getNow();
auto estTz = TimeZone::forZoneInfo(&zonedb::kZoneAmerica_Toronto, &estProcessor);
auto nowTime = ZonedDateTime::forEpochSeconds(nowT, estTz);
Checking the time is fast, so you can do it inside loop() and only trigger the action once the desired interval has elapsed.
This is a particularly nice solution for database integration since you can record a real timestamp of the actual date and time when logging your data to the database.
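For the 4-minute and 10-minute schedules from the question, a minimal sketch of that pattern might look like this (sendTemperatureToDatabase() and checkSensors() are hypothetical placeholders for your own code):

acetime_t lastDbSend = 0;
acetime_t lastSensorCheck = 0;

void loop() {
  acetime_t now = systemClock.getNow();
  if (now - lastDbSend >= 4 * 60) {          // 4 minutes, in seconds
    lastDbSend = now;
    sendTemperatureToDatabase();             // hypothetical helper
  }
  if (now - lastSensorCheck >= 10 * 60) {    // 10 minutes
    lastSensorCheck = now;
    checkSensors();                          // hypothetical helper
  }
  systemClock.loop();
  // the rest of loop() keeps executing in the meantime
}

Subtracting epoch seconds avoids any drift from counting loop iterations, and the same now value can be turned into a ZonedDateTime for the database timestamp.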
I'm trying to develop a smart lock with an RFID module, an ESP8266, and integration with SinricPro (which acts as the bridge for the lock to integrate with Alexa and Google Home).
It turns out that I'm having a very annoying problem, and I would like your help to solve it!
In this function, I execute what needs to be executed after a card is presented to the RFID module:
void handleRFID() {
  if (RFID_card_is_not_present()) return;

  String card_id = get_RFID_card_ID();
  bool RFID_card_is_valid = validate_RFID_card(card_id);

  if (RFID_card_is_valid) {
    Serial.printf("The RFID card \"%s\" is valid.\r\n", card_id.c_str());
    unlock_with_auto_relock();
    send_lock_state(false);
    // Insert a timeout here, to start reading cards again only after TEMP_AUTOLOCK is over
  } else {
    Serial.printf("The RFID card \"%s\" is not valid.\r\n", card_id.c_str());
    // Insert a delay here, to start reading cards again only after X time (something like 3 seconds)
  }
}
If I run the code as it is, my serial monitor is spammed with messages saying the card is valid/not valid, and a shower of requests is sent to the SinricPro API, because nothing limits how often the RFID module reads cards, the way a delay() call would.
But unfortunately I can't use delay(), so that's already out of the question.
So basically what I want to do is limit the speed at which the cards are read, by inserting some wait time where I put the comments in the code. Can someone help me?
For better understanding, I'll make my code available, and the RFID module library I'm using!
Project code: https://github.com/ogabrielborges/smartlock-rfid-iot
MFRC522 library: https://github.com/miguelbalboa/rfid
My serial monitor is spammed with messages saying the card is registered/not registered, because I don't know how to limit how often the module reads the tag.
[Screenshot: serial monitor message spam while the tag is held on the sensor]
There is a "blink without delay" example in the arduino environment (or there used to be?).
You can find a similar example here: https://docs.arduino.cc/built-in-examples/digital/BlinkWithoutDelay
Basically what you do is remember the current time in a global and check if enough time has passed for the next check:
These are your globals:
unsigned long previousMillis = 0;  // will store the last time you did your magic
const long interval = 1000;        // interval at which to do your magic (milliseconds, 1000 = 1 sec)
Then do this somewhere in a function:
unsigned long currentMillis = millis();
if (currentMillis - previousMillis >= interval) {
  // save the last time you did your magic
  previousMillis = currentMillis;
  // Do your magic here
}
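Applied to your handleRFID(), a minimal sketch might look like this (assuming TEMP_AUTOLOCK from your code is a duration in milliseconds; the 3-second value comes from your comment):

unsigned long lastReadMillis = 0;
unsigned long cooldownMs = 0;  // 0 means no cooldown is active

void handleRFID() {
  // Ignore the reader entirely until the cooldown has elapsed
  if (millis() - lastReadMillis < cooldownMs) return;
  if (RFID_card_is_not_present()) return;

  String card_id = get_RFID_card_ID();
  if (validate_RFID_card(card_id)) {
    Serial.printf("The RFID card \"%s\" is valid.\r\n", card_id.c_str());
    unlock_with_auto_relock();
    send_lock_state(false);
    cooldownMs = TEMP_AUTOLOCK;  // read again only after the auto-relock period
  } else {
    Serial.printf("The RFID card \"%s\" is not valid.\r\n", card_id.c_str());
    cooldownMs = 3000;           // ~3 s before reading again
  }
  lastReadMillis = millis();     // start the cooldown
}

The millis() subtraction is wraparound-safe with unsigned arithmetic, so this keeps working even after millis() overflows (about every 49 days).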
I am exploring sound generation using C++ in Ubuntu Linux. Here is my code:
#include <iostream>
#include <cmath>
#include <stdint.h>
#include <ncurses.h>
//to compile: make [file_name] && ./[file_name]|aplay
int main()
{
    initscr();
    cbreak();
    noecho();
    nodelay(stdscr, TRUE);
    scrollok(stdscr, TRUE);
    timeout(0);
    for (int t = 0;; t++)
    {
        int ch = getch();
        if (ch == 'q')
        {
            break;
        }
        uint8_t temp = t;
        std::cout << temp;
    }
    endwin();  // restore the terminal state on exit
}
When this code is run, I want it to generate sound until I press "q" on my keyboard, after which I want the program to quit. This works fine; however, there is a noticeable delay between pressing the key and the program quitting. This is not due to a delay in ncurses: when I run the program without std::cout << temp; (i.e. no sound generated), there is no latency.
Is there a way to amend this? If not, how are real-time responsive audio programs written?
Edits and suggestions to the question are welcome. I am a novice to ALSA, so I am not sure if any additional details are required to replicate the bug.
The latency in the above loop is most likely due to delays introduced by the ncurses getch function.
Typically for real-time audio you will want a real-time audio thread and a non-real-time user-control thread. The user-control thread can alter the memory space of the real-time audio thread, which forces the real-time audio loop to adjust synthesis as required.
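As a minimal sketch of that split (plain C++11, independent of any particular audio library; the synthesis itself is omitted):

#include <atomic>
#include <thread>
#include <cstdio>

std::atomic<double> frequency{440.0};  // parameter the control thread may change
std::atomic<bool> running{true};

void audioThread() {
    while (running.load()) {
        double f = frequency.load();
        (void)f;  // ... synthesize and write the next audio block using f ...
    }
}

int main() {
    std::thread audio(audioThread);
    // Non-realtime control thread: block on user input, mutate shared state.
    for (int ch; (ch = std::getchar()) != 'q'; )
        frequency.store(ch == '+' ? 880.0 : 440.0);
    running.store(false);
    audio.join();
}

The audio loop never blocks on input; it just reads whatever the control thread last stored.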
In this gtkIOStream example, a full duplex audio class is created. The process method in the class can have your synthesis computation compiled in. This will handle the playback of your sound using ALSA.
To get user input, one possibility is to add a threaded method to the class by inheriting from the FullDuplexTest class, like so:
class UIALSA : public FullDuplexTest, public ThreadedMethod {
    void *threadMain(void) {
        while (1) {
            // use getchar here to block and wait for user input
            // change the memory in FullDuplexTest to indicate a change in variables
        }
        return NULL;
    }
public:
    UIALSA(const char *devName, int latency) : FullDuplexTest(devName, latency), ThreadedMethod() {};
};
Then change all references to FullDuplexTest to UIALSA in the original test file (you will probably have to fix some compile-time errors):
UIALSA fullDuplex(deviceName, latency);
You will also need to call UIALSA::run() to make sure the UI thread is running and listening for user input. You can add the call before you call go():
fullDuplex.run(); // start the UI thread
res=fullDuplex.go(); // start the full duplex read/write/process going.
I have an ESP32 collecting data from a moisture sensor, which it then serves on our network. Our WiFi turns off between 1 am and 6 am (because no one is using it). The ESP does not automatically try to reconnect, so it gathered data all night which I simply cannot access now.
For obvious reasons I do not want it to halt data collection when it loses connection to our network, so I cannot have a blocking loop that tries to reconnect. I tried this code:
void loop() {
  sensor_value = analogRead(sensor_pin);
  Serial.println(sensor_value);
  push_value(float(sensor_value) / 2047.0);

  //============
  // RELEVANT BIT
  //============
  if (WiFi.status() != WL_CONNECTED) {
    // Try to reconnect if the connection is lost...
    WiFi.disconnect();
    WiFi.begin(ssid, pwd);
  }

  delay(second_delay * 1000);
}
I've seen everyone run WiFi.disconnect() before attempting to reconnect. Is that necessary? Also, does WiFi.begin() pause execution? I can't test my code right now, unfortunately.
I am using the Arduino IDE and WiFi.h.
And before you ask: yes, 2047 is correct. I am running the sensor on the wrong voltage, which results in about this max value.
Given that you've tagged esp8266 wifi, I'm assuming you're using that library. If so, then WiFi.begin() will not block execution. The library enables auto-reconnect by default, so it will automatically reconnect to the last access point when it becomes available. Any client functions will simply return an error code while disconnected. I do not know of any reason WiFi.disconnect() should be called before begin().
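If you want to make the default explicit, something like this should work (a sketch assuming the ESP8266WiFi library, whose ESP32 counterpart has the same setAutoReconnect() call; ssid, pwd, sensor_pin, and push_value() are placeholders from the question):

#include <ESP8266WiFi.h>

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);
  WiFi.setAutoReconnect(true);  // the default; shown here for clarity
  WiFi.begin(ssid, pwd);
}

void loop() {
  // Data collection continues regardless of WiFi state
  int sensor_value = analogRead(sensor_pin);
  if (WiFi.status() == WL_CONNECTED) {
    push_value(float(sensor_value) / 2047.0);  // only push while online
  }
  delay(1000);
}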
I'm implementing a class that talks to a motor controller over a USB device. I have everything working except for a way to indicate whether a parameter fetched over the comm link is "fresh" or not. What I have so far:
#include <map>

class MyCommClass
{
public:
    bool getSpeed( double *speed );

private:
    void rxThread();

    struct MsgBase
    { /* .. */ };
    struct Msg1 : public MsgBase
    { /* .. */ };
    struct Msg2 : public MsgBase
    { /* .. */ };
    /* .. */
    struct MsgN : public MsgBase
    { /* .. */ };

    Msg1 msg1;
    Msg2 msg2;
    /* .. */
    MsgN msgn;

    std::map<unsigned long, MsgBase*> messages;  // keyed by message id
};
rxThread() is an infinite loop running in a separate thread, checking the USB device for available messages. Each message has a unique identifier, which rxThread() uses to stick it into the right msgx object. What I need is for getSpeed() to be able to tell whether the current speed value is "fresh" or "stale", i.e. whether the msgx object that contains the speed value was updated within a specified timeout period. Each message object needs to implement its own timeout (since they vary per message).
All messages are transmitted periodically by the motor controller, but there are also some that get transmitted as soon as their contents change (but they will also be transmitted periodically if the contents do not change). This means that receiving a message at more than the nominal rate is OK, but it should appear at least once within the maximum timeout period.
The USB device provides timestamps along with the messages so I have access to that information. The timestamp does not reflect the current time, it is an unsigned long number with microsecond resolution that the device updates every time a message is received. I suspect the device just starts incrementing this from 0 from the time I call its initialization functions. A couple of different ways I can think of implementing this are:
1. Each message object launches a thread that waits (WaitForSingleObject) for the timeout period in an infinite loop. After each wait it checks whether a counter variable (cached before the wait) has been incremented; if not, it sets a flag marking the message as stale. The counter would be incremented every time rxThread() updates that message object.
2. rxThread(), in addition to stuffing messages, also iterates through the list of messages and checks the timestamp at which each was last updated. If the timestamp exceeds the timeout, it flags the message as stale. This method might have a problem with the amount of processing required. It probably wouldn't be a problem on most machines, but this code needs to run on a piggishly slow 'industrial computer'.
I'd really appreciate your thoughts and suggestions on how to implement this. I'm open to ideas other than the two I've mentioned. I'm using Visual Studio 2005 and cross-platform portability is not a big concern as the USB device drivers are Windows only. There are currently about 8 messages I'm monitoring, but it would be nice if the solution were lightweight enough that I could add several (maybe another 8) more without running into processing horsepower limitations.
Thanks in advance,
Ashish.
If you don't need to do something "right away" when a message becomes stale, I think you can skip using timers if you store both the computer's time and the device's timestamp with each message:
#include <ctime>
#include <climits>

class TimeStamps {
public:
    std::time_t sys_time() const;   // in seconds
    unsigned long dev_time() const; // in ms
    /* .. */
};

class MyCommClass {
    /* .. */
private:
    struct MsgBase {
        TimeStamps time;
        /* .. */
    };

    TimeStamps most_recent_time;

    bool msg_stale(MsgBase const& msg, unsigned long ms_timeout) const {
        if (most_recent_time.sys_time() - msg.time.sys_time() > ULONG_MAX / 1000)
            return true; // device timestamps have wrapped around
        // Note the subtraction may "wrap".
        return most_recent_time.dev_time() - msg.time.dev_time() >= ms_timeout;
    }
    /* .. */
};
Of course, TimeStamps can be another nested class in MyCommClass if you prefer.
Finally, rxThread() should set the appropriate message's TimeStamps object and the most_recent_time member each time a message is received. This won't detect a message as stale if it became stale after the last message of any type was received, but your second proposed solution would have the same issue, so maybe that doesn't matter. If it does matter, something like this could still work, provided msg_stale() also compares against the current time.
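For instance, a sketch of that variant (using the 1-second resolution of std::time, which is coarse but enough for multi-second timeouts):

bool msg_stale_now(MsgBase const& msg, unsigned long ms_timeout) const {
    if (msg_stale(msg, ms_timeout))
        return true;
    // Also compare against the current wall-clock time, so a message can go
    // stale even if no further messages of any type arrive at all.
    std::time_t now = std::time(NULL);
    return (now - msg.time.sys_time()) * 1000UL >= ms_timeout;
}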
How about storing the timestamp in the message, and having getSpeed() check the timestamp?
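A sketch of that idea (the field names and timeout constant are hypothetical):

bool MyCommClass::getSpeed(double *speed)
{
    *speed = msg1.speed;  // assumes Msg1 carries the speed value
    // Freshness is decided lazily, at the moment of the call: compare the
    // timestamp rxThread() stored in the message against "now".
    std::time_t now = std::time(NULL);
    return now - msg1.last_update_sys_time <= SPEED_TIMEOUT_SECONDS;
}

This avoids per-message timer threads entirely; nothing happens until someone actually asks for the value.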
Is there a way to get notified when there is update to the system time from a time-server or due to DST change? I am after an API/system call or equivalent.
This is part of my effort to optimise generating a value similar to SQL's NOW(), at an hour granularity, without using SQL.
You can use timerfd_create(2) to create a timer, then mark it with the TFD_TIMER_CANCEL_ON_SET option when setting it. Set it for an implausible time in the future and then block on it (with poll/select etc.) - if the system time changes then the timer will be cancelled, which you can detect.
(this is how systemd does it)
e.g.:
#include <sys/timerfd.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>

int main(void) {
    int fd = timerfd_create(CLOCK_REALTIME, 0);
    timerfd_settime(fd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET,
                    &(struct itimerspec){ .it_value = { .tv_sec = INT_MAX } },
                    NULL);

    printf("Waiting\n");
    char buffer[10];
    if (-1 == read(fd, buffer, sizeof buffer)) {
        if (errno == ECANCELED)
            printf("Timer cancelled - system clock changed\n");
        else
            perror("error");
    }
    close(fd);
    return 0;
}
I don't know if there is a way to be notified of a change in the system time, but:
The system time is stored as UTC, so there is never a change due to DST to be notified of.
If my memory is correct, the NTP daemon usually adjusts the clock by changing its speed (slewing) rather than stepping it, so again there is no change to be notified of.
So the only times you would be notified are after an uncommon manipulation.
clock_gettime on most recent Linux systems is incredibly fast, and usually amazingly precise as well; you can find out the precision using clock_getres. But for hour-level timestamps, gettimeofday might be more convenient, since it can do the timezone adjustment for you.
Simply call the appropriate system call and do the division into hours each time you need a timestamp; all the other time adjustments from NTP or whatever will already have been done for you.
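A minimal sketch of that approach (using clock_gettime plus localtime_r, which applies the timezone, including DST, for you):

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);  // already NTP-adjusted by the kernel

    struct tm local;
    localtime_r(&ts.tv_sec, &local);     // timezone/DST applied here

    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:00", &local);  // hour granularity
    printf("%s\n", buf);
    return 0;
}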