In my project, the function updateClips reads some facts that are set by CLIPS without any interference from my C++ code. Based on the facts it reads, updateClips calls the needed function.
void updateClips(void)
{
    // read clipsAction
    switch (clipsAction)
    {
    case ActMove:
        goToPosition(0, 0, clipsActionArg);
        break;
    }
}
In the goToPosition function, a message is sent to the vehicle to move to the specified position, and then a while loop waits until the vehicle reaches that position.
void goToPosition(float north, float east, float down)
{
    // Prepare and send the message
    do
    {
        // Read new location information.
    } while (/* Specified position reached? */);
}
The problem is that updateClips should be called every 500 ms, but when goToPosition is called, execution blocks until the target location is reached. During this waiting period, something may happen that requires the vehicle to stop. Therefore, updateClips should be called every 500 ms no matter what, and it should be able to stop goToPosition if it's running.
I tried using threads as follows, but it didn't work reliably and was difficult for me to debug. I think it can be done in a simpler, cleaner way.
case ActMove:
    std::thread t1(goToPosition, 0, 0, clipsActionArg);
    t1.detach();
    break;
My question is, how can I check if the target location is reached without blocking the execution, i.e., without using while?
You probably want an event-driven model.
In an event-driven model, your main engine is a tight loop that reads events, updates state, then waits for more events.
Some events are time based, others are input based.
The only code that is permitted to block your main thread is the main loop, where it blocks until a timer hits or a new event arrives.
It might very roughly look like this:
using namespace std::literals::chrono_literals;

void main_loop( engine_state* state ) {
    bool bContinue = true;
    while (bContinue) {
        update_ui(state);
        while (bContinue && process_message(state, 10ms)) {
            bContinue = update_state(state);
        }
        bContinue = update_state(state);
    }
}
update_ui provides feedback to the user, if required.
process_message(state, duration) looks for a message to process, or for the duration (here 10ms) to elapse. If it sees a message (like goToPosition), it modifies state to reflect that message (for example, it might store the desired destination). It does not block, nor does it take much time.
If no message is received within duration, it returns anyway without modifying state (I'm assuming you want things to happen even if no new input/messages occur).
update_state takes the state and evolves it. state might have a last-updated timestamp; update_state would then make the "physics" reflect the time since the last update. Or do any other updates.
The point is that process_message doesn't do work on the state (it encodes desires), while update_state advances "reality".
update_state returns false if the main loop should exit.
update_state is called once for every process_message call.
updateClips being called every 500ms can be encoded as a repeated automatic event in the queue of messages process_message reads.
// Returns true if it processed a message, false if `ms` elapsed
// without one (this matches how main_loop above uses it).
bool process_message( engine_state* state, std::chrono::milliseconds ms ) {
    auto start = std::chrono::high_resolution_clock::now();
    while (start + ms > std::chrono::high_resolution_clock::now()) {
        // engine_state::delayed is a priority_queue of timestamp/action
        // ordered by timestamp, earliest due first:
        if (!state->delayed.empty()
            && state->delayed.top().stamp <= std::chrono::high_resolution_clock::now()) {
            auto stamp = state->delayed.top().stamp;
            auto f = state->delayed.top().action;
            state->delayed.pop();
            f(stamp, state);
            return true;
        }
        // engine_state::queue is a std::queue<std::function<void(engine_state*)>>
        if (!state->queue.empty()) {
            auto f = state->queue.front();
            state->queue.pop();
            f(state);
            return true;
        }
    }
    return false;
}
The repeated polling is implemented as a delayed action that, as its first operation, inserts a new delayed action due 500ms after this one. We pass in the time the action was due to run.
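A minimal sketch of what engine_state and such a self-rescheduling action might look like. The field names delayed, queue, stamp, and action follow the comments above; everything else (the comparator, the updateClipsEvent name) is my own assumption:

```cpp
#include <chrono>
#include <functional>
#include <queue>
#include <vector>

using engine_clock = std::chrono::high_resolution_clock;

struct engine_state;

struct delayed_action {
    engine_clock::time_point stamp;
    std::function<void(engine_clock::time_point, engine_state*)> action;
};

// Order the priority_queue so the earliest timestamp is on top.
struct later_first {
    bool operator()(const delayed_action& a, const delayed_action& b) const {
        return a.stamp > b.stamp;
    }
};

struct engine_state {
    std::priority_queue<delayed_action, std::vector<delayed_action>, later_first> delayed;
    std::queue<std::function<void(engine_state*)>> queue;
};

// The repeated 500ms event: as its first operation it re-enqueues
// itself, due 500ms after the time it was *due* (not the time it
// actually ran), so the period does not drift.
void updateClipsEvent(engine_clock::time_point due, engine_state* state) {
    state->delayed.push({due + std::chrono::milliseconds(500), updateClipsEvent});
    // ... read clipsAction here and push the matching desire
    // (e.g. a goToPosition target) into state->queue ...
}
```

Seeding it once with state->delayed.push({engine_clock::now(), updateClipsEvent}) keeps updateClips firing every 500ms from then on.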
"Normal" events can be instead pushed into the normal action queue, which is a sequence of std::function<void(engine_state*)> and executed in order.
If there is nothing to do, the above function busy-waits for ms time and then returns. In some cases, we might want to go to sleep instead.
This is just a sketch of an event loop. There are many, many on the internet.
Related
I want to make a program in which two dots blink (staying on for 10ms at a time), one with a delay of 200ms and the other with a delay of 300ms. How can I make the two dots start blinking simultaneously from the beginning? Is there a better way to do that than the following:
for (int i = 1; i < 100; i++)
{
    if (i % 2 == 0)
        circle(10, 10, 2);
    if (i % 3 == 0)
        circle(20, 10, 2);
    delay(10);
    cleardevice();
    delay(100);
}
I would do something like this instead:
int t0=0, t1=0, t=0, s0=0, s1=0, render=1;
for (;;)
{
    if (some stop condition like keyboard hit ...) break;
    // update time, state
    if (t >= t0) { render = 1; s0 = !s0; if (s0) t0 += 10; else t0 += 200; }
    if (t >= t1) { render = 1; s1 = !s1; if (s1) t1 += 10; else t1 += 300; }
    // render
    if (render)
    {
        render = 0;
        cleardevice();
        if (s0) circle(10, 10, 2);
        if (s1) circle(20, 10, 2);
    }
    // update main time
    delay(10); // Sleep(10) would be better but I am not sure it is present in TC++
    t += 10;
    if (t > 10000) // make sure overflow is not an issue
    {
        t  -= 10000;
        t0 -= 10000;
        t1 -= 10000;
    }
}
Beware: the code is untested, as I wrote it directly here (so there might be syntax errors or typos).
The basic idea is to have one global time t with small enough granularity (10ms), and for each object a time of its next event (t0, t1), a state (s0, s1), and its periods (10/200, 10/300).
When the main time reaches an object's event time, swap the object's on/off state and advance its event time to the next state-swap time.
This way you can have any number of objects; just make sure your main time step is small enough.
The render flag ensures the scene is rendered only on change.
To improve timing, you can measure how much time has actually passed (for example with RDTSC) instead of assuming t += 10, with CPU-frequency accuracy.
To display the two circles simultaneously in the first round, you have to satisfy both conditions i%2==0 and i%3==0 at once. You can achieve it by simply changing
for(int i=1;i<100;i++)
to
for(int i=0;i<100;i++)
// ↑ zero here
I am working on an Arduino project where I receive messages through I2C communication. I have a couple of routines in which the program spends a lot of time without returning. Currently, I set a flag when an interrupt occurs, and I check it in those functions in a couple of places; if an interrupt occurred, I return. I was wondering if it is OK for the interrupt function to call my entry-point function instead.
So this is my current interrupt function
void ReceivedI2CMessage(int numBytes)
{
    Serial.print(F("Received message = "));
    while (Wire.available())
    {
        messageFromBigArduino = Wire.read();
    }
    Serial.println(messageFromBigArduino);
    I2CInterrupt = true;
}
In the functions where the program spends most of its time, I had to do this in a couple of places:
if(I2CInterrupt) return;
Now I was wondering if it is OK to just call my entry-point function from within ReceivedI2CMessage. My main concern is that this might cause a memory leak, because I leave behind the functions I was executing when an interrupt happens and go back to the beginning of the program.
It is okay but not preferred. It is always safer to do less -- perhaps simply set a flag -- and exit interrupts as fast as possible. Then take care of the flag/semaphore back in your main loop. For example:
volatile uint8_t i2cmessage = 0; // must be volatile since altered in an interrupt

void ReceivedI2CMessage(int numBytes) // not sure what numBytes is used for...
{
    i2cmessage = 1; // set a flag and get out quickly
}
Then in your main loop:
void loop()
{
    if (i2cmessage == 1) // act on the semaphore
    {
        cli(); // optional, but maybe smart to turn off interrupts while big message traffic is going through...
        i2cmessage = 0; // reset until next interrupt
        while (Wire.available())
        {
            messageFromBigArduino = Wire.read();
            // do something with bytes read
        }
        Serial.println(messageFromBigArduino);
        sei(); // restore interrupts if turned off earlier
    }
}
This achieves the goal of the interrupt, which is ideally to set a semaphore to be acted on quickly in the main loop.
I've implemented code to call a service API every 10 seconds using a C++ client. Most of the time I've noticed it is around 10 seconds, but occasionally I see an issue like the one below where it takes much longer. I'm using a condition variable with wait_until. What's wrong with my implementation? Any ideas?
Here's the timing output:
currentDateTime()=2015-12-21.15:13:21
currentDateTime()=2015-12-21.15:13:57
And the code:
void client::runHeartbeat() {
    std::unique_lock<std::mutex> locker(lock);
    for (;;) {
        // check the current time
        auto now = std::chrono::system_clock::now();
        /* Wait on the condition variable to wake up this thread.
           This thread is woken up in 2 cases:
           1. After a timeout of now + interval, when we want to send the next heartbeat.
           2. When the client is destroyed.
        */
        shutdownHeartbeat.wait_until(locker, now + std::chrono::milliseconds(sleepMillis));
        // After waking up we want to check if a sign-out has occurred.
        if (m_heartbeatRunning) {
            std::cout << "currentDateTime()=" << currentDateTime() << std::endl;
            SendHeartbeat();
        }
        else {
            break;
        }
    }
}
You might want to consider using high_resolution_clock for your needs. system_clock is not guaranteed to have a high resolution, so that may be part of the problem.
Note that its definition is implementation-dependent, so on some compilers you might just get a typedef back onto system_clock.
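A sketch of where that change would go in the question's loop. The names follow the question's code, and whether this actually helps depends on what high_resolution_clock aliases on your platform:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

std::condition_variable shutdownHeartbeat;
std::mutex lock_;  // stands in for the question's `lock` member

// Wait one heartbeat interval against high_resolution_clock instead
// of system_clock; returns cv_status::timeout if nobody signalled.
std::cv_status waitOneInterval(int sleepMillis) {
    std::unique_lock<std::mutex> locker(lock_);
    auto deadline = std::chrono::high_resolution_clock::now()
                  + std::chrono::milliseconds(sleepMillis);
    return shutdownHeartbeat.wait_until(locker, deadline);
}
```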
I'm working on a program that simulates a gas station. Each car at the station is its own thread. Each car must loop through a single bitmask to check if a pump is open, and if it is, update the bitmask, fill up, and notify the other cars that the pump is open again. My current code works, but there are some issues with load balancing. Ideally, all the pumps would be used the same amount and all cars would get equal fill-ups.
EDIT: My program basically takes a number of cars, pumps, and a length of time to run the test for. During that time, cars will check for an open pump by constantly calling this function.
int Station::fillUp()
{
    // loop through the pumps using the bitmask to check if they are available
    for (int i = 0; i < pumpsInStation; i++)
    {
        // Check bitmask to see if pump is open
        stationMutex->lock();
        if ((freeMask & (1 << i)) == 0)
        {
            // Turning the bit on
            freeMask |= (1 << i);
            stationMutex->unlock();

            // Sleeps thread for 30ms and increments counts
            pumps[i].fillTankUp();

            // Turning the bit back off
            stationMutex->lock();
            freeMask &= ~(1 << i);
            stationCondition->notify_one();
            stationMutex->unlock();

            // Sleep long enough for all cars to have a chance to fill up first.
            this_thread::sleep_for(std::chrono::milliseconds((((carsInStation - 1) * 30) / pumpsInStation) - 30));
            return 1;
        }
        stationMutex->unlock();
    }

    // If no pumps are available, wait until one becomes available.
    stationCondition->wait(std::unique_lock<std::mutex>(*stationMutex));
    return -1;
}
I feel the issue has something to do with locking the bitmask when I read it. Do I need to have some sort of mutex or lock around the if check?
It looks like every car checks the availability of pump #0 first, and if that pump is busy it then checks pump #1, and so on. Given that, it seems expected to me that pump #0 would service the most cars, followed by pump #1 serving the second-most cars, all the way down to pump #(pumpsInStation-1), which only ever gets used in the (relatively rare) situation where all of the other pumps are in use simultaneously at the time a new car pulls in.
If you'd like to get better load-balancing, you should probably have each car choose a different random ordering to iterate over the pumps, rather than having them all check the pumps' availability in the same order.
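One way to sketch that per-car random ordering. The helper name and the choice of RNG are mine, not from the question:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Each car builds its own randomly shuffled visit order once, then
// scans the pumps in that order instead of always starting at #0.
std::vector<int> randomPumpOrder(int pumpsInStation, std::mt19937& rng)
{
    std::vector<int> order(pumpsInStation);
    std::iota(order.begin(), order.end(), 0); // 0, 1, ..., n-1
    std::shuffle(order.begin(), order.end(), rng);
    return order;
}
```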
Normally I wouldn't suggest refactoring as it's kind of rude and doesn't go straight to the answer, but here I think it would help you a bit to break your logic into three parts, like so, to better show where the contention lies:
int Station::acquirePump()
{
    // loop through the pumps using the bitmask to check if they are available
    ScopedLocker locker(&stationMutex);
    for (int i = 0; i < pumpsInStation; i++)
    {
        // Check bitmask to see if pump is open
        if ((freeMask & (1 << i)) == 0)
        {
            // Turning the bit on
            freeMask |= (1 << i);
            return i;
        }
    }
    return -1;
}

void Station::releasePump(int n)
{
    ScopedLocker locker(&stationMutex);
    freeMask &= ~(1 << n);
    stationCondition->notify_one();
}
bool Station::fillUp()
{
    // If a pump is available:
    int i = acquirePump();
    if (i != -1)
    {
        // Sleeps thread for 30ms and increments counts
        pumps[i].fillTankUp();
        releasePump(i);

        // Sleep long enough for all cars to have a chance to fill up first.
        this_thread::sleep_for(std::chrono::milliseconds((((carsInStation - 1) * 30) / pumpsInStation) - 30));
        return true;
    }

    // If no pumps are available, wait until one becomes available.
    stationCondition->wait(std::unique_lock<std::mutex>(*stationMutex));
    return false;
}
Now that the code is in this form, there is a load-balancing issue which is important to fix if you don't want to "exhaust" one pump (or if the pump itself might have a lock inside). The issue lies in acquirePump, where you check the availability of pumps in the same order for every car. A simple tweak to balance it better is this:
int Station::acquirePump()
{
    // loop through the pumps using the bitmask to check if they are available
    ScopedLocker locker(&stationMutex);
    for (int n = 0, i = startIndex; n < pumpsInStation; ++n, i = (i + 1) % pumpsInStation)
    {
        // Check bitmask to see if pump is open
        if ((freeMask & (1 << i)) == 0)
        {
            // Change the starting index used to search for a free pump
            // for the next car.
            startIndex = (startIndex + 1) % pumpsInStation;

            // Turning the bit on
            freeMask |= (1 << i);
            return i;
        }
    }
    return -1;
}
Another thing I have to ask is whether it's really necessary (e.g., for memory efficiency) to use bit flags to indicate whether a pump is used. If you can use an array of bool instead, you'll be able to avoid locking completely and simply use atomic operations to acquire and release pumps, which avoids creating a traffic jam of locked threads.
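A sketch of that lock-free variant. The fixed kMaxPumps limit and the names are assumptions; compare_exchange_strong is what makes "check and claim" a single atomic step:

```cpp
#include <array>
#include <atomic>

constexpr int kMaxPumps = 8; // assumed upper bound for the sketch

// false = free, true = in use; globals have static storage, so all
// pumps start out zero-initialized (free).
std::array<std::atomic<bool>, kMaxPumps> pumpBusy;

int acquirePump(int pumpsInStation)
{
    for (int i = 0; i < pumpsInStation; ++i) {
        bool expected = false;
        // Claim the pump only if it is still free, atomically.
        if (pumpBusy[i].compare_exchange_strong(expected, true))
            return i;
    }
    return -1; // nothing free right now; caller retries or backs off
}

void releasePump(int i)
{
    pumpBusy[i].store(false);
}
```

Note there is no condition variable here, so a car that finds no free pump has to sleep briefly and retry rather than block until notified.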
Imagine that the mutex has a queue associated with it, containing the waiting threads. Now, one of your threads manages to get the mutex that protects the bitmask of occupied stations, checks if one specific place is free. If it isn't, it releases the mutex again and loops, only to go back to the end of the queue of threads waiting for the mutex. Firstly, this is unfair, because the first one to wait is not guaranteed to get the next free slot, only if that slot happens to be the one on its loop counter. Secondly, it causes an extreme amount of context switches, which is bad for performance. Note that your approach should still produce correct results in that no two cars collide while accessing a single filling station, but the behaviour is suboptimal.
What you should do instead is this:
1. Lock the mutex to get exclusive access to the possible filling stations.
2. Locate the next free filling station.
3. If none of the stations are free, wait on the condition variable and restart at step 2.
4. Mark the slot as occupied and release the mutex.
5. Fill up the car (this is where the sleep in the simulation actually makes sense; the other one doesn't).
6. Lock the mutex.
7. Mark the slot as free and signal the condition variable to wake up others.
8. Release the mutex again.
Just in case that part isn't clear to you, waiting on a condition variable implicitly releases the mutex while waiting and reacquires it afterwards!
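The steps above could be sketched like this. The names are borrowed from the question's code, and the globals are only there to keep the sketch self-contained:

```cpp
#include <condition_variable>
#include <mutex>

std::mutex stationMutex;
std::condition_variable stationCondition;
unsigned freeMask = 0;       // bit set = pump occupied
const int pumpsInStation = 4;

int acquirePump()
{
    std::unique_lock<std::mutex> lk(stationMutex);   // lock the mutex
    for (;;) {
        // locate the next free filling station
        for (int i = 0; i < pumpsInStation; ++i)
            if ((freeMask & (1u << i)) == 0) {
                freeMask |= (1u << i);               // mark as occupied
                return i;                            // mutex released by lk
            }
        // none free: wait (this releases the mutex while waiting,
        // reacquires it on wakeup), then restart the search
        stationCondition.wait(lk);
    }
}

void releasePump(int i)
{
    {
        std::lock_guard<std::mutex> lk(stationMutex); // lock the mutex
        freeMask &= ~(1u << i);                       // mark the slot as free
    }                                                 // release the mutex
    stationCondition.notify_one();                    // wake up a waiter
}
```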
I have a socket program which acts as both client and server.
It initiates a connection on an input port and reads data from it. In a real-time scenario, it reads data on the input port and sends the data (record by record) on to the output port.
The problem is that while sending data to the output port, CPU usage increases to 50%, which is not permissible.
while (1)
{
    if (IsInputDataAvail == 1) // check if data is available on input port
    {
        // condition to avoid duplicates while sending
        if (LastRecordSent < LastRecordRecvd)
        {
            record_time temprt;
            list<record_time> BufferList;
            list<record_time>::iterator j;
            list<record_time>::iterator i;

            // Storing into a temp list
            for (i = L.begin(); i != L.end(); ++i)
            {
                if ((i->recordId > LastRecordSent) && (i->recordId <= LastRecordRecvd))
                {
                    temprt.listrec = i->listrec;
                    temprt.recordId = i->recordId;
                    temprt.timestamp = i->timestamp;
                    BufferList.push_back(temprt);
                }
            }

            // Sending to output port
            for (j = BufferList.begin(); j != BufferList.end(); ++j)
            {
                LastRecordSent = j->recordId;
                std::string newlistrecord = j->listrec;
                newlistrecord.append("\n");
                char* newrecord = new char[newlistrecord.size() + 1];
                strcpy(newrecord, newlistrecord.c_str());
                if (s.OutputClientAvail() == 1) // check if output client is available
                {
                    int ret = s.SendBytes(newrecord, strlen(newrecord));
                    if (ret < 0)
                    {
                        log1.AddLogFormatFatal("Nice Send Thread : Nice Client Disconnected");
                        --connected;
                        return;
                    }
                }
                else
                {
                    log1.AddLogFormatFatal("Nice Send Thread : Nice Client Timedout..connection closed");
                    --connected; // if output client not available disconnect after a timeout
                    return;
                }
            }
        }
    }
    // Sleep(100); if we include sleep here CPU usage is less, but to send data in real time I need to remove it.
} // End of while loop
If I remove the Sleep(), CPU usage goes very high while sending data to the output port.
Are there any possible ways to maintain real-time data transfer and reduce CPU usage? Please suggest.
There are two potential CPU sinks in the listed code. First, the outer loop:
while (1)
{
    if (IsInputDataAvail == 1)
    {
        // Not run most of the time
    }
    // Sleep(100);
}
Given that the Sleep call significantly reduces your CPU usage, this spin-loop is the most likely culprit. It looks like IsInputDataAvail is a variable set by another thread (though it could be a preprocessor macro), which would mean that almost all of that CPU is being used to run this one comparison instruction and a couple of jumps.
The way to reclaim that wasted power is to block until input is available. Your reading thread probably does so already, so you just need some sort of semaphore to communicate between the two, with a system call to block the output thread. Where available, the ideal option would be sem_wait() in the output thread, right at the top of your loop, and sem_post() in the input thread, where it currently sets IsInputDataAvail. If that's not possible, the self-pipe trick might work in its place.
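A sketch of that handoff with POSIX semaphores (so Unix-like platforms; on Windows you'd use an event object instead, and the function names here are illustrative):

```cpp
#include <semaphore.h>

sem_t dataAvailable; // initialize once with sem_init(&dataAvailable, 0, 0)

// Input thread: after queuing a record, announce it.
void recordReceived()
{
    sem_post(&dataAvailable);
}

// Output thread: the top of the loop blocks here, instead of
// spinning on IsInputDataAvail. One wakeup per posted record.
void waitForRecord()
{
    sem_wait(&dataAvailable);
}
```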
The second potential CPU sink is in s.SendBytes(). If a positive result indicates that the record was fully sent, then that method must be using a loop. It probably uses a blocking call to write the record; if it doesn't, then it could be rewritten to do so.
Alternatively, you could rewrite half the application to use select(), poll(), or a similar method to merge reading and writing into the same thread, but that's far too much work if your program is already mostly complete.
if(IsInputDataAvail==1)//check if data is available on input port
Get rid of that. Just read from the input port. It will block until data is available. This is where most of your CPU time is going. However there are other problems:
std::string newlistrecord = j->listrec;
Here you are copying data.
newlistrecord.append("\n");
char* newrecord= new char [newlistrecord.size()+1];
strcpy (newrecord, newlistrecord.c_str());
Here you are copying the same data again. You are also dynamically allocating memory and leaking it.
if ( s.OutputClientAvail() == 1) //check if output client is available
I don't know what this does but you should delete it. The following send is the time to check for errors. Don't try to guess the future.
int ret = s.SendBytes(newrecord,strlen(newrecord));
Here you are recomputing the length of a string whose length you probably already knew back when you set j->listrec. It would be much more efficient to just call s.SendBytes() directly with j->listrec and then again with "\n" than to do all this. TCP will coalesce the data anyway.
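A sketch of that simplified send path. The sender is abstracted as a callback so the snippet stands alone; in the real code it would be s.SendBytes:

```cpp
#include <cstddef>
#include <functional>
#include <string>

// Send the record bytes and the newline separately: no copy, no
// dynamic allocation, no strlen. TCP coalesces the two writes.
void sendRecord(const std::string& listrec,
                const std::function<void(const char*, std::size_t)>& send)
{
    send(listrec.data(), listrec.size());
    send("\n", 1);
}
```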