In C++ I am running a bash command. The command is "echo | openssl s_client -connect zellowork.io:443"
But if this fails I want it to time out in 4 seconds. Prefixing the command with the typical "/usr/bin/timeout 4 /usr/bin/sh -c" does not work when run from the C++ code.
So I was trying to make a function that uses popen to send out the command and then waits up to 4 seconds for the command to complete before it returns. The difficulty I have is that fgets is blocking: it will wait for 20 seconds (on this command) before it unblocks and fails, and I cannot find any way to check whether there is something to read in a stream before I call fgets. Here is my code.
ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms)
{
    ExecuteCmdReturn ecr;
    ecr.success = false;
    ecr.outstr = "";
    FILE *in;
    char buff[4096];
    u64_t startTime = TWTime::ticksSinceStart();
    u64_t stopTime = startTime + ms;
    if (!(in = popen(cmd.c_str(), "r"))) {
        return ecr;
    }
    fseek(in, 0, SEEK_SET);
    stringstream ss("");
    long int lastPos = 0;
    long int newPos = 0;
    while (TWTime::ticksSinceStart() < stopTime) {
        newPos = ftell(in);
        if (newPos > lastPos) {
            lastPos = newPos;
            if (fgets(buff, sizeof(buff), in) == NULL) {
                break;
            } else {
                ss << buff;
            }
        } else {
            msSleep(10);
        }
    }
    auto rc = pclose(in);
    ecr.success = true;
    ecr.outstr = ss.str();
    return ecr;
}
Use std::async to express that you may get your result asynchronously (a std::future<ExecuteCmdReturn>)
Use std::future<T>::wait_for to timeout waiting for the result.
Here's an example:
First, a surrogate for your executeCmdWithTimeout function that randomly sleeps between 0 and 5 seconds.
#include <chrono>
#include <iostream>
#include <random>
#include <thread>

int do_something_silly()
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<> distribution(0, 5);
    auto sleep_time = std::chrono::seconds(distribution(gen));
    std::cout << "Sleeping for " << sleep_time.count() << " seconds\n";
    std::this_thread::sleep_for(sleep_time);
    return 42;
}
Then, launching the task asynchronously and timing out on it:
#include <future>

using namespace std::chrono_literals;

int main()
{
    auto silly_result = std::async(std::launch::async, [](){ return do_something_silly(); });
    auto future_status = silly_result.wait_for(3s);
    switch (future_status)
    {
        case std::future_status::timeout:
            std::cout << "timed out\n";
            break;
        case std::future_status::ready:
            std::cout << "finished. Result is " << silly_result.get() << std::endl;
            break;
        case std::future_status::deferred:
            std::cout << "The function hasn't even started yet.\n";
    }
}
I used a lambda here even though I didn't need to; in your situation it will make things easier, because it looks like you are calling a member function and you'll want to capture [this].
In your case, main would become ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms) and do_something_silly would become a private helper, named something like executeCmdWithTimeout_impl.
If you time out waiting for the process to complete, you can optionally kill the process so that you aren't wasting any extra cycles.
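For illustration, a minimal sketch of what that wrapper could look like. Here executeCmdWithTimeout_impl is a hypothetical private helper holding your existing popen/fgets loop, and the [this] capture assumes non-static member functions. One caveat: the destructor of a future returned by std::async blocks until the task finishes, so to truly return early you would also need to kill the child process so that pclose() returns.

    ExecuteCmdReturn Utils::executeCmdWithTimeout(string cmd, int ms)
    {
        // Run the blocking popen/pclose work on another thread.
        auto fut = std::async(std::launch::async,
                              [this, cmd] { return executeCmdWithTimeout_impl(cmd); });

        if (fut.wait_for(std::chrono::milliseconds(ms)) == std::future_status::ready) {
            return fut.get();   // command finished in time
        }

        // Timed out: report failure (and, ideally, kill the child process here).
        ExecuteCmdReturn ecr;
        ecr.success = false;
        ecr.outstr = "";
        return ecr;
    }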
If you find yourself creating many short-lived threads like this, consider thread pooling. I've had a lot of success with boost::thread_pool (and if you end up going that direction, consider using Boost.Process for handling your process creation).
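For reference, a minimal sketch of the pooled variant using boost::asio::thread_pool; the posted tasks here are just placeholders for the real command execution.

    #include <boost/asio/post.hpp>
    #include <boost/asio/thread_pool.hpp>
    #include <iostream>

    int main()
    {
        boost::asio::thread_pool pool(4);          // four worker threads, reused across tasks

        for (int i = 0; i < 8; ++i) {
            boost::asio::post(pool, [i] {
                std::cout << "task " << i << " running\n";   // stand-in for the real work
            });
        }

        pool.join();                               // wait for all posted work to finish
    }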
Related
In a C++ application running on a Raspberry Pi, I am using a loop in a thread to continuously wait for SocketCAN messages and process them. The messages come in at around 1kHz, as verified using candump.
After waiting for poll() to return and reading the data, I read the timestamp using ioctl() with SIOCGSTAMP. I then compare the timestamp with the previous one, and this is where it gets weird:
Most of the time, the difference is around 1ms, which is expected. But sometimes (probably when the data processing takes longer than usual or gets interrupted by the scheduler) it is much bigger, up to a few hundred milliseconds. In those instances, the messages that should have come in in the meantime (visible in candump) are lost.
How is that possible? If there is a delay somewhere, shouldn't the incoming messages get buffered? Why do they get lost?
This is the slightly simplified code:
while (!done)
{
    struct pollfd fd = {.fd = canSocket, .events = POLLIN};
    int pollRet = poll(&fd, 1, 20); // 20ms timeout
    if (pollRet < 0)
    {
        std::cerr << "Error polling canSocket" << errno << std::endl;
        done = true;
        return;
    }
    if (pollRet == 0) // timeout, never happens as expected
    {
        std::cout << "canSocket poll timeout" << std::endl;
        if (done) break;
        continue;
    }
    struct canfd_frame frame;
    int size = sizeof(frame);
    int readLength = read(canSocket, &frame, size);
    if (readLength < 0) throw std::runtime_error("CAN read failed");
    else if (readLength < size) throw std::runtime_error("CAN read incomplete");
    struct timeval timestamp;
    ioctl(canSocket, SIOCGSTAMP, &timestamp);
    uint64_t timestamp_us = (uint64_t)timestamp.tv_sec * 1e6 + (uint64_t)timestamp.tv_usec;
    static uint64_t timestamp_us_last = 0;
    if ((timestamp_us - timestamp_us_last) > 20000)
    {
        std::cout << "timestamp difference large: " << (timestamp_us - timestamp_us_last) << std::endl; // this sometimes happens, why?
    }
    timestamp_us_last = timestamp_us;
    // data processing
}
I need to delay a function by x amount of time. The problem is that I can't use sleep nor any function that suspends the function (the function is a loop that contains more functions, and sleeping/suspending one will sleep/suspend them all).
Is there a way I could do it?
If you want to execute some specific code at a certain time interval and don't want to use threads (to be able to suspend), then you have to keep track of time and execute the specific code when the delay time has been exceeded.
Example (pseudo):
timestamp = getTime();
while (true) {
    if (getTime() - timestamp > delay) {
        //main functionality
        //reset timer
        timestamp = getTime();
    }
    //the other functionality you mentioned
}
With this approach, you invoke a specific function every time interval specified by delay. The other functions will be invoked at each iteration of the loop.
In other words, it makes no difference if you delay a function or execute it at specific time intervals.
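A minimal C++ rendering of that pseudocode, using std::chrono; the one-second interval and the loop bodies are placeholders.

    #include <chrono>

    void run_loop()
    {
        using clock = std::chrono::steady_clock;
        const auto delay = std::chrono::seconds(1);   // placeholder interval
        auto last = clock::now();

        while (true) {
            if (clock::now() - last > delay) {
                // main functionality
                last = clock::now();                  // reset timer
            }
            // the other functionality runs every iteration
        }
    }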
Assuming that you need to run functions with their own arguments inside a loop with a custom delay, and wait for them to finish before each iteration:
#include <cstdio>

void func_to_be_delayed(const int &idx = -1, const unsigned &ms = 0)
{
    printf("Delayed response[%d] by %u ms!\n", idx, ms);
}

#include <chrono>
#include <future>

template<typename T, typename ... Ta>
void delay(const unsigned &ms_delay, T &func, Ta ... args)
{
    std::chrono::time_point<std::chrono::high_resolution_clock> start = std::chrono::high_resolution_clock::now();
    double elapsed;
    do {
        std::chrono::time_point<std::chrono::high_resolution_clock> end = std::chrono::high_resolution_clock::now();
        elapsed = std::chrono::duration<double, std::milli>(end - start).count();
    } while (elapsed <= ms_delay);
    func(args...);
}

int main()
{
    func_to_be_delayed();
    const short iterations = 5;
    for (int i = iterations; i >= 0; --i)
    {
        auto i0 = std::async(std::launch::async, [i]{ delay((i+1)*1000, func_to_be_delayed, i, (i+1)*1000); });
        // Will arrive with difference from previous
        auto i1 = std::async(std::launch::async, [i]{ delay(i*1000, func_to_be_delayed, i, i*1000); });
        func_to_be_delayed();
        // Loop will wait for all calls
    }
}
Note: with the std::launch::async policy, this method will potentially spawn an additional thread on each call.
The standard solution is to implement an event loop.
If you use some library, framework, or system API, then most probably it already provides something to solve this kind of problem.
For example, Qt has QApplication, which provides this loop, and there is QTimer.
boost::asio has io_context, which provides an event loop in which a timer (boost::asio::deadline_timer) can be run.
You can also try to implement such an event loop yourself, as sketched below.
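For illustration, a minimal hand-rolled sketch of such a loop, polling a list of timers each iteration; all names are made up for the example.

    #include <chrono>
    #include <functional>
    #include <vector>

    struct Timer {
        std::chrono::steady_clock::time_point due;
        std::function<void()> callback;
    };

    void event_loop(std::vector<Timer> timers)
    {
        while (!timers.empty()) {
            auto now = std::chrono::steady_clock::now();
            for (auto it = timers.begin(); it != timers.end(); ) {
                if (now >= it->due) {
                    it->callback();          // fire the expired timer
                    it = timers.erase(it);
                } else {
                    ++it;
                }
            }
            // here you would also poll for I/O, user input, etc.
        }
    }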
Example with Boost:
#include <boost/asio.hpp>
#include <boost/date_time.hpp>
#include <exception>
#include <iostream>

void printTime(const std::string& label)
{
    auto timeLocal = boost::posix_time::second_clock::local_time();
    boost::posix_time::time_duration durObj = timeLocal.time_of_day();
    std::cout << label << " time = " << durObj << '\n';
}

int main() {
    boost::asio::io_context io_context;
    try {
        boost::asio::deadline_timer timer{io_context};
        timer.expires_from_now(boost::posix_time::seconds(5));
        timer.async_wait([](const boost::system::error_code& error){
            if (!error) {
                printTime("boom");
            } else {
                std::cerr << "Error: " << error << '\n';
            }
        });
        printTime("start");
        io_context.run();
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';
    }
    return 0;
}
https://godbolt.org/z/nEbTvMhca
C++20 introduces coroutines; these could be a good solution too.
I want to make a timer that displays 30, 29, etc., counting down every second, and then stops when there is an input. I know you can do this:
for (int i = 60; i > 0; i--)
{
    cout << i << endl;
    Sleep(1000);
}
This will output 60, 59 etc. But this doesn't allow for any input while the program is running. How do I make it so you can input things while the countdown is running?
Context
This is not a homework assignment. I am making a text adventure game and there is a section where an enemy rushes at you and you have 30 seconds to decide what you are going to do. I don't know how to make the timer able to allow the user to input things while it is running.
Your game runs at about 1 frame per second, so user input is a problem. Normally games have a higher frame rate, like this:
#include <Windows.h>
#include <iostream>

int main() {
    // Initialization
    ULARGE_INTEGER initialTime;
    ULARGE_INTEGER currentTime;
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    initialTime.LowPart = ft.dwLowDateTime;
    initialTime.HighPart = ft.dwHighDateTime;
    LONGLONG countdownStartTime = 300000000; // 30 seconds, in 100-nanosecond units
    LONGLONG displayedNumber = 31; // Prevent 31 from being displayed
    // Game loop
    while (true) {
        GetSystemTimeAsFileTime(&ft); // resolution: 100 nanoseconds
        currentTime.LowPart = ft.dwLowDateTime;
        currentTime.HighPart = ft.dwHighDateTime;
        //// Read Input ////
        bool stop = false;
        SHORT key = GetKeyState('S');
        if (key & 0x8000)
            stop = true;
        //// Game Logic ////
        LONGLONG elapsedTime = currentTime.QuadPart - initialTime.QuadPart;
        LONGLONG currentNumber_100ns = countdownStartTime - elapsedTime;
        if (currentNumber_100ns <= 0) {
            std::cout << "Boom!" << std::endl;
            break;
        }
        if (stop) {
            std::wcout << "Stopped" << std::endl;
            break;
        }
        //// Render ////
        LONGLONG currentNumber_s = currentNumber_100ns / 10000000 + 1;
        if (currentNumber_s != displayedNumber) {
            std::cout << currentNumber_s << std::endl;
            displayedNumber = currentNumber_s;
        }
    }
    system("pause");
}
If you're running this on Linux, you can use the classic select() call. When used in a while-loop, you can wait for input on one or more file descriptors, while also providing a timeout after which the select() call must return. Wrap it all in a loop and you'll have both your countdown and your handling of standard input.
https://linux.die.net/man/2/select
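For illustration, a rough sketch of that approach (POSIX only): it counts down once per second and stops as soon as a line is available on standard input.

    #include <sys/select.h>
    #include <unistd.h>
    #include <iostream>
    #include <string>

    int main()
    {
        for (int i = 30; i > 0; --i) {
            std::cout << i << std::endl;

            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(STDIN_FILENO, &readfds);
            timeval tv{1, 0};                       // wait at most 1 second

            int ready = select(STDIN_FILENO + 1, &readfds, nullptr, nullptr, &tv);
            if (ready > 0 && FD_ISSET(STDIN_FILENO, &readfds)) {
                std::string input;
                std::getline(std::cin, input);      // the user typed something
                std::cout << "You entered: " << input << '\n';
                return 0;
            }
            // ready == 0: timeout, continue the countdown
        }
        std::cout << "Time's up!\n";
    }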
I have been dabbling with writing a C++ program that would control spark timing on a gas engine and have been running into some trouble. My code is very simple. It starts by creating a second thread that works to emulate the output signal of a Hall effect sensor that is triggered once per engine revolution. My main code processes the fake sensor output, recalculates engine RPM, and then determines the time necessary to wait for the crankshaft to rotate to the correct angle to send spark to the engine. The problem I'm running into is that I am using a sleep function in milliseconds, and at higher RPMs I am losing a significant amount of data.
My question is: how are real automotive ECUs programmed to be able to control spark timing accurately at high RPMs?
My code is as follows:
#include <iostream>
#include <Windows.h>
#include <process.h>
#include <fstream>
#include "GetTimeMs64.cpp"

using namespace std;

void HEEmulator(void *);

int HE_Sensor1;
int *sensor;
HANDLE handles[1];
bool run;
bool *areRun;

int main(void)
{
    int sentRpm = 4000;
    areRun = &run;
    sensor = &HE_Sensor1;
    *sensor = 1;
    run = TRUE;
    int rpm, advance, dwell, oHE_Sensor1, spark;
    oHE_Sensor1 = 1;
    advance = 20;
    uint64 rtime1, rtime2, intTime, curTime, sparkon, sparkoff;
    handles[0] = (HANDLE)_beginthread(HEEmulator, 0, &sentRpm);
    ofstream myfile;
    myfile.open("output.out");
    intTime = GetTimeMs64();
    rtime1 = intTime;
    rpm = 0;
    spark = 0;
    dwell = 10000;
    sparkoff = 0;
    while (run == TRUE)
    {
        rtime2 = GetTimeMs64();
        curTime = rtime2 - intTime;
        myfile << "Current Time = " << curTime << " ";
        myfile << "HE_Sensor1 = " << HE_Sensor1 << " ";
        myfile << "RPM = " << rpm << " ";
        myfile << "Spark = " << spark << " ";
        if (oHE_Sensor1 != HE_Sensor1)
        {
            if (HE_Sensor1 > 0)
            {
                rpm = (1/(double)(rtime2-rtime1))*60000;
                dwell = (1-((double)advance/360))*(rtime2-rtime1);
                rtime1 = rtime2;
            }
            oHE_Sensor1 = HE_Sensor1;
        }
        if (rtime2 >= (rtime1 + dwell))
        {
            spark = 1;
            sparkoff = rtime2 + 2;
        }
        if (rtime2 >= sparkoff)
        {
            spark = 0;
        }
        myfile << "\n";
        Sleep(1);
    }
    myfile.close();
    return 0;
}

void HEEmulator(void *arg)
{
    int *rpmAd = (int*)arg;
    int rpm = *rpmAd;
    int milliseconds = (1/(double)rpm)*60000;
    for (int i = 0; i < 10; i++)
    {
        *sensor = 1;
        Sleep(milliseconds * 0.2);
        *sensor = 0;
        Sleep(milliseconds * 0.8);
    }
    *areRun = FALSE;
}
A desktop PC is not a real-time processing system.
When you use Sleep to pause a thread, you don't have any guarantees that it will wake up exactly after the specified amount of time has elapsed. The thread will be marked as ready to resume execution, but it may still have to wait for the OS to actually schedule it. From the documentation of the Sleep function:
Note that a ready thread is not guaranteed to run immediately. Consequently, the thread may not run until some time after the sleep interval elapses.
Also, the resolution of the system clock ticks is limited.
To more accurately simulate an ECU and the attached sensors, you should not use threads. Your simulation should not even depend on the passage of real time. Instead, use a single loop that updates the state of your simulation (both ECU and sensors) with each tick. This also means that your simulation should include the clock of the ECU.
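For illustration, a minimal sketch of what such a tick-driven simulation could look like. All names and numbers here are made up for the example; one tick stands for 1 µs of simulated time, and a revolution at 4000 rpm is therefore 15000 ticks.

    #include <cstdint>

    struct SimState {
        std::uint64_t tick = 0;   // simulated time, not wall-clock time
        bool sensorHigh = false;
        bool spark = false;
    };

    void updateSensor(SimState& s) {
        // Toggle the fake Hall effect sensor based on simulated time only.
        const std::uint64_t period = 15000;            // one revolution at 4000 rpm, in ticks
        s.sensorHigh = (s.tick % period) < period / 5;
    }

    void updateEcu(SimState& s) {
        // ECU logic: decide when to fire the spark based on s.tick and s.sensorHigh.
    }

    int main() {
        SimState s;
        for (; s.tick < 10'000'000; ++s.tick) {        // simulate 10 seconds
            updateSensor(s);
            updateEcu(s);
        }
    }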
I am trying to catch a SIGVTALRM sent by setitimer, and I have no idea why it doesn't work. Here's my code:
void time(int time) {
    cout << "time" << endl;
    exit(0);
}

int main(void) {
    signal(SIGVTALRM, time);
    itimerval tv;
    tv.it_value.tv_sec = 5;
    tv.it_value.tv_usec = 0;
    tv.it_interval.tv_sec = 5;
    tv.it_interval.tv_usec = 0;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);
    while (true) {
        cout << "waiting" << endl;
    }
    return 0;
}
For some reason it never invokes time() - whether that's because it doesn't catch the signal or because the signal wasn't sent in the first place, I don't know.
It should be pretty simple. Any ideas? Thanks.
Are you sure it is not working?
Everything looks fine to me. Maybe you are just not waiting long enough. Since you are printing the string "waiting" inside the loop and you are using the virtual timer, the clock only ticks while the process is actually running on the CPU (I/O time is not counted). So in reality your timer might expire after well over 5 seconds of wall-clock time.
Try commenting out the printing part.
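For example, a variant with the printing removed, so all the time is CPU time, should fire the handler after roughly 5 seconds. This is just a sketch mirroring your program (and your handler, which is not strictly async-signal-safe, but fine for a test).

    #include <csignal>
    #include <cstdlib>
    #include <iostream>
    #include <sys/time.h>

    void handler(int) {
        std::cout << "time" << std::endl;   // mirrors the original time() handler
        std::exit(0);
    }

    int main() {
        std::signal(SIGVTALRM, handler);

        itimerval tv{};
        tv.it_value.tv_sec = 5;             // first expiry after 5 seconds of CPU time
        tv.it_interval.tv_sec = 5;
        setitimer(ITIMER_VIRTUAL, &tv, nullptr);

        volatile unsigned long counter = 0;
        while (true) {
            ++counter;                      // pure CPU work, no I/O, so the virtual timer keeps ticking
        }
    }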
It is due to the signal function. As mentioned in http://manpages.ubuntu.com/manpages//precise/en/man2/signal.2.html:
The behavior of signal() varies across UNIX versions, and has also varied historically across different versions of Linux. Avoid its use: use sigaction(2) instead.
So the main function should be:
int main(void) {
    itimerval tv;
    struct sigaction sa;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sa.sa_handler = timer_handler;   // timer_handler is your existing handler (the time() function above)
    if (sigaction(SIGVTALRM, &sa, NULL) == -1) {
        printf("error with: sigaction\n");
        exit(EXIT_FAILURE);
    }
    tv.it_value.tv_sec = 5;
    tv.it_value.tv_usec = 0;
    tv.it_interval.tv_sec = 5;
    tv.it_interval.tv_usec = 0;
    setitimer(ITIMER_VIRTUAL, &tv, NULL);
    while (true) {
        cout << "waiting" << endl;
    }
    return 0;
}