I have a C++ application running on a Raspberry Pi (DietPi distro, Jessie) and am using GPS data to update the system time at boot. The code is simple; however, it crashes or locks up the application about 50% of the time. No exceptions are thrown, and I've tried to capture any stderr output in a log file with no success. Occasionally I see a segmentation fault, but I think this may be unrelated.
The portion of the code that clearly causes the crash is settimeofday(&tv, NULL). If I comment out just that call, the program runs fine. Here is the segment of code that assigns the timeval 'tv' and changes the system time:
//Convert gps_data_t* member 'time' to timeval
timeval tv;
double wholeseconds, decimalseconds, offsettime;
offsettime = gpsdata->fix.time - (5.0 * 3600.0);
decimalseconds = modf(offsettime, &wholeseconds);
tv.tv_sec = static_cast<int32_t>(wholeseconds);
tv.tv_usec = static_cast<int32_t>(decimalseconds * 1000000.0);

//Set system time - THIS IS CAUSING CRASHES, WHY?
if ( settimeofday(&tv, NULL) >= 0 ) {
    std::cout << "Time set successful!" << '\n';
} else {
    std::cout << "Time set failure!" << '\n';
}
One point worth noting: the time is set successfully in the cases where the system crashes. I have seen it fail when gpsdata->fix.time is NaN, and the code seems to handle that well and simply reports a failure. My own theories about possible causes:
1. This is a multi-threaded program where several other threads are in a sleep state (std::this_thread::sleep_for() is used extensively). Does changing the system time while these threads are sleeping interfere with when they come out of sleep?
2. I know there is a time service (NTP?) in the Debian distro that manages system time synchronization. Could this be interfering?
Anyway, I've got some more experimenting to do, but it seems like something somebody may recognize immediately. All advice is appreciated.
A few other points: I've followed this link to remove the ntpd service, and the issue still stands, ruling that cause out. Furthermore, I found this link which says changing the system time while a thread is sleeping doesn't impact when it wakes up. So now my two theories are shot. Any other ideas are appreciated!
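For reference, here is a minimal sketch of how one could verify that second point directly (my own test idea, not from the linked answer): one thread sleeps with std::this_thread::sleep_for while the main thread jumps the wall clock, and the sleep is measured against std::chrono::steady_clock, which settimeofday does not affect. It assumes root privileges and a C++11 toolchain (compile with -std=c++11 -pthread).

#include <chrono>
#include <cstdio>
#include <iostream>
#include <thread>
#include <sys/time.h>

int main() {
    std::thread sleeper([] {
        auto start = std::chrono::steady_clock::now();
        std::this_thread::sleep_for(std::chrono::seconds(5));
        auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
                           std::chrono::steady_clock::now() - start);
        std::cout << "sleep_for actually slept " << elapsed.count() << " ms\n";
    });

    // Give the sleeper time to block, then jump the wall clock forward an hour.
    std::this_thread::sleep_for(std::chrono::seconds(1));
    timeval tv;
    gettimeofday(&tv, NULL);
    tv.tv_sec += 3600;
    if (settimeofday(&tv, NULL) < 0)
        std::perror("settimeofday");  // EPERM if not run as root
    // (Remember to set the clock back afterwards on a real system.)

    sleeper.join();
    return 0;
}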
Because of the occasional segmentation fault, which may or may not be related to the freezing/crashing, I went ahead and updated the code to remove the only source of undefined behavior I could identify: I added uniform initialization for all the variables used in the modf() call and made my timeval const. I also changed the type casts per the advice below. The behavior is still the same.
//Loop until first GPS lock to set system time
while ( (gpsdata == NULL) ||
        (gpsdata->fix.mode <= 1) ||
        (gpsdata->fix.time < 1) ||
        std::isnan(gpsdata->fix.time) ) {
    gpsdata = gps_rec.read();
}

//Convert gps_data_t* member 'time' to timeval
double offsettime{ gpsdata->fix.time - (5.0 * 3600.0) };  //5.0 hr offset for EST
double seconds{ 0.0 };
double microseconds{ 1000000.0 * modf(offsettime, &seconds) };
const timeval tv{ static_cast<time_t>(seconds),
                  static_cast<suseconds_t>(microseconds) };

//Set system time - THIS IS CAUSING CRASHES, WHY?
if ( settimeofday(&tv, NULL) >= 0 ) {
    std::cout << "Time set successful!" << '\n';
} else {
    std::cout << "Time set failure!" << '\n';
}
Related
I've implemented code to call a service API every 10 seconds using a C++ client. Most of the time I've noticed it is around 10 seconds, but occasionally I see an issue like the one below where it takes longer. I'm using a condition variable with wait_until. What's wrong with my implementation? Any ideas?
Here's the timing output:
currentDateTime()=2015-12-21.15:13:21
currentDateTime()=2015-12-21.15:13:57
And the code:
void client::runHeartbeat() {
    std::unique_lock<std::mutex> locker(lock);
    for (;;) {
        // check the current time
        auto now = std::chrono::system_clock::now();
        /* Wait on the condition variable. This thread is woken up on 2 conditions:
           1. After a timeout of now + interval, when we want to send the next heartbeat
           2. When the client is destroyed.
        */
        shutdownHeartbeat.wait_until(locker, now + std::chrono::milliseconds(sleepMillis));
        // After waking up we want to check if a sign-out has occurred.
        if (m_heartbeatRunning) {
            std::cout << "currentDateTime()=" << currentDateTime() << std::endl;
            SendHeartbeat();
        }
        else {
            break;
        }
    }
}
You might want to consider using high_resolution_clock for your needs. system_clock is not guaranteed to have high resolution, so that may be part of the problem.
Note that its definition is implementation-dependent, so on some compilers you might just get an alias of system_clock back.
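For illustration only, a hedged sketch of that suggestion applied to the loop above; shutdownHeartbeat, sleepMillis, and locker are the names from the question, and the caveat just mentioned still applies (high_resolution_clock may simply alias system_clock):

// Drop-in replacement for the wait inside the for (;;) loop: compute the
// deadline from high_resolution_clock instead of system_clock.
auto deadline = std::chrono::high_resolution_clock::now()
              + std::chrono::milliseconds(sleepMillis);
shutdownHeartbeat.wait_until(locker, deadline);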
I have been looking at the performance of our C++ server application running on embedded Linux (ARM). The pseudo code for the main processing loop of the server is this -
for i = 1 to 1000
    Process item i
    Sleep for 20 ms
The processing for one item takes about 2 ms. The "Sleep" here is really a call to the Poco library to do a "tryWait" on an event. If the event is fired (which it never is in my tests) or the time expires, it returns. I don't know what system call this equates to. Although we ask for a 2 ms block, it turns out to be roughly 20 ms. I can live with that - that's not the problem. The sleep is just an artificial delay so that other threads in the process are not starved.
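The "Sleep" is presumably something like the following sketch (assuming Poco::Event, whose tryWait(milliseconds) blocks until the event is signalled or the timeout expires; stopEvent is a hypothetical name):

#include <Poco/Event.h>

Poco::Event stopEvent;  // hypothetical; never signalled in these tests

// "Sleep": returns true only if stopEvent was signalled, otherwise it
// simply times out after roughly the requested interval.
bool signalled = stopEvent.tryWait(2);  // asks for 2 ms, observed ~20 ms on the target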
The loop takes about 24 seconds to go through 1000 items.
The problem is, we changed the way the sleep is used so that we had a bit more control. I mean, a 20 ms delay for 2 ms of processing doesn't allow us to do much processing. With this new parameter set to a certain value, it does something like this -
for i = 1 to 1000
    Process item i
    if i % 50 == 0 then sleep for 1000 ms
That's the rough code; in reality the number of sleeps is slightly different, and it happens to work out at a 24 s cycle to get through all the items - just as before.
So we are doing exactly the same amount of processing in the same amount of time.
Problem 1 - the CPU usage for the original code is reported at around 1% (it varies a little but that's about average) and the CPU usage reported for the new code is about 5%. I think they should be the same.
Well, perhaps this CPU reporting isn't accurate, so I thought I'd sort a large text file at the same time and see how much it's slowed down by our server. Sorting is a CPU-bound process (98% CPU usage according to top). The results are very odd. With the old code, the time taken to sort the file goes up by 21% when our server is running.
Problem 2 - If the server is only using 1% of the CPU then wouldn't the time taken to do the sort be pretty much the same?
Also, the time taken to go through all the items doesn't change - it's still 24 seconds with or without the sort running.
Then I tried the new code: it only slows the sort down by about 12%, but it now takes about 40% longer to get through all the items it has to process.
Problem 3 - Why do the two ways of introducing an artificial delay cause such different results? It seems that the server which sleeps more frequently, but for a shorter time, is getting more priority.
I have a half-baked theory on the last one - whatever system call is used to do the "sleep" switches back to the server process when the time has elapsed. This gives the process another bite at the time slice on a regular basis.
Any help appreciated. I suspect I'm just not understanding it correctly and that things are more complicated than I thought. I can provide more details if required.
Thanks.
Update: replaced tryWait(2) with usleep(2000) - no change. In fact, sched_yield() does the same.
Well I can at least answer problem 1 and problem 2 (as they are the same issue).
After trying out various options in the actual server code, we came to the conclusion that the CPU reporting from the OS is incorrect. It's quite a surprising result, so to make sure, I wrote a stand-alone program that doesn't use Poco or any of our code - just plain Linux system calls and standard C++ features. It implements the pseudocode above. The processing is replaced with a tight loop that just checks the elapsed time to see if 2 ms is up. The sleeps are proper sleeps.
The small test program shows exactly the same problem, i.e. doing the same amount of processing but splitting up the way the sleep function is called produces very different results for CPU usage. In the case of the test program, the reported CPU time was 0.0078 seconds when using 1000 20 ms sleeps but 1.96875 seconds when a less frequent 1000 ms sleep was used. The amount of processing done is the same.
Running the test on a Linux PC did not show the problem. Both ways of sleeping produced exactly the same CPU usage.
So it is clearly a problem with our embedded system and the way it measures CPU time when a process yields so often (you get the same problem with sched_yield instead of a sleep).
Update: Here's the code. RunLoop is where the main bit is done -
#include <iostream>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

int sleepCount;

double getCPUTime( )
{
    clockid_t id = CLOCK_PROCESS_CPUTIME_ID;
    struct timespec ts;
    if ( id != (clockid_t)-1 && clock_gettime( id, &ts ) != -1 )
        return (double)ts.tv_sec +
               (double)ts.tv_nsec / 1000000000.0;
    return -1;
}

double GetElapsedMilliseconds(const timeval& startTime)
{
    timeval endTime;
    gettimeofday(&endTime, NULL);
    double elapsedTime = (endTime.tv_sec - startTime.tv_sec) * 1000.0;  // sec to ms
    elapsedTime += (endTime.tv_usec - startTime.tv_usec) / 1000.0;      // us to ms
    return elapsedTime;
}

void SleepMilliseconds(int milliseconds)
{
    timeval startTime;
    gettimeofday(&startTime, NULL);
    usleep(milliseconds * 1000);
    double elapsedMilliseconds = GetElapsedMilliseconds(startTime);
    if (elapsedMilliseconds > milliseconds + 0.3)
        std::cout << "Sleep took longer than it should " << elapsedMilliseconds;
    sleepCount++;
}

void DoSomeProcessingForAnItem()
{
    timeval startTime;
    gettimeofday(&startTime, NULL);
    double processingTimeMilliseconds = 2.0;
    double elapsedMilliseconds;
    do
    {
        elapsedMilliseconds = GetElapsedMilliseconds(startTime);
    } while (elapsedMilliseconds <= processingTimeMilliseconds);
    if (elapsedMilliseconds > processingTimeMilliseconds + 0.1)
        std::cout << "Processing took longer than it should " << elapsedMilliseconds;
}

void RunLoop(bool longSleep)
{
    int numberOfItems = 1000;
    timeval startTime;
    gettimeofday(&startTime, NULL);
    timeval startMainLoopTime;
    gettimeofday(&startMainLoopTime, NULL);
    for (int i = 0; i < numberOfItems; i++)
    {
        DoSomeProcessingForAnItem();
        double elapsedMilliseconds = GetElapsedMilliseconds(startTime);
        if (elapsedMilliseconds > 100)
        {
            std::cout << "Item count = " << i << "\n";
            if (longSleep)
            {
                SleepMilliseconds(1000);
            }
            gettimeofday(&startTime, NULL);
        }
        if (longSleep == false)
        {
            // Does 1000 * 20 ms sleeps.
            SleepMilliseconds(20);
        }
    }
    double elapsedMilliseconds = GetElapsedMilliseconds(startMainLoopTime);
    std::cout << "Main loop took " << elapsedMilliseconds / 1000 << " seconds\n";
}

void DoTest(bool longSleep)
{
    timeval startTime;
    gettimeofday(&startTime, NULL);
    double startCPUtime = getCPUTime();
    sleepCount = 0;
    int runLoopCount = 1;
    for (int i = 0; i < runLoopCount; i++)
    {
        RunLoop(longSleep);
        std::cout << "**** Done one loop of processing ****\n";
    }
    double endCPUtime = getCPUTime();
    std::cout << "Elapsed time is " << GetElapsedMilliseconds(startTime) / 1000 << " seconds\n";
    std::cout << "CPU time used is " << endCPUtime - startCPUtime << " seconds\n";
    std::cout << "Sleep count " << sleepCount << "\n";
}

void testLong()
{
    std::cout << "Running testLong\n";
    DoTest(true);
}

void testShort()
{
    std::cout << "Running testShort\n";
    DoTest(false);
}
I am writing a simple program (my first program) to display the laptop battery status; however, I would like to keep it active to monitor the battery percentage:
#include <windows.h>
#include <iostream>

using namespace std;

int main(int argc, char *argv[]) {
id:
    SYSTEM_POWER_STATUS spsPwr;
    if (GetSystemPowerStatus(&spsPwr)) {
        cout << "\nAC Status : " << static_cast<double>(spsPwr.ACLineStatus)
             << "\nBattery Status : " << static_cast<double>(spsPwr.BatteryFlag)
             << "\nBattery Life % : " << static_cast<double>(spsPwr.BatteryLifePercent)
             << endl;
        system("CLS");
        goto id;
        return 0;
    }
    else return 1;
}
Using goto seems to be a bad idea, as the CPU utilization jumps to 99%! :( I am sure this is not the right way to do it.
Any suggestion?
Thanks
while (true) {
    // do the stuff
    ::Sleep(2000); // suspend the thread for 2 seconds
}
(you are on Windows according to the API function)
see: Sleep
First of all, the issue you are asking about: of course you get 100% CPU usage, since you're asking the computer to get and print the power status of the computer as fast as it possibly can. And since computers will happily do what you tell them to, well... you know what happens next.
As others have said, the solution is to use an API that will instruct your application to go to sleep. In Windows, which appears to be your platform of choice, that API is Sleep:
// Sleep for around 1000 milliseconds - it may be slightly more since Windows
// is not a hard real-time operating system.
Sleep(1000);
Second, please do not use goto. There are looping constructs in C++ and you should use them. I'm not fundamentally opposed to goto (in fact, in my kernel-driver programming days I used it quite frequently), but I am opposed to seeing it used when better alternatives are available. In this case the better alternative is a while loop.
Before I show you that let me point out another issue: DO NOT USE THE system function.
Why? The system function executes the command passed to it; on Windows it happens to execute inside the context of the command interpreter (cmd.exe), which supports an internal command called cls that happens to clear the screen. At least on your system. But yours isn't the only system in the world. On some other system, there might be a program called cls.exe which would get executed instead, and who knows what that would do? It could clear the screen, or it could format the hard drive. So please, don't use the system function. It's almost always the wrong thing to do. If you find yourself reaching for it, stop and think about what you're doing and whether you really need to do it.
So, you may ask, how do I clear the screen if I can't use system("cls")? There's a way to do it which should be portable across various operating systems:
#include <windows.h>
#include <iostream>
#include <string>

int main(int, char **)
{
    SYSTEM_POWER_STATUS spsPwr;
    while (GetSystemPowerStatus(&spsPwr))
    {
        std::string status = "unknown";
        if (spsPwr.ACLineStatus == 0)
            status = "offline";
        else if (spsPwr.ACLineStatus == 1)
            status = "online";

        // The percent of battery life left is returned as a value
        // between 0 and 255 so we normalize it by multiplying it
        // by 100.0 and dividing by 255.0 which is ~0.39.
        std::cout << "Current Status: " << status << " ("
                  << static_cast<int>(spsPwr.BatteryFlag) << "): "
                  << 0.39 * static_cast<int>(spsPwr.BatteryLifePercent)
                  << "% of battery remaining.\r" << std::flush;

        // Sleep for around 1000 milliseconds - it may be slightly more
        // since Windows is not a hard real-time operating system.
        Sleep(1000);
    }

    // Print a new line before exiting.
    std::cout << std::endl;
    return 0;
}
What this does is print the information in a single line, then move back to the beginning of that line, sleep for around one second and then write the next line, overwriting what was previously there.
If the new line you write is shorter than the previous line, you may see some visual artifacts. Removing them should not be difficult but I'll leave it for you as an exercise. Here's a hint: what happens if you output a space where a letter used to be?
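As a sketch of that hint (printStatusLine is a hypothetical helper, not part of the code above): pad every status line out to a fixed width so that a shorter line fully overwrites a longer one before the carriage return moves the cursor back.

#include <iomanip>
#include <iostream>
#include <string>

void printStatusLine(const std::string& line)
{
    // Left-justify and pad to 79 columns with spaces, then return the cursor
    // to the start of the line without advancing to a new one.
    std::cout << std::left << std::setw(79) << line << '\r' << std::flush;
}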
In order to do this across lines, you will need to use more advanced techniques to manipulate the console, and this exercise becomes a lot trickier.
You are getting 100% CPU usage because your program is always running.
I don't want to get into details, and given that this is your first program, I recommend putting a call to usleep before the goto.
And, of course, avoid goto, use a proper loop instead.
int milliseconds2wait = 3000;
while (!flag_exit) {
    // code
    usleep(1000 * milliseconds2wait);
}
Update: This is Windows; use Sleep instead of usleep:
Sleep( milliseconds2wait );
I'm trying to write a program in C++, compiled in GCC 4.6.1 on Ubuntu 11.10, and the IPC is giving me a hard time. To demonstrate, here's my code for signaling a semaphore, with semid and semnum already supplied:
struct sembuf x;
x.sem_num = semnum;
x.sem_op = 1;
x.sem_flg = SEM_UNDO;

int old_value = semctl(semid, 0, GETVAL);
if (semop(semid, &x, 1) < 0)
{
    std::cerr << "semaphore failed to signal" << std::endl;
}
else if (semctl(semid, 0, GETVAL) == old_value)
{
    std::cerr << "signal returned OK, but didn't work" << std::endl;
}
The code for "wait" is similar; the main difference, of course, is that sem_op is set to -1. Sometimes I get the first error message here, but as often as not I get the second, which makes no sense at all to me. The first, I imagine I could hunt for an error code (though I'm not sure if that depends on C++11 features I'm not supposed to use), but I've got no idea how to even begin addressing the second. Rebooting didn't work. GDB isn't being much help, especially when "next" and "step" seem to jump around back and forth instead of going forward in sequence.
I am doing a performance comparison test. I want to record the run time of my C++ test application and compare it under different circumstances. The two cases to be compared are: 1) with a file system driver installed and active, and 2) with that same file system driver not installed.
A series of tests will be conducted on several operating systems, and the two runs described above will be done for each operating system and its setup. Results will only be compared between the two cases for a given operating system and setup.
I understand that when running a C/C++ application within an operating system that is not a real-time system, there is no way to get the exact real time it took for the application to run. I don't think this is a big concern as long as the test application runs for a fairly long period of time, making the effects of scheduling, priorities, context switching, etc. negligible.
Edited: For Windows platform only
How can I generate some accurate application run time results within my test application?
If you're on a POSIX system you can use the time command, which will give you the total "wall clock" time as well as the actual CPU times (user and system).
Edit: Apparently there's an equivalent for Windows systems in the Windows Server 2003 Resource Kit called timeit.exe (not verified).
I think what you are asking is: "How do I measure the time it takes for the process to run, irrespective of 'external' factors such as other programs running on the system?" In that case, the easiest thing would be to run the program multiple times and take an average. That way you get a more meaningful comparison, hoping that the various random things the OS spends CPU time on will average out. If you want to get really fancy, you can use a statistical test, such as the two-sample t-test, to see whether the difference in your average timings is actually significant.
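As a hedged sketch of that last suggestion, here is Welch's two-sample t statistic computed over two sets of measured run times (the function names are illustrative, not from any particular library); compare |t| against a t-table at your chosen significance level:

#include <cmath>
#include <numeric>
#include <vector>

double mean(const std::vector<double>& v)
{
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
}

double sampleVariance(const std::vector<double>& v, double m)
{
    double ss = 0.0;
    for (size_t i = 0; i < v.size(); ++i)
        ss += (v[i] - m) * (v[i] - m);
    return ss / (v.size() - 1);
}

// Welch's t statistic: (mean difference) / sqrt(s1^2/n1 + s2^2/n2).
double welchT(const std::vector<double>& a, const std::vector<double>& b)
{
    double ma = mean(a), mb = mean(b);
    double va = sampleVariance(a, ma), vb = sampleVariance(b, mb);
    return (ma - mb) / std::sqrt(va / a.size() + vb / b.size());
}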
You can put this
#if _DEBUG
time_t start = time(NULL);
#endif
and finish with this
#if _DEBUG
time_t end = time(NULL);
#endif
in your int main() function. Naturally, you'll have to output the difference, either to a log or via cout.
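For example, a minimal sketch of reporting that difference (difftime returns the elapsed seconds as a double; whether to guard it with _DEBUG is up to you):

#include <ctime>
#include <iostream>

int main()
{
    time_t start = time(NULL);
    // ... run the work you want to measure here ...
    time_t end = time(NULL);
    std::cout << "Elapsed: " << difftime(end, start) << " seconds\n";
    return 0;
}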
Just to expand on ezod's answer.
You run the program with the time command to get the total time; no changes to your program are needed.
If you're on a Windows system you can use the high-performance counters by calling QueryPerformanceCounter():
#include <windows.h>
#include <string>
#include <iostream>

std::string format_elapsed(double d);  // defined below

int main()
{
    LARGE_INTEGER li = {0}, li2 = {0};
    QueryPerformanceFrequency(&li);
    __int64 freq = li.QuadPart;

    QueryPerformanceCounter(&li);
    // run your app here...
    QueryPerformanceCounter(&li2);

    __int64 ticks = li2.QuadPart - li.QuadPart;
    std::cout << "Reference Implementation Ran In " << ticks << " ticks"
              << " (" << format_elapsed((double)ticks / (double)freq) << ")"
              << std::endl;
    return 0;
}
...and just as a bonus, here's a function that converts the elapsed time (in seconds, floating point) to a descriptive string:
#include <math.h>
#include <stdio.h>
#include <string>

std::string format_elapsed(double d)
{
    char buf[256] = {0};
    if (d < 0.00000001)
    {
        // show in ps with 4 digits
        sprintf(buf, "%0.4f ps", d * 1000000000000.0);
    }
    else if (d < 0.00001)
    {
        // show in ns
        sprintf(buf, "%0.0f ns", d * 1000000000.0);
    }
    else if (d < 0.001)
    {
        // show in us
        sprintf(buf, "%0.0f us", d * 1000000.0);
    }
    else if (d < 0.1)
    {
        // show in ms
        sprintf(buf, "%0.0f ms", d * 1000.0);
    }
    else if (d <= 60.0)
    {
        // show in seconds
        sprintf(buf, "%0.2f s", d);
    }
    else if (d < 3600.0)
    {
        // show in min:sec
        sprintf(buf, "%01.0f:%02.2f", floor(d / 60.0), fmod(d, 60.0));
    }
    else
    {
        // show in h:min:sec
        sprintf(buf, "%01.0f:%02.0f:%02.2f", floor(d / 3600.0),
                floor(fmod(d, 3600.0) / 60.0), fmod(d, 60.0));
    }
    return buf;
}
Download Cygwin and run your program by passing it as an argument to the time command. When you're done, spend some time to learn the rest of the Unix tools that come with Cygwin. This will be one of the best investments for your career you'll ever make; the Unix toolchest is a timeless classic.
QueryPerformanceCounter can have problems on multicore systems, so I prefer to use timeGetTime(), which gives the result in milliseconds.
You need a 'timeBeginPeriod(1)' before and a 'timeEndPeriod(1)' afterwards to reduce the granularity as far as you can, but I find it works nicely for my purposes (regulating timesteps in games), so it should be okay for benchmarking.
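A hedged sketch of that approach (timeGetTime and timeBeginPeriod live in winmm; the #pragma comment line is MSVC-specific, so link winmm.lib explicitly on other toolchains):

#include <windows.h>
#include <iostream>
#pragma comment(lib, "winmm.lib")

int main()
{
    timeBeginPeriod(1);               // request ~1 ms timer granularity
    DWORD start = timeGetTime();
    // ... run the code under test here ...
    DWORD elapsedMs = timeGetTime() - start;
    timeEndPeriod(1);                 // restore the default granularity
    std::cout << "Elapsed: " << elapsedMs << " ms\n";
    return 0;
}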
You can also use the program Very Sleepy to get a bunch of runtime information about your program. Here's a link: http://www.codersnotes.com/sleepy