I am writing a simple program (my first program) to display the laptop battery status, and I would like to keep it running so it continuously monitors the battery percentage:
#include <windows.h>
#include <iostream>
using namespace std;

int main(int argc, char *argv[]) {
id:
    SYSTEM_POWER_STATUS spsPwr;
    if (GetSystemPowerStatus(&spsPwr)) {
        cout << "\nAC Status      : " << static_cast<double>(spsPwr.ACLineStatus)
             << "\nBattery Status : " << static_cast<double>(spsPwr.BatteryFlag)
             << "\nBattery Life % : " << static_cast<double>(spsPwr.BatteryLifePercent)
             << endl;
        system("CLS");
        goto id;
        return 0;
    }
    else return 1;
}
Using goto seems to be a bad idea, as the CPU utilization jumps to 99%! :( I am sure this is not the right way to do it.
Any suggestions?
Thanks
while (true) {
    // do the stuff
    ::Sleep(2000); // suspend the thread for 2 seconds
}
(you are on Windows according to the API function)
see: Sleep
First of all, the issue you are asking about: of course you get 100% CPU usage, since you're asking the computer to get and print the power status of the computer as fast as it possibly can. And since computers will happily do what you tell them to, well... you know what happens next.
As others have said, the solution is to use an API that will instruct your application to go to sleep. In Windows, which appears to be your platform of choice, that API is Sleep:
// Sleep for around 1000 milliseconds - it may be slightly more since Windows
// is not a hard real-time operating system.
Sleep(1000);
Second, please do not use goto. There are looping constructs in C and C++, and you should use them. I'm not fundamentally opposed to goto (in fact, in my kernel-driver programming days I used it quite frequently), but I am opposed to seeing it used when better alternatives are available. In this case the better alternative is a while loop.
Before I show you that, let me point out another issue: DO NOT USE THE system function.
Why? The system function executes the command passed to it; on Windows it happens to execute inside the context of the command interpreter (cmd.exe), which supports an internal command called cls that happens to clear the screen. At least on your system. But yours isn't the only system in the world. On some other system there might be a program called cls.exe, which would get executed instead, and who knows what that would do? It could clear the screen, or it could format the hard drive. So please, don't use the system function. It's almost always the wrong thing to do. If you find yourself reaching for it, stop and think about what you're doing and whether you need to do it.
So, you may ask, how do I clear the screen if I can't use system("cls")? There's a way to do it which should be portable across various operating systems:
#include <windows.h>
#include <iostream>
#include <string>

int main(int, char **)
{
    SYSTEM_POWER_STATUS spsPwr;

    while (GetSystemPowerStatus(&spsPwr))
    {
        std::string status = "unknown";
        if (spsPwr.ACLineStatus == 0)
            status = "offline";
        else if (spsPwr.ACLineStatus == 1)
            status = "online";

        // BatteryLifePercent is already a percentage: a value in the
        // range 0 to 100, or 255 if the status is unknown.
        std::cout << "Current Status: " << status << " ("
                  << static_cast<int>(spsPwr.BatteryFlag) << "): "
                  << static_cast<int>(spsPwr.BatteryLifePercent)
                  << "% of battery remaining.\r" << std::flush;

        // Sleep for around 1000 milliseconds - it may be slightly more
        // since Windows is not a hard real-time operating system.
        Sleep(1000);
    }

    // Print a new line before exiting.
    std::cout << std::endl;
    return 0;
}
What this does is print the information in a single line, then move back to the beginning of that line, sleep for around one second and then write the next line, overwriting what was previously there.
If the new line you write is shorter than the previous line, you may see some visual artifacts. Removing them should not be difficult but I'll leave it for you as an exercise. Here's a hint: what happens if you output a space where a letter used to be?
In order to do this across lines, you will need to use more advanced techniques to manipulate the console, and this exercise becomes a lot trickier.
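As a taste of what that involves, here is a hedged, Windows-only sketch (not part of the original answer) that repositions the console cursor with SetConsoleCursorPosition so two lines can be overwritten in place:

#include <windows.h>
#include <iostream>

int main()
{
    HANDLE console = GetStdHandle(STD_OUTPUT_HANDLE);
    for (int i = 0; i < 5; ++i)
    {
        // Jump back to the top-left corner before each update so the
        // two status lines below are overwritten in place. Trailing
        // spaces pad over leftovers from longer previous output.
        COORD topLeft = { 0, 0 };
        SetConsoleCursorPosition(console, topLeft);
        std::cout << "line one, update " << i << "    \n"
                  << "line two, update " << i << "    " << std::flush;
        Sleep(1000);
    }
    std::cout << std::endl;
    return 0;
}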
You are getting 100% CPU usage because your program is always running.
I don't want to get into details, and given that this is your first program, I'll simply recommend putting a call to usleep before the goto.
And, of course, avoid goto; use a proper loop instead:
int milliseconds2wait = 3000;
while (!flag_exit) {
    // code
    usleep(1000 * milliseconds2wait);
}
Update: this is Windows, so use Sleep instead of usleep:
Sleep( milliseconds2wait );
I am currently working on making a maze game with OpenGL.
I want to create a timer to keep track of the time that the user spends completing each maze. I am using the SFML Clock to try to keep track of this time.
I have the following set up for the first maze:
maze.draw();
if (mazeOneIteration == 1) {
    mazeOneIteration++;
    mazeOneClock.restart();
}
char timeStr1[100];
char levelStr[100];
sprintf(levelStr, "Level: %d", levelNum);
sprintf(timeStr1, "Time: %.2fs", mazeOneClock.getElapsedTime().asSeconds());
//std::cout << timeStr1 << std::endl;
std::cout << mazeOneClock.getElapsedTime().asSeconds() << std::endl;
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(glm::mat4(1.0)));
box.draw();
text.setFontSize(20);
text.draw("User: zsloan112" , 20, 15);
text.draw(levelStr, getSize().x - 100, getSize().y - 20);
text.draw(timeStr1, 20, getSize().y - 20);
Since this is running in my game loop, this block of code is run 60 times per second, so I only want to restart and set the clock to 0 the very first time it is run; hence the if statement around the restart.
My issue is that when I use sprintf to insert the time into timeStr1, the displayed time stays at 0 seconds.
How can I get the clock to restart correctly the first time this block of code is executed, and then continue counting the time?
I'm going to provide you with a bit of criticism on your code. If you were applying for a job and showed me this code, I wouldn't hire you.
You're using sprintf. This is a particularly bad sign. Not only are you using sprintf, you're also using it with a fixed-size array of char. While it's probably going to be fine in this example, it's indicative of a deeper problem. In C++ you should never find any of the s*printf functions; there's simply no use for them. Instead, just use std::string. It will allow you to do exactly what you need without any of the potential problems:
Replace:
char timeStr1[100];
char levelStr[100];
sprintf(levelStr, "Level: %d", levelNum);
sprintf(timeStr1, "Time: %.2fs", mazeOneClock.getElapsedTime().asSeconds());
With:
using namespace std::string_literals;

auto const levelStr = "Level: "s + std::to_string(levelNum);
auto const timeStr1 = "Time: "s + std::to_string(mazeOneClock.getElapsedTime().asSeconds()) + "s";
Second, look into std::chrono. Not because it's better per se, but simply because it's part of the standard library. SFML is from before std::chrono existed, so it's no surprise that it has its own clock, but you should stick to the standard library when you have no reason not to.
When you're trying to measure elapsed time, std::chrono::steady_clock should be your default option.
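As a minimal sketch (not from the original answer) of how the question's timing could look with std::chrono::steady_clock:

#include <chrono>
#include <iostream>
#include <string>

int main()
{
    // Take one start timestamp (the equivalent of mazeOneClock.restart()).
    auto const start = std::chrono::steady_clock::now();

    // Later, e.g. once per frame: elapsed time in seconds, as a double.
    std::chrono::duration<double> const elapsed =
        std::chrono::steady_clock::now() - start;
    std::string const timeStr1 =
        "Time: " + std::to_string(elapsed.count()) + "s";
    std::cout << timeStr1 << std::endl;
    return 0;
}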
Things to read:
http://en.cppreference.com/w/cpp/string
http://en.cppreference.com/w/cpp/chrono/steady_clock
I have a synchronization function that I want to test to see whether it ever finishes.
I want to be able to run code for X amount of time, and if the time runs out, to continue.
Here is what I want:
bool flag = false;

some_function_that_run_the_next_block_for_x_sec()
{
    my_sync_func_that_i_want_to_test();
    flag = true;
}

Assert::IsTrue(flag);
Is there a simple way to do this?
SynchronizationContext
thanks.
The link you posted gives me little insight on how that class would be used (maybe Microsoft is saving up bytes on sample code to pay for Ballmer's golden parachute next year?) so pardon me for completely ignoring it.
Something like this:
auto result = std::async(std::launch::async, my_sync_func_that_i_want_to_test);

std::future_status status = result.wait_for(std::chrono::milliseconds(100));

if (status == std::future_status::timeout)
    std::cout << "Timed out" << std::endl;

if (status == std::future_status::ready)
    std::cout << "Finished on time" << std::endl;
This needs the <future> and <chrono> headers to be included.
If my_sync_func_that_i_want_to_test() never finishes, you'll have another problem: the future object (result) will block until the thread launched by async() finishes. There's no portable way to recover from killed/canceled/aborted threads, so this will probably require some platform-specific code, even if you roll your own async_that_detaches_the_thread() (which is not hard to find; here's one example).
#include <ctime>

void Wait(double Duration)
{
    clock_t End = clock() + (clock_t)(Duration * CLOCKS_PER_SEC);
    while (clock() < End)
    {
        // This loop just stalls the program.
    }
}
My function works perfectly half the time, but it occasionally stalls the program before it's even called. For example, take the following snippet:
cout << "This is\n";
Wait(2.5)
cout << "a test!";
You'd expect the first line to appear immediately and the second line to appear after 2.5 seconds, but sometimes it ALL appears after 2.5 seconds. What's the deal?
try
cout.flush();
before your Wait
That might be because of I/O buffering.
You should flush the output buffer manually (either use << endl instead of '\n', or call cout.flush()).
Try cout << "This is" << endl;
It looks like a buffering issue, not a clock issue.
The flush()/std::endl has already been mentioned, but is your intention really to consume 100% of one core while you wait? That is what the while() loop is doing! If you want a nicer approach to "waiting", consider one of the following (a sketch follows the list):
boost::thread::sleep() - millisecond granularity
alarms (1 second granularity)
select()
pthread_cond_timedwait()
etc.
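None of these appear in the original answer, but as a hedged illustration, here is what a non-busy replacement for the question's Wait() function might look like using C++11's std::this_thread::sleep_for (assuming a C++11 compiler is available):

#include <chrono>
#include <thread>

// A non-busy version of Wait(): the thread yields the CPU to the OS
// instead of spinning in a loop until the deadline passes.
void Wait(double Duration)
{
    std::this_thread::sleep_for(std::chrono::duration<double>(Duration));
}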
I have a while loop that runs in a do-while loop. I need the while loop to run exactly once every second, no faster, no slower, but I'm not sure how I would do that. This is the loop, off in its own function. I have heard of the sleep() function, but I have also heard that it is not very accurate.
int min5()
{
    int second = 00;
    int minute = 0;
    const int ZERO = 00;

    do {
        while (second <= 59) {
            if (minute == 5) break;
            second += 1;
            if (second == 60) minute += 1;
            if (second == 60) second = ZERO;
            if (second < 60) cout << "Current Time> " << minute << " : " << second << " \n";
        }
    } while (minute <= 5);
}
The best accuracy you can achieve is by using operating system (OS) functions. You need to find a timer API that also takes a callback function. The callback function is a function you write that the OS will call when the timer has expired.
Be aware that the OS may lose timing precision due to other tasks and activities that are running while your program is executing.
If you want a portable solution, you shouldn't expect high-precision timing. Usually, you only get that with a platform-dependent solution.
A portable (albeit not very CPU-efficient, nor particularly elegant) solution might make use of a function similar to this:
#include <ctime>

void wait_until_next_second()
{
    time_t before = time(0);
    while (difftime(time(0), before) < 1);
}
You'd then use this in your function like this:
int min5()
{
    wait_until_next_second(); // synchronization (optional), so that the first
                              // subsequent call will not take less than 1 sec.
    ...

    do
    {
        wait_until_next_second(); // waits approx. one second
        while (...)
        {
            ...
        }
    } while (...);
}
Some further comments on your code:
Your code gets into an endless loop once minute reaches the value 5.
Are you aware that 00 denotes an octal (radix 8) number (due to the leading zero)? It doesn't matter in this case, but be careful with numbers such as 017. This is decimal 15, not 17!
You could incorporate the second += 1 right into the while loop's condition: while (second++ <= 59) ...
I think in this case it would be better to insert endl into the cout stream, since that will flush it, while inserting "\n" won't. It doesn't truly matter here, but your intent seems to be to always see the current time on cout; if you don't flush the stream, you're not actually guaranteed to see the time message immediately.
As someone else posted, your OS may provide some kind of alarm or timer functionality. You should try to use that kind of thing rather than coding your own polling loop. Polling the time means you need to be context-switched in every second, which keeps your code running when the system could be doing other stuff. In this case you interrupt someone else 300 times just to ask "are we done yet?".
Also, you should never make assumptions about the duration of a sleep; even on a real-time OS this would be unsafe. You should always ask the real-time clock or tick counter how much time has elapsed on each pass, because otherwise any errors accumulate and you will get less and less accurate over time. This is true even on a real-time system, because even if such a system could sleep accurately for 1 second, it takes some time for your code to run, and this timing error would accumulate on each pass through the loop.
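To make the point concrete, here is a hedged C++11 sketch (not from the original answer) that sleeps until an absolute deadline instead of sleeping for a relative duration, so lateness is not carried forward from one pass to the next:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    using namespace std::chrono;
    auto next = steady_clock::now();
    for (int tick = 1; tick <= 5; ++tick)
    {
        next += seconds(1);                  // absolute deadline for this pass
        std::this_thread::sleep_until(next); // oversleep does not accumulate
        std::cout << "tick " << tick << std::endl;
    }
    return 0;
}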
In Windows, for example, there is the possibility to create a waitable timer object.
If that's your operating system, check the documentation; see for example Waitable Timer Objects.
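A minimal sketch of how such a timer might be used (not from the original answer; error handling omitted):

#include <windows.h>
#include <iostream>

int main()
{
    // Create a timer that first fires after 1 second, then every second.
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -10000000LL; // relative time, in 100-ns units (1 second)
    SetWaitableTimer(timer, &due, 1000 /* period in ms */, NULL, NULL, FALSE);

    for (int tick = 0; tick < 5; ++tick)
    {
        WaitForSingleObject(timer, INFINITE); // blocks until the timer fires
        std::cout << "tick " << tick << std::endl;
    }

    CloseHandle(timer);
    return 0;
}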
From the code you presented, it looks like what you are trying to do can be done much more easily with sleep. It doesn't make sense to guarantee that your loop body is executed exactly every 1 second. Instead, make it execute 10 times a second and check whether the time that has elapsed since you last took action is more than a second. If not, do nothing. If yes, take action (print your message, increment variables, etc.), store the time of the last action, and loop again (a sketch of this pattern follows the link below).
Sleep(1000);
http://msdn.microsoft.com/en-us/library/ms686298(VS.85).aspx
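A hedged sketch of the polling pattern described above (the variable names are mine, not from the question):

#include <windows.h>
#include <iostream>

int main()
{
    DWORD lastAction = GetTickCount();
    int second = 0, minute = 0;

    while (minute < 5)
    {
        Sleep(100); // poll roughly 10 times a second
        DWORD now = GetTickCount();
        if (now - lastAction >= 1000) // has a second elapsed since the last action?
        {
            lastAction += 1000; // advance by a full second to avoid drift
            if (++second == 60) { second = 0; ++minute; }
            std::cout << "Current Time> " << minute << " : " << second << "\n";
        }
    }
    return 0;
}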
I am doing a performance comparison test. I want to record the run time of my C++ test application and compare it under different circumstances. The two cases to be compared are: 1) when a file system driver is installed and active, and 2) when that same file system driver is not installed and active.
A series of tests will be conducted on several operating systems, and the two runs described above will be done for each operating system and its setup. Results will only be compared between the two cases for a given operating system and setup.
I understand that when running a C/C++ application within an operating system that is not a real-time system, there is no way to get the true time it took for the application to run. I don't think this is a big concern as long as the test application runs for a fairly long period of time, making the scheduling, priorities, switching, etc. of the CPU negligible.
Edited: For Windows platform only
How can I generate some accurate application run time results within my test application?
If you're on a POSIX system you can use the time command, which will give you the total "wall clock" time as well as the actual CPU times (user and system).
Edit: Apparently there's an equivalent for Windows systems in the Windows Server 2003 Resource Kit called timeit.exe (not verified).
I think what you are asking is "How do I measure the time it takes for the process to run, irrespective of the 'external' factors, such as other programs running on the system?" In that case, the easiest thing would be to run the program multiple times, and get an average time. This way you can have a more meaningful comparison, hoping that various random things that the OS spends the CPU time on will average out. If you want to get real fancy, you can use a statistical test, such as the two-sample t-test, to see if the difference in your average timings is actually significant.
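A hedged sketch of the repeated-runs approach (the workload function below is a hypothetical stand-in for the real test, not anything from the question):

#include <chrono>
#include <iostream>

// Hypothetical stand-in for the real work being measured.
void workload()
{
    volatile long sink = 0;
    for (long i = 0; i < 10000000; ++i) sink += i;
}

int main()
{
    using namespace std::chrono;
    const int runs = 10;
    double total_ms = 0.0;

    for (int i = 0; i < runs; ++i)
    {
        auto start = steady_clock::now();
        workload();
        auto stop = steady_clock::now();
        total_ms += duration<double, std::milli>(stop - start).count();
    }

    std::cout << "average: " << total_ms / runs << " ms over " << runs << " runs\n";
    return 0;
}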
You can put this
#ifdef _DEBUG
time_t start = time(NULL);
#endif
and finish with this
#ifdef _DEBUG
time_t end = time(NULL);
#endif
in your int main() method (time and time_t come from <ctime>). Naturally you'll have to write the difference (end - start) either to a log or to cout.
Just to expand on ezod's answer:
run the program with the time command to get the total time; there are no changes needed to your program.
If you're on a Windows system you can use the high-performance counters by calling QueryPerformanceCounter():
#include <windows.h>
#include <string>
#include <iostream>

std::string format_elapsed(double d); // defined below

int main()
{
    LARGE_INTEGER li = {0}, li2 = {0};
    QueryPerformanceFrequency(&li);
    __int64 freq = li.QuadPart;

    QueryPerformanceCounter(&li);
    // run your app here...
    QueryPerformanceCounter(&li2);

    __int64 ticks = li2.QuadPart - li.QuadPart;
    std::cout << "Reference Implementation Ran In " << ticks << " ticks"
              << " (" << format_elapsed((double)ticks / (double)freq) << ")"
              << std::endl;
    return 0;
}
...and just as a bonus, here's a function that converts the elapsed time (in seconds, floating point) to a descriptive string:
#include <cstdio>
#include <cmath>
#include <string>

std::string format_elapsed(double d)
{
    char buf[256] = {0};

    if (d < 0.00000001)
    {
        // show in ps with 4 digits
        sprintf(buf, "%0.4f ps", d * 1000000000000.0);
    }
    else if (d < 0.00001)
    {
        // show in ns
        sprintf(buf, "%0.0f ns", d * 1000000000.0);
    }
    else if (d < 0.001)
    {
        // show in us
        sprintf(buf, "%0.0f us", d * 1000000.0);
    }
    else if (d < 0.1)
    {
        // show in ms
        sprintf(buf, "%0.0f ms", d * 1000.0);
    }
    else if (d <= 60.0)
    {
        // show in seconds
        sprintf(buf, "%0.2f s", d);
    }
    else if (d < 3600.0)
    {
        // show in min:sec
        sprintf(buf, "%01.0f:%02.2f", floor(d / 60.0), fmod(d, 60.0));
    }
    else
    {
        // show in h:min:sec
        sprintf(buf, "%01.0f:%02.0f:%02.2f", floor(d / 3600.0), floor(fmod(d, 3600.0) / 60.0), fmod(d, 60.0));
    }

    return buf;
}
Download Cygwin and run your program by passing it as an argument to the time command. When you're done, spend some time learning the rest of the Unix tools that come with Cygwin. This will be one of the best career investments you'll ever make; the Unix toolchest is a timeless classic.
QueryPerformanceCounter can have problems on multicore systems, so I prefer to use timeGetTime(), which gives the result in milliseconds.
You need a timeBeginPeriod(1) before and a timeEndPeriod(1) afterwards to reduce the granularity as far as you can, but I find it works nicely for my purposes (regulating timesteps in games), so it should be okay for benchmarking.
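A minimal sketch of that usage (not from the original answer; the timeGetTime family requires linking against winmm.lib):

#include <windows.h>
#include <iostream>
#pragma comment(lib, "winmm.lib") // timeGetTime/timeBeginPeriod live in winmm

int main()
{
    timeBeginPeriod(1); // request 1 ms timer granularity

    DWORD start = timeGetTime();
    // run your app here...
    DWORD elapsed = timeGetTime() - start;

    timeEndPeriod(1); // restore the default granularity
    std::cout << "elapsed: " << elapsed << " ms" << std::endl;
    return 0;
}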
You can also use the program Very Sleepy to get a bunch of runtime information about your program. Here's a link: http://www.codersnotes.com/sleepy