Read Laptop Battery Status in Float/Double - c++

I have a program that reads battery status in Windows that looks like this (simplified code):
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char *argv[]) {
    SYSTEM_POWER_STATUS spsPwr;
    if (GetSystemPowerStatus(&spsPwr)) {
        cout << "\nAC Status      : " << static_cast<double>(spsPwr.ACLineStatus)
             << "\nBattery Status : " << static_cast<double>(spsPwr.BatteryFlag)
             << "\nBattery Life % : " << static_cast<double>(spsPwr.BatteryLifePercent)
             << endl;
        return 0;
    } else return 1;
}
spsPwr.BatteryLifePercent holds the remaining battery charge in percent and is of type BYTE, which means it can only report whole-number readings (i.e. an integer). I notice that an application called BatteryBar can show the battery percentage as a floating-point value.
BatteryBar is a .NET application. How can I get battery percentage reading in float/double using pure C/C++ with Windows API? (Solution that can be compiled with MinGW is preferable)

You can get this information using WMI. Try the BatteryFullChargedCapacity and BatteryStatus classes; both are part of the root\WMI namespace.
To get the remaining battery charge in percent, you just use the RemainingCapacity property (BatteryStatus) and the FullChargedCapacity property (BatteryFullChargedCapacity).
The remaining battery charge in percent is
(RemainingCapacity * 100) / FullChargedCapacity
For example, if the FullChargedCapacity property reports a value of 5266 and RemainingCapacity reports 5039, the result is 95.68932776 %.
If you don't know how to access WMI from C++, read these articles:
WMI C++ Application Examples
Making WMI Queries In C++

Well, as you said, the Windows API provides only an integral percentage value. And, as you implied, .NET provides a floating-point one.
That means that, to use the floating-point one, you have to use .NET. That said, the .NET value is between 0.0 and 1.0, and it's not clear from the documentation whether you actually gain more precision.

The tool states that it does "Statistical Time Prediction" so I doubt it uses the direct value of SYSTEM_POWER_STATUS.
Personally, I can hardly imagine what floating-point precision would be good for here, but you could use ILSpy to see how they are doing it, or maybe you could also just ask them.

The .NET version doesn't actually provide you any more precision. It simply divides the BatteryLifePercent byte value by 100.0 and returns the result. Here are the contents of the getter in .NET:
public float BatteryLifePercent
{
    get
    {
        this.UpdateSystemPowerStatus();
        float num = ((float) this.systemPowerStatus.BatteryLifePercent) / 100f;
        if (num <= 1f)
        {
            return num;
        }
        return 1f;
    }
}
UpdateSystemPowerStatus() calls WINAPI's GetSystemPowerStatus(), which in turn updates systemPowerStatus.

Related

Determining if a 16 bit binary number is negative or positive

I'm creating a library for a temperature sensor that returns a 16-bit binary value. I'm trying to find the best way to check whether the value returned is negative or positive. I'm curious whether I can check if the most significant bit is a 1 or a 0, and, if that is the best way to go about it, how to implement it successfully.
I know that I can convert it to decimal and check that way, but I was curious whether there was an easier way. I've seen it implemented with shifting values but I don't fully understand that method. (I'm super new to C++)
float TMP117::readTempC(void)
{
    int16_t digitalTemp; // Temperature stored in the TMP117 register
    digitalTemp = readRegister(TEMP_RESULT); // Reads the temperature from the sensor
    // Check if the value is a negative number
    /* Insert code to check here */
    // Returns the digital temperature value multiplied by the resolution
    // Resolution = 0.0078125
    return digitalTemp * 0.0078125;
}
I'm not sure how to check if the code works and I haven't been able to compile it and run it on the device because the new PCB design and sensor has not come in the mail yet.
I know that I can convert it to decimal and check that way
I am not sure what you mean. An integer is an integer; it is an arithmetic type, so you just compare it with zero:
if (digitalTemp < 0)
{
    // negative
}
else
{
    // positive
}
You can as you suggest test the MSB, but there is no particular benefit, it lacks clarity, and will break or need modification if the type of digitalTemp changes.
if (digitalTemp & 0x8000)
{
    // negative
}
else
{
    // positive
}
"conversion to decimal", can only be interpreted as conversion to a decimal string representation of an integer, which does not make your task any simpler, and is entirely unnecessary.
I'm not sure how to check if the code works and I haven't been able to compile it and run it on the device because the new PCB design and sensor has not come in the mail yet.
Compile and run it on a PC in a test harness with stubs for the hardware-dependent functions. Frankly, if you are new to C++, you are perhaps better off practising the fundamentals in a PC environment, with its generally better debug facilities and faster development/test iteration, in any case.
In general
float TMP117::readTempC(void)
{
    int16_t digitalTemp; // Temperature stored in the TMP117 register
    digitalTemp = readRegister(TEMP_RESULT); // Reads the temperature from the sensor
    // Check if the value is a negative number
    if (digitalTemp < 0)
    {
        printf("Dang it is cold\n");
    }
    // Returns the digital temperature value multiplied by the resolution
    // Resolution = 0.0078125
    return digitalTemp * 0.0078125;
}

VMS timestamp to POSIX time_t --- Boost.DateTime bug?

How can I write a C++ function which takes a long long value representing a VMS timestamp and returns the corresponding time_t value, assuming the conversion yields a valid time_t? (I'll be parsing binary data sent over network on a commodity CentOS server, if that makes any differences.)
I've had a look into a document titled "Why Is Wednesday November 17, 1858 The Base Time For VAX/VMS" but I don't think I can write a correct implementation without testing with actual data which I don't have at hand right now, unfortunately.
If I'm not mistaken, it should be a simple arithmetic in this form:
time_t vmsTimeToTimeT(long long v) {
    return v / 10'000'000 - OFFSET;
}
Could somebody tell me what value to put into OFFSET ?
Things I'm concerned about:
I don't want to be bitten by my local timezone
I don't want to be bitten by the 0.5 thing (afternoon vs midnight) in the definition of modified Julian date (though it should be helping me here; modified Julian epoch and Unix Epoch should differ by a multiple of 24 hours thanks to the definition)
I tried to compute it by myself with the help from Boost.DateTime, only to get a mysterious negative value...
int main() {
    boost::posix_time::ptime x(
        boost::gregorian::date(1858, boost::gregorian::Nov, 17),
        boost::posix_time::time_duration(0, 0, 0));
    boost::posix_time::ptime y(
        boost::gregorian::date(1970, boost::gregorian::Jan, 1),
        boost::posix_time::time_duration(0, 0, 0));
    std::cout << (y - x).total_seconds() << std::endl;
    std::cout << (y > x ? "y is after x" : "y is before x") << std::endl;
}
-788250496
y is after x
I used Boost 1.60 for it:
The current implementation supports dates in the range 1400-Jan-01 to 9999-Dec-31.
Update
Crap, sizeof(total_seconds()) was 4, despite what the documentation says.
So I got 3506716800 from
auto diff = y - x;
std::cout << diff.ticks() / diff.ticks_per_second() << std::endl;
which doesn't look too wrong but... who can assure this is really correct?
Wow, you guys make it all appear to be so difficult with libraries and all.
So you read up on November-17 1858 and found out that VMS stores the time as 100nS 'clunks' since that date. Right?
Unix times are Seconds (or microseconds) since 1-jan-1970. Right?
So all you need to do is subtract the OpenVMS time value 'offset' for 1-jan-1970 from the reported OpenVMS times and divide by 10,000,000 (for seconds) or 10 (for microseconds).
You only need to find that value once using a trivial OpenVMS program.
Below I did not even use a dedicated program, just used the OpenVMS interactive debugger running a random executable program:
$ run tmp/debug
DBG> set rad hex
DBG> dep/date 10000 = "01-JAN-1970 00:00:00" ! Local time
DBG> examin/quad 10000
TMP\main: 007C95674C3DA5C0
DBG> examin/quad/dec 10000
TMP\main: 35067168005400000
So there is your offset, both in HEX and DECIMAL, to use as you see fit.
In the simplest form you pre-divide the incoming OpenVMS time by 10,000,000 and subtract 3506716800 (decimal) to get Epoch seconds.
Be sure to keep the math, including the subtraction, in long long ints.
hth,
Hein.
According to this:
https://www.timeanddate.com/date/durationresult.html?d1=17&m1=11&y1=1858&d2=1&m2=jan&y2=1970
you'd want 40587 days, times 86400 seconds, makes 3506716800 as the offset in your calculation.
Using this free open-source library which extends <chrono> to calendrical computations, I can confirm your figure of the offset in seconds:
#include "chrono_io.h"
#include "date.h"
#include <iostream>

int
main()
{
    using namespace date;
    using namespace std::chrono;
    using namespace std;
    seconds offset = sys_days{jan/1/1970} - sys_days{nov/17/1858};
    cout << offset << '\n';
}
Output:
3506716800s

C++: How Can I keep my program (output console) alive

I am writing a simple program (my first program) to display the laptop battery status; however, I would like to keep it running to monitor the battery %:
#include <iostream>
#include <windows.h>
using namespace std;

int main(int argc, char *argv[]) {
id:
    SYSTEM_POWER_STATUS spsPwr;
    if (GetSystemPowerStatus(&spsPwr)) {
        cout << "\nAC Status      : " << static_cast<double>(spsPwr.ACLineStatus)
             << "\nBattery Status : " << static_cast<double>(spsPwr.BatteryFlag)
             << "\nBattery Life % : " << static_cast<double>(spsPwr.BatteryLifePercent)
             << endl;
        system("CLS");
        goto id;
        return 0;
    }
    else return 1;
}
Using goto seems to be a bad idea, as the CPU utilization jumps to 99%! :( I am sure this is not the right way to do it.
Any suggestion?
Thanks
while (true) {
    // do the stuff
    ::Sleep(2000); // suspend the thread for 2 seconds
}
(you are on Windows according to the API function)
see: Sleep
First of all, the issue you are asking about: of course you get 100% CPU usage, since you're asking the computer to try and get and print the power status of the computer as fast it possibly can. And since computers will happily do what you tell them to, well... you know what happens next.
As others have said, the solution is to use an API that will instruct your application to go to sleep. In Windows, which appears to be your platform of choice, that API is Sleep:
// Sleep for around 1000 milliseconds - it may be slightly more since Windows
// is not a hard real-time operating system.
Sleep(1000);
Second, please do not use goto. There are looping constructs in C and you should use them. I'm not fundamentally opposed to goto (in fact, in my kernel-driver programming days I used it quite frequently) but I am opposed to seeing it used when better alternatives are available. In this case the better alternative is a while loop.
Before I show you that let me point out another issue: DO NOT USE THE system function.
Why? The system function executes the command passed to it; on Windows it happens to execute inside the context of the command interpreter (cmd.exe), which supports an internal command called cls that happens to clear the screen. At least on your system. But yours isn't the only system in the world. On some other system there might be a program called cls.exe which would get executed instead, and who knows what that would do? It could clear the screen, or it could format the hard drive. So please, don't use the system function. It's almost always the wrong thing to do. If you find yourself reaching for it, stop and think about what you're doing and whether you need to do it.
So, you may ask, how do I clear the screen if I can't use system("cls")? There's a way to do it which should be portable across various operating systems:
#include <windows.h>
#include <iostream>
#include <string>

int main(int, char **)
{
    SYSTEM_POWER_STATUS spsPwr;
    while (GetSystemPowerStatus(&spsPwr))
    {
        std::string status = "unknown";
        if (spsPwr.ACLineStatus == 0)
            status = "offline";
        else if (spsPwr.ACLineStatus == 1)
            status = "online";
        // BatteryLifePercent is already a value between 0 and 100
        // (or 255 if the status is unknown), so no rescaling is needed.
        std::cout << "Current Status: " << status << " ("
                  << static_cast<int>(spsPwr.BatteryFlag) << "): "
                  << static_cast<int>(spsPwr.BatteryLifePercent)
                  << "% of battery remaining.\r" << std::flush;
        // Sleep for around 1000 milliseconds - it may be slightly more
        // since Windows is not a hard real-time operating system.
        Sleep(1000);
    }
    // Print a new line before exiting.
    std::cout << std::endl;
    return 0;
}
What this does is print the information in a single line, then move back to the beginning of that line, sleep for around one second and then write the next line, overwriting what was previously there.
If the new line you write is shorter than the previous line, you may see some visual artifacts. Removing them should not be difficult but I'll leave it for you as an exercise. Here's a hint: what happens if you output a space where a letter used to be?
In order to do this across lines, you will need to use more advanced techniques to manipulate the console, and this exercise becomes a lot trickier.
You are having 100% CPU usage because your program is always running.
I don't want to get into details; given that this is your first program, I recommend putting a call to usleep before the goto.
And, of course, avoid goto, use a proper loop instead.
int milliseconds2wait = 3000;
while (!flag_exit) {
    // code
    usleep(1000 * milliseconds2wait);
}
Update: This is windows, use Sleep instead of usleep:
Sleep( milliseconds2wait );

How might one implement FileTimeToSystemTime?

I'm writing a simple wrapper around the Win32 FILETIME structure. boost::datetime has most of what I want, except that I need whatever date type I end up using to interoperate with Windows APIs without issues.
To that end, I've decided to write my own types for doing this -- most of the operations aren't all that complicated. I'm implementing the TimeSpan-like type at this point, but I'm unsure how I'd implement FileTimeToSystemTime. I could just use the system's built-in FileTimeToSystemTime function, except that FileTimeToSystemTime cannot handle negative dates -- I need to be able to represent something like "-12 seconds".
How should something like this be implemented?
Billy3
The Windows SYSTEMTIME and FILETIME data types are intended to represent a particular date and time. They are not really suitable for representing time differences. Time differences are better off as a simple integer representing the number of units between two SYSTEMTIMEs or FILETIMEs; that might be seconds, or something smaller if you need more precision.
If you need to display a difference to users, simple division and modulus can be used to compute the components.
std::string PrintTimeDiff(int nSecDiff)
{
    std::ostringstream os;
    if (nSecDiff < 0)
    {
        os << "-";
        nSecDiff = -nSecDiff;
    }
    int nSeconds = nSecDiff % 60;
    nSecDiff /= 60;
    int nMinutes = nSecDiff % 60;
    nSecDiff /= 60;
    int nHours = nSecDiff % 24;
    int nDays = nSecDiff / 24;
    os << nDays << " Days " << nHours << ":" << nMinutes << ":" << nSeconds;
    return os.str();
}
Assuming you didn't have a problem with the structure all having unsigned components, you could take any negative timespans, make them positive, call FileTimeToSystemTime, and then (if the original input was negative) pick out components to make negative.
I see bad design here. A time span, the difference between two times, is the same whether you measure it with system time or with file time. Win32's FileTimeToSystemTime is right not to accept negative values, because they make no sense for it. A period of 2 seconds is a period of 2 seconds, no matter which time zone you used.
//EDIT:
Second problem: SYSTEMTIME is somehow able to represent a time span, but doing so would be error-prone, i.e. a month is not a usable unit when measuring time spans.

Windows: How do I calculate the time it takes a c/c++ application to run?

I am doing a performance comparison test. I want to record the run time of my C++ test application and compare it under different circumstances. The two cases to be compared are: 1) when a file system driver is installed and active, and 2) when that same file system driver is not installed and active.
A series of tests will be conducted on several operating systems, and the two runs described above will be done for each operating system and its setup. Results will only be compared between the two cases for a given operating system and setup.
I understand that when running a C/C++ application within an operating system that is not a real-time system, there is no way to get the exact time it took for the application to run. I don't think this is a big concern as long as the test application runs for a fairly long period, making the scheduling, priorities, switching, etc. of the CPU negligible.
Edited: For Windows platform only
How can I generate some accurate application run time results within my test application?
If you're on a POSIX system you can use the time command, which will give you the total "wall clock" time as well as the actual CPU times (user and system).
Edit: Apparently there's an equivalent for Windows systems in the Windows Server 2003 Resource Kit called timeit.exe (not verified).
I think what you are asking is "How do I measure the time it takes for the process to run, irrespective of the 'external' factors, such as other programs running on the system?" In that case, the easiest thing would be to run the program multiple times, and get an average time. This way you can have a more meaningful comparison, hoping that various random things that the OS spends the CPU time on will average out. If you want to get real fancy, you can use a statistical test, such as the two-sample t-test, to see if the difference in your average timings is actually significant.
You can put this
#if _DEBUG
time_t start = time(NULL);
#endif
and finish with this
#if _DEBUG
time_t end = time(NULL);
#endif
in your int main() method. Naturally you'll have to output the difference, e.g. difftime(end, start), either to a log or cout it.
Just to expand on ezod's answer.
You run the program with the time command to get the total time - there are no changes to your program
If you're on a Windows system you can use the high-performance counters by calling QueryPerformanceCounter():
#include <windows.h>
#include <string>
#include <iostream>

std::string format_elapsed(double d); // defined below

int main()
{
    LARGE_INTEGER li = {0}, li2 = {0};
    QueryPerformanceFrequency(&li);
    __int64 freq = li.QuadPart;

    QueryPerformanceCounter(&li);
    // run your app here...
    QueryPerformanceCounter(&li2);

    __int64 ticks = li2.QuadPart - li.QuadPart;
    std::cout << "Reference Implementation Ran In " << ticks << " ticks"
              << " (" << format_elapsed((double)ticks / (double)freq) << ")"
              << std::endl;
    return 0;
}
...and just as a bonus, here's a function that converts the elapsed time (in seconds, floating point) to a descriptive string:
std::string format_elapsed(double d)
{
    char buf[256] = {0};
    if (d < 0.00000001)
    {
        // show in ps with 4 digits
        sprintf(buf, "%0.4f ps", d * 1000000000000.0);
    }
    else if (d < 0.00001)
    {
        // show in ns
        sprintf(buf, "%0.0f ns", d * 1000000000.0);
    }
    else if (d < 0.001)
    {
        // show in us
        sprintf(buf, "%0.0f us", d * 1000000.0);
    }
    else if (d < 0.1)
    {
        // show in ms
        sprintf(buf, "%0.0f ms", d * 1000.0);
    }
    else if (d <= 60.0)
    {
        // show in seconds
        sprintf(buf, "%0.2f s", d);
    }
    else if (d < 3600.0)
    {
        // show in min:sec
        sprintf(buf, "%01.0f:%02.2f", floor(d / 60.0), fmod(d, 60.0));
    }
    else
    {
        // show in h:min:sec
        sprintf(buf, "%01.0f:%02.0f:%02.2f", floor(d / 3600.0),
                floor(fmod(d, 3600.0) / 60.0), fmod(d, 60.0));
    }
    return buf;
}
Download Cygwin and run your program by passing it as an argument to the time command. When you're done, spend some time to learn the rest of the Unix tools that come with Cygwin. This will be one of the best investments for your career you'll ever make; the Unix toolchest is a timeless classic.
QueryPerformanceCounter can have problems on multicore systems, so I prefer to use timeGetTime(), which gives the result in milliseconds.
You need a timeBeginPeriod(1) before and a timeEndPeriod(1) afterwards to reduce the granularity as far as you can. I find it works nicely for my purposes (regulating timesteps in games), so it should be okay for benchmarking.
You can also use the program Very Sleepy to get a bunch of runtime information about your program. Here's a link: http://www.codersnotes.com/sleepy