Control Windows 10's "Power Mode" programmatically - c++

Background
Hi. I have an SB2 (Surface Book 2), and I'm one of the unlucky users facing the infamous 0.4GHz throttling problem that plagues many SB2 machines. The problem is that the SB2 suddenly, and very frequently depending on the ambient temperature, throttles heavily from a boost of 4GHz down to 0.4GHz and stays there for a minute or two (this causes a severe slowdown of the whole laptop). It's extremely frustrating and makes the machine almost unusable for even the simplest workloads.
Microsoft apparently stated that it fixed the problem in the October 2019 update, but I and several other users are still facing it. I'm very positive my machine is up to date, and I even manually installed all the latest Surface Book 2 firmware updates.
Here's a capture of the CPU state when the problem is happening:
As you can see, the temperature of the unit itself isn't high at all, yet the CPU is throttled to exactly 0.4GHz.
More links about this: 1 2
Workarounds
I tried pretty much EVERYTHING: undervolting until the screen froze, disabling BD PROCHOT, disabling power throttling in the Group Policy Editor, messing with the registry, tuning several CPU/GPU settings. Nothing worked.
You can do only 2 things when the throttling starts:
Wait for it to finish (usually takes a minute or two).
Change the Power Mode in Windows 10. It doesn't even matter whether you change it from "Best performance" to "Best battery life" or the other way around; what matters is that you change it. As soon as you do, the throttling stops completely within a couple of seconds. This is the only manual workaround that worked.
Question
In practice, changing this slider every 10 seconds, no matter how heavy the workload, leads indefinitely to a smooth, throttle-free experience. Of course, doing this by hand isn't a feasible workaround.
In theory, I thought that if I could find a way to control this mode programmatically, I might be able to say goodbye to this problem by switching power modes every 10 seconds or so.
I don't mind whether it's Win32 (WinAPI) or a .NET thing. I looked around a lot and found this about power management, but there seems to be no Win32 interface for setting it. I could have overlooked it, so here's my question:
Is there any way at all to control the Power Mode in Windows 10 programmatically?

OK... I've been wanting command-line or programmatic access to adjust the power slider for a while, and I've run across this post multiple times while looking into it. I'm surprised no one else has bothered to figure it out. I worked it out myself today, motivated by the fact that Windows 11 appears to have removed the power slider from the taskbar, so you have to go digging into the Settings app to adjust it.
As previously discussed, in the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes you can find values "ActiveOverlayAcPowerScheme" and "ActiveOverlayDcPowerScheme" which record the current values of the slider for AC power and battery power, respectively. However, changing these values is not sufficient to adjust the power slider or the system's mode of operation.
Turns out there is an undocumented method in C:\Windows\System32\powrprof.dll called PowerSetActiveOverlayScheme. It takes a single parameter. I "guessed" that it would take a GUID in the same manner that PowerSetActiveScheme does, and it seems to work.
Note — Using an undocumented API is unsupported by Microsoft. This method may break in future Windows releases. It can be used for personal tinkering but I would not suggest using it in any actual production projects.
Here is the C# PInvoke signature:
[DllImportAttribute("powrprof.dll", EntryPoint = "PowerSetActiveOverlayScheme")]
public static extern uint PowerSetActiveOverlayScheme(Guid OverlaySchemeGuid);
It returns zero on success and non-zero on failure.
Calling it is as simple as:
PowerSetActiveOverlayScheme(new Guid("ded574b5-45a0-4f42-8737-46345c09c238"));
It has immediate effect. This particular GUID moved the slider all the way to the right for me and also updated the "ActiveOverlayAcPowerScheme" value in the registry. Using a GUID of all zeros reset the slider to the middle value. You can see what GUID options are available by just observing the values that show up in the registry when you set the power slider to different positions.
There are two methods that can be used to read the current position of the slider. I'm not sure what the difference between them is; they returned the same value each time in my testing.
[DllImportAttribute("powrprof.dll", EntryPoint = "PowerGetActualOverlayScheme")]
public static extern uint PowerGetActualOverlayScheme(out Guid ActualOverlayGuid);
[DllImportAttribute("powrprof.dll", EntryPoint = "PowerGetEffectiveOverlayScheme")]
public static extern uint PowerGetEffectiveOverlayScheme(out Guid EffectiveOverlayGuid);
They also return zero on success and non-zero on failure. They can be called like...
if (PowerGetEffectiveOverlayScheme(out Guid activeScheme) == 0)
{
    Console.WriteLine(activeScheme);
}
There is one more method called "PowerGetOverlaySchemes", which I presume can be used to fetch a list of available GUIDs that could be used. It appears to take three parameters and I haven't bothered with figuring it out.
I created a command-line program which can be used to set the power mode, and it can be found at https://github.com/AaronKelley/PowerMode.
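Since the original question asked for Win32/C++ rather than C#, here is a minimal sketch of calling the same undocumented exports from C++ by resolving them at runtime. The native signatures are an assumption inferred from the P/Invoke declarations and the registry GUIDs above (a pointer to a GUID, returning a Win32 error code), so treat it as a starting point rather than a reference:

// Sketch: calling the undocumented overlay-scheme exports from C++.
// Assumption: the native functions take a GUID* and return a Win32 error code (0 on success).
#include <windows.h>
#include <iostream>

typedef DWORD (WINAPI *PowerSetActiveOverlaySchemeFn)(const GUID *overlaySchemeGuid);
typedef DWORD (WINAPI *PowerGetEffectiveOverlaySchemeFn)(GUID *overlaySchemeGuid);

int main()
{
    HMODULE powrprof = LoadLibraryW(L"powrprof.dll");
    if (!powrprof) return 1;

    auto setOverlay = reinterpret_cast<PowerSetActiveOverlaySchemeFn>(
        GetProcAddress(powrprof, "PowerSetActiveOverlayScheme"));
    auto getOverlay = reinterpret_cast<PowerGetEffectiveOverlaySchemeFn>(
        GetProcAddress(powrprof, "PowerGetEffectiveOverlayScheme"));
    if (!setOverlay || !getOverlay) return 1;

    // "Best performance" overlay GUID observed in the registry (see above).
    GUID bestPerformance = { 0xded574b5, 0x45a0, 0x4f42,
                             { 0x87, 0x37, 0x46, 0x34, 0x5c, 0x09, 0xc2, 0x38 } };

    if (setOverlay(&bestPerformance) == ERROR_SUCCESS)
    {
        GUID current = {};
        if (getOverlay(&current) == ERROR_SUCCESS)
            std::cout << "Overlay scheme applied.\n";
    }

    FreeLibrary(powrprof);
    return 0;
}

Wrapping the set call in a loop that alternates between two overlay GUIDs every 10 seconds or so would reproduce the manual slider-toggling workaround described in the question.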

Aaron's answer is awesome work, helped me massively, thank you.
If you're anything like me and
don't have Visual Studio at the ready to compile his tool for yourself and/or
don't necessarily want to run an arbitrary executable file off of GitHub (no offence),
you can use Python (3, in this case) to accomplish the same thing.
For completeness' sake, I'll copy over the disclaimer:
Note — Using an undocumented API is unsupported by Microsoft. This method may break in future Windows releases. It can be used for personal tinkering but I would not suggest using it in any actual production projects.
Please also note that the following is just basic proof-of-concept code!
Getting the currently active Byte Sequence:
import ctypes

# 16-byte buffer that receives the GUID
output_buffer = ctypes.create_string_buffer(b"", 16)

ctypes.windll.powrprof.PowerGetEffectiveOverlayScheme(output_buffer)
print("Current Effective Byte Sequence: " + output_buffer.raw.hex())  # .raw keeps all 16 bytes (no NUL truncation)

ctypes.windll.powrprof.PowerGetActualOverlayScheme(output_buffer)
print("Current Actual Byte Sequence: " + output_buffer.raw.hex())
On my system, this results in the following values:
Mode                  Byte Sequence
Better Battery        77c71c9647259d4f81747d86181b8a7a
Better Performance    00000000000000000000000000000000
Best Performance      b574d5dea045424f873746345c09c238
Apparently Aaron's and my system share the same peculiarity, where the "Better Performance" Byte Sequence is just all zeros (as opposed to the "expected" value of 3af9B8d9-7c97-431d-ad78-34a8bfea439f).
Please note that the Byte Sequence 77c71c9647259d4f81747d86181b8a7a is equivalent to the GUID 961cc777-2547-4f9d-8174-7d86181b8a7a, and b574d5dea045424f873746345c09c238 represents ded574b5-45a0-4f42-8737-46345c09c238.
This stems from the fact that GUIDs are written down differently from how they're actually represented in memory. (If we write a GUID's bytes as ABCD-EF-GH-IJ-KLMN, its Byte Sequence representation ends up being DCBA-FE-HG-IJKLMN.) See https://stackoverflow.com/a/6953207 (particularly the paragraph and table under "Binary encodings could differ") and/or https://uuid.ramsey.dev/en/latest/nonstandard/guid.html if you want to know more.
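To make the reordering concrete, here is a small C++ sketch (my own illustration, not part of the original answer) that dumps the in-memory bytes of a GUID-shaped struct on a little-endian machine; the field sizes match the standard Windows GUID layout:

// Sketch: dump the in-memory bytes of a GUID to show the mixed-endian layout.
// Assumes a little-endian machine (x86/x64).
#include <cstdint>
#include <cstdio>
#include <cstring>

struct Guid {              // same field sizes as the Windows GUID struct
    uint32_t data1;        // stored little-endian in memory
    uint16_t data2;        // little-endian
    uint16_t data3;        // little-endian
    uint8_t  data4[8];     // stored as written
};

int main()
{
    // ded574b5-45a0-4f42-8737-46345c09c238 ("Best performance" overlay)
    Guid g = { 0xded574b5, 0x45a0, 0x4f42,
               { 0x87, 0x37, 0x46, 0x34, 0x5c, 0x09, 0xc2, 0x38 } };

    unsigned char raw[16];
    std::memcpy(raw, &g, sizeof raw);
    for (unsigned char b : raw)
        std::printf("%02x", b);   // prints b574d5dea045424f873746345c09c238
    std::printf("\n");
    return 0;
}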
Setting a value (for "Better Battery" in this example) works as follows:
import ctypes

modes = {
    "better_battery": "77c71c9647259d4f81747d86181b8a7a",
    "better_performance": "00000000000000000000000000000000",
    "best_performance": "b574d5dea045424f873746345c09c238"
}

ctypes.windll.powrprof.PowerSetActiveOverlayScheme(bytes.fromhex(modes["better_battery"]))
For me, this was a nice opportunity to experiment with Python's ctypes :).

Here is a PowerShell version that sets up a scheduled task to toggle the power overlay every minute. It is based on the godsend answers from Michael and Aaron.
The CPU throttling issue has plagued me on multiple Lenovo X1 Yoga laptops (Gen2 and Gen4 models).
# Toggle power mode away from and then back to the effective overlay
$togglePowerOverlay = {
    $function = @'
[DllImport("powrprof.dll", EntryPoint="PowerSetActiveOverlayScheme")]
public static extern int PowerSetActiveOverlayScheme(Guid OverlaySchemeGuid);

[DllImport("powrprof.dll", EntryPoint="PowerGetActualOverlayScheme")]
public static extern int PowerGetActualOverlayScheme(out Guid ActualOverlayGuid);

[DllImport("powrprof.dll", EntryPoint="PowerGetEffectiveOverlayScheme")]
public static extern int PowerGetEffectiveOverlayScheme(out Guid EffectiveOverlayGuid);
'@

    $power = Add-Type -MemberDefinition $function -Name "Power" -PassThru -Namespace System.Runtime.InteropServices

    $modes = @{
        "better_battery"     = [guid] "961cc777-2547-4f9d-8174-7d86181b8a7a";
        "better_performance" = [guid] "00000000000000000000000000000000";
        "best_performance"   = [guid] "ded574b5-45a0-4f42-8737-46345c09c238"
    }

    $actualOverlayGuid = [Guid]::NewGuid()
    $ret = $power::PowerGetActualOverlayScheme([ref]$actualOverlayGuid)
    if ($ret -eq 0) {
        "Actual power overlay scheme: $($($modes.GetEnumerator() | where { $_.value -eq $actualOverlayGuid }).Key)." | Write-Host
    }

    $effectiveOverlayGuid = [Guid]::NewGuid()
    $ret = $power::PowerGetEffectiveOverlayScheme([ref]$effectiveOverlayGuid)
    if ($ret -eq 0) {
        "Effective power overlay scheme: $($($modes.GetEnumerator() | where { $_.value -eq $effectiveOverlayGuid }).Key)." | Write-Host

        $toggleOverlayGuid = if ($effectiveOverlayGuid -ne $modes["best_performance"]) { $modes["best_performance"] } else { $modes["better_performance"] }

        # Toggle Power Mode
        $ret = $power::PowerSetActiveOverlayScheme($toggleOverlayGuid)
        if ($ret -eq 0) {
            "Toggled power overlay scheme to: $($($modes.GetEnumerator() | where { $_.value -eq $toggleOverlayGuid }).Key)." | Write-Host
        }

        $ret = $power::PowerSetActiveOverlayScheme($effectiveOverlayGuid)
        if ($ret -eq 0) {
            "Toggled power overlay scheme back to: $($($modes.GetEnumerator() | where { $_.value -eq $effectiveOverlayGuid }).Key)." | Write-Host
        }
    }
    else {
        "Failed to toggle active power overlay scheme." | Write-Host
    }
}

# Execute the above
& $togglePowerOverlay
Create a scheduled job that runs the above script every minute:
Note that Register-ScheduledJob only works with Windows PowerShell, not PowerShell Core.
I couldn't get the job to start without using the System principal. Otherwise it gets stuck indefinitely in Task Scheduler with "The task has not run yet. (0x41303)".
Get-Job will show the job in Windows PowerShell, but Receive-Job doesn't return anything, even though there is job output in the directory $env:UserProfile\AppData\Local\Microsoft\Windows\PowerShell\ScheduledJobs\$taskName\Output. This might be due to running as System while trying to Receive-Job as another user?
I wish -MaxResultCount 0 were supported to hide the job from Get-Job, but alas it is not.
You can see the task in Windows Task Scheduler under the Task Scheduler Library path \Microsoft\Windows\PowerShell\ScheduledJobs.
It was necessary to have two script blocks, one as the command and one as the arguments (which get serialized/deserialized as a string), because PowerShell script blocks use dynamic closures instead of lexical closures, so referencing one script block from another when creating a new runspace is not readily possible.
The minimum interval for scheduled tasks is 1 minute. If it turns out that more frequent toggling is needed, one could add a loop to the toggling code and schedule the task only for startup or login.
$registerJob = {
    param($script)
    $taskName = "FixCpuThrottling"
    Unregister-ScheduledJob -Name $taskName -ErrorAction Ignore
    $job = Register-ScheduledJob -Name $taskName -ScriptBlock $([scriptblock]::create($script)) -RunEvery $([TimeSpan]::FromMinutes(1)) -MaxResultCount 1
    $psSobsSchedulerPath = "\Microsoft\Windows\PowerShell\ScheduledJobs";
    $principal = New-ScheduledTaskPrincipal -UserId SYSTEM -LogonType ServiceAccount
    $someResult = Set-ScheduledTask -TaskPath $psSobsSchedulerPath -TaskName $taskName -Principal $principal
}
# Run as Administrator needed in order to call Register-ScheduledJob
powershell.exe -command $registerJob -args $togglePowerOverlay
To stop and remove the scheduled job (must use Windows PowerShell):
$taskName = "FixCpuThrottling"
Unregister-ScheduledJob -Name $taskName -ErrorAction Ignore

Related

Accessing Max Input Delay with C++ on Windows

I am having trouble obtaining certain data from Windows Performance Counters with C++. I will preface my question by stating that I am new to both C++ and to developing for Windows, but I have spent some time on this issue already so I feel familiar with the concepts I am discussing here.
Question:
How do I use Windows PDH (Performance Data Helper) C++ to obtain Max Input Delay--either per session or per process? Are there certain Performance Counters that are not available outside of perfmon?
Progress so far:
I have used this example to log some Performance Counters successfully, but the ones I want produce the error code 0xC0000BB8: "The specified object is not found on the system." This confuses me because I can access the objects--"User Input Delay per Process" or "User Input Delay per Session"--fine through perfmon. I even went as far as enabling the counter in the registry as outlined in the article I linked in my question, despite being on a build of Windows 10 that should have it enabled by default. I had to make a small change to get the code to compile, but I have changed only the definition of COUNTER_PATH during my testing because, again, the code works as advertised except when it comes to the counter I want to access. Specifically:
Does not compile:
CONST PWSTR COUNTER_PATH = L"\\Processor(0)\\% Processor Time";
Does compile and log:
CONST wchar_t *COUNTER_PATH = L"\\Processor(0)\\% Processor Time";
OR
CONST PWSTR COUNTER_PATH = const_cast<PWSTR>(TEXT( "\\Processor(0)\\% Processor Time" ));
Compiles, but throws error code 0xC0000BB8 at runtime (This is the Counter I want to access):
CONST PWSTR COUNTER_PATH = const_cast<PWSTR>(TEXT( "\\User Input Delay per Session(1)\\Max Input Delay" ));
The hardcoded session ID of 1 in the string was for troubleshooting purposes, but wildcard (*) and 0 were also used with the same result. The counter path matches that shown in perfmon.
Essentially, all Performance Counters that I have attempted to access with this code--about 5 completely different ones--have successfully logged the data being requested, but the one I want to access continues to be evasive.
I asked this same question on Microsoft Q&A and received the answer:
The Performance Counters in question require administrator privileges to access. All I had to do was run this program in an administrator command prompt, and that solved my issue.
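For reference, a bare-bones PDH sketch of the kind of query described above might look like the following. The counter path and session index come from the question and are only examples; as the answer notes, the process must run elevated:

// Sketch: read "Max Input Delay" through PDH. Requires elevation; link with pdh.lib.
#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <iostream>

#pragma comment(lib, "pdh.lib")

int main()
{
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER counter = nullptr;

    if (PdhOpenQueryW(nullptr, 0, &query) != ERROR_SUCCESS)
        return 1;

    // Example path; session 1 is arbitrary here and may need adjusting on your machine.
    if (PdhAddEnglishCounterW(query, L"\\User Input Delay per Session(1)\\Max Input Delay",
                              0, &counter) != ERROR_SUCCESS)
        return 1;

    // Collect twice with a delay so a formatted value is available.
    PdhCollectQueryData(query);
    Sleep(1000);
    PdhCollectQueryData(query);

    PDH_FMT_COUNTERVALUE value;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, nullptr, &value) == ERROR_SUCCESS)
        std::cout << "Max input delay: " << value.doubleValue << " ms\n";  // counter reports milliseconds

    PdhCloseQuery(query);
    return 0;
}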

Can a process be limited on how much physical memory it uses?

I'm working on an application, which has the tendency to use excessive amounts of memory, so I'd like to reduce this.
I know this is possible for a Java program, by adding a Maximum heap size parameter during startup of the Java program (e.g. java.exe ... -Xmx4g), but here I'm dealing with an executable on a Windows-10 system, so this is not applicable.
The title of this post refers to this URL, which mentions a way to do this, but which also states:
Maximum Working Set. Indicates the maximum amount of working set assigned to the process. However, this number is ignored by Windows unless a hard limit has been configured for the process by a resource management application.
Meanwhile I can confirm that the following lines of code indeed don't have any impact on the memory usage of my program:
HANDLE h_jobObject = CreateJobObject(NULL, L"Jobobject");
if (!AssignProcessToJobObject(h_jobObject, OpenProcess(PROCESS_ALL_ACCESS, FALSE, GetCurrentProcessId())))
{
    throw "COULD NOT ASSIGN SELF TO JOB OBJECT!:";
}

JOBOBJECT_EXTENDED_LIMIT_INFORMATION tagJobBase = { 0 };
tagJobBase.BasicLimitInformation.MaximumWorkingSetSize = 1; // far too small, just to see what happens
BOOL bSuc = SetInformationJobObject(h_jobObject, JobObjectExtendedLimitInformation, (LPVOID)&tagJobBase, sizeof(tagJobBase));
=> bSuc is true, or is there anything else I should expect?
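For comparison, here is a minimal sketch (not a verified fix) of how the job-object limits are usually combined with explicit LimitFlags; without the corresponding flag bits, SetInformationJobObject does not apply the working-set numbers at all. The sizes below are placeholders:

// Sketch: job-object limits with explicit LimitFlags (all sizes are placeholders).
#include <windows.h>
#include <iostream>

int main()
{
    HANDLE job = CreateJobObjectW(nullptr, nullptr);
    if (!job) return 1;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};

    // Working-set limit: minimum and maximum must be supplied together.
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_WORKINGSET
                                            | JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.BasicLimitInformation.MinimumWorkingSetSize = 1 * 1024 * 1024;    // placeholder
    limits.BasicLimitInformation.MaximumWorkingSetSize = 64 * 1024 * 1024;   // placeholder

    // Hard cap on committed (private) memory, which is usually what matters.
    limits.ProcessMemoryLimit = 512 * 1024 * 1024;                           // placeholder

    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
    {
        std::cerr << "SetInformationJobObject failed: " << GetLastError() << "\n";
        return 1;
    }

    // Assign the current process to the job (fails if it is already in a job
    // that does not allow nesting).
    if (!AssignProcessToJobObject(job, GetCurrentProcess()))
    {
        std::cerr << "AssignProcessToJobObject failed: " << GetLastError() << "\n";
        return 1;
    }

    return 0;
}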
On top of this, the mentioned tools (resource management applications, like Hyper-V) don't seem to work on my Windows 10 system.
Besides this, there seems to be another post about this subject, "Is there any way to force the WorkingSet of a process to be 1GB in C++?", but there the results seem to be negative too.
For a good understanding: I'm working in C++, so the solutions proposed in that URL are not applicable.
So now I'm stuck with the simple question: is there a way, implementable in C++, to limit the memory usage of the current process, running on Windows-10?
Does anybody have an idea?
Thanks in advance

How can we get a CPU temperature through WMI?

I installed WMI code creator from here, and I'm wondering how we can use it to get the CPU temperature.
The application gives many options (as shown below), but I am not sure where I have to click to get the CPU temperature.
I went to the description of WMI code creator and saw the following:
The WMI Code Creator tool allows you to generate VBScript, C#, and VB
.NET code that uses WMI to complete a management task such as querying
for management data, executing a method from a WMI class, or receiving
event notifications using WMI.
Namespace: root\wmi
Path: MSAcpi_ThermalZoneTemperature
To run this (using wmic) from the Windows command line (cmd.exe) the command would be:
wmic /namespace:\\root\wmi PATH MSAcpi_ThermalZoneTemperature get CriticalTripPoint, CurrentTemperature
Attention: the results are in tenths of a kelvin (kelvin × 10), so you need to divide the result by 10 and then subtract 273.15 to get °C.
More information:
WUtils.com : MSAcpi_ThermalZoneTemperature Properties
Wikipedia : Kelvin (and conversion to/from °C and °F)
Wikipedia : Windows Management Instrumentation (WMI)
MSDN : WMI Command Line (WMIC)
Tech Advisor : What's the Best CPU Temperature?
SpeedFan : Fan & Temperature control utility (Freeware)
systeminformation: systeminformation npm package (for nodejs)
For those looking for a PowerShell solution:
Get-CimInstance -Namespace root/wmi -ClassName MsAcpi_ThermalZoneTemperature -Filter "Active='True' and CurrentTemperature<>2732" -Property InstanceName, CurrentTemperature |
    Select-Object InstanceName, @{n='CurrentTemperatureC';e={'{0:n0} C' -f (($_.CurrentTemperature - 2732) / 10.0)}}
The WMI/CIM filter here is only looking for active sensors that aren't returning 0 C as the temperature. My system returns several sensors with that value and I assume they're just disabled or otherwise non-functional. The InstanceName property should give you an approximate idea where the sensor is located. In my systems, even though the property is supposed to be in tenths of degrees K, it's always reporting in whole degree values and it appears to round the -273.15 C absolute zero to -273.2. Other systems or sensors may vary.
Note that you must be an administrator to query the MsAcpi_ThermalZoneTemperature class.
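If you need the same reading from native C++ rather than wmic or PowerShell, a bare-bones WMI client looks roughly like this. It is a generic WMI query sketch (error handling trimmed, run elevated, link with wbemuuid.lib), not code taken from the answers above:

// Sketch: query MSAcpi_ThermalZoneTemperature via WMI from C++.
#define _WIN32_DCOM
#include <windows.h>
#include <comdef.h>
#include <Wbemidl.h>
#include <iostream>

#pragma comment(lib, "wbemuuid.lib")

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return 1;
    // May return RPC_E_TOO_LATE if security was already initialized; that is fine here.
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator *locator = nullptr;
    if (FAILED(CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                                IID_IWbemLocator, reinterpret_cast<void **>(&locator))))
        return 1;

    IWbemServices *services = nullptr;
    if (FAILED(locator->ConnectServer(_bstr_t(L"ROOT\\WMI"), nullptr, nullptr, nullptr,
                                      0, nullptr, nullptr, &services)))
        return 1;

    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE, nullptr, EOAC_NONE);

    IEnumWbemClassObject *results = nullptr;
    if (FAILED(services->ExecQuery(
            _bstr_t(L"WQL"),
            _bstr_t(L"SELECT CurrentTemperature FROM MSAcpi_ThermalZoneTemperature"),
            WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY, nullptr, &results)))
        return 1;

    IWbemClassObject *row = nullptr;
    ULONG returned = 0;
    while (results->Next(WBEM_INFINITE, 1, &row, &returned) == S_OK && returned)
    {
        VARIANT value;
        if (SUCCEEDED(row->Get(L"CurrentTemperature", 0, &value, nullptr, nullptr)))
        {
            // Reported in tenths of a kelvin, same as the wmic output above.
            double celsius = value.lVal / 10.0 - 273.15;
            std::cout << "Thermal zone temperature: " << celsius << " C\n";
            VariantClear(&value);
        }
        row->Release();
    }

    results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}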
My HP laptop has HP-specific WMI objects that contain the temperature in Celsius and the fan speed in RPM.
Running this WMIC command in administrator mode will retrieve them:
wmic /NAMESPACE:\\root\HP\InstrumentedBIOS path HPBIOS_BIOSNumericSensor get Name,CurrentReading
Adding syntax for looping it every 5 seconds:
FOR /L %G IN (1,1,100) DO wmic /NAMESPACE:\\root\HP\InstrumentedBIOS path HPBIOS_BIOSNumericSensor get Name,CurrentReading && PING localhost -l 0 -n 5 >NUL
On my Dell laptop I could get the CPU temperature in Celsius like this:
$data = Get-WMIObject -Query "SELECT * FROM Win32_PerfFormattedData_Counters_ThermalZoneInformation" -Namespace "root/CIMV2"
@($data)[0].HighPrecisionTemperature

How to correctly determine fastest CDN, mirror, download server in C++

The question I'm struggling with is how to determine, in C++, which server has the fastest connection for the client to git clone from or download a tarball from. Basically, I want to choose, from a collection of known mirrors, the one that will be used for downloading content.
The following code I wrote demonstrates what I am trying to achieve more clearly, perhaps, but I believe it's not something one should use in production :).
So let's say I have two known source mirrors, git-1.example.com and git-2.example.com, and I want to download tag-x.tar.gz from the one the client has the best connectivity to.
CDN.h
#ifndef CDN_H
#define CDN_H

#include <iostream>
#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/time.h>

using namespace std;

class CDN {
public:
    long int dl_time;
    string host;
    string proto;
    string path;
    string dl_speed;
    double kbs;
    double mbs;
    double sec;
    long int ms;

    CDN(string, string, string);
    void get_download_speed();
    bool operator < (const CDN&);
};

#endif
CDN.cpp
#include "CDN.h"

CDN::CDN(string protocol, string hostname, string downloadpath)
{
    proto = protocol;
    host = hostname;
    path = downloadpath;
    dl_time = ms = sec = mbs = kbs = 0;
    get_download_speed();
}

void CDN::get_download_speed()
{
    struct timeval dl_started;
    gettimeofday(&dl_started, NULL);
    long int download_start = ((unsigned long long) dl_started.tv_sec * 1000000) + dl_started.tv_usec;

    char buffer[256];
    char cmd_output[32];
    sprintf(buffer,"wget -O /dev/null --tries=1 --timeout=2 --no-dns-cache --no-cache %s://%s/%s 2>&1 | grep -o --color=never \"[0-9.]\\+ [KM]*B/s\"",proto.c_str(),host.c_str(),path.c_str());
    fflush(stdout);
    FILE *p = popen(buffer,"r");
    fgets(cmd_output, sizeof(cmd_output), p);   // read at most sizeof(cmd_output), not sizeof(buffer)
    cmd_output[strcspn(cmd_output, "\n")] = 0;
    pclose(p);
    dl_speed = string(cmd_output);

    struct timeval download_ended;
    gettimeofday(&download_ended, NULL);
    long int download_end = ((unsigned long long)download_ended.tv_sec * 1000000) + download_ended.tv_usec;

    size_t output_type_k = dl_speed.find("KB/s");
    size_t output_type_m = dl_speed.find("MB/s");
    if(output_type_k!=string::npos) {
        string dl_bytes = dl_speed.substr(0,output_type_k-1);
        kbs = atof(dl_bytes.c_str());
        mbs = kbs / 1000;
    } else if(output_type_m!=string::npos) {
        string dl_bytes = dl_speed.substr(0,output_type_m-1);
        mbs = atof(dl_bytes.c_str());
        kbs = mbs * 1000;
    } else {
        cout << "Should catch the errors..." << endl;
    }

    ms = download_end - download_start;      // elapsed time in microseconds
    sec = ((double)ms) / 1000000.0;          // convert microseconds to seconds
    dl_time = ms;                            // used by operator< for comparisons
}

bool CDN::operator < (const CDN& other)
{
    return dl_time < other.dl_time;
}
main.cpp
#include "CDN.h"

int main()
{
    cout << "Checking CDN's" << endl;
    char msg[128];
    CDN cdn_1 = CDN("http","git-1.example.com","test.txt");
    CDN cdn_2 = CDN("http","git-2.example.com","test.txt");

    // Only operator< is defined, so compare with it directly (smaller dl_time = faster).
    if (cdn_1 < cdn_2)
    {
        sprintf(msg,"Downloading tag-x.tar.gz from %s %s since it's faster than %s %s",
            cdn_1.host.c_str(),cdn_1.dl_speed.c_str(),cdn_2.host.c_str(),cdn_2.dl_speed.c_str());
        cout << msg << endl;
    }
    else
    {
        sprintf(msg,"Downloading tag-x.tar.gz from %s %s since it's faster than %s %s",
            cdn_2.host.c_str(),cdn_2.dl_speed.c_str(),cdn_1.host.c_str(),cdn_1.dl_speed.c_str());
        cout << msg << endl;
    }
    return 0;
}
So what are your thoughts, and how would you approach this? What are the alternatives for replacing this wget call while achieving the same thing cleanly in C++?
EDIT:
As @molbdnilo correctly pointed out,
ping measures latency, but you're interested in throughput.
I therefore edited the demonstration code to reflect that; however, the question remains the same.
For starters, trying to determine "fastest CDN mirror" is an inexact science. There is no universally accepted definition of what "fastest" means. The most one can hope for, here, is to choose a reasonable heuristic for what "fastest" means, and then measure this heuristic as precisely as can be under the circumstances.
In the code example here, the chosen heuristic seems to be how long it takes to download a sample file from each mirror via HTTP.
That's not such a bad choice to make, actually. You could reasonably make an argument that some other heuristic might be slightly better, but the basic test of how long it takes to transfer a sample file, from each candidate mirror, I would think is a very reasonable heuristic.
The big, big problem I see here is the actual implementation of this heuristic. The way that this attempt to time the sample download is made here does not appear to be very reliable, and it will end up measuring a whole bunch of unrelated factors that have nothing to do with network bandwidth.
I see at least several opportunities here where external factors completely unrelated to network throughput will muck up the measured timings, and make them less reliable than they should be.
So, let's take a look at the code, and see how it attempts to measure the download time. Here's the meat of it:
sprintf(buffer,"wget -O /dev/null --tries=1 --timeout=2 --no-dns-cache --no-cache %s://%s/%s 2>&1 | grep -o --color=never \"[0-9.]\\+ [KM]*B/s\"",proto.c_str(),host.c_str(),path.c_str());
fflush(stdout);
FILE *p = popen(buffer,"r");
fgets(cmd_output, sizeof(buffer), p);
cmd_output[strcspn(cmd_output, "\n")] = 0;
pclose(p);
... and gettimeofday() gets used to sample the system clock before and after, to figure out how long this took. Ok, that's great. But what would this actually measure?
It helps a lot here, to take a blank piece of paper, and just write down everything that happens here as part of the popen() call, step by step:
1) A new child process is fork()ed. The operating system kernel creates a new child process.
2) The new child process exec()s /bin/bash, or your default system shell, passing in a long string that starts with "wget", followed by a bunch of other parameters that you see above.
3) The operating system kernel loads "/bin/bash" as the new child process. The kernel loads and opens any and all shared libraries that the system shell normally needs to run.
4) The system shell process initializes. It reads the $HOME/.bashrc file and executes it, most likely, together with any standard shell initialization files and scripts that your system shell normally does. That itself can create a bunch of new processes, that have to be initialized and executed, before the new system shell process actually gets around to...
5) ...parsing the "wget" command it originally received as an argument, and exec()uting it.
6) The operating system kernel now loads "wget" as the new child process. The kernel loads and opens any and all shared libraries that the wget process needs. Looking at my Linux box, "wget" loads no less than 25 separate shared libraries, including kerberos and ssl libraries. Each one of those shared libraries gets initialized.
7) The wget command performs a DNS lookup on the host, to obtain the IP address of the web server to connect to. If the local DNS server doesn't have the CDN mirror's hostname's IP address cached, it often takes several seconds to look up the CDN mirrors's DNS zone's authoritative DNS servers, then query them for the IP address, hopping this way and that way, across the intertubes.
Now, one moment... I seem have forgotten what we were trying to do here... Oh, I remember: which CDN mirror is "fastest", by downloading a sample file from each mirror, right? Yeah, that must be it!
Now, what does all of that work done above, all of that work, have to do with determining which content mirror is the fastest???
Err... Not much, from the way it looks to me. Now, none of the above should really be such shocking news. After all, all of that is described in popen()'s manual page. If you read popen's manual page, it tells you that's ...what it does. Starts a new child process. Then executes the system shell, in order to execute the requested command. Etc, etc, etc...
Now, we're not talking about measuring time intervals that last many seconds, or minutes. If we're trying to measure something that takes a long time to execute, the relative overhead of popen()'s approach would be negligible, and not much to worry about. But the expected time to download the sample file, for the purpose of figuring out how fast each content mirror is -- I would expect that the actual download time would be relatively short. But it seems to me that the overhead to doing it this way, of forking an entirely new process, and executing first the system shell, then the wget command, with its massive list of dependencies, is going to be statistically significant.
And as I mentioned in the beginning, given that this is trying to determine the vaguely nebulous concept of "fastest mirror", which is already an inexact science -- it seems to me that you'd really want to get rid of as much utterly irrelevant overhead here -- as much as possible, in order to get as accurate of a result.
So, it seems to me that you don't really want to measure here anything other than what you're trying to measure: network bandwidth. And you certainly don't want to measure any of what transpires before any network activity takes place.
I still think that trying to time a sample download is a reasonable proposition. What's not reasonable here is all the popen and wget bloat. So, forget all of that. Throw it out the window. You want to measure how long it takes to download a sample file over HTTP, from each candidate mirror? Well, why don't you do just that?
1) Create a new socket().
2) Use getaddrinfo() to perform a DNS lookup, and obtain the candidate mirror's IP address.
3) connect() to the mirror's HTTP port.
4) Format the appropriate HTTP GET request, and send it to the server.
The above does pretty much what the popen/wget does, up to this point.
And only now I would start the clock running by grabbing the current gettimeofday(), then wait until I read the entire sample file from the socket, then grab the current gettimeofday() once more, to get the ending time of the transmission, and then calculate the actual time it took to receive the file from the mirror.
Only then will I have some reasonable confidence that I'm actually measuring the time it takes to receive a sample file from a CDN mirror, while completely ignoring the time it takes to execute a bunch of completely unrelated processes; and only then, by taking the same sample from multiple CDN mirrors, is there any hope of picking one using as sensible a heuristic as possible.
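To make that concrete, a rough sketch of the socket-based measurement might look like the following. It is deliberately minimal: plain HTTP on port 80, no redirects, no chunked-transfer handling, and the hostnames and path are just the placeholders from the question:

// Sketch: time how long it takes to fetch a sample file over plain HTTP.
// Placeholders: host and path. No HTTPS, redirects, or robust HTTP parsing.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstring>
#include <iostream>
#include <string>

// Returns download time in milliseconds, or -1.0 on error.
double time_http_download(const std::string &host, const std::string &path)
{
    addrinfo hints{}, *res = nullptr;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host.c_str(), "80", &hints, &res) != 0) return -1.0;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        freeaddrinfo(res);
        return -1.0;
    }
    freeaddrinfo(res);

    std::string request = "GET /" + path + " HTTP/1.1\r\nHost: " + host +
                          "\r\nConnection: close\r\n\r\n";
    send(fd, request.data(), request.size(), 0);

    // Start timing only once the request is on its way; everything before this
    // (DNS lookup, connect) is deliberately excluded, as discussed above.
    auto start = std::chrono::steady_clock::now();

    char buffer[4096];
    ssize_t n;
    while ((n = recv(fd, buffer, sizeof buffer, 0)) > 0) {
        // Discard the data; we only care about how long the transfer takes.
    }
    auto end = std::chrono::steady_clock::now();
    close(fd);

    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main()
{
    double t1 = time_http_download("git-1.example.com", "test.txt");
    double t2 = time_http_download("git-2.example.com", "test.txt");
    std::cout << "git-1: " << t1 << " ms, git-2: " << t2 << " ms\n";
    return 0;
}

The response headers are timed together with the body here; for a reasonably large sample file the difference is negligible, but you could parse the response and time only the body if you want to be stricter.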

How do I set the DPI of a scan using TWAIN in C++

I am using TWAIN in C++ and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead the page just scans with set defaults and is stored for them. I need to set the DPI manually but I cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION. When I look at the image's info, though, it always shows the same resolution no matter what I set through the ICAPs. Is there another way to set the resolution of a scanned-in image, or is there just an additional step that needs to be done that I cannot find in the documentation anywhere?
Thanks
I use ICAP_XRESOLUTION and ICAP_YRESOLUTION to set the scan resolution for a scanner, and it works at least for a number of HP scanners.
Code snippet:
float x_res = 1200;

// cap, val_p and ret_code are assumed to be declared elsewhere (this is a snippet)
cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if(cap.hContainer)
{
    val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;
    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32) &fix32_val);   // store the TW_FIX32 bit pattern in the 32-bit Item
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}

TW_FIX32 FloatToFIX32(float i_float)
{
    TW_FIX32 Fix32_value;
    TW_INT32 value = (TW_INT32) (i_float * 65536.0 + 0.5);
    Fix32_value.Whole = LOWORD(value >> 16);
    Fix32_value.Frac  = LOWORD(value & 0x0000ffffL);
    return Fix32_value;
}
The value should be of type TW_FIX32, which is a fixed-point number format defined by TWAIN (strange but true).
I hope it works for you!
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the buggiest drivers out there. Controlling the scanning process with TWAIN has always been a big headache, because most drivers have never been tested without the scan dialog.
As far as I know there is also no test suite for TWAIN drivers, so each of them will behave slightly differently.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners and a scanner module with lots of hacks and workarounds for each different driver.
Take ICAP_XRESOLUTION, for example: the TWAIN documentation says you have to send the resolution as a 32-bit float. Have you tried setting it using an integer instead? Or sending it as a float but putting the bit representation of an integer into the float, or vice versa? All of this could work for the driver you're working with. Or it could not work at all.
I doubt the situation has changed much since then. So good luck getting it working on at least half of the machines that are out there.