How to perform sleep state S1 using WINAPI (C++)

The following code puts the system into sleep state S3; I am looking for a way to enter sleep state S1 instead.
#include <windows.h>
#include <powrprof.h>
#pragma comment(lib, "PowrProf.lib")

bool PerformS3() {
    // Arm a wake timer for 100 seconds from now; a negative due time is
    // relative, expressed in 100-nanosecond units.
    int wait = 100;
    LARGE_INTEGER WaitTime;
    WaitTime.QuadPart = wait;
    WaitTime.QuadPart *= -10000000;
    HANDLE hTimer = CreateWaitableTimer(NULL, FALSE, NULL);
    if (NULL == hTimer)
    {
        return false;
    }
    // The final TRUE allows the timer to wake the system.
    if (0 == SetWaitableTimer(hTimer, &WaitTime, 0, NULL, NULL, TRUE))
    {
        return false;
    }
    // Suspend (not hibernate), unforced, wake events enabled.
    if (0 == SetSuspendState(FALSE, FALSE, FALSE))
    {
        return false;
    }
    return true;
}

Short answer: in Windows, there's no S1 or S3, there's only "sleep mode". How it's implemented depends on a number of factors and is hardware- and software-specific. You can only change that, to some degree, by reconfiguring power settings in Control Panel and BIOS setup.
As the article you linked to, System Power States (Windows), hints, Windows Power Management does not expose ACPI power states but rather uses its own states. A somewhat more explicit declaration of that is at Standby Explained (S1, S3) – Omar Shahine – MSDN Blogs.
How these map to ACPI states depends on motherboard capabilities, driver capabilities, and system/BIOS settings.
In particular, Sleep mode used to map to either S1 or S3 (depending on BIOS settings), and newer versions of Windows can also use "Hybrid sleep" or "Away mode".
According to Which sleep mode should I use? S1 or S3? - Tom's Hardware and my personal experience, the S1/S3 switch specifically is either an option in BIOS setup or a jumper on the motherboard.
Judging by the way you phrased your question, you will probably be fine with "Away mode".
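If away mode does fit, a minimal sketch (using SetThreadExecutionState, which is how an application requests it) looks like this; when the system is then told to sleep, it enters away mode (display and audio off, work continues) rather than a true suspend:
#include <windows.h>

void RunWithAwayMode()
{
    // ES_CONTINUOUS keeps the request in force until it is cleared below.
    if (SetThreadExecutionState(ES_CONTINUOUS | ES_AWAYMODE_REQUIRED) == 0)
        return; // away mode unsupported or the request failed

    // ... long-running work; a sleep triggered now becomes away mode ...

    // Clear the request so normal sleep behavior is restored.
    SetThreadExecutionState(ES_CONTINUOUS);
}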

Related

High-precision timed operations with a multiprocess application on Windows/C++

I have multiple processes (in different exe files generated by subprojects) created by my main program.
What I want to do is run each process for about 1-2 milliseconds within every 40-50 millisecond major frame. When I use SuspendThread/ResumeThread to suspend one process (by suspending all the threads it has, though each has only one) and resume the next, a single switch (suspend the old, resume the new) takes about 60 milliseconds, which is longer than my entire major frame. By the way, I know that using Sleep is not advised in this situation, since a single sleep/wake operation takes 15-30 ms, and I don't use any.
If I lower the priority of the running process and raise the priority of the next one, is the context switch guaranteed by Windows to occur within microseconds?
Or what should I consider in order to achieve a process switch with microsecond precision?
And I wonder how long a simple SuspendThread/ResumeThread operation normally takes.
Currently I can't use threads instead of processes, since I need the memory isolation of a process, and my processes may spawn and terminate their own threads. Do wait handles and similar synchronization methods give me the high-precision timing?
Edit: The proposed sync objects have at best millisecond resolution (waitable timers, multimedia timers, etc. all take their parameters in ms and report in ms). I need to use QueryPerformanceCounter and other means to achieve high resolution, as I mentioned.
As Remy says, you should be doing this with synchronisation objects - that's what they're for. Let's suppose that process A executes first and wants to 'hand over' to process B at some point. It can then do this:
SECURITY_ATTRIBUTES sa = { sizeof (SECURITY_ATTRIBUTES), NULL, TRUE };
// Auto-reset events (bManualReset = FALSE): each SetEvent releases exactly
// one waiter and the event rearms itself, which is what a hand-off needs.
HANDLE hHandOffToA = CreateEventW (&sa, FALSE, FALSE, L"HandOffToA");
HANDLE hHandOffToB = CreateEventW (&sa, FALSE, FALSE, L"HandOffToB");

// Start process B
CreateProcess (...);

while (!quit)
{
    // Do work, and then:
    SetEvent (hHandOffToB);
    WaitForSingleObject (hHandOffToA, INFINITE);
}

CloseHandle (hHandOffToA);
CloseHandle (hHandOffToB);
And process B can then do:
// Note: kernel object names are case-sensitive, so these must match the
// names passed to CreateEventW exactly.
HANDLE hHandOffToA = OpenEventW (EVENT_MODIFY_STATE, FALSE, L"HandOffToA");
HANDLE hHandOffToB = OpenEventW (SYNCHRONIZE, FALSE, L"HandOffToB");

while (!quit)
{
    WaitForSingleObject (hHandOffToB, INFINITE);
    // Do work, and then:
    SetEvent (hHandOffToA);
}

CloseHandle (hHandOffToA);
CloseHandle (hHandOffToB);
You should, of course, include proper error checking, and I've left it up to you to decide how process A should tell process B to shut down (I guess it could just kill it; one alternative is sketched below). Remember also that event names are system-wide, so choose them more carefully than I have done.
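For the shutdown, one sketch (the quit event and its name are illustrative, created by A alongside the hand-off events) is to have B wait on its hand-off event and a quit event together:
// Process B: hQuit is a hypothetical third named event created by A.
HANDLE handles[2] = { hHandOffToB, hQuit };
for (;;)
{
    DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
    if (which != WAIT_OBJECT_0)
        break;              // quit signaled (or the wait failed): leave
    // Do work, and then:
    SetEvent(hHandOffToA);
}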
For very high precision, one can use the function below:
void get_clock(LONGLONG* SYSTEM_TIME)
{
    // Conversion factor from QueryPerformanceCounter ticks to nanoseconds,
    // computed once on the first call.
    static double multiplier = 1.0;
    static BOOL alreadyCalculated = FALSE;
    if (alreadyCalculated == FALSE)
    {
        LARGE_INTEGER frequency;
        BOOL result = QueryPerformanceFrequency(&frequency);
        if (result == TRUE)
        {
            multiplier = 1000000000.0 / frequency.QuadPart;
        }
        else
        {
            DWORD error = GetLastError();
            // Handle or log the error as appropriate; the multiplier
            // stays at 1.0 (raw ticks).
        }
        alreadyCalculated = TRUE;
    }
    LARGE_INTEGER time;
    QueryPerformanceCounter(&time);
    *SYSTEM_TIME = static_cast<LONGLONG>(time.QuadPart * multiplier);
}
In my case sync objects didn't fit very well (though I have used them where time is not critical); instead I redesigned my logic to put placeholders where my thread needs to take action, and calculated the time using the function above.
But I am still not sure, when a higher-priority task arrives, how long it takes Windows to get it onto the CPU and preempt the running one.

How to check if a true hardware video adapter is used

I develop an application which shows something like a video in its window. I use technologies which are described here Introducing Direct2D 1.1. In my case the only difference is that eventually I create a bitmap using
ID2D1DeviceContext::CreateBitmap
then I use
ID2D1Bitmap::CopyFromMemory
to copy raw RGB data to it and then I call
ID2D1DeviceContext::DrawBitmap
to draw the bitmap. I use the high-quality cubic interpolation mode D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC for scaling to get the best picture, but in some cases (RDP, Citrix, virtual machines, etc.) it is very slow and consumes a lot of CPU. That happens because a non-hardware video adapter is used in those cases. So for non-hardware adapters I am trying to turn off the interpolation and use faster methods. The problem is that I cannot reliably check whether the system has a true hardware adapter.
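The switch itself is the easy part; a sketch of the fallback (m_d2dContext, m_bitmap, destRect, and the bHardwareGpu flag are illustrative names for the detection result discussed below):
// Use cheap linear filtering when no real GPU appears to be present;
// cubic interpolation is far more expensive under software rendering.
m_d2dContext->DrawBitmap(
    m_bitmap.Get(), &destRect, 1.0f,
    bHardwareGpu ? D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC
                 : D2D1_INTERPOLATION_MODE_LINEAR,
    nullptr);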
When I call D3D11CreateDevice, I use it with D3D_DRIVER_TYPE_HARDWARE, but on virtual machines it typically returns "Microsoft Basic Render Driver", which is a software driver that does not use the GPU (it consumes CPU instead). So currently I check the vendor ID: if the vendor is AMD (ATI), NVIDIA, or Intel, I use the cubic interpolation; otherwise I use the fastest method, which does not consume much CPU.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            // NVIDIA
            if (desc.VendorId == 0x10DE ||
                // AMD
                desc.VendorId == 0x1002 || // 0x1022 ?
                // Intel
                desc.VendorId == 0x8086)   // 0x163C, 0x8087 ?
            {
                bSupported = true;
            }
        }
    }
}
It works for a physical (console) Windows session, even in virtual machines. But in RDP sessions on real machines IDXGIAdapter still reports those vendors, yet the GPU is not used (I can see that via Process Hacker 2 and, for the ATI Radeon, AMD System Monitor), so I still get high CPU consumption with the cubic interpolation. In the case of an RDP session to Windows 7 with an ATI Radeon, it is 10% higher than via the physical console.
Or am I mistaken, and RDP somehow does use GPU resources, which would be why it returns a real hardware adapter via IDXGIAdapter::GetDesc?
DirectDraw
I also looked at the DirectX Diagnostic Tool. It looks like the "DirectDraw Acceleration" info field reports exactly what I need: for physical (console) sessions it says "Enabled", while for RDP and virtual machine sessions (without hardware video acceleration) it says "Not Available". I looked at the sources, and in theory I could use the same verification algorithm. But it is for DirectDraw, which I do not use in my application. I would like to use something directly linked to ID3D11Device, IDXGIDevice, IDXGIAdapter, and so on.
IDXGIAdapter1::GetDesc1 and DXGI_ADAPTER_FLAG
I also tried to use IDXGIAdapter1::GetDesc1 and check the flags.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter1;
        if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter1), reinterpret_cast<void**>(adapter1.GetAddressOf()))))
        {
            DXGI_ADAPTER_DESC1 desc;
            if (SUCCEEDED(adapter1->GetDesc1(&desc)))
            {
                // desc.Flags
                // DXGI_ADAPTER_FLAG_NONE = 0,
                // DXGI_ADAPTER_FLAG_REMOTE = 1,
                // DXGI_ADAPTER_FLAG_SOFTWARE = 2,
                // DXGI_ADAPTER_FLAG_FORCE_DWORD = 0xffffffff
            }
        }
    }
}
Information about the DXGI_ADAPTER_FLAG_SOFTWARE flag
Virtual Machine RDP Win Serv 2012 (Microsoft Basic Render Driver) -> (0x02) DXGI_ADAPTER_FLAG_SOFTWARE
Physical Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
Physical Win 7 (ATI Radeon) - > (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
For an RDP session on a real machine with a hardware adapter, Flags == 0, yet as I can see via Process Hacker 2 the GPU is not used. At least on Windows 7 with an ATI Radeon I see higher CPU usage during an RDP session. So it looks like DXGI_ADAPTER_FLAG_SOFTWARE is set only for the Microsoft Basic Render Driver, and the issue is not solved.
The question
Is there a correct way to check if a real hardware video card (GPU) is used for the current Windows session? Or maybe it is possible to check if a specific interpolation mode of ID2D1DeviceContext::DrawBitmap has hardware implementation and uses GPU for the current session?
UPD
The topic is not about detecting RDP or Citrix sessions, nor about detecting whether the application runs inside a virtual machine. I already have all those checks and use the linear interpolation in those cases. The topic is about detecting whether a real GPU is used for the current Windows session to display the desktop. I am looking for a more sophisticated solution that makes the decision using features of DirectX and DXGI.
If you want to detect the Microsoft Basic Render Driver, the best option is to use its VID/PID combo:
ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(device.As(&dxgiDevice)))
{
    ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if ( (desc.VendorId == 0x1414) && (desc.DeviceId == 0x8c) )
            {
                // WARNING: Microsoft Basic Render Driver is active.
                // Performance of this application may be unsatisfactory.
                // Please ensure that your video card is Direct3D10/11 capable
                // and has the appropriate driver installed.
            }
        }
    }
}
See Microsoft Docs and Anatomy of Direct3D 11 Create Device
You will probably find for testing/debugging that you don't want to explicitly block these scenarios, but you do want to provide some kind of warning or notice feedback to the user that they are using software rather than hardware rendering.
Remote Desktop detection from Win32 classic desktop applications is better done directly via GetSystemMetrics( SM_REMOTESESSION ).
See Microsoft Docs
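A minimal check along those lines:
#include <windows.h>

// Nonzero when the calling process is running in a Remote Desktop session.
bool IsRemoteSession()
{
    return GetSystemMetrics(SM_REMOTESESSION) != 0;
}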
Answering a 3-year-old question, as I struggled with this myself.
I had to go through the registry. The first step is to find the adapter LUID in the registry, in order to get the adapter GUID:
private string GetAdapterGuid(long luid)
{
    var directXRegistryKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\DirectX");
    if (directXRegistryKey == null)
        return "";

    var subKeyNames = directXRegistryKey.GetSubKeyNames();
    foreach (var subKeyName in subKeyNames)
    {
        var subKey = directXRegistryKey.OpenSubKey(subKeyName);
        if (subKey.GetValueKind("AdapterLuid") != RegistryValueKind.QWord)
            continue;

        var luidValue = (long)subKey.GetValue("AdapterLuid");
        if (luidValue == luid)
            return subKeyName;
    }

    return "";
}
Once you have that GUID, you can look up the details of the graphics card in HKLM like this. If it is virtual, the service name will be "INDIRECTKMD":
private bool IsVirtualAdapter(string adapterGuid)
{
    var videoRegistryKey = Registry.LocalMachine.OpenSubKey($@"SYSTEM\CurrentControlSet\Control\Video\{adapterGuid}\Video");
    if (videoRegistryKey == null)
        return false;

    if (videoRegistryKey.GetValueKind("Service") != RegistryValueKind.String)
        return false;

    var serviceName = (string)videoRegistryKey.GetValue("Service");
    return serviceName.ToUpper() == "INDIRECTKMD";
}
Checking the service name felt easier than parsing the DeviceDesc value.
My use case involved having the GUID ready, so I split this into two functions; you could merge them into one.
This also only detects RDP/MSTSC; additional service names might be needed for other virtual adapters. Or you could try to detect only the Nvidia/AMD/Intel driver names... up to you.

Interrogate which process has locked a file in Windows C++

I have two applications sharing the same lock file, and I need to know when the other application has either locked or unlocked the file. The code below was originally implemented on a Linux machine and is being ported to Windows 8, VS12. I have ported all the other code in the class successfully and am locking files with LockFile(handle, 0, 0, sizeof(int), 0) and the equivalent UnlockFile(...). However, I am having trouble with the following wait() function.
bool devices::comms::CDeviceFileLock::wait(bool locked,
                                           int timeout)
{
    // Retrieve the current pid of the process.
    pid_t pid = getpid();

    // Determine if we are tracking time.
    bool tracking = (timeout > 0);

    // Retrieve the lock information.
    struct flock lock;
    if (fcntl(m_iLockFile, F_GETLK, &lock) != 0)
        raiseException("Failed to retrieve lock file information");

    // Loop until the state changes.
    time_t timeNow = time(NULL);
    while ((pid == lock.l_pid)
           &&
           (lock.l_type != (locked ? F_WRLCK : F_UNLCK)))
    {
        // Retrieve the lock information.
        if (fcntl(m_iLockFile, F_GETLK, &lock) != 0)
            raiseException("Failed to retrieve lock file information");

        // Check for timeout, if we are tracking.
        if (tracking)
        {
            time_t timeCheck = time(NULL);
            if (difftime(timeCheck, timeNow) > timeout)
                return false;
        }
    }

    // Return success.
    return true;
}
Note: m_iLockFile used to be a file descriptor from open(); it is now called m_hLockFile and is a HANDLE from CreateFile().
I cannot seem to find the Windows equivalent of the fcntl F_GETLK command.
Does anyone know if I can either:
a) use an fcntl equivalent to interrogate locking information, to find out
which process has obtained the lock
b) suggest how the above can be re-written for Windows C++.
Note: The server application using the lock file is a standalone C++ executable,
however the client using the lock file is a WinRT Windows Application. So any
suggested solution cannot break the sandboxing of the client.
Thanks.
You are not going to find this in Windows; it is fundamentally unsound on a multitasking operating system. The value you'd get from an IsFileLocked() API function is meaningless: another process or thread could still lock the file a microsecond later.
The workaround is simple: if you need a lock then just try to acquire it. If the file is already locked then LockFile() will simply return FALSE, and GetLastError() tells you why. Now it is atomic, an essential property of a lock. If you can afford to wait for the lock, then use LockFileEx() without the LOCKFILE_FAIL_IMMEDIATELY option.
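A sketch of the blocking acquire, locking the same first sizeof(int) bytes as the question's LockFile call (the handle parameter is assumed to come from CreateFile):
#include <windows.h>

// Blocks until the byte range is ours: omitting LOCKFILE_FAIL_IMMEDIATELY
// makes LockFileEx wait. The OVERLAPPED carries the range's file offset.
bool AcquireLock(HANDLE hFile)
{
    OVERLAPPED ov = {}; // lock starting at offset 0
    return LockFileEx(hFile, LOCKFILE_EXCLUSIVE_LOCK, 0,
                      sizeof(int), 0, &ov) != FALSE;
}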
I am just googling for you, but I found this
"Various C language run-time systems use the IOCTLs for purposes
unrelated to Windows Sockets. As a consequence, the ioctlsocket
function and the WSAIoctl function were defined to handle socket
functions that were performed by IOCTL and fcntl in the Berkeley
Software Distribution."
There is also a brief discussion here - it is Python-based but has some clues.

Getting timezone in Windows with C++

I want to synchronize the Windows and Linux clocks. Windows gets its system clock (with the GetSystemTimeAsFileTime function) and sends it to Linux. Then Linux sets its clock accordingly (with the settimeofday function).
I also need to transmit the time zone of Windows and convert it to the Linux standard. How can I get the time zone of Windows in C++?
GetTimeZoneInformation is probably what you're looking for.
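A minimal sketch of reading it (the Bias fields are in minutes, with UTC = local time + Bias; StandardName is the display name you would transmit):
#include <windows.h>
#include <stdio.h>

int main()
{
    TIME_ZONE_INFORMATION tzi = {};
    DWORD mode = GetTimeZoneInformation(&tzi);
    if (mode == TIME_ZONE_ID_INVALID)
        return 1;

    // Apply whichever seasonal bias is currently in effect.
    LONG activeBias = tzi.Bias +
        (mode == TIME_ZONE_ID_DAYLIGHT ? tzi.DaylightBias : tzi.StandardBias);
    wprintf(L"Zone: %s, offset from UTC: %ld minutes\n",
            tzi.StandardName, -activeBias);
    return 0;
}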
Even if you're not synching to standard time, but to time between machines, you should use NTP.
NTP is a mature, robust protocol that has solved the whole stack of problems you're going to find, or have found already: discovery, comms transport, latency and jitter, timezone differences, managing drift so you don't confuse other processes sharing the same machine(s), actually setting the time correctly, permissions, etc.
Simply set up an NTP server on the machine you want as a master, and set up the NTP client on the other machine, querying the master. Simple and painless.
It's been a while since I set up NTP servers; I assume that you can use the NTP utilities that come standard with the operating systems to do the job with minimum configuration, as long as you have admin privileges on the boxes.
GetDynamicTimeZoneInformation is a more useful function: it also gives you the registry key for the time zone.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms724318(v=vs.85).aspx
GetDynamicTimeZoneInformation doesn't always work: the minimum supported versions are Windows Vista, Windows Server 2008, and Windows Phone 8, so for anything below that GetTimeZoneInformation is better.
However, another issue is that both sometimes return an empty StandardName or DaylightName. In that case you have to use the Windows registry. Here is the function, taken from GnuCash, which was in turn adapted from glib:
static std::string
windows_default_tzname(void)
{
    const char *subkey =
        "SYSTEM\\CurrentControlSet\\Control\\TimeZoneInformation";
    constexpr size_t keysize{128};
    HKEY key;
    char key_name[keysize]{};
    unsigned long tz_keysize = keysize;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, subkey, 0,
                      KEY_QUERY_VALUE, &key) == ERROR_SUCCESS)
    {
        if (RegQueryValueExA(key, "TimeZoneKeyName", nullptr, nullptr,
                             (LPBYTE)key_name, &tz_keysize) != ERROR_SUCCESS)
        {
            memset(key_name, 0, tz_keysize);
        }
        RegCloseKey(key);
    }
    return std::string(key_name);
}
This is what works for me and ports between Windows and Linux:
#include "time.h"
...
time_t now = time(NULL);
struct tm utctm;
utctm = *gmtime(&now);
utctm.tm_isdst = -1;
time_t utctt = mktime(&utctm);
// diff is the offset in seconds
long diff = now - utctt;

How to set the process priority in C++

I am working on a program to sort data, and I need to set the process to priority 31, which I believe is the highest process priority in Windows. I have done some research but can't figure out how to do it in C++.
The Windows API call SetPriorityClass allows you to change your process priority, see the example in the MSDN documentation, and use REALTIME_PRIORITY_CLASS to set the highest priority:
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS)
Caution: if you are asking for true realtime priority, you are going to get it. This is a nuke. The OS will mercilessly prioritize a realtime priority thread, well above even OS-level input processing, disk-cache flushing, and other high-priority time-critical tasks. You can easily lock up your entire system if your realtime thread(s) drain your CPU capacity. Be cautious when doing this, and unless absolutely necessary, consider using high-priority instead. More information
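Note that without the increase-scheduling-priority privilege (typically, without running elevated), a request for realtime is silently downgraded to high. A quick way to verify what you actually got:
// Ask for realtime, then check what the OS actually granted; without the
// required privilege the class falls back to HIGH_PRIORITY_CLASS.
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
if (GetPriorityClass(GetCurrentProcess()) != REALTIME_PRIORITY_CLASS)
{
    // Not running at realtime: degrade gracefully or warn the user.
}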
The following function will do the job:
void SetProcessPriority(LPCWSTR ProcessName, DWORD Priority)
{
    // Snapshot all running processes.
    HANDLE hSnap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (hSnap == INVALID_HANDLE_VALUE)
    {
        return;
    }

    PROCESSENTRY32 proc32;
    proc32.dwSize = sizeof(PROCESSENTRY32);

    // Walk the process list and adjust every instance matching the name.
    if (Process32First(hSnap, &proc32))
    {
        do
        {
            if (_wcsicmp(proc32.szExeFile, ProcessName) == 0)
            {
                HANDLE h = OpenProcess(PROCESS_SET_INFORMATION, FALSE, proc32.th32ProcessID);
                if (h != NULL)
                {
                    SetPriorityClass(h, Priority);
                    CloseHandle(h);
                }
            }
        } while (Process32Next(hSnap, &proc32));
    }

    CloseHandle(hSnap);
}
For example, to set the priority of every process named myapp.exe (a hypothetical name) to below normal, use:
SetProcessPriority(L"myapp.exe", BELOW_NORMAL_PRIORITY_CLASS);
For the current process no snapshot is needed; just call SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS) directly.
After (or before) SetPriorityClass, you must also set the individual thread priority to reach the maximum possible level. Additionally, an extra privilege is required for the realtime priority class, so be sure to acquire it (if available). SetThreadPriority is the companion API to SetPriorityClass.
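A sketch of the combination for the current process and thread (this pair is what reaches base priority 31):
// Raise the process class first, then push the calling thread to the top
// of that class; TIME_CRITICAL within REALTIME_PRIORITY_CLASS is level 31.
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);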