WINDOWPLACEMENT's showCmd... always 1? - c++

When I call GetWindowPlacement, WINDOWPLACEMENT::showCmd always seems to be 1, which is SW_SHOWNORMAL.
Does anyone know why this is so, and whether it is ever updated? Is this value maintained by the application itself or by the operating system?
I am running this on Windows 7.
I am using this to achieve the same purpose as mentioned in this thread: I am trying to re-show hidden windows that were previously visible, without storing the hidden windows in memory (hide/show will be called in different run sessions) or on disk.
void hide(const unsigned int pid){
    std::list<HWND> windowList = getWindowbyPID(pid);
    for(std::list<HWND>::iterator it = windowList.begin(); it != windowList.end(); it++){
        if(IsWindowVisible(*it)){ std::cout << "Hid WIN#" << *it << std::endl; ShowWindow(*it,SW_HIDE); }
    }
}
void show(const unsigned int pid){
    std::list<HWND> windowList = getWindowbyPID(pid);
    for(std::list<HWND>::iterator it = windowList.begin(); it != windowList.end(); it++){
        //if(IsWindowVisible(*it)){ ShowWindow(*it,SW_SHOW); }
        WINDOWPLACEMENT wp;
        wp.length = sizeof(wp);
        wp.showCmd = 0; // Just to clear showCmd before reading.
        std::cout << *it << std::endl;
        std::cout << "BEFORE: " << wp.showCmd << std::endl;
        GetWindowPlacement(*it,&wp);
        std::cout << "AFTER: " << wp.showCmd << std::endl;
    }
}
Output of one example run (the pid belongs to notepad.exe) after hiding hwnd#00060CD0:
003D0642
BEFORE: 0
AFTER: 1
000B0682
BEFORE: 0
AFTER: 1
00060CD0
BEFORE: 0
AFTER: 1
I am trying to use GetWindowPlacement to differentiate between windows that were always hidden and windows that were previously shown, but showCmd never seems to be 0, even for windows that were always hidden.

There are only three possible values of the showCmd after calling GetWindowPlacement.
From the MSDN documentation on GetWindowPlacement (emphasis mine):
The flags member of WINDOWPLACEMENT retrieved by this function is always zero. If the window identified by the hWnd parameter is maximized, the showCmd member is SW_SHOWMAXIMIZED. If the window is minimized, showCmd is SW_SHOWMINIMIZED. Otherwise, it is SW_SHOWNORMAL.
Therefore, it appears that the window you're asking for placement info on is in a state other than maximized or minimized when you're calling GetWindowPlacement.
I'd suspect what you're actually looking for is IsWindowVisible.
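For illustration, a minimal sketch of checking both pieces of state (this fragment is mine, not from the original answer; hwnd stands for whichever window handle you are inspecting):

// showCmd can only report the three documented states here, so
// visibility has to be queried separately via IsWindowVisible.
WINDOWPLACEMENT wp;
wp.length = sizeof(wp);
if (GetWindowPlacement(hwnd, &wp)) {
    bool maximized = (wp.showCmd == SW_SHOWMAXIMIZED);
    bool minimized = (wp.showCmd == SW_SHOWMINIMIZED);
    bool visible   = (IsWindowVisible(hwnd) != FALSE); // hidden vs. shown
}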

Related

How does Qt enumerate screens?

Today I found that the order in which Qt enumerates screens (QGuiApplication::screens) differs from the order used by Windows (EnumDisplayMonitors).
What is the logic behind this difference, so that it can be taken into account when mixing the Windows API and Qt? For example, when required to show something on screen #2 (using the Windows enumeration).
Here is the code I've used to test (also available on GitHub):
#include <qapplication.h>
#include <qdebug.h>
#include <qscreen.h>
#include <Windows.h>
#include <iostream>

std::ostream& operator<<(std::ostream& of, const RECT& rect)
{
    return of << "RECT(" << rect.left << ", " << rect.top << " " << (rect.right - rect.left) << "x" << (rect.bottom - rect.top) << ")";
}

BOOL CALLBACK printMonitorInfoByHandle(HMONITOR hMonitor, HDC hdcMonitor, LPRECT lprcMonitor, LPARAM dwData)
{
    auto index = (int*)dwData;
    std::cout << ++*index << " " << *lprcMonitor << std::endl;
    return TRUE;
}

int main(int argc, char* argv[])
{
    QApplication a(argc, argv);
    qDebug() << "*** Qt screens ***";
    const auto screens = qApp->screens();
    for (int ii = 0; ii < screens.count(); ++ii) {
        qDebug() << ii + 1 << screens[ii]->geometry();
    }
    qDebug() << "*** Windows monitors ***";
    int index = 0;
    EnumDisplayMonitors(NULL, NULL, printMonitorInfoByHandle, (LPARAM)&index);
    return 0;
}
My display configuration is, from left to right: 2 (1280x1024), 3 (1920x1080), 1 (1920x1080), with screen 3 being my primary.
Results:
*** Qt screens ***
1 QRect(0,0 1920x1080)
2 QRect(1920,233 1920x1080)
3 QRect(-1280,47 1280x1024)
*** Windows monitors ***
1 RECT(1920, 233 1920x1080)
2 RECT(-1280, 47 1280x1024)
3 RECT(0, 0 1920x1080)
As far as I've been able to see on different systems, EnumDisplayMonitors returns monitors in the order defined in the Display Settings, while QGuiApplication::screens always puts the primary screen in the first position (in fact, QGuiApplication::primaryScreen simply returns the first element).
Looking at the source code, on Windows Qt also uses the EnumDisplayMonitors function but essentially moves the primary screen to the first position (it inserts the primary screen at the front of the list and appends every other monitor at the end).
So the primary screen will be in the first position, screens with a lower index than the primary screen's will be shifted one position, and the rest will keep their index.
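To illustrate that reordering, here is a hypothetical helper (not Qt code; it assumes, as described above, that non-primary monitors keep their enumeration order):

// Map a 0-based EnumDisplayMonitors index to the corresponding index
// in QGuiApplication::screens(), where p is the 0-based Windows index
// of the primary monitor.
int qtIndexFromWindowsIndex(int winIndex, int p)
{
    if (winIndex == p) return 0;            // primary moves to the front
    if (winIndex < p)  return winIndex + 1; // earlier screens shift by one
    return winIndex;                        // later screens keep their index
}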
As a side note, taken from comments in the code: if the primary screen changes during the execution of the application, Qt is not able to report the change.
Note that a side effect of this policy is that there is no way to change the primary screen reported by Qt, short of deleting all existing screens and adding them again whenever the primary screen changes.
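For the opposite direction, finding the QScreen that corresponds to a monitor obtained from the Windows API, one option is to match geometries. A sketch, assuming 100% DPI scaling (QScreen::geometry is in device-independent pixels, so the values can differ from the raw RECT on scaled displays):

// Find the QScreen whose geometry matches a RECT reported by
// EnumDisplayMonitors; returns nullptr if none matches.
QScreen* screenForRect(const RECT& r)
{
    const QRect target(r.left, r.top, r.right - r.left, r.bottom - r.top);
    for (QScreen* s : QGuiApplication::screens()) {
        if (s->geometry() == target)
            return s;
    }
    return nullptr;
}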

How do I perform a 'simplified experiment' with Nvidia's Performance Toolkit?

I am trying to use Nvidia's performance toolkit to identify the performance bottleneck in an OpenGL application. Based on the user guide and the samples provided, I have arrived at this code:
// ********************************************************
// Set up NVPMAPI
#define NVPM_INITGUID
#include "NvPmApi.Manager.h"

// Simple singleton implementation for grabbing the NvPmApi
static NvPmApiManager S_NVPMManager;
NvPmApiManager *GetNvPmApiManager() { return &S_NVPMManager; }
const NvPmApi* getNvPmApi() { return S_NVPMManager.Api(); }

void MyApp::profiledRender()
{
    NVPMRESULT nvResult;
    nvResult = GetNvPmApiManager()->Construct(L"C:\\Program Files\\PerfKit_4.1.0.14260\\bin\\win7_x64\\NvPmApi.Core.dll");
    if (nvResult != S_OK)
    {
        return; // This is an error condition
    }
    auto api = getNvPmApi();
    nvResult = api->Init();
    if ((nvResult) != NVPM_OK)
    {
        return; // This is an error condition
    }
    NVPMContext context;
    nvResult = api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context);
    if (nvResult != NVPM_OK)
    {
        return; // This is an error condition
    }
    api->AddCounterByName(context, "GPU Bottleneck");
    NVPMUINT nCount(1);
    api->BeginExperiment(context, &nCount);
    for (NVPMUINT i = 0; i < nCount; i++) {
        api->BeginPass(context, i);
        render();
        glFinish();
        api->EndPass(context, i);
    }
    api->EndExperiment(context);
    NVPMUINT64 bottleneckUnitId(42424242);
    NVPMUINT64 bottleneckCycles(42424242);
    api->GetCounterValueByName(context, "GPU Bottleneck", 0, &bottleneckUnitId, &bottleneckCycles);
    char name[256] = { 0 };
    NVPMUINT length = 0;
    api->GetCounterName(bottleneckUnitId, name, &length);
    NVPMUINT64 counterValue(42424242), counterCycles(42424242);
    api->GetCounterValue(context, bottleneckUnitId, 0, &counterValue, &counterCycles);
    std::cout << "--- NVIDIA Performance Kit GPU profile ---\n"
        "bottleneckUnitId: " << bottleneckUnitId
        << ", bottleneckCycles: " << bottleneckCycles
        << ", unit name: " << name
        << ", unit value: " << counterValue
        << ", unit cycles: " << counterCycles
        << std::endl;
}
However, the printed output shows that all of my integer values have been left unmodified:
--- NVIDIA Performance Kit GPU profile ---
bottleneckUnitId: 42424242, bottleneckCycles: 42424242, unit name: , unit value: 42424242, unit cycles: 42424242
I am in a valid GL context when calling profiledRender, and while the cast in api->CreateContextFromOGLContext((uint64_t)::wglGetCurrentContext(), &context); looks a tiny bit dodgy, it does return an OK result (whereas passing 0 for the context returns a not-OK result, and putting in a random number causes an access violation).
This is built against Cinder 0.8.6 running in x64 on Windows 8.1. OpenGL 4.4, GeForce GT 750M.
OK, some more persistent analysis of the API return codes and further examination of the manual revealed the problems.
The render call needs to be wrapped in api->BeginObject(context, 0); and api->EndObject(context, 0);. That gives us a bottleneckUnitId.
It appears that the length pointer passed to GetCounterName both indicates the char array size as input and receives the string length as output. This is kind of obvious on reflection, but it is a mistake copied from the user guide example. Fixing it gives us the name of the bottleneck.
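For reference, a sketch of the corrected portion of the code above (the call shapes follow the question's code; the exact NVPMAPI semantics are assumed from the findings just described):

// Fix 1: wrap the draw calls in BeginObject/EndObject inside each pass.
api->BeginExperiment(context, &nCount);
for (NVPMUINT i = 0; i < nCount; i++) {
    api->BeginPass(context, i);
    api->BeginObject(context, 0);
    render();
    glFinish();
    api->EndObject(context, 0);
    api->EndPass(context, i);
}
api->EndExperiment(context);

// Fix 2: pass the buffer size in; GetCounterName overwrites it with
// the actual string length on return.
char name[256] = { 0 };
NVPMUINT length = sizeof(name);
api->GetCounterName(bottleneckUnitId, name, &length);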

How to locate where an error arises in a "PackageManager.AddPackageAsync" method call?

I'm debugging an example app that deploys a Windows Metro app package (an ".appx" file). It calls the WinRT method PackageManager.AddPackageAsync, which fails with this detailed error text (retrieved from the call's return value after the operation finished):
error 0x80070002: Windows cannot register the package because of an
internal error or low memory.
My goal is to find exactly where this error arises inside the WinRT call. I think the best way to achieve this is to find where the error code is set. I've done this before with the old, simple Win32 API, but with this new, complex COM-based async interface I got completely lost.
The example project files can be found at. Its main function looks like this:
[MTAThread]
int __cdecl main(Platform::Array<String^>^ args)
{
    wcout << L"Copyright (c) Microsoft Corporation. All rights reserved." << endl;
    wcout << L"AddPackage sample" << endl << endl;
    if (args->Length < 2)
    {
        wcout << "Usage: AddPackageSample.exe packageUri" << endl;
        return 1;
    }
    HANDLE completedEvent = nullptr;
    int returnValue = 0;
    String^ inputPackageUri = args[1];
    try
    {
        completedEvent = CreateEventEx(nullptr, nullptr, CREATE_EVENT_MANUAL_RESET, EVENT_ALL_ACCESS);
        if (completedEvent == nullptr)
        {
            wcout << L"CreateEvent Failed, error code=" << GetLastError() << endl;
            returnValue = 1;
        }
        else
        {
            auto packageUri = ref new Uri(inputPackageUri);
            auto packageManager = ref new PackageManager();
            auto deploymentOperation = packageManager->AddPackageAsync(packageUri, nullptr, DeploymentOptions::None);
            deploymentOperation->Completed =
                ref new AsyncOperationWithProgressCompletedHandler<DeploymentResult^, DeploymentProgress>(
                    [&completedEvent](IAsyncOperationWithProgress<DeploymentResult^, DeploymentProgress>^ operation, AsyncStatus)
                    {
                        SetEvent(completedEvent);
                    });
            wcout << L"Installing package " << inputPackageUri->Data() << endl;
            wcout << L"Waiting for installation to complete..." << endl;
            WaitForSingleObject(completedEvent, INFINITE);
            if (deploymentOperation->Status == AsyncStatus::Error) // Here I decided to track "deploymentOperation->Status"
            {
                auto deploymentResult = deploymentOperation->GetResults();
                wcout << L"Installation Error: " << deploymentOperation->ErrorCode.Value << endl;
                wcout << L"Detailed Error Text: " << deploymentResult->ErrorText->Data() << endl;
            }
            else if (deploymentOperation->Status == AsyncStatus::Canceled)
            {
                wcout << L"Installation Canceled" << endl;
            }
            else if (deploymentOperation->Status == AsyncStatus::Completed)
            {
                wcout << L"Installation succeeded!" << endl;
            }
        }
    }
    catch (Exception^ ex)
    {
        wcout << L"AddPackageSample failed, error message: " << ex->ToString()->Data() << endl;
        returnValue = 1;
    }
    if (completedEvent != nullptr)
        CloseHandle(completedEvent);
    return returnValue;
}
Since the operation (PackageManager.AddPackageAsync) is asynchronous and I'm not really sure how to trace code executed on another thread, I decided to search for where "deploymentOperation->Status" (which turned out to be a function call rather than a plain variable) is set to AsyncStatus::Error (the integer 3). After going through a LOT of code and function calls, I found that whether this value gets set (it seems not to matter, but this function certainly retrieves the operation's error data) depends on a member of a variable that is initialized, through its pointer, by an undocumented ntdll function named "NtGetCompleteWnfStateSubscription", itself called from ntdll. The structure of the variable member I mentioned is the following:
struct Unknown
{
    AsyncStatus /*? 32-bit long*/ dw0; // set to 3 during this operation, matching the AsyncStatus::Error enum value
    DWORD dw4; // was set to 0x5F
    DWORD dw8; // was set to 0x80073CF6 (some generic error)
};
The code in ntdll where the "NtGetCompleteWnfStateSubscription" function is called, initializing the variable whose member has this structure type (assembly, generated by IDA Pro):
ntdll.dll:77906200 loc_77906200: ; CODE XREF: ntdll.dll:ntdll_strncpy_s+1A3j
ntdll.dll:77906200 push 1030h
ntdll.dll:77906205 push esi ; the variable pointer
ntdll.dll:77906206 push edi
ntdll.dll:77906207 push eax
ntdll.dll:77906208 lea eax, [ebp-0Ch]
ntdll.dll:7790620B push eax
ntdll.dll:7790620C push ebx
ntdll.dll:7790620D call near ptr ntdll_NtGetCompleteWnfStateSubscription
ntdll.dll:77906212 test eax, eax ; now "[esi+2Ch] + esi" contains data from the "Unknown" structure and contains the operation error data
ntdll.dll:77906214 jns short loc_7790623F
The code above is actually called three times, but with the same "esi" pointer. So now my question is how to find where the error code is set so that it can be retrieved through this function. I tried breaking on most of the functions in ntdll that look like they do this, but without success. I can't debug NtGetCompleteWnfStateSubscription for some strange reason. Any suggestions will be helpful. I'm using IDA Pro 6.5, VS 2013 U1, Windows 8.1 x64 U1.
EDIT: If you don't want to bother with the problem-specific details, my generic question is: how do I locate where a WinRT async method sets the "IAsyncInfo::Status" property, or what function or method is called when an error arises while executing it?

MS CryptoAPI doesn't work on Windows XP with CryptAcquireContext()

I wrote some code using the Microsoft CryptoAPI to calculate a SHA-1 hash and got the compiled exe working on Windows 7, Windows Server 2008 and Windows Server 2003. However, when I run it under Windows XP SP3, it does not work.
I narrowed down the failure to the CryptAcquireContext() call.
I did notice that a previous post talked about XP's faulty provider naming ("… (Prototype)") and that it must be accounted for by using the WinXP-specific macro MS_ENH_RSA_AES_PROV_XP.
I made the XP-specific code modifications and it still doesn't work. (bResult returns 0/false on Windows XP; on all other platforms it returns 1/true.)
I checked MS_ENH_RSA_AES_PROV_XP against the actual key and string values I see in regedit.exe, so everything looks set up to work, but no success.
Have I overlooked something to make it work on Windows XP?
I've pasted the shortest possible example to illustrate the issue. I used VS2010 C++.
// based on examples from http://msdn.microsoft.com/en-us/library/ms867086.aspx
#include "windows.h"
#include "wincrypt.h"
#include <iostream>
#include <iomanip> // for setw()

int main()
{
    BOOL bResult;
    HCRYPTPROV hProv;
    // Attempt to acquire a handle to the default key container.
    bResult = CryptAcquireContext(
        &hProv,        // Variable to hold returned handle.
        NULL,          // Use default key container.
        MS_DEF_PROV,   // Use default CSP.
        PROV_RSA_FULL, // Type of provider to acquire.
        0);            // No special action.
    std::cout << "line: " << std::setw(4) << __LINE__ << "; " << "bResult = " << bResult << std::endl;
    if (! bResult) { // try the Windows XP provider name
        bResult = CryptAcquireContext(
            &hProv,                 // Variable to hold returned handle.
            NULL,                   // Use default key container.
            MS_ENH_RSA_AES_PROV_XP, // Windows XP specific instead of the default CSP.
            PROV_RSA_AES,           // Type of provider to acquire.
            0);                     // No special action.
        std::cout << "line: " << std::setw(4) << __LINE__ << "; " << "bResult = " << bResult << std::endl;
    }
    if (bResult)
        CryptReleaseContext(hProv, 0);
    return 0;
}
The Windows 7 run succeeds; the Windows XP run fails (output screenshots omitted).
In your CryptAcquireContext code, it appears you are missing the parameter that requests a context without a specific key container set. You need to pass the CRYPT_VERIFYCONTEXT flag to CryptAcquireContext.
Windows 7 might be working around this.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa379886(v=vs.85).aspx
For further diagnosis, the results of GetLastError() would be needed.
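A minimal sketch of the suggested change (hashing does not need a private key container, which is exactly what CRYPT_VERIFYCONTEXT avoids opening):

// Acquire an ephemeral, container-less context; sufficient for
// hash computations such as SHA-1.
HCRYPTPROV hProv = 0;
BOOL bResult = CryptAcquireContext(
    &hProv,
    NULL,                   // no key container name
    MS_ENH_RSA_AES_PROV_XP, // XP-specific provider name, as above
    PROV_RSA_AES,
    CRYPT_VERIFYCONTEXT);   // verify-only context, no container access
if (!bResult)
    std::cout << "CryptAcquireContext failed, GetLastError() = 0x"
              << std::hex << GetLastError() << std::endl;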

C++ OIS Segfault When Almost Identical Functions Work

See important edit below!
Hi all, I'm having trouble figuring out why this segfault is happening. I'm using the Ogre and OIS libraries. Here is the code that causes it:
bool Troll::Application::keyPressed(const OIS::KeyEvent& event) {
    //TODO: Segfault here!
    Troll::State* state = mStateManager->peek();
    state->key_pressed(event); //This causes the SEGFAULT!!!
    return true;
};
And the key_pressed function:
void Troll::RootState::key_pressed(const OIS::KeyEvent& event) {
    std::cout << "You got here" << std::endl; //this isn't printed!
    std::cout << "Key Pressed: " << event.key << std::endl;
};
Because the segfault happens in key_pressed but the first line of key_pressed isn't executed, I can only guess that passing the const OIS::KeyEvent& is what causes it.
And the weird thing is that I have three other functions that are almost identical (but for the mouse) which work perfectly.
bool Troll::Application::mouseMoved(const OIS::MouseEvent& event) {
    mStateManager->peek()->mouse_moved(event);
    return true;
};

void Troll::RootState::mouse_moved(const OIS::MouseEvent& event) {
    std::cout << "Mouse Moved: rel x = " << event.state.X.rel << std::endl;
    std::cout << "             rel y = " << event.state.Y.rel << std::endl;
    std::cout << "             abs x = " << event.state.X.abs << std::endl;
    std::cout << "             abs y = " << event.state.Y.abs << std::endl;
};
I'm creating a basic state system so I can start writing applications for Ogre3D using the OIS library for input. I have an Application class which acts as an input listener for the mouse and keyboard. Here is how it's set up...
void Troll::Application::setup_ois() {
    //create a parameter list for holding the window handle data
    OIS::ParamList pl;
    size_t windowHnd = 0;
    //we need the window handle to setup OIS
    std::ostringstream windowHndStr;
    mWindow->getCustomAttribute("WINDOW", &windowHnd);
    windowHndStr << windowHnd;
    //add the handle data into the parameter list
    pl.insert(std::make_pair(std::string("WINDOW"), windowHndStr.str()));
    //create the input system with the parameter list (containing handle data)
    mInputManager = OIS::InputManager::createInputSystem(pl);
    //true in createInputObject means we want buffered input
    mKeyboard = static_cast<OIS::Keyboard*>(mInputManager->createInputObject( OIS::OISKeyboard, true ));
    mMouse = static_cast<OIS::Mouse*>(mInputManager->createInputObject( OIS::OISMouse, true ));
    //set this as an event handler
    mKeyboard->setEventCallback(this);
    mMouse->setEventCallback(this);
};
The Application class relays the mouse moves, button presses and key strokes to the Troll::State (the framework I'm making is called Troll) at the top of the state stack, which lives inside the Troll::StateManager (which is merely a wrapper around an std::stack with memory allocation and startup() and shutdown() calls).
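For context, a hypothetical sketch of such a wrapper; the real StateManager code is not shown in the question, so this is only what the description implies:

#include <stack>

// Hypothetical: a thin wrapper around std::stack, as described above.
class StateManager {
public:
    Troll::State* peek() {
        return mStates.empty() ? 0 : mStates.top(); // null when empty
    }
private:
    std::stack<Troll::State*> mStates; // startup()/shutdown() handling omitted
};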
Sorry for any confusion caused by the mixed naming conventions; for some reason I decided to use_underscores_for_some_reason and I haven't got round to changing it. Thanks in advance, ell. I hope you can solve my problem, and please let me know if I haven't given enough detail.
EDIT:
After recently upgrading to Ubuntu Natty Narwhal, I cannot get the debugger to work properly; it just crashes the computer. I use Code::Blocks and I don't have a clue how to use a debugger or compiler outside the IDE (sad, I know, but I'll get round to learning someday). So sorry, I can't use a debugger.
EDIT:
In response to GMan's comment: even if I check for null, I still get segfaults.
bool Troll::Application::keyPressed(const OIS::KeyEvent& event) {
    //TODO: Segfault here!
    Troll::State* state = mStateManager->peek();
    if(state == 0) {
        std::cout << "State is null!" << std::endl;
    }
    state->key_pressed(event);
    return true;
};
Although I'm not sure that's the correct way to check for null? Also, other methods using peek() work correctly. Thanks again! :)
Important Edit:
It seems that it is in fact the peek function that is causing trouble, but only when called from the keyPressed function. I discovered this by adding a parameter to peek() so that it would print the address of the state object it returns as well as a message. By setting the message parameter to the name of the function from which peek() is called, I got these results:
Root state is: 0x8fdd470
Peeking state... 0x8fdd470 from: Application::frameRenderingQueued()
Peeking state... 0x8fdd470 from: Application::mouseMoved
Peeking state... 0x8fdd470 from: Application::frameRenderingQueued()
Peeking state... 0x8fdd470 from: Application::frameRenderingQueued()
Peeking state... 0x8fdd470 from: Application::frameRenderingQueued()
Peeking state... 0x936cf88 from: Application::keyPressed
Segmentation fault
Notice that when the keyPressed function calls the peek method, a different address is shown. I cannot see why a different address is returned only when keyPressed calls peek(). Somebody please help me with this!
What happens when you check for mStateManager being NULL, and for NULL being returned from mStateManager->peek()?
bool Troll::Application::keyPressed(const OIS::KeyEvent& event) {
    if (mStateManager == NULL) {
        //! set breakpoint on next line
        std::cout << "mStateManager is NULL, returning false" << std::endl;
        return false;
    }
    std::cout << "about to call peek" << std::endl;
    if (Troll::State* state = mStateManager->peek())
    {
        std::cout << "about to call key_pressed" << std::endl;
        state->key_pressed(event); //Does this still cause a SEGFAULT?
        std::cout << "back from key_pressed" << std::endl;
        return true;
    }
    std::cout << "mStateManager->peek() returned NULL, returning false" << std::endl;
    return false;
};
EDIT: I edited the code to print each branch, so the path taken can be traced.