Screen Rotation on Pocket PC - C++

I am developing an application for Pocket PC which should run in landscape mode.
I wrote the function SetScreenOrientation(int angle), which rotates the screen. This function is called on application start and on application close. I want to change the screen orientation when I minimize/maximize the application as well. To do this I edited the following function:
void CMainFrame::OnSize(UINT nType, int cx, int cy)
{
    RECT r;
    GetWindowRect(&r);

    RECT rstatus;
    rstatus.left = 0;
    rstatus.top = 0;
    rstatus.right = r.right;
    rstatus.bottom = TOOLBAR_HEIGHT;
    m_wndStatus.MoveWindow(&rstatus, TRUE);

    RECT rcamera;
    rcamera.left = 0;
    rcamera.top = 0;
    rcamera.right = r.right;
    rcamera.bottom = r.bottom - TOOLBAR_HEIGHT;
    m_wndCameraView.MoveWindow(&rcamera, TRUE);

    if(nType == SIZE_MAXIMIZED)
    {
        CScreenOrientation::SetScreenOrientation(270);
    }
    if(nType == SIZE_MINIMIZED)
    {
        CScreenOrientation::SetScreenOrientation(0);
    }
}
The problem is that when I minimize the application the function is executed more than once, so the screen first rotates back to 0 degrees and then rotates to 270 degrees again.
While debugging I can see that the second time the function is executed, the following piece of wincore code runs:
BOOL CWnd::OnWndMsg(UINT message, WPARAM wParam, LPARAM lParam, LRESULT* pResult)
{
    ...
    switch (lpEntry->nSig)
    {
    ...
    case AfxSig_v_u_ii:
        (this->*mmf.pfn_v_u_i_i)(static_cast<UINT>(wParam), LOWORD(lParam), HIWORD(lParam));
        break;
    ...
    }
}
Does anyone know any other way to set the screen orientation on application minimize/maximize or any trick that could prevent multiple function execution?

For one thing, SetScreenOrientation is likely to give you another OnSize notification, so you want to detect the recursive call and do nothing when that happens.
More importantly, how do you know what orientation the user really wants? When your application starts up you can check the orientation and save it. But if the user changed the orientation while your application happened to be running, they won't be happy when you change it back. Maybe you can watch for notifications of system settings changes and detect whether the user changed the orientation themselves.
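A minimal sketch of such a guard, reusing CMainFrame::OnSize and CScreenOrientation::SetScreenOrientation from the question; the static flag is an illustrative addition and assumes the extra OnSize notification arrives re-entrantly while SetScreenOrientation is still on the stack:

void CMainFrame::OnSize(UINT nType, int cx, int cy)
{
    // Guard against the OnSize notification that SetScreenOrientation itself
    // triggers; without it the handler runs twice and the screen rotates back.
    static bool s_changingOrientation = false;
    if (s_changingOrientation)
        return;

    // ... existing layout code for m_wndStatus and m_wndCameraView ...

    if (nType == SIZE_MAXIMIZED || nType == SIZE_MINIMIZED)
    {
        s_changingOrientation = true;
        CScreenOrientation::SetScreenOrientation(nType == SIZE_MAXIMIZED ? 270 : 0);
        s_changingOrientation = false;
    }
}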

Related

Win32 detect if window is maximized/docked to half screen (Win-key + Left/Right)

I have a classic Win32 API (C++) application and need to detect whether the window is docked to the left/right half of the screen.
The background of the question is that the window only resizes in grid steps, let's say 32 pixels. In full screen the program detects that state, allows the size to match the full screen, and pads the excess space. On Windows 8 and later I would like to do the same for docked windows instead of leaving borders as it does now (because the size snaps to a multiple of 32 pixels).
With the function GetWindowPlacement() you can retrieve the normal window rectangle, using the rcNormalPosition member of WINDOWPLACEMENT. Then compare the normal rectangle to the actual window rectangle. If they don't match, the window is most likely in a docked state.
Example:
bool IsDockedToMonitor(HWND hWnd)
{
    WINDOWPLACEMENT placement = {sizeof(WINDOWPLACEMENT)};
    GetWindowPlacement(hWnd, &placement);
    RECT rc;
    GetWindowRect(hWnd, &rc);
    return placement.showCmd == SW_SHOWNORMAL
        && (rc.left != placement.rcNormalPosition.left ||
            rc.top != placement.rcNormalPosition.top ||
            rc.right != placement.rcNormalPosition.right ||
            rc.bottom != placement.rcNormalPosition.bottom);
}
Note that this solution is not reliable 100% of the time. There is a slim chance that the normal rectangle and the current window rectangle match even when the window is docked to the side of the monitor.
The Aero Snap feature is built into the Shell, not the window manager. As such, there is no particular window style or flag that indicates the docked state. The Shell simply repositions windows in response to certain actions (and internally records the state). It does so in a way that is indistinguishable from manually repositioning a window with the mouse or keyboard.
You cannot reliably determine whether a window is docked to the left or right of the screen. There is no particular message sent by the Shell, nor is a window's size and position relative to the working area a sufficient indicator.
What you are trying to accomplish isn't possible. You will have to implement a solution that doesn't require the information that isn't available. One such implementation would be to always use padding for window sizes that don't allow the entire client area to be used. Another would be to implement the opposite: allow resizing to any size, unless you know the user is manually resizing the window. You can determine the latter by handling the WM_SIZING message.
In addition to what IInspectable has already mentioned, there is another way to determine this information and act accordingly.
Wait for a WM_WINDOWPOSCHANGED message and read its x, y, cx, and cy values from the WINDOWPOS pointer stored in lParam.
Get a handle to the current monitor on which the window is placed by calling MonitorFromWindow.
Create a MONITORINFO variable and set its cbSize field to sizeof(MONITORINFO).
Use the monitor handle and the address of your MONITORINFO variable to call GetMonitorInfo.
Read the rcWork value from your MONITORINFO variable.
rcWork.top == WINDOWPOS.y && rcWork.bottom == (WINDOWPOS.y + WINDOWPOS.cy) && rcWork.left == WINDOWPOS.x - the window is "docked" to the left
rcWork.top == WINDOWPOS.y && rcWork.bottom == (WINDOWPOS.y + WINDOWPOS.cy) && rcWork.right == (WINDOWPOS.x + WINDOWPOS.cx) - the window is "docked" to the right
rcWork.top == WINDOWPOS.y && rcWork.left == WINDOWPOS.x && rcWork.right == (WINDOWPOS.x + WINDOWPOS.cx) - the window is "docked" to the top
rcWork.bottom == (WINDOWPOS.y + WINDOWPOS.cy) && rcWork.left == WINDOWPOS.x && rcWork.right == (WINDOWPOS.x + WINDOWPOS.cx) - the window is "docked" to the bottom
You say you already have logic to determine if the window is fullscreen (do you mean fullscreen or maximized?), but effective maximization can be determined if left == x && top == y && right == x + cx && bottom == y + cy.
Here is an MSDN example of something similar.
Note that it may be more desirable to cache the MONITORINFO values so you don't need to call GetMonitorInfo every time the window is repositioned.
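A window-procedure sketch that combines those steps; hwnd is the window handle, and the dockedLeft/dockedRight names and the half-screen checks are illustrative rather than the poster's exact code:

case WM_WINDOWPOSCHANGED:
{
    const WINDOWPOS* wp = (const WINDOWPOS*)lParam;

    // Work area of the monitor the window currently occupies.
    MONITORINFO mi = { sizeof(MONITORINFO) };
    GetMonitorInfo(MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST), &mi);
    const RECT& wa = mi.rcWork;

    // Full work-area height and flush against the left or right edge.
    bool fullHeight  = (wa.top == wp->y) && (wa.bottom == wp->y + wp->cy);
    bool dockedLeft  = fullHeight && (wa.left == wp->x);
    bool dockedRight = fullHeight && (wa.right == wp->x + wp->cx);

    // dockedLeft / dockedRight can now drive the padding logic.
    break;
}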
If you only want this to apply when a user does NOT manually resize the window, here is a contrived example of a possible way to do so:
LRESULT CALLBACK windowProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
{
    static bool userSizing = false;
    switch (msg)
    {
    // could also catch WM_ENTERSIZEMOVE here, but this will trigger on
    // moves as well as sizes
    case WM_SIZING:
        userSizing = true;
        break;
    case WM_EXITSIZEMOVE:
        userSizing = false;
        break;
    case WM_WINDOWPOSCHANGED:
        if (userSizing)
        {
            break;
        }
        // do logic to check to see if the window is sized in a "docked"
        // manner here
        break;
    // handle other window messages ...
    }
    return DefWindowProc(hwnd, msg, wparam, lparam);
}

Search icon in edit control overlapped by input area

I am trying to make a search edit control in MFC that has an icon displayed in the control window all the time (regardless of the state and text of the control). I wrote something like this many years ago and it worked very well, but the code no longer works on Windows 7 and newer (maybe even Vista, but I did not try that). What happens is that the image shown in the control is overlapped by the input area (see the picture below).
The idea behind the code:
have a class derived from CEdit (that handles painting in OnPaint)
the icon is displayed on the right and the edit area is shrunk based on the size of the icon
resizing is done differently for single-line and multiline edits. For single line I call SetMargins and for multiline edits I call SetRect.
this edit resizing is applied in PreSubclassWindow(), OnSize() and OnSetFont()
This is how the edit input size is applied:
void CSymbolEdit::RecalcLayout()
{
    int width = GetSystemMetrics( SM_CXSMICON );
    if(m_hSymbolIcon)
    {
        if (GetStyle() & ES_MULTILINE)
        {
            CRect editRect;
            GetRect(&editRect);
            editRect.right -= (width + 6);
            SetRect(&editRect);
        }
        else
        {
            DWORD dwMargins = GetMargins();
            SetMargins(LOWORD(dwMargins), width + 6);
        }
    }
}
The following image shows the problem with single-line edits (the images have been zoomed in for a better view). The yellow background is for highlighting purposes only; in the real code I am using the COLOR_WINDOW system color. You can see that when the single-line edit has text and has the input focus, the image is painted over. This does not happen with the multiline edit, where SetRect correctly sets the formatting rectangle.
I have tried using ExcludeClipRect to remove the area of the edit where the image is being displayed.
CRect rc;
GetClientRect(rc);
CPaintDC dc(this);
ExcludeClipRect(dc.m_hDC, rc.right - width - 6, rc.top, rc.right, rc.bottom);
DWORD dwMargins = GetMargins();
SetMargins(LOWORD(dwMargins), width + 6);
This does not seem to have any effect on the result.
For reference, this is the painting method; it was written years ago and used to work well on Windows XP, but it is no longer correct.
void CSymbolEdit::OnPaint()
{
    CPaintDC dc(this);
    CRect rect;
    GetClientRect( &rect );

    // Clearing the background
    dc.FillSolidRect( rect, GetSysColor(COLOR_WINDOW) );

    DWORD dwMargins = GetMargins();
    if( m_hSymbolIcon )
    {
        // Drawing the icon
        int width = GetSystemMetrics( SM_CXSMICON );
        int height = GetSystemMetrics( SM_CYSMICON );
        ::DrawIconEx(
            dc.m_hDC,
            rect.right - width - 1,
            1,
            m_hSymbolIcon,
            width,
            height,
            0,
            NULL,
            DI_NORMAL);
        rect.left += LOWORD(dwMargins) + 1;
        rect.right -= (width + 7);
    }
    else
    {
        rect.left += (LOWORD(dwMargins) + 1);
        rect.right -= (HIWORD(dwMargins) + 1);
    }

    CString text;
    GetWindowText(text);
    CFont* oldFont = NULL;
    rect.top += 1;

    if(text.GetLength() == 0)
    {
        if(this != GetFocus() && m_strPromptText.GetLength() > 0)
        {
            oldFont = dc.SelectObject(&m_fontPrompt);
            COLORREF color = dc.GetTextColor();
            dc.SetTextColor(m_colorPromptText);
            dc.DrawText(m_strPromptText, rect, DT_LEFT|DT_SINGLELINE|DT_EDITCONTROL);
            dc.SetTextColor(color);
            dc.SelectObject(oldFont);
        }
    }
    else
    {
        if(GetStyle() & ES_MULTILINE)
            CEdit::OnPaint();
        else
        {
            oldFont = dc.SelectObject(GetFont());
            dc.DrawText(text, rect, DT_SINGLELINE | DT_INTERNAL | DT_EDITCONTROL);
            dc.SelectObject(oldFont);
        }
    }
}
I have looked at other implementations of similar edit controls and they all have the same fault now.
Obviously, the question is how do I exclude the image area from the input area of the control?
I think what's going on is that CPaintDC calls BeginPaint(), which sends a WM_ERASEBKGND to the edit box. I wasn't able to ignore it, so I guess it's being sent to maybe an internal STATIC window? Not sure.
Calling ExcludeClipRect() in your OnPaint() handler won't do anything because EDIT will reset the clipping region to the whole client area in either BeginPaint() or its own WM_PAINT handler.
However, EDIT sends a WM_CTLCOLOREDIT to its parent just before painting itself, but seemingly after setting the clipping region, so you can call ExcludeClipRect() there. This sounds like an implementation detail which may change with future versions of the common controls; indeed, it seems to have done so already.
I did a quick test without MFC on Windows 7, here's my window procedure:
LRESULT CALLBACK wnd_proc(HWND h, UINT m, WPARAM wp, LPARAM lp)
{
    switch (m)
    {
    case WM_CTLCOLOREDIT:
    {
        const auto dc = (HDC)wp;
        const auto hwnd = (HWND)lp;

        RECT r;
        GetClientRect(hwnd, &r);

        // excluding the margin, but not the border; this assumes
        // a one pixel wide border
        r.left = r.right - some_margin;
        --r.right;
        ++r.top;
        --r.bottom;

        ExcludeClipRect(dc, r.left, r.top, r.right, r.bottom);
        return (LRESULT)GetStockObject(DC_BRUSH);
    }
    }
    return ::DefWindowProc(h, m, wp, lp);
}
I then subclassed the EDIT window to draw my own icon in WM_PAINT, and then forwarded the message so I didn't have to draw everything else myself.
LRESULT CALLBACK edit_wnd_proc(
    HWND h, UINT m, WPARAM wp, LPARAM lp,
    UINT_PTR id, DWORD_PTR data)
{
    switch (m)
    {
    case WM_PAINT:
    {
        const auto dc = GetDC(h);
        // draw an icon
        ReleaseDC(h, dc);
        break;
    }
    }
    return DefSubclassProc(h, m, wp, lp);
}
Note that I couldn't call BeginPaint() and EndPaint() (the equivalent of constructing a CPaintDC) in WM_PAINT because the border wouldn't get drawn. I'm guessing it has something to do with calling BeginPaint() twice (once manually, once by EDIT) and the handling of WM_ERASEBKGND. YMMV, especially with MFC.
Finally, I set the margins right after creating the EDIT:
SendMessage(
e, EM_SETMARGINS,
EC_LEFTMARGIN | EC_RIGHTMARGIN, MAKELPARAM(0, margin));
You might also have to update the margins again if the system font changes.
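If you need to refresh that margin when the font changes, a hedged sketch from the same subclass procedure (reusing the h, m, wp, lp parameters and the margin value from the snippets above; this is one possible place to do it, not a documented recipe):

case WM_SETFONT:
{
    // Let the EDIT process the new font first, then re-apply the right margin,
    // since the control may recompute its formatting rectangle here.
    const LRESULT res = DefSubclassProc(h, m, wp, lp);
    SendMessage(h, EM_SETMARGINS,
                EC_LEFTMARGIN | EC_RIGHTMARGIN, MAKELPARAM(0, margin));
    return res;
}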
Have a look at this tutorial from www.catch22.net. It gives a clear picture of how to insert a button into an edit control. Though it is a Win32 example, it can be adapted to MFC, since MFC is built on top of the Win32 APIs.
http://www.catch22.net/tuts/win32/2001-05-20-insert-buttons-into-an-edit-control/#
It uses WM_NCCALCSIZE to restrict the text area of the control.
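The core of that technique is handling WM_NCCALCSIZE in the subclassed edit control and shrinking the calculated client rectangle so the reserved strip becomes non-client area; a rough sketch along those lines (the reserved width is an assumed value, and h/m/wp/lp are just generic subclass-procedure parameters):

case WM_NCCALCSIZE:
{
    // Let the EDIT compute its normal client area (borders etc.) first.
    LRESULT res = DefSubclassProc(h, m, wp, lp);

    // Both forms of the message pass the rectangle first in lParam.
    RECT* rc = wp ? &((NCCALCSIZE_PARAMS*)lp)->rgrc[0] : (RECT*)lp;
    rc->right -= GetSystemMetrics(SM_CXSMICON) + 6; // assumed width of the reserved strip
    return res;
}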

C++ window draggable by its menu bar

So I'm using Visual C++, and I have created a draggable, borderless window. There is a toolbar along the top, and I want to be able to drag the window by that toolbar. I still want the toolbar to be functional, but I have no earthly idea how to make the window draggable by it. This is my current window (see the toolbar at the top):
And this is my current code to make it draggable:
case WM_NCHITTEST: {
    LRESULT hit = DefWindowProc(hWnd, message, wParam, lParam);
    if(hit == HTCLIENT) hit = HTCAPTION;
    return hit;
}
break;
You are on the right track with hooking WM_NCHITTEST. Now you need to change what constitutes a client hit versus a caption hit. As your code stands, clicking anywhere within the client area of the window (everything but the border) will let you drag the window, which will make interacting with your application very difficult. Instead, you should return HTCAPTION only after you have determined that the hit was within the menubar area; specifically, the area of the menubar that does not hold the File/Edit/Help buttons.
case WM_NCHITTEST: {
    LRESULT hit = DefWindowProc(hWnd, message, wParam, lParam);
    if (hit == HTCLIENT) { // The hit was somewhere in the client area. Don't know where yet.
        // Perform your test for whether the hit was in the region you would like to
        // intercept as a move, and set hit only when it is.
        // You will have to pay particular attention to whether the user is actually
        // clicking on File/Edit/Help and take care not to intercept this case.
        // hit = HTCAPTION;
    }
    return hit;
    break;
}
Some things to keep in mind here:
This can be very confusing to a user who wants to minimize, close, or move your application. Menubars do not convey to the user that you can move the window by dragging them.
If you are concerned with vertical pixels, you may consider doing what other applications on Windows are starting to do: moving the menubar functionality to a single button that is drawn in the titlebar. (See recent versions of Firefox/Opera, or Windows Explorer in Windows 8, for some ideas on moving things to the titlebar.)
In one of my applications I also wanted to make the window what I call "client area draggable". Unfortunately, the mentioned solution (replacing HTCLIENT with HTCAPTION) has a serious flaw:
Double-clicking in the client area now shows the same behaviour as double-clicking the caption (i.e. minimizing/maximizing the window)!
To solve this I did the following in my message handler (excerpt):
case WM_MOUSEMOVE:
    // Move window if we are dragging it
    if (mIsDragging) // variable: bool mIsDragging;
    {
        POINT mousePos = {GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)};
        mIsDragging = (ClientToScreen(hWnd, &mousePos) &&
                       SetWindowPos(hWnd,
                                    NULL,
                                    mDragOrigin.left + mousePos.x - mDragPos.x,
                                    mDragOrigin.top + mousePos.y - mDragPos.y,
                                    0,
                                    0,
                                    SWP_NOSIZE | SWP_NOZORDER | SWP_NOACTIVATE));
    }
    break;
case WM_LBUTTONDOWN:
    // Check if we are dragging and save the current cursor position in case
    if (wParam == MK_LBUTTON)
    {
        POINT mousePos = {GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)};
        if (ClientToScreen(hWnd, &mousePos) &&
            DragDetect(hWnd, mousePos))
        {
            // Check if the cursor is pointing to your new caption here!!!!
            mIsDragging = true;
            mDragPos = mousePos;
            GetWindowRect(hWnd, &mDragOrigin);
            SetCapture(hWnd);
        }
    }
    break;
// Remove this case when ESC key handling is not necessary
case WM_KEYDOWN:
    // Restore original window position if ESC is pressed and dragging active
    if (!mIsDragging || wParam != VK_ESCAPE)
    {
        break;
    }
    // ESC key AND dragging... we restore the original position of the window
    // and fall through to WM_LBUTTONUP as if the mouse button was released
    // (i.o.w. NO break;)
    SetWindowPos(hWnd, NULL, mDragOrigin.left, mDragOrigin.top, 0, 0,
                 SWP_NOSIZE | SWP_NOZORDER | SWP_NOACTIVATE);
case WM_LBUTTONUP:
    ReleaseCapture();
    break;
case WM_CAPTURECHANGED:
    mIsDragging = false;
    break;
The (pseudo) code omits the return values (default: 0) and the variable definitions, but it should make the procedure clear anyway. (If not, drop me a line and I'll add more or all of the code.)
PS: I just found another comprehensive description which also explains the differences between these two solutions: http://tinyurl.com/bqtyt3q

How do you create a button outside the WM_CREATE message in win32?

I'm using C++ with Win32, and GDI+ for graphics.
When I initialize a button outside WM_CREATE, specifically in a WM_TIMER message, I can't draw anything else after that one frame is drawn.
Here's a snippet of the code:
case WM_TIMER:
    RECT client;
    GetClientRect(hWnd, &client);
    client.bottom -= 100; // The bottom hundred pixels are static after I draw the first frame, so I never update them
    if(starting==true)
    {
        starting=false;
        hdc = GetDC(hWnd);
        hdcBuf = CreateCompatibleDC(hdc);
        hdcMem = CreateCompatibleDC(hdcBuf);
        hbmBackBM = CreateCompatibleBitmap(hdc, client.right, client.bottom );
        hbmOldBackBM = (HBITMAP)SelectObject(hdcBuf, hbmBackBM);

        Graphics temp(hdc);
        SolidBrush yelloworange(Color(250,225,65));
        temp.FillRectangle(&yelloworange,0,client.bottom,client.right,100); // Fill the bottom with yellow

        buttons[0]=CreateWindow("button","makereg", WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, 100, 630, 60, 20, hWnd, HMENU(IDB_MAKEREG), NULL, NULL);
        //buttons[1]=CreateWindow("button","destroyreg", WS_VISIBLE | WS_CHILD | BS_DEFPUSHBUTTON, 100, 670, 80, 20, hWnd, HMENU(IDB_MAKEREG+1), NULL, NULL);
    }
Graphics g(hdcBuf);
The first part is for double buffering, and the variables that I instantiate are global. I delete the HDCs and HBITMAPs in WM_DESTROY. starting is a global boolean that is initialized to true.
I do all of my drawing in this WM_TIMER message. If I comment out just the two lines where the buttons are created, everything runs normally. With them, it only draws out what is left in this WM_TIMER, and does not draw in the next one. All of the other drawing code is done to hdcBuf or g, and hdcBuf is then BitBlt'd onto hdc.
I tried creating the button in WM_CREATE, and then showing it in WM_TIMER, but that caused the same problem. I can't create and show the window in WM_CREATE, because otherwise it gets drawn over when I fill the bottom 100 pixels with a yellow color.
Is there a way to create and show a button outside WM_CREATE and outside WM_PAINT without crashing the rest of the code?
EDIT: Here is some of the code that stops working, in WM_TIMER:
if(mousex!=uptomousex && mousey!=uptomousey && lbuttondown==true) // this code draws a rectangle between the point where the user begins holding the left mouse button, and where the mouse is right now
{
    if(uptomousex-mousex>0 && uptomousey-mousey>0)
        g.DrawRectangle(&(p[0]), mousex, mousey, uptomousex-mousex, uptomousey-mousey);
    else if(uptomousex-mousex<0 && uptomousey-mousey>0)
        g.DrawRectangle(&(p[0]), uptomousex, mousey, mousex-uptomousex, uptomousey-mousey);
    else if(uptomousex-mousex>0 && uptomousey-mousey<0)
        g.DrawRectangle(&(p[0]), mousex, uptomousey, uptomousex-mousex, mousey-uptomousey);
    else if(uptomousex-mousex<0 && uptomousey-mousey<0)
        g.DrawRectangle(&(p[0]), uptomousex, uptomousey, mousex-uptomousex, mousey-uptomousey);
}
Some global variables:
bool lbuttondown=false;
float mousex=0;
float mousey=0;
float uptomousex=0;
float uptomousey=0;
Elsewhere in WndProc...
case WM_LBUTTONDOWN:
    lbuttondown=true;
    mousex=(float)GET_X_LPARAM(lParam);
    mousey=(float)GET_Y_LPARAM(lParam);
    uptomousex=mousex;
    uptomousey=mousey;
    break;
case WM_MOUSEMOVE:
    if(mousex!=GET_X_LPARAM(lParam) && mousey!=GET_Y_LPARAM(lParam))
    {
        uptomousex=(float)GET_X_LPARAM(lParam);
        uptomousey=(float)GET_Y_LPARAM(lParam);
    }
    break;
You are creating/getting at least 3 device context instances on each timer call, and you never delete/release them (at least in the sample that you posted), so it's no surprise that you end up exhausting the whole GDI system.
For each GetDC() call, ReleaseDC() should be called;
for each CreateCompatibleDC() call, DeleteDC() should be called (and the bitmap from CreateCompatibleBitmap() needs DeleteObject()).
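A hedged sketch of that pairing, reusing the global handles from the question (hdc, hdcBuf, hbmBackBM, hbmOldBackBM); where exactly the cleanup lives (for example WM_DESTROY) is up to you, the point is only that every acquisition has a matching release:

// Acquisition (e.g. once, on the first WM_TIMER):
hdc          = GetDC(hWnd);                                              // pairs with ReleaseDC
hdcBuf       = CreateCompatibleDC(hdc);                                  // pairs with DeleteDC
hbmBackBM    = CreateCompatibleBitmap(hdc, client.right, client.bottom); // pairs with DeleteObject
hbmOldBackBM = (HBITMAP)SelectObject(hdcBuf, hbmBackBM);

// Matching cleanup (e.g. in WM_DESTROY):
SelectObject(hdcBuf, hbmOldBackBM); // restore the original bitmap before deleting ours
DeleteObject(hbmBackBM);
DeleteDC(hdcBuf);
ReleaseDC(hWnd, hdc);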
I have not figured out why this happens, but I have created a simple patch: my own button class, which I'll share here:
class button
{
public:
    int x;
    int y;
    int width;
    int height;
    string text;
    void (*func)(short);

    button(int px, int py, int w, int h, string txt, void (*f)(short))
    {
        x=px;
        y=py;
        width=w;
        height=h;
        text=txt;
        func=f;
    }
};
Then there is a global vector of buttons, allbuttons, and in the WM_LBUTTONUP message I loop through allbuttons and check if the click was inside a button; if so, I call its func.
It's simple, more flexible, and it actually works. However, the graphics are worse, and I would still like to find out why the Windows button is dysfunctional, just out of curiosity.
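A minimal sketch of that hit test, assuming a global std::vector<button> named allbuttons as described (GET_X_LPARAM/GET_Y_LPARAM come from <windowsx.h>); the variable names are illustrative:

case WM_LBUTTONUP:
{
    const int clickX = GET_X_LPARAM(lParam);
    const int clickY = GET_Y_LPARAM(lParam);

    // Walk the button list and fire the callback of the first button that was hit.
    for (const button& b : allbuttons)
    {
        if (clickX >= b.x && clickX < b.x + b.width &&
            clickY >= b.y && clickY < b.y + b.height)
        {
            b.func(0); // what the short parameter means is up to the callback
            break;
        }
    }
    break;
}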

How to get (true) mouse displacement with windows

I'm wondering how to get the true mouse displacement with Windows.
For example, I could save the previous mouse position, request the new position, and subtract the previous from the new one. But this would not give me the true mouse displacement.
Imagine the previous cursor position is at the maximum x coordinate of your screen resolution and the user keeps moving the mouse to the right. Is there still a way to capture the true mouse displacement then?
Thanks
Although it might be possible to actually read the sensor data (after all, the mouse itself only reports movement, not location), I'm not aware of how this could be done. I think at the very low levels of Windows that displacement information gets translated into a cursor position on the screen, and from then on you will always be limited by your screen resolution.
In whatever you are trying to do, is the mouse cursor still visible?
A little while ago, I wrote a WPF numeric edit box control that mimicked the way those controls work in Expression Blend. The ones where you can drag the mouse from the edit box itself and it changes the value. I ran into exactly the same issue that you found, and my solution was to hide the mouse cursor, detect the displacement on every tick, and reset the cursor to the center of the screen. Then, when the user lets go of the button to stop dragging, I would put the cursor back where I found it before the drag. This worked out really well, and Expression Blend also behaves this way in hiding the cursor.
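A rough Win32 sketch of that hide-and-recenter idea (the helper name is mine and the original control was WPF, so this is only an approximation); ShowCursor(FALSE) at the start of the drag and ShowCursor(TRUE) plus restoring the saved position at the end complete the trick:

// Called on every tick while the user is dragging: read how far the cursor
// moved since the last recenter, then snap it back to the anchor point.
void TrackDragDelta(int* dx, int* dy)
{
    // Anchor at the center of the primary screen.
    const int cx = GetSystemMetrics(SM_CXSCREEN) / 2;
    const int cy = GetSystemMetrics(SM_CYSCREEN) / 2;

    POINT cur;
    GetCursorPos(&cur);
    *dx = cur.x - cx; // displacement since the last recenter
    *dy = cur.y - cy;

    SetCursorPos(cx, cy); // recenter so the cursor never reaches a screen edge
}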
As far as I know, DirectX has its own APIs for interacting with peripherals, which are recommended for game developers. You should look into it; try for example "DirectX 8 and the Mouse". More detailed documentation can be found on MSDN.
The most reliable way is to use window hooks. Here is a minimalist sample using C(++)
LRESULT CALLBACK LowLevelMouseProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION)
    {
        MSLLHOOKSTRUCT* hookStruct = (MSLLHOOKSTRUCT*)lParam;
        int x = hookStruct->pt.x; // cursor position in screen coordinates
        int y = hookStruct->pt.y;
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}
HHOOK LowLevelMouseProcHook = NULL;

void hook_window_procedure()
{
    if (LowLevelMouseProcHook == NULL)
        LowLevelMouseProcHook = SetWindowsHookEx(WH_MOUSE_LL, (HOOKPROC)LowLevelMouseProc, (HINSTANCE)NULL, 0);
}

void unhook_window_procedure()
{
    if (LowLevelMouseProcHook != NULL)
        UnhookWindowsHookEx(LowLevelMouseProcHook);
}
You can hook the mouse by calling hook_window_procedure(); and unhook it by calling unhook_window_procedure();
You will receive calls to LowLevelMouseProc(...) whenever the mouse moves. lParam contains a pointer to a MSLLHOOKSTRUCT and you find all the needed mouse information in its members. Most interesting here is the POINT pt member, which already holds the cursor position in screen coordinates (pixels), so you can compute the displacement by subtracting the previous position.
If you struggle with being clamped by the screen borders, you can work around it by setting the mouse back to the center of your window on each call. This is often seen in older games. If you are not forced to use only the Windows API, I would try using DirectInput.
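For the DirectInput route, a minimal sketch of reading relative mouse motion; this assumes the DirectX 8+ headers (dinput.h, linked against dinput8.lib and dxguid.lib) and is only an illustration of the API shape, not the answerer's code:

#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>

IDirectInput8*       g_di    = NULL;
IDirectInputDevice8* g_mouse = NULL;

bool InitMouse(HINSTANCE hInst, HWND hwnd)
{
    if (FAILED(DirectInput8Create(hInst, DIRECTINPUT_VERSION, IID_IDirectInput8,
                                  (void**)&g_di, NULL)))
        return false;
    if (FAILED(g_di->CreateDevice(GUID_SysMouse, &g_mouse, NULL)))
        return false;
    g_mouse->SetDataFormat(&c_dfDIMouse); // report a DIMOUSESTATE per read
    g_mouse->SetCooperativeLevel(hwnd, DISCL_FOREGROUND | DISCL_NONEXCLUSIVE);
    return SUCCEEDED(g_mouse->Acquire());
}

void ReadMouseDelta(long* dx, long* dy)
{
    DIMOUSESTATE state = {};
    if (SUCCEEDED(g_mouse->GetDeviceState(sizeof(state), &state)))
    {
        *dx = state.lX; // relative motion since the last read, not clamped by screen edges
        *dy = state.lY;
    }
}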
Another way would be to evaluate WM_INPUT directly in your WindowProc, for example. This reads data directly from the Human Interface Device (HID) stack and should contain raw mouse data. Here is a link to the MS documentation.
The method on Windows is as follows:
// you can #include <hidusage.h> for these defines
#ifndef HID_USAGE_PAGE_GENERIC
#define HID_USAGE_PAGE_GENERIC ((USHORT) 0x01)
#endif
#ifndef HID_USAGE_GENERIC_MOUSE
#define HID_USAGE_GENERIC_MOUSE ((USHORT) 0x02)
#endif
RAWINPUTDEVICE Rid[1];
Rid[0].usUsagePage = HID_USAGE_PAGE_GENERIC;
Rid[0].usUsage = HID_USAGE_GENERIC_MOUSE;
Rid[0].dwFlags = RIDEV_INPUTSINK;
Rid[0].hwndTarget = hWnd;
RegisterRawInputDevices(Rid, 1, sizeof(Rid[0]));
then in message loop:
case WM_INPUT:
{
    UINT dwSize = sizeof(RAWINPUT);
    static BYTE lpb[sizeof(RAWINPUT)];
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, lpb, &dwSize, sizeof(RAWINPUTHEADER));

    RAWINPUT* raw = (RAWINPUT*)lpb;
    if (raw->header.dwType == RIM_TYPEMOUSE)
    {
        int xPosRelative = raw->data.mouse.lLastX;
        int yPosRelative = raw->data.mouse.lLastY;
    }
    break;
}
Installing hooks will trigger sensitive antiviruses.
See the MSDN documentation for reference.
This sort of displacement ? : http://en.wikipedia.org/wiki/Differential_calculus
Try pitch/yaw : How could simply calling Pitch() and Yaw() cause the camera to eventually Roll()?
FPS style : http://cboard.cprogramming.com/game-programming/94944-mouse-input-aiming-fps-style-using-glut.html
and this : Jittering when moving mouse
pseudo code:
int mouseX = 0, mouseY = 0;
int oldMouseX = 0, oldMouseY = 0;
while(game_is_Running)
{
    oldMouseX = mouseX;
    oldMouseY = mouseY;
    mouseX = get_new_mouse_from_windows_X();
    mouseY = get_new_mouse_from_windows_Y();

    if ((mouseX - oldMouseX) > 0)
    {
        // mouse moved to the right
    }
    else if ((mouseX - oldMouseX) < 0)
    {
        // mouse moved to the left
    }

    // screen coordinates grow downward, so a positive delta means the mouse moved down
    if ((mouseY - oldMouseY) > 0)
    {
        // mouse moved down
    }
    else if ((mouseY - oldMouseY) < 0)
    {
        // mouse moved up
    }
}
A common method for this (in games):
Every frame of the game, get the position of the mouse and then recenter the mouse in the middle of the window using operating system functions. For every frame other than the first, this should give you an accurate displacement information. The key is to just be re-centering the mouse every frame so it never reaches the blocking outer bounds of the screen.
EDIT: Whoops, didn't realize that DXM had already said this.
It is not possible to track the physical mouse displacement with the Windows API. However, DirectInput provides features to keep track of it. You can also 'fake' it using only the Windows API, using the neat little trick in the answer from SO user DXM.