C++ Global Hotkeys with platform APIs

I'm working on an application for taking screenshots on Windows, OSX and Linux in C++/Qt. Now I need to set global hotkeys, so the user can take screenshots when the application is running in the background. I tried with Qxt and UGlobalHotkey, which are both Qt libraries, but neither of them seemed to work.
I tried to implement it for OSX with Carbon (tutorial), but I need to call a class member function, which just doesn't work. Could someone provide me with an example? You can find my code here. The function I need to call is new_screenshot().
Or is there any other way to achieve something like this? I really need my application to take a screenshot from the background, otherwise it's pretty useless (yes, I should probably have implemented it at the very beginning to see if it even works). Would it maybe be better to have a separate client for every platform (Cocoa/Swift for OSX, GTK for Linux, a C# client for Windows)? I have thought about this often over the past few days.

Do I understand correctly that you want to call new_screenshot from the hot key event handler? If so, InstallApplicationEventHandler lets you pass a pointer to user data in its 4th argument. Pass a pointer to your MainWindow instance (based on code from the tutorial):
MainWindow *mainWindow = ... // get the main window somehow
InstallApplicationEventHandler(&MyHotKeyHandler, 1, &eventType, mainWindow, NULL);
Then you can use it in the event handler.
OSStatus MyHotKeyHandler(EventHandlerCallRef nextHandler, EventRef theEvent, void *userData)
{
    // Do something once the key is pressed
    static_cast<MainWindow*>(userData)->new_screenshot();
    return noErr;
}
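For completeness, here is a minimal sketch of what the registration side might look like, built from the same Carbon calls the tutorial uses. The Cmd+Shift+3 shortcut, the 'htk1' signature and the registerGlobalHotkey helper are assumptions for illustration, not part of your code:
#include <Carbon/Carbon.h>

OSStatus MyHotKeyHandler(EventHandlerCallRef, EventRef, void *userData); // handler from above

void registerGlobalHotkey(MainWindow *mainWindow)
{
    // We only care about "hot key pressed" events.
    EventTypeSpec eventType;
    eventType.eventClass = kEventClassKeyboard;
    eventType.eventKind  = kEventHotKeyPressed;

    // Pass the MainWindow pointer as the user data (4th argument) so the
    // handler can call new_screenshot() on it.
    InstallApplicationEventHandler(&MyHotKeyHandler, 1, &eventType, mainWindow, NULL);

    // Register the actual key combination system-wide (Cmd+Shift+3 here).
    EventHotKeyID hotKeyID;
    hotKeyID.signature = 'htk1';   // arbitrary four-char code
    hotKeyID.id        = 1;

    EventHotKeyRef hotKeyRef;
    RegisterEventHotKey(kVK_ANSI_3, cmdKey | shiftKey, hotKeyID,
                        GetApplicationEventTarget(), 0, &hotKeyRef);
}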

I did something similar in the past with MFC and the Win32 API, so it only works on Windows, but pressing ALT+F10 was able to hide/show a window...
void CWinHideDlg::OnButtonActive()
{
    CString tmp;
    GetDlgItemText(IDC_BUTTON_ACTIVE, tmp);
    if (0 == strcmp(tmp.GetBuffer(tmp.GetLength()), "Activate"))
    {
        m_myAtom = GlobalAddAtom("MY_GLOBAL_HOT_HIDE_KEY");
        int err = RegisterHotKey(this->GetSafeHwnd(), m_myAtom, MOD_ALT, VK_F10);
        SetDlgItemText(IDC_BUTTON_ACTIVE, "Stop");
        CButton *pBtn = (CButton *)GetDlgItem(IDC_BUTTON_UNHIDE);
        pBtn->EnableWindow(TRUE);
        SetDlgItemText(IDC_STATIC_INFO, "Set the mouse over the window \nand press ALT + F10 to hide it...");
    }
    else
    {
        UnregisterHotKey(this->GetSafeHwnd(), m_myAtom);
        GlobalDeleteAtom(m_myAtom);
        CButton *pBtn = (CButton *)GetDlgItem(IDC_BUTTON_UNHIDE);
        pBtn->EnableWindow(FALSE);
        SetDlgItemText(IDC_BUTTON_ACTIVE, "Activate");
    }
}
Basically this code activates/deactivates the hot key ALT+F10. Once it is active, you can hide/unhide a running window on the system by placing the mouse pointer over the window and pressing ALT+F10...
This is from the WindowProc function:
if (message == WM_HOTKEY)
{
    CString tmp;
    POINT pc;
    GetCursorPos(&pc);
    if (GetAsyncKeyState(VK_F10))
    {
        HWND hwnd = ::WindowFromPoint(pc);
        if (hwnd)
        {
            tmp.Format("%08Xh", hwnd);
            m_HideWins.InsertString(m_HideWins.GetCount(), tmp);
            ::ShowWindow(hwnd, SW_HIDE);
        }
    }
}
You can use this code to register your own hot key and use it to take a screenshot...
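If you want to stay inside your Qt application instead of MFC, the same RegisterHotKey idea can be wired up through a native event filter. A rough sketch for Qt 5 on Windows follows; the hot key id 1, the Alt+PrintScreen shortcut and the call to MainWindow::new_screenshot() are assumptions based on the question, not tested code:
#include <QAbstractNativeEventFilter>
#include <windows.h>
#include "mainwindow.h"   // assumed: declares MainWindow::new_screenshot()

class HotkeyFilter : public QAbstractNativeEventFilter
{
public:
    explicit HotkeyFilter(MainWindow *w) : m_window(w)
    {
        // Tie the hot key to the main window's native handle.
        RegisterHotKey(reinterpret_cast<HWND>(w->winId()), 1, MOD_ALT, VK_SNAPSHOT);
    }
    ~HotkeyFilter() { UnregisterHotKey(reinterpret_cast<HWND>(m_window->winId()), 1); }

    bool nativeEventFilter(const QByteArray &eventType, void *message, long *result) override
    {
        Q_UNUSED(eventType);
        Q_UNUSED(result);
        MSG *msg = static_cast<MSG *>(message);
        if (msg->message == WM_HOTKEY && msg->wParam == 1) {
            m_window->new_screenshot();
            return true;    // swallow the message
        }
        return false;       // let Qt handle everything else
    }

private:
    MainWindow *m_window;
};

// Installation, e.g. in main() after the MainWindow is created:
//   HotkeyFilter filter(&mainWindow);
//   app.installNativeEventFilter(&filter);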
Hope it helps...

Related

Qt5 Not Registering Touch Events

I'm working on determining if a certain touchscreen will be compatible with an application and recently got a loaner model of an Elo 2402L touchscreen. I've installed the driver the company provides and was able to see multi-touch events using the evtest utility (parser for /dev/input/eventX).
The thing is that I'm running Scientific Linux 6.4, which uses Linux kernel 2.6.32. I've seen a lot of mixed information on touchscreen compatibility for Linux kernels before 3.x.x. Elo says that their driver only supports single-touch for 2.6.32. Also, I've seen people say that the majority of the compatibility issues with touch events in this kernel version are with Xorg interfaces.
I developed a very simple Qt5 application to test whether Qt could detect the touch events or not, because I'm not sure whether Qt applications are X-based and if they read events directly from /dev/input or something else.
However, while a simple mouse event handler correctly registers mouse events, the simple touch event handler I also created does nothing when I touch the main screen. There is a beep, since part of the driver Elo provides beeps when the screen is touched, so I know that SOMETHING registers the touch, but neither the desktop nor this application seems to recognize the touch event.
Also, yes, the WA_AcceptTouchEvents attribute is set to true in the window's constructor.
I have a simple mainwindow.h:
...
protected:
int touchEvent(QTouchEvent *ev);
...
And mainwindow.cpp:
MainWindow::MainWindow(QWidget *parent) {
    ...
    setAttribute(Qt::WA_AcceptTouchEvents, true);
    touchPoints = 0;
}
...
int MainWindow::touchEvent(QTouchEvent *ev) {
    switch (ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    }
    ui->statusBar->showMessage("Touch Points: " + touchPoints);
}
Is there something wrong with the way I'm using the touch event handler? Or is there some issue with the device itself? Does Qt read input events directly from /dev/input, or does it get its input events from X?
Very confused here, as I haven't used Qt before and want to narrow down the cause before I say that it's the device causing the issue.
Also, if anyone has any insight into the device / kernel compatibility issue, that would be extremely helpful.
The QTouchEvent documentation says:
Touch events occur when pressing, releasing, or moving one or more touch points on a touch device (such as a touch-screen or track-pad). To receive touch events, widgets have to have the Qt::WA_AcceptTouchEvents attribute set and graphics items need to have the acceptTouchEvents attribute set to true.
Probably you just need to call setAttribute(Qt::WA_AcceptTouchEvents, true) inside the MainWindow constructor.
Is there something wrong with the way I'm using the touch event handler?
There is no touch event handler. If you change:
int touchEvent(QTouchEvent *ev);
to:
int touchEvent(QTouchEvent *ev) override;
(which you should always do when you are trying to override virtual functions so you can catch exactly this kind of mistake), you'll see that there is no such function for you to override. What you need to override is the event() handler:
protected:
bool event(QEvent *ev) override;
You need to check for touch events there:
bool MainWindow::event(QEvent *ev)
{
    switch (ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    default:
        return QMainWindow::event(ev);
    }
    ui->statusBar->showMessage(QString("Touch Points: %1").arg(touchPoints));
    return true;
}
However, it might be better to work with gestures instead of touch events. But I don't know what kind of application you're writing. If you wanted to let Qt recognize gestures rather than implementing them yourself through touch events, you would first grab the gestures you want, in this case pinching:
setAttribute(Qt::WA_AcceptTouchEvents);
grabGesture(Qt::PinchGesture);
and then handle it:
bool MainWindow::event(QEvent *ev)
{
    if (ev->type() != QEvent::Gesture) {
        return QMainWindow::event(ev);
    }
    auto* gestEv = static_cast<QGestureEvent*>(ev);
    if (auto* gest = gestEv->gesture(Qt::PinchGesture)) {
        auto* pinchGest = static_cast<QPinchGesture*>(gest);
        auto sf = pinchGest->scaleFactor();
        // You could use the pinch scale factor here to zoom an image,
        // for example.
        Q_UNUSED(sf);
        ev->accept();
        return true;
    }
    return QMainWindow::event(ev);
}
Working with gestures instead of touch events has the advantage of using the platform's gesture recognition facilities, like those of Android and iOS. But again, I don't know what kind of application you're writing or what kind of platform you're working on.

C++ Windows System Tray won't display message

I have been stuck here for 4 days. I made a function that puts a program in the system tray, but the problem is that it won't show the balloon title and message. What am I doing wrong? I even made a separate function to determine which Windows OS we are running on and initialize cbSize based on the OS detected. Any help will be appreciated. Below is the function.
EDIT: I am using Windows 7 and the icon shows up in the system tray but won't show the message or title. I am also doing this as a console application right now, as it will be used as a plugin in Unity3D. I want a solution that uses the Windows API but not Windows Forms, as I don't want any new window to open from this.
void createSystemTray()
{
    HWND wHandler = GetDesktopWindow();
    NOTIFYICONDATA iData;
    ZeroMemory(&iData, sizeof(iData));

    if (getOsVersion() == "Windows Vista" || getOsVersion() == "Windows 7" || getOsVersion() == "Windows 8" || getOsVersion() == "Windows 8.1")
    {
        iData.cbSize = sizeof(NOTIFYICONDATA);
    }
    else if (getOsVersion() == "Windows XP" || getOsVersion() == "Windows XP Professional x64 Edition")
    {
        iData.cbSize = NOTIFYICONDATA_V3_SIZE;
    }
    else if (getOsVersion() == "Windows 2000")
    {
        iData.cbSize = NOTIFYICONDATA_V2_SIZE;
    }
    else if (getOsVersion() == "UNKNOWN OS")
    {
        // Assume we have an old Windows OS such as Me, 95...
        iData.cbSize = NOTIFYICONDATA_V1_SIZE;
    }

    iData.hWnd = wHandler;
    iData.uID = 100;
    iData.uVersion = NOTIFYICON_VERSION_4;
    iData.uCallbackMessage = WM_MESSAGE;
    iData.hIcon = LoadIcon(NULL, (LPCTSTR)IDI_WARNING);
    lstrcpy(iData.szTip, "My First Tray Icon");
    lstrcpy(iData.szInfo, "My App Info");
    lstrcpy(iData.szInfoTitle, "My Info Title");
    iData.uFlags = NIF_MESSAGE | NIF_ICON | NIF_TIP;

    Shell_NotifyIcon(NIM_SETVERSION, &iData); // called only when using NIM_ADD
    Shell_NotifyIcon(NIM_ADD, &iData);
}
I added NIF_INFO to the uFlags and the problem is gone. Now it displays everything including text, title and info title.
The code below is what solved it.
iData.uFlags = NIF_MESSAGE|NIF_ICON|NIF_TIP|NIF_SHOWTIP|NIF_INFO;
Your biggest problem with the code in the question is that you pass the wrong window handle. You have to pass one of your window handles. But instead you pass the window handle of the desktop.
You will need to create a window and use its handle. The window does not need to be visible. I believe that you can use a message only window.
You must also call NIM_SETVERSION after NIM_ADD.
I'm very sceptical of your version switching being based on string equality testing. Your code will break on Windows 9 for instance. Use the version helper functions.
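For example, the cbSize selection could look roughly like this with VersionHelpers.h (a hedged sketch; it assumes you only need to distinguish Windows 2000 and later, which is all these helpers cover):
#include <windows.h>
#include <shellapi.h>
#include <VersionHelpers.h>   // requires the Windows 8.1 SDK or later

DWORD pickNotifyIconDataSize()
{
    if (IsWindowsVistaOrGreater())
        return sizeof(NOTIFYICONDATA);
    if (IsWindowsXPOrGreater())
        return NOTIFYICONDATA_V3_SIZE;
    return NOTIFYICONDATA_V2_SIZE;   // Windows 2000
}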
You also perform no error checking. This isn't the easiest function to call but your failure to check for errors makes things even harder than they need to be. Please read the documentation and add error checking code.
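Putting the advice together, a minimal plain Win32 sketch might look like the following. The TrayMsgWindow class name and the WM_APP + 1 callback message are placeholders, and error checking is trimmed for brevity:
#include <windows.h>
#include <shellapi.h>

static const UINT WM_TRAYICON = WM_APP + 1;

static LRESULT CALLBACK TrayWndProc(HWND hWnd, UINT msg, WPARAM wp, LPARAM lp)
{
    // Handle WM_TRAYICON here if you need clicks on the icon.
    return DefWindowProc(hWnd, msg, wp, lp);
}

void createSystemTray(HINSTANCE hInst)
{
    // A hidden message-only window owns the icon and receives its callbacks.
    WNDCLASS wc = {};
    wc.lpfnWndProc   = TrayWndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("TrayMsgWindow");
    RegisterClass(&wc);
    HWND hWnd = CreateWindowEx(0, TEXT("TrayMsgWindow"), NULL, 0,
                               0, 0, 0, 0, HWND_MESSAGE, NULL, hInst, NULL);

    NOTIFYICONDATA nid = {};
    nid.cbSize           = sizeof(nid);
    nid.hWnd             = hWnd;               // our own window, not the desktop
    nid.uID              = 100;
    nid.uFlags           = NIF_MESSAGE | NIF_ICON | NIF_TIP | NIF_INFO;
    nid.uCallbackMessage = WM_TRAYICON;
    nid.hIcon            = LoadIcon(NULL, IDI_WARNING);
    lstrcpy(nid.szTip,       TEXT("My First Tray Icon"));
    lstrcpy(nid.szInfo,      TEXT("My App Info"));
    lstrcpy(nid.szInfoTitle, TEXT("My Info Title"));

    Shell_NotifyIcon(NIM_ADD, &nid);           // add first...
    nid.uVersion = NOTIFYICON_VERSION_4;
    Shell_NotifyIcon(NIM_SETVERSION, &nid);    // ...then set the version
}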

MFC WebBrowser Control: How many (normal) lines of code does it take to simulate Ctrl+N?

Update: Answer: Two normal lines of code required. Thanks Noseratio!
I banged my head on the keyboard for more hours than I would have cared to, trying to simulate IE's Ctrl+N behavior in my hosted Browser control app. Unfortunately, due to complications which I've abstracted out of my code examples below, I can't just let IE do Ctrl+N itself. So I have to do it manually.
Keep in mind that I am running a hosted browser. So typically, opening links in new windows will actually open them within a new "tab" within my application (it's not really a tab, but another window... appearance-wise, though, it's a tab). However, Ctrl+N is different -- here, it is expected that a fully-fledged IE window will launch when it is pressed.
I think my problem is that of framing the question -- admittedly I am new to the WebBrowser control and I find it to be rather yucky. Regardless, I've scoured the Internet for the past day and couldn't come up with an elegant solution.
Basically, the ideal solution would be to call a "NewWindow" function within the WebBrowser control or its affiliate libraries; however, all I was able to find were the OnNewWindow methods, which are event handlers, not event signallers. I understand that most of the time the user will be creating the events... but what about programmatic simulation?
I tried looking into a SendMessage approach where I could use the IDs that the OnNewWindow events use... that ended up in nothing but crashes. Perhaps I could go back and get it to work, but I'd like confirmation that that approach is even worth my time.
The next approach, which should have been the most elegant but sadly didn't pan out, was the following:
Navigate2(GetLocationURL().GetBuffer(), BrowserNavConstants::navOpenInNewWindow);
It would have worked marvelously if it weren't for the fact that the new window would open in the background, blinking in the taskbar and needing a click to bring it to the front.
I tried to get around the limitation in a myriad of ways, including getting the dispatcher of the current context, then calling OnNewWindow2 with that IDispatch object. Then I would invoke QueryInterface on the dispatch object for an IWebBrowser control. The webBrowser control (presumably under the control of the new window) could then navigate to the page of the original context. However... this too was a pretty messy solution and in the end would cause crashes.
Finally, I resorted to manually invoking JavaScript to get the desired behavior. Really?? Was there really no more elegant a solution to my problem than the below mess of code?
if ((pMsg->wParam == 'N') && (GetKeyState(VK_CONTROL) & 0x8000) && !(GetKeyState(VK_SHIFT) & 0x8000) && !(GetKeyState(VK_MENU) & 0x8000))
{
    LPDISPATCH pDisp = CHtmlView::GetHtmlDocument();
    IHTMLDocument2 *pDoc;
    if (SUCCEEDED(pDisp->QueryInterface(IID_IHTMLDocument2, (void **)&pDoc)))
    {
        IHTMLWindow2 *pWnd;
        pDoc->get_parentWindow(&pWnd);

        BSTR bStrLang = ::SysAllocString(L"JavaScript");
        CString sCode(L"window.open(\"");
        sCode.Append(GetLocationURL().GetBuffer());
        sCode.Append(L"\");");
        BSTR bStrCode = sCode.AllocSysString();

        COleVariant retVal;
        pWnd->execScript(bStrCode, bStrLang, &retVal);

        ::SysFreeString(bStrLang);
        ::SysFreeString(bStrCode);
        pWnd->Release();
        pDoc->Release();
    }
    pDisp->Release();
}
I find it hard to believe that I must resort to such hackery as this to get something as simple as opening a new window when the user presses Ctrl+N.
Please, Stack Overflow, point out the clearly obvious thing I overlooked.
Ctrl+N in IE starts a new window on the same session. In your case, window.open or webBrowser.Navigate2 will create a window on a new session, because it will be run by an iexplore.exe process which is separate from your app. The session is shared per-process; this is how the underlying UrlMon library works. So you'll lose all cookies and the authentication cache for the new window. On the other hand, when you create a new window which hosts the WebBrowser control within your own app process, you'll keep the session.
If such behavior is OK for your needs, try your initial Navigate2 approach first, preceding it with an AllowSetForegroundWindow(ASFW_ANY) call. If the new window still doesn't receive the focus correctly, you can try creating an instance of the InternetExplorer.Application out-of-proc COM object and use the same IWebBrowser2 interface to automate it. Below is a simple C# app which works OK for me; the new window is correctly brought to the foreground, with no focus issues. It should not be a problem to do the same with MFC.
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

namespace IeApp
{
    public partial class MainForm : Form
    {
        // get the underlying WebBrowser ActiveX object;
        // this code depends on the SHDocVw.dll COM interop assembly:
        // generate SHDocVw.dll with "tlbimp.exe ieframe.dll"
        // and add it as a reference to the project
        public MainForm()
        {
            InitializeComponent();
        }

        private void NewWindow_Click(object sender, EventArgs e)
        {
            AllowSetForegroundWindow(ASFW_ANY);
            // could do: var ie = new SHDocVw.InternetExplorer()
            var ie = (SHDocVw.InternetExplorer)Activator.CreateInstance(Type.GetTypeFromProgID("InternetExplorer.Application"));
            ie.Visible = true;
            ie.Navigate("http://www.example.com");
        }

        const int ASFW_ANY = -1;

        [DllImport("user32.dll")]
        static extern bool AllowSetForegroundWindow(int dwProcessId);
    }
}
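For reference, a rough C++ equivalent of the same idea could look like this (a sketch only; it assumes COM has already been initialized by the MFC app and trims error handling):
#include <windows.h>
#include <exdisp.h>      // IWebBrowser2, CLSID_InternetExplorer
#include <comdef.h>      // _bstr_t

void OpenNewIeWindow(const wchar_t *url)
{
    AllowSetForegroundWindow(ASFW_ANY);

    IWebBrowser2 *pBrowser = NULL;
    HRESULT hr = CoCreateInstance(CLSID_InternetExplorer, NULL,
                                  CLSCTX_LOCAL_SERVER, IID_IWebBrowser2,
                                  reinterpret_cast<void **>(&pBrowser));
    if (SUCCEEDED(hr) && pBrowser)
    {
        VARIANT vEmpty;
        VariantInit(&vEmpty);
        _bstr_t bstrUrl(url);

        pBrowser->put_Visible(VARIANT_TRUE);
        pBrowser->Navigate(bstrUrl, &vEmpty, &vEmpty, &vEmpty, &vEmpty);
        pBrowser->Release();
    }
}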

CDockingManager GetPaneList() causes assertion failure in wincore.cpp?

So I thought this would be pretty simple, but I forgot it's MFC. Instead of registering a notification listener for data model changes (which might require a GUI update) on each individual control, I figured: why not register it once and then send a message to all the open dock panes, letting them update their controls as needed on their own terms, for efficiency.
My callback function for handling the notification from the server looks something like this:
void CMainFrame::ChangeCallback(uint32_t nNewVersion, const std::vector<uint32_t>& anChangedObjectTypes)
{
    CObList panes;
    GetDockingManager()->GetPaneList(panes); // assert failure
    if (!panes.IsEmpty())
    {
        POSITION pos = panes.GetHeadPosition();
        while (pos)
        {
            CDockablePane* pPane = dynamic_cast<CDockablePane*>(panes.GetNext(pos));
            if (pPane)
                pPane->PostMessage(DM_REFRESH, nNewVersion);
        }
    }
}
The error I am getting is an assertion failure on line 926 of wincore.cpp
CHandleMap* pMap = afxMapHWND();
ASSERT(pMap != NULL); // right here
There is a comment below this saying it can happen if you pass controls across threads; however, this is a single-threaded MFC application and this is all being done from the main frame.
Does anyone know what else can cause this?
If there is another way to go about sending a message to all the open CDockablePane-derived windows in MFC, that works as well...
Here's the obvious workaround that I didn't want to have to do, but after hours of debugging and no response here, I guess this is a viable answer:
I added std::vector<CDockPane*> m_dockList; to the members of CMainFrame.
Now, after each call to AddPane in the various places that can create and open new dock panes, I make a subsequent call to push_back, and then I override CDockablePane::OnClose like so:
CMainFrame* pMainFrame = reinterpret_cast<CMainFrame*>(AfxGetMainWnd());
if (pMainFrame)
{
    std::vector<CDockPane*>::const_iterator found(
        std::find(pMainFrame->DockList()->begin(), pMainFrame->DockList()->end(), this));
    if (found != pMainFrame->DockList()->end())
        pMainFrame->DockList()->erase(found);
}
CDockablePane::OnClose();
Now this list only contains pointers to open dock panes, which lets me handle the event notification in my callback and simply do a for loop and PostMessage to each, as sketched below.
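The callback from the question, reworked to use the manually tracked list, might look roughly like this (DM_REFRESH is assumed to be an application-defined message, e.g. WM_APP + 10):
void CMainFrame::ChangeCallback(uint32_t nNewVersion, const std::vector<uint32_t>& /*anChangedObjectTypes*/)
{
    // Broadcast to every pane we are still tracking.
    for (CDockPane* pPane : m_dockList)
    {
        if (pPane && ::IsWindow(pPane->GetSafeHwnd()))
            pPane->PostMessage(DM_REFRESH, nNewVersion);
    }
}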

C++ subclassing a form to trap F1 - F12 keys

The main form opens a child form that has a handful of button CONTROLs on it. I need to trap keyboard events, so I subclassed one of the controls. All is good until the control loses focus, of course.
Ideally, as long as this child form is open I would like to assign the focus to this control and thus trap all the keystrokes, no matter where the user clicks.
I suspect superclassing might be a better way to go but I am not as familiar with it.
Perhaps what I should do is use accelerators on the main form?
ADDED:
I should mention that the main form has a large listview control that is subclassed to recover up/down arrows and mousewheel etc.
The traditional way is to install a keyboard hook (SetWindowsHookEx), but you need to inject it into every application, and it doesn't work across 32/64 bit boundaries.
What you can do, however, and quite easily at that, is poll the keyboard with GetKeyboardState on a timer and check whether your F1-F12 keys are pressed. The timer can tick as slowly as 100 ms and it will catch almost everything while using virtually no resources.
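A minimal sketch of that polling approach (the timer id, the 100 ms interval and the OnFunctionKey helper are placeholders for illustration):
#include <windows.h>

void OnFunctionKey(int n);            // your handler (assumed), n = 1..12

const UINT_PTR kPollTimerId = 1;
static bool g_wasDown[12];            // previous state of F1..F12

void StartPolling(HWND hWndForm)
{
    SetTimer(hWndForm, kPollTimerId, 100, NULL);
}

// Call this from the form's WM_TIMER handler.
void PollFunctionKeys()
{
    BYTE keys[256];
    if (!GetKeyboardState(keys))      // the thread's view of the keyboard
        return;

    for (int i = 0; i < 12; ++i)
    {
        bool down = (keys[VK_F1 + i] & 0x80) != 0;
        if (down && !g_wasDown[i])    // only fire on the down transition
            OnFunctionKey(i + 1);
        g_wasDown[i] = down;
    }
}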
Assuming that this is within Windows and the Win32 API, one option is to look for messages in your main GetMessage, TranslateMessage, DispatchMessage loop. You can special-case any message within this loop, irrespective of which window it's aimed at.
You should probably use IsChild to check that the message is intended for a control on your main window (as opposed to some dialog box or message box that might be displayed separately). Getting the logic right can be fiddly, too. It would be best to only intercept messages when you know your control has lost the focus, and only intercept the exact messages you need to.
Years ago, I wrote a library message loop with a lot of this built in. I had a simple manager class that held pointers to instances of my own little window class. The loop knew the difference between dialogs and normal windows, gave each window class a chance to spy on its children's messages, and so on. You won't be able to run this directly and the conventions are a bit strange, but you might find this useful...
int c_Window_List::Message_Loop (void)
{
    MSG msg;
    bool l_Handled;

    while (GetMessage (&msg, NULL, 0, 0))
    {
        l_Handled = false;

        c_Windows::c_Cursor l_Cursor;
        bool ok;

        for (ok = l_Cursor.Find_First (g_Windows); ok; ok = l_Cursor.Step_Next ())
        {
            if (IsChild (l_Cursor.Key (), msg.hwnd))
            {
                if (l_Cursor.Data ().f_Accelerators != NULL)
                {
                    l_Handled = TranslateAccelerator (l_Cursor.Key (), l_Cursor.Data ().f_Accelerators, &msg);
                    if (l_Handled)  break;
                }

                if (l_Cursor.Data ().f_Manager != 0)
                {
                    l_Handled = l_Cursor.Data ().f_Manager->Spy_Msg (l_Cursor.Key (), msg);
                }
                if (l_Handled)  break;

                if (l_Cursor.Data ().f_Is_Dialog)
                {
                    l_Handled = IsDialogMessage (l_Cursor.Key (), &msg);
                    if (l_Handled)  break;
                }
            }
        }

        if (!l_Handled)
        {
            TranslateMessage (&msg);
            DispatchMessage (&msg);
        }

        if (g_Windows.Size () == 0)
        {
            // When all windows have closed, exit
            PostQuitMessage (0);
        }
    }

    return msg.wParam;
}
The f_ prefixes mean field - I picked up the m_ convention later, but this code hasn't been revisited in a very long time. f_Manager in particular points to an instance of my c_Window_Base class. The c_Cursor class is a kind of iterator, used to step through all the windows stored in the g_Windows variable (actually a static class member rather than a global).