Qt - Catch events normally handled by the Window Manager - C++

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app. The user interacts with the 3D viewports primarily through the mouse and the three keyboard modifiers (Alt, Shift, Ctrl). Shift and Ctrl modify the currently selected tool's options, and Alt operates the camera - much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt+RightButton mouse drag and interprets it as a window-drag command - even when the title bar isn't being held and regardless of which widget is currently highlighted.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to this one scenario in one particular widget class?
Thank you,
Cam

If you set Qt::X11BypassWindowManagerHint on the window, the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
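For example, a minimal sketch (the flag needs to be in place before the window is shown):

```cpp
// Minimal sketch: bypassing the window manager stops it from intercepting
// Alt+drag, at the cost of the native frame (decoration, moving, resizing).
#include <QApplication>
#include <QWidget>

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    QWidget window;
    window.setWindowFlags(window.windowFlags() |
                          Qt::X11BypassWindowManagerHint);
    window.resize(800, 600);
    window.show();
    return app.exec();
}
```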
Another way: if your users are only on one or two varieties of Linux, add a step to the installer which asks the user whether they want to change the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or its equivalent).
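For instance, the installer could shell out to gconftool-2 after asking the user. Treat this as a sketch only: the key path below assumes GNOME 2's Metacity window manager, and other desktops use different keys entirely.

```cpp
// Hypothetical installer step: rebind Metacity's window-drag modifier from
// Alt to Super so Alt stays free for the application. The GConf key path
// is an assumption and applies to GNOME 2/Metacity only.
#include <cstdlib>

int main() {
    return std::system(
        "gconftool-2 --type string --set "
        "/apps/metacity/general/mouse_button_modifier '<Super>'");
}
```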

Related

Drag and Drop a toolbox like Photoshop

I'm developing a multi-platform application (Windows, Mac and Linux), and I need your help to know how to get the same drag-and-drop effect as Photoshop in Qt - that is, how can I make a toolbox that can be moved around and dropped onto a corner of my main window? I do not need QToolBar; I want my own implementation.
Photoshop effect example
I've done similar work in other languages and frameworks. If you want to roll your own from scratch, my #1 advice is to break the problem up into smaller parts and take it on progressively (a minimal sketch of steps 2-4 follows the list):
1. Write or import a good drag-and-drop handler.
2. Handle dragging to position a free-floating "toolbox" (no docking yet).
3. Create logic to detect drops onto "hot zones" (areas of a set width at the edges of the screen) to initiate a dock.
4. Handle the behavior of a panel in the "docked" state (locked to the left x coordinate, etc.).
5. Jazz it up with drawing cues to indicate to the user where a panel will dock as it is dragged.
6. Keep building the logic for nested panels, tabbed panels, etc. Try to tackle it in an object-oriented way to keep the code clean and tight and promote reuse.
You may want to start by studying some other implementations, regardless of language, to get some ideas and see how they set up the logic.
A Qt4 docking framework
A JS docking framework
A C# docking framework
Good luck!

Can you force a console app to look like a GUI app?

I've been weighing the pros/cons of making a GUI app, and I've decided a console app is much more powerful for my calculator, especially since it does different things like FOIL, quadratic equations, etc. So my question is: can I make the console look like a GUI-based app?
The answer to your question depends on exactly what you mean by "console." If you're talking about Windows console windows, then the answer is "maybe." Some Windows installations can emulate VGA/EGA graphics within a console window, making them able to play old games for DOS.
Your mission would be to implement every GUI widget you need, such as clickable buttons, text-entry fields, etc. in terms of simple graphics primitives for drawing lines and rectangles. Then you have to write code that figures out where the mouse is and draws the mouse pointer in the right spot. You'd also have to write code to make the cursor blink, to make the arrow keys move the cursor, and to make it possible to select characters in a text entry box and copy, cut, and paste them.
When you got done, you'd have a program that works on some people's computers, but not on others. On some Windows installations, the console windows can't do graphics or go fullscreen. Your app wouldn't work at all on those systems, although you could write a fullscreen Windows app using a 2D game library such as SDL or Allegro instead of writing a console app, which would bring you back to the previous paragraph.
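To give a sense of the scale of the job, here is a minimal SDL2 sketch of a single hand-rolled "button": just a filled rectangle with a manual hit test. Text rendering, focus, a blinking cursor, and everything else would have to be built the same way.

```cpp
// One hand-rolled "widget": a rectangle that logs a message when clicked.
#include <SDL.h>

int main(int, char **) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("Fake GUI", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Rect button = {270, 220, 100, 40};  // the entire widget "toolkit"

    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT) {
                running = false;
            } else if (e.type == SDL_MOUSEBUTTONDOWN) {
                SDL_Point p = {e.button.x, e.button.y};
                if (SDL_PointInRect(&p, &button))
                    SDL_Log("button clicked");  // manual hit test
            }
        }
        SDL_SetRenderDrawColor(ren, 30, 30, 30, 255);
        SDL_RenderClear(ren);
        SDL_SetRenderDrawColor(ren, 70, 130, 180, 255);
        SDL_RenderFillRect(ren, &button);  // draw the "button"
        SDL_RenderPresent(ren);
    }
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```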
As you might have guessed by now, rolling your own GUI would be a whole lot more work than writing a Windows GUI program in which the buttons, text fields, etc are already implemented for you, the cursor already blinks, the mouse already clicks, etc.
Also, the code that does the actual calculations should be totally separate from the code that gets the input from the user and puts the answers on the screen, so that code shouldn't factor into whether you want to write a GUI or a console app. The calculation routines shouldn't even be in the same .cpp file as the I/O routines.
Now, some programmers use the term "console" to refer to xterm windows on Linux. These are not the same thing at all, and cannot draw graphics (and "console" is the wrong name for them to boot). But sometimes you see menus and stuff within them, "drawn" with colored text. Usually, these are drawn and managed using the external dialog shell command.

Adding a user interface to an image viewer plugin

I have a general question on how to develop an image viewer plugin with FireBreath. For that, I would like to incorporate a GUI framework like wxWidgets or Qt. The GUI would be used to fire up some dialogs, add a toolbar on top, or open context menus when right-clicking an image.
As far as I understand, I have an HWND handle, so I can draw onto a window. I also understand that there are various events I can react to, like mouse button clicks or keystrokes. But it escapes me how I would add graphical menus, buttons, etc. I know I could use HTML around the window, but that's not the route I'd like to take.
For instance, does it make sense to render a user interface offline (in memory) onto an image and then somehow keep track of the state internally?
Has anyone done such a thing? Or can anyone give me some insight into how to add a user interface?
Assuming you only care about Windows, and that you don't mind using a windowed plugin, which is the easiest option (though no HTML elements can float over the plugin), it should be no different from creating a GUI in any other Windows application.
You are given a window that shows up with the AttachedEvent; when the DetachedEvent fires you need to stop using the window. Many people create a child window inside that parent window and use it for all their actual drawing, which makes it a little easier to plug in one of those other abstractions, but that's basically all there is to it. I don't know specifically how you'd do it with Qt or wxWidgets, but you'd create a child window of the HWND you are given and let the abstraction do its thing from there.
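For example, a bare Win32 sketch of that child-window step; the helper name is made up, and a real toolkit would register its own window class rather than using the stock "STATIC" one:

```cpp
// Create a child window filling the plugin's HWND (from AttachedEvent);
// the child is what you would hand to your GUI toolkit.
#include <windows.h>

HWND createChild(HWND parentHwnd) {
    RECT rc;
    GetClientRect(parentHwnd, &rc);
    return CreateWindowExW(0, L"STATIC", L"", WS_CHILD | WS_VISIBLE,
                           0, 0, rc.right - rc.left, rc.bottom - rc.top,
                           parentHwnd, nullptr,
                           GetModuleHandleW(nullptr), nullptr);
}
```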
As to whether or not it would be rendering things offscreen, etc, I have no idea; that would totally depend on the window system. There is no reason that I know of that you would need to do that, and most things just draw directly to the HWND, but there are a zillion different ways you could do it. It looks to me like what you really need is to understand how drawing in Windows actually works.
I hope that helps.

See if a mouse click was handled

I'm currently making a whimsical iPhone app that will let you turn your Windows cursor into a spaceship controlled by the iPhone (simple rotation and such). I currently have the movement and clicking handled, but I'd like to add extra features, such as bullets that you can shoot around the screen, which will move until they die or hit a button, which will then be clicked. I have two questions:
Question number one: Is there any way to detect whether the mouse is currently over some clickable button? Or is there any way to see whether a mouse event was handled?
Question number two: Is there any way to overlay the screen with small bullets? (Perhaps small 3x3 child windows or something?)
Further Information:
The client program will be in C++.
SDL or SFML will likely be the graphics libs, if any are necessary (the Win32 API should be fine).
The most reliable route would be the Microsoft Active Accessibility interface. Many tools to help visually impaired people need to answer the question "Is this a button?", and MSAA answers that question.
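A sketch of that route, assuming COM has already been initialized: AccessibleObjectFromPoint() returns the accessible element under a screen point, and its role tells you whether it is a push button.

```cpp
// Ask MSAA what is under the cursor and check for the push-button role.
#include <windows.h>
#include <oleacc.h>
#pragma comment(lib, "oleacc.lib")

bool isButtonUnderCursor() {
    POINT pt;
    GetCursorPos(&pt);
    IAccessible *acc = nullptr;
    VARIANT child;
    VariantInit(&child);
    if (FAILED(AccessibleObjectFromPoint(pt, &acc, &child)))
        return false;
    VARIANT role;
    VariantInit(&role);
    bool isButton = SUCCEEDED(acc->get_accRole(child, &role)) &&
                    role.vt == VT_I4 && role.lVal == ROLE_SYSTEM_PUSHBUTTON;
    VariantClear(&role);
    VariantClear(&child);
    acc->Release();
    return isButton;
}
```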
Overlaying the screen is trivial in a Windows environment; just create a window :). It can be partially transparent, so you're not restricted to rectangular bullets.
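For instance, a sketch of one bullet as a tiny layered, click-through window; the "Bullet" class name is an assumption (you'd register your own window class and paint into it):

```cpp
// A topmost, click-through overlay window; pure magenta pixels are treated
// as fully transparent, so the bullet need not be rectangular.
#include <windows.h>

HWND createBulletOverlay(int x, int y) {
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"Bullet", L"", WS_POPUP, x, y, 3, 3,
        nullptr, nullptr, GetModuleHandleW(nullptr), nullptr);
    SetLayeredWindowAttributes(hwnd, RGB(255, 0, 255), 0, LWA_COLORKEY);
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);
    return hwnd;
}
```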

How do I get the window that currently has the cursor on top of it with X11?

How can I retrieve the topmost window that the cursor is currently over in the X11 server?
The window doesn't have to be "active" (selected, open, whatever); it just has to have the cursor floating on top of it.
Thanks in advance.
You can use XQueryPointer() to get the mouse position. Then get a window list using XQueryTree(). XQueryTree() returns the window list in stacking order (bottom to top), so you can walk it from the topmost window down until you find one whose bounding box is under the pointer; XGetWindowAttributes() will give you everything you need to figure out the bounding box. I'm not sure what you would do with shaped windows, though.
I haven't worked with X11 for a few years, so this might be a rather clunky approach, but it should work. I also don't have my O'Reilly X11 books anymore; you'll want to get your hands on book one of that series if you're going to work with low-level X11 stuff. I think the whole series is available for free online these days.
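Putting that together, a minimal Xlib sketch of the approach (top-level windows only; shaped windows are ignored, as noted):

```cpp
// Walk the root's children from topmost to bottommost (XQueryTree returns
// them bottom-to-top) and return the first viewable window whose bounding
// box contains the pointer.
#include <X11/Xlib.h>

Window windowUnderPointer(Display *dpy) {
    Window root = DefaultRootWindow(dpy);
    Window r, child, parent, *children = nullptr;
    int rx, ry, wx, wy;
    unsigned int mask, n = 0;
    XQueryPointer(dpy, root, &r, &child, &rx, &ry, &wx, &wy, &mask);

    XQueryTree(dpy, root, &r, &parent, &children, &n);
    Window found = None;
    for (unsigned int i = n; i-- > 0;) {  // last child is topmost
        XWindowAttributes a;
        if (XGetWindowAttributes(dpy, children[i], &a) &&
            a.map_state == IsViewable &&
            rx >= a.x && rx < a.x + a.width &&
            ry >= a.y && ry < a.y + a.height) {
            found = children[i];
            break;
        }
    }
    if (children)
        XFree(children);
    return found;
}
```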
I haven't programmed X11 for over a decade, so forgive me if I get this wrong.
I believe you can register for mouse movement events on your windows. If you handle such an event by storing the window handle in some variable or other, and then consume the event so it doesn't percolate down the tree, then at the time you want to identify the window you can just query the variable.
However, this will only work when the mouse is over a window you have registered a suitable event handler for, so you won't know about windows belonging to other applications - unless there is a way to register for events on other people's windows, which may be possible.
The advantage over the other answer is that you don't have to traverse the whole tree. The disadvantage is that you need to handle a great many mouse movement events, and it may not work to find other people's windows.
I believe there may also be mouse-enter and mouse-leave events, which would reduce the amount of processing required.
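A small sketch of that variant, assuming you can call XSelectInput() on each window you care about:

```cpp
// Track the window under the pointer via crossing events instead of
// polling: EnterNotify records the window, LeaveNotify clears it.
#include <X11/Xlib.h>

Window lastEntered = None;

void watchWindow(Display *dpy, Window w) {
    XSelectInput(dpy, w, EnterWindowMask | LeaveWindowMask);
}

void handleEvent(const XEvent &ev) {
    if (ev.type == EnterNotify)
        lastEntered = ev.xcrossing.window;
    else if (ev.type == LeaveNotify && lastEntered == ev.xcrossing.window)
        lastEntered = None;
}
```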