Drag and drop a toolbox like Photoshop - C++

I'm developing a multi-platform application (Windows, Mac, and Linux) and I need your help to get the same drag-and-drop effect as Photoshop in Qt; that is, how can I move a toolbox and drop it on a corner of my main window? I do not need QToolBar; I want my own implementation.
Photoshop effect example

I've done similar work in other languages and frameworks. If you want to roll your own from scratch, my #1 advice is to break the problem up into smaller parts and take it on progressively:
1. Write or import a good drag-and-drop handler.
2. Handle dragging around to position a free-floating "toolbox" (no docking yet).
3. Create logic to detect dropping onto "hot zones" (areas of a set width at the edges of the screen) to initiate a dock.
4. Handle the behavior of a panel in the "docked" state (locked to the left x coordinate, etc.) - a rough sketch of steps 2-4 follows this list.
5. Jazz it up with drawing cues to indicate to the user where a panel will dock as it is dragged.
6. Keep building the logic for nested panels, tabbed panels, etc. Try to tackle it in an object-oriented way to keep the code clean and tight and promote reuse.
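As a rough, hedged sketch of steps 2-4 in Qt (the class name, 40px zone width, and left-dock-only behavior are invented for illustration, not a definitive implementation):

    // Free-floating toolbox that snaps to a "hot zone" on the main
    // window's left edge when dropped there. Hypothetical sketch.
    #include <QtWidgets>

    class FloatingToolbox : public QWidget {
    public:
        explicit FloatingToolbox(QWidget *mainWindow)
            : QWidget(mainWindow, Qt::Tool | Qt::FramelessWindowHint),
              m_mainWindow(mainWindow) {}

    protected:
        void mousePressEvent(QMouseEvent *e) override {
            m_dragOffset = e->globalPos() - frameGeometry().topLeft();
        }
        void mouseMoveEvent(QMouseEvent *e) override {
            move(e->globalPos() - m_dragOffset);       // step 2: free drag
        }
        void mouseReleaseEvent(QMouseEvent *) override {
            // Step 3: hot zone = strip of set width at the main window's left edge.
            QRect zone = m_mainWindow->geometry();
            zone.setWidth(40);
            if (zone.intersects(frameGeometry()))
                dockLeft();
        }

    private:
        void dockLeft() {
            // Step 4: docked state - lock x to the main window's left edge
            // and stretch to its full height.
            const QRect g = m_mainWindow->geometry();
            move(g.left(), g.top());
            resize(width(), g.height());
        }

        QWidget *m_mainWindow;
        QPoint m_dragOffset;
    };

Treating "docked" as a state of the toolbox (rather than a reparenting problem) keeps the drag logic reusable when you get to nested and tabbed panels.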
You may want to start by studying some other implementations, regardless of language, to get some ideas and see how they set up the logic:
A Qt4 docking framework
A JS docking framework
A C# docking framework
Good luck!

Related

Are VCL ListView, ListBox, and DBGrid controls fully touch-aware?

Since we now have tablets with Windows 10, I have decided to use Delphi XE7 and VCL again to develop for these multitouch devices.
I have found that ListView, ListBox, and DBGrid do not seem to have standard pan-and-scroll behavior (just PanUp & PanDown, ScrollUp & ScrollDown). DBGrid does not support touch panning at all. ListBox does not seem to handle inertial panning like TListView... and ListView reacts erratically, sometimes "losing" pans, moving the scrollbar but not the item list.
Has anyone tested these controls on Windows 8.1 or Windows 10 using a multitouch tablet? Just load the components with, say, 100 items and try to get simple, smooth vertical scrolling/panning using your fingers.
Altogether it is rather frustrating, and I cannot focus on developing my application, which is my actual task.
The question is: which is the right component, or the right way, to get panning (at least vertical panning/scrolling) working smoothly and reliably on touch screens? I thought these components should react to standard actions (like PanUp or PanDown) without needing to implement the Gesture Manager and handle each touch on screen one by one. I would appreciate your feedback. Thank you.
Conclusion: Many thanks to all who have helped with their comments. My own conclusion is that Delphi is not ready to be used as a RAD tool for touch screens. The touch implementation is poor and needs too much work for very standard use. It should not be necessary to reinvent the wheel for very common, standard controls. There are now more mobile device users than desktop users; perhaps Embarcadero should pay attention to this matter and provide well-finished tools that match the OS's touch look and feel.
Let me add that the same thing in FMX using TGrid works fine.

How are opengl menus that go outside of the window implemented?

I was looking at how, sometimes when you right-click, the menu goes outside of the window.
Is this implemented with a separate window? If so, how can I get this functionality? I am trying to use GLFW, but I understand if it isn't possible.
Currently I am on Windows, but I like keeping my options open, which is why GLFW would be preferable.
I noticed that GLUT has such a feature. If you are confused as to what I am looking at, then look at that.
Thanks for any help!
Overlapping menus (in MS Windows) have to be implemented as a new top-level window; you would have a new OpenGL rendering context and draw the menu in that space. Yes, it's a fair bit of work, all for the edge case of a menu overspilling the parent window.
However, this isn't often a problem in OpenGL programming: if you're working on a full-screen game then the menu will always be displayed within the main window, and even if it isn't a full-screen game your users really won't notice, as games tend to use different UI concepts like radial menus which wouldn't overspill the parent window.
Or if you're working on a non-game title, chances are it isn't full-screen and is going to be an OpenGL rendering area within a larger application that is rendered using a native UI toolkit (e.g. 3ds Max, AutoCAD, etc), in which case no problem: just use native menus.
You can, of course, use native menus in an OpenGL application anyway, provided you do the necessary plumbing for native window messages.
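If you do want the separate-window route with GLFW, a minimal sketch might look like the following (hedged: openMenuWindow is a hypothetical helper, and you would still have to render the menu contents and dismiss the window yourself):

    // Pop up a borderless top-level window at the cursor's screen position
    // so a context menu can extend past the main window. Assumes GLFW 3.1+.
    #include <GLFW/glfw3.h>

    GLFWwindow *openMenuWindow(GLFWwindow *mainWindow, int menuW, int menuH)
    {
        // Cursor position relative to the main window's client area...
        double cx, cy;
        glfwGetCursorPos(mainWindow, &cx, &cy);

        // ...plus the main window's screen position gives screen coordinates.
        int wx, wy;
        glfwGetWindowPos(mainWindow, &wx, &wy);

        glfwWindowHint(GLFW_DECORATED, GLFW_FALSE);  // no title bar or border
        glfwWindowHint(GLFW_FLOATING, GLFW_TRUE);    // stay above the main window

        // Share the main window's GL context so menu rendering can reuse
        // textures, fonts, etc.
        GLFWwindow *menu = glfwCreateWindow(menuW, menuH, "menu", NULL, mainWindow);
        glfwSetWindowPos(menu, wx + (int)cx, wy + (int)cy);
        return menu;   // caller renders the menu into this window's context
    }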

Producing buttons with Direct2D and DirectWrite (C++, DirectX)

I've been looking around for a while for how to produce buttons using Direct2D and DirectWrite, with no luck. I'm comfortable with shapes, text, and that jazz. However, it suddenly occurred to me that I might be looking at it in the wrong way.
Take the sentence:
you draw your controls and content for your app using the Direct2D and
DirectWrite APIs, handling all the input events directly.
I'm now thinking this means that, instead of being able to quickly produce a fully functional button as I would using XAML, I would draw the button, manually check the location of the mouse on click, test whether it's within the button's boundaries, and then handle the event? And a similar method for hovering, without the click?
Is this the kind of method required when using Direct2D and DirectWrite?
I haven't any experience with DirectX, but in OpenGL I build my buttons from scratch. Assuming you have animated sprites implemented, your buttons are essentially sprites that play certain animations in response to being clicked, hovered over, etc., and which you can register callbacks with. In my 2D engine, I have a class called UiButton, which inherits Sprite, and listens for various UI events. It gets more complicated when you want to handle keyboard navigation (arrow keys + enter to select) as you have to think about how the buttons are connected and which of them has focus at any given moment.
Here is my implementation for reference:
Headers: https://github.com/RobJinman/dodge/tree/master/Dodge/include/dodge/ui
Source: https://github.com/RobJinman/dodge/tree/master/Dodge/src/ui
If you're not prepared to roll your own, Googling "direct2d gui framework" seems to bring up some promising results.
Sorry I can't be of more help.
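In case it helps anyway, here is a hedged C++ sketch of the pattern described above (the Sprite base and animation names are placeholders, not the actual Dodge API):

    #include <functional>
    #include <string>

    // Placeholder base: a real engine would track textures, frames, etc.
    class Sprite {
    public:
        virtual ~Sprite() = default;
        void playAnimation(const std::string &name) { m_current = name; }
        bool containsPoint(float px, float py) const {
            return px >= m_x && px <= m_x + m_w && py >= m_y && py <= m_y + m_h;
        }
    protected:
        float m_x = 0, m_y = 0, m_w = 0, m_h = 0;  // bounds in screen space
        std::string m_current;
    };

    // A button is a sprite that swaps animations on UI events and
    // fires a registered callback when clicked.
    class UiButton : public Sprite {
    public:
        void setOnClick(std::function<void()> cb) { m_onClick = std::move(cb); }

        void onMouseMove(float x, float y) {
            playAnimation(containsPoint(x, y) ? "hover" : "idle");
        }
        void onMouseDown(float x, float y) {
            if (containsPoint(x, y)) {
                playAnimation("pressed");
                if (m_onClick) m_onClick();
            }
        }

    private:
        std::function<void()> m_onClick;
    };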
Yes, to draw a UI button with Direct2D, you need to handle everything yourself. Why? Direct2D is a 2D graphics API, not a controls library. You need to draw your button's layout and handle its messages (such as click, mouse hover, and so on). You lose a lot of convenience and it's time-consuming, but the most important thing is: you control it yourself!
Direct2D is a graphics library; UI controls such as text selection, text boxes, and buttons are not part of it. However, the benefit of using Direct2D and DirectWrite is that we can implement our own UI controls and have full control over them.
Please also see ID2D1Geometry::FillContainsPoint() for hit-testing tasks.
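To make that concrete, here is a hedged sketch of manual hit-testing in a Win32 window procedure (assuming the button is drawn from a D2D1_RECT_F in the same coordinate space as the mouse; with DPI scaling you would first convert pixels to DIPs, and for non-rectangular shapes you would call FillContainsPoint() on the geometry instead):

    #include <d2d1.h>
    #include <windows.h>
    #include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

    static D2D1_RECT_F g_buttonRect = D2D1::RectF(50.f, 50.f, 200.f, 90.f);
    static bool g_hovered = false;

    static bool HitTest(const D2D1_RECT_F &r, float x, float y)
    {
        return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        const float x = (float)GET_X_LPARAM(lp);
        const float y = (float)GET_Y_LPARAM(lp);

        switch (msg) {
        case WM_MOUSEMOVE:
            // Hover: repaint (e.g. with a highlight brush) when it changes.
            if (HitTest(g_buttonRect, x, y) != g_hovered) {
                g_hovered = !g_hovered;
                InvalidateRect(hwnd, NULL, FALSE);
            }
            return 0;
        case WM_LBUTTONDOWN:
            if (HitTest(g_buttonRect, x, y))
                MessageBeep(MB_OK);   // stand-in for a real click handler
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }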

Adding a user interface to an image viewer plugin

I have a general question on how to develop an image viewer plugin with FireBreath. For that, I would like to incorporate a GUI framework like wxWidgets or Qt. The GUI would be used to fire up some dialogs, to add a toolbar on top, or to open context menus when right-clicking an image.
As far as I understand, I have an HWND handle and so I can draw onto a window. I also understand that there are various events I can react to, like mouse button clicks or keyboard strokes. But it escapes me how I would add graphical menus, buttons, etc. I know I could use HTML around the window, but that's not the route I'd like to take.
For instance, does it make sense to render a user interface offline (in memory) onto an image and then somehow keep track of the state internally?
Has anyone done such a thing? Or can anyone give me some insight on how to go about adding a user interface?
Assuming you only care about Windows, and assuming that you don't mind using a windowed plugin, which is the easiest (but no HTML elements can float over the plugin), it should be no different from creating a GUI in any other Windows application.
You are given a window that shows up with the AttachedEvent; when DetachedEvent is fired, you need to stop using the window. Many people create a child window inside that parent window and use it for all their actual code, which makes it a little easier to use one of those other abstractions, but that's basically all there is to it. I don't know specifically how you'd do it with Qt or wxWidgets, but you'd create a child window of the HWND you are given and have the abstraction do its thing for you.
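Hedged sketch of that child-window approach on Windows (FireBreath's event-map macros are omitted; m_child is a hypothetical member, and I'm assuming FB::PluginWindowWin exposes the HWND via getHWND() as in the FireBreath examples):

    #include "PluginWindowWin.h"   // FireBreath's Windows plugin window

    bool MyPlugin::onWindowAttached(FB::AttachedEvent *evt, FB::PluginWindow *wnd)
    {
        // On Windows, the PluginWindow wraps the HWND the browser gave us.
        FB::PluginWindowWin *win = static_cast<FB::PluginWindowWin*>(wnd);
        HWND parent = win->getHWND();

        // Create a child window filling the plugin area; your GUI toolkit
        // (Qt, wxWidgets, ...) then takes over drawing and input for it.
        m_child = CreateWindowEx(0, L"STATIC", L"", WS_CHILD | WS_VISIBLE,
                                 0, 0, win->getWindowWidth(), win->getWindowHeight(),
                                 parent, NULL, NULL, NULL);
        return true;
    }

    bool MyPlugin::onWindowDetached(FB::DetachedEvent *evt, FB::PluginWindow *wnd)
    {
        // The browser is reclaiming the window: stop using it immediately.
        if (m_child) { DestroyWindow(m_child); m_child = NULL; }
        return true;
    }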
As to whether or not it would be rendering things offscreen, etc., I have no idea; that depends entirely on the windowing system. There is no reason I know of that you would need to do that, and most things just draw directly to the HWND, but there are a zillion different ways you could do it. It looks to me like what you really need is to understand how drawing in Windows actually works.
I hope that helps

Qt - Catch events normally handled by the Window Manager

I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app, the user interacts within the 3D viewports primarily through the mouse and the three keyboard modifiers (alt, shift, ctrl). Shift and control modify the currently selected tool options, and alt operates the camera - much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt + right-button mouse drag events and interprets them as a window-drag command, even when not holding the title bar and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to just my one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint on the window, then the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
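For completeness, setting the hint is a one-liner; note that changing window flags hides the widget, so it must be shown again:

    // Sketch: opt this one window out of window-manager handling entirely.
    // As noted above, this also removes decoration, moving, and resizing.
    widget->setWindowFlags(widget->windowFlags() | Qt::X11BypassWindowManagerHint);
    widget->show();   // setWindowFlags() hides the window; re-show it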
Another way: if your users are only on one or two varieties of Linux, add something to the installer which asks users whether they want to adjust the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or equivalent).