There are a lot of articles about customizing the main menu of a dialog window, including "custom-drawn" menus. But none of them seem to answer my question.
I have a (borderless) window with a title bar that is custom-drawn by hand:
I need to somehow display a main menu under this "fat" title bar. How can I do this using MFC? By default, "native" menus seem to be able to sit only at the top of the client area of the dialog window (or am I wrong here?). Is there any solution to my problem?
If someone could give me some links related to my issue, I'd greatly appreciate it! I've seen a lot of products that implement this (Ontrack comes to mind, for example), but I've never found any explanation of how it was achieved.
Thank you!
I need to display somehow a main menu under this "fat" title bar.
That's precisely where it is being drawn, according to the image you posted.
By default "native" menus seem to be able to be located only in the top of the client area of the dialog window (or am I wrong here?).
No, that's correct. The menu will be automatically drawn where menu bars belong: at the top of the window, just below the title bar.
You've decided to mess up the defaults and throw the usability of your application to hell by reimplementing the non-client area. That means you can't rely on Windows to draw these elements for you. Instead, you'll need to take responsibility for drawing all of these things yourself, which means writing the code to do so. I don't know what "Ontrack" is, but any application that does this is owner-drawing its menus.
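To give an idea of what that entails, here is a minimal sketch of the owner-draw plumbing in raw Win32 terms. The identifiers IDM_FILE and pItemData and the fixed sizes are placeholders, not anything from a real project; note also that owner-draw only changes how items are painted, not where the menu bar sits.

// 1. Mark each item as owner-drawn (pItemData is app-defined item data):
ModifyMenu(hMenu, 0, MF_BYPOSITION | MF_OWNERDRAW, IDM_FILE, (LPCTSTR)pItemData);

// 2. In the window procedure, tell Windows how big each item is:
case WM_MEASUREITEM:
{
    MEASUREITEMSTRUCT* mis = (MEASUREITEMSTRUCT*)lParam;
    if (mis->CtlType == ODT_MENU)
    {
        mis->itemWidth  = 60;   // placeholder sizes
        mis->itemHeight = 24;
        return TRUE;
    }
    break;
}

// 3. Paint each item yourself:
case WM_DRAWITEM:
{
    DRAWITEMSTRUCT* dis = (DRAWITEMSTRUCT*)lParam;
    if (dis->CtlType == ODT_MENU)
    {
        FillRect(dis->hDC, &dis->rcItem, (HBRUSH)GetStockObject(WHITE_BRUSH));
        DrawText(dis->hDC, TEXT("File"), -1, &dis->rcItem,
                 DT_CENTER | DT_VCENTER | DT_SINGLELINE);
        return TRUE;
    }
    break;
}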
Another popular option (used by Internet Explorer for a while) is to create your own menu-like object using a rebar control. This has the advantage of integrating into an existing toolbar control and allowing the user to rearrange items as desired. It, like writing your own menu control, has the disadvantage of not conforming to standard platform conventions and user expectations (although it's probably a lot better than whatever you can come up with yourself). There's a how-to article here on MSDN.
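As a rough sketch of that rebar approach (hParentWnd, hInstance, and hToolbar are placeholders for windows and handles you have already created; the toolbar is what actually looks like a menu):

// Host a menu-like toolbar as a rebar band.
INITCOMMONCONTROLSEX icc = { sizeof(icc), ICC_COOL_CLASSES | ICC_BAR_CLASSES };
InitCommonControlsEx(&icc);

HWND hRebar = CreateWindowEx(WS_EX_TOOLWINDOW, REBARCLASSNAME, NULL,
    WS_CHILD | WS_VISIBLE | WS_CLIPCHILDREN | WS_CLIPSIBLINGS |
    RBS_BANDBORDERS | CCS_NODIVIDER,
    0, 0, 0, 0, hParentWnd, NULL, hInstance, NULL);

REBARBANDINFO rbi = { 0 };
rbi.cbSize     = sizeof(rbi);
rbi.fMask      = RBBIM_CHILD | RBBIM_CHILDSIZE | RBBIM_STYLE;
rbi.fStyle     = RBBS_GRIPPERALWAYS;
rbi.hwndChild  = hToolbar;     // the menu-like toolbar band
rbi.cxMinChild = 0;
rbi.cyMinChild = 24;           // placeholder band height
SendMessage(hRebar, RB_INSERTBAND, (WPARAM)-1, (LPARAM)&rbi);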
I suspect that in undertaking this project you've bitten off more than you can chew. Keep in mind that there's hardly ever (if ever) a reason to draw your own title bar. As you're seeing, conforming to your platform's standard expectations is often easier on the programmer and far better received by your users.
In our application, we have a variable number of dockwidgets because some of them are added by plugins that are loaded at runtime. Not all dockwidgets necessarily need to be visible at the same time. This depends strongly on what the user is working on and which plugins are active.
However, if too many dockwidgets are added programmatically with addDockWidget(...), they start to overlap each other (not in terms of tabs, but in terms of content of one being painted on the area of a different one, which obviously looks broken).
The user can move the dockwidgets to dock areas that still have space left, but the layout/main window successfully prevents (untabbed) re-addition to the "crowded" dock area.
We do allow tabbed docks so that the user can arrange the dockwidgets as required, but we don't want to enable QMainWindow::ForceTabbedDocks, since this would constrain the number of simultaneously visible dockwidgets too much (one per dock area).
How can I prevent this or better control how dockwidgets are added?
Not answering your question directly, but it might be worthwhile to forget about Qt for a moment and think about how the whole interaction should work. What are the user's expectations? What should actually happen if 10 different plugins become active? Should they be docked, should they be floating, or should they become pin-able docking windows whose initial state is a small button on the MainWindow edges? I think once you do that groundwork and come up with user interface mock-ups, you can start looking at Qt and figure out whether it provides a direct way to build that interface, and if not, what additional components you will need to develop to get it working.
From my own experience: I developed a similar interface a while back, but in MFC. The way we did it was that some of the docked windows were deemed must-haves, and they always came up docked. Then there was a set of windows that didn't need to be visible at all times but had to be quickly available; their initial state was a hidden, pin-able dock window, which meant they came up as buttons on the MainWindow edge. Finally, there was a third set that the user didn't always need and that could be called in from the File > View menu. Once the user made such a window visible, they would typically assign it to one of the first two groups or keep it floating. This whole configuration was saved in a config file, and from then on, whenever the plugin was loaded or became active, the last used state of the associated docking window was restored. It involved quite a bit of extra work, but the end result was to the satisfaction of all users.
Have you tried setDockOptions(QMainWindow::AllowNestedDocks)? I can't test it right now, but it may help.
By default, QMainWindow::dockOptions is set to AnimatedDocks | AllowTabbedDocks, so you would want something like
setDockOptions(QMainWindow::AllowNestedDocks | QMainWindow::AnimatedDocks | QMainWindow::AllowTabbedDocks);
EDIT:
If you are having too many problems, you may be going about this the wrong way. Instead of using docks, you may want to try a QMdiArea with QMdiSubWindow. This may not work for your program, but it's something to think about.
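A minimal sketch of that alternative, assuming you are inside a QMainWindow subclass (the widget and title are placeholders for a plugin's actual view):

// Each plugin view becomes an MDI sub-window instead of a dock.
QMdiArea* mdiArea = new QMdiArea(this);
setCentralWidget(mdiArea);

QWidget* pluginView = new QWidget;   // placeholder for a plugin's widget
QMdiSubWindow* sub = mdiArea->addSubWindow(pluginView);
sub->setWindowTitle("Plugin view");
sub->show();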
This is the solution I tried:
In Qt Creator, I created an empty project with a window, a minimalistic menu item labelled "New Dock", and a DockWidget named dockWidget.
This is the triggered() handler for my menu item:
void MainWindow::on_actionNew_Dock_triggered()
{
    // Parent the new dock to the main window; addDockWidget() manages it from here.
    QDockWidget* w = new QDockWidget("Demo", this);
    this->addDockWidget(Qt::LeftDockWidgetArea, w);
    // Stack the new dock on top of the existing one as a tab.
    this->tabifyDockWidget(ui->dockWidget, w);
}
tabifyDockWidget(QDockWidget* first, QDockWidget* second) is a QMainWindow method that stacks the second dockwidget upon the first one. Hope it helps...
I've always been interested in how to accomplish this (hiding/showing the main menu using the Alt key), and nowadays some applications do this quite often. One that really pleases me is Visual Studio 2010 with this plugin:
http://visualstudiogallery.msdn.microsoft.com/bdbcffca-32a6-4034-8e89-c31b86ad4813?SRC=VSIDE
(Firefox also does this, but I think in a different way.)
Can anyone explain to me how this can be achieved? If you know of any sample project that demonstrates it, please tell me.
(From what I can see in some replies here on Stack Overflow, we have to destroy the menu when hiding it and recreate it when showing it?! That seems like a rather bad solution...)
Thanks
The SetMenu function lets you add/remove the menu from the window. It does not destroy the menu.
Note that most applications which have the dynamic menu hide/show behavior are not really showing a menu. They're showing a custom control that looks like a menu.
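A minimal sketch of the SetMenu approach, assuming a plain Win32 window procedure; detecting a lone Alt press via SC_KEYMENU is one common way to do it, and hMainMenu is assumed to be the HMENU you saved when the window was created:

// A lone Alt press arrives as WM_SYSCOMMAND/SC_KEYMENU with lParam == 0.
case WM_SYSCOMMAND:
    if (wParam == SC_KEYMENU && lParam == 0)
    {
        if (GetMenu(hwnd) == NULL)
            SetMenu(hwnd, hMainMenu);   // show: re-attach the saved menu
        else
            SetMenu(hwnd, NULL);        // hide: detach, but don't destroy
        return 0;                       // swallow the default menu activation
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);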
You might also take a look at MFC's support for auto-hiding menus. I used this technique and it worked really well.
In CMainFrame::OnCreate I did
m_wndMenuBar.ShowWindow(SW_HIDE);
which actually works fine in our project
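For context, a sketch of where that call would sit, assuming an MFC Feature Pack frame with a CMFCMenuBar member named m_wndMenuBar (as in the wizard-generated code):

int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CFrameWndEx::OnCreate(lpCreateStruct) == -1)
        return -1;

    m_wndMenuBar.Create(this);
    DockPane(&m_wndMenuBar);

    // Start with the menu bar hidden; it can be shown again later.
    m_wndMenuBar.ShowWindow(SW_HIDE);
    return 0;
}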
I stumbled across a related pitfall that will show a hidden main menu without your consent:
Whenever the focus of a child window in an MDI application changes (e.g. due to right-clicking in it), CMDIChildWnd::OnMDIActivate is called, which in turn shows the main menu of the MDI application (even if it was removed or destroyed previously).
Basically, this works by re-attaching the saved main menu from the underlying CMDIChildWnd's m_hMenuShared variable.
A quick & dirty hack to prevent this is to set m_hMenuShared to NULL for all child frames (it's protected in CMDIChildWnd, so this needs a custom class derived from CMDIChildWnd).
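A sketch of that hack, under the assumption that all your child frames use a derived class like this (the class name is made up):

// A child frame that clears the shared menu handle so that
// OnMDIActivate has nothing to re-attach.
class CNoMenuChildFrame : public CMDIChildWnd
{
public:
    void ClearSharedMenu()
    {
        m_hMenuShared = NULL;   // protected in CMDIChildWnd, visible here
    }
};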
I am trying to create a tab control that has the tab buttons aligned from right to left, in Win32/C++. The WS_EX_LAYOUTRTL flag doesn't help me, as it mirrors the drawing completely, both for the tab items and the tab page contents. The application itself handles the mirroring automatically (it's a cross-platform UI solution), which is also a reason for us not to use the WS_EX_LAYOUTRTL flag (we have mirroring implemented in a generic way for all UI frameworks/platforms).
One solution would be to override TCM_GETITEMRECT and TCM_HITTEST in the subclassed tab control's window procedure. This enables me to move the buttons all right, but the mouse events still act on the positions where the control "thinks" the buttons really are (i.e. a mouseover on the first button invalidates the leftmost button; the coordinates are not mirrored).
So that seems to be a dead end for me.
Another possibility would be to insert padding before the first tab button, to push them all to the right edge. I haven't been able to figure out how to do that, though. Visual Studio sports this little dialog:
How did they put the buttons in front of the first tab page? Knowing this would enable me to solve this problem.
Update, solution:
The solution to my problem is to use the built-in RTL support. For this to work, the tab control must have both the WS_EX_LAYOUTRTL and WS_EX_NOINHERITLAYOUT flags. That preserves the function of all existing drawing code while only the tab buttons are mirrored. I didn't realize that the WS_EX_NOINHERITLAYOUT flag goes on the parent (the tab control), which is why I was looking for the workaround originally described.
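In code, that amounts to something like this (a sketch; hTab is assumed to be the tab control's window handle):

// Mirror the tab control itself, but stop the RTL layout from being
// inherited by the tab pages (the control is their parent).
LONG_PTR exStyle = GetWindowLongPtr(hTab, GWL_EXSTYLE);
SetWindowLongPtr(hTab, GWL_EXSTYLE,
                 exStyle | WS_EX_LAYOUTRTL | WS_EX_NOINHERITLAYOUT);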
For reference, I am still curious to have an answer to the original question, though.
If you take a look with a spy application, you will see that it is not actually a normal Windows tab control but a custom thing, and the drawing is done by the parent window AFAIK:
Both Visual Studio and Office use a lot of custom controls, some of the features make their way into the common controls after a few years, some features stay private...
I know this type of thing is looked negatively upon, but I write software for people with disabilities, and sometimes good GUI practices don't make sense there. In this case, the user interacts with an assistive interface, and under certain conditions my control app needs to prompt the user with a question. My background process creates a dialog (I'm using the wxWidgets wxDialog class) and calls Show(). The dialog box appears, but it does not have focus (the application the user was previously using keeps it). Since my users can't use mice, they can't simply click on the window. I've tried calling Show() followed by SetFocus(HWND), but that doesn't do it. What's the problem? Is this even possible? Windows 7. I'm thinking it might have something to do with it being a dialog and not a full window (wxFrame). Any help is greatly appreciated.
Try using SetWindowPos(hWnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOSIZE|SWP_NOMOVE)
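From wxWidgets, that might look like this (a sketch; casting GetHandle() is the usual way to get at the native HWND on Windows):

// Show the dialog, then pin it above normal windows without moving
// or resizing it.
dialog->Show();
::SetWindowPos((HWND)dialog->GetHandle(), HWND_TOPMOST,
               0, 0, 0, 0, SWP_NOSIZE | SWP_NOMOVE);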
Unfortunately, not only is it "looked negatively upon", it is not possible. There's no getting around this; ask yourself what would happen if every application could do this. Obviously, if you can put your dialog on top of the other application, it can do exactly the same back to you.
http://blogs.msdn.com/b/oldnewthing/archive/2010/11/25/10096329.aspx
The only thing I can think of would be to put a notification icon in the system tray and have it display a notification balloon.
I had to do something like this before. Simply calling functions like SetForegroundWindow or SetWindowPos didn't do the trick.
I ended up using this ForceForegroundWindow function (1st one) and it works pretty well.
I know this is Delphi code, but the API is the same and Delphi is a pretty simple language.
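For readers who don't want to parse Delphi: the usual trick in such a function is the AttachThreadInput dance, which in C++ looks roughly like this (a sketch of the common technique, not a line-by-line translation of the linked code):

#include <windows.h>

// Temporarily attach our input queue to the foreground thread's, so
// SetForegroundWindow is allowed to succeed.
bool ForceForegroundWindow(HWND hwnd)
{
    DWORD foreThread = GetWindowThreadProcessId(GetForegroundWindow(), nullptr);
    DWORD ourThread  = GetCurrentThreadId();

    if (foreThread != ourThread)
    {
        AttachThreadInput(foreThread, ourThread, TRUE);
        BringWindowToTop(hwnd);
        SetForegroundWindow(hwnd);
        AttachThreadInput(foreThread, ourThread, FALSE);
    }
    else
    {
        SetForegroundWindow(hwnd);
    }
    return GetForegroundWindow() == hwnd;
}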
I want to create a button (in C++) on another window, the Start > Run dialog for instance, but when I do, I never receive its click event. How can I get it?
I'm sorry, I see that you didn't get the question.
OK, basically: I create a button with CreateWindowEx() and then put it on a different window with SetParent(), which I have already done. Now the button does not work, so I need my program to somehow find out when it is clicked, with the Run window as an example.
And yes, you've got it. The problem is not making the button; it's detecting from my program when it's clicked, since the button does not belong to it anymore.
You need to apply the ancient but still-supported technique known in Windows as subclassing; it is well explained here (a 15-year-old article, but still quite valid ;-). As this article puts it,
Subclassing is a technique that allows an application to intercept messages destined for another window. An application can augment, monitor, or modify the default behavior of a window by intercepting messages meant for another window.
You'll want "instance subclassing", since you're interested only in a single window (either your new button, or, the one you've SetParented your new button to); if you decide to subclass a window belonging to another process, you'll also need to use the injection techniques explained in the article, such as, injecting your DLL into system processes and watching over events with a WH_CBT hook, and the like. But I think you could keep the button in your own process even though you're SetParenting it to a window belonging to a different process, in which case you can do your instance subclassing entirely within your own process, which is much simpler if feasible.
"Superclassing" is an alternative to "subclassing", also explained in the article, but doesn't seem to offer that many advantages when compared to instance subclassing (though it may compared with global subclassing... but, that's not what you need here anyway).
You'll find other interesting articles on such topics here, here, and here (developing a big, rich C++ library for subclassing, but also showing a simpler approach based on hooks which you might prefer). Each article has a pretty different stance and different code examples, so I think that having many to look at may help you find the right mindset and code for your specific needs!
OK, I'll do my very best. As I understand it, you're trying to inject a button into some existing window. That is: your tool creates a button in some window that does not belong to your application. Now you want to be notified when that button is pressed. Am I correct so far?
To be notified about the button being pressed, you need to get the respective window message, which will only work if you also "inject" a different WndProc into the window. Actually I have no idea how that should work, but I faintly remember functions like GetWindowLong and SetWindowLong. Maybe they will help?
EDIT
I've searched MSDN a little: while you can get the address of a window's WndProc using GetWindowLong, you cannot use SetWindowLong to set the WndProc of a window created by another process on Windows NT/2000/XP (and up, I suppose). See here (MSDN).
So what you could do is install a global message hook that intercepts all window messages, filter those for the window you've injected the button into, and then find your message. If you have trouble with this, however, I'm the wrong person to ask, because it's been years since I've done anything like that; it would be stuff for a new question.
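For completeness, a sketch of what installing such a hook looks like. The hook procedure must live in a DLL so Windows can inject it into the target process, and sharing g_targetParent across processes (e.g. via a shared data segment) is left out for brevity:

#include <windows.h>

HHOOK g_hook;          // set by the installing application
HWND  g_targetParent;  // the foreign window we injected the button into

// WH_CALLWNDPROC sees messages *sent* to windows of the hooked threads.
LRESULT CALLBACK CallWndHook(int code, WPARAM wp, LPARAM lp)
{
    if (code == HC_ACTION)
    {
        const CWPSTRUCT* cwp = (const CWPSTRUCT*)lp;
        if (cwp->hwnd == g_targetParent && cwp->message == WM_COMMAND)
        {
            // A child control notified the target window; check the
            // control ID in LOWORD(cwp->wParam) for our button here.
        }
    }
    return CallNextHookEx(g_hook, code, wp, lp);
}

// Installed by the host application, hDll being the DLL's module handle:
// g_hook = SetWindowsHookEx(WH_CALLWNDPROC, CallWndHook, hDll, 0);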
EDIT 2
Please see Alex Martelli's answer for how to define the hook. I think he's describing the technique I was referring to when I talked about defining global message hooks to intercept the window messages for the window you injected your button into.