I am attempting to hook up Cocoa events to a minimal Objective-C++ Cocoa message proxy that will call my C++ functions to handle the event.
For a button's OnClick, it's simple; I go:
@interface cocoa_proxy : NSObject
- (void)action:(id)sender;
@end

@implementation cocoa_proxy
- (void)action:(id)sender
{
    exit(0);
}
@end
cocoa_proxy* proxy = [[cocoa_proxy alloc] init];
[button setTarget:proxy];
[button setAction:@selector(action:)];
However, I am unsure how to capture other events for the button (such as OnPress, OnRelease, OnEnter, OnExit, etc.), nor can I seem to capture events for a Window or View.
Any attempt at a similar route, with slightly differently formatted methods (taken from various similar, but not quite identical, questions on the web), results in an error like:
reason: '-[NSWindow setTarget:]: unrecognized selector sent to instance xxxxx
I already have a large codebase that I want to reuse, so it is paramount that I have a proxy, rather than just "do everything in Objective-C++".
For completeness, here is how I create the Window
window = [[[NSWindow alloc] initWithContentRect:NSMakeRect(x, y, w, h)
                                      styleMask:NSTitledWindowMask
                                        backing:NSBackingStoreBuffered
                                          defer:NO] autorelease];
To be clear, I do not use Xcode, or Interface Builder, or any GUI-creation software or Cocoa-aware software, or utilize concepts such as NIBs; everything is done procedurally and with minimal interaction with Objective-C++.
So my question is: how do I handle different events for a Button, and how do I handle events (at all) in a similar manner for Windows and Views?
The action method fired by a button when it is clicked (a complete press-and-release sequence) is not an event as such; it's the consequence of a series of events.
It's not entirely clear at what level you need to operate, but you may need to use a custom subclass of NSApplication and override -sendEvent:. There you will get the low-level events like left-mouse-button-down, left-mouse-button-up, key-down, key-up, etc.
If you are only interested in a single window, you could instead use a custom subclass of NSWindow and override -sendEvent: there.
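Something like this minimal sketch, for instance (my own illustration, not code from the question; handle_window_event is an assumed entry point into your existing C++ code):

// Forward each raw event to a C++ handler, then call through to super so
// Cocoa still performs its normal processing (buttons, close box, etc.).
void handle_window_event(NSEvent* event); // assumed C++ function

@interface event_window : NSWindow
@end

@implementation event_window
- (void)sendEvent:(NSEvent*)event
{
    handle_window_event(event); // inspect [event type] for mouse-down/up, key-down/up, ...
    [super sendEvent:event];    // keep normal Cocoa behaviour
}
@end

You would then allocate event_window instead of NSWindow in the initWithContentRect:... call shown in the question.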
If you don't allow events to be processed through Cocoa in the normal way (by calling through to super in either override), then you are responsible for all application responses to events. You can't rely on clicks in the window controls to close, minimize, or full-screen/zoom the window. Clicks on Cocoa buttons won't work. Etc. So, you have to choose carefully what you do or don't do with the events.
To just get additional control over a button, you would subclass NSButton, NSButtonCell, or both and override a bunch of methods. For example, you would probably want to override -startTrackingAt:inView:, -continueTracking:at:inView:, and -stopTracking:at:inView:mouseIsUp: in your button cell class.
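As a hedged sketch of that cell-subclass approach (tracking_button_cell is my own name; the method signatures come from NSCell):

@interface tracking_button_cell : NSButtonCell
@end

@implementation tracking_button_cell
- (BOOL)startTrackingAt:(NSPoint)startPoint inView:(NSView*)controlView
{
    // roughly your "OnPress"
    return [super startTrackingAt:startPoint inView:controlView];
}

- (void)stopTracking:(NSPoint)lastPoint at:(NSPoint)stopPoint
              inView:(NSView*)controlView mouseIsUp:(BOOL)flag
{
    // roughly your "OnRelease"; flag says whether the mouse button is up
    [super stopTracking:lastPoint at:stopPoint inView:controlView mouseIsUp:flag];
}
@end

Install it with [NSButton setCellClass:[tracking_button_cell class]] before creating the button.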
So when I use setText() on a QLabel, for example, Qt automatically updates the view/GUI for me and the new text is shown, but what happens behind the scenes? Is there an update function that gets called automatically when using functions like setText()?
Thanks!!
You should check the basic documentation in this link.
The internal system is a little more complex, but in general it follows the observer pattern. This mechanism allows detecting a user action or a change of state, and responding to it.
Low-level interactions, like refreshing the screen, are implemented via the Event System:
In Qt, events are objects, derived from the abstract QEvent class, that represent things that have happened either within an application or as a result of outside activity that the application needs to know about. Events can be received and handled by any instance of a QObject subclass, but they are especially relevant to widgets. This document describes how events are delivered and handled in a typical application.
So, regarding the display process, there is a dedicated event: a QWidget object handles/subscribes to paint events; see QWidget::paintEvent.
This event handler can be reimplemented in a subclass to receive paint events passed in event. A paint event is a request to repaint all or part of a widget.
When you call QLineEdit::setText(), the widget will be repainted the next time a paint event is triggered, based on the OS configuration, refresh rate, etc.
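For instance, here is a minimal sketch of a custom widget (my own illustration, not the actual QLabel internals, which are more involved):

#include <QWidget>
#include <QPainter>

class TextWidget : public QWidget
{
public:
    void setText(const QString& text)
    {
        m_text = text;
        update(); // schedules a paint event; does not repaint immediately
    }

protected:
    void paintEvent(QPaintEvent*) override
    {
        // Called by Qt whenever the widget needs repainting.
        QPainter painter(this);
        painter.drawText(rect(), Qt::AlignCenter, m_text);
    }

private:
    QString m_text;
};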
For high-level interactions, Qt uses a similar pattern based on the signal/slot mechanism:
Observer pattern is used everywhere in GUI applications and often leads to some boilerplate code. Qt was created with the idea of removing this boilerplate code and providing a nice and clean syntax, and the signal and slots mechanism is the answer.
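A small self-contained illustration (my own example) of how the two levels fit together: QLabel::setText() is itself a slot, so a label can observe a line edit, and the actual repaint happens later through a paint event.

#include <QApplication>
#include <QLineEdit>
#include <QLabel>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    QLineEdit edit;
    QLabel label;
    // High level: observer pattern via signal/slot.
    // Low level: setText() internally schedules a repaint via the event system.
    QObject::connect(&edit, SIGNAL(textChanged(QString)),
                     &label, SLOT(setText(QString)));
    edit.show();
    label.show();
    return app.exec();
}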
I need to extend an existing MFC app with a UI that will end up very cluttered unless I use a tab control. However, the nature of the UI is that there are some controls that are global, and only some that can be localised to a particular tab.
The standard use of tab controls (CPropertySheet + CPropertyPage) more or less expects there only to be CPropertyPage instances (tabs) visible on the CPropertySheet object, and nothing else. There is a Microsoft Example Project that shows a single additional window painted outside the area occupied by the tab control... but it's not immediately clear how it is created/drawn/handled, and it is only one single additional window that generates few events (I guess it is painted, so there must be a WM_PAINT event handler lurking somewhere).
Is it possible to lay out a bunch of controls with the MS Dialog Editor, including a tab control, and create the CPropertySheet using that template, hook up event handlers in a nice way, etc... or some equivalent way of getting the MFC framework to do as much of the creating, drawing and event handling as possible when it comes to a situation like this?
Yes, it is possible to create dialog templates and use them in a CPropertyPage.
Each CPropertyPage behaves nearly like a dialog and handles all events for the controls on it.
Also, there are features like OnApply that help you manage the data exchange between the controls and your internal storage.
The central CPropertySheet only creates the dialog of the page that becomes active, so OnInitDialog for a page is called the first time the page becomes active.
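For example (a hedged sketch; IDD_PAGE1/IDD_PAGE2 and the page classes are assumed names):

// Two CPropertyPage-derived pages built from dialog-editor templates,
// hosted by a sheet. Each page handles the events of its own controls.
CPropertySheet sheet(_T("Settings"));
CMyPage1 page1; // : public CPropertyPage, constructed with IDD_PAGE1
CMyPage2 page2; // : public CPropertyPage, constructed with IDD_PAGE2
sheet.AddPage(&page1);
sheet.AddPage(&page2);
sheet.DoModal(); // a page's OnInitDialog runs when it first becomes active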
Since MFC in Visual Studio 2010, there are more possibilities than CPropertySheet. You can create tabbed views, which in turn may be CFormViews. I don't like CDialog-based applications, so I would prefer a tabbed view in a standard frame with toolbar and menus, if appropriate for the application. So another way to unclutter your UI is to choose the MDI interface with tabbed documents... but multiple documents may not be what you want.
Here is a sample of an SDI application with multiple tabbed views.
Also, CodeProject shows some more samples here, and with splitters and tabs here.
There are at least three solution paths:
Try to squeeze the situation into the CPropertySheet + CPropertyPage framework, which does not naturally allow for additional dialog controls on the CPropertySheet object, so you will get no framework support for this
Place a tab control on an ordinary dialog, and then use the TCN_SELCHANGE notifications to fire code that manually hides and shows individual dialog controls subject to the tab control (again no framework support, but this time "within" the tab control instead of outside it)
Follow Mark Ransom's observation that if you can embed one kind of CWnd-based control on a CPropertySheet then you can probably embed any such object, including a CDialog-based object that has been developed in the MFC Dialog Editor
All of these approaches are going to encounter challenges, and it will depend on the specifics of the situation as to which is better. But first you should really consider whether there is a cleaner UI design which would lend itself to a simpler solution.
In my specific case, I saw no cleaner design alternatives, and found it easiest to take the second approach. It left me with some reasonably simple calls to ShowWindow() to show/hide the controls inside the tab control.
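In outline, it looked something like this (a simplified sketch with assumed names like IDC_TAB, IDC_EDIT_A, IDC_EDIT_B and m_tabCtrl, not my production code):

// A plain dialog holding a CTabCtrl (IDC_TAB); controls that belong to a
// tab are shown or hidden when the selection changes.
BEGIN_MESSAGE_MAP(CMyDialog, CDialog)
    ON_NOTIFY(TCN_SELCHANGE, IDC_TAB, &CMyDialog::OnTabSelChange)
END_MESSAGE_MAP()

void CMyDialog::OnTabSelChange(NMHDR* /*pNMHDR*/, LRESULT* pResult)
{
    const int tab = m_tabCtrl.GetCurSel();
    GetDlgItem(IDC_EDIT_A)->ShowWindow(tab == 0 ? SW_SHOW : SW_HIDE); // tab 0 controls
    GetDlgItem(IDC_EDIT_B)->ShowWindow(tab == 1 ? SW_SHOW : SW_HIDE); // tab 1 controls
    *pResult = 0;
}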
I have an existing application that uses MFC for the UI, and I'm trying to migrate to Qt. For the most part the migration is straight forward, but I'm not sure how to manage the enabled state of actions (menu and toolbar items).
In MFC you implement a callback with enable/disable logic, and this is called when the item is displayed. In Qt, you only have access to the setEnabled() method.
Is there a built-in or standardized way of connecting an update callback to an action? Or do I need to create my own solution using timers and registering actions with it? In a large application such as the one I'm working on, the 'should enable' logic can jump all over the place: certain files on disk must exist, the main display must have a selection, the application's ProcessManager::isProcessing() must be false, etc. It doesn't seem practical to rely on setEnabled() being called on specific actions when there are so many conditions behind the enable/disable logic.
The most "standard" Qt way would be the use of signals/slots.
In my MDI apps, which are based on the Qt MainWindow/MDI examples, I just connect a single "updateMenus()" function to the signal emitted whenever an MDI subwindow is shown or hidden.
Now that may not be enough granularity for your application. So what you could do is - still have a single "updateMenus()" method - but connect it to each menu's "aboutToShow()/aboutToHide()" signals.
That way you keep the logic from sprawling all over the place, and only update menus right when they are needed (like MFC's ON_UPDATE_COMMAND_UI handlers).
Here's my mainwindow constructor:
mp_mdiArea = new QMdiArea();
setCentralWidget(mp_mdiArea);
// Re-evaluate the enabled state of the actions whenever the active
// MDI subwindow changes (including when the last one closes).
connect(mp_mdiArea, SIGNAL(subWindowActivated(QMdiSubWindow*)),
        this, SLOT(updateMenus()));
And here's my updateMenus():
void MainWindow::updateMenus()
{
    // Document-related actions only make sense with an active MDI child.
    bool hasMdiChild = (activeMdiChild() != nullptr);
    mp_actionSave->setEnabled(hasMdiChild);
    mp_actionSaveAs->setEnabled(hasMdiChild);
    mp_actionClose->setEnabled(hasMdiChild);
}
See the Qt 4.8 documentation for QMenu's aboutToShow()/aboutToHide() here
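The finer-grained variant is just one more connect per menu (a sketch; mp_fileMenu is an assumed QMenu member):

// Re-evaluate enable/disable logic right before the menu opens,
// similar to MFC's update-command-UI mechanism.
connect(mp_fileMenu, SIGNAL(aboutToShow()), this, SLOT(updateMenus()));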
As we all know, EVT_CHAR is a basic event which doesn't propagate to wxTopLevelWindow (wxFrame and wxDialog).
But I have a wxDialog without any wxWidgets controls on it, and need to handle user keyboard input (handle EVT_CHAR event).
I saw the wiki about catching key events globally, but it doesn't work for EVT_CHAR, as EVT_CHAR events need to be translated to get the user's input.
I have also tried giving the wxDialog a hidden child wxWindow which forwards EVT_CHAR to its parent wxDialog. That works on Windows, but not on OS X, which is my target platform.
Is there a way to implement it?
Why do you need to handle all keyboard entry in the dialog itself? There are two typical cases for this that I know of: either you want to handle the key presses in several different controls in the same way, or you need to handle some particular key press (e.g. WXK_F1) in all controls. The former can be done by binding the same event handler to several controls. The latter, by using an accelerator table with an entry for the key that you want to handle specially.
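A hedged sketch of the accelerator-table approach (ID_HELP_KEY and the handler are assumed names):

#include <wx/wx.h>

class MyDialog : public wxDialog
{
public:
    enum { ID_HELP_KEY = wxID_HIGHEST + 1 }; // assumed command id

    MyDialog() : wxDialog(nullptr, wxID_ANY, "Demo")
    {
        // Route WXK_F1 to a command event regardless of which child has focus.
        wxAcceleratorEntry entries[1];
        entries[0].Set(wxACCEL_NORMAL, WXK_F1, ID_HELP_KEY);
        SetAcceleratorTable(wxAcceleratorTable(1, entries));
        Bind(wxEVT_MENU, &MyDialog::OnHelpKey, this, ID_HELP_KEY);
    }

private:
    void OnHelpKey(wxCommandEvent&) { wxLogMessage("F1 pressed"); }
};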
Finally, I implemented what I want according to this:
http://trac.wxwidgets.org/ticket/15345
In wxWidgets 3.0, wxNSView implements the NSTextInputClient protocol, which lets every widget handle EVT_CHAR correctly.
But EVT_CHAR still cannot be handled by wxDialog or wxFrame, because of some calls to the IsUserPanel() function. So I commented out some of the calls to IsUserPanel to make it work for me.
I want to make a custom made component (a line chart), that would be used in other applications.
I don't know 2 things:
1. Where should I use (within the component class!) the methods for drawing, like FillRect or Polyline? In an OnPaint handler that I define and map in the MESSAGE MAP? Will that OnPaint handler be called from the OnPaint handler of the application's dialog, or from somewhere else?
2. How do I connect the component, once it is made, to the test application, which will for example be dialog based? Where should I instantiate that component? From an OnCreate method of MyAppDialog.cpp?
I started coding in MFC a few days ago and I'm so confused about it.
Thanks in advance,
Cheers.
Painting the control is handled exactly like it would be if it weren't a control. Given that you're using MFC, that (at least normally) means you do the drawing in the view class's OnDraw (MFC normally handles OnPaint internally, so you rarely touch it).
Inserting the resulting ActiveX control in the host application will be done like inserting any other ActiveX control. Assuming you're doing your development in Visual Studio, you'll normally do that by opening the dialog, right clicking inside the dialog box, and clicking "Insert ActiveX Control..." in the menu that pops up. Pick your control from the list, and it'll generate a wrapper class for the control and code to create an object of that class as needed. From the viewpoint of the dialog code, it's just there, and you can use it about like any other control.
To create a new component in MFC, you derive a class from the window class (CWnd).
After that you can have your message map for the component and your own methods, and you can also handle WM_PAINT (an OnPaint handler; CWnd has no OnDraw) to draw the thing you want.
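For example, a minimal sketch of such a CWnd-derived chart control (my own illustration; all names are assumed):

class CLineChartCtrl : public CWnd
{
    afx_msg void OnPaint();
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CLineChartCtrl, CWnd)
    ON_WM_PAINT() // routes WM_PAINT to OnPaint
END_MESSAGE_MAP()

void CLineChartCtrl::OnPaint()
{
    CPaintDC dc(this); // device context for painting
    CRect rc;
    GetClientRect(&rc);
    dc.FillSolidRect(rc, RGB(255, 255, 255)); // background
    dc.MoveTo(rc.left, rc.bottom);
    dc.LineTo(rc.right, rc.top);              // placeholder for the chart's polyline
}

You would then Create() an instance of it from the dialog (for example in OnInitDialog), and Windows repaints it through WM_PAINT from then on.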
Before that, I suggest you take a look at device contexts:
http://msdn.microsoft.com/en-us/library/azz5wt61(VS.80).aspx
Good luck, friend.