I am trying to have my subclass of CCLayer respond to multitouch. In the init method I call
self.isTouchEnabled=YES;
In a method called registerWithTouchDispatcher, I call
[[CCTouchDispatcher sharedDispatcher] addTargetedDelegate:self priority:0 swallowsTouches:NO];
In my app delegate, I call
[glView setMultipleTouchEnabled:YES];
The ccTouchBegan:withEvent: method gets called, but ccTouchesBegan:withEvent: never does. I am pretty new to cocos2d, so it could be something simple; I just can't figure out what it is.
Add [[CCTouchDispatcher sharedDispatcher] addStandardDelegate:self priority:0]; in your class to receive non-targeted touches.
From the cocos2d documentation (http://www.cocos2d-iphone.org/api-ref/0.99.0/interface_c_c_touch_dispatcher.html):
CCTouchDispatcher is a singleton that handles all the touch events. The dispatcher dispatches events to the registered TouchHandlers. There are two different types of touch handlers:
Standard Touch Handlers
Targeted Touch Handlers
The Standard Touch Handlers work like the CocoaTouch touch handlers: a set of touches is passed to the delegate. On the other hand, the Targeted Touch Handlers only receive one touch at a time, and they can "swallow" touches (avoid the propagation of the event).
First, the dispatcher sends the received touches to the Targeted Touch Handlers, which may swallow them. Any touches that remain are then sent to the Standard Touch Handlers.
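So registering only a targeted delegate is exactly why ccTouchBegan:withEvent: fires but ccTouchesBegan:withEvent: never does. A minimal sketch of the standard-delegate route, assuming cocos2d-iphone 0.99.x and a CCLayer subclass (the logging is just illustrative):

- (void)registerWithTouchDispatcher
{
    // Standard delegate: the whole set of touches is delivered at once,
    // so the ccTouches* callbacks get called.
    [[CCTouchDispatcher sharedDispatcher] addStandardDelegate:self priority:0];
}

- (void)ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        // Convert from UIKit view coordinates to cocos2d GL coordinates.
        CGPoint location = [[CCDirector sharedDirector]
            convertToGL:[touch locationInView:[touch view]]];
        CCLOG(@"touch began at %@", NSStringFromCGPoint(location));
    }
}

You can register both a targeted and a standard delegate if you need both styles, as long as the targeted one does not swallow the touches.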
So when I use setText() on a QLabel, for example, Qt automatically updates the view/GUI for me and the new text is shown, but what happens behind the scenes? Is there an update function that gets called automatically when using functions like setText()?
Thanks!!
You should check the basic documentation at this link.
The internal system is a little more complex, but in general it follows the observer pattern: this mechanism allows detecting a user action or a change of state and responding to it.
Low-level interactions, like refreshing the screen, are implemented via the Event System:
In Qt, events are objects, derived from the abstract QEvent class, that represent things that have happened either within an application or as a result of outside activity that the application needs to know about. Events can be received and handled by any instance of a QObject subclass, but they are especially relevant to widgets. This document describes how events are delivered and handled in a typical application.
So, regarding the display process, there is a dedicated event: a QWidget object handles/subscribes to paint events; see QWidget::paintEvent.
This event handler can be reimplemented in a subclass to receive paint events passed in event. A paint event is a request to repaint all or part of a widget.
When you call QLineEdit::setText(), the widget will be repainted the next time a paint event is delivered, which depends on the OS configuration, refresh rate, etc.
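As a rough sketch of that flow (assuming Qt 5 widgets; MyLabel is a made-up class for illustration, not the real QLabel implementation): setText() only stores the new text and schedules a repaint with update(), and the actual drawing happens later in paintEvent().

#include <QWidget>
#include <QPainter>
#include <QPaintEvent>
#include <QString>

class MyLabel : public QWidget
{
public:
    explicit MyLabel(QWidget *parent = nullptr) : QWidget(parent) {}

    void setText(const QString &text)
    {
        m_text = text;
        update();   // schedules a repaint; a QPaintEvent arrives later
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        // Called by Qt whenever the widget needs to be redrawn.
        QPainter painter(this);
        painter.drawText(rect(), Qt::AlignCenter, m_text);
    }

private:
    QString m_text;
};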
For high-level interactions, Qt uses a similar pattern based on the signals and slots mechanism:
The observer pattern is used everywhere in GUI applications and often leads to some boilerplate code. Qt was created with the idea of removing this boilerplate code and providing a nice and clean syntax, and the signals and slots mechanism is the answer.
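For instance, a small self-contained sketch (assuming Qt 5; the widget names are just for illustration) where a QLabel observes a QLineEdit through a signal/slot connection, and the repaint described above then happens automatically:

#include <QApplication>
#include <QLineEdit>
#include <QLabel>
#include <QVBoxLayout>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget window;
    auto *edit  = new QLineEdit;
    auto *label = new QLabel("type above");

    auto *layout = new QVBoxLayout(&window);
    layout->addWidget(edit);
    layout->addWidget(label);

    // Observer pattern without boilerplate: when textChanged fires,
    // QLabel::setText runs and the label repaints itself.
    QObject::connect(edit, &QLineEdit::textChanged,
                     label, &QLabel::setText);

    window.show();
    return app.exec();
}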
Here's the situation:
I have a custom widget subclassed from QTabWidget that I've implemented to accept QDropEvents for files. When files are dropped on the TabWidget they are opened for editing. This part works fine. However, I want to implement drag and drop functionality as part of the editor (think of a LabVIEW-esque GUI). I have properly implemented the event handlers and acceptDrops on the EditorWidget, but the TabWidget receives all the events and attempts to process them as files. I can differentiate file-related events from the editor's events by their MIME data, but I can't figure out how to pass the event from the TabWidget on to the appropriate EditorWidget.
So the question:
How can I pass a QDropEvent from the widget which received it from the system to another widget which it owns? Alternatively, how do I tell the system which widget should receive the event, based on the contents of said event?
What I've tried:
I can't call the dropEvent method of the child as it's protected. I could create a series of my own methods that pass the events around, but that seems redundant and fragile. I've looked into installing an event filter, but from what I can tell that only discards events; it doesn't say "not me, try someone else."
Thanks in advance for your assistance!
Interesting! I think that accepting the event in the parent widget and then trying to forward it to the child widget is not the right approach architecturally. It would basically violate encapsulation (objects handling their own events).
If I were you, I would first investigate why the child widget isn't seeing the event. Child widgets are on top of their parents, so your child widget should get the first go at the event. Did you call setAcceptDrops(true)?
Once you fix that, in the child widget's event handler you can analyze the event and call event->ignore() if the event should be forwarded to the parent QTabWidget. If you don't call ignore(), the child will "consume" the event and it will not be propagated to the parent.
Here's an old blog post on event propagation that could help:
http://blog.qt.io/blog/2006/05/27/mouse-event-propagation/
Solving my own problem:
As Pliny stated, the child should see the event first. My problem appears to have been that in EditorWidget I had not implemented dragEnterEvent and dragMoveEvent, so even though I had implemented dropEvent in EditorWidget, the TabWidget took control of the drag and therefore stole the drop.
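Putting the two answers together, a minimal sketch of the child widget (assuming Qt 5; EditorWidget and the "application/x-editor-item" MIME type are placeholders for your own types): it must call setAcceptDrops(true) and implement dragEnterEvent/dragMoveEvent as well as dropEvent, and it can ignore() anything it wants the parent TabWidget to keep handling.

#include <QWidget>
#include <QDragEnterEvent>
#include <QDragMoveEvent>
#include <QDropEvent>
#include <QMimeData>

class EditorWidget : public QWidget
{
public:
    explicit EditorWidget(QWidget *parent = nullptr) : QWidget(parent)
    {
        setAcceptDrops(true);   // without this the child never sees drag events
    }

protected:
    void dragEnterEvent(QDragEnterEvent *event) override
    {
        if (event->mimeData()->hasFormat("application/x-editor-item"))
            event->acceptProposedAction();   // this drag belongs to the editor
        else
            event->ignore();                 // let the parent handle file drags
    }

    void dragMoveEvent(QDragMoveEvent *event) override
    {
        // Must also be accepted, otherwise the drop is never offered to us.
        if (event->mimeData()->hasFormat("application/x-editor-item"))
            event->acceptProposedAction();
        else
            event->ignore();
    }

    void dropEvent(QDropEvent *event) override
    {
        if (event->mimeData()->hasFormat("application/x-editor-item")) {
            // ... place the dropped editor item here ...
            event->acceptProposedAction();
        } else {
            event->ignore();
        }
    }
};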
I am attempting to hook up Cocoa events to a minimal Objective-C++ Cocoa message proxy that will call my C++ functions to handle the event.
For a button's OnClick it's simple; I go:
@interface cocoa_proxy : NSObject
- (void)action:(id)sender;
@end

@implementation cocoa_proxy
- (void)action:(id)sender
{
    exit(0);
}
@end

cocoa_proxy* proxy = [[cocoa_proxy alloc] init];
[button setTarget:proxy];
[button setAction:@selector(action:)];
However, I am unsure how I could capture other events for the button (such as OnPress, OnRelease, OnEnter, OnExit, etc.), nor can I seem to capture events for a Window or View.
Any attempt at a similar route, with slightly differently formatted methods (taken from various similar, but not quite identical, questions on the web), results in an error like:
reason: '-[NSWindow setTarget:]: unrecognized selector sent to instance xxxxx
I already have a large codebase that I want to reuse, so it is paramount that I have a proxy, rather than just "do everything in objective-c++".
For completeness, here is how I create the Window:
window = [[[NSWindow alloc] initWithContentRect:NSMakeRect(x, y, w, h)
                                      styleMask:NSTitledWindowMask
                                        backing:NSBackingStoreBuffered
                                          defer:NO] autorelease];
To be clear, I do not use Xcode, Interface Builder, or any GUI-creation or Cocoa-aware software, nor do I use concepts such as NIBs; everything is done procedurally and with minimal interaction with Objective-C++.
So my question is, how do I handle different events for a Button, and how do I handle events (at all), in a similar manner for Windows and Views.
The action method fired by a button when it is clicked (a complete press-and-release sequence) is not an event as such. It's the consequence of a series of events.
It's not entirely clear at what level you need to operate, but you may need to use a custom subclass of NSApplication and override -sendEvent:. There you will get the low-level events like left-mouse-button-down, left-mouse-button-up, key-down, key-up, etc.
If you are only interested in a single window, you could instead use a custom subclass of NSWindow and override -sendEvent: there.
If you don't allow events to be processed through Cocoa in the normal way (by calling through to super in either override), then you are responsible for all application responses to events. You can't rely on clicks in the window controls to close, minimize, or full-screen/zoom the window. Clicks on Cocoa buttons won't work. Etc. So, you have to choose carefully what you do or don't do with the events.
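A minimal sketch of the window-level route (EventWindow is a made-up name, and handle_event is a hypothetical C++ function on your side, not an existing API):

#import <Cocoa/Cocoa.h>

void handle_event(int type, double x, double y);   // your existing C++ code

@interface EventWindow : NSWindow
@end

@implementation EventWindow
- (void)sendEvent:(NSEvent *)event
{
    // Forward the low-level event to the C++ side first...
    NSPoint p = [event locationInWindow];
    handle_event((int)[event type], p.x, p.y);

    // ...then let Cocoa do its normal processing (buttons, controls, etc.).
    [super sendEvent:event];
}
@end

You would then allocate EventWindow instead of NSWindow in the initWithContentRect: call shown above.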
To just get additional control over a button, you would subclass NSButton, NSButtonCell, or both and override a bunch of methods. For example, you would probably want to override -startTrackingAt:inView:, -continueTracking:at:inView:, and -stopTracking:at:inView:mouseIsUp: in your button cell class.
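For instance, a sketch of the cell-level overrides (TrackingButtonCell is a made-up name; the tracking methods are the standard NSCell ones), which is roughly where OnPress/OnRelease live:

#import <Cocoa/Cocoa.h>

@interface TrackingButtonCell : NSButtonCell
@end

@implementation TrackingButtonCell
- (BOOL)startTrackingAt:(NSPoint)startPoint inView:(NSView *)controlView
{
    // Roughly "OnPress": the mouse went down inside the button.
    return [super startTrackingAt:startPoint inView:controlView];
}

- (void)stopTracking:(NSPoint)lastPoint at:(NSPoint)stopPoint
              inView:(NSView *)controlView mouseIsUp:(BOOL)flag
{
    // Roughly "OnRelease": tracking ended; flag says whether the mouse went up.
    [super stopTracking:lastPoint at:stopPoint inView:controlView mouseIsUp:flag];
}
@end

// Install it before configuring the button:
// [button setCell:[[[TrackingButtonCell alloc] init] autorelease]];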
What I want to do is very simple: I want to hide the mouse cursor after 1 second if the mouse does not move or click.
I searched and saw someone recommend using WM_MOUSEMOVE. In my app, however, WM_MOUSEMOVE simply detects whether the mouse is in the client area; if it is, the app receives WM_MOUSEMOVE continually. I've read the MSDN page but I am still confused.
Use WM_SETCURSOR.
Use WM_SETCURSOR for cursor-related work. This message is made for that purpose. Your mention of the client area suggests you probably need to use the SetCapture API as well.
Another (more modern) way of doing it is to use TrackMouseEvent, which provides WM_MOUSEHOVER.
The recommendation is correct. What you need to do is define a timer (for example, one that triggers the WM_TIMER message).
You start it on the first mouse movement (WM_MOUSEMOVE). If no mouse movement occurs within the interval you defined for the timer, the WM_TIMER message will fire and you can then hide the cursor.
Each time a WM_MOUSEMOVE message arrives, you simply restart the timer (using its dedicated API), so WM_MOUSEMOVE messages keep the timer from expiring. If WM_MOUSEMOVE messages stop arriving (because you are no longer moving the mouse), the timer runs uninterrupted until it elapses and fires.
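A minimal sketch of that window procedure (plain Win32; IDT_HIDE_CURSOR and g_cursorHidden are just illustrative names): calling SetTimer with the same id replaces the pending timer, which gives the restart behaviour described above.

#include <windows.h>

#define IDT_HIDE_CURSOR 1
static bool g_cursorHidden = false;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_MOUSEMOVE:
        if (g_cursorHidden) {
            ShowCursor(TRUE);                     // movement: show the cursor again
            g_cursorHidden = false;
        }
        // (Re)start the one-second countdown; same id means the timer is reset.
        SetTimer(hwnd, IDT_HIDE_CURSOR, 1000, NULL);
        return 0;

    case WM_TIMER:
        if (wParam == IDT_HIDE_CURSOR && !g_cursorHidden) {
            ShowCursor(FALSE);                    // 1 s with no movement: hide it
            g_cursorHidden = true;
            KillTimer(hwnd, IDT_HIDE_CURSOR);     // stop until the mouse moves again
        }
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}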
A CCMenuItemImage will call its selector when the touch that was pressing it is released.
However, is it possible to make it call the selector as soon as the touch presses it? This is to make the menu feel faster.
Subclass CCMenuItemImage and process the event yourself:
- (void)selected
{
    // Do your thing here: for example, you could add a target/selector pair
    // of properties in your subclass and invoke them from this method.
    [super selected];
}
I use that to detect a long touch and pop up a contextual tooltip window (for example).
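For the press-immediately case, a sketch of that idea (PressMenuItem and its target/selector pair are made-up names; -selected is the method CCMenuItem calls when the touch lands on the item):

#import "cocos2d.h"

@interface PressMenuItem : CCMenuItemImage
{
    id  pressTarget_;
    SEL pressSelector_;
}
- (void)setPressTarget:(id)target selector:(SEL)selector;
@end

@implementation PressMenuItem
- (void)setPressTarget:(id)target selector:(SEL)selector
{
    pressTarget_   = target;
    pressSelector_ = selector;
}

- (void)selected
{
    [super selected];
    // Fire immediately on press rather than waiting for the touch to end.
    if (pressTarget_ && pressSelector_) {
        [pressTarget_ performSelector:pressSelector_ withObject:self];
    }
}
@end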