How to get absolute positions of touches and layer objects? - cocos2d-iphone

I'm having a weird issue with cocos2d-x when trying to detect which character (currently a CCLayer-extending object) I'm touching. The problem is that the location of the sprite I'm clicking on never matches the touch location that is registered.
I've tried different conversion functions, but none of them seems to work.
Any idea how I can detect in ccTouchesBegan where a map (CCLayer) is being touched at the same 'scale' as the characters (also CCLayers)? How can I get the absolute position in the map of the touch position as I receive it (I will move the character to the clicked position)?
I know that they may be very basic questions, but I've been looking for the answer for some hours and I can't find the solution. Any suggestion either for cocos2d-x or cocos2d is really welcome.
Thanks in advance!

1) In order to detect whether a world point is inside a node, I use the following function:
bool VUtils::isNodeAtPoint(cocos2d::CCNode* node, cocos2d::CCPoint& touchLocation) {
    // Convert the world-space touch into the node's local space and test it
    // against the node's content rectangle.
    CCPoint nodePoint = node->convertToNodeSpace(touchLocation);
    CCRect rect = CCRectMake(0, 0, node->getContentSize().width, node->getContentSize().height);
    return rect.containsPoint(nodePoint);
}
2) touchLocation is a point in world coordinates; to get it, use the CCTouch::getLocation() method in your touch listeners.
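For example, here is a rough sketch of how this helper might be used from a touch handler (cocos2d-x 2.x style; the GameScene class, the characters container, and mapLayer are assumptions for illustration, not from the original code):
void GameScene::ccTouchesBegan(cocos2d::CCSet* touches, cocos2d::CCEvent* event) {
    CCTouch* touch = static_cast<CCTouch*>(touches->anyObject());
    CCPoint worldLocation = touch->getLocation();   // world (GL) coordinates

    // Check each character layer against the touch in world space.
    for (auto character : characters) {
        if (VUtils::isNodeAtPoint(character, worldLocation)) {
            // This character was touched; select it, start moving it, etc.
            return;
        }
    }

    // No character was touched: convert the touch into the map layer's local
    // space, e.g. to move the selected character to the tapped spot on the map.
    CCPoint mapLocation = mapLayer->convertToNodeSpace(worldLocation);
}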

Related

how to get Qt Mouse Position all the time

So I have been struggling with this question for quite some time. I have tried many things, but none of them seems to work.
I want to make a game in Qt, and one of the things I need is for the player (a QRectItem for now) to always rotate toward the mouse position. I just need to read that position all the time, not only when I click or when I drag.
How can I do that?
I set
this->setMouseTracking(true);
on a class that inherits from QGraphicsView, and I have also set focus on it.
I don't know if the problem is with overriding functions (I don't know which one to override) or with focus.
void Game::mouseMoveEvent(QGraphicsSceneMouseEvent *event)
{
qDebug() << QCursor::pos();
}
I did this, but it does not work at all.
Btw, I am a noob at Qt; this is my first project.
Thanks in advance! :)
P.S.
I have really done my research, but if I have somehow missed a topic with the same or a similar question that solves this problem, just paste it and accept my apologies. :)
EDITED
You can install an event filter on your QApplication object, examine the received events for mouse movement events, convert the resulting position into your scene, and then use it to orient your rectangle.
Look at QObject::installEventFilter. Event filters are pretty easy to use. When a mouse event is received by an object, its coordinates are in that object's coordinate space, so you'll need to convert from that to your graphics scene coordinates. There will probably be several conversions to get that because you'll need to map the received position to your QGraphicsView using mapTo and then map the result of that to your scene using QGraphicsView::mapToScene.
This should get you pretty close. Let me know if you need more help.
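If it helps, here is a minimal sketch of that idea (the MouseTracker class name and the way it is wired up are illustrative, not from your project):
#include <QApplication>
#include <QCursor>
#include <QEvent>
#include <QGraphicsView>
#include <QObject>

class MouseTracker : public QObject {
public:
    explicit MouseTracker(QGraphicsView *view) : QObject(view), m_view(view) {}

protected:
    bool eventFilter(QObject *watched, QEvent *event) override {
        if (event->type() == QEvent::MouseMove) {
            // Map the global cursor position into the view's viewport, then into the scene.
            const QPoint viewPos = m_view->viewport()->mapFromGlobal(QCursor::pos());
            const QPointF scenePos = m_view->mapToScene(viewPos);
            // ... rotate your player item toward scenePos here ...
        }
        return QObject::eventFilter(watched, event); // never swallow the event
    }

private:
    QGraphicsView *m_view;
};

// Installed once, e.g. right after creating the view:
//   qApp->installEventFilter(new MouseTracker(view));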

How to add a sprite that is always on the screen in Cocos2d?

I'm making a platformer game using cocos2d-x v3 in C++, where the maps are usually very large and the visible screen follows the object through the map.
Let's say I want to show a sprite in the top right corner of the screen, and have it stay in that position even while the screen is following the object.
Using the object position doesn't do it.
Is there a way to show a sprite (or anything else) on the screen so that it stays there even while the screen is moving?
P.S. I'm a super noob at game development.
As it's written here, you should use convertToWorldSpace.
convertToWorldSpace converts on-node coordinates to SCREEN coordinates. convertToWorldSpace will always return the SCREEN position of our sprite, which can be very useful if you want to capture taps on your sprite but need to move/scale your layer.
Generally, the parent node calls this method with the child node's position and returns the child's world position as a result. It makes little sense to call this method if the caller isn't the parent…
So, as you can read,
Point point = node1->convertToWorldSpace(node2->getPosition());
the above code will convert node2's coordinates to coordinates on the screen.
For example, if node1's anchor point is at its bottom left corner (which is not necessarily the bottom left of the screen), this call converts node2's position, given relative to node1, into the corresponding screen coordinate.
Or, if you wish, you can get the position relative to the anchor point with the function convertToWorldSpaceAR.
So there are some assumptions that will have to be made to answer this question. I am assuming that you are using a Follow action on your layer that contains your map. Check here for example. Something like:
// If your playerNode is the node you want to follow, pass it to the create function.
auto cameraFollowAction = Follow::create(playerNode);
// running the action on the layer that has the game/map on it
mapLayer->runAction(cameraFollowAction);
The code above will cause the viewport to "move" to wherever the player is in world position, following the player around a map that is bigger than the current viewport. What I did for my in-game menu/HUD was to add the HUD onto a different layer and add it to the root of the main game scene, i.e. the scene that does not have the Follow action running on it. Something like below:
// Hud inherits from layer and has all the elements you need on it.
auto inGameHud = HudLayer::create();
// Add the map/game layer to the root of main game scene
this->addChild(mapLayer, 0);
// Add the hud to the root layer
this->addChild(inGameHud, 1);
The code above assumes 'this' to be your MainGameScene. This keeps the Follow action from scrolling the element off the screen: your element will stay on the screen no matter where in world space your scene currently is.
Let me know if this is clear enough. I can help you out more if you get stuck.
I've managed to do it using a ParallaxNode, setting the ratio at which the sprite moves to Vec2(0,0); this way it always stays at the same spot on the screen.
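For what it's worth, a rough illustration of that ParallaxNode idea (cocos2d-x v3; hudSprite, mapLayer, and the offset are placeholders, not from the original post):
auto visibleSize = Director::getInstance()->getVisibleSize();
auto parallax = ParallaxNode::create();
// A parallax ratio of Vec2(0, 0) means the child ignores the node's movement,
// so the HUD sprite keeps its spot on screen while the map scrolls underneath.
parallax->addChild(mapLayer, 0, Vec2(1, 1), Vec2::ZERO);
parallax->addChild(hudSprite, 1, Vec2(0, 0), Vec2(visibleSize.width - 50, visibleSize.height - 50));
this->addChild(parallax);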
You can always just put that sprite into a different node/layer from everything else. That way, moving the main layer/node won't move the sprite.

How do you make a clickable sprite in SFML?

I've been looking through the SFML documentation for making clickable sprites, but so far I haven't found anything.
Do you guys think you could help me out?
There is nothing like sf::ClickableSprite in SFML so far, and probably there will never be. (Current list of classes in SFML)
However, you can obtain this behavior with the sf::Sprite object and the events. The idea is simple: as soon as you detect a click with sf::Mouse::isButtonPressed(sf::Mouse::Left), check whether the mouse is inside the sprite. If it is, perform the action. You can perform another action (maybe an undo) when the button is released.
There is the sf::Sprite::getGlobalBounds() function, which returns the position and dimensions of the sprite. There's also the sf::Mouse::getPosition() function, which returns the current position of the mouse. You can use sprite.getGlobalBounds().contains(mousePos) to check whether the mouse is inside the sprite.
If you're using views, you'll need to add the view's position to sf::Mouse::getPosition(window), since it gets the mouse position relative to window coordinates.
(thanks to Chaosed0 for additional notes.)
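Putting the pieces above together, a minimal sketch (SFML 2.x; window, sprite, and the click handling are assumed to exist in your own code):
#include <SFML/Graphics.hpp>

void handleClick(const sf::RenderWindow &window, const sf::Sprite &sprite) {
    if (sf::Mouse::isButtonPressed(sf::Mouse::Left)) {
        // mapPixelToCoords takes the active view into account, which covers the caveat above.
        sf::Vector2f mousePos = window.mapPixelToCoords(sf::Mouse::getPosition(window));
        if (sprite.getGlobalBounds().contains(mousePos)) {
            // The sprite was clicked; perform your action here.
        }
    }
}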

VTKActor not visible after render but visible on camera->resetview()

I am working on a Qt-VTK project. We have a line drawing function where straight lines are created between two mouse click positions. But once the actor is created, it is not visible. I was calling the render function just after adding the actor, but that didn't work. However, if I do camera->resetview(), the lines become visible, but the entire perspective changes. Where am I going wrong?
thanks
Rwik
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only be visible in the scene if you called resetCamera() and thusly changed the camera's configuration.
Initially, I blamed the custom interactor that we had to add to circumvent the default interactor's swallowing of MouseUp events (intended behavior). Investigation revealed that this seemed unlikely.
After this I shifted the blame onto the camera under the suspicion that perhaps the reset call was making a call to some kind of update method which I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
When this was successfully done, it turned out that the crosses would appear when the camera zoomed out and then disappear again as soon as it was set back, and it was at this point I realized that it had something to do with the scene.
At this point, I checked the methods we were using to retrieve the mouse location in 3D and realized that the z value was enormous and it was placing the points too far away as a byproduct of VTK's methods to convert 2D locations on the control to 3D locations in the scene and vice versa.
So, after all that, it was a very mundane and avoidable mistake that originated from the methods renderer.DisplayToWorld() and WorldToDisplay().
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
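In case it saves someone else the same week, here is a rough C++ sketch of the conversion in question (the original project used ActiViz/C#; the function and variable names here are illustrative):
#include <vtkRenderer.h>

// The z handed to SetDisplayPoint is a normalized depth; values near the far
// plane push the resulting world point a long way from the camera, which is
// exactly what can make actors invisible until the camera is reset.
void displayToWorld(vtkRenderer* renderer, double clickX, double clickY, double out[3]) {
    double worldPoint[4];
    renderer->SetDisplayPoint(clickX, clickY, 0.0); // near plane
    renderer->DisplayToWorld();
    renderer->GetWorldPoint(worldPoint);
    const double w = (worldPoint[3] != 0.0) ? worldPoint[3] : 1.0;
    out[0] = worldPoint[0] / w;
    out[1] = worldPoint[1] / w;
    out[2] = worldPoint[2] / w;
}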
I think it's a bit hard to help without seeing the code, but have you tried using
ui->qvtkwidget->update();
where ui is the instance of your class derived from QMainWindow?

detecting touch on child sprite

I added spriteB to spriteA as a child:
[spriteA addChild:spriteB];
My game logic is based on whether spriteB was touched or not.
However, I was not able to get spriteB to detect the touch.
I have converted the touchLocation to nodeSpace..
And I can get spriteA to detect the touch with no problem..
The if-condition I use is
if (CGRectContainsPoint(spriteB.boundingBox, touchLocation))
It'd be sweet if someone could point me in a direction..
spriteA is added to spriteBatchNode if that matters.
Thanks in advance!
Update:
I figured out that the child is actually detecting the touch.
The reason it wasn't working was the dirty tag I set.
In short, here's how my game works.
Targets pop out from behind hides, and the player touches the targets to get points.
I have a few arrays to hold targets at different locations,
and I set a dirty tag for each target:
target.dirty = TRUE when it pops
target.dirty = FALSE when it hides
this is equivalent to
If it pops, then it's clickable.
If it's behind the hide, then it's NOT clickable.
so right before I enter my if-condition
if (CGRectContainsPoint(curTarget.boundingBox, touchLocation))
I have this
if(curTarget.dirty == FALSE) continue;
My problem is that when I have the above condition check, I detect no touch.
But if I take the above condition away, I can detect all touches,
but then the problem becomes the reverse: I am able to click on my targets even when they're not visible.
The above logic works if I add the target sprites as children of the layer instead...
Help!
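For reference, a rough cocos2d-x (C++) sketch of the check loop described in the update above (the Target class, the targets container, and the dirty accessor are illustrative stand-ins; note that a child's boundingBox is given in its parent's coordinate space, so the touch is converted into the parent's node space first):
for (Target* curTarget : targets) {
    if (!curTarget->isDirty())
        continue;                      // hidden behind the hide: not clickable
    // boundingBox() is expressed in the parent's coordinates, so convert the
    // touch into the parent's (spriteA's) node space before the test.
    CCPoint locationInParent = curTarget->getParent()->convertTouchToNodeSpace(touch);
    if (curTarget->boundingBox().containsPoint(locationInParent)) {
        // Touched a visible target: award points, hide it, etc.
    }
}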