Is there any way to change the touch priority for cocos2d iOS sprites? What I have are multiple cards on the screen, arranged in an arc, just as they would be if you held them in your hand. In this setup they overlap, and I need to recognize which card a touch landed on. I could measure the coordinates of each card's vertices, work out the visible area of each card, and then check whether the touch falls inside that area (couldn't I?), but I thought there would be an easier way to deal with this, say by changing the touch priority: the card closest to the screen would have the highest priority, decreasing toward the background, so that even if a touch lands on two overlapping sprites at once (the one above and the one below), it is only registered on the sprite with the higher priority.
Reading around on the internet only revealed ways to change the priority between a sprite and a layer, i.e. to decide whether the touch is handled by the layer or by the sprite, but that's not what I want.
As far as I know, you get exactly that behavior by default: the sprites closer to you (on the z axis) have priority. However, I think they pass the event down to the ones behind them as well. So what I think you need to do is swallow the event once it reaches one of your sprites: register each card as a targeted touch delegate with swallowing enabled, and claim the touch (return YES from ccTouchBegan) on the topmost card that actually contains the touch point, so it is not passed on to the cards behind it. Hope it helps.
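For reference, here is a minimal sketch of that pattern, written against the cocos2d-x 2.x C++ API (the Objective-C cocos2d-iphone calls are analogous); the CardSprite name and the priority scheme are only illustrations:

    // Each card registers itself as a targeted, swallowing touch delegate.
    class CardSprite : public cocos2d::CCSprite, public cocos2d::CCTargetedTouchDelegate
    {
    public:
        virtual void onEnter()
        {
            CCSprite::onEnter();
            // Lower priority values are asked first, so the front-most card
            // (highest z-order) gets the lowest number; 'true' enables swallowing.
            cocos2d::CCDirector::sharedDirector()->getTouchDispatcher()
                ->addTargetedDelegate(this, -getZOrder(), true);
        }

        virtual void onExit()
        {
            cocos2d::CCDirector::sharedDirector()->getTouchDispatcher()->removeDelegate(this);
            CCSprite::onExit();
        }

        virtual bool ccTouchBegan(cocos2d::CCTouch *touch, cocos2d::CCEvent *event)
        {
            cocos2d::CCPoint local = convertTouchToNodeSpace(touch);
            cocos2d::CCRect box = cocos2d::CCRectMake(
                0, 0, getContentSize().width, getContentSize().height);
            // Claim (and swallow) the touch only if it landed on this card;
            // returning false lets the dispatcher try the card behind it.
            return box.containsPoint(local);
        }

        virtual void ccTouchEnded(cocos2d::CCTouch *touch, cocos2d::CCEvent *event)
        {
            // React to the tap on this card here.
        }
    };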
I am currently working on a new RPG game using Pygame (my aim here is really to learn how to use object-oriented programming). I started a few days ago and developed a movement system where the player clicks a location, and the character sprite walks to that location and stops when it gets there, detected by checking whether the sprite 'collides' with the mouse position.
I quickly found however that this greatly limited the world size (to the app window size).
I started having a look into making a movement system where the background would move with respect to the player, hence providing the illusion of movement.
I managed to achieve this by creating a variable keeping track of my background map position. The map is much bigger than the app window. And each time I want my player to move I offset the background by the speed of the player in the opposite direction.
My next problem now is that I can't get my character to stop moving... because the character sprite never actually reaches the last position clicked by the mouse, since it is the background that is moving, not the character sprite.
I was thinking of adding a variable that keeps track of how many displacements it would take the character sprite to reach the clicked position if it were the one moving. Since the background moves at the character sprite's speed, it takes the same number of displacements of the background in the x and y directions to bring the clicked point on the background under the character sprite at the center of the screen.
It would be something like this (rough Pygame-style Python; event, nm, click_pos, player_pos, speed, move_dir and bg_offset are placeholder names):

    if event.type == pygame.MOUSEBUTTONDOWN:
        # number of moves needed, from the distance to the click and the sprite's speed
        nm = int(click_pos.distance_to(player_pos) / speed)
    if nm != 0:
        bg_offset -= move_dir * speed   # shift the background one step the other way
        nm -= 1
    else:
        pass                            # destination reached, stop moving
This means that once the background has moved far enough that the character sprite sits over the spot on the background the player originally clicked, the movement stops because nm == 0.
I guess my question is: does that sound like a good idea, or will it be a nightmare to handle the movement of other sprites and collisions? And are there better tools in Pygame to achieve this movement system?
I could also maybe use a clock and work out how many seconds the movements would take.
I guess that ultimately the whole challenge is dealing with a fixed reference point and making everything move around it, both with respect to that fixed reference and with respect to each other. For example, if two other sprites move toward one another while the player's character also "moves", then the movement of those two sprites has to depend both on each other's positions and on the background offset caused by the movement of the player's character.
An interesting topic which has been frying my brain for a few nights!
Thank you for your suggestions!
You're actually asking for an opinion on game design. The way I look at it, nothing is impossible, so go ahead and try your approach. It would also be wise to look around at similar projects scattered around the net; you may be able to pick up a lot of tips without reinventing the wheel. Here is a good place to start:
scrolling mini map
I am working on a Qt-VTK project. We have a line-drawing function where straight lines are created between two mouse-click positions, but once the actor is created it is not visible. I was calling the render function just after adding the actor, but it didn't work. However, if I call camera->resetview(), the lines become visible, but the entire perspective changes. Where am I going wrong?
thanks
Rwik
This may not be relevant to you, but I had this exact same problem (in ActiViz [managed VTK]) and wrangled with it for a week, so I hope this helps someone out there. It turned out to be a problem with the location of the lines we wanted to draw on the canvas; they were too far away from the camera (on the Z axis) to be visible.
For us, we were trying to draw a cross on the viewing area wherever the user clicked. The data points were there, as were the actors and whatnot, but they would only become visible in the scene if you called resetCamera() and thus changed the camera's configuration.
Initially, I blamed the custom interactor that we had added to circumvent the default interactor's swallowing of MouseUp events (intended behavior). Investigation revealed that this was unlikely to be the cause.
After this I shifted the blame onto the camera under the suspicion that perhaps the reset call was making a call to some kind of update method which I wasn't aware of. I called resetCamera() and then reverted the camera values to what they were initially.
Once that was done, it turned out that the crosses would appear when the camera zoomed out and disappear again as soon as it was set back, and at that point I realized it was something to do with the scene itself.
I then checked the methods we were using to retrieve the mouse location in 3D and realized that the z value was enormous: it was placing the points far too far away, as a byproduct of VTK's methods for converting 2D locations on the control to 3D locations in the scene and vice versa.
So after all that, it was a very mundane and avoidable mistake originating from the methods renderer.DisplayToWorld() and WorldToDisplay().
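For anyone hitting the same thing, one way to keep the converted point at a sensible depth is to feed DisplayToWorld a display-space z taken from a known world point (for example, the camera's focal point) instead of whatever falls out of an earlier conversion. A rough sketch in plain VTK C++ (our code was ActiViz/C#, but the calls map one-to-one; clickX, clickY and renderer are placeholders):

    // Find the display-space depth of the camera's focal plane.
    double focal[3], display[3];
    renderer->GetActiveCamera()->GetFocalPoint(focal);
    renderer->SetWorldPoint(focal[0], focal[1], focal[2], 1.0);
    renderer->WorldToDisplay();
    renderer->GetDisplayPoint(display);

    // Convert the 2D click back to world coordinates at that depth.
    double world[4];
    renderer->SetDisplayPoint(clickX, clickY, display[2]);
    renderer->DisplayToWorld();
    renderer->GetWorldPoint(world);
    if (world[3] != 0.0)
    {
        world[0] /= world[3];
        world[1] /= world[3];
        world[2] /= world[3];
    }
    // world[0..2] now lies under the cursor near the focal plane, i.e.
    // comfortably inside the camera's viewing range, so the actor is visible.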
This might not be everyone's problem, but I hope I've spared someone a week of fiddling around with VTK.
It's a bit hard to help without seeing the code, but have you tried using
ui->qvtkwidget->update();
where ui is the instance of your class derived from QMainWindow?
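For context, the usual flow is something like the following sketch (renderer and lineActor are placeholder names for your own objects), which re-renders the scene without touching the camera:

    renderer->AddActor(lineActor);
    ui->qvtkwidget->GetRenderWindow()->Render();  // or simply ui->qvtkwidget->update()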
I am wondering how I can add a border and background to labels generated via the CCLabelBMFont class in cocos2d.
I don't want to use sprites because my labels are generated on the fly, keep changing, and vary in size.
I also want the user to be able to touch these labels and move them across the screen. When the user picks up a label, it wiggles a bit as if floating freely. At the same time, I want to keep the complexity low and go easy on memory and CPU.
Does anyone know the best way to achieve this?
The iOS app Letterpress has similar effects.
Create your own class that encapsulates the creation of this composite node.
It will have several layers: for example, the first can be a simple CCLayerColor of the given rect with zOrder -2, the next will be your CCLabelBMFont with zOrder -1, and then you can override the draw method to draw the border over your control. Everything you draw in that method is drawn at zOrder 0.
Then you can encapsulate any effects in this class as well; for example, you can rotate it a bit in a pick method, and so on, whatever you want.
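A minimal sketch of such a class in cocos2d-x 2.x style C++ (the Objective-C calls are analogous); the BorderedLabel name, colours and font file are illustrative only:

    class BorderedLabel : public cocos2d::CCNode
    {
    public:
        static BorderedLabel *create(const char *text, const char *fntFile)
        {
            BorderedLabel *node = new BorderedLabel();
            node->init(text, fntFile);
            node->autorelease();
            return node;
        }

        bool init(const char *text, const char *fntFile)
        {
            m_label = cocos2d::CCLabelBMFont::create(text, fntFile);
            cocos2d::CCSize size = m_label->getContentSize();
            setContentSize(size);

            // Background: a plain colour layer behind everything (zOrder -2).
            cocos2d::CCLayerColor *bg = cocos2d::CCLayerColor::create(
                cocos2d::ccc4(30, 30, 30, 255), size.width, size.height);
            addChild(bg, -2);

            // The label itself (zOrder -1).
            m_label->setAnchorPoint(cocos2d::ccp(0, 0));
            addChild(m_label, -1);
            return true;
        }

        // Everything drawn here appears at this node's own zOrder (0),
        // i.e. on top of the background and the label.
        virtual void draw()
        {
            CCNode::draw();
            cocos2d::CCSize size = getContentSize();
            cocos2d::ccDrawColor4B(255, 255, 255, 255);
            cocos2d::ccDrawRect(cocos2d::ccp(0, 0),
                                cocos2d::ccp(size.width, size.height));
        }

    private:
        cocos2d::CCLabelBMFont *m_label;
    };

Picking, wiggling and dragging can then live in methods of this class, and moving the node moves the background, border and text together.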
I'm currently developing a program to show and control animated sprites directly on the desktop. My problem now is actually drawing them onto the screen: the user should still be able to access other applications, as long as a sprite does not obstruct them.
My attempts are below, and I hope someone can point me in the right direction. I don't really care which library I need to use, as long as the performance is good enough for something around 20-30 animated sprites.
My attempts so far:
My first attempt was with Qt. I used a QWidget with a QLabel in it to show the pixmap of an object. The pixmap itself had an alpha channel, and I used QWidget's setMask(pixmap.mask()) method to remove anything I don't want to show. But this method can't be used for rapidly changing shapes, like moving creatures: if setMask is called every 50-100 ms to switch to the next movement phase, the CPU load gets too high with a lot of creatures moving at the same time.
My second attempt was to use one QWidget for all creatures. This way setMask is called only once, not once per creature. It's possible to move more creatures this way, but the screen flickers like hell when moving the mouse pointer over the creatures.
My third attempt was the XShape functions from Xlib to change the shape of each creature, but the performance is not much better than setMask.
I also tried transparency in Qt, but if I use a QWidget over the whole screen, the CPU load of X gets really high while moving the mouse. I don't know if I can do something better here.
Create a QGLWidget and learn to use the OpenGL API to draw sprites within it, even if only using glDrawPixels rather than texture objects.
You certainly won't have any problems drawing a few tens of sprites, and the time spent learning OpenGL will be a good investment if you aspire to do more complex graphical things in future.
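A bare-bones sketch of that approach (Qt 4-style QGLWidget; the SpriteWidget name and m_frame member are illustrative, and a real program would draw one frame per creature):

    #include <QGLWidget>
    #include <QImage>
    #include <QTimer>

    class SpriteWidget : public QGLWidget
    {
    public:
        explicit SpriteWidget(QWidget *parent = 0) : QGLWidget(parent)
        {
            // Redraw roughly 20 times per second to advance the animations.
            QTimer *timer = new QTimer(this);
            connect(timer, SIGNAL(timeout()), this, SLOT(updateGL()));
            timer->start(50);
        }

        void setFrame(const QImage &frame)
        {
            // Convert once to the pixel layout OpenGL expects.
            m_frame = QGLWidget::convertToGLFormat(frame);
        }

    protected:
        void initializeGL()         { glClearColor(0.0f, 0.0f, 0.0f, 0.0f); }
        void resizeGL(int w, int h) { glViewport(0, 0, w, h); }

        void paintGL()
        {
            glClear(GL_COLOR_BUFFER_BIT);
            if (m_frame.isNull())
                return;
            // One glDrawPixels call per sprite; loop over your creatures here.
            glRasterPos2f(-1.0f, -1.0f);
            glDrawPixels(m_frame.width(), m_frame.height(),
                         GL_RGBA, GL_UNSIGNED_BYTE, m_frame.bits());
        }

    private:
        QImage m_frame;
    };

Moving from glDrawPixels to textured quads later only changes paintGL(), so the rest of the program is unaffected.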
Not sure if this is your language, but the eSheep project is on GitHub and could get you started: https://github.com/Adrianotiger/desktopPet
This is related to one of my other questions.
If I am tiling a large image by creating a separate QGraphicsItem for each tile (with that tile's raster data as its pixmap), how do I keep track of each QGraphicsItem's position within the scene? Obviously, for raster data it is important to keep all the tiles touching so they form a continuous image, and they also have to be in the right place so the image doesn't look jumbled.
Does each tile have to have positioning methods that move it in relation to its neighbors on the top/left/bottom/right? This seems kind of clunky. Is there a better way to make them all move together?
In other words, if I pan the scene with scroll bars, or pick up the image and drag/move it around in the scene, I want all the tiles to also move and stay in the right position relative to each other.
What is the best approach for controlling the layout, which tiles need to be rendered (i.e. only the visible ones), and populating the data only once it is needed? Also, once a tile has been rendered, is the data from it ever dropped, and repopulated from the image file, say if it stays out of view for a while, then comes back later if someone pans to that section?
There are (more than) two ways of doing this:
1. Use QGraphicsItemGroup, which handles the grouping of your tile items for you. It moves, selects and updates its group members as if they were one item. I've never used it, but from the doc it seems to work for typical applications (see the sketch below).
2. Paint the tiles yourself in the paint() of one custom item. This gives you total control over how to place and draw the tiles, while the item truly acts as one item since it is, well, one item. This is what I do.
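A minimal sketch of the group approach (plain Qt C++; rows, cols, tileSize and tilePixmaps are placeholders for however you cut up the image):

    QGraphicsScene scene;
    QGraphicsItemGroup *image = new QGraphicsItemGroup();

    for (int row = 0; row < rows; ++row) {
        for (int col = 0; col < cols; ++col) {
            QGraphicsPixmapItem *tile = new QGraphicsPixmapItem(tilePixmaps[row][col]);
            // Position each tile relative to the group's origin so the tiles
            // butt up against each other with no gaps or overlaps.
            tile->setPos(col * tileSize, row * tileSize);
            image->addToGroup(tile);
        }
    }

    scene.addItem(image);
    // Dragging or repositioning now only ever touches the group; the tiles
    // keep their relative offsets automatically.
    image->setFlag(QGraphicsItem::ItemIsMovable, true);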