Let me give a brief summary of our setup.
There's a world.
Inside the world, there are lots of places.
Inside the places, there are lots of characters.
Now many of the characters share the same texture.
We currently have:
world (layer) - batchNode - character sprite
world (layer) - batchNode - place sprite
Hence a character's position is relative to the world, not to the place it is (conceptually) in.
How could we set up the class hierarchy so that
we still utilize the power of batchNode
and can use local coordinates for each character (relative to the place it is in)?
A simple structure such as
world (layer) - place (layer) - batchNode - character (sprite) won't work, because a world will contain many identical characters that would then not share a batchNode.
First of all, you don't need (multiple) layers. Your batch nodes can take the place of layers. You may want to keep one main layer for user input, or manage user input yourself by registering a touch delegate class with the touch dispatcher.
Conceptually what I would recommend the node hierarchy to be:
scene (touch delegate)
batchNode (places)
batchNode (characters)
or
scene
layer (input)
batchNode (places)
batchNode (characters)
Now assuming that your places (tiled images?) are somehow spread across the world, they each have a position. To have the characters in each place have coordinates relative to the place they're in, you can do one of two things:
Set the position of the character batchnode to be the position of the corresponding place. This works if you only batch the characters in that place.
If you want to batch all identical characters regardless of place, you can add "dummy" CCSprites (using a tiny, fully transparent image) to the batch node and position each one at the position of a specific place. Then add the characters for this place as children of that dummy sprite. If characters move from place to place, remove them from one dummy sprite and add them to the corresponding other dummy sprite; see the sketch below.
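A minimal cocos2d-x-style sketch of that setup. It assumes the characters.png atlas also contains a small, fully transparent region at (0, 0, 4, 4) for the dummy sprites, since children of a batch node must share the batch node's texture; placePosition and characterFrameRect are placeholders.

// Batch node shared by all characters in the world.
CCSpriteBatchNode* characterBatch = CCSpriteBatchNode::create("characters.png");
world->addChild(characterBatch);

// One dummy sprite per place, positioned at that place's world position.
// It points at a fully transparent region of the atlas, so it draws nothing.
CCSprite* placeAnchor = CCSprite::createWithTexture(
    characterBatch->getTexture(), CCRectMake(0, 0, 4, 4));
placeAnchor->setPosition(placePosition);   // world position of the place
characterBatch->addChild(placeAnchor);

// Characters become children of the dummy sprite, so their positions are
// local to the place while still being rendered by the batch node.
CCSprite* character = CCSprite::createWithTexture(
    characterBatch->getTexture(), characterFrameRect);
character->setPosition(ccp(10, 20));       // local to the place
placeAnchor->addChild(character);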
Finally, you can always create helper methods that transform the position of a sprite to local coordinates based on the place it is in. That way you don't need any specially set up node hierarchy, and you can still work with local coordinates where necessary. If you do that, you may find that whether you use local or world coordinates doesn't really make much of a difference: world coordinates are simply character position plus place position, and a simple addition/subtraction gets you from world to local coordinates and vice versa.
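A minimal sketch of such helpers, assuming each place stores its own world position (placePos):

CCPoint localToWorld(const CCPoint& local, const CCPoint& placePos) {
    return ccpAdd(local, placePos);   // world = local + place position
}
CCPoint worldToLocal(const CCPoint& world, const CCPoint& placePos) {
    return ccpSub(world, placePos);   // local = world - place position
}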
I am new to VTK. I would like to know how the VTK abstract picker behaves with multiple actors of different opacity values. Consider two actors, one inside the other, where the outer surface has an opacity of 0.3 and the inner one an opacity of 1.0. Since the outer one is semi-transparent, I can see the inner actor in the region where the two actors overlap. When I pick in that region, the resulting coordinates come from the inner surface, and when I pick a point outside the overlap region, I get the outer surface's coordinates. How can I perform the picking operation based on opacity values, such that I pick one actor at a time?
vtkAbstractPicker is, as the name suggests, just an abstract class that defines the interface for picking, but nothing more. When choosing actual pickers, you basically have a choice between picking based on ray casting or "color picking" using graphics hardware (see the linked documentation for the actual VTK classes that implement those).
Now to the actual problem: if I understood what you wrote correctly, you are facing a rather simple sorting problem. The opacity can be seen as a kind of priority: the actors with higher opacity should be picked even if they are inside others with lower opacity, right? Then all you need to do is get all the actors underneath your mouse cursor and choose the one with the highest opacity, or the closest one when they have the same opacity.
I think the easiest way to implement this is using the vtkPropPicker (vtkProp is a parent class for actor so this is a good picker for picking actors). It is one of the "hardware" pickers, using the color picking algorithm. The basic algorithm is that each pickable object is rendered using a different color into a hidden buffer (texture). This color (which after all is a 32-bit number like any other) serves as an ID of that object: when the user clicks on a screen, you read the color of the pixel from the picking texture on the clicked coordinates and then you simply look into the map of objects under the ID that is equal to that color and you have the object. Obviously, it cannot use any transparency - the individual colors are IDs of the objects, blending them would make them impossible to identify.
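Conceptually, the idea looks like the following raw-OpenGL illustration (this is not VTK's actual internals; objectCount, drawObjectGeometry, mouseX and mouseY are placeholders):

glDisable(GL_LIGHTING);
glDisable(GL_BLEND);   // blending would corrupt the color-encoded IDs
for (unsigned int id = 0; id < objectCount; ++id) {
    // Encode the ID into an RGB color and draw the object with it.
    glColor4ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF, 255);
    drawObjectGeometry(id);   // hypothetical: geometry only, no shading
}
unsigned char pixel[4];
// Note: OpenGL's origin is the bottom-left corner, so mouseY may need flipping.
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
unsigned int pickedId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);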
However, the vtkPropPicker provides a method:
// Perform a pick from the user-provided list of vtkProps
// and not from the list of vtkProps that the render maintains.
// If something is picked, a 1 is returned, otherwise 0 is returned.
// Use the GetViewProp() method to get the instance of vtkProp that was picked.
int PickProp (double selectionX, double selectionY,
vtkRenderer *renderer, vtkPropCollection *pickfrom);
What you can do with this is simple: first call PickProp(mouseClickX, mouseClickY, renderer of your render window, pickfrom), providing only the highest-priority actors in the pickfrom collection, i.e. the actors with the highest opacity. Underneath, this will render all the provided actors using the color-coding algorithm and tell you which actor is underneath the specified coordinates. If it picks something (the return value is 1), you call GetViewProp, which gives you a pointer to the picked actor, and you keep it. If it does not (the return value is 0), you call it again, this time providing the actors with the next lower opacity, and so on until you pick something or have tested all the actors.
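A minimal sketch of that loop, assuming actorsByOpacity holds one vtkPropCollection* per opacity group, sorted from highest to lowest opacity (mouseClickX, mouseClickY and renderer are placeholders):

#include <vtkPropPicker.h>
#include <vtkSmartPointer.h>

vtkSmartPointer<vtkPropPicker> picker = vtkSmartPointer<vtkPropPicker>::New();
vtkProp* picked = NULL;
for (size_t i = 0; i < actorsByOpacity.size() && picked == NULL; ++i) {
    // Render only this opacity group with the color-coding algorithm.
    if (picker->PickProp(mouseClickX, mouseClickY, renderer, actorsByOpacity[i])) {
        picked = picker->GetViewProp();   // highest-priority actor under the cursor
    }
}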
You can do the same with ray-casting pickers like vtkPicker as well: it casts a ray underneath your mouse and gives you all intersections with everything in the scene. But vtkPicker's API is optimized for finding the closest intersection; it might be a bit of work to get all of them and sort them, and in the end I believe the solution using vtkPropPicker will be faster anyway.
If this solution works for you, you might want to look at vtkHardwareSelector, which uses the same algorithm but, unlike vtkPropPicker, allows you to access the underlying picking texture many times, so that you don't need to re-render for every picking query. Depending on how your rendering pipeline is set up, this might be a more efficient solution (i.e., if you do a lot of picking without updating the scene).
I asked a question yesterday about a problem I was having displaying a level in a C++ 2D GameBoy Advance game when the level is larger than the screen size. However, I may have been slightly too specific, and so I want to ask this more generally.
What is the simplest way to go about trying to display a scrolling level which is large (512x512 pixels) on a screen which is much smaller (240x160 pixels)?
A brief description of my code structure so far: I have a base class called Object which defines (x, y) position and a width and height, there is an Entity class which inherits from Object and adds velocity components, and a Character class which inherits from Entity and adds movement functions. My player is a Character object, and the boxes I want the player to pick up are an array of Entity objects. Both the player and the boxes array are members of the Level class, which also inherits from Object.
So far I have implemented a game which works very well when the level is the same size as the screen - all the objects' positions are stored relative to their position in the level. However, I am having serious trouble trying to work out how to display the objects in the correct place when the level is offset on the screen. I want the viewport to never extend out of the level, and if the viewport is not against the edge of the level, display the player in the middle of the screen.
Should I try to work it out using a couple of simple offset variables to move the background? If so, in what order should the offsets be calculated and applied? How would the offsets apply differently to the player and the boxes? Or should I instead try creating another Object as a member of the Level class for the viewport? How would I go about calculating the offsets using that?
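For example, is something like this rough sketch of the offset-variable idea the right shape (the names are made up; 240x160 is the GBA screen)?

const int SCREEN_W = 240, SCREEN_H = 160;

int clampInt(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

// Center the camera on the player, then clamp it so the viewport
// never extends out of the level.
int camX = clampInt(player.x - SCREEN_W / 2, 0, level.width  - SCREEN_W);
int camY = clampInt(player.y - SCREEN_H / 2, 0, level.height - SCREEN_H);

// Every object (player, boxes, background) is then drawn at its
// level position minus the camera offset.
int screenX = obj.x - camX;
int screenY = obj.y - camY;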
Any advice provided will be greatly appreciated.
I'm self-learning C++ and playing around with 2D tile mapping.
I have been reading through this scrolling post here, which is based on this tiling tutorial.
Using the above tutorial and some help from the Pearson book Computer Graphics with OpenGL, I have written a small program that draws a 40x40 tiled world and a Sprite (also a tile).
In terms of drawing/render order, the map (or world) itself is the back layer and the Sprite is the forward-most (or top) layer. I'm assuming that's a good way of doing it, as it's easier for two tiles to interact than for a tile and a custom sprite or rectangle. Is that correct?
I have implemented a Keyhandling() function that lets you move the map inside the viewport using the keyboard's arrow keys. I have two variables, offsetx and offsety, that increase or decrease when a key is pressed. Depending on whether I apply the variables to the map or the sprite, I can move one or the other in any direction on the screen.
Neither seems to work very well on its own, so I assigned the variables to both (map and sprite), with positive values for the sprite and negative values for the map. Upon a key press, this allows my Sprite to move in one direction whilst the map moves in the opposite direction.
My problem is that the sprite soon moves far enough to leave the window, while the map does not move enough to bring more of it into the scene. (The window only shows about 1/8th of the tiles at any one time.)
I've been thinking about it all day, and I believe an efficient and effective way to solve this issue would be to fix the sprite to the centre of the screen and move the map around when a key is pressed. I'm unsure how to implement this, though.
Would that be a good way? Or is it expected to move the viewport or camera too?
You don't want to move everything relative to the Sprite whenever your character moves. Consider a more complicated world where you also have other things on the map, e.g. other sprites. It's simplest to keep the map fixed and move each sprite relative to the map (if it's a movable sprite). It just doesn't make much sense to move everything in the world whenever your character moves around in it.
If you want to render your world with your character always at the center, that's perfectly fine. The best thing to do is move the camera. This also allows you to zoom your camera in/out, rotate the camera, etc. with very little hassle in keeping track of all the objects in the world.
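For example, in the fixed-function pipeline, "moving the camera" can be as simple as translating the modelview matrix before drawing anything (camX/camY, windowWidth/windowHeight and the draw helpers below are placeholders):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// Shift the whole world so the camera position ends up at the window center.
glTranslatef(-camX + windowWidth / 2.0f, -camY + windowHeight / 2.0f, 0.0f);
drawMap();       // the map stays at its fixed world coordinates
drawSprites();   // each sprite is drawn at its own world position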
Be careful with your usage of the word "viewport". It means a very specific thing in OpenGL (i.e., look at the function glViewport). If your book uses it differently, that's fine; I'm just pointing this out because it's not 100% clear to me what you mean by it.
So with Tiled, I can set tile properties directly on a tile before placing it on a map.
This is how I have done collision checking, by setting the collision property to 'true' and then checking the tile properties when moving a sprite.
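For reference, a rough sketch of the kind of check I mean, in cocos2d-x style (the layer name, property key and tileCoord are just examples):

CCTMXLayer* meta = map->layerNamed("meta");
unsigned int gid = meta->tileGIDAt(tileCoord);
if (gid) {
    CCDictionary* props = map->propertiesForGID(gid);
    const CCString* collision = props ? props->valueForKey("collision") : NULL;
    if (collision && strcmp(collision->getCString(), "true") == 0) {
        // blocked: cancel the sprite's move
    }
}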
However, I would like to add a 'teleport' tile. When the player walks on a specific tile, it takes them to a separate location.
The problem I am running into is that when you set a property on a tile, you only get to set it once for the tile type, not per tile instance, meaning every placed copy of that tile would have the same teleport location.
Am I overlooking something? Is there a better way to go about doing this in Cocos2d in general?
You can use the object layer for this. Add an "object" (that's just a rectangle or point in Tiled) to the teleporter tile and use the object's properties to connect two locations together.
When you load the map you could walk over all objects to find the connecting objects. Then you know the two tile locations of the teleporter end points which you could store in a teleportation array. Every time your player moves to a new tile, check the teleportation array to see if the player is on one of the teleportation fields, and if he is, move him to the other teleportation tile.
Of course you could also check intersection with the object (rectangle), but since there's a chance that you might accidentally create an object (rectangle) that spans multiple tiles, it seems more reliable to resolve these objects to tile locations before the game starts.
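A rough sketch of what that lookup could look like (the struct and the helper are made up):

struct Teleporter {
    CCPoint from;   // tile coordinate of one pad
    CCPoint to;     // tile coordinate of the connected pad
};
std::vector<Teleporter> teleporters;   // filled while walking the object layer at load time

// Called whenever the player arrives on a new tile.
void checkTeleport(const CCPoint& playerTile) {
    for (size_t i = 0; i < teleporters.size(); ++i) {
        if (teleporters[i].from.equals(playerTile)) {
            movePlayerToTile(teleporters[i].to);   // hypothetical helper
            return;
        }
    }
}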
Well, this probably isn't the best way, but it's what I've done. You could create a meta layer and have separate tiles for each teleporting pad. So when you check whether the player is on teleportingpad1, you set the player's location to receiverPad1 (which could be another tile, an object in Tiled, or just a point you set when you check for collisions). You would then just make another one, e.g. teleportingpad2, teleportingpad3, etc., for more pads.
If you begin to render points, render a ton of vertices, and then end, you get noticeably better performance than if you begin points, render a vertex, end, and repeat a ton of times (e.g., redraws during pan and zoom actions for, say, 200,000 points are MUCH smoother).
I guess this might make sense, but it's disappointing. Is there a way to get back the performance while still rendering each point in its own begin-end block?
BACKGROUND:
I wanted to design a control that could contain a ton (upwards of a million in an extreme case) of "objects" that each do its own rendering. Many of these objects will represent themselves as points.
If I let a hundred-thousand points individually render themselves in their own begin-end blocks, I get a major performance hit (as opposed to rendering them all in a single begin-end block). It thus seems I might have to make the container aware of the way the objects render themselves (for example, beginning points, telling everything that needs to render a point to do so, and then ending).
This messes up the independent nature of the display-object relationship I wanted. It also messes up hit testing by selection because I don't think you can add a name to a vertex inside a begin-end block of points, right?
FYI (in case this helps) my project will be displaying a 2D scene (using an ortho projection) and requires hit testing to determine which related object a user might click. In general, the objects will represent "tracks" containing individual points connected with lines. The position data is generally static, but point and track colors and display representations may change due to user settings and selection information. One exception--a "playback" mode may allow the user to see only one track point at a time (the "current" point in the playback) and step through time from one point to the next. However, even in that case I assumed I would simply change which point on each track is actually displayed (at its "static" location) depending on the current time in the playback. If any of that brings to mind further suggestions for an OpenGL newbie, then much thanks!
To solve this issue, I started by using VBOs (which did speed things up). I then let each of my "track" objects draw its own set of points as well as the lines connecting the points (each track using two DrawArrays calls: one for the line strip and one for the points). Each point no longer has to draw itself independently of the other points; this was the major performance improvement.
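A rough sketch of what each track's draw looks like with this setup (vbo and pointCount being the track's own members):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);            // 2D vertices, tightly packed
glDrawArrays(GL_LINE_STRIP, 0, pointCount);    // the connecting lines
glDrawArrays(GL_POINTS, 0, pointCount);        // the points on top
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);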
BUT, I still needed hit-testing against the points, so..
Finally, I allowed each displayed object (in this case, the tracks) to do its own selection routine so each object can do whatever it needs for efficient selection. For tracks, this is a two-step process. First, a track names its entire line strip with one name (0) and performs the selection. If that results in a hit, the track does a second render pass, naming each individual point and line segment to hit-test against each part of the track. This makes hit-testing against each point quite fast.
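A rough sketch of the second, refining pass using the legacy GL_SELECT mechanism (assuming glSelectBuffer and the pick matrix are already set up; the draw helper is illustrative). Note that glLoadName cannot be called between glBegin/glEnd, which is why each point/segment is drawn as its own small batch:

glRenderMode(GL_SELECT);
glInitNames();
glPushName(0);
for (GLuint i = 0; i < pointCount; ++i) {
    glLoadName(i);              // name this point/segment
    drawPointAndSegment(i);     // draw just this piece of the track
}
glPopName();
GLint hits = glRenderMode(GL_RENDER);   // hit records identify the named pieces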
As an aside, I'm using .Net (C#) for my programming. With it, I created a class (SelectEventArgs) derived from EventArgs to describe selection criteria to objects being displayed. My SelectEventArgs class includes a list meant to be filled with selected objects. The display class then has an EventHandler<SelectEventArgs> event. When an object is added to a display, it subscribes to that event. When the event fires, each object determines whether it's been selected and fills the list of selected objects (in the SelectEventArgs passed in the event) with its information. After the event fires, I can access the list of objects returned in the SelectEventArgs instance to handle the user interaction. I find this to be a nice pattern for creating flexible display objects and selection tools.