CCRenderTexture size and position in Cocos2d-iphone

I was trying to use CCRenderTexture for pixel perfect collision detection, as outlined in this forum posting:
http://www.cocos2d-iphone.org/forum/topic/18522/page/2
The code "as is" works, and I have integrated it with my project
But I am having trouble doing some of the other things discussed:
If I create the render texture at any size smaller than the screen, the collision detection doesn't work properly: it reports collisions when the sprites are close to each other (<15 px) but not actually colliding.
I also have trouble changing the location of the render texture. Regardless of the position I specify, it seems to cover the region from the bottom left (0,0) up to the width and height specified. I followed this post:
http://www.cocos2d-iphone.org/forum/topic/18796
But it doesn't solve my problem; I still get the bad collisions described above. Also, the first post I mentioned contains comments from many users who have resized their textures to 10x10 and repositioned them off screen.
Does anyone have any sample code, so I can see what I am doing wrong? I just use the boilerplate code:
CCRenderTexture* _rt = [CCRenderTexture renderTextureWithWidth:winSize.width height:winSize.height];
_rt.position = CGPointMake(winSize.width*0.5f, winSize.height*0.5f);
[[RIGameScene sharedGameScene]addChild:_rt];
_rt.visible = YES;
I use cocos2d-iphone 1.0.1

You need to move the sprites you intend to draw into the region of the renderTexture before calling draw or visit. Moving the renderTexture does not change the position of _rt.sprite.
The intersection rectangle must be in the region of the renderTexture, otherwise you get inaccurate collisions.
It seems that you cannot change the position of _rt.sprite.
The solution that I use is to determine the origin (x,y) of the intersection box and offset both colliding sprites by that amount. This ensures that the intersection rectangle has its origin at (0,0). Then I calculate the intersection rectangle again and follow the instructions in the forum posting.
When determining the dimensions of the render texture, I make sure they are at least as large as the intersection rectangle, and that the intersection rectangle lies fully inside the render texture; this way the collisions are accurate. If even part of the intersection box is outside the render texture I get inaccurate collisions, so before drawing into the render texture, move the sprites you intend to visit so that the intersection box is entirely within it.
Remember to move the sprites back after you're done. :)
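Here is a sketch of that offset technique, written with cocos2d-x 2.x C++ calls (the cocos2d-iphone 1.x Objective-C equivalents map one-to-one); the function name and the pixel read-back placeholder are illustrative, not the forum's actual code:
bool checkPixelCollision(CCSprite* a, CCSprite* b)
{
    CCRect ra = a->boundingBox();
    CCRect rb = b->boundingBox();
    if (!ra.intersectsRect(rb))
        return false;                           // bounding boxes don't even touch

    // Intersection rectangle in world coordinates.
    float ix = MAX(ra.getMinX(), rb.getMinX());
    float iy = MAX(ra.getMinY(), rb.getMinY());
    float iw = MIN(ra.getMaxX(), rb.getMaxX()) - ix;
    float ih = MIN(ra.getMaxY(), rb.getMaxY()) - iy;

    // Offset both sprites so the intersection rect's origin lands at (0,0),
    // putting the whole intersection inside a render texture of size iw x ih.
    CCPoint pa = a->getPosition();
    CCPoint pb = b->getPosition();
    a->setPosition(ccpSub(pa, ccp(ix, iy)));
    b->setPosition(ccpSub(pb, ccp(ix, iy)));

    CCRenderTexture* rt = CCRenderTexture::create((int)iw, (int)ih);
    rt->begin();
    a->visit();                                 // draw both sprites off screen
    b->visit();
    rt->end();

    // ... read the texture's pixels back and test for overlapping opaque
    // pixels, exactly as in the forum posting ...
    bool collided = false;                      // placeholder for that pixel test

    a->setPosition(pa);                         // move the sprites back when done
    b->setPosition(pb);
    return collided;
}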

Related

Draw texture on finger movement in Cocos2dx

I want to create an app in Cocos2d/Cocos2dx in which I have an image that is not visible, but when I move my finger on the device it starts drawing: only the part of the image where I move my finger is drawn.
Thanks in advance.
There are two ways I can think of to draw such an image.
The first way would be like a brush. You would use a RenderTexture and visit a brush sprite to draw it into that texture. If you only need solid colors (opacity is fine), you could also use the primitive draw commands (drawCircle, drawPoly, drawSegment). You will need a high rate of touch tracking, and you will likely want to draw segments or Bezier curves between touch movements to catch fast strokes; see the sketch after the links below.
http://discuss.cocos2d-x.org/t/using-rendertexture-to-render-one-sprite-multiple-times/16332/3
http://discuss.cocos2d-x.org/t/freehand-drawing-app-with-cocos2d-x-v3-3-using-rendertexture/17567/9
Searching for how other drawing games work would be useful.
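A minimal sketch of the brush approach, in cocos2d-x v3-style C++ (DrawingLayer, _canvas, and _brush are illustrative names, not from the links above):
// Stamp the brush sprite into a RenderTexture on every touch move,
// interpolating along the segment between the previous and the current
// touch point so that fast swipes still leave a continuous stroke.
void DrawingLayer::onTouchMoved(cocos2d::Touch* touch, cocos2d::Event* event)
{
    cocos2d::Vec2 cur  = touch->getLocation();
    cocos2d::Vec2 prev = touch->getPreviousLocation();
    int steps = MAX(1, (int)(cur.distance(prev) / 2.0f)); // one stamp every ~2 px

    _canvas->begin();                           // _canvas is a RenderTexture*
    for (int i = 0; i <= steps; ++i)
    {
        float t = (float)i / (float)steps;
        _brush->setPosition(prev.lerp(cur, t)); // _brush is a Sprite*
        _brush->visit();                        // draw the stamp into the canvas
    }
    _canvas->end();
}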
The second way I can envision is similar to revealing, except using an inverse mask: you would draw a fixed image, but reveal it progressively by drawing.
http://www.raywenderlich.com/4428/how-to-mask-a-sprite-with-cocos2d-2-0
There are more elaborate ways to handle drawing into the RenderTexture so that the brush design tiles correctly and repeats at a given size, but that starts to approach building an image-editing tool.

Methods of zooming in/out with openGL (C++)

I am wondering what kinds of methods are commonly used for zooming in and out.
In my current project, I have to display millions of 2D rectangles on the screen, and I am using a fixed viewport and changing the gluOrtho2D parameters when I have to zoom in/out.
I am wondering if this is a good way of doing it and what other solutions I could use.
I also have another question which I think is related to how I should do the zooming.
As I said, I am currently using a fixed viewport and changing the gluOrtho2D parameters in my code, and I assumed that OpenGL would be able to figure out which rectangles are off screen and not render them. However, it seems that OpenGL redraws all the rectangles again and again: the rendering time when viewing millions of rectangles (zoomed out) is equal to that when viewing hundreds of rectangles (zoomed into a particular area), which is the opposite of what I expected. I am wondering whether this is related to the zooming method I use, or whether I am missing something important.
i.e., I am using VBOs while rendering the rectangles.
and I assumed that OpenGL would be able to figure out which rectangles are off screen
You assumed wrong.
and not render them.
OpenGL is a rather dumb drawing API. There's no such thing as a scene in OpenGL. All it does is color pixels in the framebuffer one point, line, or triangle at a time. When geometry lies outside the viewport it still has to be processed right up to the point where it gets clipped (and then discarded).
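The practical consequence: if you want off-screen rectangles skipped, you have to cull them yourself before submitting them. A rough sketch (the Rect type and the view-bounds parameters are assumptions, not from the question):
// CPU-side visibility test of an axis-aligned rectangle against the
// current gluOrtho2D view bounds; only rectangles that pass get drawn.
struct Rect { float x, y, w, h; };

bool isVisible(const Rect& r, float left, float right, float bottom, float top)
{
    return r.x + r.w >= left   && r.x <= right &&
           r.y + r.h >= bottom && r.y <= top;
}

// Rebuild the VBO (or an index buffer) whenever the view changes, using only
// the rectangles that pass this test. For millions of rectangles a spatial
// index such as a quadtree or a uniform grid makes the query sub-linear
// instead of a scan over every rectangle.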

Using texture during window resize

So what I'm trying to do is copy the current front buffer to a texture and display that texture while the window is being resized, to mimic what a normal window resize looks like. I'm doing this because the scene is too expensive to render during a resize and I want the resize to stay fluid.
The texture copying is fine, but I'm struggling to work out the maths to scale and translate the texture appropriately (I know there will be borders visible when magnifying beyond the largest image dimension).
Can anyone help me?
but I'm struggling to work out the maths to scale and translate the texture appropriately
Well, this depends on which axis your field of view is based on. If it's the vertical axis, then increasing the width/height ratio will inevitably lead to letterboxing left and right. Similarly, if your FOV is based on the horizontal axis, increasing height/width will letterbox top and bottom. If the aspect ratio changes in the opposite direction you get no letterboxing, because no additional picture information is needed.
There's unfortunately no one-size-fits-all solution for this. Either you live with some borders, or you stretch your image without keeping the aspect ratio, or before resizing the window you render with a much larger FOV into a square texture of which you show only a subset.
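For the live-with-borders option, the maths is a plain aspect fit: scale the snapshot uniformly until it fits the new window and center it on the other axis. A sketch, where texW/texH are the snapshot's dimensions and winW/winH the window's current size (all names are assumptions):
// Aspect-fit placement of the cached snapshot inside the resized window.
float texAspect = (float)texW / (float)texH;
float winAspect = (float)winW / (float)winH;

float drawW, drawH;
if (winAspect > texAspect) {         // window got relatively wider:
    drawH = (float)winH;             // fill the height, letterbox left/right
    drawW = drawH * texAspect;
} else {                             // window got relatively taller:
    drawW = (float)winW;             // fill the width, letterbox top/bottom
    drawH = drawW / texAspect;
}
float offsetX = 0.5f * ((float)winW - drawW);  // translation that centers it
float offsetY = 0.5f * ((float)winH - drawH);
// Draw the textured quad at (offsetX, offsetY) with size (drawW, drawH)
// under a projection of glOrtho(0, winW, 0, winH, -1, 1).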

OpenGL rendering and rasterization of slivers

I am testing some rendering with OpenGL and I noticed that I have issues with long, thin polygons that together form a plane. When two of these long polygons sit directly next to each other, touching along the long side, some of the pixels at the shared edge are invisible, and these invisible pixels move around when I move the camera.
What I found is that pixels at the edge of these "sliver" polygons become invisible because the rasterizer decides they are not inside the polygon at that specific view angle.
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
If you find my description of the problem a bit weird, see http://www.ugrad.cs.ubc.ca/~cs314/Vjan2008/slides/week5.day3-4x4.pdf, page 27 and following. That's what I mean.
EDIT: OK, I think I have to make clear what my problem is, because I have a feeling that I can't address it with anti-aliasing techniques:
aaa|b|cc
aaa|b|cc
aaa|b|cc
   ^ ^
   1 2
- the polygons a, b and c form a plane
- some pixels at edges 1 and 2 are invisible at certain camera angles
What I didn't figure out is how to tell OpenGL to also put pixels on screen that are directly at the edge of that polygon.
In general, you don't. If OpenGL thinks that a part of a triangle is too thin to be rendered at a given resolution, then it's too thin to be rendered. The general form of this issue is called "aliasing".
The solution is to use an antialiasing technique. For example, multisampling. When you create the context, select a number of samples to use.
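With GLFW, for example, the request looks like the snippet below (GLFW is just one way to create a context; SDL or the platform's pixel-format APIs have equivalents):
#include <GLFW/glfw3.h>

// Ask for a 4x multisampled framebuffer before creating the context,
// then enable multisample rasterization while rendering.
glfwWindowHint(GLFW_SAMPLES, 4);
GLFWwindow* window = glfwCreateWindow(800, 600, "scene", nullptr, nullptr);
glfwMakeContextCurrent(window);
glEnable(GL_MULTISAMPLE);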

In openGL, how can you get items to draw back to front?

By default it seems that objects are drawn front to back. I am drawing a 2-D UI object and would like to create it back to front. For example, I could create a white square first, then a slightly smaller black square on top of it, thus creating a black pane with a white border. This post had some discussion of it and described this order as the "Painter's Algorithm", but ultimately the example it gave simply rendered the objects in reverse order to get the desired effect. I figure back-to-front rendering (first objects go in back, subsequent objects get drawn on top) can be achieved via some transformation (glOrtho?).
I will also mention that I am not interested in a solution using a wrapper library such as GLUT.
I have also found that the default behavior on the Mac using the Cocoa NSOpenGLView appears to be back-to-front drawing, whereas on Windows I cannot get this behavior. The setup code I am using on Windows is this:
glViewport (0, 0, wd, ht);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho (0.0f, wd, ht, 0.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
Objects are drawn in the order you submit them as long as depth testing doesn't interfere, so make sure it is off; this in effect causes objects to draw back to front:
glDisable(GL_DEPTH_TEST); // ignore depth values (Z) so later draws overwrite earlier ones
In particular, be sure you do not call this:
glEnable(GL_DEPTH_TEST); // enables depth testing
(Note that glDepthFunc(GL_NEVER) is not a substitute: with depth testing enabled it rejects every fragment, and with depth testing disabled the comparison function is ignored.)
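A minimal sketch of the white/black square case with depth testing off, in the same fixed-function style as the setup code above (the coordinates are made up):
glDisable(GL_DEPTH_TEST);            // later draws simply overwrite earlier ones

glColor3f(1.0f, 1.0f, 1.0f);         // white square first: becomes the border
glBegin(GL_QUADS);
    glVertex2f(10.0f,  10.0f);
    glVertex2f(110.0f, 10.0f);
    glVertex2f(110.0f, 110.0f);
    glVertex2f(10.0f,  110.0f);
glEnd();

glColor3f(0.0f, 0.0f, 0.0f);         // smaller black square on top: the pane
glBegin(GL_QUADS);
    glVertex2f(20.0f,  20.0f);
    glVertex2f(100.0f, 20.0f);
    glVertex2f(100.0f, 100.0f);
    glVertex2f(20.0f,  100.0f);
glEnd();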
For your specific question: no, there is no standardized way to specify depth ordering in OpenGL. Some implementations may do front-to-back depth ordering by default because it's usually faster, but that is not guaranteed (as you discovered).
But I don't really see how it will help you in your scenario. If you draw a black square in front of a white square the black square should be drawn in front of the white square regardless of what order they're drawn in, as long as you have depth buffering enabled. If they're actually coplanar, then neither one is really in front of the other and any depth sorting algorithm would be unpredictable.
The tutorial that you posted a link to only talked about it because depth sorting IS relevant when you're using transparency. But it doesn't sound to me like that's what you're after.
But if you really have to do it that way, then you have to do it yourself: send your white square down the rendering pipeline first, then send your black square. If you do it that way with depth buffering disabled, the squares can be coplanar and you are still guaranteed that the black square is drawn over the white square.
Drawing order is hard. There is no easy solution. The painter's algorithm (sort objects by their distance from the camera) is the most straightforward, but as you have discovered, it doesn't solve all cases.
I would suggest a combination of the painter's algorithm and layers. You build layers for specific elements of your program: a background layer, object layers, special-effect layers, and a GUI layer.
Use the painter's algorithm on each layer's items. In some special layers (like your GUI layer), don't sort with the painter's algorithm but by call order: you call the white square first, so it gets drawn first.
Draw items that you want in back slightly behind the items that you want in front; that is, actually change the z value (assuming z is perpendicular to the screen plane). You don't have to change it by much to control which item draws in front, and if you only change the z value slightly, you shouldn't notice much offset from the desired position. You could even get fancy and calculate the correct x,y position based on the changed z, so that the item appears exactly where it is supposed to be.
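As a sketch of that z-offset idea (drawQuad is a hypothetical helper that emits a quad at depth z; with the glOrtho(..., -1.0f, 1.0f) call from the question, larger z ends up nearer the viewer):
glEnable(GL_DEPTH_TEST);      // let the depth buffer do the ordering
drawQuad(whiteRect, -0.01f);  // nudged back: the white border
drawQuad(blackRect,  0.0f);   // in front: the black pane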
Your geometry will be drawn in the exact order you issue the glBegin/glEnd calls. You can get depth buffering using the z-buffer, and if your 2D objects have different z values, you can get the effect you want that way. The only way you would be seeing the behavior you describe on the Mac is if the program draws things in back-to-front order manually or uses the z-buffer to accomplish this; OpenGL has no such automatic functionality.
As AlanKley pointed out, the way to do this is to disable the depth buffer. The painter's algorithm is really a 2D scan-conversion technique used to render polygons in the correct order when you don't have something like a z-buffer. You wouldn't apply it to 3D polygons directly: you'd typically transform and project them (handling intersections with other polygons), sort the resulting list of 2D projected polygons by their projected z-coordinate, and then draw them in reverse z-order.
I've always thought of the painter's algorithm as an alternate technique for hidden surface removal when you can't (or don't want to) use a z-buffer.