cocos2d 2.0 custom drawing causes incomplete scene rendering - cocos2d-iphone

I have a weird problem that I have not found a solution for despite lots and lots of googling and reading.
I have a scene that uses a dynamically generated background. The code for the background is based on this tutorial and other code related to it, such as Haqu's Tiny Wings code on GitHub.
Anyway, my code has simplified the hill generation, and it is all contained in one CCNode class called StripedTerrain. It all works fine (now!), but when going to another scene that uses the same layout with the same background sprite, it doesn't render completely. See the screenshot. Image A is the first run with my code as is. Image B is after a replaceScene call to a new scene of the same scene class. Then I made this small change to my draw code, just before popping the matrix:
ccDrawColor4B(255, 255, 255, 255);
ccDrawLine(ccp(0.0,0.0),ccp(0.0,0.0));
and then it works fine (images C and D).
This is the strangest thing and I cannot figure out what's going wrong.
I'll post the draw call code, but spare you the rest of the details:
/**
* Prior to the draw method we have already done the following:
* Randomly selected, or been given, two colors to paint the stripes onto our texture
* Generated a texture to overlay onto our hill vertex geometry
* Generated the top of the hill peaks and valleys
* Generated the hill vertices that will fill in the surface of the hills
* with the texture applied
*/
- (void) draw {
    CC_NODE_DRAW_SETUP();
    // this statement fixed the screwed-up jagged line rendering:
    // since we are only using position and texcoord vertices, we have to use this shader
    kmGLPushMatrix();
    CHECK_GL_ERROR_DEBUG();

    ccGLBlendFunc(CC_BLEND_SRC, CC_BLEND_DST); //TB 25-08-12: Allows change of blend function
    ccGLBindTexture2D(self.stripes.texture.name);

    //TB 25-08-12: Assign the vertices array to the 'position' attribute
    glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, _hillVertices);
    //TB 25-08-12: Assign the texCoords array to the 'TexCoords' attribute
    glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, _hillTexCoords);
    glEnableVertexAttribArray(kCCVertexAttrib_Position);
    glEnableVertexAttribArray(kCCVertexAttrib_TexCoords);

    //TB 25-08-12: Draw the above arrays
    glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);
    CHECK_GL_ERROR_DEBUG();

    // Debug drawing (change if(0) to if(1) to enable)
    if (0) {
        for (int i = MAX(_fromKeyPointI, 1); i <= _toKeyPointI; ++i) {
            ccDrawColor4B(255, 0, 0, 255);
            ccDrawLine(_hillKeyPoints[i-1], _hillKeyPoints[i]);
        }
        for (int i = 0; i < _nHillVertices; i += 3) {
            ccDrawColor4B(255, 0, 0, 255);
            ccDrawLine(_hillVertices[i+1], _hillVertices[i+2]);
        }
    }

    // have to do this to force it to work on the next scene load
    ccDrawColor4B(255, 255, 255, 255);
    ccDrawLine(ccp(0.0, 0.0), ccp(0.0, 0.0));

    kmGLPopMatrix();
    CC_INCREMENT_GL_DRAWS(1);
}
Any obvious mistakes above?
I've set the shader in another method.
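For reference, the shader assignment done in that other method presumably follows the standard cocos2d 2.0 pattern for nodes that use only position and texture coordinates, something like:

self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionTexture];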

Check whether the previous scene and its children run their dealloc method. If not, and one or more of them are leaking, the weirdest things can happen.
The same goes for overriding cleanup without calling [super cleanup].
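A minimal sketch of such a check, using temporary logging in the scene (or a child node) to confirm it is actually torn down after replaceScene (the logging is only illustrative):

- (void) cleanup {
    NSLog(@"%@ cleanup", self);
    [super cleanup];   // forgetting this call can keep children and schedulers alive
}

- (void) dealloc {
    NSLog(@"%@ dealloc", self);
    [super dealloc];   // omit this line if the project uses ARC
}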

Related

OpenGL draws weird lines on top of polygons

Let me introduce you to Fishtank: it's an aquarium simulator I am writing in OpenGL to learn before moving on to Vulkan.
I have drawn many fish like these:
Aquarium
Now I have added the grid functionality, which looks like this:
Grid
But when I let it run for some time, these lines appear:
Weird Lines
I've seen somewhere that I should clear the depth buffer, which I did, but that doesn't resolve the problem.
Here's the code of the function:
void Game::drawGrid()
{
    std::vector<glm::vec2> gridVertices;
    // Include the last column, as the drawn line is the left edge of the column
    for (unsigned int x = 1; x < mGameMap.mColCount; x += 1)
    {
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, mGameMap.mCellSize)));
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, x*mGameMap.mCellSize, (mGameMap.mRowCount-1)*mGameMap.mCellSize)));
    }
    // Same here, but note: normally the origin is at the top-left corner and the y-axis points down.
    // However, OpenGL's y-axis is reversed. That's why taking mRowCount-1 into account
    // actually draws the very first line.
    for (unsigned int y = 1; y < mGameMap.mRowCount; y += 1)
    {
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, mGameMap.mCellSize, y*mGameMap.mCellSize)));
        gridVertices.push_back(glm::vec2(transformToNDC(mWindow, (mGameMap.mColCount - 1)*mGameMap.mCellSize, y*mGameMap.mCellSize)));
    }

    mShader.setVec3("color", glm::vec3(1.0f));
    glBufferData(GL_ARRAY_BUFFER, gridVertices.size()*sizeof(glm::vec2), gridVertices.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT_VEC2, GL_FALSE, sizeof(glm::vec2), (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
    glClear(GL_DEPTH_BUFFER_BIT);
}
I'd like to erase those lines and understand why OpenGL does this (or maybe it's me but I don't see where).
This is the problematic line:
glDrawArrays(GL_LINES, 0, gridVertices.size()*sizeof(glm::vec2));
If you look at the documentation for this function, you will find
void glDrawArrays( GLenum mode, GLint first, GLsizei count);
count: Specifies the number of indices to be rendered
But you are passing the byte size. Hence, you are asking OpenGL to draw more vertices than there are in your vertex buffer. The specific OpenGL implementation you are using is probably reading past the end of the grid vertex buffer and finding vertices from the fish vertex buffer to draw (but this behavior is undefined).
So, just change it to
glDrawArrays(GL_LINES, 0, gridVertices.size());
A general comment: do not create vertex buffers every time you want to draw the same thing. Create them at the beginning of the application and re-use them. You can also change their content if needed, but be careful with that, since it comes at a performance price. Creating vertex buffers is even costlier.
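A rough sketch of that pattern, creating the grid buffer once and reusing it every frame (the names gridVBO, gridVertexCount, createGridBuffer and drawGridBuffer are illustrative, not from the original code):

GLuint gridVBO = 0;
GLsizei gridVertexCount = 0;

/* Call once, e.g. at startup, after the grid vertices have been generated */
void createGridBuffer(const GLfloat* vertices, GLsizei vertexCount)
{
    gridVertexCount = vertexCount;
    glGenBuffers(1, &gridVBO);
    glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
}

/* Call every frame */
void drawGridBuffer(void)
{
    glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), (void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_LINES, 0, gridVertexCount);   /* the count is a number of vertices, not bytes */
}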

How to fill a rectangle with color when there is a shader in Processing?

Usually with Processing, you fill a rectangle with a given color like so:
fill(255, 0, 0); // Make it red
rect(0,0, 100, 100); // Make it square
However, this does not work. Instead, the rectangle displays this shader. Somewhere earlier in the execution I call this:
PShader shader = loadShader(filePath); // A shader is loaded once upon startup
// In a draw() method
shader(shader);
rect(0, 0, screenWidth, screenHeight);
This draws a rectangle which covers the whole screen, and a nice dynamic background is displayed.
Why does the fill() call have no effect and why is the shader drawn in the rectangle instead? How can I keep the background shader and also display a red rectangle in Processing?

Not able to get output with glDrawElements() & glMultiDrawElements()

I'm in the process of building a graphics app where the user can specify vertices by clicking on a canvas and then the vertices are used to draw polygons.
The app supports line, triangle, and polygon modes. Drawing a line or a triangle is done by counting the number of clicks; then vertex arrays are created, the data is bound to buffers, and it is rendered using glDrawArrays(). The tricky one is polygon mode: the user can specify any number of vertices, and clicking the right mouse button triggers drawing. I initially planned to use glMultiDrawElements(), but somehow I wasn't getting any output, so I tried calling glDrawElements() in a loop, still with no luck. I have searched a lot and read a lot of documentation about using glDrawElements()/glMultiDrawElements() with VBOs and VAOs, and also with glVertexPointer() and glColorPointer(). Still no luck.
I have used the following for keeping track of vertex attributes:
GLfloat ** polygonVertices; //every polygon vertex list goes into this..
GLuint * polygonIndicesCounts; //pointer to hold the number of vertices each polygon has
GLuint ** polygonIndices; //array of pointers to hold indices of vertices corresponding to polygons
GLfloat * polygonColors; //for every mouse click, colors are randomly generated.
and the code for rendering:
glVertexPointer(4, GL_FLOAT, 0, (GLvoid*)polygonVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, (GLvoid*)polygonColors);
//glMultiDrawElements(GL_POLYGON, polygonIndicesCounts, GL_UNSIGNED_INT, polygonIndices, polygonCount);
for(int i = 0 ; i < polygonCount; i ++)
glDrawElements(GL_POLYGON, polygonIndicesCounts[i], GL_UNSIGNED_INT, polygonIndices[i]);
Why is polygonVertices a pointer to a pointer? If you cast that to (GLvoid*), the only thing OpenGL sees is the array of pointer values, not the vertex data they point to. You want the vertices to be one flat array, so the type should be compatible with float* (not float**). A pointer to a pointer makes sense only for the indices argument of the glMultiDrawElements call.
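To illustrate the layout described above, here is a sketch with one flat vertex/color array shared by all polygons and a per-polygon array of index pointers (the names are illustrative, not from the original code):

GLfloat* allVertices;           /* x,y,z,w for every clicked point, stored contiguously */
GLfloat* allColors;             /* r,g,b,a for every vertex, stored contiguously */
GLsizei* polygonIndicesCounts;  /* number of indices in each polygon */
GLuint** polygonIndices;        /* polygonIndices[i] = index array of polygon i */
GLsizei  polygonCount;

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(4, GL_FLOAT, 0, allVertices);   /* flat array, not a pointer to pointers */
glColorPointer(4, GL_FLOAT, 0, allColors);

/* The indices argument is the one place a pointer to pointers is expected:
   one index array per polygon */
glMultiDrawElements(GL_POLYGON, polygonIndicesCounts, GL_UNSIGNED_INT,
                    (const GLvoid* const*)polygonIndices, polygonCount);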

Animate CCTexture2D using spritesheet

I have made a sprite sheet in Zwoptex. I know TexturePacker is better than this, but I have just started with cocos2d-iphone, so I haven't purchased it.
I have made a CCTexture2D using the following code.

texture = [[CCTextureCache sharedTextureCache] addImage:@"1.png"];
self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionTexture];
CC_NODE_DRAW_SETUP();
And I use this CCTexture2D object to draw a texture around a soft body, using the following code.
ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position | kCCVertexAttribFlag_TexCoords);
ccGLBindTexture2D([texture name]);
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, textCoords);
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_TRUE, 0, triangleFanPos);
glDrawArrays(GL_TRIANGLE_FAN, 0, NUM_SEGMENT+2);
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Color);
Now I want to animate the texture of the soft body. I know how to animate a sprite using a sprite sheet, but I am confused about how to make a CCTexture2D from a sprite sheet and how to animate the texture using different images, as we do in sprite animation.
Can anyone give me any direction on solving this issue?
First of all, I want to say sorry for this question. I want to share my code here. I wanted to animate the texture that I am applying to the soft body.
I was confused about how to give a new image to the CCTexture2D object and rotate it to the previous texture's angle, but I came to know that cocos2d-iphone handles that automatically.
I animate my CCTexture2D object by just giving it the image sequence with a delay. The code for that is shown below.
The called function:
-(void)changeTexture1:(NSNumber*)i {
    texture = [[CCTextureCache sharedTextureCache] addImage:[NSString stringWithFormat:@"blink%@.png", i]];
}
The calling function:
-(void)makeAnimation {
    for (int i = 2; i < 7; i++) {
        [self performSelector:@selector(changeTexture1:) withObject:[NSNumber numberWithInt:i] afterDelay:0.1*i];
    }
    int j = 1;
    for (int i = 6; i > 0; i--) {
        [self performSelector:@selector(changeTexture1:) withObject:[NSNumber numberWithInt:i] afterDelay:0.1*6 + 0.1*j++];
    }
}

LWJGL 3D picking

So I have been trying to understand the concept of 3D picking, but as I can't find any video guides or any concrete guides that actually speak English, it is proving to be very difficult. If anyone is well experienced with 3D picking in LWJGL, could you give me an example with a line-by-line explanation of what everything means? I should mention that all I am trying to do is shoot a ray out of the center of the screen (not where the mouse is) and have it detect just a normal cube (rendered as 6 QUADS).
Though I am not an expert with 3D picking, I have done it before, so I will try to explain.
You mentioned that you want to shoot a ray rather than go by the mouse position; as long as this ray is shot straight into the scene along the view direction, this method will still work, just the same as it would for a random screen coordinate. If not, and you actually wish to shoot a ray out at an angle in some other direction, things get a little more complicated, but I will not go into that (yet).
Now how about some code?
Object* picking3D(int screenX, int screenY) {
    // Disable any lighting or textures
    glDisable(GL_LIGHTING);
    glDisable(GL_TEXTURE_2D);

    // Render the scene, giving each object a unique solid colour
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    orientateCamera();
    for (int i = 0; i < objectListSize; i++) {
        GLubyte blue  = i % 256;
        GLubyte green = min((int)((float)i / 256), 255);
        GLubyte red   = min((int)((float)i / 256 / 256), 255);
        glColor3ub(red, green, blue);
        orientateObject(i);
        renderObject(i);
    }

    // Get the pixel under the requested screen coordinate
    GLubyte pixelColors[3];
    glReadPixels(screenX, screenY, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixelColors);

    // Calculate the index by undoing the colour encoding
    int index = pixelColors[0]*256*256 + pixelColors[1]*256 + pixelColors[2];

    // Return the object
    return getObject(index);
}
Code Notes:
screenX is the x location of the pixel, and screenY is the y location of the pixel (in screen coordinates)
orientateCamera() simply calls any glTranslate, glRotate, glMultMatrix, etc. needed to position (and rotate) the camera in your scene
orientateObject(i) does the same as orientateCamera, except for object 'i' in your scene
when I 'calculate the index', I am really just undoing the math I performed during the rendering to get the index back
The idea behind this method is that each object is rendered exactly how the user sees it, except that each model is drawn in one solid colour. Then you check the colour of the pixel at the requested screen coordinate, and whichever model that colour is indexed to is your object!
I do recommend, however, adding a check for the background color (or your glClearColor), just in case you don't actually hit any objects.
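For example, assuming the clear color is pure white and no object index maps to it, the check might look like this (illustrative only):

// If the pixel matches the clear colour, the ray hit empty space
if (pixelColors[0] == 255 && pixelColors[1] == 255 && pixelColors[2] == 255) {
    return NULL;
}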
Please ask for further explanation if necessary.