I'm writing a 2D game in OpenGL. I already set up an orthographic projection so I can easily know where a quad will end up on screen. The problem is, I also want to be able to map pixels directly to texture coords, so I also applied an orthographic transformation (using gluOrtho2D) to the texture matrix. Now I can map pixels directly using integers and glTexCoord2i. The thing is, after googling/reading/asking, I found out no one really seems to know the behavior of glTexCoord2i, but it works just fine the way I'm using it. Some sample test code I wrote follows:
glBegin(GL_QUADS);
glTexCoord2i(16,0);
glVertex2f(X, Y);
glTexCoord2i(16,16);
glVertex2f(X, Y+32);
glTexCoord2i(32, 16);
glVertex2f(X+32, Y+32);
glTexCoord2i(32, 0);
glVertex2f(X+32, Y);
glEnd();
So, is there any problem with what I'm doing, or is this approach correct?
There's nothing special about glTexCoord*i; it's just the variant of glTexCoord that takes integer arguments for convenience. The values are transformed by the texture matrix in the same way as all other texture coordinates.
If you want to express your texture coordinates in pixels, your code looks totally OK.
EDIT: If you want to take your OpenGL coding to the "next level", you should consider dropping immediate-mode rendering (glBegin/glEnd) and instead building vertex buffers and drawing them with glDrawArrays; this is much faster.
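For example, a rough sketch of the same quad drawn from a VBO with the fixed-function pipeline (the interleaved layout and variable names are just illustrative, and the buffer functions need GL 1.5+ or an extension loader; in a real program you'd create and fill the buffer once, not every frame):

// Interleaved data: x, y position followed by s, t texture coordinates.
GLfloat quad[] = {
    X,      Y,      16.0f,  0.0f,
    X,      Y + 32, 16.0f, 16.0f,
    X + 32, Y + 32, 32.0f, 16.0f,
    X + 32, Y,      32.0f,  0.0f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

// Point the fixed-function vertex and texcoord arrays into the buffer.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)0);
glTexCoordPointer(2, GL_FLOAT, 4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

glDrawArrays(GL_QUADS, 0, 4);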
I have a rectangle and a circle. For the circle I have all the points' coordinates, because I calculate them with math in order to draw it. The rectangle is drawn using two triangles, so 4 vertices. Both are free to translate and rotate in the plane, and I want to determine when one of them touches the other. I thought this happens when one of the coordinates of one of them is the same as one of the coordinates of the other object. The problem is that I don't have an array of all the coordinates of the rectangle. Is there a method in OpenGL that returns all the coordinates of the drawn triangles, and not only the vertices?
There is a way to record the coordinates and commands supplied to OpenGL, using the feedback buffer, but that's a rather inefficient approach, because you would need to decode the commands stored in the buffer.
If you didn't have an array of coordinates, then you were already using the most inefficient way to supply geometry to OpenGL:
glBegin(...);
glVertex3f(...);
glVertex3f(...);
...
glVertex3f(...);
glEnd();
The more efficient way is to use a vertex buffer, which by its nature requires you to keep an array of coordinates. With a large number of vertices, the VBO approach is many times faster than copying vertex by vertex.
OpenGL doesn't store the coordinates you've supplied to it any longer than required, i.e. past rasterization. The whole goal of OpenGL is to create an image on screen, not to solve abstract geometric tasks.
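In practice that means keeping your own copy of the rectangle's corners and applying the same translation/rotation to them on the CPU before testing against the circle. A rough sketch of that idea (the sizes, names and helper function are hypothetical, not from the question's code):

#include <math.h>

/* Rectangle corners in model space, kept by the application
 * (OpenGL will not hand them back to you). Example size 2 x 1. */
#define RECT_W 2.0f
#define RECT_H 1.0f
static const float rect[4][2] = {
    { 0, 0 }, { RECT_W, 0 }, { RECT_W, RECT_H }, { 0, RECT_H }
};

/* Apply the same translation (tx, ty) and rotation (angle, in radians) used
 * when drawing; the transformed corners can then be tested against the
 * circle's centre and radius. Comparing distances against the radius is far
 * more robust than looking for exactly equal coordinates. */
void transformed_corners(float tx, float ty, float angle, float out[4][2])
{
    float c = cosf(angle), s = sinf(angle);
    for (int i = 0; i < 4; ++i) {
        out[i][0] = c * rect[i][0] - s * rect[i][1] + tx;
        out[i][1] = s * rect[i][0] + c * rect[i][1] + ty;
    }
}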
So when drawing a rectangle in OpenGL, if you give its corners texture coordinates of (0,0), (1,0), (1,1) and (0,1), you get the texture mapped across the rectangle as expected.
However, if you turn it into something that's not rectangular, you get a weird stretching effect, with a visible seam along the quad's diagonal.
I saw from this page that this can be fixed, but the solution given only works for trapezoids. Also, I have to do this over many rectangles.
And so, the question is: what is the proper and most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize the quad as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
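To make the q idea concrete, here is a sketch of the trapezoid-only trick for a symmetric trapezoid (the positions and widths are placeholders): each vertex's s and t are pre-multiplied by its q, and the per-fragment division by q turns the interpolation into a projective one, hiding the diagonal seam for the texture (though, as noted above, not for other attributes).

/* Bottom edge of width wBottom at y0, top edge of width wTop at y1.
 * Bottom vertices keep q = 1; top vertices use q = wTop / wBottom,
 * with their s and t scaled by that same q. */
float q = wTop / wBottom;

glBegin(GL_QUADS);
glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2f(xBottomLeft,  y0);
glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f); glVertex2f(xBottomRight, y0);
glTexCoord4f(q,    q,    0.0f, q);    glVertex2f(xTopRight,    y1);
glTexCoord4f(0.0f, q,    0.0f, q);    glVertex2f(xTopLeft,     y1);
glEnd();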
I'm drawing some 3D structures in an Fl_Gl_Window using FLTK's OpenGL support. These structures are drawn and rotated, so the code looks something like:
glTranslatef(-xshift,-yshift,-zshift);
glRotatef(ang1, 1.0f, 0.0f, 0.0f);  // glRotatef takes an angle plus an axis,
glRotatef(ang2, 0.0f, 1.0f, 0.0f);  // so one rotation per axis
glRotatef(ang3, 0.0f, 0.0f, 1.0f);
glTranslatef(xshift,yshift,zshift);
glColor4f((120.0/256.0),(120.0/256.0),(120.0/256.0),0.2);
for (int side = 0; side < num_sides; side++) {
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glBegin(GL_TRIANGLES);
    // draw shape
    glEnd();
    glDisable(GL_BLEND);
}
and it almost works, except that at different angles the transparency doesn't work properly. For example, if I draw a cube, from one side it will look transparent all the way through, without being able to discern the two faces, but from the other side one face will appear darker, as it is supposed to. It's as if it calculates the transparency too 'early', as in before the rotation. Am I doing something wrong? Should I move the rotation to below the transparency effects (i.e. before them in execution), or does the order of the triangles matter?
The order of the triangles matters. To get the desired effect for transparency you need to render the triangles in back-to-front order, because blending works by reading the color already written to the framebuffer for that pixel and blending it with the fragment currently being shaded. That's why you are getting different results when you rotate your cube: you are not changing the order of the triangles in the cube. You may also want to look into order-independent transparency techniques.
Depending on how many triangles you have, sorting them every frame can get really expensive. One approximation technique is to presort the triangles along the x, y, and z axes and then choose the sorted order that most closely matches your viewing direction; this only works to a certain extent. One popular order-independent transparency technique is depth peeling. Here's a tutorial with some code for implementing it: http://mmmovania.blogspot.com/2010/11/order-independent-transparency.html?m=1. You might also want to read the original paper to get a better understanding of the technique: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.9286&rep=rep1&type=pdf.
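For example, a rough sketch of per-frame back-to-front sorting, assuming the application keeps its transparent triangles and their eye-space centroid depths in its own array (the struct and names are illustrative):

#include <algorithm>
#include <vector>
#include <FL/gl.h>

// Application-side record: one triangle plus the eye-space depth of its
// centroid (recomputed whenever the view changes).
struct Tri {
    GLfloat v[3][3];
    float   eyeDepth;   // more negative = farther from the camera
};

void drawTransparent(std::vector<Tri>& tris)
{
    // Back to front: farthest triangles first.
    std::sort(tris.begin(), tris.end(),
              [](const Tri& a, const Tri& b) { return a.eyeDepth < b.eyeDepth; });

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);   // keep depth testing, but don't write depth

    glBegin(GL_TRIANGLES);
    for (const Tri& t : tris)
        for (int i = 0; i < 3; ++i)
            glVertex3fv(t.v[i]);
    glEnd();

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
}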
I wish to map a texture onto a cube (creating the cube using glutSolidCube and not glVertex), but the whole texture gets applied. In the image file I have all the textures together (for speed, and because the teacher requested it), and I only want part of the texture to be applied. How can I do that?
A whole texture is the unit of binding; you can't bind just part of one. If you want to "cut out" part of a texture, you do so by adjusting the texture coordinates that you use.
Instead of using the full range of 0..1, use smaller values that match the sub-texture's location inside the texture.
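For example (the atlas and tile sizes here are made up): if the atlas is 256x256 pixels and the tile you want is the 64x64 block whose lower-left corner is at (128, 64), dividing by the atlas size gives the coordinate range to use on your quad (x, y, w, h being the quad's position and size):

const float atlas = 256.0f;
float s0 = 128.0f / atlas, s1 = (128.0f + 64.0f) / atlas;
float t0 =  64.0f / atlas, t1 = ( 64.0f + 64.0f) / atlas;

glBegin(GL_QUADS);
glTexCoord2f(s0, t0); glVertex2f(x,     y);
glTexCoord2f(s1, t0); glVertex2f(x + w, y);
glTexCoord2f(s1, t1); glVertex2f(x + w, y + h);
glTexCoord2f(s0, t1); glVertex2f(x,     y + h);
glEnd();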
What you're looking to do is not possible, because glutSolidCube does not generate texture coordinates.
However, you will also note that an answer to that question indicates that you may use the following to have OpenGL generate texture coordinates for you on a call to glutSolidCube:
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
Some more information on using OpenGL's automatic texture coordinate generation is available here. However, I would like to note that this seems to come out of the days of immediate-mode OpenGL, which is deprecated. Also, GLUT is no longer maintained, but freeglut is.
To summarize, you're better off using glVertex calls and specifying your own specific texture coordinates, as unwind has suggested. You can try OpenGL's texture coordinate generation, but it might be too strict to handle what you need.
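If you do decide to try texture coordinate generation anyway, a minimal object-linear setup looks something like this (the plane values are arbitrary examples that map object-space X to s and Y to t for a size-1 cube centred at the origin):

// Generate s from object-space X and t from object-space Y.
static const GLfloat sPlane[4] = { 1.0f, 0.0f, 0.0f, 0.5f };
static const GLfloat tPlane[4] = { 0.0f, 1.0f, 0.0f, 0.5f };

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);
glTexGenfv(GL_T, GL_OBJECT_PLANE, tPlane);

glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);

glutSolidCube(1.0);   // size-1 cube centred at the origin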
I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthographic projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = ModelViewProjectionMatrix * Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
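For what it's worth, a sketch of that step in GLSL (the sampler and varying names are made up): the clip-space position is passed through and used as a projective coordinate, i.e. divided by w and remapped from [-1, 1] to [0, 1] in the fragment shader.

// vertex shader
varying vec4 screenPos;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    screenPos   = gl_Position;          // reuse the clip-space position
}

// fragment shader
uniform sampler2D sceneTex;             // the screen-sized pre-rendered texture
varying vec4 screenPos;
void main()
{
    vec2 uv = (screenPos.xy / screenPos.w) * 0.5 + 0.5;   // NDC -> [0, 1]
    gl_FragColor = texture2D(sceneTex, uv);
}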
If I am correct, then I must be able to pre-render the scene with a perspective view identical to the view used in the final render, rather than an orthographic view. This is where I'm having trouble: I can make an orthographic view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
Edit: I should also point out that in my successful attempts I used a fragment shader only. The perspective projection worked but, of course, the screen-aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.