gluCylinder() and texture coordinate offset / multiplier? - C++

How can I set the texture coordinate offset and multiplier for the gluCylinder(), gluDisk(), etc. functions?
So if normally the texture would start at point 0, I would like to set it to start at point 0.6 or 3.2, etc. By multiplier I mean the texture would either get bigger or smaller.
The solution can't be glScalef(), because 1) I'm using normals, and 2) I want to adjust the texture start position as well.

Try using the texture matrix stack:
glMatrixMode(GL_TEXTURE);        // operate on the texture matrix
glLoadIdentity();
glTranslatef(0.6f, 3.2f, 0.0f);  // texture coordinate offset
glScalef(2.0f, 2.0f, 1.0f);      // texture coordinate multiplier
glMatrixMode(GL_MODELVIEW);      // back to the modelview matrix
drawObject();

The solution has nothing to do with the GLU functions and is indeed glScalef (and glTranslatef for the offset), but applied to the texture matrix (assuming you don't use shaders). The texture matrix, selected by calling glMatrixMode with GL_TEXTURE, transforms the vertices' texture coordinates before they are interpolated and used to access the texture, no matter how those texture coordinates are computed; in this case GLU just computes them on the CPU and calls glTexCoord2f.
So to let the texture start at (0.1, 0.2) (in texture space, of course) and appear twice as large, you just call:
glMatrixMode(GL_TEXTURE);
glTranslatef(0.1f, 0.2f, 0.0f);
glScalef(0.5f, 0.5f, 1.0f);
before calling gluCylinder. But be sure to revert these changes afterwards (probably wrapping it between glPush/PopMatrix).
But if you want to change the texture coordinates based on the world space coordinates, this might involve some more computation. And of course you can also use a vertex shader to have complete control over the texture coordinate generation.
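Putting the fixed-function version together, a minimal sketch could look like this (assuming an existing GL context with a 2D texture bound; the offset and scale values are just examples):
// Let GLU generate texture coordinates and normals for the cylinder.
GLUquadric* quad = gluNewQuadric();
gluQuadricNormals(quad, GLU_SMOOTH);
gluQuadricTexture(quad, GL_TRUE);

glMatrixMode(GL_TEXTURE);
glPushMatrix();                       // save the current texture matrix
glTranslatef(0.6f, 0.0f, 0.0f);       // start the texture at 0.6 instead of 0
glScalef(0.5f, 0.5f, 1.0f);           // 0.5 makes the texture appear twice as large
glMatrixMode(GL_MODELVIEW);

gluCylinder(quad, 1.0, 1.0, 2.0, 32, 8);

glMatrixMode(GL_TEXTURE);
glPopMatrix();                        // restore the texture matrix
glMatrixMode(GL_MODELVIEW);

gluDeleteQuadric(quad);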

Related

2D Texture morph in Orthographic Projection

I'm having a hard time figuring out what's going on with my texture:
Basically I am fetching a webcam stream as my underlying 2d texture canvas in OpenGL, and in my paintGL() I'm drawing stuff on it (as RGBA images with GL_BLEND).
Since I'm using a Kinect as a data source, I'm also getting the depth values from a tracked skeleton (a person), and converting them into GL values (XYZ varying between 0.0f and 1.0f).
So my goal is that a loaded 2D texture, for instance a shirt, properly tracks the person in my RGB output display. But it seems my understanding of orthographic projection is wrong:
I'm constantly loading the 4 converted vertices into a VBO, but whenever I put the texture on top of this dynamic quad, it's always facing the screen.
I thought that putting this dynamic quad between the "background" canvas and the camera would result in a proper projection of the quad onto the canvas, which would give me the impression of a warping 2D texture, that seems to "bend" whenever the person rotates.
But the texture is always facing the camera and doesn't rotate.
I've also tried to rotate manually via a matrix and set that in my shader, but again, it only rotates the vertex quad itself (i.e. the rotation simply makes the texture smaller) and THEN puts the texture on top, instead of rotating the texture with it.
So, is it somehow possible to properly apply this to the texture?
I've thought about mixing in a perspective projection, but I actually have no idea how to implement this...
EDIT:
I've actually already set my projection matrix up like the following:
In resizeGL():
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
In paintGL():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST); // turning this on/off makes no difference
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
program.setUniformValue("mvp_matrix", projection);
program.setUniformValue("texture", 0);
//draw 2d background quad
drawQuad();
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// switch to frustum to give perspective view
projection.setToIdentity();
projection.frustum(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
// bind cloth texture and draw ontop 2d quad
clothTexture->bind();
program.setUniformValue("mpv_matrix", projection);
drawShirtQuad();
// reset to ortho view
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
// release texture
clothTexture->release();
glDisable(GL_BLEND);
clothTexture is a QOpenGLTexture that has successfully loaded an RGBA image from a file.
Result: whenever I activate the frustum perspective, I get a black screen. I think everything is correctly set up: the point of view is translated along the positive z-axis in resizeGL(), and all the cloth vertices vary between 0 and 1 in XYZ, while the background is positioned at:
(0.0f, 0.0f, -1.0f), (1.0f, 0.0f, -1.0f), (1.0f, 1.0f, -1.0f), (0.0f, 1.0f, -1.0f).
So the cloth object is always positioned between the background plane and the point of view. Am I missing something in the frustum setup? I've simply set it up the same way as the ortho...
EDIT:
Sorry for not mentioning it: the matrix I'm using is of type QMatrix4x4:
Frustum
These functions multiply the current matrix by the one you pass as an argument, which should yield the same result as if I defined a View matrix, for instance, and then set my shader uniform "mvp_matrix" to projection * view, if I'm not mistaken. Maybe something like lookAt will do the trick; I'll just try messing around more. :)
You need to use a perspective projection to achieve the desired result. Look here for example code for creating a perspective projection matrix with glm.
Moving the vertices wouldn't be needed, as you will get the proper positions once the rotation is applied in your model matrix.
EDIT: Where in your code can I look at the .frustum and .translate methods, or which library does the projection object come from? It doesn't look like you are doing Projection * View by moving the frustum matrix. Some info about the roles of the standard matrices.
Regarding debugging: if you get a black screen instead of the clear color, the problem is not with the matrix but somewhere earlier. I also recommend logging your perspective matrix and comparing it to a correct one (which you can get, for example, from the glm library).
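For illustration, a sketch of such a setup with glm could look like this (the function and variable names are made up here, not taken from your QMatrix4x4 code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helper: build Projection * View * Model for the cloth quad.
glm::mat4 makeClothMvp(int width, int height, const glm::mat4& model)
{
    // For a perspective frustum both clip planes must be positive distances.
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            width / static_cast<float>(height),
                                            0.1f, 10.0f);
    // Eye placed in front of the unit-sized scene, looking at its center.
    glm::mat4 view = glm::lookAt(glm::vec3(0.5f, 0.5f, 3.0f),
                                 glm::vec3(0.5f, 0.5f, 0.0f),
                                 glm::vec3(0.0f, 1.0f, 0.0f));
    return projection * view * model;   // upload this as the "mvp_matrix" uniform
}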

Depth buffer not working for alpha render pass in OpenGL ES 2

I have a scene with transparent and opaque 2d items. I first render the opaque items with depth test and depth mask (writing) enabled, in front to back order. Then I set the depth mask to false (without disabling the depth test), enable blending and render the transparent ones from back to front.
But the problem is that the transparent items are not drawn properly. When I use glDepthFunc(GL_LESS) for them they are not drawn at all and when I use glDepthFunc(GL_EQUAL) they are drawn but the ones that should be obscured by opaque items are not. They just render on top of everything really.
The code in the render routine looks like this:
// Set the clear color
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepthf(1.f);
glDepthRangef(0.f, 1.f);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
// Draw opaque items
glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
renderOpaque();
// Draw transparent items
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
renderAlpha();
glDisable(GL_BLEND);
The z values for the items are set in the following manner:
// Bottom layer (background) is 0 and then layer is incremented
// by one for each view that sits on top
// I use the far value (something like 10000.f) to divide to get
// something between 0.0 and 1.0 (from back to front: 0.0/10k, 1.0/10k,
// 2.0/10k etc)
float zTranslation = static_cast<float>(GetLayer()) /
                     TheCamera::Instance().GetFar();
glm::mat4 model = glm::translate(glm::mat4(1.f),
                                 glm::vec3(m_absoluteFrame.origin.x,
                                           m_absoluteFrame.origin.y,
                                           zTranslation));
glm::mat4 MVP = muiKit::TheCamera::Instance().GetProjection() *
                muiKit::TheCamera::Instance().GetView() *
                model;
The MVP matrix is then passed to the shader to set gl_Position...
gl_Position = MVP * vec4(imageVertex.xyz, 1);
I also pass texture coordinates, indices and so on, and draw using glDrawElements in batches.
What am I doing wrong here?
Also, even though I'm new to this, I get the feeling that the depth function should be GL_GREATER when I render the transparent items... no? Somehow it makes sense to me, knowing that I'm rendering them in back-to-front order.
Same old. I found what was wrong. Since I am rendering everything in batches, I am not actually using the MVP I calculate for each of the views, but only the MVP of the view that owns the batch. So no matter what z value I put here, it wasn't going to the shader. I guess I will either have to pass a depth value for each geometry the batch is rendering, or an array of their matrices if possible. Thanks @Tommy for the help anyway.
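For what it's worth, a sketch of the first option (the layerDepth attribute below is hypothetical, not something from the real batch code):
// OpenGL ES 2 vertex shader sketch: the batcher writes layer / far into a
// per-vertex attribute, so the depth survives batching even though only the
// batch owner's MVP is used.
attribute vec3 imageVertex;   // quad vertex position, as before
attribute float layerDepth;   // per-vertex depth, e.g. GetLayer() / GetFar()
uniform mat4 MVP;

void main()
{
    gl_Position = MVP * vec4(imageVertex.xy, layerDepth, 1.0);
}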

How to apply texture globally on a set of objects from a particular position?

I know very little of OpenGL.
I want to apply a 2D texture globally onto the scene in OpenGL 3.1, as in the figure in this link, as if the texture were viewed from the point P.
The texture projection parameters, e.g. focal length f, position P, etc., are all known; how can I do this in OpenGL so that I can then view the scene from another position?
N.B. The lighting and the texture need to be combined in the GL_MODULATE manner.
With the fixed pipeline, this can be achieved by applying a projection matrix to the texture coordinates.
Aside from the very commonly used values of GL_MODELVIEW and GL_PROJECTION, glMatrixMode() also supports a value of GL_TEXTURE, which exposes a mechanism for applying arbitrary transformations to texture coordinates.
In this case, you can use the original world coordinates as the input texture coordinates. So if you used:
glVertexPointer(3, GL_FLOAT, 0, coord);
you use the same for the texture coordinates:
glTexCoordPointer(3, GL_FLOAT, 0, coord);
Then you set up view and projection transformations very similarly to the way you do for the primary view/projection. Keep in mind that transformations are specified in the reverse order of being applied. So the sequence would be something like this:
glMatrixMode(GL_TEXTURE);
// Projection maps to [-1.0, 1.0] range, while texture coordinates are in
// range [0.0, 1.0]. Translate/scale to adjust for this.
glScalef(0.5f, 0.5f, 0.5f);
glTranslatef(1.0f, 1.0f, 1.0f);
// Apply projection.
gluPerspective(...)
// Apply view transformation, using P as eye point.
gluLookAt(...);
glMatrixMode(GL_MODELVIEW);
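Since you asked for GL_MODULATE, the texturing state for this pass could be set up along these lines (a sketch; projectedTextureId is an assumed, already-created texture object):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, projectedTextureId);            // assumed to exist
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // combine texture with lighting

// World-space positions double as texture coordinates, as described above.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, coord);
glTexCoordPointer(3, GL_FLOAT, 0, coord);
// ... then draw the geometry as usual.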

Crop a quad in existing texture_2D

I have a texture already rendered, and I'm mapping a quad/rectangle onto it. (The quad may be smaller than or equal to the total texture size.)
Once the quad is mapped, I want to remove the rest (whatever is drawn outside the quad).
So far I can map the quad and get my sub-texture (the part not to be removed); however, I'm unable to delete the remaining region (outside the quad).
The following images show the procedure:
1. Original image
2. Original image with the quad in red
3. Everything removed except the quad (the texture after cropping)
I don't know how you compute your texture coordinates in your code, but there are not a million ways to do it, so I'll give a solution for the three easiest cases I have in mind:
1. You only have a vertex array containing the positions of your quad's vertices, and you use them to compute your texture coordinates. In that case, just change the vertex positions to your crop area before drawing.
2. You have a vertex array containing both the positions and the texture coordinates (or two vertex arrays, one for each). You must change the area covered in both. For your specific use case I would advise computing the texture coordinates from the vertex positions in the vertex shader, for simplicity and efficiency.
3. You send your cropping area as a uniform to your fragment shader. This solution assumes you work in ortho space and that the picture always fills the screen. In that case, from the input position you know where you are; with a simple if condition you can check whether you are out of bounds, and if so, set the pixel to black or use discard to cancel drawing the pixel. Conditionals cost time, so I would only advise this if you want the cropped pixels set to black; if you prefer them not drawn at all, solution 1 is the fastest.
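For solution 3, a fragment shader sketch could look like this (the uniform and varying names are made up for illustration):
// Discard (or blacken) everything outside the crop rectangle.
uniform sampler2D image;      // the rendered texture
uniform vec4 cropRect;        // xy = lower-left corner, zw = upper-right corner, in texture space
varying vec2 texCoord;

void main()
{
    if (texCoord.x < cropRect.x || texCoord.y < cropRect.y ||
        texCoord.x > cropRect.z || texCoord.y > cropRect.w)
        discard;              // or output vec4(0.0, 0.0, 0.0, 1.0) for black instead
    gl_FragColor = texture2D(image, texCoord);
}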
I have solved it using NeHe's Lesson 3. I used
glColor3f(0.0f,0.0f,0.0f); // Set The Color To Black
glBegin(GL_QUADS); // Start Drawing Quads
glVertex3f(-1.0f, 1.0f, 0.0f); // Left And Up 1 Unit (Top Left)
glVertex3f( 1.0f, 1.0f, 0.0f); // Right And Up 1 Unit (Top Right)
glVertex3f( 1.0f,-1.0f, 0.0f); // Right And Down One Unit(Bottom Right)
glVertex3f(-1.0f,-1.0f, 0.0f); // Left And Down One Unit (Bottom Left)
glEnd(); // Done Drawing A Quad
to draw four black quads that cover the region outside my selected quad.
Thanks to NeHe.

OpenGL cube not rendering properly

I have a problem when rendering cubes in OpenGL. I am drawing two cubes: one is a wire cube centered around the origin, while the other is offset from the origin and solid. I have mapped some keys to rotate the objects by some degrees with respect to the origin, so the whole scene can rotate around the origin.
The problem is that when I render the scene, when the wire cube is supposed to be in front of the solid cube, it is not displayed correctly.
In the image above, the colored cube is supposed to be behind the wire cube. i.e. the green wire cube should be on top.
Also the cube is not behaving properly.
After I rotate it a little bit around the x axis (current horizontal line).
The cube has missing faces and is not rendering correctly.
What am I doing wrong?
I have coded the following
Note that rotateX,rotateY,rotateZ are mapped to keys, and are my global rotation variables.
//The Initialize function, called once:
void Init(){
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
glEnable(GL_LIGHTING);
}
void draw(){
//The main draw function
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity ();
gluPerspective(45, 640/480.0, .5, 100);
glMatrixMode(GL_MODELVIEW); //select the modelview matrix.
glLoadIdentity ();
gluLookAt(0,0,5,
0,0,0,
0,1,0);
glRotatef(rotateX,1,0,0);
glRotatef(rotateY,0,1,0);
glRotatef(rotateZ,0,0,1);
drawScene(); // this just draws the main axis lines,
glutWireCube(1);
glPopMatrix();
glPushMatrix();
glTranslatef(-2,1,0);
drawNiceCube();
glPopMatrix();
glutSwapBuffers();
}
The code for drawNiceCube() just uses GL_QUADS, while the wire cube is drawn with GLUT's built-in glutWireCube.
EDIT:
I have posted the full code at http://pastebin.com/p1kwPjEM, sorry if it is not well documented.
Did you also request a window with a depth buffer?
glutInitDisplayMode( ... | GLUT_DEPTH | ...);
Update:
Did you somewhere enable face culling?
glEnable(GL_CULL_FACE);
This may be caused by the winding order (clockwise vs. counter-clockwise):
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension [...]
from here
You can also read here.
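For reference, a minimal culling setup for the usual counter-clockwise winding looks like this:
glFrontFace(GL_CCW);       // counter-clockwise primitives are front-facing (the default)
glCullFace(GL_BACK);       // cull back faces
glEnable(GL_CULL_FACE);    // culling only happens while this is enabled
// If your quads are wound clockwise, use glFrontFace(GL_CW) instead,
// or disable culling with glDisable(GL_CULL_FACE).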
datenwolf solved my problem. I quote him:
"#JonathanSimbahan: Parts of your code are redundant, but something is missing: You forgot to call Init(); after creating your GLUT window, hence depth testing and all the other state never get enabled. I for one suggest you don't use Init at all and move it's code into the drawing code, where it actually belongs."