I have a question about clipping in OpenGL. I have a small viewport and I want to render one part of a large image. If I draw the large image like this:
glEnable(GL_TEXTURE_RECTANGLE);
glColor3f(1, 1, 1);
glBegin(GL_QUADS);
glTexCoord2f(0, height);
glVertex2f(0, 0);
glTexCoord2f(width, height);
glVertex2f(width, 0);
glTexCoord2f(width, 0);
glVertex2f(width, height);
glTexCoord2f(0, 0);
glVertex2f(0, height);
glEnd();
Here width and height are the size of the image (texture), which is much larger than the viewport.
My question is: will OpenGL first clip the rectangle to the size of the viewport and then draw, or render the whole image first and then clip?
It will first clip the geometry, then draw. More specifically, OpenGL does all the geometry processing before going down to render individual framebuffer elements (pixels). Here's a relevant quote from the 3.3 GL spec (Section 2.4):
The final resulting primitives are clipped to a viewing volume in
preparation for the next stage, rasterization. The rasterizer produces
a series of framebuffer addresses and values using a two-dimensional
description of a point, line segment, or polygon.
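If the goal is just to show one viewport-sized window into the large image, you can also let that clipping work for you by aiming the projection at the sub-region you care about. A minimal fixed-function sketch, where viewW, viewH, offsetX and offsetY are placeholder names (not from the question):

glViewport(0, 0, viewW, viewH);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
/* Map only the image region [offsetX, offsetX+viewW] x [offsetY, offsetY+viewH]
   onto the viewport; everything of the quad outside that region is clipped
   away before rasterization. */
glOrtho(offsetX, offsetX + viewW, offsetY, offsetY + viewH, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* ...then draw the textured quad from the question as before. */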
Related
Let's say there is one texture: 6000x6000
I only need to blur one part, let's say the center rectangle 100x100
If I use a vertex shader to map the area of interest onto this center rectangle, by passing in the coordinates of the 4 corner points and their corresponding texture coordinates within the big texture, I think the fragment shader will only process the pixels inside the center rectangle.
In my understanding, a regular GPU cannot really handle 6000x6000 pixels concurrently; it will divide the work into several batches.
With 100x100, on the other hand, all pixels can be processed more or less simultaneously, so it should be faster.
Is my understanding correct?
You can do a render-to-texture pass: use your vertex shader to select the area you want to blur... and then your fragment shader will apply the blur only in that area.
Your understanding seems to be correct: consider that the GPU will only spend effort processing the fragments INSIDE the area determined by your vertex shader, so if you set your vertices to cover only a subset of your render target [just like the screen, your target may be a texture, via framebuffers], then your GPU will process only the desired area.
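To make the render-to-texture idea concrete, here is a rough sketch under some assumptions: blurProgram is a placeholder blur shader program, srcTex the 6000x6000 source texture, and drawQuad a hypothetical helper that emits one textured quad in pixel coordinates (a matching orthographic projection is assumed); error checking is omitted.

GLuint fbo, dstTex;

/* Destination texture that will receive the result. */
glGenTextures(1, &dstTex);
glBindTexture(GL_TEXTURE_2D, dstTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 6000, 6000, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Render-to-texture via a framebuffer object. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, dstTex, 0);

glViewport(0, 0, 6000, 6000);
glUseProgram(blurProgram);              /* placeholder blur shader  */
glBindTexture(GL_TEXTURE_2D, srcTex);   /* 6000x6000 source texture */

/* One quad covering only the 100x100 center region: fragments are generated,
   and the blur shader runs, only inside this quad. */
drawQuad(2950, 2950, 3050, 3050);       /* hypothetical helper      */

glBindFramebuffer(GL_FRAMEBUFFER, 0);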
I want to render two textures on the screen at the same time at different positions, but I'm confused about the vertex coordinates.
How could I write a vertex shader to meet my goal?
Just to address the "two images to the screen separately" bit...
A texture maps image colours onto geometry. To be pedantic, you can't draw a texture; you can blit one, or you can draw geometry with a texture mapped onto it (using per-vertex texture coordinates).
You can bind two textures at once while drawing, but you'll need both a second set of texture coordinates and a way to handle how they blend (or don't, in your case). Even then the shader will be quite specific, and because the images are separate there'll be unnecessary code running for each pixel to handle the other image. What happens when you want to draw 3 images, or 100?
Instead, just draw a quad with one image twice (binding each texture in turn before drawing). The overhead will be tiny unless you're drawing lots, at which point you might look at texture atlases and drawing all the geometry with one draw call (really getting towards the "at the same time" part of the question).
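A minimal sketch of that "draw it twice" approach in fixed-function style (drawTexturedQuad is a hypothetical helper, and the positions and sizes are made up):

/* Emits one textured quad at position (x, y) with size w x h. */
void drawTexturedQuad(float x, float y, float w, float h)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}

/* Somewhere in the draw routine: same geometry, drawn twice, with a
   different texture bound each time. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureA);
drawTexturedQuad(50, 50, 256, 256);
glBindTexture(GL_TEXTURE_2D, textureB);
drawTexturedQuad(400, 50, 256, 256);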
I want to do a texture-based volume render of CT data. I have a stack of 2d CT images that I'd like to use as a 3d texture in OpenGL (JOGL really). I have to do it using the approach with polygon proxy geometry that shifts when the viewing parameters change. How can I convert the 2d images into one 3d texture? I have not been able to find anything about how OpenGL expects 3d images to be formatted. I saw this: https://stackoverflow.com/questions/13509191/how-to-convert-2d-image-into-3d-image , but I don't think it's the same.
Also, I am still confused about this volume rendering technique. Is it possible to take a 3d location in the 3d texture and map it to a 2d corner of a quad? I found this example: http://www.felixgers.de/teaching/jogl/texture3D.html but I don't know if it means you have to use 3d vertices. Does anyone know more sources with explicit examples?
See
http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf
section 3.8.3, on defining 3D texture images.
This results in a 3d cube of texels, and yes, you can map a 3d location in this cube to a corner of a quad.
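For reference, a rough C-style sketch of that upload (the JOGL calls are analogous); width, height, numSlices and sliceData are placeholder names, and a GL 3.0-style single-channel format is assumed:

GLuint volumeTex;
glGenTextures(1, &volumeTex);
glBindTexture(GL_TEXTURE_3D, volumeTex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Allocate the whole volume once... */
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, width, height, numSlices, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);

/* ...then upload each 2D CT image as one depth layer of the 3D texture. */
for (int i = 0; i < numSlices; ++i)
    glTexSubImage3D(GL_TEXTURE_3D, 0,
                    0, 0, i,              /* x, y, z offset       */
                    width, height, 1,     /* one slice at depth i */
                    GL_RED, GL_UNSIGNED_BYTE, sliceData[i]);

Texture coordinates are then 3D: an (s, t, r) triple in [0,1]^3 addresses a point inside the volume and can be assigned to a quad corner, e.g. with glTexCoord3f(s, t, r).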
OpenGL does have a 3D texture format where each texel is a small subvolume of a [0;1]^3 cube. When you texture a triangle or a quad with this texture, it is as if you cut a thin slice out of this volume. If you want a true volumetric rendering, you must write a volume raycaster. If you Google "GPU direct volume rendering" you should find plenty of tutorials.
I'm developing a render-to-texture process that involves using several cameras which render an entire scene of geometry. The output of these cameras is then combined and mapped directly to the screen by converting each geometry's vertex coordinates to screen coordinates in a vertex shader (I'm using GLSL, here).
The process works fine, but I've realized a small problem: every RTT camera I create produces a texture with the same dimensions as the screen output. That is, if my viewport is sized to 1024x1024, then even if the geometry occupies a 256x256 section of the screen, each RTT camera will render the 256x256 geometry into a 1024x1024 texture.
The solution seems reasonably simple - adjust the RTT camera texture sizes to match the actual screen area the geometry occupies - but I'm not sure how to do that. That is, how can I (for example) determine that a geometry occupies a 256x256 area of the screen, so that I can correspondingly set the RTT camera's output texture to 256x256 pixels?
The API I use (OpenSceneGraph) uses axis-aligned bounding boxes, so I'm out of luck there...
Thoughts?
Why out of luck? Can't you use the axis-aligned bounding box to compute the area?
My idea:
Take the 8 corner points of the bounding box and project them onto the image plane of the camera.
For the resulting 2d points on the image plane you can again determine an axis-aligned 2d bounding box.
This should be a correct upper bound for the space the geometry can occupy (see the sketch below).
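A minimal standalone C sketch of that idea (mvp is the camera's combined model-view-projection matrix in column-major order, boxMin/boxMax are the bounding box extents, and all names are placeholders). It ignores corners behind the camera, which a robust version would first clip against the near plane:

typedef struct { float x, y, z; } Vec3;

/* Multiply the column-major MVP matrix with (p, 1), perspective-divide,
   and map from NDC [-1,1] to pixel coordinates. */
static void projectToScreen(const float mvp[16], Vec3 p,
                            int viewportW, int viewportH,
                            float *sx, float *sy)
{
    float cx = mvp[0]*p.x + mvp[4]*p.y + mvp[8]*p.z  + mvp[12];
    float cy = mvp[1]*p.x + mvp[5]*p.y + mvp[9]*p.z  + mvp[13];
    float cw = mvp[3]*p.x + mvp[7]*p.y + mvp[11]*p.z + mvp[15];
    *sx = (cx / cw * 0.5f + 0.5f) * viewportW;
    *sy = (cy / cw * 0.5f + 0.5f) * viewportH;
}

/* Screen-space 2d bounding box of the world-space AABB [boxMin, boxMax]. */
static void screenBounds(const float mvp[16], Vec3 boxMin, Vec3 boxMax,
                         int viewportW, int viewportH,
                         float *minX, float *minY, float *maxX, float *maxY)
{
    *minX = *minY =  1e30f;
    *maxX = *maxY = -1e30f;
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { (i & 1) ? boxMax.x : boxMin.x,
                        (i & 2) ? boxMax.y : boxMin.y,
                        (i & 4) ? boxMax.z : boxMin.z };
        float sx, sy;
        projectToScreen(mvp, corner, viewportW, viewportH, &sx, &sy);
        if (sx < *minX) *minX = sx;
        if (sx > *maxX) *maxX = sx;
        if (sy < *minY) *minY = sy;
        if (sy > *maxY) *maxY = sy;
    }
}

The resulting width and height (maxX - minX by maxY - minY, clamped to the viewport) is the size to give the RTT camera's output texture.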
What I'd like to do:
I have a 3d-transformed, UV-mapped object with a white texture, as well as a screen-space image.
I want to bake the screen-space image into the texture of the object, such that its 3d-transformed representation on screen exactly matches the screen-space image (so I want to project it onto the UV space).
I'd like to do this with image load/store (ARB_shader_image_load_store). I imagine it as:
1st pass: render the transformed 3d object's UV coordinates into an offscreen texture.
2nd pass: render a screen-sized quad. For each pixel, check the value of the texture rendered in the first pass; if there are valid texture coordinates there, look up the screen-space image with the quad's own UV coordinates and write this texel color via image load/store into a texture buffer, using the UV coordinates read from the input texture as the index.
As I have never worked with this feature before, I'd just like to ask whether someone who has already worked with it considers this feasible, and whether there are already examples that do something in this direction?
Your proposed way is certainly one method to do it, and it's actually quite common. The other way is to do a back projection from screen space to texture space. It's not as hard as it might sound at first. Basically, for each triangle you have to find the transformation of the tangent-space vectors (UV) on the model's surface to their screen counterparts. In addition, transform the triangle itself to find the boundaries of the screen-space triangle in the picture. Then you invert that projection.
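As a small illustration of that per-triangle inversion (a sketch only: it treats the mapping as affine, so it ignores perspective-correct interpolation, and all names are placeholders):

typedef struct { float x, y; } Vec2;

/* Given a triangle with screen positions s0..s2 and texture coordinates
   uv0..uv2, map a screen-space point p back into UV space: compute its
   barycentric coordinates with respect to the screen triangle and reuse
   them to blend the UVs. This inverts the per-triangle UV -> screen map. */
static Vec2 screenToUV(Vec2 s0, Vec2 s1, Vec2 s2,
                       Vec2 uv0, Vec2 uv1, Vec2 uv2, Vec2 p)
{
    float d  = (s1.y - s2.y) * (s0.x - s2.x) + (s2.x - s1.x) * (s0.y - s2.y);
    float w0 = ((s1.y - s2.y) * (p.x - s2.x) + (s2.x - s1.x) * (p.y - s2.y)) / d;
    float w1 = ((s2.y - s0.y) * (p.x - s2.x) + (s0.x - s2.x) * (p.y - s2.y)) / d;
    float w2 = 1.0f - w0 - w1;
    Vec2 uv = { w0 * uv0.x + w1 * uv1.x + w2 * uv2.x,
                w0 * uv0.y + w1 * uv1.y + w2 * uv2.y };
    return uv;
}

Swapping the roles of the screen and UV triangles in the arguments gives the forward (UV to screen) mapping, which is what you would use when rasterizing over the triangle's UV footprint in the target texture.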