So I know that glDrawPixels is deprecated. Is there any function that does the same thing?
I thought of using textures, but they are modified by the current matrix, unlike pixels that are drawn by glDrawPixels.
I thought of using textures, but they are modified by the current matrix
The "current matrix" is deprecated in 3.0 and removed in 3.1+ as well. So if you're not using glDrawPixels, you wouldn't be using matrix functions either. So it's nothing to be concerned about.
You could use a fragment shader where a function of gl_FragCoord is used to sample a rectangular texture.
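For example, something along these lines, where imageOrigin is a made-up uniform giving the image's lower-left corner in window pixels, and you draw any geometry that covers those pixels (such as a full-screen quad):

    #version 330 core
    uniform sampler2DRect image;     // the image to "blit", as a rectangle texture
    uniform vec2 imageOrigin;        // assumed: lower-left corner of the image in window pixels
    out vec4 fragColor;

    void main()
    {
        // gl_FragCoord is in window space; rectangle textures are addressed
        // in unnormalized texels, so the coordinates can be used directly.
        fragColor = texture(image, gl_FragCoord.xy - imageOrigin);
    }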
Alternatively, you could use a more traditional approach and just set up your transformation matrices to approximate the pixel coordinate system of your window and then draw a textured quad with your image.
You need to draw a quad with:
A specific ModelViewProjection Matrix which will place it where you want (as Nicol said, there is no "current" matrix anymore)
A simple vertex shader which will use said Matrix to actually transform the vertices
A simple fragment shader which will sample the texture
And of course, adequate texture coordinates.
For starters, use an identity matrix and a mesh with X and Y coordinates between 0 and 1.
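A minimal sketch of such a shader pair, assuming the quad's positions and texture coordinates arrive as generic attributes and the matrix as a uniform (all names below are placeholders):

    // Vertex shader: transform the quad with the application-supplied matrix.
    #version 330 core
    uniform mat4 modelViewProjection;
    in vec2 position;     // e.g. quad corners between (0,0) and (1,1) to start with
    in vec2 texCoord;
    out vec2 uv;

    void main()
    {
        uv          = texCoord;
        gl_Position = modelViewProjection * vec4(position, 0.0, 1.0);
    }

    // Fragment shader: just sample the image.
    #version 330 core
    uniform sampler2D image;
    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        fragColor = texture(image, uv);
    }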
You might want to use a mix of http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/ and http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-11-2d-text/ (though the latter one should be improved regarding the matrix used)
I am exploring some GLSL and have something I want to try to implement. Here is the situation:
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to sample the world-position texture and test the current fragment's world-space coordinate against it to see if they match?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Any help is greatly appreciated, thanks :)
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is being passed to another render pass; is it possible to sample the world-position texture and test the current fragment's world-space coordinate against it to see if they match?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Yes, it is possible. This is essentially a shadow map, but now you'll have to calculate the distances manually during sampling. It's unclear why you insist on storing the world-space XYZ coordinates or what the use case for this is. It should be much simpler and more efficient to store the depths in a depth texture and use the built-in depth-texture lookup.
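A hedged sketch of what the second pass's fragment shader could look like, assuming camera A's world-position texture and its view-projection matrix are passed in as uniforms (all names and the epsilon tolerance are placeholders):

    #version 120
    uniform sampler2D worldPosTexA;   // rgb = world-space xyz as rendered by camera A
    uniform mat4 viewProjA;           // camera A's view-projection matrix
    uniform float epsilon;            // tolerance for floating-point error
    varying vec3 worldPos;            // current fragment's world-space position (camera B's pass)

    void main()
    {
        // Project the current fragment into camera A's clip space,
        // then remap NDC [-1,1] to [0,1] to address A's texture.
        vec4 clipA = viewProjA * vec4(worldPos, 1.0);
        vec2 uvA   = (clipA.xy / clipA.w) * 0.5 + 0.5;

        vec3 storedPos    = texture2D(worldPosTexA, uvA).rgb;
        bool visibleFromA = distance(storedPos, worldPos) < epsilon;

        gl_FragColor = visibleFromA ? vec4(1.0) : vec4(0.0);
    }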
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Yes. You can render to a texture and then use imageLoad and imageStore (and related APIs) in another shader to modify it. You must be careful, however, with feedback loops: because of the parallel nature of GPUs and their cache-incoherent architecture, this can get complicated, and a detailed answer would depend on exactly what you're trying to achieve.
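As an illustration only (not necessarily what you need), a compute shader that edits such a texture in place with imageLoad/imageStore might look like this; it assumes GL 4.2+ image load/store support, and the binding point and the "darken" operation are made up:

    #version 430
    layout(local_size_x = 16, local_size_y = 16) in;
    layout(binding = 0, rgba8) uniform image2D img;   // the previously rendered texture, bound as an image

    void main()
    {
        ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
        vec4 value  = imageLoad(img, texel);
        imageStore(img, texel, vec4(value.rgb * 0.5, value.a));   // arbitrary in-place edit
    }

Between writing the image and reading it again in a later pass you would still need an appropriate glMemoryBarrier, which is part of what the feedback-loop caveat above is about.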
So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect along the quad's diagonal.
I saw on another page that this can be fixed, but the solution given only covers the trapezoidal case. Also, I need to do this for many rectangles.
And so, the question is: what is the proper and most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
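For reference, a hedged sketch of what the projective (q-coordinate) approach looks like on the fragment side; it assumes the application pre-multiplies each corner's s and t by its q value:

    #version 330 core
    uniform sampler2D tex;
    in vec4 projTexCoord;   // (s*q, t*q, 0.0, q), interpolated linearly by the hardware
    out vec4 fragColor;

    void main()
    {
        // textureProj divides the coordinate by its last component per fragment,
        // which is what removes the seam along the quad's diagonal.
        fragColor = textureProj(tex, projTexCoord);
    }

As noted above, this only fixes the texture lookup; other linearly interpolated attributes, such as per-vertex colors, will still show the seam.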
I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthogonal projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = ModelViewProjectionMatrix * Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screen shot of the scene).
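Concretely, I imagine something along these lines, where preRenderTex is just a placeholder name for the screen-sized texture produced by the pre-render pass:

    // Vertex shader: forward the clip-space position to the fragment stage.
    #version 120
    varying vec4 clipPos;

    void main()
    {
        clipPos     = gl_ModelViewProjectionMatrix * gl_Vertex;
        gl_Position = clipPos;
    }

    // Fragment shader: turn the clip-space position into screen-space UVs.
    #version 120
    uniform sampler2D preRenderTex;   // the full-screen pre-render
    varying vec4 clipPos;

    void main()
    {
        // Perspective divide, then remap NDC [-1,1] to texture space [0,1].
        vec2 screenUV = (clipPos.xy / clipPos.w) * 0.5 + 0.5;
        gl_FragColor  = texture2D(preRenderTex, screenUV);
    }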
If I am correct, then I must be able to pre-render the scene in a perspective view identical to the view used in the final render, rather than an orthogonal view. This is where I have troubles. I can make an orthogonal view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts I used only a fragment shader. The perspective projection worked, but, of course, the screen-aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.
Is there any way to apply the texture to an object without specifying texture coordinates?
In fixed-function OpenGL, you can generate texture coordinates by activating texture coordinate generation modes. There are a couple of fixed algorithms (sphere mapping, reflection mapping), and there is one that effectively multiplies the vertex position by a 4x4 matrix (one plane equation per coordinate) to generate texture coordinates.
In shaders, you can use anything you can algorithmically generate.
However, without telling us how you want a texture mapped to the surface, there's no way to know if what you want is possible. There is no glTextureMyObject that does "something"; either explicit texture coordinates must be used or some algorithm must generate them.
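For example, if a simple planar projection happened to be what you want, a vertex shader could generate the coordinates without any per-vertex texture data, along these lines (texGenMatrix is a made-up uniform standing in for whatever object-linear-style mapping you choose):

    #version 120
    uniform mat4 texGenMatrix;   // e.g. a planar projection of object-space positions

    void main()
    {
        // Derive the texture coordinate from the vertex position itself.
        gl_TexCoord[0] = texGenMatrix * gl_Vertex;
        gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
    }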
I'm wondering how I could create a gradient with multiple stops and a direction when I'm drawing polygons. Right now I'm creating gradients by changing the colors of the vertices, but this is limiting. Is there another way to do this?
Thanks
One option you may have is to render a simple polygon with a gradient to a texture, which you then use to texture your actual polygon.
Then you can rotate the source polygon and anything textured with its image will have its gradient rotate as well, without the actual geometry changing.
The most flexible way is probably to create a texture with the gradient you want, and then apply that to your geometry.
If you're using a shader, you can pass your vertex world positions into your vertex shader and they'll be interpolated on the way to your fragment shader, so for every fragment you'll know where it is in world space (of course, you can use any space). Then it's just a matter of choosing a transfer function to turn that value into a color. You can implement any kind of elaborate algorithm, using B-splines or whatever, in your fragment shader.
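A minimal sketch of that idea, assuming the vertex shader passes the world-space position through a varying and that the gradient direction and three stop colors arrive as uniforms (all names are placeholders):

    #version 120
    uniform vec3 gradientDir;   // direction (and implicit scale) of the gradient
    uniform vec3 colorA;        // stop at t = 0.0
    uniform vec3 colorB;        // stop at t = 0.5
    uniform vec3 colorC;        // stop at t = 1.0
    varying vec3 worldPos;      // interpolated world-space position from the vertex shader

    void main()
    {
        // Project the position onto the gradient axis and clamp to [0,1].
        float t = clamp(dot(worldPos, gradientDir), 0.0, 1.0);

        // Two-segment, three-stop gradient.
        vec3 color = (t < 0.5)
            ? mix(colorA, colorB, t * 2.0)
            : mix(colorB, colorC, (t - 0.5) * 2.0);

        gl_FragColor = vec4(color, 1.0);
    }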