I'm working on a project with C++ and GLSL (4.1).
I have implemented a mirror object, which is a plane at height 0 that works as follows:
I render the scene with an MVP computed such that the camera position is mirrored (cam.x, cam.y, -cam.z)
I store the rendered scene in a framebuffer
I render the scene with a normal MVP
I render the mirror object with the fragment color computed as follows (size being the size of the texture I rendered into the framebuffer):
vec2 _uv = gl_FragCoord.xy / size; // normalize window coords to [0, 1]
_uv.y = 1.0 - _uv.y;               // flip vertically to undo the mirrored render
vec3 reflection = texture(tex_mirror, _uv).rgb;
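For context, the surrounding fragment shader looks roughly like this (a minimal sketch; the uniform and output names here are assumptions):
#version 410 core
uniform sampler2D tex_mirror; // scene rendered from the mirrored camera
uniform vec2 size;            // dimensions of that texture
out vec4 frag_color;
void main() {
    vec2 uv = gl_FragCoord.xy / size;
    uv.y = 1.0 - uv.y; // flip vertically to undo the mirrored render
    vec3 reflection = texture(tex_mirror, uv).rgb;
    frag_color = vec4(reflection, 1.0);
}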
This implementation works well, but I now need to work with a mirror which is not a plane. So I am looking for a way to look up the same texture I store in the framebuffer, but with the texture coordinates computed from the reflected vector (R = reflect(view_dir, vertex_normal)).
Is it possible? Or is there another way to do this lookup without having to compute an environment cubemap (which is way too costly since my environment is dynamic, so I would have to render six faces every frame)?
I've been looking for a way all afternoon and I am really stuck...
Thank you for your help !
I've written a GLSL shader to emulate a vintage arcade game's indexed color tile-based graphics. I made a couple of shaders, one that does this with point sprites, and another using polygons. The point sprite shader converts gl_PointCoord to a pixel coordinate within each tile like so:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(int(pixelFloat.x), int(pixelFloat.y));
// pixel is now used in conjunction with a tile 'ID' uniform
// to locate indexed colors with a texture lookup from a
// large texture representing the game's ROM, with GL_NEAREST filtering.
// very clever 😋
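The lookup those comments describe would amount to something like this (a sketch; the ROM layout, uniform names, and use of texelFetch are assumptions):
uniform isampler2D romTexture; // the game's ROM packed into one large texture
uniform int tileID;            // which tile this sprite displays
uniform int tileSizeInPixels;
// Tiles are assumed to be packed side by side in a single row.
ivec2 romCoord = ivec2(tileID * tileSizeInPixels, 0) + pixel;
int colorIndex = texelFetch(romTexture, romCoord, 0).r;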
The polygon shader instead uses an attribute buffer to pass pixel coordinates (which range {0.0 … 32.0} for a 32-pixel square tile, for example). After conversion to int, each fragment within the tile sees pixel coordinate values ranging x {0 … 31}, y {0 … 31}.
This worked fine apart from artefacts that sometimes showed at the tile edge with the higher-numbered pixel coordinate, at certain resolutions. I guessed this was due to a fragment landing exactly on the maximum value of either gl_PointCoord or the vertex attribute (i.e. exactly 32.0), causing that fragment to sample the wrong tile.
These artefacts went away when I clamped the pixel ivec like this:
vec2 pixelFloat = gl_PointCoord * tileSizeInPixels;
ivec2 pixel = ivec2(
min(int(pixelFloat.x), tileSizeInPixels - 1),
min(int(pixelFloat.y), tileSizeInPixels - 1));
which solved the problem and didn't introduce any new artefacts.
My question is: Is there some way of controlling the interpolation of gl_PointCoord or my pixel coordinate attribute such that we can guarantee the interpolated value will range
minimum value <= interpolated value < maximum value
as opposed to
minimum value <= interpolated value <= maximum value
Is there some way I can avoid using min() here?
NB: GL_CLAMP_* is not an option here, as the pixel coordinate is used to look up the pixel's index color from a much larger texture, which is essentially the game's sprite ROM loaded into a single large texture buffer.
I have a simple deferred rendering setup and I'm trying to get depth-map shadows to work. I know I have a simple error in my code because I can see the shadows working, but they move and lose resolution when I move my camera. This is not a shadow acne problem; the shadow is shifted entirely. I have also seen most of the similar questions here, but none solve my problem.
In my final shader, I have a texture for world space positions of pixels, and the depth map rendered from the light source, as well as the light source's model-view-projection matrix. The steps I take are:
1. Get the worldspace position of the pixel from the pre-rendered pass.
2. Multiply the worldspace position by the light source's model-view-projection matrix. I am using an orthographic projection (a directional light):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-0.0025, 0.0025, -0.0025, 0.0025, -0.1, 0.1);
glGetFloatv(GL_PROJECTION_MATRIX,m_lightViewProjectionMatrix);
I place the directional light at a fixed distance around my hero. Here is how I get my light's modelview matrix:
CSRTTransform lightTrans = CSRTTransform();
CVector camOffset = CVector(m_vPlanet[0].m_vLightPos);
camOffset.Normalize();
camOffset *= 0.007;
camOffset += hero.GetLocation(); // keep the light at a fixed offset from the hero
CVector LPN = m_vLightPos * (-1);
LPN.Normalize();
CQuaternion lRot;
lRot = CQuaternion(CVector(0, 1, 0), asin(LPN | CVector(-1, 0, 0))); // '|' is the dot product
lightTrans.m_qRotate = lRot;
lightTrans.m_vTranslate = camOffset;
m_worldToLightViewMatrix = lightTrans.BuildViewMatrix();
And the final light MVP matrix is:
CMatrix4 lightMVPMatrix = m_lightViewProjectionMatrix * m_worldToLightViewMatrix;
m_shHDR.SetUniformMatrix("LightMVPMatrix", lightMVPMatrix);
For now I am only rendering the shadow casters in my shadow pass. As you can see, the shadow pass seems fine: my hero is centered in the frame and rotated correctly. It is worth noting that these two matrices are also passed to the hero's vertex shader, and since the hero is rendered correctly in the shadow pass, they seem to be correct (shown here with extra contrast for visibility).
https://lh3.googleusercontent.com/-pxBZ5jnmlfM/V0SBT75yB1I/AAAAAAAABEY/007j_toVO7M41iyEiEnJgvr7K1m5GSceQCCo/s1024/shadow_pass.jpg
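For reference, the shadow-pass vertex shader being described comes down to something like this (a sketch; the model matrix and attribute names are assumptions):
uniform mat4 LightMVPMatrix; // light projection * light view
uniform mat4 ModelMatrix;    // the hero's model transform
in vec3 position;
void main() {
    gl_Position = LightMVPMatrix * ModelMatrix * vec4(position, 1.0);
}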
And finally, in my deferred shader I do:
vec4 projectedWorldPos = LightMVPMatrix * vec4(worldPos, 1.0);
vec3 projCoords = projectedWorldPos.xyz; // / projectedWorldPos.w;
projCoords = (projCoords * 0.5) + 0.5; // from [-1, 1] to [0, 1]
float calc_depth = projCoords.z;
float tD = texture2DRect(shadowPass, vec2(projCoords.x * 1280.0, projCoords.y * 720.0)).r;
return (calc_depth < tD);
I have the division by w commented out because I'm using an orthographic projection, where w stays 1; having it in produces the same result anyway.
https://lh3.googleusercontent.com/-b2i7AD_Nnf0/V0SGPyfelsI/AAAAAAAABFE/mvWnhcdQSbsU3l8sd0974jWDA94r6PkxACCo/s1024/render3.jpg
In certain positions of my hero, close to the initial position (1), the shadow looks fine. But as soon as I move the hero, the shadow moves incorrectly (2), and the further away I move, the more the shadow loses resolution (3).
It is worth noting that all my textures and passes are the same size (1280x720). I'm using an NVIDIA graphics card. The problem seems to be matrix-related, but as mentioned, the shadow pass renders OK, so I'm not sure what's going on...
Here's my situation: I need to draw a rectangle on the screen for my game's GUI. I don't really care how big this rectangle is or might be; I want to be able to handle any situation. How I'm doing it right now is I store a single VAO that contains only a very basic quad, then I re-draw this quad using uniforms to modify the size and position of it on the screen each time.
The VAO contains 4 vec4 vertices:
0, 0, 0, 0;
1, 0, 1, 0;
0, 1, 0, 1;
1, 1, 1, 1;
And then I draw it as a GL_TRIANGLE_STRIP. The XY of each vertex is its position, and the ZW is its texture co-ordinates*. I pass in the rect for the gui element I'm currently drawing as a uniform vec4, which offsets the vertex positions in the vertex shader like so:
vertex.xy *= guiRect.zw;
vertex.xy += guiRect.xy;
And then I convert the vertex from screen pixel co-ordinates into OpenGL NDC co-ordinates:
gl_Position = vec4(((vertex.xy / screenSize) * 2) -1, 0, 1);
This changes the range from [0, screenWidth | screenHeight] to [-1, 1].
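Put together, the vertex shader described so far looks roughly like this (a minimal sketch; the attribute and uniform names are assumptions):
uniform vec4 guiRect;    // x, y, width, height in screen pixels
uniform vec2 screenSize;
in vec4 vertex;          // xy = position, zw = texture co-ordinates
out vec2 vTexCoord;
void main() {
    vec2 pos = vertex.xy * guiRect.zw + guiRect.xy; // scale and place the quad
    gl_Position = vec4(((pos / screenSize) * 2.0) - 1.0, 0.0, 1.0);
    vTexCoord = vertex.zw; // stretches the texture; wrapping is the open question
}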
My problem comes in when I want to do texture wrapping. Simply passing vTexCoord = vertex.zw; is fine when I want to stretch a texture, but not for wrapping. Ideally, I want to modify the texture co-ordinates such that 1 pixel on the screen is equal to 1 texel in the gui texture. Texture co-ordinates going beyond [0, 1] is fine at this stage, and is in fact exactly what I'm looking for.
I plan to implement texture atlasses for my gui textures, but managing the offsets and bounds of the appropriate sub-texture will be handled in the fragment shader - as far as the vertex shader is concerned, our quad is using one solid texture with [0, 1] co-ordinates, and wrapping accordingly.
*Note: I'm aware that this particular vertex format isn't necessarily useful for this particular case; I could be using vec2 vertices instead. For the sake of convenience I'm using the same vertex format for all of my 2D rendering, and other objects (e.g. text) actually do need those ZW components. I might change this in the future.
TL/DR: Given the size of the screen, the size of a texture, and the location/size of a quad, how do you calculate texture co-ordinates in a vertex shader such that pixels and texels have a 1:1 correspondence, with wrapping?
That is really very easy math: you just need to relate the two spaces in some way, and you already formulated the rule which allows you to do so: a window-space pixel is to map to a texel.
Let's assume we have both vec2 screenSize and vec2 texSize which are the unnormalized dimensions in pixels/texels.
I'm not 100% sure what exactly you want to achieve. There is something missing: you did not actually specify where the origin of the texture shall lie. Should it always be at the bottom-left corner of the quad? Or should it be globally at the bottom-left corner of the viewport? I'll assume the latter here, but it should be easy to adjust this for the first case.
What we now need is a mapping from the [-1,1]^2 NDC range in x and y to s and t. Let's first map it to [0,1]^2. Once we have that, we can simply multiply the coords by screenSize/texSize to get the desired effect. So in the end, you get
vec2 texcoords = ((gl_Position.xy * 0.5) + 0.5) * screenSize/texSize;
Of course you have already calculated ((gl_Position.xy * 0.5) + 0.5) * screenSize implicitly, so this can be simplified to:
vec2 texcoords = vertex.xy / texSize;
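Combined with the position math from the question, the complete vertex shader would look roughly like this (a sketch, assuming the names used above):
uniform vec4 guiRect;
uniform vec2 screenSize;
uniform vec2 texSize;
in vec4 vertex;
out vec2 vTexCoord;
void main() {
    vec2 pos = vertex.xy * guiRect.zw + guiRect.xy; // quad placement in pixels
    gl_Position = vec4(((pos / screenSize) * 2.0) - 1.0, 0.0, 1.0);
    vTexCoord = pos / texSize; // 1 pixel == 1 texel, origin at the viewport corner
}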
I'm using SOIL in my project, and I need to take in a single texture and then convert it into an array of textures using different parts of the first texture (to use a sprite sheet).
I'm using SDL and OpenGL by the way.
The typical way to use sprite sheets with a modern 3D API like OpenGL is to use texture coordinates to address different parts of your individual texture. While you could split it up, it is much more resource-friendly to use texture coordinates.
For example, if you had a simple sprite sheet with 3 frames horizontally, each 32 pixels by 32 pixels (for a total size of 96x32), you would use the following code to draw the 3rd frame:
// I assume you have bound your source texture
// This is the U coordinate's origin in texture space
float xStart = 64.0f / 96.0f;
// This is one frame width in texture space
float xIncrement = 32.0f / 96.0f;
glBegin(GL_QUADS);
glTexCoord2f(xStart, 0.0f);
glVertex2f(-16.0f, 16.0f); // top-left
glTexCoord2f(xStart, 1.0f);
glVertex2f(-16.0f, -16.0f); // bottom-left
glTexCoord2f(xStart + xIncrement, 1.0f);
glVertex2f(16.0f, -16.0f); // bottom-right
glTexCoord2f(xStart + xIncrement, 0.0f);
glVertex2f(16.0f, 16.0f); // top-right
glEnd();
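In a shader-based pipeline the same frame selection can live in the vertex shader; a minimal GLSL sketch (the attribute and uniform names, and the single-row sheet layout, are assumptions):
uniform int frameIndex; // which frame of the sheet to show
uniform vec2 frameSize; // e.g. vec2(32.0, 32.0)
uniform vec2 sheetSize; // e.g. vec2(96.0, 32.0)
in vec2 position;
in vec2 baseUV;         // [0, 1] within a single frame
out vec2 vTexCoord;
void main() {
    // Offset by whole frames, then convert pixels to texture space.
    vTexCoord = (vec2(frameIndex, 0) + baseUV) * frameSize / sheetSize;
    gl_Position = vec4(position, 0.0, 1.0);
}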
I need to get the color at a particular coordinate from a texture. There are two ways I can do this: by getting and looking at the raw PNG data, or by sampling my generated OpenGL texture. Is it possible to sample an OpenGL texture to get the color (RGBA) at a given UV or XY coord? If so, how?
Off the top of my head, your options are
Fetch the entire texture using glGetTexImage() and check the texel you're interested in.
Draw the texel you're interested in (e.g. by rendering a GL_POINTS primitive), then grab the pixel where you rendered it from the framebuffer by using glReadPixels.
Keep a copy of the texture image handy and leave OpenGL out of it.
Options 1 and 2 are horribly inefficient (although you could speed 2 up somewhat by using pixel buffer objects and doing the copy asynchronously). So my favourite by FAR is option 3.
Edit: If you have the GL_APPLE_client_storage extension (ie. you're on a Mac or iPhone) then that's option 4 which is the winner by a long way.
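For completeness, option 1 comes down to something like this in C++ (a sketch, assuming a currently bound RGBA8 GL_TEXTURE_2D):
#include <vector>
// Query the texture's dimensions at mip level 0.
GLint w = 0, h = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
// Pull the whole image back from the GPU -- this is the slow part.
std::vector<unsigned char> pixels(static_cast<size_t>(w) * h * 4);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
// Index the texel at (x, y); row 0 is the bottom row in OpenGL.
int x = 0, y = 0; // the texel of interest
size_t i = (static_cast<size_t>(y) * w + x) * 4;
unsigned char r = pixels[i], g = pixels[i + 1], b = pixels[i + 2], a = pixels[i + 3];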
The most efficient way I've found to do it is to access the texture data on the CPU (you should have your PNG decoded to make it into a texture anyway) and interpolate between the texels yourself. Assuming your texcoords are in [0, 1], multiply texwidth * u and texheight * v and then use that to find the position on the texture. If they're whole numbers, just use the pixel directly; otherwise use the integer parts to find the bordering pixels and interpolate between them based on the fractional parts.
Here's some HLSL-like pseudocode for it. Should be fairly clear:
float3 sample(float2 coord, texture tex) {
    float x = tex.w * coord.x; // X position on the texture, in texels
    int ix = (int) x;          // whole-texel part of X
    float y = tex.h * coord.y;
    int iy = (int) y;
    float3 tl = getTexel(ix, iy);         // top-left texel
    float3 tr = getTexel(ix + 1, iy);     // top-right texel
    float3 bl = getTexel(ix, iy + 1);     // bottom-left texel
    float3 br = getTexel(ix + 1, iy + 1); // bottom-right texel
    float3 top = interpolate(tl, tr, frac(x));    // blend the top pair by the fractional X
    float3 bottom = interpolate(bl, br, frac(x)); // blend the bottom pair
    return interpolate(top, bottom, frac(y));     // blend vertically by the fractional Y
}
As others have suggested, reading back a texture from VRAM is horribly inefficient and should be avoided like the plague if you're even remotely interested in performance.
Two workable solutions as far as I know:
Keep a copy of the pixel data handy (wastes memory, though)
Do it using a shader
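The shader route amounts to rendering a single fragment that samples the texel you want, then reading it back with glReadPixels; a minimal sketch of the fragment shader (the uniform names are assumptions):
uniform sampler2D tex;
uniform vec2 lookupUV; // the coordinate you want to inspect
out vec4 color;
// Draw one point (or a 1x1 viewport quad) into a small FBO,
// then fetch the result on the CPU with glReadPixels.
void main() {
    color = texture(tex, lookupUV);
}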