OpenGL subimages using pixel coordinates - C++

I've worked through a couple of tutorials in the breakout series on learnopengl.com, so I have a very simple 2D renderer. I want to add a subimage feature to it, though, where I can specify a vec4 for a kind of "source rectangle", so if the vec4 was (10, 10, 32, 32), it would only render a rectangle at 10, 10 with a width and height of 32, kind of like how the SDL renderer works.
The way the renderer is set up is that there is a quad VAO which all the sprites use, which contains the texture coordinates. Initially, I thought I could use an array of VAOs for each sprite, each with different texture coordinates, but I'd like to be able to change the source rectangle before the sprite gets drawn, to make things like animation easier... My second idea was to have a separate uniform vec4 passed into the fragment shader for the source rectangle, but how do I only render that section in pixel coordinates?

Use the primitive type GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN to render a quad. Use integral one-dimensional vertex coordinates instead of floating-point vertex coordinates: the vertex coordinates are simply the indices of the quad corners. For a GL_TRIANGLE_FAN they are:
vertex 1: 0
vertex 2: 1
vertex 3: 2
vertex 4: 3
Set the rectangle definition (10, 10, 32, 32) in the vertex shader using a uniform variable of type vec4. With this information, you can calculate the vertex coordinate in the vertex shader:
in int cornerIndex;

uniform vec4 rectangle;

void main()
{
    vec2 vertexArray[4] =
        vec2[4](rectangle.xy, rectangle.zy, rectangle.zw, rectangle.xw);
    vec2 vertex = vertexArray[cornerIndex];

    // [...]
}
The Vertex Shader provides the built-in input gl_VertexID, which specifies the index of the vertex currently being processed. This variable could be used instead of cornerIndex in this case. Note that it is not necessary for the vertex shader to have any explicit input.
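If you go the gl_VertexID route, a hedged sketch of the matching C++ draw call; "program" is assumed to be your linked shader program. Note that the shader above reads rectangle.zw as the opposite corner, so a 32x32 rectangle at (10, 10) would be passed as (10, 10, 42, 42). With gl_VertexID no vertex buffer is needed, only a (possibly empty) VAO bound for the draw:

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);   // core profile requires a VAO even without attributes

glUseProgram(program);    // program: your linked shader program (assumed name)
glUniform4f(glGetUniformLocation(program, "rectangle"),
            10.0f, 10.0f, 42.0f, 42.0f);   // (x0, y0, x1, y1)
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);       // gl_VertexID runs 0..3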

I ended up doing this in the vertex shader. I passed in the vec4 as a uniform to the vertex shader, as well as the size of the image, and used the below calculation:
// convert pixel coordinates to texture coordinates
float widthPixel  = 1.0f / u_imageSize.x;
float heightPixel = 1.0f / u_imageSize.y;

float startX = u_sourceRect.x, startY = u_sourceRect.y,
      width  = u_sourceRect.z, height = u_sourceRect.w;

v_texCoords = vec2(widthPixel  * startX + width  * widthPixel  * texPos.x,
                   heightPixel * startY + height * heightPixel * texPos.y);
v_texCoords is a varying that the fragment shader uses to map the texture.
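For reference, a minimal sketch of the C++ side that would feed this shader; "program", "texWidth", and "texHeight" are assumed names, not from the original code:

glUseProgram(program);
glUniform4f(glGetUniformLocation(program, "u_sourceRect"), 10.0f, 10.0f, 32.0f, 32.0f);
glUniform2f(glGetUniformLocation(program, "u_imageSize"), texWidth, texHeight);
// draw the sprite quad as usual; update u_sourceRect between draws for animation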

Related

sampling GL_DEPTH_COMPONENTs of type GL_UNSIGNED_SHORT in GLSL shader

I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader:
#version 430

layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;

uniform float width;
uniform float height;
uniform sampler2D depth_feed;

void main() {
    // get index in global work group, i.e. x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);

    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify that, you can compute a power of 1.0 - visibility to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
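If you go the linearization route, a minimal GLSL sketch of the standard formula; the uniforms near and far are assumptions and must match the projection that produced the depth values:

uniform float near;   // e.g. 0.1
uniform float far;    // e.g. 100.0

float linearizeDepth(float depth)
{
    float ndc = depth * 2.0 - 1.0;   // map [0, 1] back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near));
}

// e.g. in main(), divide by far to map the result back into [0, 1] for display:
// float visibility = linearizeDepth(texture(depth_feed, sample_coords).r) / far;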

Given a coordinate in 2D clip space, how can I sample the background texture

Consider a texture with the same dimensions as gl.canvas. What would be the proper method to sample a pixel from the texture at the same screen location as a clip-space coordinate (-1,-1 to 1,1)? Currently I'm using:
// u_screen = vec2 of canvas dimensions
// u_data   = sampler2D of texture to sample
vec2 clip_coord = vec2(-0.25, -0.25);
vec2 tex_pos = (clip_coord / u_screen) + 0.5;
vec4 sample = texture2D(u_data, tex_pos);
This works but doesn't seem to properly take the canvas size into account, and tex_pos seems offset the closer clip_coord gets to -1 or 1.
In the following it is assumed that the texture has the same size as the canvas and is rendered 1:1 over the entire canvas, as mentioned in the question.
If clip_coord is a 2-dimensional fragment shader input where each component is in the range [-1, 1], then you have to map the coordinate to the range [0, 1]:
vec4 sample = texture2D(u_data, clip_coord*0.5+0.5);
Note, the texture coordinates range from 0.0 to 1.0.
Another possibility is to use gl_FragCoord. gl_FragCoord is a fragment shader built-in variable and contains the window relative coordinates of the fragment.
If you use WebGL 2.0 (GLSL ES 3.00), then gl_FragCoord.xy can be used for the texture lookup of a 2-dimensional texture, by texelFetch, to get the texel which corresponds to the fragment:
vec4 sample = texelFetch(u_data, ivec2(gl_FragCoord.xy), 0);
Note, texelFetch performs a lookup of a single texel. You can think about the coordinate as the 2 dimensional texel index.
If you use WebGL 1.0 (GLSL ES 1.00), then gl_FragCoord.xy can be divided by the (2-dimensional) size of the texture. The result can be used for a texture lookup, by texture2D, to get the texel which corresponds to the fragment:
vec4 sample = texture2D(u_data, gl_FragCoord.xy / u_size);

How to reduce the number of draw calls when drawing a large number of tiles?

I'm trying to develop a map for a 2D tile-based game. The approach I'm using is to store the map images in one large texture (tileset) and draw only the desired tiles on screen, updating the positions through the vertex shader. However, even a 10x10 map involves 100 glDrawArrays calls; looking at the task manager, this consumes 5% CPU and 4-5% GPU. Imagine a complete game with dozens of calls. Is there a way to optimize this, such as preparing the whole scene and making just 1 draw call, drawing everything at once, or some other approach?
void GameMap::draw() {
    m_shader->use();
    m_texture->bind();
    glBindVertexArray(m_quadVAO);
    for (size_t r = 0; r < 10; r++) {
        for (size_t c = 0; c < 10; c++) {
            m_tileCoord->setX(c * m_tileHeight);
            m_tileCoord->setY(r * m_tileHeight);
            m_tileCoord->convert2DToIso();
            drawTile(0);
        }
    }
    glBindVertexArray(0);
}

void GameMap::drawTile(GLint index) {
    glm::mat4 position_coord = glm::mat4(1.0f);
    glm::mat4 texture_coord = glm::mat4(1.0f);
    m_srcX = index * m_tileWidth;
    GLfloat clipX = m_srcX / m_texture->m_width;
    GLfloat clipY = m_srcY / m_texture->m_height;
    texture_coord = glm::translate(texture_coord, glm::vec3(glm::vec2(clipX, clipY), 0.0f));
    position_coord = glm::translate(position_coord, glm::vec3(glm::vec2(m_tileCoord->getX(), m_tileCoord->getY()), 0.0f));
    position_coord = glm::scale(position_coord, glm::vec3(glm::vec2(m_tileWidth, m_tileHeight), 1.0f));
    m_shader->setMatrix4("texture_coord", texture_coord);
    m_shader->setMatrix4("position_coord", position_coord);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}
-- Vertex Shader

#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>

out vec4 TexCoords;

uniform mat4 texture_coord;
uniform mat4 position_coord;
uniform mat4 projection;

void main()
{
    TexCoords = texture_coord * vec4(vertex.z, vertex.w, 1.0, 1.0);
    gl_Position = projection * position_coord * vec4(vertex.xy, 0.0, 1.0);
}

-- Fragment Shader

#version 330 core
out vec4 FragColor;

in vec4 TexCoords;

uniform sampler2D image;
uniform vec4 spriteColor;

void main()
{
    FragColor = vec4(spriteColor) * texture(image, vec2(TexCoords.x, TexCoords.y));
}
The Basic Technique
The first thing you want to do is set up your 10x10 grid vertex buffer. Each square in the grid is actually two triangles. And all the triangles will need their own vertices because the UV coordinates for adjacent tiles are not the same, even though the XY coordinates are the same. This way each triangle can copy the area out of the texture atlas that it needs to and it doesn't need to be contiguous in UV space.
Here's how the vertices of two adjacent quads in the grid will be set up:
1: xy=(0,0) uv=(Left0 ,Top0)
2: xy=(1,0) uv=(Right0,Top0)
3: xy=(1,1) uv=(Right0,Bottom0)
4: xy=(1,1) uv=(Right0,Bottom0)
5: xy=(0,1) uv=(Left0 ,Bottom0)
6: xy=(0,0) uv=(Left0 ,Top0)
7: xy=(1,0) uv=(Left1 ,Top1)
8: xy=(2,0) uv=(Right1,Top1)
9: xy=(2,1) uv=(Right1,Bottom1)
10: xy=(2,1) uv=(Right1,Bottom1)
11: xy=(1,1) uv=(Left1 ,Bottom1)
12: xy=(1,0) uv=(Left1 ,Top1)
These 12 vertices define 4 triangles. The Top, Left, Bottom, Right UV coordinates for the first square can be completely different from the coordinates of the second square, thus allowing each square to be textured by a different area of the texture atlas: each triangle's UV coordinates map to its own tile in the atlas.
In your case with your 10x10 grid, you would have 100 quads, or 200 triangles. With 200 triangles at 3 vertices each, that's 600 vertices to define, but it's a single draw call of 200 triangles (600 vertices). Each vertex has its own x, y, u, v coordinates. To change which tile a quad shows, you update the uv coordinates of its 6 vertices in the vertex buffer.
You will likely find that this is the most convenient and efficient approach.
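A hedged C++ sketch of building that buffer; the Vertex struct and the atlasRectFor helper are illustrative stand-ins for however your atlas is laid out, not names from the original code:

struct Vertex { float x, y, u, v; };

// atlasRectFor(r, c): hypothetical helper returning the normalized UV
// rectangle (left, top, right, bottom) of the tile assigned to cell (r, c)
std::vector<Vertex> verts;
verts.reserve(10 * 10 * 6);
for (int r = 0; r < 10; ++r) {
    for (int c = 0; c < 10; ++c) {
        glm::vec4 uv = atlasRectFor(r, c);   // (L, T, R, B)
        float x = float(c), y = float(r);
        // two triangles (six vertices) per quad
        verts.push_back({x,     y,     uv.x, uv.y});
        verts.push_back({x + 1, y,     uv.z, uv.y});
        verts.push_back({x + 1, y + 1, uv.z, uv.w});
        verts.push_back({x + 1, y + 1, uv.z, uv.w});
        verts.push_back({x,     y + 1, uv.x, uv.w});
        verts.push_back({x,     y,     uv.x, uv.y});
    }
}
// with your VBO bound to GL_ARRAY_BUFFER:
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex), verts.data(), GL_DYNAMIC_DRAW);

// later, one call draws the whole map:
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());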
Advanced Approaches
There are more memory efficient or convenient ways of setting this up with multiple streams to reduce duplication of vertices and leverage shaders to do the work of setting it up if you're willing to trade off computation time for memory or convenience. Find the balance that is right for you. But you should grasp the basic technique first before trying to optimize.
But in the multiple-stream approach, you could:

- specify all the xy vertices separately from all the uv vertices to avoid duplication;
- specify a second set of texture coordinates that is just the top-left corner of the tile in the atlas, let the uv coordinates run from (0,0) (top left) to (1,1) (bottom right) for each quad, and let your shader scale and translate the uv coordinates to arrive at the final texture coordinates;
- specify a single uv coordinate of the top-left corner of the source area for each primitive and let a geometry shader complete the squares;
- even smarter, specify only the x,y coordinates (omitting the uv coordinates entirely) and, in your vertex shader, sample a texture that contains the "tile numbers" of each quad: you sample this texture at coordinates based on the x,y values in the grid, then transform the value you read into the uv coordinates in the atlas. To change the tile in this system, you just change one pixel in the tile-map texture (see the sketch after this list);
- finally, skip generating the primitives entirely and derive everything from a single list sent to the geometry shader, generating the x,y coordinates of the grid which get sent downstream to the vertex shader to complete the triangle geometry and uv coordinates. This is the most memory efficient, but relies on the GPU to compute the setup at runtime.
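To make the tile-map-texture idea concrete, a hypothetical GLSL sketch (not code from this answer): no vertex attributes at all, the cell and corner are derived from gl_VertexID, and the tile index comes from a one-texel-per-cell integer texture. All uniform names here are assumptions. Draw it with glDrawArrays(GL_TRIANGLES, 0, mapWidth * mapHeight * 6):

#version 330 core

out vec2 TexCoords;

uniform usampler2D tileMap;   // R channel holds the tile index for each cell (assumed)
uniform vec2 atlasTiles;      // tile columns/rows in the atlas, e.g. (16.0, 16.0)
uniform int mapWidth;         // cells per row of the map, e.g. 10
uniform mat4 projection;

const vec2 corners[6] = vec2[6](
    vec2(0.0, 0.0), vec2(1.0, 0.0), vec2(1.0, 1.0),
    vec2(1.0, 1.0), vec2(0.0, 1.0), vec2(0.0, 0.0));

void main()
{
    int quad = gl_VertexID / 6;                       // which tile
    ivec2 cell = ivec2(quad % mapWidth, quad / mapWidth);
    vec2 corner = corners[gl_VertexID % 6];           // which corner of its quad

    uint tile = texelFetch(tileMap, cell, 0).r;       // look up the tile number
    uint cols = uint(atlasTiles.x);
    vec2 tileOrigin = vec2(float(tile % cols), float(tile / cols));

    TexCoords = (tileOrigin + corner) / atlasTiles;   // uv into the atlas
    gl_Position = projection * vec4(vec2(cell) + corner, 0.0, 1.0);
}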
With a static 6-vertices-per-quad setup, you free up GPU processing at the cost of a little extra memory. Depending on what you need for performance, you may find that using more memory to get higher fps is desirable. Vertex buffers are tiny compared to textures anyway.
So, as I said, start with the basic technique first; it's likely the optimal solution for performance as well, especially if your map doesn't change very often.
You can upload all the parameters to GPU memory and draw everything with a single draw call.
That way it's not necessary to update the vertex shader uniforms per tile, and the CPU load should be close to zero.
It's been 3 years since I used OpenGL, so I can only point you in the right direction.
Start with some reading material, for instance:
https://ferransole.wordpress.com/2014/07/09/multidrawindirect/
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glDrawArraysIndirect.xhtml
Also, keep in mind this is GL 4.x stuff; check the GL version support of your target platform (software and hardware).
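As a hedged sketch of the indirect path (glDrawArraysIndirect is GL 4.0+; the glMultiDrawArraysIndirect variant from the links is 4.3+), assuming the single grid vertex buffer from the basic technique above:

// the command layout is fixed by the GL spec
struct DrawArraysIndirectCommand {
    GLuint count;          // vertices to draw
    GLuint instanceCount;  // 1 for non-instanced drawing
    GLuint first;          // first vertex
    GLuint baseInstance;   // 0 unless instancing
};

DrawArraysIndirectCommand cmd = { 600, 1, 0, 0 };   // 10x10 tiles * 6 vertices

GLuint indirectBuf;
glGenBuffers(1, &indirectBuf);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
glBufferData(GL_DRAW_INDIRECT_BUFFER, sizeof(cmd), &cmd, GL_STATIC_DRAW);

// parameters are read from the buffer bound to GL_DRAW_INDIRECT_BUFFER
glDrawArraysIndirect(GL_TRIANGLES, nullptr);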

GLSL Shader to convert six textures to Equirectangular projection

I want to create an equirectangular projection from six square textures, similar to converting a cubic projection image to an equirectangular image, but with the separate faces as textures instead of one texture in cubic projection.
I'd like to do this on the graphics card for performance reasons, and therefore want to use a GLSL Shader.
I've found a shader that converts a cubemap texture to an equirectangular one: link
Step 1: Copy your six textures into a cube map texture. You can do this by binding the textures to FBOs and using glBlitFramebuffer().
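A hedged C++ sketch of step 1; faceTex, cubeMap, and size are illustrative names, and the faces are assumed to be square and the same size as the cube map:

GLuint readFbo, drawFbo;
glGenFramebuffers(1, &readFbo);
glGenFramebuffers(1, &drawFbo);
for (int i = 0; i < 6; ++i) {
    // source: the i-th face texture (assumed name faceTex)
    glBindFramebuffer(GL_READ_FRAMEBUFFER, readFbo);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, faceTex[i], 0);
    // destination: the matching cube map face (assumed name cubeMap)
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, drawFbo);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, cubeMap, 0);
    glBlitFramebuffer(0, 0, size, size, 0, 0, size, size,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);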
Step 2: Run the following fragment shader. You will need to vary the Coord attribute from (-1,-1) to (+1,+1) over the quad.
#version 330

// X from -1..+1, Y from -1..+1
in vec2 Coord;

out vec4 Color;

uniform samplerCube Texture;

void main() {
    // Convert to (lon, lat) angle
    vec2 a = Coord * vec2(3.14159265, 1.57079633);
    // Convert to cartesian coordinates
    vec2 c = cos(a), s = sin(a);
    Color = texture(Texture, vec3(vec2(s.x, c.x) * c.y, s.y));
}

OpenGL GLSL render to texture

I'm trying to render to a texture with OpenGL + GLSL shaders. To start, I'm trying to fill every pixel of a 30x30 texture with white. I'm passing an index from 0 to 899 to the vertex shader, representing each pixel of the texture. Is this correct?
Vertex shader:
flat in int index;

void main(void) {
    gl_Position = vec4((index % 30) / 15 - 1, floor(index / 30) / 15 - 1, 0, 1);
}
Fragment shader:
out vec4 color;

void main(void) {
    color = vec4(1, 1, 1, 1);
}
You are trying to render 900 vertices, with one vertex per pixel? Why are you doing that? What primitive type are you using? It would only make sense if you were using points, but then you would need a slight modification of the output coordinates to actually hit the fragment centers.
The usual way to do this is to render just a quad (easily represented as a triangle strip with just 4 vertices) which fills the whole framebuffer. To achieve this, you just need to set the viewport to the full framebuffer and render a quad from (-1,-1) to (1,1).
Note that in both approaches, you don't need vertex attributes. You could just use gl_VertexID (directly as a replacement for index in your approach, or as an index into a const array of 4 vertex coordinates for the quad).
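For example, a minimal sketch of the second variant: an attribute-less full-screen quad, drawn with glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) on an empty VAO:

#version 330 core

// corners of a full-viewport quad, laid out as a triangle strip
const vec2 corners[4] = vec2[4](
    vec2(-1.0, -1.0), vec2(1.0, -1.0),
    vec2(-1.0,  1.0), vec2(1.0,  1.0));

void main(void) {
    gl_Position = vec4(corners[gl_VertexID], 0.0, 1.0);
}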