I am trying to reconstruct an image that has been rendered by column.
A counter cpt_x is incremented in a loop from 0 to 4.
On each pass, only one pixel out of every 5 is displayed.
Thus, during the first pass, the pixels 0, 5, 10, 15, 20, 25, etc. are displayed. => cpt_x = 0
then in the second pass, pixels 1, 6, 11, 16, 21, 26, etc. are displayed. => cpt_x = 1
in the third pass, pixels 2, 7, 12, 17, 22, 27, etc. are displayed. => cpt_x = 2
in the fourth pass, pixels 3, 8, 13, 18, 23, 28, etc. are displayed. => cpt_x = 3
in the fifth pass, pixels 4, 9, 14, 19, 24, 29 etc. => cpt_x = 4
The last step reconstructs the image, since all the pixels have been created.
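Put differently, the mapping between a final-image column x and the pass that produces it can be sketched like this (a minimal illustration, assuming exactly 5 passes and that each pass writes its pixels at their final positions):
const int NUM_PASSES = 5;
// the pass (i.e. the cpt_x value) that owns final-image column x
int passOfColumn(int x) { return x % NUM_PASSES; }
// which of that pass's written columns this is
int columnInPass(int x) { return x / NUM_PASSES; } // integer division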
Image generation works fine; I can even reconstruct the final image afterwards with an offset copy into a buffer texture using:
glCopyTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, cpt_x, 0, 1920, 1080);
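// for reference: glCopyTexSubImage2D(target, level, xoffset, yoffset, x, y, width, height)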
As I now need different cpt_x values for each pixel, I can't use this trick anymore.
I tried to reconstruct the image in a fragment shader, but nothing is displayed.
The goal of this shader is to copy the pixels of the image to their final locations. It will be called 5 times, once per generated 1/5 of the final image, copying the pixels of the current cpt_x pass into the buffer holding the final image.
The first pixels of the first line must be assembled as follows:
1st pixel of image 1, 1st pixel of image 2, [...], 1st pixel of image 5, 2nd pixel of image 1, 2nd pixel of image 2, [...], 2nd pixel of image 5, etc.
#version 330 core
out vec4 FragColor;
in vec2 TexCoords;
uniform int max_x; // equal to 4
uniform int cpt_x; // from 0 to 4
uniform sampler2D my_texture; // this texture contains only 1 filled column every 5 pixels
void main()
{
int coord_x = int(floor(gl_FragCoord.x / max_x));
vec2 pixel_size = 1.0 / vec2(textureSize(my_texture, 0));
vec4 res = texture(my_texture, vec2(coord_x + cpt_x * pixel_size.x, TexCoords.y));
if (res.a != 0.0)
FragColor = res;
}
As I said, nothing is displayed. I suspect the coord_x computation, since I think it's a coordinate-space problem.
I wasn't working in the same space for all my variables. The code is simpler now:
#version 330 core
out vec4 FragColor;
uniform vec2 max;
uniform vec2 cpt;
uniform sampler2D my_texture;
void main()
{
vec2 uv = (gl_FragCoord.xy + cpt) / vec2(textureSize(my_texture, 0));
FragColor = texture(my_texture, uv);
}
Related
I'm currently having a problem with my compute shader failing to properly get an element at a certain index of an input array.
I've read the buffers manually using NVidia NSight and it seems to be input properly, the problem seems to be with indexing.
It's supposed to draw voxels on a grid; take this case as an example (what is supposed to be drawn is highlighted in red, while blue is what I am getting):
And here is the SSBO buffer capture in NSight transposed:
This is the compute shader I'm currently using:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
layout(std430) buffer;
layout(binding = 0) buffer Input0 {
ivec2 mapSize;
};
layout(binding = 1) buffer Input1 {
bool mapGrid[];
};
void main() {
// base pixel colour for image
vec4 pixel = vec4(1, 1, 1, 1);
// get index in global work group i.e x,y position
ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
vec2 normalizedPixCoords = vec2(gl_GlobalInvocationID.xy) / gl_NumWorkGroups.xy;
ivec2 voxel = ivec2(int(normalizedPixCoords.x * mapSize.x), int(normalizedPixCoords.y * mapSize.y));
float distanceFromMiddle = length(normalizedPixCoords - vec2(0.5, 0.5));
pixel = vec4(0, 0, mapGrid[voxel.x * mapSize.x + voxel.y], 1); // <--- Where I'm having the problem
// I index the voxels the same exact way on the CPU code and it works fine
// output to a specific pixel in the image
//imageStore(img_output, pixel_coords, pixel * vec4(vignettecolor, 1) * imageLoad(img_output, pixel_coords));
imageStore(img_output, pixel_coords, pixel);
}
NSight doc file: https://ufile.io/wmrcy1l4
I was able to fix the problem by completely ditching SSBOs and using a texture buffer. It turns out the problem was that OpenGL treated each value as a 4-byte value and stepped 4 bytes instead of one for each index.
Based on this post: Shader storage buffer object with bytes
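For reference, a minimal sketch of what the buffer-texture variant can look like (illustrative only: it assumes the grid is uploaded as one byte per cell into a GL_R8UI buffer texture via glTexBuffer, and that mapSize is passed as a plain uniform instead of an SSBO):
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform ivec2 mapSize;
// each texelFetch on a GL_R8UI buffer texture steps exactly one byte
layout(binding = 1) uniform usamplerBuffer mapGrid;
void main() {
ivec2 pixel_coords = ivec2(gl_GlobalInvocationID.xy);
vec2 normalizedPixCoords = vec2(gl_GlobalInvocationID.xy) / gl_NumWorkGroups.xy;
ivec2 voxel = ivec2(int(normalizedPixCoords.x * mapSize.x), int(normalizedPixCoords.y * mapSize.y));
// same flat index as before, but each element is now one byte wide
uint cell = texelFetch(mapGrid, voxel.x * mapSize.x + voxel.y).r;
imageStore(img_output, pixel_coords, vec4(0, 0, float(cell), 1));
}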
I'm trying to render a bunch of small axis-aligned (2d) quads in Vulkan, and rather than using a non-indexed draw call, I thought to try and minimize transfer overhead and use indexed draw with the following scheme:
#version 450
layout(location = 0) in vec2 inTopleft;
layout(location = 1) in vec2 inExtent;
vec2 positions[6] = vec2[](
vec2(0, 0),
vec2(0, 1),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1),
vec2(1, 0)
);
void main() {
vec2 position = positions[gl_VertexIndex % 6];
gl_Position = vec4(inTopleft + position * inExtent, 0, 1);
}
That way I only need to send one vertex per quad, and then I just put the same vertex six times in the index buffer like:
index_buffer = [0,0,0,0,0,0, 1,1,1,1,1,1, 2,2,2,2,2,2, ... n,n,n,n,n,n]
but this scheme doesn't seem to work, because gl_VertexIndex, I suspect, is giving the value of the element in the index_buffer, right? I mean, for the first quad gl_VertexIndex is 0 for all six vertices, for the second it is 1 for all six vertices, and so on. It's not actually giving 0,1,2,3,4,5 for the first quad, 6,7,8,9,10,11 for the second quad, and so on.
Is that right? And if so, is there any way to do what I'm trying to do?
So I ended up using an instance draw call (non-indexed) with one instance per quad, and that seems to about double performance (200 fps -> 500fps, rendering about 10k quads), at least on my graphics card (NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)).
What I mean is that each quad gets its own instance, so the draw call becomes draw(nvertexes=6, ninstances=nquads), and the shader can change from:
- vec2 position = positions[gl_VertexIndex % 6];
+ vec2 position = positions[gl_VertexIndex];
and of course the vertex buffer is now per instance instead of per vertex.
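For completeness, a minimal sketch of the instanced variant of the vertex shader above (assuming the vertex input binding for inTopleft/inExtent is set to VK_VERTEX_INPUT_RATE_INSTANCE on the pipeline side):
#version 450
// per-instance attributes: one quad per instance
layout(location = 0) in vec2 inTopleft;
layout(location = 1) in vec2 inExtent;
vec2 positions[6] = vec2[](
vec2(0, 0),
vec2(0, 1),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1),
vec2(1, 0)
);
void main() {
// gl_VertexIndex runs 0..5 within the non-indexed draw of 6 vertices
vec2 position = positions[gl_VertexIndex];
gl_Position = vec4(inTopleft + position * inExtent, 0, 1);
}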
How can I use different color outputs within a fragment shader?
Say, my vshader looks like this:
#version 330
uniform mat4 mvpmatrix;
layout(location=0) in vec4 position;
layout(location=1) in vec2 texcoord;
out vec2 out_texcoord;
void main()
{
gl_Position = mvpmatrix * position;
out_texcoord = texcoord;
}
// fshader
#version 330
uniform sampler2D texture;
in vec2 out_texcoord;
out vec4 out_color;
out vec4 out_color2;
void main()
{
out_color = texture2D(texture, out_texcoord);
// out_color2 = vec4(1.0, 1.0, 1.0, 1.0);
}
Accessing them like so:
m_program->enableAttributeArray(0); // position
m_program->setAttributeBuffer(0, GL_FLOAT, 0, 3, sizeof(Data));
m_program->enableAttributeArray(1); // texture
m_program->setAttributeBuffer(1, GL_FLOAT, sizeof(QVector3D), 2, sizeof(Data));
So far, everything uses the default output of the fragment shader, which is a texture. But how can I access different fragment outputs? Do I have to use layouts there as well? And, it's probably a dumb question, but are layout locations of the vshader/fshader bound to each other? So, if I'm enabling my buffer on AttributeArray(1), am I forced to use layout location 1 in BOTH shaders?
You can bind another attribute location for sending color information to your fragment shader at any time, but let me show you another trick :)
I use 2 attribute locations, one to represent the position of the vertex and the other to represent the color of the vertex.
glBindAttribLocation(program_, 0, "vs_in_pos");
glBindAttribLocation(program_, 1, "vs_in_col");
This is my mesh definition, where Vertex contains two 3D vectors:
Vertex vertices[] = {
{glm::vec3(-1, -1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(1, -1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(-1, 1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(1, 1, 1), glm::vec3(1, 0, 0)},
{glm::vec3(-1, -1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(1, -1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(-1, 1, -1), glm::vec3(0, 1, 0)},
{glm::vec3(1, 1, -1), glm::vec3(0, 1, 0)},
};
GLushort indices[] = {
// Front
0, 1, 2, 2, 1, 3,
// Back
4, 6, 5, 6, 7, 5,
// Top
2, 3, 7, 2, 7, 6,
// Bottom
0, 5, 1, 0, 4, 5,
// Left
0, 2, 4, 4, 2, 6,
// Right
1, 5, 3, 5, 7, 3
};
This represents a cube. I will mix this pre-defined color with a calculated value, which means the color of the cube will change depending on its position. Set up a 3D vector for the RGB values and get its uniform location so it can be used in the fragment shader:
loc_col_ = glGetUniformLocation(program_, "color");
Now in my render function I place the cubes in a 2D circle, moving them, rotating them:
for (int i = 0; i < num_of_cubes_; ++i) {
double fi = 2 * PI * (i / (double) num_of_cubes_);
glm::mat4 position = glm::translate<float>(cubes_radius_ * cos(fi), cubes_radius_ * sin(fi), 0);
glm::mat4 crackle = glm::translate<float>(0, 0.1 * (sin(2 * PI * (SDL_GetTicks() / 500.0) + i)), 0);
glm::mat4 rotate = glm::rotate<float>(360 * (SDL_GetTicks() / 16000.0), 0, 0, 1);
world_ = position * crackle * rotate;
glm::vec3 color = glm::vec3((1 + cos(fi)) * 0.5, (1 + sin(fi)) * 0.5, 1 - ((1 + cos(fi)) * 0.5));
glUniformMatrix4fv(loc_world_, 1, GL_FALSE, &(world_[0][0]));
glUniform3fv(loc_col_, 1, &(color[0]));
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);
}
You can see here I send not only the world matrix, but the color vector as well.
Linear interpolation in the fragment shader is achieved by the mix() function:
#version 130
in vec3 vs_out_col;
in vec3 vs_out_pos;
out vec4 fs_out_col;
uniform vec3 color;
void main() {
fs_out_col = vec4(mix(color, vs_out_col, 0.5), 1);
}
color is the uniform value passed in the render loop, while vs_out_col comes from the vertex shader, having arrived there through "channel" 1.
I hope you can understand me.
Layout locations in vertex and fragment shaders are independent. Qt may be misleading with enableAttributeArray, because in OpenGL this function is called glEnableVertexAttribArray; vertex is the keyword here. So you can pass per-vertex data only into the vertex shader, and then pass it on to the fragment shader using in/out variables (interpolation).
If you want to use multiple outputs from the fragment shader, you have to use layout locations and output buffers.
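As a minimal sketch (assuming an FBO with two color attachments has been set up and selected with glDrawBuffers on the C++ side):
#version 330
uniform sampler2D tex;
in vec2 out_texcoord;
// location 0 writes to the first draw buffer (e.g. GL_COLOR_ATTACHMENT0),
// location 1 to the second (e.g. GL_COLOR_ATTACHMENT1)
layout(location = 0) out vec4 out_color;
layout(location = 1) out vec4 out_color2;
void main()
{
out_color = texture(tex, out_texcoord);
out_color2 = vec4(1.0, 1.0, 1.0, 1.0);
}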
This link should also be helpful, I'll summarize it later.
I'm trying to color single vertices of quads that are drawn through glDrawElements. I'm working with the cocos2d library, so I've been able to scavenge the source code to understand exactly what is happening; the code is the following:
glBindVertexArray( VAOname_ );
glDrawElements(GL_TRIANGLES, (GLsizei) n*6, GL_UNSIGNED_SHORT, (GLvoid*) (start*6*sizeof(indices_[0])) );
glBindVertexArray(0);
So vertex array objects are used. I'm trying to modify the color of single vertices of the objects that are passed, and it seems to work, but with a glitch described by the following image:
Here I tried to change the color of the lower-left and lower-right vertices. The results are different; I guess this is because the quad is rendered as a pair of triangles whose shared hypotenuse lies on the diagonal from the lower-left vertex to the upper-right vertex. That could cause the different result.
Now I would like to have the second result also for the first case. Is there a way to obtain it?
Your guess is right. The OpenGL driver tessellates your quad into two triangles, in which the vertex colours are interpolated barycentrically, which results in what you see.
The usual approach to solving this is to perform the interpolation "manually" in a fragment shader that takes the target topology into account, in your case a quad. In short, you have to perform barycentric interpolation based not on a triangle but on a quad. You might also want to apply perspective correction.
I don't have ready-to-read resources at hand right now, but I'll update this answer as soon as I do (which might actually mean I'll have to write them myself).
Update
First we must understand the problem: most OpenGL implementations break down higher primitives into triangles and render them in isolation, i.e. without further knowledge about the rest of the primitive, e.g. the quad it belongs to. So we have to do this ourselves.
This is how I'd do it.
#version 330 // vertex shader
Of course we also need the usual uniforms
uniform mat4x4 MV;
uniform mat4x4 P;
First we need the position of the vertex processed by this shader execution instance
layout (location=0) in vec3 pos;
Next we need some vertex attributes which we use to describe the quad itself. This means its corner positions
layout (location=1) in vec3 qp0;
layout (location=2) in vec3 qp1;
layout (location=3) in vec3 qp2;
layout (location=4) in vec3 qp3;
and colors
layout (location=5) in vec3 qc0;
layout (location=6) in vec3 qc1;
layout (location=7) in vec3 qc2;
layout (location=8) in vec3 qc3;
We put those into varyings for the fragment shader to process.
out vec3 position;
out vec3 qpos[4];
out vec3 qcolor[4];
void main()
{
qpos[0] = qp0;
qpos[1] = qp1;
qpos[2] = qp2;
qpos[3] = qp3;
qcolor[0] = qc0;
qcolor[1] = qc1;
qcolor[2] = qc2;
qcolor[3] = qc3;
position = pos;
gl_Position = P * MV * vec4(pos, 1);
}
In the fragment shader we use this to implement a distance weighting for the color components:
#version 330 // fragment shader
in vec3 position;
in vec3 qpos[4];
in vec3 qcolor[4];
out vec4 fragColor;
void main()
{
vec3 color = vec3(0);
The following could be simplified combinatorially, but for the sake of clarity I write it out:
For each corner point, mix its color with the colors of all the other corner points, using the projection of the position onto the edge between them as the mix factor.
for(int i=0; i < 4; i++) {
vec3 p = position - qpos[i];
for(int j=0; j < 4; j++) {
if (i == j) continue; // skip the degenerate zero-length edge from a corner to itself
vec3 edge = qpos[j] - qpos[i];
float edge_length = length(edge);
edge = normalize(edge);
float tau = dot(edge, p) / edge_length;
color += mix(qcolor[i], qcolor[j], tau);
}
}
Since we looked at each corner point 4 times, scale down by 1/4
color *= 0.25;
fragColor = vec4(color, 1); // and maybe other things.
}
We're almost done. On the client side we need to pass the additional information. Of course we don't want to duplicate data. For this we use glVertexBindingDivisor so that a vertex attribute advances only every 4 vertices (i.e. once per quad), on the qp… and qc… attributes, i.e. locations 1 to 8.
typedef float vec3[3];
extern vec3 *quad_position;
extern vec3 *quad_color;
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[0]);
glVertexBindingDivisor(1, 4);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[0]);
glVertexBindingDivisor(2, 4);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[1]);
glVertexBindingDivisor(3, 4);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[2]);
glVertexBindingDivisor(4, 4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 0, &quad_position[3]);
glVertexBindingDivisor(5, 4);
glVertexAttribPointer(5, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[0]);
glVertexBindingDivisor(6, 4);
glVertexAttribPointer(6, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[1]);
glVertexBindingDivisor(7, 4);
glVertexAttribPointer(7, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[2]);
glVertexBindingDivisor(8, 4);
glVertexAttribPointer(8, 3, GL_FLOAT, GL_FALSE, 0, &quad_color[3]);
It makes sense to put the above into a Vertex Array Object. Using a VBO would also make sense, but then you must calculate the offsets manually; due to the typedef float vec3[3] the compiler does the pointer math for us at the moment.
With all this set up, you can finally draw your quad independently of how it gets tessellated.
I'm trying to work out how I can achieve a palette swap using fragment shaders (looking at this post: https://gamedev.stackexchange.com/questions/43294/creating-a-retro-style-palette-swapping-effect-in-opengl). I am new to OpenGL, so I'd be glad if someone could explain my issue to me.
Here is the code snippet which I am trying to reproduce:
http://www.opengl.org/wiki/Common_Mistakes#Paletted_textures
I set up an OpenGL environment so that I can create a window, load textures and shaders, and render my single square, which is mapped to the corners of the window (when I resize the window, the image gets stretched too).
I am using the vertex shader to convert coordinates from screen space to texture space, so my texture is stretched too:
attribute vec2 position;
varying vec2 texcoord;
void main()
{
gl_Position = vec4(position, 0.0, 1.0);
texcoord = position * vec2(0.5) + vec2(0.5);
}
The fragment shader is
uniform float fade_factor;
uniform sampler2D textures[2];
varying vec2 texcoord;
void main()
{
vec4 index = texture2D(textures[0], texcoord);
vec4 texel = texture2D(textures[1], index.xy);
gl_FragColor = texel;
}
textures[0] is the indexed texture (the one I'm trying to colorize).
Every pixel has a color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255) - 8 colors total, that's why it looks almost black. I want to encode my colors using the value stored in the "red channel".
textures[1] is the table of colors (9x1 pixels, each pixel has a unique color; zoomed to 90x10 for posting).
So as you can see from the fragment shader excerpt, I want to read the index value from the first texture, for example (5, 0, 0, 255), and then look up the actual color value from the pixel stored at point (x=5, y=0) in the second texture. Same as written in the wiki.
But instead of a painted image I get:
Actually, I see that I can't access pixels from the second texture if I explicitly set the X coordinate to something like vec2(1, 0), vec2(2, 0), vec2(4, 0) or vec2(8, 0). But I can get colors when I use vec2(0.1, 0) or vec2(0.7, 0). I guess that happens because texture space is normalized from my 9x1 pixels to (0,0)->(1,1). But how can I "disable" that feature and simply load my palette texture so I could just ask "give me the color value of the pixel stored at (x,y), please"?
Every pixel has color value of (0, 0, 0, 255), (1, 0, 0, 255), (2, 0, 0, 255) ... (8, 0, 0, 255)
Wrong. Every pixel has the color values: (0, 0, 0, 1), (0.00392, 0, 0, 1), (0.00784, 0, 0, 1) ... (0.0313, 0, 0, 1).
Unless you're using integer or float textures (and you're not), your colors are stored as normalized floating point values. So what you think is "255" is really just "1.0" when you fetch it from the shader.
The correct way to handle this is to first transform the normalized values back into their non-normalized form. This is done by multiplying the value by 255. Then convert them into texture coordinates by dividing by the palette texture's width (- 1). Also, your palette texture should not be 2D:
#version 330 //Always include a version.
uniform float fade_factor;
uniform sampler2D palattedTexture;
uniform sampler1D palette;
in vec2 texcoord;
layout(location = 0) out vec4 outColor;
void main()
{
float paletteIndex = texture(palattedTexture, texcoord).r * 255.0;
outColor = texture(palette, paletteIndex / float(textureSize(palette, 0) - 1));
}
The above code is written for GLSL 3.30. If you're using earlier versions, translate it accordingly.
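For example, a rough pre-3.30 translation could look like this (an untested sketch; textureSize is not available there, so the width of the 9-entry palette is hard-coded):
#version 120
uniform sampler2D palattedTexture;
uniform sampler1D palette;
varying vec2 texcoord;
void main()
{
float paletteIndex = texture2D(palattedTexture, texcoord).r * 255.0;
gl_FragColor = texture1D(palette, paletteIndex / 8.0); // 9 entries -> width - 1 = 8
}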
Also, you shouldn't be using RGBA textures for your paletted texture. It's just one channel, so either use GL_LUMINANCE or GL_R8.
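For illustration, the texture setup this implies might look roughly like the following (a sketch with made-up variable names, assuming the index image is stored as one byte per pixel):
GLuint indexTex, paletteTex;
// index image: one byte per pixel, unfiltered so indices are not blended between texels
glGenTextures(1, &indexTex);
glBindTexture(GL_TEXTURE_2D, indexTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows may not be 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, indexPixels);
// palette: a 9x1 lookup table as a 1D texture, also unfiltered
glGenTextures(1, &paletteTex);
glBindTexture(GL_TEXTURE_1D, paletteTex);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 9, 0, GL_RGBA, GL_UNSIGNED_BYTE, paletteColors);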