I'm making a 2D game with large-pixel graphics. To achieve this effect I render all images to a framebuffer whose texture is 2 times smaller than my window, and then I render this texture to the window using a quad ({{-1,-1},{1,-1},{1,1},{-1,1}}).
This works fine, but the coordinate system when rendering to the texture is a bit strange. For example, when I use
glBegin(GL_POINTS);
glVertex2f(-0.75, -0.75);
glEnd();
It renders a 2x2 point. I would expect this point to be at (win_w * 1/8, win_h * 7/8), but this point is at (win_w * 1/4, win_h * 3/4).
If I change the framebuffer texture size from ((win_w + 1) / 2, (win_h + 1) / 2) (2 times smaller than my screen)
to ((win_w + 3) / 4, (win_h + 3) / 4) (4 times smaller than my screen), that point now has a 4x4 size and sits at (win_w * 1/2, win_h * 1/2) (the center of the window).
I think this is incorrect. AFAIK, the framebuffer coordinate system does not depend on the framebuffer texture size; (1, 1) is the top-right corner for any texture size, right?
There are no transformation matrices or anything like that, so OpenGL should not be transforming my coordinates.
I can still render with this strange coordinate system, but I don't understand why it works this way.
So, the question is: I want to render vertices at the same place inside the window for any framebuffer texture size. Is that possible? (I don't want to use transformation matrices inside the shaders, because it should work without them. I hope there is another solution.)
Shaders:
// Vertex:
#version 430
layout(location = 0) in vec2 pos;
out vec2 vPos;
void main()
{
    vPos = pos;
    gl_Position = vec4(pos.x, pos.y, 0, 1);
}
// Fragment:
#version 430
layout(location = 0) uniform sampler2D tex;
in vec2 vPos;
out vec4 color;
void main()
{
    color = texture(tex, (vPos + 1) / 2);
}
Problem solved (thanks to @RetoKoradi). Now my code looks like this:
glViewport(0, 0, 800, 600);
/// Switch shaders and framebuffer
DrawQuadWithTexture();
glViewport(0, 0, 400, 300);
/// Switch shaders and framebuffer
DrawAllStuff();
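The reason this works: the viewport transform is what maps NDC (the -1..1 range) to pixel coordinates, and it does not change automatically when you bind a different framebuffer; it stays at whatever glViewport last set, which is why only the lower-left quarter of what was drawn landed in the half-size texture. A minimal sketch of keeping the two in sync (the helper names here are made up):
void BindRenderTarget(GLuint fbo, int texWidth, int texHeight)
{
    // Whatever we render into next, the viewport must match its size.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, texWidth, texHeight);
}

void BindWindow(int winWidth, int winHeight)
{
    // Back to the default framebuffer: viewport must match the window again.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, winWidth, winHeight);
}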
I have access to a depth camera's output. I want to visualise this in OpenGL using a compute shader.
The depth feed is given as a frame, and I know the width and height ahead of time. How do I sample the texture and retrieve the depth value in the shader? Is this possible? I've read through the OpenGL types here and can't find anything on unsigned shorts, so I am starting to worry. Are there any workarounds?
My current compute shader:
#version 430
layout(local_size_x = 1, local_size_y = 1) in;
layout(rgba32f, binding = 0) uniform image2D img_output;
uniform float width;
uniform float height;
uniform sampler2D depth_feed;
void main() {
    // get index in the global work group, i.e. the x,y position
    vec2 sample_coords = ivec2(gl_GlobalInvocationID.xy) / vec2(width, height);
    float visibility = texture(depth_feed, sample_coords).r;
    vec4 pixel = vec4(1.0, 1.0, 0.0, visibility);
    // output to a specific pixel in the image
    imageStore(img_output, ivec2(gl_GlobalInvocationID.xy), pixel);
}
The depth texture definition is as follows:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, nullptr);
Currently my code produces a plain yellow screen.
If you use perspective projection, then the depth value is not linear. See LearnOpenGL - Depth testing.
If all the depth values are near 0.0, and you use the following expression:
vec4 pixel = vec4(vec3(visibility), 1.0);
then all the pixels appear almost black. Actually the pixels are not completely black, but the difference is barely noticeable.
This happens when the far plane is "too" far away. To verify that, you can raise 1.0 - visibility to a power, to make the different depth values recognizable. For instance:
float exponent = 5.0;
vec4 pixel = vec4(vec3(pow(1.0-visibility, exponent)), 1.0);
If you want a more sophisticated solution, you can linearize the depth values as explained in the answer to How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
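As a rough sketch of that linearization (assuming, as above, that the depth values came from a standard perspective projection, and using made-up uniform names for its near and far planes):
uniform float near;   // camera near plane used by the projection
uniform float far;    // camera far plane used by the projection

// d is the value read from the depth texture, in [0.0, 1.0]
float linearizeDepth(float d)
{
    float zNdc = 2.0 * d - 1.0;                                      // back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - zNdc * (far - near)); // eye-space distance
}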
Please note that for a satisfactory visualization you should use the entire range of the depth buffer ([0.0, 1.0]). The geometry must be between the near and far planes, but try to move the near and far planes as close to the geometry as possible.
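Regarding the unsigned shorts: a GL_DEPTH_COMPONENT16 texture is a normalized format, so a plain sampler2D already returns the stored 16-bit values as floats in [0.0, 1.0]; texture(depth_feed, ...).r in the compute shader is the right way to read them. Uploading each camera frame would look roughly like this (depthTexture and depthData are placeholder names for the texture created above and the raw unsigned short buffer from the camera):
glBindTexture(GL_TEXTURE_2D, depthTexture);
// Re-upload the latest frame into the already-allocated depth texture.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, depthData);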
I have a problem with rendering my quads in OpenGL: they look darker when translucency is applied if the camera is below a certain point. How can I fix this? The objects are lots of quads with tiny amounts of Z difference between them. I implemented rendering of translucent objects based on this page: http://www.alecjacobson.com/weblog/?p=2750
Render code:
double alpha_factor = 0.75;
double alpha_frac = (r_alpha - alpha_factor * r_alpha) / (1.0 - alpha_factor * r_alpha);
double prev_alpha = r_alpha;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
// quintuple pass to get the rendering of translucent objects, somewhat correct
// reverse render order for getting alpha going!
// 1st pass: only depth checks
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LESS);
r_alpha = 0;
// send alpha for each pass
// reverse order
drawobjects(RENDER_REVERSE);
// 2nd pass: guaranteed back face display with normal alpha
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 3rd pass: depth checked version of fraction of calculated alpha. (minus 1)
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
// 4th pass: same for back face
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 5th pass: just put out the entire thing now
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
glDisable(GL_BLEND);
r_alpha = prev_alpha;
GLSL shaders:
Vertex shader:
#version 330 core
layout(location = 0) in vec3 vPos_ModelSpace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in mat4 model_instance;
out vec2 UV;
out float alpha;
flat out uint alpha_mode;
// model + view + proj matrix
uniform mat4 proj;
uniform mat4 view;
uniform float v_alpha;
uniform uint v_alpha_mode;
void main() {
    gl_Position = proj * view * model_instance * vec4(vPos_ModelSpace, 1.0);
    // send to frag shader
    UV = vertexUV;
    alpha = v_alpha;
    alpha_mode = v_alpha_mode;
}
Fragment shader:
#version 330 core
// texture UV coordinate
in vec2 UV;
in float alpha;
flat in uint alpha_mode;
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D texSampler;
void main() {
    int amode = int(alpha_mode);
    color.rgb = texture(texSampler, UV).rgb;
    color.a = alpha;
    if (amode == 1)
        color.rgb *= alpha;
}
Image when problem happens:
Image comparison for how it should look regardless of my position:
The reason it fades away in the center is that you are looking at the infinitely thin sides of the planes, so they disappear. As for the brightness change between top and bottom, it comes from how your passes treat surface normals: the dark planes are the ones whose normals face away from the camera, with no camera-facing planes in front of them to lighten them up.
It looks like you are rendering many translucent planes in a cube to estimate a volume. Here is a simple example of volume rendering: https://www.shadertoy.com/view/lsG3D3
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch39.html is a fantastic resource. It explains different ways to render volumes and shows how awesome they can be. For reference, that last example used a sphere as proxy geometry to raymarch a volume fractal.
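For a flavour of what those resources describe, a heavily simplified front-to-back raymarch over a 3D texture looks something like the fragment-shader sketch below (the volume sampler, step count, and ray setup are all assumptions, not code from the question):
uniform sampler3D volume;      // 3D texture holding the voxel data
const int   STEPS    = 128;    // number of samples along the ray
const float stepSize = 1.0 / float(STEPS);

// rayOrigin and rayDir are assumed to be in [0,1]^3 texture space
vec4 raymarch(vec3 rayOrigin, vec3 rayDir)
{
    vec4 accum = vec4(0.0);
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = rayOrigin + rayDir * (float(i) * stepSize);
        vec4 s = texture(volume, p);
        accum.rgb += (1.0 - accum.a) * s.a * s.rgb;   // front-to-back compositing
        accum.a   += (1.0 - accum.a) * s.a;
        if (accum.a > 0.99) break;                    // nearly opaque: stop early
    }
    return accum;
}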
Happy coding!
I'm writing an OpenGL library that can draw points, lines, and rectangles using screen coordinates. However, I do not know how to convert screen coordinates to clip or camera coordinates. I am using modern OpenGL (vertex arrays and vertex buffers, as well as shaders).
This is basically what I'm working towards:
DrawPoint(10, 10, 5); // draws a point at pixel 10, 10 with a radius of 5
The same concept for drawing lines and rectangles.
Also, I'm not providing code because that isn't what I'm looking for; I'm looking for concepts and math.
What you probably want is an orthographic projection matrix. This code goes in your draw loop:
int width = getFramebufferWidth();
int height = getFramebufferHeight();
glm::mat4 mvp = glm::ortho(0.0f, (float)width, 0.0f, (float)height);
glUniformMatrix4fv(glGetUniformLocation(program, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
glViewport(0, 0, width, height);
//Draw the Objects, clear the screen, whatever it is you need to do.
Then, in your Vertex Shader, you'll have something like this:
#version 330
layout(location = 0) in vec2 position;
uniform mat4 mvp;
void main() {
    gl_Position = mvp * vec4(position, 0, 1);
}
Then, when you specify something to be drawn at position <10, 10>, it'll be drawn at exactly that position.
This code uses GLM to build the matrix in question.
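To connect this back to the DrawPoint(10, 10, 5) call from the question, here is a rough sketch (pointVao and pointVbo are assumed to have been created once elsewhere, with a single vec2 attribute at location 0, and the point is drawn as a square GL point rather than true circle geometry):
void DrawPoint(float x, float y, float radius)
{
    // Thanks to the ortho matrix, these are already pixel coordinates.
    float vertex[2] = { x, y };
    glBindBuffer(GL_ARRAY_BUFFER, pointVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertex), vertex);
    glPointSize(radius * 2.0f);          // point size is a diameter in pixels
    glBindVertexArray(pointVao);
    glDrawArrays(GL_POINTS, 0, 1);
}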
I'm using GLSL to draw sprites from a sprite sheet. I'm using jME 3, but the differences are small and concern only deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). In the following test case sprite-sheet, and using the previous bounds, only the green part of the sprite-sheet would be drawn.
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100: Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;
varying vec4 texture_coordinate;
void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;
void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color, inputted by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
For the sampler2D the texture coordinates are normalized so that the leftmost and bottom-most coordinates are 0, and the rightmost and topmost are 1. So for your example of a 300-pixel-wide texture, the green section would be between 1/3rd and 2/3rds the width of the texture.
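For the VBO case, the per-vertex texture coordinates can be computed from the sprite's pixel rectangle inside the sheet. A small sketch (the function and parameter names are made up); for the (100, 0)-(200, 100) region of a 300x100 sheet this gives u in [1/3, 2/3] and v in [0, 1]:
// Fills uv with 4 corners (lower-left, lower-right, upper-right, upper-left),
// given the sprite's pixel bounds inside the sheet.
void SpriteUVs(float sheetW, float sheetH,
               float x0, float y0, float x1, float y1,
               float uv[8])
{
    float u0 = x0 / sheetW, u1 = x1 / sheetW;
    float v0 = y0 / sheetH, v1 = y1 / sheetH;
    uv[0] = u0; uv[1] = v0;   // lower-left
    uv[2] = u1; uv[3] = v0;   // lower-right
    uv[4] = u1; uv[5] = v1;   // upper-right
    uv[6] = u0; uv[7] = v1;   // upper-left
}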
I'm trying to use a 3D texture in OpenGL to implement volume rendering. Each voxel has an RGBA colour value and is currently rendered as a screen-facing quad (for testing purposes). I just can't seem to get the sampler to give me a colour value in the shader; the quads always end up black. When I change the shader to generate a colour (based on the xyz coords), it works fine. I'm loading the texture with the following code:
glGenTextures(1, &tex3D);
glBindTexture(GL_TEXTURE_3D, tex3D);
unsigned int colours[8];
colours[0] = Colour::AsBytes<unsigned int>(Colour::Blue);
colours[1] = Colour::AsBytes<unsigned int>(Colour::Red);
colours[2] = Colour::AsBytes<unsigned int>(Colour::Green);
colours[3] = Colour::AsBytes<unsigned int>(Colour::Magenta);
colours[4] = Colour::AsBytes<unsigned int>(Colour::Cyan);
colours[5] = Colour::AsBytes<unsigned int>(Colour::Yellow);
colours[6] = Colour::AsBytes<unsigned int>(Colour::White);
colours[7] = Colour::AsBytes<unsigned int>(Colour::Black);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA, 2, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, colours);
The colours array contains the correct data, i.e. the first four bytes have values 0, 0, 255, 255 for blue. Before rendering I bind the texture to the 2nd texture unit like so:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, tex3D);
And render with the following code:
shaders["DVR"]->Use();
shaders["DVR"]->Uniforms["volTex"].SetValue(1);
shaders["DVR"]->Uniforms["World"].SetValue(Mat4(vl_one));
shaders["DVR"]->Uniforms["viewProj"].SetValue(cam->GetViewTransform() * cam->GetProjectionMatrix());
QuadDrawer::DrawQuads(8);
I have used these classes for setting shader params before and they work fine. The QuadDrawer draws eight instanced quads. The vertex shader code looks like this:
#version 330
layout(location = 0) in vec2 position;
layout(location = 1) in vec2 texCoord;
uniform sampler3D volTex;
ivec3 size = ivec3(2, 2, 2);
uniform mat4 World;
uniform mat4 viewProj;
smooth out vec4 colour;
void main()
{
    vec3 texCoord3D;
    int num = gl_InstanceID;
    texCoord3D.x = num % size.x;
    texCoord3D.y = (num / size.x) % size.y;
    texCoord3D.z = (num / (size.x * size.y));
    texCoord3D /= size;
    texCoord3D *= 2.0;
    texCoord3D -= 1.0;
    colour = texture(volTex, texCoord3D);
    //colour = vec4(texCoord3D, 1.0);
    gl_Position = viewProj * World * vec4(texCoord3D, 1.0) + (vec4(position.x, position.y, 0.0, 0.0) * 0.05);
}
Uncommenting the line where I set the colour value equal to the texcoord works fine and makes the quads coloured. The fragment shader is simply:
#version 330
smooth in vec4 colour;
out vec4 outColour;
void main()
{
    outColour = colour;
}
So my question is, what am I doing wrong, why is the sampler not getting any colour values from the 3d texture?
[EDIT]
Figured it out but can't self answer (new user):
As soon as I posted this I figured it out, so I'll put the answer up to help anyone else (it's not specifically a 3D texture issue, and I've also fallen afoul of it before, d'oh!). I didn't generate mipmaps for the texture, and the minification/magnification filters weren't set to GL_LINEAR or GL_NEAREST (the default minification filter needs mipmaps). Boom! No textures. The same thing happens with 2D textures.
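Concretely, the fix looks something like this, applied right after creating the texture from the question (either set non-mipmap filters, or generate the mipmaps the default minification filter expects):
glBindTexture(GL_TEXTURE_3D, tex3D);
// Option 1: filters that do not need mipmaps
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Option 2: keep the default mipmapped minification filter and actually provide mipmaps
// glGenerateMipmap(GL_TEXTURE_3D);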