This code only renders a dodecahedron and completely ignores the glBegin(GL_TRIANGLES) block:
glutSolidDodecahedron();
glBegin(GL_TRIANGLES);
glNormal3f(1, 0, 0);
glVertex3f(11, 0, 0);
glNormal3f(0, 1, 1);
glVertex3f(-11, 0, 0);
glNormal3f(0, 0, 1);
glVertex3f(0, 0, 11);
glEnd();
The two shaders are quite simplistic:
the vertex shader:
varying vec3 normal;
void main()
{
gl_Position = ftransform();
gl_FrontColor = gl_Color;
gl_BackColor = gl_Color;
normal = gl_Normal;
normal = gl_NormalMatrix * normal;
}
and the fragment shader:
uniform vec3 lightDir;
varying vec3 normal;
void main()
{
float intensity = dot(lightDir, normal);
gl_FragColor = 0.5 * (1.5 + intensity) * gl_Color;
}
While the glutSolidX family of functions works well with this example (based on the Lighthouse3D tutorial), how can one quickly draw triangles whose coordinates change from frame to frame? I tried vertex arrays and GL_DYNAMIC_DRAW, but that is a lot of work compared to the old fixed-pipeline approach. I have seen other people successfully use glBegin(); glEnd(); blocks together with GLSL shaders, so it must be possible. What could be missing?
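For reference, the GL_DYNAMIC_DRAW version I tried looked roughly like this (a sketch only; the handle names are illustrative and normals are omitted for brevity):
GLfloat verts[9] = { 11, 0, 0,   -11, 0, 0,   0, 0, 11 };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), NULL, GL_DYNAMIC_DRAW); // allocate once
// every frame, after updating verts on the CPU:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(verts), verts);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);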
The coordinates of the vertices of the triangle in the glBegin/glEnd block are
11, 0, 0
-11, 0, 0
0, 0, 11
which means the triangle lies entirely in the y = 0 plane and, in this view, is seen edge-on. It is like looking at a sheet of paper from such a steep angle that it becomes a line; because a triangle has no thickness, not even that line is rasterized, so the triangle appears invisible.
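For example, a minimal tweak of the block above (assuming the same default Lighthouse3D camera) that moves the third vertex out of the y = 0 plane makes the triangle show up:
glBegin(GL_TRIANGLES);
glNormal3f(1, 0, 0);
glVertex3f(11, 0, 0);
glNormal3f(0, 1, 1);
glVertex3f(-11, 0, 0);
glNormal3f(0, 0, 1);
glVertex3f(0, 11, 0); // y = 11 instead of z = 11, so the triangle is no longer edge-on
glEnd();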
uniform sampler2D textureY;
uniform sampler2D textureUV;
uniform int rect_width;
layout(origin_upper_left) in vec4 gl_FragCoord;
in vec2 textureOut; // interpolated texture coordinate from the vertex shader (assumed; not shown in the post)
void main(void)
{
vec3 yuv;
vec3 rgb;
yuv.x = texture2D(textureY, textureOut.st).r - 0.0625;
yuv.y = texture2D(textureUV, textureOut.st).r - 0.5;
yuv.z = texture2D(textureUV, textureOut.st).g - 0.5;
rgb = mat3( 1, 1, 1,
0, -0.39465, 2.03211,
1.13983, -0.58060, 0) * yuv;
gl_FragColor = vec4(rgb, 1); // this works, the output is the correct texture
vec3 green = vec3(0, 255, 0);
//vec3 green = vec3(0, 0.5, 0); // doesn't help either
//vec3 green = vec3(0, 100.5, 0); // doesn't help either
//vec3 green = vec3(0, 100.5f, 0); // gives a warning, doesn't help either
int width = rect_width;
gl_FragColor = (green, 1); // whole screen becomes white
if(TL;DR) gl_FragColor = (green, 1); // creates the desired rectangle, but white instead of green
}
OpenGL 3.3, with Qt if it matters. I can draw the texture normally, but when I set the fragment color explicitly it always comes out white, no matter which parameters I use. I tried "vec3 green = vec3(0, 0.5, 0); gl_FragColor = (green, 1);", "vec3 green = vec3(0, 255, 0); gl_FragColor = (green, 1);" and other variants, but they just don't work. I can even output an unfilled rectangle, but only in white.
gl_FragColor is a vec4, so you need to construct a vec4 explicitly. Also, colors are normalized, which means each component is in the range 0.0 to 1.0.
This should work:
vec3 green = vec3(0, 1.0f, 0);
gl_FragColor = vec4(green, 1);
This is the comma operator (https://en.wikipedia.org/wiki/Comma_operator). The expression (green, 1) evaluates to just 1, so your code is effectively gl_FragColor = 1;. If you want a vector-typed expression, you must use a vector constructor, e.g. vec4(green, 1).
From your code:
gl_FragColor = (green, 1);//whole sceen became white
That assignment uses the comma operator, which makes the expression evaluate to 1, so you are essentially assigning 1 to gl_FragColor.
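The same comma-operator behaviour exists in plain C/C++, whose expression grammar GLSL follows, so a quick host-side check makes the pitfall obvious:
// C/C++ analogue of the GLSL pitfall: a parenthesized comma expression
// evaluates both operands but yields only the last one.
float f = (3.0f, 1.0f);   // f == 1.0f; the 3.0f is evaluated and discarded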
According to the GLSL 1.50 specification (the GLSL version that accompanies OpenGL 3.2), gl_FragColor is declared as
out vec4 gl_FragColor; // deprecated
and an implicit conversion from your int (1) to vec4 is not in the list of allowed implicit conversions.
So that issue needs to be sorted out first. An example solution:
gl_FragColor = vec4(green.rgb, 1.0);
The next problem is the value of green. OpenGL expects gl_FragColor to contain normalized color values, i.e. components in the range 0.0 to 1.0 (with a standard fixed-point framebuffer, anything outside that range is clamped).
Changing green to satisfy these requirements would result in something like this:
vec3 green = vec3(0.0, 1.0, 0.0);
gl_FragColor = vec4(green.rgb, 1.0);
Note: Be wary of types when assigning values. Some graphics drivers let you assign e.g. 1 or a vec3 to gl_FragColor, while others adhere more closely to the specification and simply return an error when compiling the shader. The allowed implicit conversions are listed in the GLSL specification referenced above.
What I want to achieve is to render many small quads with the OpenGL function glDrawArraysInstanced, with the same spacing between them. For example, please refer to the following image:
The code is as follows:
void OpenGLShowVideo::displayBySmallMatrix()
{
// Now use QOpenGLExtraFunctions instead of QOpenGLFunctions as we want to
// do more than what GL(ES) 2.0 offers.
QOpenGLExtraFunctions *f = QOpenGLContext::currentContext()->extraFunctions();
f->glClearColor(9.f/255.0f, 14.f/255.0f, 15.f/255.0f, 1);
glClear(GL_COLOR_BUFFER_BIT);
f->glViewport(0, 0, this->width(), this->height());
m_displayByMatrixProgram->bind();
f->glActiveTexture(GL_TEXTURE0 + m_acRenderToScreenTexUnit);
f->glBindTexture(GL_TEXTURE_2D, m_renderWithMaskFbo->texture());
if (m_uniformsDirty) {
m_uniformsDirty = false;
m_displayByMatrixProgram->setUniformValue(m_samplerLoc, m_acRenderToScreenTexUnit);
m_proj.setToIdentity();
m_proj.perspective(INIT_VERTICAL_ANGLE, float(this->width()) / float(this->height()), m_fNearPlane, m_fFarPlane);
m_displayByMatrixProgram->setUniformValue(m_projMatrixLoc, m_proj);
QMatrix4x4 camera;
camera.lookAt(m_eye, m_eye + m_target, QVector3D(0, 1, 0));
m_displayByMatrixProgram->setUniformValue(m_camMatrixLoc, camera);
m_world.setToIdentity();
float fOffsetZ = m_fVerticalAngle / INIT_VERTICAL_ANGLE;
m_world.translate(m_fMatrixOffsetX, m_fMatrixOffsetY, fOffsetZ);
m_proj.scale(MATRIX_INIT_SCALE_X, MATRIX_INIT_SCALE_Y, 1.0f);
m_world.rotate(180, 1, 0, 0);
QMatrix4x4 wm = m_world;
m_displayByMatrixProgram->setUniformValue(m_worldMatrixLoc, wm);
QMatrix4x4 mm;
mm.setToIdentity();
m_displayByMatrixProgram->setUniformValue(m_myMatrixLoc, mm);
m_displayByMatrixProgram->setUniformValue(m_lightPosLoc, QVector3D(0, 0, 70));
QSize tmpSize = QSize(m_viewPortWidth, m_viewPortHeight);
m_displayByMatrixProgram->setUniformValue(m_resolutionLoc, tmpSize);
int whRatioVal = m_viewPortWidth / m_viewPortHeight;
m_displayByMatrixProgram->setUniformValue(m_whRatioLoc, whRatioVal);
}
m_geometries->bindBufferForArraysInstancedDraw();
f->glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, m_viewPortWidth * m_viewPortHeight);
}
And the vertex shader code is as follows:
#version 330
layout(location = 0) in vec4 vertex;
out vec3 color;
uniform mat4 mvp_matrix;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform vec2 viewResolution;
uniform int whRatio;
uniform sampler2D sampler;
void main() {
int posX = gl_InstanceID % int(viewResolution.x);
int posY = gl_InstanceID / int(viewResolution.y);
if( posY % whRatio < whRatio) {
posY = gl_InstanceID / int(viewResolution.x);
}
ivec2 pos = ivec2(posX, posY);
vec2 t = vec2( pos.x * 3.0, pos.y * 3.0 );
mat4 wm = mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, 1, 1) * worldMatrix;
color = texelFetch(sampler,pos,0).rgb;
gl_Position = projMatrix * camMatrix * wm * vertex;
}
And the fragment shader is as follows:
#version 330 core
in vec3 color;
out vec4 fragColor;
void main() {
fragColor = vec4(color, 1.0);
}
However, when I move the camera away from the screen (by changing the m_eye parameter of camera.lookAt(m_eye, m_eye + m_target, QVector3D(0, 1, 0))), I get something like this:
The spacing between the quads becomes uneven, and the quad sizes differ as well. But when I move the camera closer to the screen, it looks much better.
I think what you're seeing there is the result of rounding the coordinates to the nearest integer pixel coordinate.
To get something that looks more even, you want to use some form of anti-aliasing. The options that spring to mind are:
Enable some sort of full-screen anti-aliasing such as MSAA. This is simple to enable (see the sketch after this list), but can have a significant performance cost.
Put your pattern in a texture, and tile that texture over a single quad. Texture filtering and mip maps should take care of the anti-aliasing for you, and it will probably be faster to render that way as well because you only need a single quad.
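For the first option, a minimal sketch (assuming a Qt-based setup like the one in the question) is to request a multisampled surface format before the first GL context is created:
#include <QSurfaceFormat>
// Request 4x MSAA for every surface created afterwards; call this early, e.g. at the top of main().
QSurfaceFormat fmt = QSurfaceFormat::defaultFormat();
fmt.setVersion(3, 3);
fmt.setSamples(4);               // 4 samples per pixel
QSurfaceFormat::setDefaultFormat(fmt);
// Once a context is current, multisampling can additionally be toggled with glEnable(GL_MULTISAMPLE).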
I have an image that I am loading using the Slick library, and the image renders fine without my shader active. When I use my shader to overlay a transparent color over the image the entire image is replaced by the transparent color.
Without the shader
With the shader
Vertex Shader
varying vec4 vertColor;
void main(){
vec4 posMat = gl_Vertex;
gl_Position = gl_ModelViewProjectionMatrix * posMat;
vertColor = vec4(0.5, 1.0, 1.0, 0.2);
}
Fragment Shader
varying vec4 vertColor;
void main(){
gl_FragColor = vertColor;
}
Sprite Rendering Code
Color.white.bind();
GL11.glBindTexture(GL11.GL_TEXTURE, image.getTextureID());
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(this.x, this.y);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(x + w, y);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(x, y + h);
GL11.glEnd();
GL11.glBindTexture(GL11.GL_TEXTURE, 0);
}
OpenGL Initialization
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, Screen.getW(), Screen.getH(), 0, -1, 1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
a) vertColor = vec4(0.5, 1.0, 1.0, 0.2);
b) gl_FragColor = vertColor;
the shader does exactly what you asked of it: it sets the color of every fragment to that color. If you want to blend colors, you should add/multiply them in the shader in some fashion (e.g. have a color attribute and/or a texture sampler and, after passing the attribute from the vertex shader to the fragment shader, use gl_FragColor = vertexColor * textureColor * blendColor; etc.).
Also note: you're mixing the fixed-function pipeline and immediate mode (glBegin/glEnd) with shaders; that's not a good idea. I also don't see where your uniforms are set; using shaders without setting their uniforms is asking for trouble.
IMO the best solution would be either to use modern OpenGL (>= 3.1) with proper, compliant shaders, or to stick to the fixed-function pipeline without shaders on legacy OpenGL.
As to how to load a texture with GLSL: (see https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/texturing.php for more info if needed)
a) you feed the data to the GPU by creating a texture and binding it to a texture unit by calling
GLuint id;
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D( ... );
// see https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml for details
(which I suppose you've already done, since you're already calling glBindTexture with your image's texture ID)
b) you provide UV texture coordinates for your geometry; you're already doing that by supplying glTexCoord2f, which will probably let you use the legacy attribute names as in https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php, but the proper way would be to pass them as part of a packed vertex attribute structure;
c) you use the bound texture by sampling the texture in the shader, e.g. (legacy GLSL follows)
// vertex shader
varying vec2 vTexCoord;
void main() {
vTexCoord = gl_MultiTexCoord0.st; // gl_MultiTexCoord0 is a vec4; take only its st components
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// fragment shader
uniform sampler2D texture;
varying vec2 vTexCoord;
void main() {
vec4 colorMultiplier = vec4(0.5, 1.0, 1.0, 0.2);
gl_FragColor = texture2D(texture, vTexCoord) * colorMultiplier;
}
Still, if you intend to change it at runtime, it would be best to pass colorMultiplier in as a uniform.
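Host-side, that amounts to declaring uniform vec4 colorMultiplier; in the fragment shader and setting it before drawing, roughly like this (a sketch in C-style GL; the LWJGL equivalents live in GL20, and the program handle name is illustrative):
// Set the tint once per frame (or whenever it changes) on the bound program.
glUseProgram(program);                                            // 'program' is your linked shader program
GLint tintLoc = glGetUniformLocation(program, "colorMultiplier");
glUniform4f(tintLoc, 0.5f, 1.0f, 1.0f, 0.2f);                     // same tint as the hard-coded value above
// The sampler uniform should be set as well, matching the texture unit the texture is bound to:
glUniform1i(glGetUniformLocation(program, "texture"), 0);         // texture unit 0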
I implemented a new rendering pipeline in my engine and rendering is broken now. When I directly draw a texture of the G-Buffer to the screen, it shows up correctly, so the G-Buffer is fine. But somehow the lighting pass causes trouble. Even if I don't use its resulting texture at all and instead display the albedo from the G-Buffer after the lighting pass has run, it shows a solid gray color.
I can't explain this behavior and the strange thing is that there are no OpenGL errors at any point.
Vertex Shader to draw a fullscreen quad.
#version 330
in vec4 vertex;
out vec2 coord;
void main()
{
coord = vertex.xy;
gl_Position = vertex * 2.0 - 1.0;
}
Fragment Shader for lighting.
#version 330
in vec2 coord;
out vec3 image;
uniform int type = 0;
uniform sampler2D positions;
uniform sampler2D normals;
uniform vec3 light;
uniform vec3 color;
uniform float radius;
uniform float intensity = 1.0;
void main()
{
if(type == 0) // directional light
{
vec3 normal = texture2D(normals, coord).xyz;
float fraction = max(dot(normalize(light), normal) / 2.0 + 0.5, 0);
image = intensity * color * fraction;
}
else if(type == 1) // point light
{
vec3 pixel = texture2D(positions, coord).xyz;
vec3 normal = texture2D(normals, coord).xyz;
float dist = max(distance(pixel, light), 1);
float magnitude = 1 / pow(dist / radius + 1, 2);
float cutoff = 0.4;
float attenuation = clamp((magnitude - cutoff) / (1 - cutoff), 0, 1);
float fraction = clamp(dot(normalize(light - pixel), normal), -1, 1);
image = intensity * color * attenuation * max(fraction, 0.2);
}
}
Targets and samplers for the lighting pass. Texture IDs are mapped to attachments and to shader sampler names, respectively.
unordered_map<GLenum, GLuint> targets;
targets.insert(make_pair(GL_COLOR_ATTACHMENT2, ...)); // light
targets.insert(make_pair(GL_DEPTH_STENCIL_ATTACHMENT, ...)); // depth and stencil
unordered_map<string, GLuint> samplers;
samplers.insert(make_pair("positions", ...)); // positions from G-Buffer
samplers.insert(make_pair("normals", ...)); // normals from G-Buffer
Draw function for lighting pass.
void DrawLights(unordered_map<string, GLuint> Samplers, GLuint Program)
{
auto lis = Entity->Get<Light>();
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(Program);
int n = 0; for(auto i : Samplers)
{
glActiveTexture(GL_TEXTURE0 + n);
glBindTexture(GL_TEXTURE_2D, i.second);
glUniform1i(glGetUniformLocation(Program, i.first.c_str()), n);
n++;
}
mat4 view = Entity->Get<Camera>(*Global->Get<unsigned int>("camera"))->View;
for(auto i : lis)
{
int type = i.second->Type == Light::DIRECTIONAL ? 0 : 1;
vec3 pos = vec3(view * vec4(Entity->Get<Form>(i.first)->Position(), !type ? 0 : 1));
glUniform1i(glGetUniformLocation(Program, "type"), type);
glUniform3f(glGetUniformLocation(Program, "light"), pos.x, pos.y, pos.z);
glUniform3f(glGetUniformLocation(Program, "color"), i.second->Color.x, i.second->Color.y, i.second->Color.z);
glUniform1f(glGetUniformLocation(Program, "radius"), i.second->Radius);
glUniform1f(glGetUniformLocation(Program, "intensity"), i.second->Intensity);
glBegin(GL_QUADS);
glVertex2i(0, 0);
glVertex2i(1, 0);
glVertex2i(1, 1);
glVertex2i(0, 1);
glEnd();
}
glDisable(GL_BLEND);
glActiveTexture(GL_TEXTURE0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
}
I found the error, and it was a stupid one. The old rendering pipeline bound the correct framebuffer before calling the draw function of each pass, but the new one doesn't, so each draw function has to do that itself. I therefore went to update all the draw functions, but missed the one for the lighting pass.
As a result, the G-Buffer's framebuffer was still bound, and the lighting pass rendered into the G-Buffer's attachments instead of its own targets.
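In code, the fix boils down to binding the lighting pass's own framebuffer at the start of its draw function, roughly like this (the FBO handle name is illustrative):
void DrawLights(unordered_map<string, GLuint> Samplers, GLuint Program)
{
    // New: make this pass's own FBO the render target, so the G-Buffer's
    // framebuffer from the previous pass is no longer written to.
    glBindFramebuffer(GL_FRAMEBUFFER, lightingFramebuffer); // illustrative handle
    auto lis = Entity->Get<Light>();
    glClear(GL_COLOR_BUFFER_BIT);
    // ... rest of the function unchanged ...
}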
Thanks anyway, guys; you had no chance of finding that error, since I hadn't posted my complete pipeline system.
I am new to shader programming. I am trying to draw a circle with GLSL. I used a single point with a point size and tried to filter out the fragments outside the radius (by altering the alpha value).
The code is as follows:
Fragment Shader:
#version 130
varying vec2 textureCoordinate;
const float circleBorderWidth = 0.08;//for anti aliasing
void main() {
float d = smoothstep(circleBorderWidth,0.1, 1.0-length(textureCoordinate));
gl_FragColor = vec4(0.0, 1.0, 0.0, d);
}
Vertex Shader:
#version 130
attribute vec4 coord3d;
attribute vec2 varPos;
varying vec2 textureCoordinate;
void
main()
{
textureCoordinate = varPos;
gl_FrontColor = gl_Color;
gl_Position = vec4(coord3d.xyz,1.);
gl_PointSize = coord3d.w;
}
Data:
float pos[] = {
-1, -1,
-1, 1,
1, 1,
1, -1,
};
float vertices[]={0.0,0.0f,0.0f,100.0f};
Draw Method:
void drawScene() {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
program->makeCurrent();
glEnable(GL_POINT_SMOOTH);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
glEnable(GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
if (varPosAttrib>=0) {
glVertexAttribPointer( varPosAttrib, 2, GL_FLOAT, GL_FALSE,
0, pos ); // -->varPos in Vertex Shader.
glEnableVertexAttribArray( varPosAttrib );
}
if (posAttrib>=0) {
glVertexAttribPointer(posAttrib, 4, GL_FLOAT, GL_FALSE, 0, vertices); // -->coord3d in vertex shader
glEnableVertexAttribArray(posAttrib);
glDrawArrays(GL_POINTS, 0, 1);
}
glDisable(GL_POINT_SMOOTH);
glDisable(GL_VERTEX_PROGRAM_POINT_SIZE);
glDisable(GL_BLEND);
program->release();
glutSwapBuffers(); //Send the 3D scene to the screen
}
This results in drawing a square if I replace d with 1.0 in the following line (in the fragment shader):
gl_FragColor = vec4(0.0, 1.0, 0.0, d); // -> if d is replaced by 1.0
I tried replacing the x and y values in gl_FragColor with textureCoordinate.x and textureCoordinate.y. The result was black (so I assume the values are 0.0). The thing I don't understand is that the length of textureCoordinate is always 1.0 (I experimented by writing that value into gl_FragColor). I am unable to figure out what I am doing wrong here. I was expecting textureCoordinate to be interpolated with respect to the passed-in data (varPos).
Here's my current attempt at it. It works, in the sense that it draws a disc with a smooth border. I use a distance-field approach, i.e. I compute the distance from the disc's border.
Fragment shader
#version 110
varying vec2 uv;
void
main() {
float border = 0.01;
float radius = 0.5;
vec4 color0 = vec4(0.0, 0.0, 0.0, 1.0);
vec4 color1 = vec4(1.0, 1.0, 1.0, 1.0);
vec2 m = uv - vec2(0.5, 0.5);
float dist = radius - sqrt(m.x * m.x + m.y * m.y);
float t = 0.0;
if (dist > border)
t = 1.0;
else if (dist > 0.0)
t = dist / border;
gl_FragColor = mix(color0, color1, t);
}
Vertex shader
#version 110
varying vec2 uv;
void
main() {
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
uv = vec2(gl_MultiTexCoord0);
}
It's meant to be drawn on a quad with texture coordinates running from (0, 0) to (1, 1), so that uv - vec2(0.5, 0.5) is centered on the quad.
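For completeness, a legacy immediate-mode sketch of such a quad (the positions are up to you; only the 0-to-1 texture coordinates matter for the shader above):
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 0.5f,  0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-0.5f,  0.5f);
glEnd();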