OpenGL Fragment Shaders - Changing a fixed color

At the moment I have a simple fragment shader which returns one color (red). If I want to change it to a different RGBA color from C code, how should I do that?
Is it possible to change an attribute within the fragment shader from C directly, or should I be changing a solid-color attribute in my vertex shader and then passing that color to the fragment shader? I'm drawing single solid-color rectangles - nothing special.
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

If you are talking about generating the shader source at runtime, then you could use the C string-formatting functions to insert the color into the "gl_FragColor..." line.
I would not recommend doing this, since it is unnecessary work. The standard way to do this is with a uniform, like so:
// fragment shader:
uniform vec3 my_color; // a uniform, set from application code
void main()
{
    gl_FragColor.rgb = my_color;
    gl_FragColor.a = 1.0; // the alpha component
}
// your rendering code:
glUseProgram(SHADER_ID);
....
GLint color_location = glGetUniformLocation(SHADER_ID, "my_color");
float color[3] = {r, g, b};
glUniform3fv(color_location, 1, color);
....
glDrawArrays(....);
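Since the question asks about a full RGBA color: the same approach extends to a vec4 uniform. A minimal sketch along the same lines (the names here are placeholders, not from the original code):
// fragment shader variant with alpha:
uniform vec4 my_color;
void main()
{
    gl_FragColor = my_color;
}
// rendering code:
GLint color_location = glGetUniformLocation(SHADER_ID, "my_color");
float rgba[4] = {r, g, b, a};
glUniform4fv(color_location, 1, rgba);
Remember that glUniform* calls affect the currently bound program, so call glUseProgram first.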

Related

Overlaying a transparent color over a Texture with GLSL

I have an image that I am loading using the Slick library, and the image renders fine without my shader active. When I use my shader to overlay a transparent color over the image, the entire image is replaced by the transparent color.
Vertex Shader
varying vec4 vertColor;
void main(){
    vec4 posMat = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * posMat;
    vertColor = vec4(0.5, 1.0, 1.0, 0.2);
}
Fragment Shader
varying vec4 vertColor;
void main(){
    gl_FragColor = vertColor;
}
Sprite Rendering Code
Color.white.bind();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, image.getTextureID()); // GL_TEXTURE_2D is the correct target for glBindTexture
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0);
GL11.glVertex2f(this.x, this.y);
GL11.glTexCoord2f(1, 0);
GL11.glVertex2f(x + w, y);
GL11.glTexCoord2f(1, 1);
GL11.glVertex2f(x + w, y + h);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(x, y + h);
GL11.glEnd();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, 0);
}
OpenGL Initialization
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, Screen.getW(), Screen.getH(), 0, -1, 1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_BLEND);
GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);
Given
a) vertColor = vec4(0.5, 1.0, 1.0, 0.2);
b) gl_FragColor = vertColor;
the shader does exactly what you asked of it - it sets the color of every fragment to that constant color. If you want to blend colors, you should add/multiply them in the shader in some fashion (e.g. have a color attribute and/or a texture sampler, and then, after passing the attribute from the vertex shader to the fragment shader, compute gl_FragColor = vertexColor * textureColor * blendColor; or similar).
Also note: you're mixing the fixed-function pipeline and immediate mode (glBegin/glEnd) with shaders... that's not a good idea. And I don't see where your uniforms are set; using shaders without setting their uniforms is asking for trouble.
IMO the best solution would be either to use modern OpenGL (>= 3.1) with proper, compliant shaders, or to use only the fixed-function pipeline with no shaders on legacy OpenGL.
As to how to load a texture with GLSL (see https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/texturing.php for more info if needed):
a) you feed the data to the GPU by creating a texture and binding it to a GPU texture unit by calling
GLuint id;
glGenTextures(1, &id); // note: glGenTextures; there is no glGenTexture
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(...);
// see https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml for details
(which I suppose you've already done, since you're already calling glBindTexture with the image's texture id),
b) you provide UV texture coordinates for your geometry; you're already doing this by supplying glTexCoord2f, which will probably allow you to use the legacy attribute names as in https://www.opengl.org/sdk/docs/tutorials/ClockworkCoders/attributes.php, but the proper way would be to pass them as part of a packed attribute structure,
c) you use the bound texture by sampling it in the shader, e.g. (legacy GLSL follows):
// vertex shader
varying vec2 vTexCoord;
void main() {
    vTexCoord = gl_MultiTexCoord0.st; // gl_MultiTexCoord0 is a vec4; take its .st components
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
// fragment shader
uniform sampler2D texture;
varying vec2 vTexCoord;
void main() {
    vec4 colorMultiplier = vec4(0.5, 1.0, 1.0, 0.2);
    gl_FragColor = texture2D(texture, vTexCoord) * colorMultiplier;
}
Still, if you intend to change it at runtime, it would be best to pass colorMultiplier in as a uniform.
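For example (a minimal sketch based on the shader above; the program handle prog is illustrative, not from the original code):
// fragment shader
uniform sampler2D texture;
uniform vec4 colorMultiplier; // replaces the hard-coded constant
varying vec2 vTexCoord;
void main() {
    gl_FragColor = texture2D(texture, vTexCoord) * colorMultiplier;
}
// application side:
glUseProgram(prog);
GLint loc = glGetUniformLocation(prog, "colorMultiplier");
glUniform4f(loc, 0.5f, 1.0f, 1.0f, 0.2f);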

Qt and OpenGL: drawing a triangle using attributes

I have a problem with a simple shader.
I plan to draw a triangle (one for a start) in color. What I want: I calculate a color for each node of the triangle and give it to the vertex shader, which then passes it to the fragment shader, so I get a colorful triangle. What I get is nothing - no triangle. So I decided to simplify a little - I pass the parameters to the shaders but don't use them, and I get the same result. Here is the C++ code:
QVector4D colors[3];
...
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue("b_color", colors[1]);
glVertex2d(0, 0);
invers_sh.setAttributeValue("b_color", colors[1]);
glVertex2d(2.0, 0);
invers_sh.setAttributeValue("b_color", colors[2]);
glVertex2d(0, 2.0);
glEnd();
Vertex shader:
in vec4 vertex;
attribute vec4 b_color;
varying vec4 color_v;
uniform mat4 qt_ModelViewProjectionMatrix;
void main( void )
{
    gl_Position = qt_ModelViewProjectionMatrix * vertex;
    color_v = b_color;
}
Fragment shader:
varying vec4 color_v;
void main( void )
{
    gl_FragColor = vec4(1.0, 0, 0, 0);
}
I figured out that I get my red triangle if I comment out all the setAttributeValue calls in the C++ code and the line
color_v = b_color;
in the vertex shader.
Help me.
Can you test the following: replace each line like
invers_sh.setAttributeValue("b_color", colors[0]);
with
invers_sh.setAttributeValue(b_colorLocation, colors[0]);
declare a global for the attribute location,
int b_colorLocation;
and add this where you compile your shaders, to get the location of b_color:
b_colorLocation = invers_sh.attributeLocation("b_color");
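Putting it together (a sketch assembled from the pieces above; invers_sh is assumed to be a QGLShaderProgram, as in the question):
// global (or member) attribute location:
int b_colorLocation;
// after compiling/linking the shaders:
b_colorLocation = invers_sh.attributeLocation("b_color");
// in the draw code:
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue(b_colorLocation, colors[0]);
glVertex2d(0, 0);
invers_sh.setAttributeValue(b_colorLocation, colors[1]);
glVertex2d(2.0, 0);
invers_sh.setAttributeValue(b_colorLocation, colors[2]);
glVertex2d(0, 2.0);
glEnd();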

Display Part of Texture in GLSL

I'm using GLSL to draw sprites from a sprite sheet. I'm using jME 3, yet there are only small differences from plain OpenGL, and only with regard to deprecated functions.
The most important part of drawing a sprite from a sprite sheet is to draw only a subset/range of pixels, for example the range from (100, 0) to (200, 100). With a test-case sprite sheet and those bounds, only the green part of the sheet would be drawn.
This is what I have so far:
Definition:
MaterialDef Solid Color {
    // This is the list of user-defined variables to be used in the shader
    MaterialParameters {
        Vector4 Color
        Texture2D ColorMap
    }
    Technique {
        VertexShader GLSL100: Shaders/tc_s1.vert
        FragmentShader GLSL100: Shaders/tc_s1.frag
        WorldParameters {
            WorldViewProjectionMatrix
        }
    }
}
.vert file:
uniform mat4 g_WorldViewProjectionMatrix;
attribute vec3 inPosition;
attribute vec4 inTexCoord;
varying vec4 texture_coordinate;
void main(){
    gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
    texture_coordinate = vec4(inTexCoord);
}
.frag:
uniform vec4 m_Color;
uniform sampler2D m_ColorMap;
varying vec4 texture_coordinate;
void main(){
    vec4 color = vec4(m_Color);
    vec4 tex = texture2D(m_ColorMap, texture_coordinate);
    color *= tex;
    gl_FragColor = color;
}
In jME 3, inTexCoord refers to gl_MultiTexCoord0, and inPosition refers to gl_Vertex.
As you can see, I tried to give the texture_coordinate a vec4 type, rather than a vec2, so as to be able to reference its p and q values (texture_coordinate.p and texture_coordinate.q). Modifying them only resulted in different hues.
m_Color refers to the color input by the user, and serves the purpose of altering the hue. In this case, it should be disregarded.
So far, the shader works as expected and the texture displays correctly.
I've been using resources and tutorials from NeHe (http://nehe.gamedev.net/article/glsl_an_introduction/25007/) and Lighthouse3D (http://www.lighthouse3d.com/tutorials/glsl-tutorial/simple-texture/).
Which functions/values should I alter to get the desired effect of displaying only part of the texture?
Generally, if you want to only display part of a texture, then you change the texture coordinates associated with each vertex. Since you don't show your code for how you're telling OpenGL about your vertices, I'm not sure what to suggest. But in general, if you're using older deprecated functions, instead of doing this:
// Lower Left of triangle
glTexCoord2f(0,0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(1,0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(1,1);
glVertex3f(x2,y2,z2);
You could do this:
// Lower Left of triangle
glTexCoord2f(1.0 / 3.0, 0.0);
glVertex3f(x0,y0,z0);
// Lower Right of triangle
glTexCoord2f(2.0 / 3.0, 0.0);
glVertex3f(x1,y1,z1);
// Upper Right of triangle
glTexCoord2f(2.0 / 3.0, 1.0);
glVertex3f(x2,y2,z2);
If you're using VBOs, then you need to modify your array of texture coordinates to access the appropriate section of your texture in a similar manner.
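For instance (a minimal sketch, assuming the quad is drawn as two triangles from a tightly packed texture-coordinate VBO; quad_uvs and uv_vbo are placeholder names):
// UVs selecting the middle third of the texture (u from 1/3 to 2/3):
GLfloat quad_uvs[] = {
    1.0f/3.0f, 0.0f,   2.0f/3.0f, 0.0f,   2.0f/3.0f, 1.0f, // first triangle
    1.0f/3.0f, 0.0f,   2.0f/3.0f, 1.0f,   1.0f/3.0f, 1.0f  // second triangle
};
glBindBuffer(GL_ARRAY_BUFFER, uv_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad_uvs), quad_uvs, GL_STATIC_DRAW);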
For a sampler2D the texture coordinates are normalized, so that the leftmost and bottom-most coordinate is 0, and the rightmost and topmost is 1. So for your example of a 300-pixel-wide texture, the green section would lie between 1/3 and 2/3 of the texture's width (u = pixel_x / width, i.e. from 100/300 to 200/300).

OpenGL Instanced Rendering with per-instance color & offset

Hi, I am trying to render lots of axis-aligned cubes with glDrawArraysInstanced(). Each cube has a fixed size and can vary only in its center position and color. Also, each cube takes only a few different colors. So I want to potentially render millions of cubes with the following per-instance data:
struct CubeInfo {
    Eigen::Vector3f center; // center of the cube (x, y, z)
    int labelId;            // label of the cube, which determines its color
};
So I am using the following vertex shader:
#version 330
uniform mat4 mvp_matrix;
// regular vertex attributes
layout(location = 0) in vec3 vertex_position;
// per-instance variables
layout(location = 1) in vec3 cube_center;
layout(location = 2) in int cube_label;
// color out to frag shader
out vec4 color_out;
void main(void) {
    // Add the per-instance offset cube_center
    vec4 new_pos = vec4(vertex_position + cube_center, 1);
    // Calculate vertex position in screen space
    gl_Position = mvp_matrix * new_pos;
    // Set color_out based on the label
    switch (cube_label) {
    case 1:
        color_out = vec4(0.5, 0.25, 0.5, 1);
        break;
    case 2:
        color_out = vec4(0.75, 0.0, 0.0, 1);
        break;
    case 3:
        color_out = vec4(0.0, 0.0, 0.5, 1);
        break;
    case 4:
        color_out = vec4(0.75, 1.0, 0.0, 1);
        break;
    default:
        color_out = vec4(0.5, 0.5, 0.5, 1); // grey
        break;
    }
}
and the corresponding fragment shader:
#version 330
in vec4 color_out;
out vec4 fragColor;
void main()
{
    // Pass the interpolated vertex color through
    fragColor = color_out;
}
However, color_out always takes the default grey value, even though the cube_label values are between 1 and 4. This is my problem. Am I doing something wrong in the shader above?
I initialized the CubeInfo VBO with random labelIds between 1 and 4, so I am expecting colorful output rather than the uniform grey I get.
This is my render code, which makes use of Qt's QGLShaderProgram and QGLBuffer wrappers:
// Enable back face culling
glEnable(GL_CULL_FACE);
cubeShaderProgram_.bind();
// Set the vertexbuffer stuff (Simply 36 vertices for cube)
cubeVertexBuffer_.bind();
cubeShaderProgram_.setAttributeBuffer("vertex_position", GL_FLOAT, 0, 3, 0);
cubeShaderProgram_.enableAttributeArray("vertex_position");
cubeVertexBuffer_.release();
// Set the per instance buffer stuff
cubeInstanceBuffer_.bind();
cubeShaderProgram_.setAttributeBuffer("cube_center", GL_FLOAT, offsetof(CubeInfo,center), 3, sizeof(CubeInfo));
cubeShaderProgram_.enableAttributeArray("cube_center");
int center_location = cubeShaderProgram_.attributeLocation("cube_center");
glVertexAttribDivisor(center_location, 1);
cubeShaderProgram_.setAttributeBuffer("cube_label", GL_INT, offsetof(CubeInfo,labelId), 1, sizeof(CubeInfo));
cubeShaderProgram_.enableAttributeArray("cube_label");
int label_location = cubeShaderProgram_.attributeLocation("cube_label");
glVertexAttribDivisor(label_location, 1);
cubeInstanceBuffer_.release();
// Do instanced rendering
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, displayed_num_cubes_ );
cubeShaderProgram_.disableAttributeArray("vertex_position");
cubeShaderProgram_.disableAttributeArray("cube_center");
cubeShaderProgram_.disableAttributeArray("cube_label");
cubeShaderProgram_.release();
Apart from my primary question above (the color problem), is this a good way to do Minecraft-style cube rendering?
Update
If I change my CubeInfo.labelId attribute from int to float, and the corresponding vertex-shader variable cube_label to float as well, it works! Why is that? This page says GLSL supports the int type. I would prefer labelId/cube_label to be an int/short.
Update2:
Even if I just change GL_INT to GL_FLOAT in the following line of my render code, I get proper colors:
cubeShaderProgram_.setAttributeBuffer("cube_label", GL_INT, offsetof(CubeInfo,labelId), 1, sizeof(CubeInfo));
The problem with your label attribute is that it is an integer attribute, but you don't set it as an integer attribute. Qt's setAttributeBuffer functions don't know anything about integer attributes; they all use glVertexAttribPointer under the hood, which takes vertex data in any arbitrary format and converts it into float to feed an in float attribute. That doesn't match the in int attribute in your shader (so the attribute will probably just remain at some default value, or get undefined values).
To actually pass data into a real integer vertex attribute (which is something entirely different from a float attribute and wasn't introduced until GL 3+), you need the function glVertexAttribIPointer (note the I in there; similarly there is glVertexAttribLPointer for in double attributes - just using GL_DOUBLE won't work in that case either). But sadly Qt, not really being fit for GL 3+ yet, doesn't seem to have a wrapper for that. So you will either have to do it manually using:
glVertexAttribIPointer(cubeShaderProgram_.attributeLocation("cube_label"),
                       1, GL_INT, sizeof(CubeInfo),
                       static_cast<const char*>(0) + offsetof(CubeInfo, labelId));
in place of the cubeShaderProgram_.setAttributeBuffer call, or just use an in float attribute instead.
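Putting it together with the render code above (a sketch; this replaces the setAttributeBuffer/enableAttributeArray pair for cube_label while the instance buffer is bound):
cubeInstanceBuffer_.bind();
int label_location = cubeShaderProgram_.attributeLocation("cube_label");
glEnableVertexAttribArray(label_location);
glVertexAttribIPointer(label_location, 1, GL_INT, sizeof(CubeInfo),
                       static_cast<const char*>(0) + offsetof(CubeInfo, labelId));
glVertexAttribDivisor(label_location, 1);
cubeInstanceBuffer_.release();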
If you want to use the color assigned in vertex shader, you should at least write a trivial fragment shader like:
in vec4 color_out;
out vec4 fragColor;
void main()
{
    fragColor = color_out;
}
Update
I think that either you can't pass cube_label to your vertex shader, or you didn't set the values in the first place (in the structure). The latter is more likely, but you can replace your switch-case with the following line to see the actual value passed:
color_out = vec4(float(cube_label) / 4.0, 0, 0, 1.0);
Update2
I once had a similar problem with Intel GPUs (drivers): no matter what I tried, I couldn't pass integer values. However, the same shader worked flawlessly on NVIDIA, so as in your case I converted the int into a float.

Combining two textures in a fragment shader

I'm working on implementing deferred shading in my game. I have rendered the diffuse textures to one render target and the lighting to another. I know both are fine because I can render each straight to the screen with no problems. What I want to do is combine the diffuse map and the light map in a shader to create the final image. Here is my current fragment shader, which results in a black screen.
#version 110
uniform sampler2D diffuseMap;
uniform sampler2D lightingMap;
void main()
{
    vec4 color = texture(diffuseMap, gl_TexCoord[0].st);
    vec4 lighting = texture(lightingMap, gl_TexCoord[0].st);
    vec4 finalColor = color;
    gl_FragColor = finalColor;
}
Shouldn't this result in the same thing as just straight up drawing the diffuse map?
I set the sampler2D with this method:
void ShaderProgram::setUniformTexture(const std::string& name, GLint t) {
    GLint var = getUniformLocation(name);
    glUniform1i(var, t);
}
GLint ShaderProgram::getUniformLocation(const std::string& name) {
    if (mUniformValues.find(name) != mUniformValues.end()) {
        return mUniformValues[name];
    }
    GLint var = glGetUniformLocation(mProgram, name.c_str());
    mUniformValues[name] = var;
    return var;
}
EDIT: Some more information. Here is the code where I use the shader: I set the two textures and draw a blank square for the shader to use. I know for sure my render targets work because, as I said before, I can draw them fine using the same getTextureId as I do here.
graphics->useShader(mLightingCombinedShader);
mLightingCombinedShader->setUniformTexture("diffuseMap", mDiffuse->getTextureId());
mLightingCombinedShader->setUniformTexture("lightingMap", mLightMap->getTextureId());
graphics->drawPrimitive(mScreenRect, 0, 0);
graphics->clearShader();
void GraphicsDevice::useShader(ShaderProgram* p) {
    glUseProgram(p->getId());
}
void GraphicsDevice::clearShader() {
    glUseProgram(0);
}
And the vertex shader
#version 110
varying vec2 texCoord;
void main()
{
    texCoord = gl_MultiTexCoord0.xy;
    gl_Position = ftransform();
}
In GLSL version 110 you should use:
texture2D(diffuseMap, gl_TexCoord[0].st); // etc.
instead of just the texture function.
And then to combine the textures, just multiply the colours together, i.e.
gl_FragColor = color * lighting;
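Putting both fixes together, the fragment shader might look like this (a sketch based on the code above; it samples through the texCoord varying that the question's vertex shader actually writes, rather than gl_TexCoord[0], which that vertex shader never sets):
#version 110
uniform sampler2D diffuseMap;
uniform sampler2D lightingMap;
varying vec2 texCoord; // written by the vertex shader shown above
void main()
{
    vec4 color = texture2D(diffuseMap, texCoord);
    vec4 lighting = texture2D(lightingMap, texCoord);
    gl_FragColor = color * lighting;
}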
glUniform1i(var, t);
The glUniform functions affect the program currently in use, i.e. the last program glUseProgram was called on. If you want to set a uniform for a specific program, you have to use that program first.
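In this case that means binding the program before setting its samplers. A sketch using the code above (note the assumption here that setUniformTexture is meant to receive a texture unit index such as 0 or 1 - which is what a sampler uniform expects - not a texture object id):
graphics->useShader(mLightingCombinedShader); // glUseProgram happens here
mLightingCombinedShader->setUniformTexture("diffuseMap", 0);  // texture unit 0
mLightingCombinedShader->setUniformTexture("lightingMap", 1); // texture unit 1
// ... then bind the render-target textures to units 0 and 1 with
// glActiveTexture(GL_TEXTURE0 + unit) / glBindTexture(...) before drawing.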
The problem ended up being that I didn't enable the texture coordinates for the screen rectangle I was drawing.
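With the fixed-function pipeline used here, enabling them typically looks like this (a sketch, assuming the screen rectangle is drawn from client-side vertex arrays; screenUVs is a placeholder name):
GLfloat screenUVs[] = { 0,0, 1,0, 1,1, 0,1 };
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, screenUVs);
// ... draw the screen rectangle ...
glDisableClientState(GL_TEXTURE_COORD_ARRAY);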