Use only a Fragment Shader in libGDX?

I am porting my project from pure LWJGL to libGDX, and I'd like to know if there is a way to create a program with only a Fragment Shader.
What I'd like to do is change the color of my texture where it is gray to a color I receive as a parameter. The shader worked perfectly before, but now it seems that I need to add a Vertex Shader, which does not make any sense to me. So I wrote this:
void main() {
    gl_Position = ftransform();
}
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't. But it doesn't work: nothing is displayed, and no compilation errors or warnings are thrown. I tried to replace my Fragment Shader with a simpler one, but the results were even stranger:
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
This should just paint everything red. But when I run it, the program crashes; no error is thrown, no Exception, very odd behavior. I know that libGDX adds tinting, but I don't think tinting alone would work either, because I need to replace only the grayscale colors, and depending on their intensity, modulate the target color correctly (and that color varies). Another shader I am using (Fragment only, again) sets the entire scene to grayscale (when on the pause menu). It doesn't work either.

When it comes to shaders in modern OpenGL you always have to supply at least a vertex and a fragment shader. OpenGL 2 gave you some leeway, with the drawback that you then still had to struggle with the arcane fixed-function state machine.
That, according to the internet, is the simplest possible VS: it does nothing, and it really shouldn't.
What makes you think it does "nothing"? Of course it does something: it transforms incoming raw numbers into something sensible, namely vertices in clip space.
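As a sketch of what that minimal pair can look like in libGDX: the following vertex shader assumes the default SpriteBatch attribute and uniform names (a_position, a_color, a_texCoord0, u_projTrans); if you use different names, adjust accordingly.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    // pass the tint color and texture coordinates through to the fragment shader
    v_color = a_color;
    v_texCoords = a_texCoord0;
    // transform the incoming position into clip space
    gl_Position = u_projTrans * a_position;
}
Your fragment shader can then sample the texture at v_texCoords (u_texture in SpriteBatch's stock shader) and do the gray-to-color replacement there.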

Why is glUseProgram called every frame with glUniform?

I am following an OpenGL v3.3 tutorial that instructs me to modify a uniform in a fragment shader using glUniform4f (refer to the code below). As far as I understand, OpenGL is a state machine; we don't unbind the current shaderProgram being used, we merely modify a uniform in one of the shaders attached to the program. So why do we need to call glUseProgram on every frame?
I understand that this is not the case for later versions of OpenGL, but I'd still like to understand why it's the case for v3.3
OpenGL Program:
while (!glfwWindowShouldClose(window))
{
    processInput(window);

    glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(shaderProgram); // the function in question

    float redValue = (sin(glfwGetTime()) / 2.0f) + 0.5f;
    int colorUniformLocation = glGetUniformLocation(shaderProgram, "ourColor");
    glUniform4f(colorUniformLocation, redValue, 0.0f, 0.0f, 1.0f);
    std::cout << colorUniformLocation << std::endl;

    glBindVertexArray(VAO[0]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(VAO[1]);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glfwSwapBuffers(window);
    glfwPollEvents();
}
Fragment Shader:
#version 330 core
out vec4 FragColor;
uniform vec4 ourColor;
void main()
{
    FragColor = ourColor;
}
Edit: I forgot to point out that glUniform4f sets a new color (in a periodic fashion) each frame. The final output of the code is two triangles with an animating color; removing glUseProgram from the while loop will result in a static image, which isn't the intended goal of the code.
In your case you probably don't have to set it every frame.
However, in a bigger program you'll use multiple shaders, so you'll need to bind the one you want before each use; the samples are likely just written with that in mind.
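For instance, a frame in a larger program might look like this (drawSprites, drawText, and the two program handles are purely illustrative):
glUseProgram(spriteShader);   // sprites use their own program
drawSprites();

glUseProgram(textShader);     // switch programs before drawing text
drawText();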
Mutable global variables (which is effectively what state is with OpenGL) are inherently dangerous. One of the most important dangers of mutable globals is making assumptions about their current state which turn out to be wrong. These kinds of failures make it incredibly difficult to understand whether or not a piece of code will work correctly, since its behavior is dependent on something external. Something that is assumed about the nature of the world rather than defined by the function that expects it.
Your code wants to issue two drawing commands that use a particular shader. By binding that shader at the point of use, this code is not bound to any assumptions about the current shader. It doesn't matter what the previous shader was when you start the loop; you're setting it to what it needs to be.
This makes this code insulated to any later changes you might make. If you want to render a third thing that uses a different shader, your code continues to work: you reset the shader at the start of each loop. If you had only set the shader outside of the loop, and didn't reset it each time, then your code would be broken by any subsequent shader changes.
Yes, in a tiny toy program like this, that's probably an easy problem to track down and fix. But when you're dealing with code that spans hundreds of files with tens of thousands of lines, with dependency relationships scattered across third-party libraries, all of which might modify any particular piece of OpenGL state? Yeah, it's probably best not to assume too much about the nature of the world.
Learning good habits early on is a good thing.
Now to be fair, re-specifying a bunch of OpenGL state at every point in the program is also a bad idea. Making assumptions/expectations about the state of the OpenGL context part of a function is not a priori bad. If you have a rendering function for a mesh, it's OK for that function to assume that the caller has bound the shader it intends to use. It's not the job of this function to specify all of the other state that needs to be set for rendering. And indeed, it would be a bad mesh class/function if it did, since you would be unable to render the same mesh with different state.
But at the beginning of each frame, or the start of each major part of your rendering process, specifying a baseline of OpenGL state is perfectly valid. When you loop back to the beginning of a new frame, you should basically assume nothing about OpenGL's state. Not because OpenGL won't remember, but because you might be wrong.
As the answers and comments demonstrated, in the example stated in my question, glUseProgram needs to be called only once, outside the while loop, to produce the intended output, which is two triangles with colors animating periodically. The misunderstanding I had is a result of the following chapter of the learnopengl.com e-book https://learnopengl.com/Getting-started/Shaders where it states:
"updating a uniform does require you to first use the program (by calling glUseProgram), because it sets the uniform on the currently active shader program."
I thought that every time I wanted to update the uniform via glUniform* I also had to issue a call to glUseProgram, which is an incorrect understanding.
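For illustration, a minimal sketch of the same loop with glUseProgram hoisted out of it (same window, shaderProgram, and VAO variables as in the question's code) still animates, because the program stays bound across frames:
glUseProgram(shaderProgram);  // bind once; it remains the current program
int colorUniformLocation = glGetUniformLocation(shaderProgram, "ourColor");

while (!glfwWindowShouldClose(window))
{
    processInput(window);
    glClearColor(0.2f, 1.0f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    // glUniform* targets the currently bound program, which is still shaderProgram
    float redValue = (sin(glfwGetTime()) / 2.0f) + 0.5f;
    glUniform4f(colorUniformLocation, redValue, 0.0f, 0.0f, 1.0f);

    glBindVertexArray(VAO[0]);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(VAO[1]);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glfwSwapBuffers(window);
    glfwPollEvents();
}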

How to render only the two end points when using GL_LINES in OpenGL?

I'm working on a picking function with OpenGL. I know that I can render the 3 edges as lines using glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); when the model is a triangle.
I also know that I can render the 3 vertices as points using glPolygonMode(GL_FRONT_AND_BACK, GL_POINT); when the model is a triangle.
Now I'm running into this problem: I cannot find a way to render 2 endpoints when rendering a line using GL_LINES.
Is there anything similar to glPolygonMode() that controls how GL_LINES works?
GL_LINE in glPolygonMode() describes how the primitive (the triangle) is filled in this case.
You can still vaguely make out the original primitive even when it is drawn as a series of lines or unconnected points rather than a filled triangle. However, for lines, if you reduce them to nothing but points you lose the critical information needed to make sense of what you are seeing (how those points were connected).
A line mode would make no sense in light of this, and the closest thing that really ever existed would probably be line stippling.
Just use GL_POINTS as your primitive instead, you clearly do not require lines for whatever you are trying to accomplish.
Is there anything similar to glPolygonMode() that controls how GL_LINES works?
In one word: No. You'll have to implement this yourself (by simply submitting just the endpoints).
You can render them as GL_POINTS primitives rather than GL_LINES. Of course, you will need to apply a point size for them to appear larger than just a single dot.
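A minimal sketch of that (the size value and vertexCount are arbitrary placeholders):
glPointSize(6.0f);                        // make each endpoint larger than one pixel
glDrawArrays(GL_POINTS, 0, vertexCount);  // same vertex data, drawn as points instead of lines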
As was already pointed out, there is no such thing as a glLineMode() call that would allow you to turn lines into points with a simple state setting.
If you can change the draw calls, you can obviously use GL_POINTS as the primitive type when you want to draw points.
Contrary to what the other answers claim, I believe there is a way to do this, even if hypothetically you can't modify the draw calls. You could use a geometry shader that has lines as the input primitive type, and points as the output primitive type. The geometry shader could look like this:
#version 150  // GLSL 1.50+ (OpenGL 3.2) is required for geometry shaders

layout(lines) in;
layout(points, max_vertices = 2) out;

void main() {
    // emit one point at each endpoint of the incoming line
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();
    EndPrimitive();
}
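To use it, the geometry shader is compiled and attached alongside the existing vertex and fragment shaders; a sketch of that (program and geometrySource are assumed to already exist):
GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(gs, 1, &geometrySource, nullptr);
glCompileShader(gs);          // check the compile log in real code
glAttachShader(program, gs);  // sits between the vertex and fragment stages
glLinkProgram(program);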

GLSL shader does not draw obj when including unused parameters

I set up a Phong shader with GLSL, which works fine.
When I render my object without "this line", it works. But when I uncomment "this line", the world is still built but the object is not rendered anymore, even though "LVN2" is not used anywhere else in the GLSL code. The shader compiles and runs without throwing errors. I think my problem is a rather general GLSL question about how the shader works.
The main code is written in java.
Vertex shader snippet:
// Light Vector 1
vec3 lightCamSpace = vec4(viewMatrix * modelMatrix * lightPosition).xyz;
out_LightVec = vec3(lightCamSpace - vertexCamSpace).xyz;
// Light Vector 2
vec3 lightCamSpace2 = vec4(viewMatrix * modelMatrix * lightPosition2).xyz;
out_LightVec2 = vec3(lightCamSpace2 - vertexCamSpace).xyz;
Fragment shader snippet:
vec3 LVN = normalize(out_LightVec);
//vec3 LVN2 = normalize(out_LightVec2); // <---- this line
EDIT 1:
GL_MAX_VERTEX_ATTRIBS is 29 and glGetError is already implemented but not throwing any errors.
If I change
vec3 LVN2 = normalize(out_LightVec2);
to
vec3 LVN2 = normalize(out_LightVec);
it actually renders the object again. So it really seems like something is maxed out. (LVN2 is still not used at any point in the shader)
I actually found my absolutely stupid mistake. In the main program I was giving the shader the wrong viewMatrix location... But I'm not sure why it sometimes worked.
I can't spot an error in your shaders. One thing that's possible is that you are exceeding GL_MAX_VERTEX_ATTRIBS by using a fifth four-component out slot. (Although a limit of 4 would be weird; according to this answer the minimum supported amount should be 16, and the program shouldn't even link in that case. Then again, you are using GLSL 1.50, which implies OpenGL 3.2, which is pretty old. I couldn't find a specification stating minimum requirements for the attribute count.)
The reason for it working with the line commented out could be that the shader compiler is able to optimize the unused in/out parameter away, but unable to do so when it is referenced in the fragment shader body.
You could test my guess by querying the limit:
int limit;
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &limit);
Beyond this, I would suggest inserting glGetError queries before and after your draw call to see if there's something else going on.
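A sketch of what such a check could look like (the helper name, the draw call, and its parameters are illustrative, not taken from your code):
// requires <iostream> plus your usual GL loader header
void checkGlErrors(const char* label)
{
    // drain and report every pending error, since glGetError returns them one at a time
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::cerr << label << ": GL error 0x" << std::hex << err << std::dec << '\n';
}

// around the draw call:
checkGlErrors("before draw");
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
checkGlErrors("after draw");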

Rendering to a texture using the image API (no Framebuffer)

As an experiment I decided to try rendering to a texture using the image API exclusively. At first the results were obviously wrong, as the texture write occurred before the depth test. So I enabled early_fragment_tests, which I thought was introduced for pretty much this type of use case, but now I get a weird sort of flickering that looks like Z-fighting. That seems strange, since it should be performing the same depth test that works for regular rendering.
Anyway, I've included an image of the problem, and I'm curious if anyone has an explanation as to what is going on and why this doesn't work. Can it be made to work?
Here's a minimal reproducer
#version 420

in vec3 normal;
layout(binding = 0) writeonly uniform image2D outputTex;

void main()
{
    vec4 fragColor = vec4(normal, 1);
    imageStore(outputTex, ivec2(gl_FragCoord.xy), fragColor);
}
I'm going to make some assumptions about the code you didn't show. Because you didn't show it. I'm going to assume that:
You used proper memory coherence operations when you went to display this image (whether to the actual screen or in a glReadPixels/glGetTexImage operations).
You rendered this scene using a regular rendering command, with no special ordering of triangles or anything. You did not render each triangle with a separate rendering command with memory coherence operations between each.
In short, I'm going to assume that your problem is actually due to your shader. It may well be due to many other things. But since you didn't deign to show the rest of your code, I can't tell. Therefore, this answer may not in fact answer your problem. But garbage in, garbage out.
The problem I can see from your shader and the above assumptions is really quite simple: incoherent memory accesses (like image load/store) are completely unordered. You performed an image write operation. Therefore, you have no guarantees about this write operation unless you take steps to make those guarantees.
Yes, you used early fragment tests. But that doesn't mean that the order of incoherent memory accesses from your fragment shader will be in any particular order.
Consider what happens if you render a triangle, then render a triangle in front of it that completely covers it. Early fragment tests won't change anything, since the top fragment happens after the bottom one. And image load/store does not guarantee anything about the ordering of writes to the same pixel. Therefore, it is very possible for writes to the bottom triangle to complete after writes to the top triangle.
As far as I know, ordering writes to the same pixel from different fragment shaders like this is not possible. Even if you issued a memoryBarrier after your write, my reading of the spec doesn't suggest that this will guarantee the write ordering here.
The correct answer is to not do this at all. Write to fragment shader outputs; that's what they're there for.
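For comparison, a conventional version of the reproducer shader would simply declare a fragment output and let the framebuffer (or an FBO with the texture attached) receive the ordered, depth-tested result:
#version 420

in vec3 normal;

// regular fragment output; writes are ordered and depth-tested by the pipeline
layout(location = 0) out vec4 fragColor;

void main()
{
    fragColor = vec4(normal, 1.0);
}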

ATI GLSL point sprite problems

I've just moved my rendering code onto my laptop and am having issues with OpenGL and GLSL.
I have a vertex shader like this (simplified):
uniform float tile_size;
void main(void) {
    gl_PointSize = tile_size;
    // gl_PointSize = 12;
}
and a fragment shader which uses gl_PointCoord to read a texture and set the fragment colour.
In my c++ program I'm trying to bind tile_size as follows:
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
GLint unif_tilesize = glGetUniformLocation(*shader program*, "tile_size");
glUniform1f(unif_tilesize, 12);
(Just to clarify: I've already set up a program and called glUseProgram; shown is just the snippet regarding this particular uniform.)
Set up like this I get one-pixel points, and I have discovered that glGetUniformLocation is failing to find tile_size (unif_tilesize gets set to -1).
If I swap the comments round in my vertex shader I get 12px point sprites fine.
Peculiarly, the exact same code works absolutely fine on my other computer. The OpenGL version on my laptop is 2.1.8304 and it's running an ATI Radeon X1200 (cf. an NVIDIA 8800 GT in my desktop), if this is relevant...
EDIT I've changed the question title to better reflect the problem.
You forgot to call glUseProgram before setting the uniform.
So after another day of playing around I've come to a point where, although I haven't solved my original problem of not being able to bind a uniform to gl_PointSize, I have modified my existing point sprite renderer to work on my ATI card (an old x1200) and thought I'd share some of the things I'd learned.
I think that something about gl_PointSize is broken (at least on my card); in the vertex shader I was able to get 8px point sprites using gl_PointSize=8.0;, but using gl_PointSize=tile_size; gave me 1px sprites whatever I tried to bind to the uniform tile_size.
Luckily I don't need different sized tiles for each vertex so I called glPointSize(tile_size) in my main.cpp instead and this worked fine.
In order to get gl_PointCoord to work (i.e. return values other than (0,0)) in my fragment shader, I had to call glTexEnvf( GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE ); in my main.cpp.
There persisted a ridiculous problem in which my varyings were being messed up somewhere between my vertex and fragment shaders. After a long game of 'guess what to type into Google to get relevant information', I found (and promptly lost) a forum where someone said that in some cases, if you don't use gl_TexCoord[0] in at least one of your shaders, your varyings will be corrupted.
In order to fix that I added a line at the end of my fragment shader:
_coord = gl_TexCoord[0].xy;
where _coord is an otherwise unused vec2. (Note gl_TexCoord is not used anywhere else.)
Without this line all my colours went blue and my texture lookup broke.
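Putting those workarounds together, a rough sketch of the host-side setup described above (shaderProgram and numPoints are illustrative placeholders):
glEnable(GL_TEXTURE_2D);
glEnable(GL_POINT_SPRITE);
glTexEnvf(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE); // makes gl_PointCoord vary across the sprite
glPointSize(tile_size);       // fixed point size set host-side instead of via gl_PointSize
glUseProgram(shaderProgram);
glDrawArrays(GL_POINTS, 0, numPoints);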