Implementing a fragment shader that uses a uniform sampler2D (LWJGL) - OpenGL

I am unable to successfully run a shader; I seem to be missing some step to make it all work. I end up with this error:
Exception in thread "main" org.lwjgl.opengl.OpenGLException: Invalid operation (1282)
at org.lwjgl.opengl.Util.checkGLError(Util.java:59)
at org.lwjgl.opengl.GL20.glUniform1i(GL20.java:374)
at sprites.Sprite.draw(Sprite.java:256)
at gui.Game.drawFrame(Game.java:238)
at gui.Game.gameLoop(Game.java:205)
at gui.Game.startGame(Game.java:244)
at tests.simple.SimpleShader.main(SimpleShader.java:36)
My initialization begins with:
int frag = FilterLoader.createShader("/tests/resources/shaders/grayscale.frag", GL20.GL_FRAGMENT_SHADER);
and the createShader method looks like the following:
int shader = GL20.glCreateShader(type);
if(shader == 0)
    return 0;

StringBuilder code = new StringBuilder();
String line;
try
{
    String path = FilterLoader.class.getResource(filename).getPath();
    BufferedReader reader = new BufferedReader(new FileReader(path));
    while((line = reader.readLine()) != null)
    {
        code.append(line).append("\n");
    }
    reader.close(); // avoid leaking the file handle
}
catch(Exception e)
{
    e.printStackTrace();
    System.err.println("Error reading in " + type + " shader");
    return 0;
}

GL20.glShaderSource(shader, code);
GL20.glCompileShader(shader);
return shader;
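Note that createShader never checks whether compilation succeeded, so a shader that failed to compile is returned as if everything were fine. A minimal status check (a sketch against LWJGL 2's GL20 API; the 1024 log length is an arbitrary choice):
GL20.glCompileShader(shader);
if(GL20.glGetShaderi(shader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE)
{
    // Print the compiler's error log before giving up.
    System.err.println(GL20.glGetShaderInfoLog(shader, 1024));
    return 0;
}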
I then attach the shader to the specific Sprite with:
two.addFragmentShader(frag); //two is a Sprite
which is simply:
fragmentShader = fragment_shader;
GL20.glAttachShader(shader, fragment_shader);
GL20.glLinkProgram(shader);
The int shader has previously been initialized in the Sprite's constructor with:
shader = GL20.glCreateProgram();
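As a side note, nothing here verifies that the link step succeeded either. A minimal check (again a sketch against LWJGL 2's GL20 API, with an arbitrary 1024 log length):
GL20.glLinkProgram(shader);
if(GL20.glGetProgrami(shader, GL20.GL_LINK_STATUS) == GL11.GL_FALSE)
{
    System.err.println(GL20.glGetProgramInfoLog(shader, 1024));
}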
That was a previous problem, but it is no longer an issue. Now I get to where the actual error occurs: in the Sprite's draw method (two's, in this case), which looks like so:
if(true)
{
    GL20.glUseProgram(shader);
}

glPushMatrix();
glActiveTexture(GL13.GL_TEXTURE0);
imageData.getTexture().bind();

//The line below is where the error occurs.
GL20.glUniform1i(fragmentShader, GL13.GL_TEXTURE0);

int tx = (int)location.x;
int ty = (int)location.y;
glTranslatef(tx, ty, location.layer);

float texture_X = ((float)which_column/(float)columns);
float texture_Y = ((float)which_row/(float)rows);
float texture_XplusWidth = ((float)(which_column+wide)/(float)columns);
float texture_YplusHeight = ((float)(which_row+tall)/(float)rows);

glBegin(GL_QUADS);
{
    GL11.glTexCoord2f(texture_X, texture_Y);
    glVertex2f(0, 0);
    GL11.glTexCoord2f(texture_X, texture_YplusHeight);
    glVertex2f(0, getHeight());
    GL11.glTexCoord2f(texture_XplusWidth, texture_YplusHeight);
    glVertex2f(getWidth(), getHeight());
    GL11.glTexCoord2f(texture_XplusWidth, texture_Y);
    glVertex2f(getWidth(), 0);
}
glEnd();

GL20.glUseProgram(0);
glPopMatrix();
And the error occurs at this line:
GL20.glUniform1i(fragmentShader, GL13.GL_TEXTURE0);
And for reference my shader:
// simple fragment shader
uniform sampler2D texture;
void main()
{
    vec4 color, texel;
    color = gl_Color;
    texel = texture2D(texture, gl_TexCoord[0].xy); // sampler2D is sampled with texture2D, not texture2DRect
    color *= texel;
    float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114)); // Rec. 601 luma weights
    gl_FragColor = vec4(gray, gray, gray, color.a);
}
I've gone through the tutorials, read about the error, and I can't figure out what step I have missed.

GL20.glUniform1i(fragmentShader, GL13.GL_TEXTURE0);
This is wrong. The first parameter of glUniform1i is the uniform location, which you can get with glGetUniformLocation.
The second parameter is an integer; for a texture sampler you need to pass the texture unit index (0, 1, 2, etc.), and bind the texture to that texture unit, for example:
glUseProgram(program);
int loc = glGetUniformLocation(program, "texture");
glUniform1i(loc, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texId);
Then it should work.
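Translated to the question's LWJGL code, the corrected draw setup might look like this sketch (assuming shader is the linked program object and that imageData.getTexture().bind() binds to the currently active texture unit, as in the original draw method):
GL20.glUseProgram(shader);
// Look up the sampler uniform's location; this can be cached once after linking.
int loc = GL20.glGetUniformLocation(shader, "texture");
// Pass the texture unit index 0, not the GL_TEXTURE0 enum value (0x84C0).
GL20.glUniform1i(loc, 0);
GL13.glActiveTexture(GL13.GL_TEXTURE0);
imageData.getTexture().bind();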

Related

Unable to get tessellation shader working

I've just started following the OpenGL SuperBible (7th ed.) and translating the examples into LWJGL, but I have become stuck on the tessellation shaders. In the program below there is the comment "//IF THESE TWO LINES..."; if the two lines that follow it are commented out, the vertex and fragment shaders work, but when control.tess.glsl and eval.tess.glsl are included, the triangle no longer renders.
I've uploaded my program to GitHub, but will reproduce the code here as well:
package com.ch3vertpipeline;

public class App {

    public static void main(String [] args){
        LwjglSetup setup = new LwjglSetup();
        setup.run();
    }
}
package com.ch3vertpipeline;
import java.nio.IntBuffer;
import java.util.Scanner;
import org.lwjgl.*;
import org.lwjgl.glfw.*;
import org.lwjgl.opengl.*;
import org.lwjgl.system.*;
import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.system.MemoryStack.stackPush;
import static org.lwjgl.system.MemoryUtil.NULL;
public class LwjglSetup {

    private long window;
    private int vertex_shader;
    private int fragment_shader;
    private int tess_control_shader;
    private int tess_evaluation_shader;
    private int program;
    private int vertex_array_object;

    public LwjglSetup() {
    }
    private void init() {
        GLFWErrorCallback.createPrint(System.err).set();
        if (!glfwInit()) {
            throw new IllegalStateException("Unable to initialize GLFW");
        }
        // Configure GLFW
        glfwDefaultWindowHints(); // optional, the current window hints are already the default
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable
        // Create the window
        window = glfwCreateWindow(300, 300, "Hello World!", NULL, NULL);
        if (window == NULL) {
            throw new RuntimeException("Failed to create the GLFW window");
        }
        // Setup a key callback. It will be called every time a key is pressed, repeated or released.
        glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
            if (key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE) {
                glfwSetWindowShouldClose(window, true); // We will detect this in the rendering loop
            }
        });
        // Get the thread stack and push a new frame
        try (MemoryStack stack = stackPush()) {
            IntBuffer pWidth = stack.mallocInt(1); // int*
            IntBuffer pHeight = stack.mallocInt(1); // int*
            // Get the window size passed to glfwCreateWindow
            glfwGetWindowSize(window, pWidth, pHeight);
            // Get the resolution of the primary monitor
            GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());
            // Center the window
            glfwSetWindowPos(
                window,
                (vidmode.width() - pWidth.get(0)) / 2,
                (vidmode.height() - pHeight.get(0)) / 2
            );
        } // the stack frame is popped automatically
        // Make the OpenGL context current
        glfwMakeContextCurrent(window);
        // Enable v-sync
        glfwSwapInterval(1);
        // Make the window visible
        glfwShowWindow(window);
    }
    public void run() {
        System.out.println("Hello LWJGL " + Version.getVersion() + "!");
        init();
        loop();
        // Free the window callbacks and destroy the window
        glfwFreeCallbacks(window);
        glfwDestroyWindow(window);
        // Terminate GLFW and free the error callback
        glfwTerminate();
        glfwSetErrorCallback(null).free();
    }
    private void loop() {
        GL.createCapabilities();//Critical
        System.out.println("OpenGL Version: " + glGetString(GL_VERSION));
        this.compileShader();
        vertex_array_object = glGenVertexArrays();
        glBindVertexArray(vertex_array_object);
        while (!glfwWindowShouldClose(window)) {
            double curTime = System.currentTimeMillis() / 1000.0;
            double slowerTime = curTime;//assigned directly but I was applying a factor here
            final float colour[] = {
                (float) Math.sin(slowerTime) * 0.5f + 0.5f,
                (float) Math.cos(slowerTime) * 0.5f + 0.5f,
                0.0f, 1.0f};
            glClearBufferfv(GL_COLOR, 0, colour);
            glUseProgram(program);
            final float attrib[] = {
                (float) Math.sin(slowerTime) * 0.5f,
                (float) Math.cos(slowerTime) * 0.6f,
                0.0f, 0.0f};
            //glPatchParameteri(GL_PATCH_VERTICES, 3);//this is the default so is unneeded
            glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
            glVertexAttrib4fv(0, attrib);
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glfwSwapBuffers(window); // swap the color buffers
            glfwPollEvents();
        }
        glDeleteVertexArrays(vertex_array_object);
        glDeleteProgram(program);
    }
    private String readFileAsString(String filename) {
        String next = new Scanner(LwjglSetup.class.getResourceAsStream(filename), "UTF-8").useDelimiter("\\A").next();
        System.out.println("readFileAsString: " + next);
        return next;
    }
    private void compileShader() {
        //create and compile vertex shader
        String vertShaderSource = readFileAsString("/vert.glsl");
        vertex_shader = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vertex_shader, vertShaderSource);
        glCompileShader(vertex_shader);
        //check compilation
        if (glGetShaderi(vertex_shader, GL_COMPILE_STATUS) != 1) {
            System.err.println(glGetShaderInfoLog(vertex_shader));
            System.exit(1);
        }

        //create and compile fragment shader
        String fragShaderSource = readFileAsString("/frag.glsl");
        fragment_shader = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fragment_shader, fragShaderSource);
        glCompileShader(fragment_shader);
        //check compilation
        if (glGetShaderi(fragment_shader, GL_COMPILE_STATUS) != 1) {
            System.err.println(glGetShaderInfoLog(fragment_shader));
            System.exit(1);
        }

        //create and compile tessellation control shader
        String tessControlShaderSource = readFileAsString("/control.tess.glsl");
        tess_control_shader = glCreateShader(GL40.GL_TESS_CONTROL_SHADER);
        glShaderSource(tess_control_shader, tessControlShaderSource);
        glCompileShader(tess_control_shader);
        //check compilation
        if (glGetShaderi(tess_control_shader, GL_COMPILE_STATUS) != 1) {
            System.err.println(glGetShaderInfoLog(tess_control_shader));
            System.exit(1);
        }

        //create and compile tessellation evaluation shader
        String tessEvaluationShaderSource = readFileAsString("/eval.tess.glsl");
        tess_evaluation_shader = glCreateShader(GL40.GL_TESS_EVALUATION_SHADER);
        glShaderSource(tess_evaluation_shader, tessEvaluationShaderSource);
        glCompileShader(tess_evaluation_shader);
        //check compilation
        if (glGetShaderi(tess_evaluation_shader, GL_COMPILE_STATUS) != 1) {
            System.err.println(glGetShaderInfoLog(tess_evaluation_shader));
            System.exit(1);
        }

        //create program and attach shaders
        program = glCreateProgram();
        glAttachShader(program, vertex_shader);
        glAttachShader(program, fragment_shader);
        //IF THESE TWO LINES ARE COMMENTED THE PROGRAM WORKS...although there
        //is no tessellation...
        glAttachShader(program, tess_control_shader);
        glAttachShader(program, tess_evaluation_shader);
        glLinkProgram(program);
        //check link
        if (glGetProgrami(program, GL_LINK_STATUS) != 1) {
            System.err.println(glGetProgramInfoLog(program));
            System.exit(1);
        }
        glValidateProgram(program);
        if (glGetProgrami(program, GL_VALIDATE_STATUS) != 1) {
            System.err.println(glGetProgramInfoLog(program));
            System.exit(1);
        }
        //delete shaders as the program has them now
        glDeleteShader(vertex_shader);
        glDeleteShader(fragment_shader);
        glDeleteShader(tess_control_shader);
        glDeleteShader(tess_evaluation_shader);
    }
}
vert.glsl
#version 440 core
//'offset' is an input vertex attribute
layout (location=0) in vec4 offset;
layout (location=1) in vec4 color;
out vec4 vs_color;
void main(void)
{
    const vec4 vertices[3] = vec4[3]( vec4( 0.25, -0.25, 0.5, 1.0),
                                      vec4(-0.25, -0.25, 0.5, 1.0),
                                      vec4( 0.25,  0.25, 0.5, 1.0));
    //Add 'offset' to our hard-coded vertex position
    gl_Position = vertices[gl_VertexID] + offset;
    //Output a fixed value for vs_color
    vs_color = color;
}
frag.glsl
#version 440 core
in vec4 vs_color;
out vec4 color;
void main(void)
{
    color = vs_color;
}
control.tess.glsl
#version 440 core
layout (vertices=3) out;
void main(void)
{
    //Only if I am invocation 0
    if (gl_InvocationID == 0){
        gl_TessLevelInner[0] = 5.0;
        gl_TessLevelOuter[0] = 5.0;
        gl_TessLevelOuter[1] = 5.0;
        gl_TessLevelOuter[2] = 5.0;
    }
    //Everybody copies their input to their output?
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
eval.tess.glsl
#version 440 core
layout (triangles, equal_spacing, cw) in;
void main(void){
    gl_Position = (gl_TessCoord.x * gl_in[0].gl_Position) +
                  (gl_TessCoord.y * gl_in[1].gl_Position) +
                  (gl_TessCoord.z * gl_in[2].gl_Position);
}
Finally, if it helps here is some version information, which is printed at the start of the application:
Hello LWJGL 3.1.5 build 1!
OpenGL Version: 4.4.0 NVIDIA 340.107
glDrawArrays(GL_TRIANGLES, 0, 3);
When you draw something with tessellation, you are drawing patches, not triangles. Hence, you have to specify GL_PATCHES:
glDrawArrays(GL_PATCHES, 0, 3);
//Everybody copies their input to their output?
The reason is that the input vertices and output vertices of the tessellation control shader are not related to each other. The input vertices are taken from the input stream, i.e. your vertex buffers (after being processed by the vertex shader). Their number is specified by the GL_PATCH_VERTICES parameter. Each invocation takes this number of vertices from the buffer. The output vertices are kept internally in the pipeline. Their number is specified by the layout directive. This number can be different from the number of input vertices. They can also have different attributes. I find it more intuitive to think of these vertices as pieces of data instead of actual vertices with a geometric meaning. In some cases, this interpretation might make sense, but definitely not in all.
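In terms of the question's code, the minimal change is the draw call itself; setting the patch size explicitly is optional, since 3 is the default (a sketch):
GL40.glPatchParameteri(GL40.GL_PATCH_VERTICES, 3); // optional: 3 is already the default
glDrawArrays(GL40.GL_PATCHES, 0, 3); // one patch of 3 vertices, fed to the tessellator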

Memory barrier problems when writing and reading an image in OpenGL

I'm having a problem trying to read an image from a fragment shader. First I write into the image in shader program A (I'm just painting blue into the image), then I read from it in another shader program B to display it, but the read does not return the right color; I get a black image instead.
(Screenshot: the unexpected result, a black image.)
This is my application code:
void GLAPIENTRY MessageCallback(GLenum source, GLenum type, GLuint id, GLenum severity, GLsizei length, const GLchar* message, const void* userParam)
{
    std::cout << "GL CALLBACK: type = " << std::hex << type << ", severity = " << std::hex << severity << ", message = " << message << "\n"
              << (type == GL_DEBUG_TYPE_ERROR ? "** GL ERROR **" : "") << std::endl;
}
class ImgRW
    : public Core
{
public:
    ImgRW()
        : Core(512, 512, "JFAD")
    {}

    virtual void Start() override
    {
        glEnable(GL_DEBUG_OUTPUT);
        glDebugMessageCallback(MessageCallback, nullptr);
        shader_w = new Shader("w_img.vert", "w_img.frag");
        shader_r = new Shader("r_img.vert", "r_img.frag");
        glGenTextures(1, &space);
        glBindTexture(GL_TEXTURE_2D, space);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 512, 512);
        glBindImageTexture(0, space, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
        glGenVertexArrays(1, &vertex_array);
        glBindVertexArray(vertex_array);
    }

    virtual void Update() override
    {
        shader_w->use(); // writing shader
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
        shader_r->use(); // reading shader
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }

    virtual void End() override
    {
        delete shader_w;
        delete shader_r;
        glDeleteTextures(1, &space);
        glDeleteVertexArrays(1, &vertex_array);
    }

private:
    Shader* shader_w;
    Shader* shader_r;
    GLuint vertex_array;
    GLuint space;
};

#if 1
CORE_MAIN(ImgRW)
#endif
and these are my fragment shaders:
Writing to image
GLSL:
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
    imageStore(img, ivec2(gl_FragCoord.xy), vec4(0.0f, 0.0f, 1.0f, 1.0f));
}
Reading from image
GLSL:
#version 430 core
layout (binding = 0, rgba32f) uniform image2D img;
out vec4 out_color;
void main()
{
    vec4 color = imageLoad(img, ivec2(gl_FragCoord.xy));
    out_color = color;
}
The only way I get the correct result is if I change the order of the drawing commands, in which case I don't even need the memory barriers, like this (in the Update function above):
shader_r->use(); // reading shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
shader_w->use(); // writing shader
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I don't know if the problem is the graphics card or the drivers, whether I'm missing some kind of flag that enables memory barriers, whether I used the wrong barrier bits, or whether I placed the barriers in the wrong part of the code.
The vertex shader for both shader programs is the following:
#version 430 core
void main()
{
    vec2 v[4] = vec2[4]
    (
        vec2(-1.0, -1.0),
        vec2( 1.0, -1.0),
        vec2(-1.0,  1.0),
        vec2( 1.0,  1.0)
    );
    vec4 p = vec4(v[gl_VertexID], 0.0, 1.0);
    gl_Position = p;
}
and my init function is:
void Window::init()
{
    glfwInit();
    window = glfwCreateWindow(getWidth(), getHeight(), name, nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSetFramebufferSizeCallback(window, framebufferSizeCallback);
    glfwSetCursorPosCallback(window, cursorPosCallback);
    //glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
    assert(gladLoadGLLoader((GLADloadproc)glfwGetProcAddress) && "Couldn't initialize OpenGL");
    glEnable(GL_DEPTH_TEST);
}
and in my Run function I call my Start, Update, and End functions:
void Core::Run()
{
    std::cout << glGetString(GL_VERSION) << std::endl;
    Start();
    float lastFrame{ 0.0f };
    while (!window.close())
    {
        float currentFrame = static_cast<float>(glfwGetTime());
        Time::deltaTime = currentFrame - lastFrame;
        lastFrame = currentFrame;
        glViewport(0, 0, getWidth(), getHeight());
        glClearBufferfv(GL_COLOR, 0, &color[0]);
        glClearBufferfi(GL_DEPTH_STENCIL, 0, 1.0f, 0);
        Update();
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    End();
}
glEnable(GL_DEPTH_TEST);
As I suspected.
Just because a fragment shader doesn't write a color output doesn't mean that those fragments will not affect the depth buffer. If the fragment passes the depth test and the depth write mask is on (assuming no other state is involved), it will update the depth buffer with the current fragment's depth (and the color buffer with uninitialized values, but that's a different matter).
Since you're drawing the same geometry both times, the second rendering's fragments will get the same depth values as the corresponding fragments from the first rendering. But the default depth function is GL_LESS. Since any value is not less than itself, this means that all fragments from the second rendering fail the depth test.
And therefore, they don't get rendered.
So just turn off the depth test. And while you're at it, turn off color writes for your "writing" rendering pass, since you're not writing to the color buffers.
Now, you do properly need the memory barrier between the two draw calls. But you only need the GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, since that's how you're reading the data (via image load/store, not samplers).
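Sketched against the Update method above (writeProgram and readProgram are hypothetical stand-ins for whatever program ids the Shader wrapper holds):
glDisable(GL_DEPTH_TEST);                // don't let the first pass's depth values fail the second pass
glColorMask(false, false, false, false); // the write pass only does imageStore, so mask color writes
glUseProgram(writeProgram);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Make the imageStore results visible to the imageLoad calls of the next draw.
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
glColorMask(true, true, true, true);
glUseProgram(readProgram);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);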

GLSL Shader Draws Only Black Screen LWJGL

I am very new to shaders. I got some GLSL code to compile properly (well, without any compiler errors), but for some reason I keep getting a black screen. I am using LWJGL. The triangle I try to render never appears, even though it should be white according to the color I set in the fragment shader. I've posted snippets of the code below; hopefully there's enough to figure out what the problem is.
Fragment Shader Source Code
void main()
{
    gl_FragColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
Vertex Shader Source Code
void main()
{
    gl_Position = ftransform();
}
Shader Reader Code
vertShaderString = shaderName + ".vert";
fragShaderString = shaderName + ".frag";
shader = GL20.glCreateProgram();
vertShader = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
try{
    String temp;
    BufferedReader reader = new BufferedReader(new FileReader(new File(vertShaderString)));
    while ((temp = reader.readLine()) != null){
        vertSource.append(temp).append("\n");
    }
    reader.close();
    BufferedReader fragReader = new BufferedReader(new FileReader(new File(fragShaderString)));
    String otherTemp;
    while ((otherTemp = fragReader.readLine()) != null){
        fragSource.append(otherTemp).append("\n");
    }
    fragReader.close();
}catch (Exception e){
    e.printStackTrace();
}
Shader Setup Code
GL20.glShaderSource(vertShader, vertSource);
GL20.glCompileShader(vertShader);
if (GL20.glGetShaderi(vertShader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE){
    System.err.println("Failed to compile vertex shader");
}
GL20.glShaderSource(fragShader, fragSource);
GL20.glCompileShader(fragShader);
if (GL20.glGetShaderi(fragShader, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE){
    System.err.println("Failed to compile fragment shader");
}
GL20.glAttachShader(shader, vertShader);
GL20.glAttachShader(shader, fragShader);
GL20.glLinkProgram(shader);
GL20.glValidateProgram(shader);
Enable and Disable Shader Code
public void begin(){
    GL20.glUseProgram(shader);
}

public void end(){
    GL20.glUseProgram(0);
}
Render Method
public void render(){
    GL11.glClearColor(0, 0, 0, 1);
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
    GL11.glColor4f(1, 0, 0, 1);
    shader.begin();
    GL11.glBegin(GL11.GL_TRIANGLES);
    GL11.glVertex2i(0, 0);
    GL11.glVertex2i(500, 0);
    GL11.glVertex2i(250, 250);
    GL11.glEnd();
    shader.end();
}
The problem has been solved, thanks to jozxyqk. I had forgotten to initialize the fragment shader variable.
fragShader = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
Adding this line solved the problem.
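For reference, a sketch of the creation step with the missing call in place, plus a guard that would have caught the mistake (shader ids of 0 are never valid):
vertShader = GL20.glCreateShader(GL20.GL_VERTEX_SHADER);
fragShader = GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER); // the call that was missing
if(vertShader == 0 || fragShader == 0){
    // Without a check, glShaderSource is silently handed an invalid id of 0.
    System.err.println("glCreateShader failed");
}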

GLSL - Incorrect results when retrieving values from shadow cubemap

When using cubemaps I'm getting results in my shaders that are inconsistent with what my application code reads back.
For testing purposes I wrote a test-program that simply creates a depth cubemap texture and writes '1' to all sides of it:
unsigned int frameBuffer;
glGenFramebuffers(1, &frameBuffer);
unsigned int texture;
glGenTextures(1, &texture);

glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

unsigned int width = 512;
unsigned int height = 512;
for(unsigned int i=0;i<6;i++)
{
    glTexImage2D(
        GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,
        0,
        GL_DEPTH_COMPONENT16,
        width,height,
        0,GL_DEPTH_COMPONENT,
        GL_FLOAT,
        0
    );
    glFramebufferTexture2D(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,texture,0);
}
unsigned int status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE)
    return;

float *data = new float[width *height];
for(unsigned long long i=0;i<(width *height);i++)
    data[i] = 1.f;
for(unsigned int i=0;i<6;i++)
{
    glTexSubImage2D(
        GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,
        0,
        0,
        0,
        width,height,
        GL_DEPTH_COMPONENT,
        GL_FLOAT,
        &data[0]
    );
}
delete[] data;

// Check to see if data has been written correctly
data = new float[width *height];
for(unsigned int i=0;i<6;i++)
{
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,GL_TEXTURE_CUBE_MAP_POSITIVE_X +i,texture,0);
    glReadPixels(0,0,width,height,GL_DEPTH_COMPONENT,GL_FLOAT,&data[0]);
    for(unsigned long long j=0;j<(width *height);j++)
    {
        if(data[j] != 1.f)
            return;
    }
}
delete[] data;
// Check end
// Check end
Rendering:
float screenVerts[18] = {
    -1.f,-1.f,0.f,
     1.f,-1.f,0.f,
    -1.f, 1.f,0.f,
    -1.f, 1.f,0.f,
     1.f,-1.f,0.f,
     1.f, 1.f,0.f
};
unsigned int vertexBuffer;
glGenBuffers(1,&vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER,vertexBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(float) *18,&screenVerts[0],GL_STATIC_DRAW);
glUseProgram(shader);
glBindTexture(GL_TEXTURE_CUBE_MAP,texture);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,vertexBuffer);
glVertexAttribPointer(0,3,GL_FLOAT,GL_FALSE,0,(void*)0);
glDrawArrays(GL_TRIANGLES,0,6);
glDisableVertexAttribArray(0);
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertPos;
out vec2 UV;
void main()
{
    gl_Position = vec4(vertPos,1);
    UV = (vertPos.xy +vec2(1,1)) /2.0;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform samplerCubeShadow testShadow;
void main()
{
    color.r = texture(testShadow,vec4(0,0,1,1));
    // Just grab the value from a random direction and put it out as red color
}
(I've ported the C++ code from another language, so if you find some syntax errors in there, don't mind those, they're not in the actual code)
glGetError() does not return any errors.
glReadPixels proves that the writing process worked; however, the rendered result is a black screen. That means the texture call inside the shader returns 0, which should be impossible regardless of what I use as the direction vector.
What am I missing?
You consistently have the arguments for the glBind*() calls reversed. They all take a target as the first argument, and the object id (aka name) as the second argument. Instead of this:
glBindFramebuffer(frameBuffer,GL_FRAMEBUFFER);
glBindTexture(texture,GL_TEXTURE_CUBE_MAP);
glBindBuffer(vertexBuffer,GL_ARRAY_BUFFER);
It should be this:
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
There are multiple instances of some of these in the code, so make sure that you catch them all.
Other than that, if this is the complete code, you're not rendering to the screen. You have an FBO without a color attachment bound, so the output goes nowhere. If you want to render to the screen, you'll need to unbind the FBO:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
You're also leaving the g and b components of the fragment output undefined, since you only write to r.
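Putting those points together, the tail of the render path might look like this sketch (ordering only; in the fragment shader, also assign the sampled value to all three components of color rather than just color.r):
glBindFramebuffer(GL_FRAMEBUFFER, 0);        // back to the default framebuffer so output reaches the screen
glUseProgram(shader);
glBindTexture(GL_TEXTURE_CUBE_MAP, texture); // target first, object id second
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glDrawArrays(GL_TRIANGLES, 0, 6);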

GLSL: How do I "connect" a sampler to a texture?

I am trying to read from a 3D texture inside a geometry shader:
#version 150
layout(points) in; // origin of the cell
layout(points, max_vertices = 1) out;

uniform sampler3D text;

void main (void)
{
    for(int i = 0; i < gl_in.length(); ++i)
    {
        // texture coordinates:
        float u, v, w;
        // set u, v, and w somehow
        ...
        float value = texture(text, vec3(u, v, w)).r;
        bool show;
        // set show based on value somehow:
        ...
        if(show) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
            EndPrimitive();
        }
    }
}
This is how I set up my texture inside my GL initialization code:
int nx = 101;
int ny = 101;
int nz = 101;
float *data = new float[nx*ny*nz];
// set data[] somehow:
...

glEnable(GL_TEXTURE_3D);
glBindTexture(GL_TEXTURE_3D, texture);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_3D,
             0,        // level-of-detail number. 0 is the base image level.
             GL_RED,   // internal format
             nx, ny, nz,
             0,        // border
             GL_RED,   // pixel format
             GL_FLOAT, // data type of the pixel data
             data);
But how do I associate the sampler "text" inside the geometry shader with my texture?
So far I have not told OpenGL that there is a sampler named "text" and that it should sample my texture.
EDIT: I tried the following:
GLint textLoc = glGetUniformLocation(program, "text");
glUniform1i(textLoc, 0); // sends 0 to "text" in shader
// why do I not just hardcode 0 inside the geometry shader instead ?
glBindTexture(GL_TEXTURE_3D , texture);
glActiveTexture(GL_TEXTURE0); // same as 0 ?
GLuint sampler_state = 0;
glGenSamplers(1, &sampler_state);
glBindSampler(0, sampler_state); // what does this do?
What is wrong here?