Using GL_TEXTURE_2D_ARRAY as a draw target - c++

I've created an array of 2D textures and initialized it with glTexImage3D. Then I attached the individual layers to color attachments with glFramebufferTextureLayer. Framebuffer creation doesn't report an error, and everything seems fine until the draw call happens.
When the shader tries to write to a color attachment, the following message appears:
OpenGL Debug Output message : Source : API; Type : ERROR; Severity : HIGH;
GL_INVALID_OPERATION error generated. <location> is invalid.
The fragment shader writes to the attached layers through outputs with location qualifiers:
layout (location = 0) out vec3 WorldPosOut;
layout (location = 1) out vec3 DiffuseOut;
layout (location = 2) out vec3 NormalOut;
layout (location = 3) out vec3 TexCoordOut;
The documentation says that glFramebufferTextureLayer works just like glFramebufferTexture2D except for the layer parameter, so can I use location qualifiers with a texture array, or is there some other way?

I finally managed to bind a texture array as a color buffer. It is hard to find useful information on the topic, so here is a step-by-step walkthrough:
№1. You need to create a texture array and initialize it properly:
glGenTextures(1, &arrayBuffer);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);
// Allocate storage for every mipmap level; each level must be half the size
// of the previous one (and never smaller than 1x1)
for (int mip = 0; mip < mipLevelCount; ++mip) {
    glTexImage3D(GL_TEXTURE_2D_ARRAY, mip, internalFormat,
                 ImageWidth >> mip, ImageHeight >> mip, layerCount,
                 0, GL_RGB, GL_UNSIGNED_INT, 0);
}
// Texture parameters only need to be set once, not once per mip level
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, textureFilter);
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, textureFilter);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, mipLevelCount - 1);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
Keep in mind that setting texture parameters like the MIN/MAG filters and the BASE/MAX mipmap levels is important. By default OpenGL sets GL_TEXTURE_MAX_LEVEL to 1000, and if you don't provide all the mipmap levels it expects (or limit the range), you get an incomplete texture and won't see anything except a black screen.
№2. Don't forget to bind arrayBuffer to the GL_TEXTURE_2D_ARRAY target before attaching the layers to the color buffers:
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayBuffer);
for (unsigned int i = 0; i < NUMBER_OF_TEXTURES; i++) {
glFramebufferTextureLayer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, arrayBuffer, 0, i);
}
Don't forget to unbind the GL_TEXTURE_2D_ARRAY target (bind 0) with glBindTexture when you are done, or the texture can get modified accidentally outside of the initialization code.
№3. Since the internalFormat of every image in the array must be the same, I recommend creating a separate texture for the depth/stencil buffer:
glGenTextures(1, &m_depthTexture);
...
glBindTexture(GL_TEXTURE_2D, m_depthTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8, WindowWidth,
WindowHeight, 0, GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
GL_TEXTURE_2D, m_depthTexture, 0);
Don't forget to set up the draw buffer index for each color attachment:
for (int i = 0; i < GBUFFER_NUM_TEXTURES; ++i)
DrawBuffers[i] = GL_COLOR_ATTACHMENT0 + i; //Sets appropriate indices for each color buffer
glDrawBuffers(ARRAY_SIZE_IN_ELEMENTS(DrawBuffers), DrawBuffers);
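It is also worth verifying completeness once all attachments and draw buffers are set up; a minimal check (assuming the FBO is still bound to GL_DRAW_FRAMEBUFFER) could look like this:
GLenum status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "Framebuffer incomplete: 0x%x\n", status);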
In shaders you can use layout(location = n) qualifiers to specify the color buffer.
OpenGL 3 Note (NVIDIA): glFramebufferTextureLayer is available since OpenGL 3.2 (Core profile), but on NVIDIA GPUs the drivers will hand you the highest version they support (4.5 in my case) by default, so you should request the exact OpenGL version you target if you care about compatibility. I use SDL2 in my application, so I use the following calls to set the OpenGL version:
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
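If you want the kind of debug output shown in the question, you can additionally request a debug context (this flag exists in SDL 2.x; a debug context is optional, but it makes GL_KHR_debug messages much more likely to appear):
SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG);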
Results of the deferred shading:

Related

OpenGL, render to texture with floating point color without clipping value

I am not really sure what the English name for what I am trying to do is; please tell me if you know.
In order to run some physically based lighting calculations, I need to write floating point data to a texture using one OpenGL shader and read this data back in another OpenGL shader, but the data I want to store may be less than 0 or above 1.
To do this, I set up a framebuffer that renders to this texture as follows (this is C++):
//Set up the light map we will use for lighting calculation
glGenFramebuffers(1, &light_Framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, light_Framebuffer);
glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);//Needed for light blending (true additive)
glGenTextures(1, &light_texture);
glBindTexture(GL_TEXTURE_2D, light_texture);
//Initialize empty, and at the size of the internal screen
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_FLOAT, 0);
//No interpolation, I want pixelation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
//Now the light framebuffer renders to the texture we will use to calculate dynamic lighting
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, light_texture, 0);
GLenum DrawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, DrawBuffers);//Color attachment 0 as before
Notice that I use the type GL_FLOAT and not GL_UNSIGNED_BYTE; according to this discussion, a floating point texture should not be clamped to the range 0 to 1.
Now, just to test that this is true, I simply set the color somewhere outside this range in the fragment shader which creates this texture:
#version 400 core
out vec4 FragColor; // gl_FragColor is not available in a core profile, so declare an output
void main()
{
    FragColor = vec4(2.0, -2.0, 2.0, 2.0);
}
After rendering to this texture, I send this texture to the program which should use it like any other texture:
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, light_texture );//This is the texture I rendered too
glUniform1i(surf_lightTex_ID , 1);//This is the ID in the main display program
Again, just to check that this is working, I have replaced the fragment shader with one that tests whether the colors have been saved.
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv; // passed from the vertex shader
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r > 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g < 0.0)
        color.g = 1;
}
If this worked, the screen should turn yellow, but needless to say this only gives me a black screen. So I tried the following:
#version 400 core
uniform sampler2D lightSampler;
in vec2 fragment_pos_uv;
out vec4 color;
void main()
{
    color = vec4(0, 0, 0, 1);
    if (texture(lightSampler, fragment_pos_uv).r == 1.0)
        color.r = 1;
    if (texture(lightSampler, fragment_pos_uv).g == 0.0)
        color.g = 1;
}
And I got this:
The parts which are green are in shadow in the test scene; never mind them. The main point is that all the channels of light_texture get clamped to between 0 and 1, which they should not be. I am not sure whether the data is saved correctly and only clamped when I read it, or whether it is clamped to the 0 to 1 range when saving.
So, my question is: is there some way to read from and write to an OpenGL texture such that the stored data may be above 1 or below 0?
Also, no, I cannot work around the problem by using a 32-bit integer per channel and applying a sigmoid function before saving and its inverse after reading the data; that would break alpha blending.
The format and type arguments of glTexImage2D only specify the format of the source image data; they do not affect the internal format of the texture. You must use a sized floating point internal format, e.g. GL_RGBA32F:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
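If you want to double-check that the texture really ended up with a floating point internal format, you can query it back after creation (a small sketch, reusing light_texture from the question):
GLint internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, light_texture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
printf("internal format: 0x%x\n", internalFormat); // should report GL_RGBA32F, not GL_RGBA8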

QOpenGLWidget optimal texture buffer allocation

My question is possibly not related to Qt and/or QOpenGLWidget itself, but rather to OpenGL buffers in general. Anyway, I'm trying to implement a cross-platform renderer of YUV video frames, which requires converting YUV to RGB and then rendering the result on a widget.
So far, I succeeded in the following:
Found two suitable shaders (1 fragment & 1 vertex) to perform the YUV to RGB conversion (our project only supports Qt 5.6 so far, so there is no better way for me to do it)
Used QOpenGLWidget to obtain a properly-behaving widget
Did my best with QOpenGLTexture to make the drawing
Here is my very sketchy code, which displays video frames from a raw YUV file; most of the job is done by the GPU. I would be happy with it if it were not for the trouble of buffer allocations. The point is, frames are received from some legacy code, which provides me with custom wrappers around something like unsigned char *data, and that is why I have to copy it like this:
//-----------------------------------------
GLvoid* mBufYuv; // buffer somewhere
int mFrameSize;
//-------------------------
void OpenGLDisplay::DisplayVideoFrame(unsigned char *data, int frameWidth, int frameHeight)
{
impl->mVideoW = frameWidth;
impl->mVideoH = frameHeight;
memcpy(impl->mBufYuv, data, impl->mFrameSize);
update();
}
While testing the concept, frame and buffer sizes were hardcoded like:
// Called from the outside, assuming video frame height/width are constant
void OpenGLDisplay::InitDrawBuffer(unsigned bsize)
{
impl->mFrameSize = bsize;
impl->mBufYuv = new unsigned char[bsize];
}
Qt texture classes served well for the purpose, so...
// Create y, u, v texture objects respectively
impl->mTextureY = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureU = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureV = new QOpenGLTexture(QOpenGLTexture::Target2D);
impl->mTextureY->create();
impl->mTextureU->create();
impl->mTextureV->create();
// Get the texture index value of the returned y component
impl->id_y = impl->mTextureY->textureId();
// Get the texture index value of the returned u component
impl->id_u = impl->mTextureU->textureId();
// Get the texture index value of the returned v component
impl->id_v = impl->mTextureV->textureId();
And finally, the rendering itself looks like this:
void OpenGLDisplay::paintGL()
{
// Load y data texture
// Activate the texture unit GL_TEXTURE0
glActiveTexture(GL_TEXTURE0);
// Bind the texture object for the y plane
glBindTexture(GL_TEXTURE_2D, impl->id_y);
// Upload the y plane from mBufYuv into the texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW, impl->mVideoH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, impl->mBufYuv);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load u data texture
glActiveTexture(GL_TEXTURE1);//Activate texture unit GL_TEXTURE1
glBindTexture(GL_TEXTURE_2D, impl->id_u);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW/2, impl->mVideoH/2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Load v data texture
glActiveTexture(GL_TEXTURE2);//Activate texture unit GL_TEXTURE2
glBindTexture(GL_TEXTURE_2D, impl->id_v);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW / 2, impl->mVideoH / 2
, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, (char*)impl->mBufYuv + impl->mVideoW * impl->mVideoH * 5/4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// Tell each sampler which texture unit to use. The value must be the unit index
// (0, 1, 2, ...), not the GL_TEXTUREn enum:
// 0 corresponds to GL_TEXTURE0, 1 to GL_TEXTURE1, 2 to GL_TEXTURE2
glUniform1i(impl->textureUniformY, 0);
// Specify the u texture to use the new value
glUniform1i(impl->textureUniformU, 1);
// Specify v texture to use the new value
glUniform1i(impl->textureUniformV, 2);
// Use the vertex array way to draw graphics
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
As I've mentioned above, it works fine, but it's only a demo sketch; the goal is to implement a generic video renderer, which means the aspect ratio, resolution and frame size may change dynamically. Thus, the buffer (GLvoid* mBufYuv in the code above) has to be reallocated and, even worse, I'll have to memcpy data into it 25 times per second. That definitely looks like something that wouldn't be fast enough for Full HD video, for example.
Of course, several trivial optimizations are possible to reduce the amount of data copying, but Google told me that there are ways to allocate buffers in OpenGL directly, those PBO (pixel buffer object) things and QOpenGLBuffer at least.
Now, there is a problem: I'm quite confused by all the many ways to handle textures, and I know neither the optimal one nor the one that best fits my case.
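To be concrete, here is roughly what I understand the QOpenGLBuffer/PBO route would look like (just an untested sketch; the buffer and size names are made up):
// Created once, e.g. in initializeGL()
QOpenGLBuffer pbo(QOpenGLBuffer::PixelUnpackBuffer);
pbo.create();
pbo.bind();
pbo.allocate(frameSize); // frameSize = w * h * 3 / 2 for YUV420
pbo.release();
// Per frame: copy the incoming data into the mapped PBO...
pbo.bind();
if (void *dst = pbo.map(QOpenGLBuffer::WriteOnly)) {
    memcpy(dst, data, frameSize);
    pbo.unmap();
}
// ...then upload from the PBO: while it is bound, the last argument of
// glTexImage2D is an offset into the buffer, not a CPU pointer
glBindTexture(GL_TEXTURE_2D, impl->id_y);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, impl->mVideoW, impl->mVideoH, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, reinterpret_cast<void*>(0));
pbo.release();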
Any piece of advice is appreciated.

Writing to an empty 3D texture in a Compute Shader

I am attempting to create an empty 3D texture, the dimensions and format of which are loaded in at runtime. I then want to modify the values of this texture, which I am then volume rendering with a ray tracer. I know my rendering function works fine, as I can render the volume that the dimensions and format come from without a problem. The empty volume also renders, but I am unable to write any data to it, so it is just white all the way through.
//My function that creates the blank texture initially
//Its part of a larger class that reads in a populated volume and a transfer function,
//I'm just initialising it in this way so it is identical to the other volume, but empty
GLuint Texture3D::GenerateBlankTexture(VolumeDataset volume)
{
GLuint tex;
glEnable(GL_TEXTURE_3D);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Reverses endianness in copy
if (!volume.littleEndian)
glPixelStoref(GL_UNPACK_SWAP_BYTES, true);
if (volume.elementType == "MET_UCHAR")
{
// target, mip level, internal format, width, height, depth, border, pixel format, data type, data
glTexImage3D(GL_TEXTURE_3D, 0, GL_R8, volume.xRes, volume.yRes, volume.zRes, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
glBindImageTexture(0, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R8);
}
else if (volume.elementType == "SHORT")
{
glTexImage3D(GL_TEXTURE_3D, 0, GL_R16F, volume.xRes, volume.yRes, volume.zRes, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
glBindImageTexture(0, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R16F);
}
else if (volume.elementType == "FLOAT")
{
glTexImage3D(GL_TEXTURE_3D, 0, GL_R32F, volume.xRes, volume.yRes, volume.zRes, 0, GL_RED, GL_FLOAT, NULL);
glBindImageTexture(0, tex, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R32F);
}
glPixelStoref(GL_UNPACK_SWAP_BYTES, false);
GLenum err = glGetError();
glBindTexture(GL_TEXTURE_3D, 0);
return tex;
}
With the volume created, I then read it into a compute shader in my display function:
glUseProgram(Compute3DShaderID);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_3D, tex_output);
glDispatchCompute((GLuint)volume.xRes/4, (GLuint)volume.yRes/4, (GLuint)volume.zRes/4);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
Within my shader, all I'm trying to do is change the colour based on its position in the volume:
#version 430
layout (local_size_x = 4, local_size_y = 4, local_size_z = 4) in;
layout (r8, binding = 0) uniform image3D tex_output;
void main()
{
ivec3 dims = imageSize (tex_output);
ivec3 pixel_coords = ivec3(gl_GlobalInvocationID.xyz);
vec4 pixel = vec4(pixel_coords.x/dims.x, pixel_coords.y/dims.y, pixel_coords.y/dims.y, 1.0);
imageStore (tex_output, pixel_coords, pixel);
}
I'm sure the error is something to do with write access being denied, but I can't pinpoint exactly what it is.
Note: I'm using GL_RED and such because this is volume data, and this is how I have it in the rest of my volume renderer and it seems to work fine.
So, stupid mistake. Turns out my shaders were working fine. What I hadn't anticipated was that the values I was attempting to write to the volume mapped to a white colour on my transfer function. Once I pulled up the schematic for the transfer function, and tested with values that should work fine, I got actual colours.
For anyone seeing this question in the future: if your code isn't working, it should be as follows:
Create your texture and allocate its storage with glTexImage3D. Then, when you wish to write to it, call glBindImageTexture and dispatch/draw, making sure you set layered to GL_TRUE since it's a 3D texture. Also make sure that you bind to the correct binding point (in my code above I bind to 0, but I've since added a second texture that's bound to 1) and unbind if you're going to use a second set of textures and shaders.
If you're having trouble, set it up so that every compute dispatch adds 0.01 to the stored value; that way you can see the colour change in real time.
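Putting those steps together, the host-side write pass might look roughly like this (a sketch based on the code above; tex_output is the texture returned by GenerateBlankTexture):
glUseProgram(Compute3DShaderID);
// Bind the whole 3D texture (layered = GL_TRUE) to image unit 0 for writing
glBindImageTexture(0, tex_output, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R8);
// One invocation per voxel, 4x4x4 local size in the shader
glDispatchCompute(volume.xRes / 4, volume.yRes / 4, volume.zRes / 4);
// Make the image writes visible to the ray tracer's later texture fetches
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);
// Unbind before switching to the next set of textures and shaders
glBindImageTexture(0, 0, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R8);
glUseProgram(0);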

GL_INVALID_OPERATION when attempting to sample cubemap texture

I'm working on shadow casting using this lovely tutorial. The process is: we render the scene to a framebuffer that has a cubemap attached to hold the depth values, then we pass this cubemap to a fragment shader, which samples it and gets the depth values from there.
I deviated slightly from the tutorial in that instead of using a geometry shader to render the entire cubemap at once, I render the scene six times to get the same effect, largely because my current shader system doesn't support geometry shaders and for now I'm not too concerned about the performance hit.
The depth cubemap is being drawn to just fine; here's a screenshot from gDEBugger:
Everything seems to be in order here.
However, I'm having issues in my fragment shader when I attempt to sample this cubemap. After the call to glDrawArrays, a call to glGetError returns GL_INVALID_OPERATION, and as best as I can tell, it's coming from here: (The offending line has been commented)
struct PointLight
{
vec3 Position;
float ConstantRolloff;
float LinearRolloff;
float QuadraticRolloff;
vec4 Color;
samplerCube DepthMap;
float FarPlane;
};
uniform PointLight PointLights[NUM_POINT_LIGHTS];
[...]
float CalculateShadow(int lindex)
{
// Calculate vector between fragment and light
vec3 fragToLight = FragPos - PointLights[lindex].Position;
// Sample from the depth map (Comment this out and everything works fine!)
float closestDepth = texture(PointLights[lindex].DepthMap, vec3(1.0, 1.0, 1.0)).r;
// Transform to original value
closestDepth *= PointLights[lindex].FarPlane;
// Get current depth
float currDepth = length(fragToLight);
// Test for shadow
float bias = 0.05;
float shadow = currDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
Commenting out the aforementioned line seems to make everything work fine - so I'm assuming it's the call to the texture sampler that's causing issues. I saw that this can be attributed to using two textures of different types in the same texture unit - but according to gDEBugger this isn't the case:
Texture 16 is the depth cube map.
In case it's relevant, here's how I'm setting up the FBO: (called only once)
// Generate frame buffer
glGenFramebuffers(1, &depthMapFrameBuffer);
// Generate depth maps
glGenTextures(1, &depthMap);
// Set up textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthMap);
for (int i = 0; i < 6; ++i)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT,
ShadowmapSize, ShadowmapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// Set texture parameters
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
// Attach cubemap to FBO
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
ERROR_LOG("PointLight created an incomplete frame buffer!\n");
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here's how I'm drawing with it: (called every frame)
// Set up viewport
glViewport(0, 0, ShadowmapSize, ShadowmapSize);
// Bind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, depthMapFrameBuffer);
// Clear depth buffer
glClear(GL_DEPTH_BUFFER_BIT);
// Render scene
for(int i = 0; i < 6; ++i)
{
sh->SetUniform("ShadowMatrix", lightSpaceTransforms[i]);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, depthMap, 0);
Space()->Get<Renderer>()->RenderScene(sh);
}
// Unbind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, 0);
And here's how I'm binding it before drawing:
std::stringstream ssD;
ssD << "PointLights[" << i << "].DepthMap";
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap()); // just returns the ID of the light's depth map
shader->SetUniform(ssD.str().c_str(), i + 4); // just a wrapper around glSetUniform1i
Thank you for reading, and please let me know if I can supply more information!
It is an old post, but I think it may be useful for other people who find it through search.
Your problem is here:
glActiveTexture(GL_TEXTURE4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
This replacement should fix the problem:
glActiveTexture(GL_TEXTURE4 + i);
// glGetUniformLocation takes the program object, and the sampler uniform must be
// set to the texture unit index (4 + i), not the GL_TEXTURE4 + i enum value
glUniform1i(glGetUniformLocation(programId, "cubeMapUniformName"), 4 + i);
glBindTexture(GL_TEXTURE_CUBE_MAP, pointlights[i]->DepthMap());
This sets the texture unit number for the shader sampler; note that the value passed to glUniform1i must be the unit index (4 + i here), not the GL_TEXTUREn constant.

OpenGL array textures not rendering at all

I'm currently working to convert a project from using a texture atlas to an array texture, but for the life of me I can't get it working.
Some notes about my environment:
I'm using OpenGL 3.3 core context with GLSL version 3.30
The textures are all 128x128 and rendered perfectly fine when using an atlas (barring the edge artifacts which convinced me to switch)
Problems I believe I've ruled out:
Resolution issues - 128x128, being a power of two, should be fine
Texture loading (it works perfectly as it did before)
Incomplete textures (mipmap issues) - I've gone through the common issues regarding mipmaps and I don't believe OpenGL should be expecting them
Here's my code for creating the array texture:
public void createTextureArray() {
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
int handle = glGenTextures();
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, handle);
glPixelStorei(GL_UNPACK_ROW_LENGTH, Texture.SIZE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, Texture.SIZE, Texture.SIZE, textures.size());
try {
int layer = 0;
for (Texture tex : textures.values()) {
// Next few lines are just for loading the texture. They've been ruled out as the issue.
PNGDecoder decoder = new PNGDecoder(ImageHelper.asInputStream(tex.getImage()));
ByteBuffer buffer = BufferUtils.createByteBuffer(decoder.getWidth() * decoder.getHeight() * 4);
decoder.decode(buffer, decoder.getWidth() * 4, PNGDecoder.Format.RGBA);
buffer.flip();
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, decoder.getWidth(), decoder.getHeight(), 1,
GL_RGBA, GL_UNSIGNED_BYTE, buffer);
tex.setLayer(layer);
layer++;
}
} catch (IOException ex) {
ex.printStackTrace();
System.err.println("Failed to create/load texture array");
System.exit(-1);
}
}
The code for creating the VAO/VBO:
private static int prepareVbo(int handle, FloatBuffer vbo) {
IntBuffer vaoHandle = BufferUtils.createIntBuffer(1);
glGenVertexArrays(vaoHandle);
glBindVertexArray(vaoHandle.get());
glBindBuffer(GL_ARRAY_BUFFER, handle);
glBufferData(GL_ARRAY_BUFFER, vbo, GL_STATIC_DRAW);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, GraphicsMain.TEXTURE_REGISTRY.atlasHandle);
glEnableVertexAttribArray(positionAttrIndex);
glEnableVertexAttribArray(texCoordAttrIndex);
glVertexAttribPointer(positionAttrIndex, 3, GL_FLOAT, false, 24, 0);
glVertexAttribPointer(texCoordAttrIndex, 3, GL_FLOAT, false, 24, 12);
glBindVertexArray(0);
vaoHandle.rewind();
return vaoHandle.get();
}
Fragment shader:
#version 330 core
uniform sampler2DArray texArray;
varying vec3 texCoord;
void main() {
gl_FragColor = texture(texArray, texCoord);
}
(texCoord is working fine; it's being passed from the vertex shader correctly.)
I'm about out of ideas, so being still somewhat new to modern OpenGL, I'd like to know if there's anything glaringly wrong with my code.
Some considerations:
you no longer need power of two textures
be sure that every layer has the same number of levels/mipmaps, as the wiki says
the first four glTexParameteri calls affect whatever is bound to GL_TEXTURE_2D_ARRAY at that moment, so you want to move them after glBindTexture (see the sketch after this list)
how do you specify how many textures you want to create with glGenTextures()? If a more specific overload is available, please use it
GL_UNPACK_ROW_LENGTH, if greater than 0, defines the number of pixels in a row. I suppose then that Texture.SIZE is not really the texture size but the dimension of one side (128 in your case). Anyway, you don't need to set it; you can skip it
set GL_UNPACK_ALIGNMENT to 4 only if your row length is a multiple of it. Most of the time people set it to 1 before loading a texture to avoid any trouble and then set it back to 4 once done
the last argument of glTexStorage3D is expected to be the number of layers; make sure textures.size() returns that rather than the texture dimension (128x128)
the glActiveTexture and glBindTexture calls inside prepareVbo are useless; texture bindings are not part of the VAO state
don't use varying in GLSL, it's deprecated; switch to plain in/out variables
you may want to take inspiration from this sample
use sampler objects, they give you more flexibility
use Debug Output if available, otherwise glGetError(); some silent errors may not show up in the rendered output
you called it prepareVbo, but you initialize both the VAO and the VBO in it
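To illustrate the ordering points above (especially the glBindTexture/glTexParameteri one), here is a minimal sketch of the array texture setup in plain C/GL calls, simplified to one mip level and with hypothetical layerCount/pixelData names:
GLuint handle = 0;
glGenTextures(1, &handle); // create exactly one texture name
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, handle); // bind first...
// ...then set parameters, so they affect this texture and not whatever was bound before
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 1 always works for tightly packed rows
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 128, 128, layerCount); // last argument is the layer count
for (int layer = 0; layer < layerCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer, 128, 128, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixelData[layer]);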