sampler2DArray - setup and use - opengl

I'm studying OpenGL and I need to use sampler2DArray. I've been struggling with it all day, to no avail. I have two questions:
How to create a list of textures?
How to use sampler2DArray in the shader?
Here is the result of my attempts to create a list of textures:
// textures - ids of already-loaded textures
private int createTextureArray(GL2 gl, int[] textures, int width, int height) {
    int layerCount = textures.length;
    int mipLevelCount = 1;

    IntBuffer texture = IntBuffer.allocate(1);
    gl.glGenTextures(1, texture);
    gl.glActiveTexture(GL.GL_TEXTURE0);
    gl.glBindTexture(GL2.GL_TEXTURE_2D_ARRAY, texture.get(0));
    gl.glTexStorage3D(GL2.GL_TEXTURE_2D_ARRAY, mipLevelCount, GL2.GL_RGBA8, width, height, layerCount);

    for (int i = 0; i < textures.length; i++) {
        gl.glTexSubImage3D(GL2.GL_TEXTURE_2D_ARRAY, i, // error here
                0, 0, 0,
                width, height, layerCount,
                GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE,
                textures[i]);
    }

    // Always set reasonable texture parameters
    gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
    gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
    gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP_TO_EDGE);
    gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP_TO_EDGE);
    return texture.get(0);
}
Shader example:
#version 130
uniform sampler2DArray textures;
varying vec2 UV;
...
void main() {
    ...
    int layer = 0;
    gl_FragColor = texture2DArray(textures, vec3(UV, layer));
}
I will be grateful for the help.

An array texture is not a "list of textures". An array texture is a single OpenGL texture, one which has a number of quasi-independent layers in it. While you may think of each layer of an array texture as a conceptually separate texture, in OpenGL (and GLSL) it is a single object.
Given this, the interface in your function is incorrect. It should return a single texture object, and it should take as a parameter, not an array of int (note: OpenGL objects are unsigned integers), but a single integer: the number of array layers to create in that texture.
How you use an array texture in GLSL is simple. Your uniform for the sampler uses an array-texture sampler type (for example sampler2DArray for 2D array textures). You bind the array texture to the same texture image unit that you specified as the binding for the sampler uniform (just as you would for a non-array 2D texture).
Your GLSL is missing one thing: there is no texture2DArray function. The correct function to use is just texture. From GLSL 1.30 onward, the texture type is specified solely by the type of the sampler parameter, not by the name of the function.
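To make that concrete, here is a minimal C-style sketch of the interface the answer describes (the same entry points exist in JOGL as gl.gl* methods). The pixels parameter is a hypothetical array holding one pointer of RGBA8 client data per layer, standing in for however you load your images:

GLuint createTextureArray(GLsizei width, GLsizei height, GLsizei layerCount, const void **pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

    // allocate storage for all layers once
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layerCount);

    // upload each layer: the mip level stays 0, the layer index goes into zoffset,
    // and the depth of the uploaded region is 1
    for (GLsizei i = 0; i < layerCount; ++i)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                        0, 0, i,
                        width, height, 1,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);

    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

To use it, bind the returned texture to GL_TEXTURE_2D_ARRAY on the texture image unit that the sampler2DArray uniform is set to, exactly as you would for a regular 2D texture.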

In addition to what @NicolBolas already said: there are a number of problems with the shader code, mostly due to functionality that is deprecated in version 130:
There is no texture2DArray function in any standard GLSL version. One existed in the EXT_texture_array extension, but it was never integrated into core GLSL, since GLSL 130 replaced all of the specialized texture lookup functions (texture2D, texture3D, ...) with a single overloaded texture function. If you are targeting 130 without extensions, you should use texture(textures, vec3(UV, layer)).
The varying keyword is deprecated in GLSL 130 and should be replaced by in/out.
gl_FragColor is deprecated and a user-defined output (out) variable should be used instead.
You might want to have a look at Section 1.2.1 of the GLSL 130 spec, which describes the deprecations and how to handle them. In general I would encourage everyone not to use 130 at all today unless there is a specific reason for it; better to move to the OpenGL 3.3+ core profile and GLSL 330+.
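For illustration, a fragment shader with those deprecations removed might look like the sketch below, shown as a C string constant. The names follow the question, and it assumes the vertex shader is updated accordingly (writing out vec2 UV):

// Sketch: the question's fragment shader rewritten for GLSL 330 core
const char *fragSrc330 =
    "#version 330 core\n"
    "uniform sampler2DArray textures;\n"
    "in vec2 UV;\n"                   /* was: varying vec2 UV */
    "out vec4 fragColor;\n"           /* replaces gl_FragColor */
    "void main() {\n"
    "    int layer = 0;\n"
    "    fragColor = texture(textures, vec3(UV, float(layer)));\n"
    "}\n";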

Related

Drawing a monochrome 2D array in OpenGL

I need to use OpenGL for a very specific purpose. I've got an array of floats of size SIZE x SIZE (it's always square) that represents a 2D image. Drawing is just an extra here, since I've been doing it with third-party programs by outputting the array to a text file, but I would like to give the option of doing it in the program itself.
This array is constantly updated in a loop, as it represents the values of a simulated field (the details of which are quite irrelevant); the important point is that each value is a float between -1 and 1. I would like to draw this array as a 2D image, in real time, every N steps of the main loop. I tried using the pixel drawing tool of X11 (I'm doing this on Linux), drawing the array by looping over it pixel by pixel on a SIZE x SIZE window, but this was very slow and took much longer than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then paint it on a quad. Apparently, to use bare OpenGL I would have to adapt my code to work inside OpenGL's own drawing loop, which is a bit impractical, so if the same can be done with GLFW, I'm happy with it.
The image to draw is always square, and its orientation is completely irrelevant, it doesn't matter if it's drawn mirrored, upside down, transposed, etc, as it's supposed to be completely isotropic.
The main backbone of the program follows this scheme:
#include <iostream>
#include <GLFW/glfw3.h>

using namespace std;

int main(int argc, char** argv)
{
    if (GFX) // GFX is a bool, only draw stuff if it's 1 (its value doesn't change)
    {
        // Initialize GLFW
    }

    float field[2*SIZE][SIZE] = {0}; // This is the array to print (only the first SIZE * SIZE components)

    for (int i = 0; i < totalTime; i++)
    {
        for (int x = 0; x < SIZE; x++)
        {
            for (int y = 0; y < SIZE; y++)
            {
                // Each position of the array is updated here
            }
        }
        if (GFX)
        {
            // The drawing should be done here
        }
    }
    return 0;
}
I've tried some code snippets and modified some samples I've found around, but haven't been able to make it work: either they call a GL main loop that breaks my own simulation loop, or they just print a single pixel in the centre.
So my main question is how to make a texture out of the first SIZE x SIZE components of field, and then draw it on a quad.
Thanks!
The simplest approach for a beginner is to use the old fixed-function API without shaders. To make that work you simply encode your data into a 1D linear array of floats in the range <0.0,1.0>, which can be done from <-1,+1> pretty quickly on the CPU side with a single for loop like this:
for (i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);
I do not use GLUT, nor do I code for your platform, so I will stick to just the rendering:
//---------------------------------------------------------------------------
const int size=512;        // data resolution
const int size2=size*size;
float data[size2];         // your float size*size data
GLuint txrid=0;            // GL texture ID
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
{
    int i;
    // generate float data
    Randomize();
    for (i=0;i<size2;i++) data[i]=Random();
    // create texture
    glEnable(GL_TEXTURE_2D);
    glGenTextures(1,&txrid); // generate a texture name (required in core profile, harmless in compatibility)
    glBindTexture(GL_TEXTURE_2D,txrid);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glDisable(GL_TEXTURE_2D);
}
//---------------------------------------------------------------------------
void exit() // this must be called once (before GL is uninitialized)
{
    // release texture
    glDeleteTextures(1,&txrid);
}
//---------------------------------------------------------------------------
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // bind texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    // copy your actual data into it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
    // render single textured QUAD
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();
    // unbind texture (so it does not mess with other rendering)
    glBindTexture(GL_TEXTURE_2D,0);
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // ignore this, GLUT should do it on its own
}
//---------------------------------------------------------------------------
Here is a preview:
In order to make this work you need to call init() at the start of your app, after GLUT creates the GL context, and exit() at the end of your app, before GLUT closes the GL context. gl_draw() renders your data, so it must be called from the drawing event of GLUT.
In case you do not want to do the range conversion to <0,1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I have the feeling you're a beginner and shaders would simply be too much to start with. If you really want to go that way, see:
complete GL+GLSL+VAO/VBO C++ example
It also covers GL initialization without GLUT, but on Windows...
Now some notes on the program above:
GL_LUMINANCE32F_ARB texture format (extension)
It is a 32-bit floating-point texture format that is not clamped, so your data stays as it is. It should be present on all current graphics hardware. I used it to ease the transition to shaders later on, where you can operate on your raw data directly...
size
In the original GL specification the texture size had to be a power of 2 (16, 32, 64, 128, 256, 512, ...). If it is not, you need the rectangle texture extension, but that has been native in graphics hardware for years now, so usually nothing needs to change. On Linux and Mac there might still be problems with the GL implementation, so if something does not work, try a power-of-2 size just in case...
Also, do not go too crazy with the size, as graphics cards have limits; 2048 is usually a safe limit for low-end hardware. If you need more, make a mosaic of several quads/textures.
GL_CLAMP_TO_EDGE
This is also an extension (now native to hardware); with it your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2...
However, none of these are GL 1.0 features, so you may need to add the extension headers to your app (if GLUT or whatever you use does not do it already). All of them are just tokens/constants, not function calls, so if the compiler complains it should be enough to:
#include <GL/glext.h>
after gl.h is included, or to add the defines directly instead:
#define GL_CLAMP_TO_EDGE 0x812F
#define GL_LUMINANCE32F_ARB 0x8818
Btw, your code does not look like a GLUT app (but I might be wrong, as I do not use GLUT); see this for example:
simple GLUT app example
Your include suggests GLFW3, which is something entirely different from GLUT (unless it is derived from it), so maybe you should edit the tags and the question to match what you actually use.
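For completeness, here is a minimal sketch of how the drawing above could be driven from GLFW3 without giving up the question's own simulation loop (GLFW, unlike GLUT, does not take over the main loop). It assumes a default compatibility context so the immediate-mode code keeps working; totalTime, N and the field update come from the question, init(), gl_draw() and exit() are the functions above, and error handling is omitted:

#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    GLFWwindow *win = glfwCreateWindow(512, 512, "field", NULL, NULL);
    glfwMakeContextCurrent(win);
    init();                                 // create the texture once (needs a current GL context)

    for (int i = 0; i < totalTime && !glfwWindowShouldClose(win); i++)
    {
        // ... update field[][] here ...
        if (i % N == 0)                     // draw every N steps
        {
            // copy/convert field into data[] (the <-1,+1> to <0,1> remap) before drawing
            gl_draw();                      // uploads data[] and renders the textured quad
            glfwSwapBuffers(win);           // replaces SwapBuffers(hdc)
            glfwPollEvents();
        }
    }

    exit();                                 // release the texture
    glfwTerminate();
    return 0;
}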
Now the shaders:
if you generate your data in <-1,+1> range:
for (i=0;i<size2;i++) data[i]=(2.0*Random())-1.0;
And use these shaders:
Vertex:
// Vertex
#version 400 core
layout(location = 0) in vec2 pos; // position
layout(location = 8) in vec2 tex; // texture
out vec2 vpos;
out vec2 vtex;
void main()
{
    vpos=pos;
    vtex=tex;
    gl_Position=vec4(pos,0.0,1.0);
}
Fragment:
// Fragment
#version 400 core
uniform sampler2D txr;
in vec2 vpos; // position
in vec2 vtex; // texture
out vec4 col;
void main()
{
    vec4 c;
    c=texture(txr,vtex);
    c=(c+1.0)*0.5;
    col=c;
}
Then the result is the same (apart from the faster conversion on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use a VBO/VAO).
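A minimal sketch of that conversion, assuming a core-profile context and the attribute locations used by the vertex shader above (the quad is expressed as two triangles, since GL_QUADS is gone in core profile):

GLuint vao=0,vbo=0;
const float quad[]=
    {
//   x     y     u    v
    -1.0, -1.0,  0.0, 0.0,
    +1.0, -1.0,  1.0, 0.0,
    +1.0, +1.0,  1.0, 1.0,
    -1.0, -1.0,  0.0, 0.0,
    +1.0, +1.0,  1.0, 1.0,
    -1.0, +1.0,  0.0, 1.0,
    };
// one-time setup
glGenVertexArrays(1,&vao);
glGenBuffers(1,&vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER,vbo);
glBufferData(GL_ARRAY_BUFFER,sizeof(quad),quad,GL_STATIC_DRAW);
glEnableVertexAttribArray(0); // pos
glVertexAttribPointer(0,2,GL_FLOAT,GL_FALSE,4*sizeof(float),(void*)0);
glEnableVertexAttribArray(8); // tex
glVertexAttribPointer(8,2,GL_FLOAT,GL_FALSE,4*sizeof(float),(void*)(2*sizeof(float)));
// at draw time: bind the shader program and the texture, then
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES,0,6);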

Pass texture with float values to shader

I am developing an application in C++ with Qt QML 3D. I need to pass tens of float values (or vectors of float numbers) to a fragment shader. These values are the positions and colors of lights, so I need values beyond the 0.0 to 1.0 range. However, there is no support for passing an array of floats or integers to a shader in QML. My idea is to save the floats to a texture, pass the texture to the shader and read the values from it.
I tried something like this:
float array [4] = {100.5, 60.05, 63.013, 0.0};
uchar data [sizeof array];
memcpy(data, &array, sizeof array);
QImage img(data, 4, 1, QImage::Format_ARGB32);
and pass this QImage to a fragment shader as a sampler2D. But is there a method like memcpy to get the values back out of the texture in GLSL?
The texelFetch method only returns a vec4 of floats in the 0.0 to 1.0 range. So how can I get the original values back from the texture in the shader?
I would prefer to use the sampler2D type; however, is there any other type supporting direct memory access in GLSL?
This will create a float texture from a float array.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, &array);
You can access float values from your fragment shader like this:
uniform sampler2D float_texture;
...
float float_texel = texture2D(float_texture, tex_coords.xy).r;
"OpenGL float texture" is a well documented subject.
What exactly is a floating point texture?
The texture accessing functions do directly read from the texture. The problem is that Format_ARGB32 is not 32 bits per channel; it's 32 bits per pixel. From this documentation page, it seems clear that QImage cannot create floating-point images; it only deals in normalized formats. QImage itself seems to be a class intended for GUI matters, not for loading OpenGL images.
You'll have to ditch Qt image shortcuts and actually use OpenGL directly.
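If you do use OpenGL directly, a sketch of the upload might look like this (width, height and the values pointer are placeholders for however your data is laid out; GL_R32F stores one unnormalized float per texel):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// no mipmaps are uploaded, so the minification filter must not require them
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, values);

In the shader, texelFetch(float_texture, ivec2(x, y), 0).r then returns the stored float exactly as it was uploaded, with no normalization, because GL_R32F is an unnormalized floating-point format.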

Heat haze/distortion effect in OpenGL (GLSL) and how it should be achieved

You can skip to the TL;DR at the bottom for the conclusion. I preferred to provide as much information as I could, to help narrow down the question further.
I've been having an issue with a heat haze effect I've been working on.
This is the sort of effect I was thinking of, but since this is a rather generalized system it would apply to any so-called screen-space refraction:
The haze effect itself is not where my issue lies, as it is just a distortion of sampling coordinates; rather, it's with what is sampled. My first approach was to render the distortions to another render target. This method was fairly successful, but has a major downfall that's easy to foresee if you've dealt with screen-space textures before: because of the offset applied to the sampling coordinate, if an object is in front of the refractor, its edges will be taken into the refraction calculation.
As you can see, it looks fine when all the geometry is either the environment (no depth test) or behind the refractor; but with a cube closer than the refractor, there is an effect I'll call bleeding of the closer geometry.
Relevant shader code for reference:
/* transparency.frag */
layout (location = 0) out vec4 out_color; // frag color
layout (location = 1) out vec4 bright;    // used for bloom effect
layout (location = 2) out vec4 deform;    // deform buffer
[...]
void main(void) {
    [...]
    vec2 n = __sample_noise_texture_with_time__{};
    deform = vec4(n * .1, 0, 1);
    out_color = vec4(0, 0, 0, .0);
    bright = vec4(0.0, 0.0, 0.0, .9);
}

/* post_process.frag */
in vec2 texel;
uniform sampler2D screen_t;
uniform sampler2D depth_t;
uniform sampler2D bright_t;
uniform sampler2D deform_t;
[...]
void main(void) {
    [...]
    vec3 noise_sample = texture(deform_t, texel).xyz;
    vec2 texel_c = texel + noise_sample.xy;
    [sample screen and bloom with texel_c, gamma correct, output to color buffer]
}
To try to combat this, I tried a technique that involved comparing depth components. To do this, I made the transparent object write its fragment depth to the z component of my deform buffer, like so:
/* transparency.frag */
[...]
deform = vec4(n * .1, gl_FragCoord.z, 1);
[...]
and then, to determine what is in front of what, added a quick check in the post-processing shader:
[...]
float dist = texture(depth_t, texel_c).x;
float dist1 = noise_sample.z; // what I wrote to the deform buffer z
if (dist + .01 < dist1) { /* do something like draw debug */ }
[...]
This worked somewhat, but broke down as I moved away, even when I linearized the depth values and compared the distances.
EDIT 3: added better screenshots for the depth test phase
(In yellow: where it's sampling something that's in front. I couldn't be bothered to make it render the polygons as well, so I drew them in.)
(And here, demonstrating it partially failing the depth comparison test from further away.)
I also had some 'fun' with another technique where I passed the color buffer directly to the transparency shader and had it output the sample to its color output. In theory, if the scene is Z-sorted, this should produce the desired result. I'll let you be the judge of that.
(I have a few guesses as to what the patterns that emerge are, since they are similar to the rasterisation patterns of GPUs, but that's not very relevant since that 'solution' was more of a desperation effort than anything.)
TL;DR and Formal Question: I've had a go at a few techniques based on my knowledge and haven't been able to find much literature on the subject. So my question is: how do you realize such effects as heat haze/distortion (ones that do not cover the whole screen, might I add), and is there literature on the subject? For a reference to the sort of effect I'm looking for, see my Overwatch screenshot and all other similar effects in the game.
I thought I would also mention, for completeness' sake, that I'm running OpenGL 4.5 (on Windows) with most shaders being version 4.00, and am working with a custom engine.
EDIT: If you want information about the software part of the engine, feel free to ask. I didn't include any because I didn't deem it relevant; however, I'd be glad to provide specs and code snippets, as well as more shaders, on demand.
EDIT 2: I thought I'd also mention that this could be achieved by using a second render pass and a clipping plane; however, that would be costly and feels unnecessary since the viewpoint is the same. It might be that this is the only solution, but I don't believe so.
Thanks for your answers in advance!
I think the issue is that you are trying to distort something that's behind an occluding object, and that information is not available any more, because the object in front has overwritten the color value there. So you can't distort in information from a color buffer that does not exist anymore.
You are trying to solve it by depth testing and skipping the pixels that belong to an object closer to the camera than your transparent heat object, but this is causing the edge to leak into the distortion. Even if you get the edge skipped, if there were an object right behind the transparent object, occluded by the cube in front, it won't distort in, because the color information is not available.
Additional Render Pass
As you mention, an additional render pass with a clipping plane is certainly one solution to this problem.
Multiple render targets
Another, similar solution would be to use multiple render targets: render the depth of the transparent object beforehand, test for fragments that are behind it, and render those to another color buffer. Later, use this buffer for the distortion instead of the full color buffer. You could also consider deferred shading.
Here is a code snippet of how you would set up multiple render targets.
//create your fbo
GLuint fboID;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
//create the rbo for depth
GLuint rboID;
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
//create two color textures (one for distort)
GLuint colorTexture, distortcolorTexture;
glGenTextures(1, &colorTexture);
glGenTextures(1, &distortcolorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
//attach both textures
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, distortcolorTexture, 0);
//specify both the draw buffers
GLenum drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
First render the transparent object's depth. Then, in your fragment shader for the other objects:
//in a core-profile shader, declare two outputs matching the attachments:
//layout (location = 0) out vec4 out_color;      // goes to colorTexture
//layout (location = 1) out vec4 distort_color;  // goes to distortcolorTexture

//compute color with your lighting...
//write color to colorTexture
out_color = color;
//check if the fragment is behind your transparent object
if( depth >= tObjDepth )
{
    //also write it to distortcolorTexture
    distort_color = color;
}
Finally, use the distortcolortexture in your distort shader.
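Wiring that up on the application side could be as simple as binding the second color texture to its own unit for the post-process pass (the unit index 4, the uniform name distort_color_t and postProgram are hypothetical placeholders):

// bind the occluded-geometry color buffer for the distortion lookup
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glUniform1i(glGetUniformLocation(postProgram, "distort_color_t"), 4); // postProgram must be bound
// in post_process.frag, sample distort_color_t with texel_c instead of screen_t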
Depth test for a matrix of pixels instead of a single pixel.
I think the edge is leaking because you don't simply distort by one pixel but by more of a matrix of pixels. Perhaps you could also try checking the depth over that matrix (e.g. 3x3 pixels centered on the current pixel) and discarding the distortion if it fails the depth test. (Note: this still won't distort in objects behind the occluding object, which you might want distorted in.)
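One possible reading of that suggestion, shown as a sketch in a C string for post_process.frag (the names depth_t, texel, texel_c and noise_sample follow the question's shader, and the .01 bias matches the original test): drop the distortion whenever anything in a 3x3 neighbourhood of the offset coordinate is closer to the camera than the refractor.

const char *depthRejectSnippet =
    "vec2 px = 1.0 / vec2(textureSize(depth_t, 0));\n"
    "float nearest = 1.0;\n"
    "for (int y = -1; y <= 1; ++y)\n"
    "    for (int x = -1; x <= 1; ++x)\n"
    "        nearest = min(nearest, texture(depth_t, texel_c + vec2(x, y) * px).x);\n"
    "if (nearest + .01 < noise_sample.z)\n"
    "    texel_c = texel; // something is in front: fall back to the undistorted coordinate\n";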

Compute Shader write to texture

I have implemented CPU code that copies a projected texture onto a larger texture on a 3D object ('decal baking', if you will), but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it's quite difficult to add an FBO in my current setup.
Example image from my current implementation
This question is more about how to use compute shaders, but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996
The texture that is written to is called _texture in my code, whilst the projected one is _textureProj.
Simple compute shader
const char *csSrc[] = {
"#version 440\n",
"layout (binding = 0, rgba32f) uniform image2D destTex;\
layout (local_size_x = 16, local_size_y = 16) in;\
void main() {\
ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);\
imageStore(destTex, storePos, vec4(0.0,0.0,1.0,1.0));\
}"
};
As you can see, I currently only want the texture updated to some arbitrary (blue) color.
Update function
void updateTex(){
    glUseProgram(_computeShader);

    const GLint location = glGetUniformLocation(_computeShader, "destTex");
    if (location == -1){
        printf("Could not locate uniform location for texture in CS");
    }

    // bind texture
    glUniform1i(location, 0);
    glBindImageTexture(0, *_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
    // ^the second param evaluates to the texture id - of that I'm sure

    glDispatchCompute(_texture->width() / 16, _texture->height() / 16, 1);
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);

    glUseProgram(0);
    printOpenGLError(); // reports no errors
}
Problem
If I call updateTex() outside of my main program object I see zero effect, whereas if I call it within its scope, like so:
glUseProgram(_id); // vert/frag shader pipeline
updateTex();
// Pass uniforms to shader
// bind _textureProj & _texture (the latter is the one I'm trying to update)
glUseProgram(0);
Then upon rendering I see this:
QUESTION:
I realise that calling the update method within the main program object's scope is not the proper way of doing it; however, it's the only way I get any visual results. It seems to me that what happens is that it pretty much eliminates the fragment shader and draws to screen space...
What can I do to get this working properly? (My main focus is to be able to write anything to the texture and have it update.)
Please let me know if more code needs posting.
I believe in this case an FBO would be easier and faster, and would recommend that instead. But the question itself is still quite valid.
I'm surprised to see a sphere, given you're writing blue to the entire texture (minus any edge bits if the texture size is not a multiple of 16). I guess this is from code elsewhere.
Anyway, it seems your main problem is being able to write to the texture from a compute shader outside the setup code for regular rendering. I suspect this is related to how you bind your destTex image. I'm not sure what your TexUnit and activate methods do, but to bind a GL texture to an image unit, do this:
int imageUnitIndex = 0; //something unique
int uniformLocation = glGetUniformLocation(...);
glUniform1i(uniformLocation, imageUnitIndex); //program must be active
glBindImageTexture(imageUnitIndex, textureHandle, ...);
see:
https://www.opengl.org/sdk/docs/man/html/glBindImageTexture.xhtml
https://www.opengl.org/wiki/Image_Load_Store#Images_in_the_context
Lastly, as you're using image2D, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT is the barrier to use; GL_SHADER_STORAGE_BARRIER_BIT is for shader storage buffer objects.
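Putting the answer together, the update call might end up looking like the sketch below. Here textureHandle, texWidth and texHeight are placeholders for the GLuint and dimensions behind the question's _texture, and the texture is assumed to have been allocated with an RGBA32F format:

glUseProgram(_computeShader);
GLint location = glGetUniformLocation(_computeShader, "destTex");
glUniform1i(location, 0);                 // image unit 0, set while the program is bound
glBindImageTexture(0, textureHandle, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
glDispatchCompute(texWidth / 16, texHeight / 16, 1);
// writes were done via image load/store; if the result is then sampled with
// texture()/texelFetch(), GL_TEXTURE_FETCH_BARRIER_BIT is relevant as well
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);
glUseProgram(0);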

offscreen rendering opengl 4.5 multisample FBO

I'm referencing OpenGL Superbible 6 in my code.
First I simply wanted to implement object picking in my 3D scene. Eventually I decided to use framebuffer objects, and I succeeded; then I understood that I also needed to solve the problem of polygon edge aliasing, so I rewrote my code again to make use of GL_TEXTURE_2D_MULTISAMPLE.
Here is the initialization code for the framebuffer:
void window_glview::init_framebuffer()
{
    //CREATE FRAMEBUFFER OBJECT
    GLenum gl_error=glGetError();

    glGenTextures(1,&texture_id_framebuffer_color);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);

    glGenTextures(1,&texture_id_framebuffer_objectid);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_objectid);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);

    glGenTextures(1,&texture_id_framebuffer_depth);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_depth);
    glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_DEPTH_COMPONENT32,client_area.right,client_area.bottom,GL_TRUE);
    gl_error=glGetError();

    glGenFramebuffers(1,&buffer_id_framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
    gl_error=glGetError();

    glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
    glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
    glFramebufferTexture(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,texture_id_framebuffer_depth,0);

    GLenum draw_buffers[] =
    {
        GL_COLOR_ATTACHMENT0,
        GL_COLOR_ATTACHMENT1
    };
    glDrawBuffers(2,draw_buffers);

    GLenum status=glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if(status!=GL_FRAMEBUFFER_COMPLETE)
        MessageBox(0,L"Failed to create framebuffer object",0,0);

    glBindFramebuffer(GL_FRAMEBUFFER,0);
}
It's pretty similar to most of the internet listings on the same topic.
Now here is my drawing code:
void window_glview::paint()
{
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);

    //DRAW TO CUSTOM FRAMEBUFFER
    glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
    glLineWidth(1.0);
    draw_viewport();
    viewport_object_count=0;
    draw_lights();
    glLineWidth(1.5);
    for (unsigned short i=0;i<mesh_count;i++)
    {
        draw_mesh(mesh_table[i],GL_TRIANGLES,false);
    }

    //DRAW TO DEFAULT
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    //USE TEXTURE FROM FRAMEBUFFER COLOR_ATTACHMENT0
    glUseProgram(program_id_screen_render);
    glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);

    //HERE IS A QUAD DRAWING PROCESS
    glBindBuffer(GL_ARRAY_BUFFER,buffer_id_screen_quad);
    glVertexAttribPointer(0,4,GL_FLOAT,GL_FALSE,24,0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_QUADS,0,4);

    SwapBuffers(hDC);
}
The vertex shader is simple:
#version 450
layout(location=0) in vec4 _pos;
void main(void)
{
    gl_Position=_pos;
}
The fragment shader is written with the purpose of interpreting (resolving) the multisamples:
#version 450
uniform sampler2DMS screen_texture;
layout(location=0) out vec4 out_color;
void main(void)
{
    ivec2 coord=ivec2(gl_FragCoord.xy);
    vec4 result=vec4(0.0);
    int i;
    for (i=0;i<4;i++)
    {
        result=max(result,texelFetch(screen_texture,coord,i));
    }
    out_color=result;
}
I end up with a black screen. If I change out_color to something like out_color=vec4(1.0,0.0,0.0,1.0), I get a red screen.
What could be going wrong?
In my framebuffer initializer function, when I pass GL_DEPTH_COMPONENT to glTexStorage2DMultisample I get an error. I decided to pass GL_DEPTH_COMPONENT16 instead and it works. Why is that?
Would it be better to use a renderbuffer for some purpose, and if so, how can I read it back into a texture?
The texture with id texture_id_framebuffer_color, which is the texture you use for your final rendering, is not attached to the FBO while you render to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
Only one texture can be attached to a given attachment point at a time. So when you attach the second texture to GL_COLOR_ATTACHMENT0, the first one automatically gets detached.
If you want to have two attachments, they need to use different attachment points:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,texture_id_framebuffer_objectid,0);