I need to use OpenGL for a very specific purpose. I've got a 2D array of floats of size [SIZE][SIZE] (it's always square) that represents a 2D image. Drawing is just an extra here, since so far I've been doing it with third-party programs by outputting the array to a text file, but I would like to offer the option of doing it in the program itself.
This array is constantly updated in a loop, as it represents the values of a simulated field, the details of which are quite irrelevant; the important point is that each value is a float between -1 and 1. Now, I would like to draw this array as a 2D image (in real time), every N steps of the main loop. I tried using the pixel drawing tool of X11 (I'm doing this on Linux), drawing the array by looping over it pixel by pixel on a SIZE x SIZE window, but this was very slow and took much longer than the simulation itself. I've been looking into OpenGL, and from what I've read the ideal solution would be to reinterpret my array as a 2D texture and then draw it on a quad. Apparently, to use bare OpenGL I would have to adapt my code to work inside OpenGL's main drawing loop, which is a bit impractical, so if the same can be done in GLFW, I'm happy with it.
The image to draw is always square, and its orientation is completely irrelevant: it doesn't matter if it's drawn mirrored, upside down, transposed, etc., as it's supposed to be completely isotropic.
The main backbone of the program follows this scheme:
#include <iostream>
#include <GLFW/glfw3.h>
using namespace std;

int main(int argc, char** argv)
{
    if (GFX) // GFX is a bool; only draw stuff if it's true (its value doesn't change)
    {
        // Initialize GLFW
    }
    float field[2*SIZE][SIZE] = {0}; // this is the array to draw (only the first SIZE * SIZE components)
    for (int i = 0; i < totalTime; i++)
    {
        for (int x = 0; x < SIZE; x++)
        {
            for (int y = 0; y < SIZE; y++)
            {
                // each position of the array is updated here
            }
        }
        if (GFX)
        {
            // the drawing should be done here
        }
    }
    return 0;
}
I've tried some code snippets and modified some other samples I've found around, but haven't been able to make it work: either they rely on a GL main loop that breaks my own simulation loop, or they just draw a single pixel in the centre.
So my main question is how to make a texture out of the first SIZE x SIZE components of field, and then draw it on a quad.
Thanks!
The simplest approach for a rookie is to use the old fixed-function API without shaders. To make that work, you simply encode your data into a 1D linear array of floats in the range <0.0,1.0>, which can be done from <-1,+1> pretty fast on the CPU side with a single for loop like this:
for (int i=0;i<size*size;i++) data[i]=0.5*(data[i]+1.0);
I do not use GLUT, nor do I code for your platform, so I will stick just to the rendering:
//---------------------------------------------------------------------------
#include <GL/gl.h> // see the notes below for the extension tokens (glext.h)
#include <cstdlib> // rand, srand
#include <ctime>   // time

const int size=512;     // data resolution
const int size2=size*size;
float data[size2];      // your float size*size data
GLuint txrid=0;         // GL texture ID (0 = not created yet)
//---------------------------------------------------------------------------
void init() // this must be called once (after GL is initialized)
{
    int i;
    // generate random float data in <0,1> (any RNG will do)
    srand((unsigned)time(NULL));
    for (i=0;i<size2;i++) data[i]=float(rand())/float(RAND_MAX);
    // create texture: generate the texture name first, then configure it
    glGenTextures(1,&txrid);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    glPixelStorei(GL_UNPACK_ALIGNMENT,4);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV,GL_TEXTURE_ENV_MODE,GL_MODULATE);
    glDisable(GL_TEXTURE_2D);
}
//---------------------------------------------------------------------------
void gl_exit() // this must be called once (before GL is uninitialized)
{
    // release texture
    glDeleteTextures(1,&txrid);
}
//---------------------------------------------------------------------------
void gl_draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // bind texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D,txrid);
    // copy your actual data into it
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE32F_ARB, size, size, 0, GL_LUMINANCE, GL_FLOAT, data);
    // render a single textured quad
    glColor3f(1.0,1.0,1.0);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0,0.0); glVertex2f(-1.0,-1.0);
    glTexCoord2f(1.0,0.0); glVertex2f(+1.0,-1.0);
    glTexCoord2f(1.0,1.0); glVertex2f(+1.0,+1.0);
    glTexCoord2f(0.0,1.0); glVertex2f(-1.0,+1.0);
    glEnd();
    // unbind texture (so it does not mess with other rendering)
    glBindTexture(GL_TEXTURE_2D,0);
    glDisable(GL_TEXTURE_2D);
    glFlush();
    SwapBuffers(hdc); // Windows-specific; GLUT/GLFW swap buffers on their own
}
//---------------------------------------------------------------------------
Here is a preview of the rendered random data (screenshot not included).
To make this work you need to call init() at the start of your app, after your toolkit creates the GL context, and gl_exit() at the app's end, before the GL context is destroyed. gl_draw() renders your data, so with GLUT it must be called in the drawing event.
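Since your skeleton uses GLFW rather than GLUT, the same functions can be driven from your own simulation loop instead of a toolkit callback. A minimal sketch (my assumption of how it plugs into your main(); SIZE is your constant, error checking omitted):

#include <GLFW/glfw3.h>

GLFWwindow* win = NULL;

void gfx_init() // call once before the simulation loop
{
    glfwInit();
    win = glfwCreateWindow(SIZE, SIZE, "field", NULL, NULL);
    glfwMakeContextCurrent(win);
    init(); // texture setup from above
}

void gfx_step() // call every N iterations of the simulation loop
{
    gl_draw();            // renders the textured quad
    glfwSwapBuffers(win); // takes the place of the SwapBuffers(hdc) call
    glfwPollEvents();     // keeps the window responsive
}

void gfx_exit() // call once after the loop
{
    gl_exit();
    glfwDestroyWindow(win);
    glfwTerminate();
}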
In case you do not want to do the range conversion to <0,1> on the CPU side, you can move it to shaders (a very simple vertex and fragment shader), but I get the feeling you're a rookie, and shaders would simply be too much to start with. If you really want to go that way, see:
complete GL+GLSL+VAO/VBO C++ example
It also covers GL initialization without GLUT, but on Windows ...
Now some notes on the program above:
I used the GL_LUMINANCE32F_ARB texture format extension
It's a 32-bit floating-point texture format that is not clamped, so your data stays as is. It should be present on all modern graphics hardware. I did this to ease the later transition to shaders, where you can operate on your raw data directly ...
size
In the original GL specification the texture size should be a power of 2, i.e. 16, 32, 64, 128, 256, 512, ... If it is not, you need the rectangle texture extension, but that has been native in graphics hardware for years now, so nothing needs to change. On Linux and Mac there might be problems with the GL implementation, though, so if something does not work, try a power-of-2 size (just in case) ...
Also do not go too crazy with size, as graphics cards have limits; 2048 is usually a safe limit for low-end hardware. If you need more, do a mosaic of several quads/textures.
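If in doubt, you can query your GPU's actual limit at runtime instead of guessing:

GLint max_size = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_size); // e.g. 16384 on recent GPUs
// any texture wider/taller than max_size will fail to be created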
GL_CLAMP_TO_EDGE
This is also an extension (now native to hardware); with it your texture coordinates go from 0 to 1 instead of from 0+pixel/2 to 1-pixel/2 ...
However, none of these are GL 1.0 features, so you need to add extensions to your app (if GLUT, or whatever you use, does not do it already). All of these are just tokens/constants, not function calls, so if the compiler complains it should be enough to:
#include <gl\glext.h>
after gl.h is included, or to add the defines directly instead:
#define GL_CLAMP_TO_EDGE 0x812F
#define GL_LUMINANCE32F_ARB 0x8818
By the way, your code does not look like a GLUT app (but I might be wrong, as I do not use it); see this for example:
simple GLUT app example
Your header suggests GLFW3, which is something entirely different from GLUT (unless it's derived from it), so maybe you should edit the tags and OP to match what you really have/use.
Now the shaders:
if you generate your data in the <-1,+1> range:
for (int i=0;i<size2;i++) data[i]=2.0*float(rand())/float(RAND_MAX)-1.0;
And use these shaders:
Vertex:
// Vertex
#version 400 core
layout(location = 0) in vec2 pos; // position
layout(location = 8) in vec2 tex; // texture
out vec2 vpos;
out vec2 vtex;
void main()
{
    vpos=pos;
    vtex=tex;
    gl_Position=vec4(pos,0.0,1.0);
}
Fragment:
// Fragment
#version 400 core
uniform sampler2D txr;
in vec2 vpos; // position
in vec2 vtex; // texture
out vec4 col;
void main()
{
    vec4 c;
    c=texture(txr,vtex);
    c=(c+1.0)*0.5;
    col=c;
}
Then the result is the same (apart from the conversion being faster on the GPU side). However, you need to convert the GL_QUADS into a VAO/VBO (unless an nVidia card is used, but even then you definitely should use a VBO/VAO).
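For completeness, here is a minimal sketch of the same fullscreen quad as a VAO/VBO (assuming a GL 3+ context with an extension loader already set up); the attribute locations 0 and 8 match the vertex shader above:

// interleaved x,y,u,v for the fullscreen quad
const float quad[] = {
    -1.0f,-1.0f, 0.0f,0.0f,
    +1.0f,-1.0f, 1.0f,0.0f,
    +1.0f,+1.0f, 1.0f,1.0f,
    -1.0f,+1.0f, 0.0f,1.0f };
GLuint vao, vbo;

void init_quad() // once, after GL context creation
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4*sizeof(float), (void*)0);                 // pos
    glVertexAttribPointer(8, 2, GL_FLOAT, GL_FALSE, 4*sizeof(float), (void*)(2*sizeof(float))); // tex
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(8);
}

// draw with the shader program bound:
// glBindVertexArray(vao); glDrawArrays(GL_TRIANGLE_FAN, 0, 4);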
Related
I'm studying OpenGL and I had to use sampler2DArray. I have been in torment all day long, all to no avail. I have two questions:
How to create a list of textures?
How to use sampler2DArray in the shader?
Here is the result of my attempts to create a list of textures:
// textures - ids of the already-loaded textures
private int createTextureArray(GL2 gl, int[] textures, int width, int height) {
int layerCount = textures.length;
int mipLevelCount = 1;
IntBuffer texture = IntBuffer.allocate(1);
gl.glGenTextures(1, texture);
gl.glActiveTexture(GL.GL_TEXTURE0);
gl.glBindTexture(GL2.GL_TEXTURE_2D_ARRAY, texture.get(0));
gl.glTexStorage3D(GL2.GL_TEXTURE_2D_ARRAY, mipLevelCount, GL2.GL_RGBA8, width, height, layerCount);
for (int i = 0; i<textures.length; i++) {
gl.glTexSubImage3D(GL2.GL_TEXTURE_2D_ARRAY, i, // error here
0, 0, 0,
width, height, layerCount,
GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE,
textures[i]);
}
// Always set reasonable texture parameters
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_S, GL2.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL2.GL_TEXTURE_2D_ARRAY, GL2.GL_TEXTURE_WRAP_T, GL2.GL_CLAMP_TO_EDGE);
return texture.get(0);
}
Shader example:
#version 130
uniform sampler2DArray textures;
varying vec2 UV;
...
void main() {
...
int layer = 0;
gl_FragColor = texture2DArray(textures, vec3(UV, layer));
}
I will be grateful for the help.
An array texture is not a "list of textures". An array texture is a single OpenGL texture, one which individually has a number of quasi-independent layers in it. While you may conceptually think of each layer of an array texture as a separate conceptual texture, in OpenGL (and GLSL), it is a single object.
Given this, the interface in your function is incorrect. It should return a single texture object, and it should take as a parameter, not an array of int (note: OpenGL objects are unsigned integers), but a single integer: the number of array layers to create in that texture.
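For illustration, a C-style sketch of that corrected interface (the JOGL translation is mechanical; pixelDataForLayer is a hypothetical array of client-side pixel pointers). Note that the layer index goes into the zoffset argument of glTexSubImage3D with a depth of 1, while the mip level argument stays 0; the question's code passes the layer as the level, which is what triggers the error:

GLuint createTextureArray(int width, int height, int layerCount)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    // allocate immutable storage for all layers at once
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, width, height, layerCount);
    for (int i = 0; i < layerCount; i++)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY,
                        0,                 // mip level, not the layer
                        0, 0, i,           // xoffset, yoffset, layer
                        width, height, 1,  // one layer per call
                        GL_RGBA, GL_UNSIGNED_BYTE, pixelDataForLayer[i]);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}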
How you use an array texture in GLSL is simple. Your uniform for the sampler uses an array-texture sampler type (for example sampler2DArray for 2D array textures). You bind the array texture to the same texture image unit that you specified as the binding for the sampler uniform (just as you would for a non-array 2D texture).
Your GLSL is missing one thing. There is no texture2DArray function. The correct function to use is just texture. In post-GL 3.0 GLSL, the texture type is specified solely by the sampler parameter's type, not by the name of the function anymore.
In addition to what @NicolBolas already said: there are a bunch of problems with the shader code, mostly due to functionality that has been deprecated in version 130:
There is no function texture2DArray in any standard GLSL version. There was one in the EXT_texture_array extension, but it was never integrated into core, since in GLSL 130 all texture lookup functions (texture2D, texture3D, ...) were replaced by an overloaded texture function. If you are targeting 130 without extensions, you should use texture(textures, vec3(UV, layer)).
The varying keyword is deprecated in GLSL 130 and should be replaced by in/out.
gl_FragColor is deprecated, and a user-defined output (out) variable should be used.
You might want to have a look at Section 1.2.1 of the GLSL 1.30 spec, which describes the deprecations and how they should be handled. In general, I would encourage everyone not to use 130 at all today unless there is a special reason for it; better to move to the OpenGL 3.3+ core profile and GLSL 330+.
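Putting those points together, a corrected fragment shader might look like this (a sketch reusing the question's uniform and varying names, targeting GLSL 330 as suggested):

#version 330 core
uniform sampler2DArray textures;
in vec2 UV;         // 'varying' replaced by 'in'
out vec4 fragColor; // user-defined output instead of gl_FragColor
void main() {
    int layer = 0;
    fragColor = texture(textures, vec3(UV, layer)); // overloaded 'texture', not texture2DArray
}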
You can skip to the TL;DR at the bottom for the conclusion; I preferred to provide as much information as I could, to help narrow down the question.
I've been having an issue with a heat haze effect I've been working on.
This is the sort of effect I was thinking of (see the Overwatch reference mentioned in the TL;DR), but since this is a rather generalized system, it would apply to any so-called screen-space refraction.
The haze effect is not where my issue lies, as it is just a distortion of sampling coordinates; rather, it's with what is sampled. My first approach was to render the distortions to another render target. This method was fairly successful, but has a major downfall that's easy to foresee if you've dealt with screen-space textures before: because of the offset to the sampling coordinate, if an object is in front of the refractor, its edges will be taken into the refraction calculation.
As you can see, it looks fine when all the geometry is either the environment (no depth test) or behind the refractor; with a cube closer than the refractor, however, there is this effect I'll call bleeding of the closer geometry.
Relevant shader code for reference:
/* transparency.frag */
layout (location = 0) out vec4 out_color; // frag color
layout (location = 1) out vec4 bright; // used for bloom effect
layout (location = 2) out vec4 deform; // deform buffer
[...]
void main(void) {
[...]
vec2 n = __sample_noise_texture_with_time__{};
deform = vec4(n * .1, 0, 1);
out_color = vec4(0, 0, 0, .0);
bright = vec4(0.0, 0.0, 0.0, .9);
}
/* post_process.frag */
in vec2 texel;
uniform sampler2D screen_t;
uniform sampler2D depth_t;
uniform sampler2D bright_t;
uniform sampler2D deform_t;
[...]
void main(void) {
[...]
vec3 noise_sample = texture(deform_t, texel).xyz;
vec2 texel_c = texel + noise_sample.xy;
[sample screen and bloom with texel_c, gama corect, output to color buffer]
}
To try to combat this, I tried a technique that involved comparing depth components. To do this, I made the transparent object write its frag depth to the z component of my deform buffer, like so:
/* transparency.frag */
[...]
deform = vec4(n * .1, gl_FragCoord.z, 1);
[...]
and then, to determine what is in front of what, a quick check in the post-processing shader:
[...]
float dist = texture(depth_t, texel_c).x;
float dist1 = noise_sample.z; // what i wrote to the deform buffer z
if (dist + .01 < dist1) { /* do something like draw debug output */ }
[...]
This worked somewhat, but broke down as I moved away, even if I linearized the depth values and compared the distances.
EDIT 3: added better screenshots for the depth test phase
(In yellow is where it's sampling something that's in front; I couldn't be bothered to make it render the polygons as well, so I drew them in.)
(And here it is partially failing the depth comparison test from further away.)
I also had some 'fun' with another technique, where I passed the color buffer directly to the transparency shader and had it output the sample to its color output. In theory, if the scene is Z-sorted, this should produce the desired result. I'll let you be the judge of that.
(I have a few guesses as to what the emerging patterns are, since they are similar to the rasterization patterns of GPUs, but that's not very relevant, since that 'solution' was more of a desperation effort than anything.)
TL;DR and formal question: I've had a go at a few techniques based on my knowledge and haven't been able to find much literature on the subject. So my question is: how do you realize such effects as heat haze/distortion (which do not cover the whole screen, might I add), and is there literature on the subject? For a reference to the sort of effect I'm looking for, see my Overwatch screenshot and all other similar effects in the game.
Thought I would also mention, just for completeness' sake, that I'm running OpenGL 4.5 (on Windows) with most shaders being version 4.00, and am working with a custom engine.
EDIT: If you want information about the software part of the engine, feel free to ask. I didn't include any because I didn't deem it relevant, but I'd be glad to provide specs and code snippets, as well as more shaders, on demand.
EDIT 2: I thought I'd also mention that this could be achieved by using a second render pass and a clipping plane; however, that would be costly and feels unnecessary, since the viewpoint is the same. It might be that this is the only solution, but I don't believe so.
Thanks for your answers in advance!
I think the issue is that you are trying to distort something that's behind an occluding object, and that information is no longer available, because the object in front has overwritten the color value there. So you can't distort in information from a color buffer that does not exist anymore.
You are trying to solve it by depth testing and skipping the pixels that belong to an object closer to the camera than your transparent heat object, but this is causing the edge to leak into the distortion. Even if you get the edge skipped, if there was an object right behind the transparent object, occluded by the cube in front, it won't distort in, because the color information is not available.
Additional Render Pass
As you mention, an additional render pass with a clipping plane is certainly one solution to this problem.
Multiple render targets
Another, similar solution would be to use multiple render targets: render the depth of the transparent object beforehand, test for fragments that are behind it, and render them to another color buffer. Later, use this buffer for the distortion instead of the full color buffer. You could also consider deferred shading.
Here is a code snippet of how you would set up multiple render targets.
//create your fbo
GLuint fboID;
glGenFramebuffers(1, &fboID);
glBindFramebuffer(GL_FRAMEBUFFER, fboID);
//create the rbo for depth
GLuint rboID;
glGenRenderbuffers(1, &rboID);
glBindRenderbuffer(GL_RENDERBUFFER, rboID); // note: takes the id itself, not a pointer
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboID);
//create two color textures (one for distort)
GLuint colorTexture, distortcolorTexture;
glGenTextures(1, &colorTexture);
glGenTextures(1, &distortcolorTexture);
glBindTexture(GL_TEXTURE_2D, colorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, distortcolorTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
//attach both textures
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, colorTexture, 0);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, distortcolorTexture, 0);
//specify both the draw buffers
GLenum drawBuffers[2] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, drawBuffers);
First render the transparent object's depth. Then, in your fragment shader for the other objects:
//compute color with your lighting...
//write color to colorTexture (attachment 0)
gl_FragData[0] = color;
//check if the fragment is behind your transparent object
//(depth = this fragment's depth, tObjDepth = the transparent object's sampled depth, both assumed computed above)
if( depth >= tObjDepth )
{
//write color to distortcolorTexture (attachment 1)
gl_FragData[1] = color;
}
Finally, use the distortcolortexture in your distortion shader.
Depth test for a matrix of pixels instead of a single pixel
I think the edge is leaking because you don't distort just one pixel but rather a matrix of pixels; you could check the depth over a matrix as well (e.g. 3x3 pixels centered on the current pixel) and discard the distortion if that neighbourhood fails the depth test. (Note: this still won't distort in objects behind the occluding object, which you might want distorted in.)
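A sketch of that idea in the post-processing shader (reusing the question's names; texel_size is an assumed uniform holding 1.0/resolution, and the neighbourhood's closest depth is used so the test stays conservative):

uniform vec2 texel_size; // assumed: 1.0 / screen resolution
// inside main(), replacing the single-sample test:
float nearest = 1.0;
for (int dx = -1; dx <= 1; dx++)
    for (int dy = -1; dy <= 1; dy++)
        nearest = min(nearest, texture(depth_t, texel_c + vec2(dx, dy) * texel_size).x);
// if anything in the 3x3 neighbourhood is in front of the refractor, do not distort
if (nearest + .01 < dist1) texel_c = texel;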
I'm writing a small video player using QOpenGLWidget. At the moment I'm struggling to get asynchronous texture upload working. In an earlier version of my code I wait for a "next frame" signal, upon which the frame is read from the hard drive, uploaded to the GPU, and then rendered. Now I want to make this asynchronous, using a ring buffer on the GPU: a separate thread uploads the next N textures, while the main thread takes one of these textures, displays and invalidates it. As a first step I wrote a class to upload a single texture, which I want to use from my QOpenGLWidget. I created shared contexts between my class and the QOpenGLWidget.
class GLWidget : public QOpenGLWidget, protected QOpenGLFunctions;
void GLWidget::paintGL() {
QOpenGLVertexArrayObject::Binder vaoBinder(&m_vao);
m_program->bind();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
m_program->setUniformValue("textureSamplerRed", 0);
m_program->setUniformValue("textureSamplerGreen", 1);
m_program->setUniformValue("textureSamplerBlue", 2);
glUniformMatrix4fv(m_matMVP_Loc, 1, GL_FALSE, &m_MVP[0][0]);
m_vertice_indices_Vbo.bind();
m_vertices_Vbo.bind();
m_texture_coordinates_Vbo.bind();
glDrawElements(
GL_TRIANGLE_STRIP, // mode
m_videoFrameTriangles_indices.size(), // count
GL_UNSIGNED_INT, // type
(void*)0 // element array buffer offset
);
m_program->release();
}
I wait for GLWidget::initializeGL() to finish, then emit a signal which is connected to the initialization of my texture-loading class:
class TextureLoader2 : public QObject, protected QOpenGLFunctions;
void TextureLoader2::initialize(QOpenGLContext *context)
{
// sharing the OpenGL context with GLWidget
m_context.setFormat(context->format()); // need this?
m_context.setShareContext(context);
m_context.create();
m_context.makeCurrent(context->surface());
m_surface = context->surface();
}
And here is how I load a new frame:
void TextureLoader2::loadNextFrame(const int frameIdx)
{
QElapsedTimer timer;
timer.start();
bool is_current = m_context.makeCurrent(m_surface);
// some code which reads the frame from disk and sends to the GPU.
// srcR is a pointer to the data for red; the upload for G and B is similar
if(!m_texture_Rdata)
{
m_texture_Rdata = std::make_shared<QOpenGLTexture>(QOpenGLTexture::Target2D);
m_texture_Rdata->create();
m_texture_Rdata->setSize(m_frameWidth,m_frameHeight);
m_texture_Rdata->setFormat(QOpenGLTexture::R8_UNorm);
m_texture_Rdata->allocateStorage(QOpenGLTexture::Red,QOpenGLTexture::UInt8);
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
// Set filtering modes for texture minification and magnification
m_texture_Rdata->setMinificationFilter(QOpenGLTexture::Nearest);
m_texture_Rdata->setMagnificationFilter(QOpenGLTexture::Linear);
m_texture_Rdata->setWrapMode(QOpenGLTexture::ClampToBorder);
}
else
{
m_texture_Rdata->setData(QOpenGLTexture::Red, QOpenGLTexture::UInt8, srcR);
}
// these are QOpenGLTextures
m_texture_Rdata->bind(0);
m_texture_Gdata->bind(1);
m_texture_Bdata->bind(2);
emit frameUploaded();
}
Unfortunately, my QOpenGLWidget displays nothing, and I don't know how to proceed.
I know that the code for reading the frame and sending the texture to the GPU is working, since if I leave out the line
bool is_current = m_context.makeCurrent(m_surface);
my whole window (not just the frame containing the QOpenGLWidget) is overwritten, displaying the texture.
I've been searching quite a bit, but I couldn't find any simple working example code for what I want to do. I hope someone has an idea what the issue might be. I've seen people mention using a QOffscreenSurface or a second hidden widget with similar, but different, contexts. Maybe I have to use one of those?
My fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 fragmentUV;
// Output data
out vec4 color_0;
// Values that stay constant for the whole mesh.
uniform sampler2D textureSamplerRed;
uniform sampler2D textureSamplerGreen;
uniform sampler2D textureSamplerBlue;
void main(){
vec3 myColor;
myColor.r = texture( textureSamplerRed, fragmentUV ).r;
myColor.g = texture( textureSamplerGreen, fragmentUV ).r;
myColor.b = texture( textureSamplerBlue, fragmentUV ).r;
color_0 = vec4(myColor, 1.0f);
}
I'm referencing the OpenGL SuperBible, 6th edition, in my code.
First I simply wanted to implement object picking in my 3D scene. Eventually I decided to use framebuffer objects, and I succeeded, but then I realized I needed to solve the problem of polygon edge aliasing, so I rewrote my code to make use of GL_TEXTURE_2D_MULTISAMPLE.
Here is the initialization code for the framebuffer:
void window_glview::init_framebuffer()
{
//CREATE FRAMEBUFFER OBJECT
GLenum gl_error=glGetError();
glGenTextures(1,&texture_id_framebuffer_color);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);
glGenTextures(1,&texture_id_framebuffer_objectid);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_objectid);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_RGBA8,client_area.right,client_area.bottom,GL_TRUE);
glGenTextures(1,&texture_id_framebuffer_depth);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_depth);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE,ANTIALIASING_SAMPLES,GL_DEPTH_COMPONENT32,client_area.right,client_area.bottom,GL_TRUE);
gl_error=glGetError();
glGenFramebuffers(1,&buffer_id_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
gl_error=glGetError();
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_DEPTH_ATTACHMENT,texture_id_framebuffer_depth,0);
GLenum draw_buffers[] =
{
GL_COLOR_ATTACHMENT0,
GL_COLOR_ATTACHMENT1
};
glDrawBuffers(2,draw_buffers);
GLenum status=glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status!=GL_FRAMEBUFFER_COMPLETE)
MessageBox(0,L"Failed to create framebuffer object",0,0);
glBindFramebuffer(GL_FRAMEBUFFER,0);
}
It follows what's common to most of the internet listings on the same topic.
Now here is my drawing code:
void window_glview::paint()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
//DRAW TO CUSTOM FRAMEBUFFER
glBindFramebuffer(GL_FRAMEBUFFER,buffer_id_framebuffer);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLineWidth(1.0);
draw_viewport();
viewport_object_count=0;
draw_lights();
glLineWidth(1.5);
for (unsigned short i=0;i<mesh_count;i++)
{
draw_mesh(mesh_table[i],GL_TRIANGLES,false);
}
//DRAW TO DEFAULT
glBindFramebuffer(GL_FRAMEBUFFER,0);
//USE TEXTURE FROM FRAMEBUFFER COLOR_ATTACHMENT0
glUseProgram(program_id_screen_render);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE,texture_id_framebuffer_color);
//HERE IS A QUAD DRAWING PROCESS
glBindBuffer(GL_ARRAY_BUFFER,buffer_id_screen_quad);
glVertexAttribPointer(0,4,GL_FLOAT,GL_FALSE,24,0);
glEnableVertexAttribArray(0);
glDrawArrays(GL_QUADS,0,4);
SwapBuffers(hDC);
}
The vertex shader is simple:
#version 450
layout(location=0) in vec4 _pos;
void main(void)
{
gl_Position=_pos;
}
The fragment shader is written with the purpose of interpreting the multisamples:
#version 450
uniform sampler2DMS screen_texture;
layout(location=0) out vec4 out_color;
void main(void)
{
ivec2 coord=ivec2(gl_FragCoord.xy);
vec4 result=vec4(0.0);
int i;
for (i=0;i<4;i++)
{
result=max(result,texelFetch(screen_texture,coord,i));
}
out_color=result;
}
I end up with a black screen. If I change out_color to something like out_color=vec4(1.0,0.0,0.0,1.0), I get a red screen.
What could be going wrong?
Also, in my framebuffer initializer, when I pass GL_DEPTH_COMPONENT to glTexStorage2DMultisample I get an error, but when I pass GL_DEPTH_COMPONENT16 instead it works. Why is that?
And should I rather use a RENDERBUFFER for some purpose, and if so, how can I read it back into a texture?
The texture with id texture_id_framebuffer_color, which is the texture you use for your final rendering, is not attached to the FBO while you render to the FBO:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_objectid,0);
Only one texture can be attached to a given attachment point at a time. So when you specify a second texture to be attached to GL_COLOR_ATTACHMENT0, the first one automatically gets detached.
If you want to have two attachments, they will need to use different attachment points:
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT0,texture_id_framebuffer_color,0);
glFramebufferTexture(GL_FRAMEBUFFER,GL_COLOR_ATTACHMENT1,texture_id_framebuffer_objectid,0);
So, I've been working on a little game project for a bit and I've hit a snag that's annoying me to no end. I load an obj file, which then gets rendered after being put into a VBO. This part works fine, no problemo. However, I've been trying to get it to render the accompanying texture with the supplied UVs, with no success. Currently I just get a matte green colouration on my model. Upon investigating it in GDE, I've seen that the texture gets loaded fine and occupies the GL_TEXTURE0 unit, so that's not the issue. I believe it may be my binding, but I have no idea why it would fail...
void Model_Man::render_models()
{
for(int x=0; x<models.size(); x++)
{
if(models.at(x).visible==true)
{
glBindBuffer(GL_ARRAY_BUFFER,models.at(x).t_buff);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,models.at(x).i_buff);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT,0,0);
glClientActiveTexture(GL_TEXTURE0);
glTexCoordPointer(2,GL_FLOAT,0,&models.at(x).uvs[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glActiveTexture(GL_TEXTURE0);
int tex_loc = glGetUniformLocation(models.at(x).shaderid,"color_texture");
glUniform1i(tex_loc,GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, models.at(x).mats.at(0).texid);
c_render.use_program(models.at(x).shaderid);
glDrawElements(GL_TRIANGLES,models.at(x).f_index.size()*3,GL_UNSIGNED_INT,0);
c_render.use_program();
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
}
}
}
And my shader files...
Shader.frag
uniform sampler2D color_texture;
void main() {
// Set the output color of our current pixel
gl_FragColor = texture2D(color_texture, gl_TexCoord[0].st);
}
Shader.vert
void main() {
gl_TexCoord[0] = gl_MultiTexCoord0;
// Set the position of the current vertex
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
And yes, I know I'm currently being horribly inefficient with my render loop :P but I'm already planning on refactoring it; I am just attempting to get this single model to draw correctly with everything I'm aiming to do. I have no clue why it wouldn't render with the texture correctly applied, unless it's because I need to interleave my arrays, but I'm still supplying it with UV data, so I don't see why it fails.
The call that sets the sampler uniform should not pass GL_TEXTURE0, but actually 0.
Indeed:
glUniform1i(location, 0)
To set up a sampler uniform, do:
glUseProgram(progId);
// ...
glActiveTexture(GL_TEXTURE0 + texUnit);
glBindTexture(GL_TEXTURE_2D, texId);
glUniform1i(location, texUnit); // pass the unit index, not the GL_TEXTUREn token
The main concept is that uniform variables are shader program state (maintained until you re-link the program or reset the uniform value). Without a bound program, glUniform1i will fail, since there is no shader program on which it can set the uniform value!
As general advice, call glGetError after each OpenGL call to detect these conditions. Most of those calls can be removed by the preprocessor in a release build.
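For instance, one way to follow that advice without the overhead in release builds is a small macro (a sketch; GL_DEBUG is a hypothetical compile-time flag):

#include <cstdio>

#ifdef GL_DEBUG
#define GL_CHECK() \
    do { GLenum e = glGetError(); \
         if (e != GL_NO_ERROR) \
             fprintf(stderr, "GL error 0x%04X at %s:%d\n", e, __FILE__, __LINE__); \
    } while (0)
#else
#define GL_CHECK() do { } while (0)
#endif

// usage:
// glUniform1i(location, 0); GL_CHECK();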
Well, I found out that the big issue was that while I was binding a texture, I wasn't actually setting it up in a way that marked it as in use. Setting glClientActiveTexture(GL_TEXTURE0 + texUnit); in combination with glActiveTexture() ended up being the final solution.
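In other words, the working order for this mixed fixed-function/shader setup was roughly (a sketch; texUnit, texId, uvPointer and tex_loc stand for the question's values):

glActiveTexture(GL_TEXTURE0 + texUnit);       // server side: the unit glBindTexture affects
glBindTexture(GL_TEXTURE_2D, texId);
glClientActiveTexture(GL_TEXTURE0 + texUnit); // client side: the unit glTexCoordPointer feeds
glTexCoordPointer(2, GL_FLOAT, 0, uvPointer); // uvPointer: the UV array, as in the question
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glUniform1i(tex_loc, texUnit);                // with the program bound: unit index, not GL_TEXTUREn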