I am working with OpenGL and GLFW to map a texture image onto a 2D polygon, which is a set of vertices generated from OpenCV.
My question is: can I use the result of the texture mapping as a new texture (already distorted by the first mapping) and map it onto another polygon?
I think my explanation is not very clear, so please look at the example:
The left image is my texture, and the right is the texture after mapping it onto the polygon (the texture is divided into 8 blocks for 8 sets of vertices). What I want to do is use the mapping result on the right side as the new texture.
Is it possible to do this with OpenGL or OpenCV?
Render your scene to an FBO with a texture attachment, then use that texture to render more geometry.
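For example, a minimal sketch of that setup (desktop GL; width and height here stand for whatever resolution you want the intermediate texture to have):
GLuint fbo, tex;
// color texture that will receive the first mapping result
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// framebuffer object with that texture as its color attachment
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// check glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE before using it
// draw the first (distorted) polygon here; the result lands in 'tex'
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, tex); // now use 'tex' as the texture for the next polygon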
You need two FBOs and two textures.
Render the scene into the first FBO (fbo1), then send its texture (texture1) to the shader and render into the second FBO (fbo2). After that, send the second texture (texture2) to the shader and render the scene into the main framebuffer (to display it) or back into the first FBO (to do another pass).
Example:
1) render the scene into FBO1
2) send texture 1 to the shader
3) render the modified texture 1 into FBO2
4) send texture 2 to the shader
5) render the modified texture 2 into FBO1
6) send texture 1 to the shader
etc.
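In plain desktop-GL terms, one round of this ping-pong could look like the following sketch (the FBOs and textures are assumed to already exist, and drawScene / drawFullScreenQuad stand in for your own draw calls):
// pass 1: render the scene into fbo1 (its color attachment is texture1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene();
// pass 2: read texture1, write the processed result into fbo2 (texture2)
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glBindTexture(GL_TEXTURE_2D, texture1);
drawFullScreenQuad();
// final pass: read texture2 and draw to the default framebuffer (the screen),
// or bind fbo1 again instead to keep ping-ponging
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, texture2);
drawFullScreenQuad();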
Here is a small piece of code from one of my projects that shows what I am trying to explain (a blur):
// render the scene into fbo1
glBindFramebuffer(GL_FRAMEBUFFER, f1);
glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programRTT];
// apply horizontal blur; the result goes into fbo2
glBindFramebuffer(GL_FRAMEBUFFER, f2);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programBH];
// apply vertical blur; the result goes into fbo1
glBindFramebuffer(GL_FRAMEBUFFER, f1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t2);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_programBV];
// return to the main framebuffer and display the result on screen
[view bindDrawable];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t1);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
[self drawWithShader:_program];
This image is rendered using three passes.
In the first pass, I render the three axes.
In the second pass, a transparent cylinder is rendered (glEnable(GL_BLEND)) with alpha = 0.5f.
Finally, the golden and grey spheres are rendered in the third pass (glEnable(GL_BLEND)).
The alpha value of the golden spheres is 1.0f and that of the grey sphere is 0.2f.
The problem:
As you can see,
the cylinder overlaps the spheres even though blending is enabled;
the axes overlap the cylinder and the spheres!
Here is my code:
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClearDepthf(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Here the data is prepared and sent to the shaders (first pass):
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
PS: a geometry shader is used to render lines from the given points.
Then we prepare and pass the cylinder data:
glEnable(GL_BLEND);
glCullFace(GL_FRONT);
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
glCullFace(GL_BACK);
glDrawElements(GL_POINTS, 256, GL_UNSIGNED_INT, reinterpret_cast<void*>(0));
glDisable( GL_BLEND);
PS: a geometry shader is also used here, to render the cylinder mesh from the given points.
Finally, I render the golden spheres and the grey sphere in one pass:
glEnable(GL_BLEND);
glDrawElements(GL_LINE_STRIP, goldenSphereNumber, GL_UNSIGNED_INT, (void*)0);
glDrawElements(GL_LINE_STRIP, sphereIndexCount, GL_UNSIGNED_INT, (void*)0);
glDisable( GL_BLEND);
PS: here too a geometry shader is used to render the sphere meshes from the given lines.
Do you see anything wrong? Could you help, please?
With the same code as in my previous question, Rendering quad with tiling image?, I don't understand why the triangle is not being rendered on top of the textured quad.
Can someone point out what I am missing?
You have the depth test enabled, which defaults to GL_LESS (only fragments that are closer get drawn).
If you want a background, disable depth writing during the first pass:
void GLViewer::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // draw the background quad without touching the depth buffer
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    m_backgroundShader.bind();
    glBindVertexArray(m_backgroundVAO);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_textureID);
    glUniform1i(glGetUniformLocation(m_backgroundShader.programId(), "tex"), 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // re-enable depth writes and depth testing for the triangle
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    m_triangleShader.bind();
    glBindVertexArray(m_VAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    update();
}
I am using OpenGL to apply a texture to a 3D object, then take a snapshot and blend it with another picture.
I want a high-resolution snapshot (3000*1500 px). Is this possible in OpenGL?
My code is:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawScene();
DrawText();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
if (g_fboSamples > 0)
{
// Multisample rendering so copy the pixel data in the multisample
// color render buffer image to the FBO containing the offscreen
// texture image.
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, g_fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, g_fboResolveTarget);
glBlitFramebufferEXT(0, 0, g_fboWidth, g_fboHeight,
0, 0, g_fboWidth, g_fboHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
// At this point we now have our scene fully rendered to our offscreen
// texture 'g_offscreenTexture'. This is where you would perform any
// post processing to the offscreen texture.
// Finally to display the offscreen texture to the screen we draw a screen
// aligned full screen quad and attach the offscreen texture to it.
BYTE* pixels = new BYTE[3 * g_fboWidth * g_fboHeight];
//glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, g_fboWidth, g_fboHeight, GL_RGB, GL_UNSIGNED_BYTE, pixels);
bitmap.create(g_fboWidth, g_fboHeight);
bitmap.setPixels(pixels,g_fboWidth, g_fboHeight,3);
bitmap.flipVertical();
bitmap.saveBitmap("test.png");
glViewport(0, 0, g_windowWidth, g_windowHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawFullScreenQuad();
// g_fboWidth = 2732, g_fboHeight = 1536, g_windowWidth = 683, g_windowHeight = 384
test.png:
http://i.imgur.com/1MKySdV.jpg
It looks like your viewport in the framebuffer is too small. Call glViewport(0, 0, g_fboWidth, g_fboHeight) before rendering into the FBO.
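A minimal sketch of where that call would go, reusing the names from the code above (the bind of g_fbo itself is assumed to happen just before the posted snippet):
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, g_fbo);
glViewport(0, 0, g_fboWidth, g_fboHeight); // render at the full 2732x1536 FBO size
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawScene();
DrawText();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glViewport(0, 0, g_windowWidth, g_windowHeight); // back to the window size for the final quad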
I'm trying to "read" the texture attached to a first FBO (fboA), modifying it (with fragment shader) and render to a second FBO (fboB).
I'm not able to figure it out, all I got is a black or white texture.
Here is the code:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboB);
glEnable(GL_TEXTURE_RECTANGLE_ARB);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, textureFromFboA);
GLuint t1Location = glGetUniformLocation(shaderProgram, "texture");
glUniform1i(t1Location, 0);
glUseProgram(shaderProgram);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f (0.0f, 0.0f);
glTexCoord2f(imageRect.size.width, 0.0f);
glVertex2f (imageRect.size.width, 0.0f);
glTexCoord2f(imageRect.size.width, imageRect.size.height);
glVertex2f (imageRect.size.width, imageRect.size.height);
glTexCoord2f(0.0f, imageRect.size.height);
glVertex2f (0.0f,imageRect.size.height);
glEnd();
glDisable(GL_TEXTURE_RECTANGLE_ARB);
glFlush();
glUseProgram(0);
This is the fragment shader code:
#version 120
uniform sampler2D texture;
void main(void)
{
gl_FragColor = texture2D(texture, gl_TexCoord[0].st) * 0.8;
}
I would expect to get a darker texture rendered into fboB, but I only get an all-black texture. This also happens if I write gl_FragColor = texture2D(texture, gl_TexCoord[0].st);.
On the contrary, if I write gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); I correctly get an all-red texture as output.
If I comment out the glUseProgram() statement, the code works fine and the texture in fboB is an exact copy of the texture in fboA.
Why does this happen? Am I missing something?
uniform sampler2D texture;
Rectangle textures are not the same texture type as 2D textures. Yes, they're two-dimensional, but they still maintain a distinct texture type. Therefore, they cannot be accessed via a sampler2D.
So change that to a sampler2DRect. You will also need to use proper texture coordinates, because rectangle textures take texel-space coordinates instead of normalized coordinates.
Alternatively, you can just use a 2D texture. NPOT textures have been around for well over half a decade; you don't have to use rectangle textures to have non-power-of-two render targets.
You cannot use a sampler2D with rectangle textures... if you want to keep sampler2D, you must use GL_TEXTURE_2D rather than GL_TEXTURE_RECTANGLE_ARB.
Here is a nice tutorial on FBOs:
http://www.songho.ca/opengl/gl_fbo.html
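For reference, a minimal sketch of how the corrected fragment shader could look if you keep the rectangle texture (only the sampler type and the lookup function change; it is written here as a C string ready to be passed to glShaderSource):
const char* fragmentSrc =
    "#version 120\n"
    "#extension GL_ARB_texture_rectangle : enable\n"
    "uniform sampler2DRect texture;\n"
    "void main(void)\n"
    "{\n"
    "    gl_FragColor = texture2DRect(texture, gl_TexCoord[0].st) * 0.8;\n"
    "}\n";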
I'm trying to draw a 2D character sprite on top of a 2D tilemap, but when I draw the character he has odd stuff behind him. This isn't in the sprite, so I think it's the blending.
This is how my OpenGL is set up:
void InitGL(int Width, int Height) // We call this right after our OpenGL window is created.
{
glViewport(0, 0, Width, Height);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // This Will Clear The Background Color To Black
glClearDepth(1.0); // Set The Depth Buffer Clear Value
glDepthFunc(GL_LESS); // The Type Of Depth Test To Do
glDisable(GL_DEPTH_TEST); // Disables Depth Testing
//glShadeModel(GL_SMOOTH); // Enables Smooth Color Shading
glEnable(GL_TEXTURE_2D); // Enable Texture Mapping ( NEW )
glShadeModel(GL_FLAT);
glMatrixMode(GL_PROJECTION);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE , GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glAlphaFunc(GL_GREATER, 0.5f);
glMatrixMode(GL_PROJECTION);//configuring projection matrix now
glLoadIdentity();//reset matrix
glOrtho(0, Width, 0, Height, 0.0f, 100.0f);//set a 2d projection matrix
}
How should I set this up to work properly (i.e. draw the sprite without odd stuff behind him)?
This is what I am talking about: http://i.stack.imgur.com/cmotJ.png
PS: I need to be able to put transparent/semi-transparent images on top of each other and have what's behind them visible too.
Does your sprite have premultiplied alpha? Your glBlendFunc setup is a little unusual; if you don't have premultiplied alpha, that could definitely be causing the issue.
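For reference, a quick sketch of the two usual setups; which one is right depends on whether the sprite's RGB was multiplied by its alpha when the image was loaded:
// straight (non-premultiplied) alpha
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// premultiplied alpha (what the current GL_ONE, GL_ONE_MINUS_SRC_ALPHA setup expects)
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);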