OpenGL tiled texture uploading with PBO

I'm using a PBO as follows:
glGenBuffersARB(1, &pboIds);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pboIds);
glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB, FB_SIZE, 0, GL_DYNAMIC_DRAW_ARB);
unsigned char* ptr = (unsigned char*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
memcpy(ptr, g_fb_addr, FB_SIZE);
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB); // the PBO must be unmapped before the upload
glBindTexture(GL_TEXTURE_2D, textureId);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, FB_WIDTH, FB_HEIGHT, FB_FORMAT, GL_UNSIGNED_BYTE, 0); // 0 = byte offset into the bound PBO
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
I then display textureId on screen. The catch is that g_fb_addr, the source of the image, has a tiled memory layout, so the displayed image comes out striped along the horizontal axis.
My question is: is there a way to upload a tiled image into a PBO?
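Core OpenGL's unpack state (GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS, etc.) only describes strided linear layouts, not tiles, so one option is to de-tile on the CPU while filling the mapped PBO. Here is a minimal sketch, assuming a hypothetical layout of TILE_W x TILE_H tiles stored one after another, each tile row-major, with BPP bytes per pixel (TILE_W, TILE_H and BPP are illustrative names, not from the question):
unsigned char* ptr = (unsigned char*)glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY_ARB);
const unsigned char* src = (const unsigned char*)g_fb_addr;
const int tiles_x = FB_WIDTH / TILE_W;
const int tiles_y = FB_HEIGHT / TILE_H;
for (int ty = 0; ty < tiles_y; ++ty) {
    for (int tx = 0; tx < tiles_x; ++tx) {
        // each tile is a contiguous TILE_W*TILE_H pixel block in the tiled source
        const unsigned char* tile = src + (ty * tiles_x + tx) * TILE_W * TILE_H * BPP;
        for (int row = 0; row < TILE_H; ++row) {
            memcpy(ptr + ((ty * TILE_H + row) * FB_WIDTH + tx * TILE_W) * BPP, // linear destination
                   tile + row * TILE_W * BPP,                                  // tiled source row
                   TILE_W * BPP);
        }
    }
}
glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);
Alternatively, memcpy the buffer into the PBO still tiled and issue one glTexSubImage2D per tile, passing each tile's byte offset as the data pointer; because a tile's rows are contiguous and exactly TILE_W pixels wide, the default unpack state already matches that sub-rectangle.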

Related

Rendering viewport to texture produces overlapping viewports?

I'm creating a 2D engine and I want to implement docking, so I need to create a viewport and render the screen to a texture.
To render the viewport I'm saving the framebuffer into a framebuffer object and drawing as normal. I used this technique some time ago and it worked with no problems. Here is the draw code:
glBindFramebuffer(GL_FRAMEBUFFER, fbo_msaa_id);
glViewport(0, 0, width, height);
DrawRoomObjects();
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo_msaa_id);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo_id);
glBlitFramebuffer(0, 0, width, height,  // src rect
                  0, 0, width, height,  // dst rect
                  GL_COLOR_BUFFER_BIT,  // buffer mask
                  GL_LINEAR);           // scale filter
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, App->moduleWindow->screen_surface->w, App->moduleWindow->screen_surface->h);
I've made sure the DrawRoomObjects() function is working correctly, and the FBO is initialized correctly.
Here is the code that renders the resulting texture using the ImGui library:
glEnable(GL_TEXTURE_2D);
if (ImGui::Begin("Game Viewport", &visible, ImGuiWindowFlags_MenuBar)) {
    // imageSize is a hypothetical name for the fitted size computed beforehand
    ImGui::Image((ImTextureID)(intptr_t)viewportTexture->GetTextureID(), imageSize);
}
ImGui::End(); // End() must be called even when Begin() returns false
Before this chunk I make some calculations to fit the image to the dock (the imageSize above); I'm not using viewportTexture anywhere else in the code.
The problem is a weird artifact that shows up when moving the quad; I don't know what to call it. Click this link to see a GIF of the bug.
It seems the texture is not being cleared correctly...?
You have to clear the framebuffer before you render the objects to it:
glBindFramebuffer(GL_FRAMEBUFFER, fbo_msaa_id);
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT);
DrawRoomObjects();
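If the FBO also has a depth attachment, clear that as well, and set the clear color explicitly; a minimal sketch of the same fix (assuming the fbo_msaa_id setup from the question):
glBindFramebuffer(GL_FRAMEBUFFER, fbo_msaa_id);
glViewport(0, 0, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);               // whatever background color you want
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // include the depth bit only if a depth attachment exists
DrawRoomObjects();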

Reading from framebuffer GLSL to OpenCV

I'm trying to feed a cv::Mat a texture generated by a fragment shader, but nothing appears on the screen, and I don't know whether the problem is in the driver or in glReadPixels. I loaded a TGA image, passed it to a fragment shader, and textured a quad. I wanted to feed that texture to a cv::Mat, so I used glReadPixels, then generated a new texture and drew it on the quad, but nothing appears.
Kindly note that the following code is executed at each frame.
cv::Mat pixels;
glPixelStorei(GL_PACK_ALIGNMENT, (pixels.step & 3) ? 1 : 4);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glEnable(GL_TEXTURE_2D);
GLuint textureID;
glGenTextures(1, &textureID);
//glDeleteTextures(1, &textureID);
// Create the texture
glTexImage2D(GL_TEXTURE_2D,    // Type of texture
             0,                // Pyramid level (for mip-mapping) - 0 is the top level
             GL_RGB,           // Internal colour format to convert to
             1024,             // Image width i.e. 640 for Kinect in standard mode
             1024,             // Image height i.e. 480 for Kinect in standard mode
             0,                // Border width in pixels (can either be 1 or 0)
             GL_RGB,           // Input image format (i.e. GL_RGB, GL_RGBA, GL_BGR etc.)
             GL_UNSIGNED_BYTE, // Image data type
             pixels.data);     // The actual image data itself
glActiveTexture(textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
textureID looks like an incomplete texture. Set GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR, or supply a complete set of mipmaps: the default minification filter is GL_NEAREST_MIPMAP_LINEAR, which requires mipmap levels you never created, leaving the texture incomplete.
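For completeness, a minimal corrected sketch of the per-frame code (my reconstruction, not the asker's): the cv::Mat needs allocated storage before glReadPixels writes into it, glActiveTexture takes a texture unit such as GL_TEXTURE0 rather than a texture name, the texture must be bound before glTexImage2D uploads into it, and creating it once instead of every frame avoids leaking a texture per frame:
// one-time setup
cv::Mat pixels(1024, 1024, CV_8UC3);  // allocate backing storage up front
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // complete without mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// every frame
glPixelStorei(GL_PACK_ALIGNMENT, (pixels.step & 3) ? 1 : 4);
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glActiveTexture(GL_TEXTURE0);            // selects a texture unit, not a texture object
glBindTexture(GL_TEXTURE_2D, textureID); // bind before uploading
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 1024, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);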

Convert texture to GL_COMPRESSED_RGBA

I'm looking for a way to convert a GL_RGBA framebuffer texture to a GL_COMPRESSED_RGBA texture, preferably on the GPU. Framebuffers apparently can't have the GL_COMPRESSED_RGBA internal format, thus I need a way to convert.
See this document that describes OpenGL texture compression. The sequence of steps is roughly as follows (this is hacky; using buffer objects for the textures throughout would improve things somewhat):
GLuint mytex, myrbo, myfbo;
glGenTextures(1, &mytex);
glBindTexture(GL_TEXTURE_2D, mytex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glGenRenderbuffers(1, &myrbo);
glBindRenderbuffer(GL_RENDERBUFFER, myrbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA, width, height);
glGenFramebuffers(1, &myfbo);
glBindFramebuffer(GL_FRAMEBUFFER, myfbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, myrbo);
// If you need a Z buffer:
// create a 2nd renderbuffer for the framebuffer GL_DEPTH_ATTACHMENT
// render (i.e. create the data for the texture)
// Now get the data out of the framebuffer by requesting a compressed read
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
                 0, 0, width, height, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glDeleteRenderbuffers(1, &myrbo);
glDeleteFramebuffers(1, &myfbo);
// Validate it's compressed / read back compressed data
GLint format = 0, compressed_size = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &format);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE,
                         &compressed_size);
char *data = (char*)malloc(compressed_size);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, data);
glBindTexture(GL_TEXTURE_2D, 0);
glDeleteTextures(1, &mytex);
// data now contains the compressed image (free() it when you're done)
If you used a PBO for the readback, you would be able to get away without the malloc().
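For example, a sketch of that PBO variant (pbo is a name introduced here; compressed_size comes from the query above):
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, compressed_size, NULL, GL_STREAM_READ);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, 0); // writes into the bound PBO at offset 0
void* p = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// ... use the compressed data at p ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);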
If you would like to perform the compression on the GPU without a transfer to the CPU, here are two samples you might be able to repurpose for OpenGL (they're DX based):
GPU accelerated texture compression
GPU accelerated texture compression 2
Hope this helps!

glReadPixels from FBO fails with multisampling

I have an FBO with a color attachment and a depth attachment which I render to and then read from using glReadPixels(), and I'm trying to add multisampling support to it.
Instead of glRenderbufferStorage() I'm calling glRenderbufferStorageMultisampleEXT() for both the color attachment and the depth attachment. The framebuffer object is created successfully and is reported as complete.
After rendering I try to read from it with glReadPixels(). When the number of samples is 0, i.e. multisampling disabled, it works perfectly and I get the image I want. When I set the number of samples to something else, say 4, the framebuffer is still constructed OK but glReadPixels() fails with GL_INVALID_OPERATION.
Anyone have an idea what could be wrong here?
EDIT: The code of glReadPixels:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, ptr);
where ptr points to an array of width*height uints.
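For reference, a sketch of the kind of multisampled FBO setup being described (my reconstruction; color_rb, depth_rb and the sample count of 4 are illustrative):
GLuint color_rb, depth_rb, my_multisample_fbo;
glGenRenderbuffersEXT(1, &color_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, color_rb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4, GL_RGBA8, width, height);
glGenRenderbuffersEXT(1, &depth_rb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_rb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, 4, GL_DEPTH_COMPONENT24, width, height);
glGenFramebuffersEXT(1, &my_multisample_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_multisample_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, color_rb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depth_rb);
// reading this FBO directly with glReadPixels() is exactly what fails once samples > 0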
I don't think you can read from a multisampled FBO with glReadPixels(). You need to blit from the multisampled FBO to a normal FBO, bind the normal FBO, and then read the pixels from the normal FBO.
Something like this:
// Bind the multisampled FBO for reading
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, my_multisample_fbo);
// Bind the normal FBO for drawing
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, my_fbo);
// Blit the multisampled FBO to the normal FBO
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//Bind the normal FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, my_fbo);
// Read the pixels!
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
You can't read the multisample buffer directly with glReadPixels, since that raises a GL_INVALID_OPERATION error. You need to blit to another surface so that the GPU can do a downsample. You could blit to the backbuffer, but there is the problem of the pixel ownership test, so it is best to make another FBO. Let's assume you made another FBO and now you want to blit. This requires GL_EXT_framebuffer_blit. Typically, when your driver supports GL_EXT_framebuffer_multisample, it also supports GL_EXT_framebuffer_blit, for example on the NVIDIA GeForce 8 series.
//Bind the MS FBO
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, multisample_fboID);
//Bind the standard FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fboID);
//Let's say I want to copy the entire surface
//Let's say I only want to copy the color buffer only
//Let's say I don't need the GPU to do filtering since both surfaces have the same dimension
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
//--------------------
//Bind the standard FBO for reading
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboID);
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
Source: GL EXT framebuffer multisample