I have created a sample application using GLEW and GLUT which reads a DDS file and displays it. I manually read the DDS file (an NPOT (886 x 317) file in R8G8B8) and create the data pointer (unsigned char*).
Then I prepare the texture using:
void prepareTexture(int w, int h, unsigned char* data) {
    /* Create and load texture to OpenGL */
    glGenTextures(1, &textureID); /* Texture name generation */
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                 w, h,
                 0, GL_RGB, GL_UNSIGNED_BYTE,
                 data);
    glGenerateMipmap(GL_TEXTURE_2D);
}
In the above figure, the first image shows the original DDS file and the second one is the rendering result of my application, which is obviously wrong. If I resize the image to 1024 x 512, both images look the same.
From the OpenGL Specification
I.3 Non-Power-Of-Two Textures
The restriction of textures to power-of-two dimensions has been relaxed for all texture targets, so that non-power-of-two textures may be specified without generating errors. Non-power-of-two textures was promoted from the ARB_texture_non_power_of_two extension.
What I understand from this is that, from OpenGL 2.0 onward, we can use NPOT textures and OpenGL will handle them.
I tried using the DevIL image library to load the DDS file but ended up with the same result. If I convert the image to RGBA and change the internal format and format of glTexImage2D to GL_RGBA, I get the correct result even though the DDS file is NPOT.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             w, h,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             data);
I tried the application on PCs with an NVIDIA card and a Radeon card, and both of them give the same result.
My sample source code can be downloaded from the link
Can anybody tell me what is wrong with my application? Or does OpenGL not allow NPOT textures if the image is in R8G8B8?
This looks like an alignment issue. Add this before the glTexImage2D() call:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
This value specifies the row alignment of your data in bytes. The default value is 4.
With your texture width of 886 and 3 bytes per pixel for GL_RGB, each row is 886 * 3 = 2658 bytes, which is not a multiple of 4.
With the UNPACK_ALIGNMENT value at the default, the size would be rounded up to the next multiple of 4, which is 2660. So 2660 bytes will be read for each row, which explains the increasing shift from row to row: the first row would be correct, the second row 2 bytes off, the third row 4 bytes off, the fourth row 6 bytes off, etc.
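For illustration, a minimal sketch of the corrected upload under the same assumptions as the question (a global textureID and tightly packed R8G8B8 data):

void prepareTexture(int w, int h, unsigned char* data) {
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    /* Rows in client memory are tightly packed, so use 1-byte row alignment */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, data);
    glGenerateMipmap(GL_TEXTURE_2D);
}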
Related
I am using offscreen rendering to texture for a simple GPU calculation. I am using
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, texSize, texSize, 0, GL_RGBA, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
to allocate storage for the texture and
glReadPixels(0, 0, texSize, texSize, GL_RGBA, GL_FLOAT, data);
to read out the computed data. The problem is that the output from the fragment shader I am interested in is only vec2, so the first two slots of the color attachment are populated and the other two are garbage. I then need to post-process data to only take two out of each four floats, which takes needless cycles and storage.
If it was one value, I'd use GL_RED, if it was three, I'd use GL_RGB in my glReadPixels. But I couldn't find a format that would read two values. I'm only using GL_RGBA for convenience as it seems more natural to take 2 floats out of 2×2 than out of 3.
Is there another way which would read all the resulting vec2 values tightly packed? I thought of reading RED only, somehow convincing OpenGL to skip four bytes after each value, and then reading GREEN only into the same array to fill in the gaps. To this end I tried to study glPixelStore, but it does not seem to be meant for this purpose. Is this, or any other way, even possible?
If you only want to read the RG components of the image, you use a transfer format of GL_RG in your glReadPixels command.
However, that's going to be a slow read unless your image also only stores 2 channels. So your image's internal format should be GL_RG32F.
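A minimal sketch of what that could look like, assuming the same texSize, texture, and data variables as in the question:

// Allocate a two-channel float texture and attach it as the render target
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG32F, texSize, texSize, 0, GL_RG, GL_FLOAT, nullptr);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);

// ... run the computation ...

// Read the vec2 results back tightly packed, two floats per pixel
glReadPixels(0, 0, texSize, texSize, GL_RG, GL_FLOAT, data);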
I have an FBO with 4 frame buffer textures. These textures are 4 different sizes.
texture 1 = 512 * 360
texture 2 = 256 * 180
texture 3 = 128 * 90
texture 4 = 64 * 45
The problem is that if texture 4 and texture 1 for example are attached using:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, bl_64, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, bl_128, 0);
When I draw to texture 1 the image on the texture only takes up the size of texture 4.
m_blurrFBO.DrawingBind();
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &BlurrPass);
glViewport(0, 0, 512, 360);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
DrawQuad();
FBO::UnbindDrawing();
The reason I'm using 4 different textures with different sizes is that I'm downsampling the same image 4 times, each half the size of the last.
The problem is that the code has been tested on 5 different computers, all with either AMD or NVIDIA cards, and it works as expected on them. I have up-to-date drivers for my NVIDIA GTX 550. Is this a known problem?
This problem is called "That's how OpenGL works."
The total size of a framebuffer is based on the smallest size, in each dimension, of all of the attached images. You are not allowed to render outside of any attached image. Even if you use write masking or draw buffers state so that you don't actually render to it, the available viewport size is always limited to the smallest size of the attached images.
As a general rule, do not attach an image to a framebuffer unless you are serious about rendering to it. If you want to do downsampling, swap FBOs or change attached images between each sampling pass.
Oh and BTW: it is undefined behavior (unless you're using GL 4.5 or ARB/NV_texture_barrier) to read from any texture object that is currently attached to the FBO. Again, write masks and draw buffers state is irrelevant; what matters is that the image is attached. So again, don't attach something unless you are writing to it.
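As a sketch of the structure suggested above (blurFBO and the bl_512 ... bl_64 handles are placeholders standing in for the question's already-created FBO wrapper and textures), attach only the current target before each pass:

GLuint targets[4] = { bl_512, bl_256, bl_128, bl_64 };  // already-created color textures
int    widths[4]  = { 512, 256, 128, 64 };
int    heights[4] = { 360, 180,  90, 45 };

glBindFramebuffer(GL_FRAMEBUFFER, blurFBO);             // blurFBO: already-created FBO handle
for (int i = 0; i < 4; ++i) {
    // Only the image being rendered to is attached in this pass
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, targets[i], 0);
    glViewport(0, 0, widths[i], heights[i]);
    glDrawBuffer(GL_COLOR_ATTACHMENT0);
    // Bind the previous (larger) level as the texture to sample from, then:
    DrawQuad();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);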
I wish to process an image using GLSL. For instance, for each pixel, output its squared value:
(r,g,b) --> (r^2,g^2,b^2). Then I want to read the result into CPU memory using glReadPixels.
This should be simple. However, most GLSL examples that I find are about shaders for image post-processing; thus, their output values already lie in [0,255]. In my example, however, I want output values in the range [0^2, 255^2], and I don't want them normalized to [0,255].
The main parts of my code are (after some trials and permutations):
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_BGR, GL_FLOAT, NULL);
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, data_float);
I don't post my entire code since I think these two lines are where my problem lies.
Edit
Following #Arttu's suggestion, and following this post and this post my code now reads as follows:
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA32F_ARB, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, data_float);
Still, this does not solve my problem. If I understand correctly, no matter what, my input values get scaled to [0,1] when I insert them. It's up to me to multiply later by 255 or by 255^2...
Using a floating-point texture format will keep your values intact without clamping them to any specific range (within the limits of 16-bit float representation, of course). You didn't specify your OpenGL version, so this assumes 4.3.
You seem to have conflicting format and internalformat. You're specifying internalformat RGBA16F, but format BGR, without the alpha component (glTexImage2D man page). Try the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_BGRA, GL_FLOAT, NULL);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, data_float);
On the first line you're specifying a 2D texture with a four-component, 16-bit floating-point format, and OpenGL will expect the texture data to be in BGRA order. Since you pass NULL as the last parameter, you're not specifying any image data. Remember that the RGBA16F format gives you half values in your shader, which will be implicitly cast to 32-bit floats if you assign the values to float or vec* variables.
On the second line, you're downloading image data from the device to data_float, this time in RGBA order.
If this doesn't solve your problem, you'll probably need to include some more code. Also, adding glGetError calls into your code will help you find the call that causes an error. Good luck :)
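Putting the pieces together, a minimal sketch of the offscreen pass (tex, fbo, width, height, and data_float are placeholders for the asker's own objects; error checking omitted):

// Float color target: values written by the shader are not clamped to [0,1]
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_BGRA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// ... draw a fullscreen quad with a fragment shader that outputs
//     vec4(c.r * c.r, c.g * c.g, c.b * c.b, 1.0) ...

// Read the unclamped float results back to CPU memory
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, data_float);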
I have a texture which has only 1 channel, as it's a grayscale image. When I pass the pixels into glTexImage2D, it comes out red (obviously, because channel 1 is red; RGB).
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
In the fragment shader, you can write:
uniform sampler2D A;
vec3 result = vec3(texture(A, TexCoord).r);
In the cpp file, you can write:
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RED,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);
It appears that I should use GL_LUMINANCE instead of GL_RGBA for the 3rd argument.
Edit (in reply to comments):
When I set the 7th argument to GL_LUMINANCE (as well as the 3rd), the picture goes completely distorted. With the DICOM pixel format, it appears that the 7th argument must be GL_RGBA for some reason.
The strange behavior is because I'm using the DICOM standard. The particular DICOM reader I am using outputs integer pixel values (as pixel values may exceed the normal maximum of 255). For some strange reason, the combination of telling OpenGL that I am using an RGBA format but passing in integer values rendered a perfect image.
Because I was truncating the DICOM pixel values above 255 anyway, it seemed logical to copy the values into a GLbyte array. However, after doing so, a SIGSEGV (segmentation fault) occurred when calling glTexImage2D. Changing the 7th parameter to GL_LUMINANCE (as is normally required) returned the functionality to normal.
Weird eh?
So, a note to all developers using the DICOM image format: you need to convert the integer array to a char array before passing it to glTexImage2D, or just set the 7th argument to GL_RGBA (the latter is probably not recommended).
You would use the GL_LUMINANCE format in old versions of OpenGL, but in modern (3.0+) OpenGL versions GL_LUMINANCE is deprecated, so the new way of doing it is to use the GL_RED format. That alone would result in a red texture, so to get around this you should create a custom shader, as the answers above have shown. In that shader you grab the red component of the texture (as it's the only channel with data) and set the green and blue channels to the red channel's value. That converts it to grayscale, because a grayscale texture has all three RGB channels the same and the alpha channel set to 1.
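If you would rather not touch the shader, a texture swizzle (GL 3.3+ or ARB_texture_swizzle) can do the same red-to-grayscale replication at the texture level; a minimal sketch, reusing the GL_RED upload from the answer above:

glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RED,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RED, GL_UNSIGNED_BYTE, pixelArrayPtr);

// Replicate the red channel into green and blue whenever the texture is sampled
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);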