GLSL renders texture as black and blue - C++

I'm trying to render a texture with OpenGL and GLSL. The texture is supposed to be rendered on a floating cube.
texture: http://imgur.com/Actqtx1
result: http://imgur.com/MXIOEvS
The cube is a strange mix of blue and black. Even when I try other textures, the result is the same. In the screenshot above, I have rendered a plane using "fract(worldspace)" to ensure that the shaders are working.
It is apparent that the line "color = texture(myTextureSampler, UV).rgb;" is producing the wrong color, but I do not know why. The texture coordinates and texture data appear to be read and buffered correctly.
Has anyone seen this effect before? Does anyone know where my problem may lie? I can provide code snippets upon request.

You're looking in the wrong direction. It's not your shader that's wrong (well maybe it is). Your problems start much earlier in the texture loading process.
You see how your texture seems to be strangely skewed? That usually happens if the alignment and pixel row strides have not been set properly before calling glTexImage; see the glPixelStorei(GL_UNPACK_…) parameters.
The other problem I see is that whatever you loaded into OpenGL bears no resemblance to your original picture whatsoever. It looks like bit noise, which tells me that you're probably feeding OpenGL compressed data, or maybe even the picture file as-is.
OpenGL does not know how to deal with image file formats. There are a few special compression formats it knows, but these are compression formats used by GPUs to reduce memory bandwidth requirements, not something like PNG or JPEG.
If you don't have one of those special texture compression formats at hand, OpenGL expects a raw pixel array.
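A minimal sketch of such a raw upload, assuming the image has already been decoded into a tightly packed RGB byte array by some image loader (the loader, the helper name and the variable names are placeholders, not taken from the question):

    #include <GL/glew.h>
    #include <vector>

    // 'pixels' is assumed to hold width*height*3 bytes of decoded RGB data,
    // e.g. produced by stb_image or a similar loader (not shown here).
    GLuint uploadRGBTexture(const std::vector<unsigned char>& pixels,
                            int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // Rows of a tightly packed RGB image are usually not 4-byte aligned,
        // so relax the default unpack alignment before uploading.
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        // Hand OpenGL raw pixels, never a compressed file such as PNG or JPEG.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }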

Related

OpenGL ES - How to improve performance, render to texture, blending

I am here because I'm working on an OpenGL program and I have some performance issues. I work with OpenGL ES 3.0 on an iMX6 SoC.
Here is my algorithm:
I get an image from the camera, which is directly mapped to a texture.
Using an FBO, I render to a texture to map the image onto a specific shape.
I do the same thing (with a second FBO) for another image, which is sent via shared memory by another application. This step is performed only if the image has been updated, which happens only once per second.
I blend these two textures in the default framebuffer to render the result to the screen.
If I perform these three steps separately, it works well and the screen is updated at 30 FPS. But when I include all three steps in one program, the rendering is very slow and I get only 0.5 FPS.
I am wondering if the GPU on the iMX6 is powerful enough, but I don't think this is a complex algorithm. I think I am doing something the wrong way, but what?
I use 3 different framebuffers, so is that a good approach, or should I use only one?
Can someone give me answers, clues, anything that can help me? :-)
My images are 1280x1024, RGBA. I am also doing some conversions from floating-point textures to integer and back to float; this is done to perform bitwise operations on pixels.
Thanks to @Columbo, it turned out the problem came from all the conversions. I now work with floating-point textures throughout and only do the conversion for the bitwise operations, which improves the performance of the algorithm a lot.
Another point that decreased performance was the texture format. For the first step, the image was 1280x1024 but with only one component (a grayscale image). To keep only the grayscale component and not use too much memory, I worked with a GL_RED texture, but this wasn't a good idea: when I changed it to GL_RGB, the framerate of the rendering doubled as well.
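To illustrate just the texture-format point, a minimal sketch of the two allocations being compared, assuming OpenGL ES 3.0 sized internal formats (the helper name and the hard-coded size are placeholders, not the original code):

    #include <GLES3/gl3.h>

    // Hypothetical helper contrasting the two allocations; the texture object
    // is assumed to be created and bound by the caller.
    void allocateCameraTexture(bool singleChannel)
    {
        if (singleChannel) {
            // Grayscale camera image as a one-component texture: least memory...
            glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 1280, 1024, 0,
                         GL_RED, GL_UNSIGNED_BYTE, nullptr);
        } else {
            // ...but on this GPU the RGB allocation rendered about twice as fast.
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1280, 1024, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, nullptr);
        }
    }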

Overwrite pixel per pixel in an OpenGL 2D texture

I want to create an OpenGL 2D texture and set the RGBA values of every pixel individually. Can someone explain how to do this? I didn't find an explanation on the internet.
If you're just looking to write the pixels of a 2D texture, you can simply use glTexImage2D, which takes a buffer specifying the pixel data you wish to upload to the texture (https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml). Alternatively, you can use glTexSubImage2D to write a portion of the texture's pixels (https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexSubImage2D.xml). If you're instead looking to do the analogous thing with the framebuffer, you can use glDrawPixels (https://www.opengl.org/sdk/docs/man2/xhtml/glDrawPixels.xml).
If the target is the backbuffer, it is also possible to draw exact pixel values by uploading them to a texture and then rendering a textured quad that completely covers the screen. However, this process is subject to blending and potentially to pixel-center issues, whereas glDrawPixels is not.
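For the texture route, a minimal sketch of per-pixel writes, assuming an already-created RGBA8 texture (the helper name and the pixel pattern are only illustrative):

    #include <GL/glew.h>
    #include <vector>

    // Fill a CPU-side buffer with one RGBA value per pixel, then upload it
    // into an existing RGBA8 texture with glTexSubImage2D.
    void writePixels(GLuint texture, int width, int height)
    {
        std::vector<unsigned char> rgba(static_cast<size_t>(width) * height * 4);
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                size_t i = (static_cast<size_t>(y) * width + x) * 4;
                rgba[i + 0] = static_cast<unsigned char>(x % 256); // R
                rgba[i + 1] = static_cast<unsigned char>(y % 256); // G
                rgba[i + 2] = 0;                                   // B
                rgba[i + 3] = 255;                                 // A
            }
        }

        glBindTexture(GL_TEXTURE_2D, texture);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    }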
I did something like this some time ago, when playing around with OpenGL.
Have a look at the code here, on GitHub.
You can find it in main.cpp.
Basically, my idea was to create an array of floats, set the values, copy it to the GPU with glBufferData, and draw with glDrawElements.
As I remember it, doing this frequently was very bad in terms of performance, so it's probably not the best direction.
Please also note that this code is just my sandbox, and may not be the best possible example to be copied.

Accessing rendered OpenGL image

I am rendering an image using OpenGL in C++, and I want to access the resulting image to do some more processing on it. (I'm rendering an image, I have an actual image it's supposed to look like, and I want to compute the pixel difference between the two.)
So far I have only been rendering images to the screen, though, and I can't figure out how to render an image and then get direct access to the pixels that were drawn. I don't especially care whether I can see the image on the screen or not; all I want is for the image to be rendered to some region of memory that I can access from the CPU. How do you do this?
Alternatively, would it be possible to send the image it's supposed to look like to OpenGL and compute the pixel difference on the GPU? Either option is fine with me, but the faster I can make it the better. (Right now, I can render about 100 frames per second, but still haven't figured out how to do the comparisons.)
Yes, you could do it on the GPU. Put the 2 images in textures. Draw a frame-filling quad multi-textured with the two textures, and be sure to provide texture coordinates. Write a fragment shader to compute the difference. (When a commenter asked if you wanted to use a programmable pipeline, this is one reason it matters. If you only use the fixed-function pipeline, you wouldn't have the option of writing a fragment shader.)
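A sketch of what such a difference shader could look like, written here as a GLSL source string embedded in C++ (the sampler and variable names are illustrative, not taken from the question):

    // GLSL 3.30 fragment shader that outputs the absolute per-channel
    // difference between two textures sampled at the same coordinates.
    static const char* kDiffFragmentShader = R"glsl(
    #version 330 core
    uniform sampler2D renderedImage;   // the scene you rendered
    uniform sampler2D referenceImage;  // the image it should look like
    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        vec3 a = texture(renderedImage, uv).rgb;
        vec3 b = texture(referenceImage, uv).rgb;
        fragColor = vec4(abs(a - b), 1.0);
    }
    )glsl";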
The obvious way would be to use glReadPixels to read the rendered results from the framebuffer back into host memory.
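A minimal glReadPixels sketch, assuming the frame has already been rendered and the desired framebuffer is bound (the function name is a placeholder):

    #include <GL/glew.h>
    #include <vector>

    // Copy the current framebuffer contents into CPU memory as RGBA bytes.
    std::vector<unsigned char> readFramebuffer(int width, int height)
    {
        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows tightly packed
        glReadPixels(0, 0, width, height,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        // Note: rows come back bottom-up, and the call stalls until rendering
        // has finished, so avoid reading back more often than necessary.
        return pixels;
    }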

Clarification needed on Bloom and Post-Processing (DirectX 10 / 11)

For the last few days I have been reading a lot of articles about post-processing with bloom etc., and I was able to implement render-to-texture functionality, with this texture running through a separate shader.
Now I have some questions regarding the whole thing.
Do I have to render both the scene and the texture put on a full-screen quad?
How does bloom, or any other post-processing effect (DOF, blur), work with this render-to-texture functionality? Or is this something completely different?
I don't really understand the concept of the back and front buffer and how to make use of it for post-processing.
I have read something about volumetric light rendering where they render the scene something like 6 times with different color settings. Isn't this quite inefficient? Or was my understanding just incorrect?
Thanks to anyone who cares to explain these things to me ;)
Let me try to answer some of your questions:
Yes, you have to render both.
DOF is typically implemented by rendering a "blurriness" factor into an offscreen buffer, where a post-processing filter then uses this factor to blur certain pixels more than others (with some compensation for color-leaking between sharp and blurred objects). So yes, the basic idea is the same, render to a buffer, process it and then display it (with or without blending it on top of the original scene).
The back buffer is what you render to (what the user will see on the next frame). All offscreen rendering is done to other render targets that you create and use; see the minimal D3D11 sketch after this answer.
I don't quite understand what you mean. Please provide a link to what you read so I can try to understand and perhaps explain it.
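A minimal D3D11 sketch of such an offscreen render target, usable both as a render target and as a shader input for the post-processing pass (the device pointer, the format choice and the helper name are assumptions for illustration, not code from the question):

    #include <d3d11.h>

    // Create a texture that can be rendered to and later sampled by the
    // post-processing pixel shader (e.g. the bloom bright-pass/blur chain).
    ID3D11ShaderResourceView* CreateOffscreenTarget(ID3D11Device* device,
                                                    UINT width, UINT height,
                                                    ID3D11RenderTargetView** outRtv)
    {
        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = width;
        desc.Height = height;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R16G16B16A16_FLOAT;   // allows values > 1.0
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        ID3D11Texture2D* tex = nullptr;
        device->CreateTexture2D(&desc, nullptr, &tex);
        device->CreateRenderTargetView(tex, nullptr, outRtv);

        ID3D11ShaderResourceView* srv = nullptr;
        device->CreateShaderResourceView(tex, nullptr, &srv);
        tex->Release();   // the views keep the texture alive
        return srv;
    }

    // Per frame (sketch): render the scene with OMSetRenderTargets(1, &rtv, depthView),
    // then switch back to the back buffer and bind the SRV with PSSetShaderResources
    // before drawing the full-screen quad with the post-processing pixel shader.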
Suppose that:
you have the "luminance" for each rendered pixel in a single texture
this texture holds floating-point values that can be greater than 1.0
Now:
You do a blur pass (possibly a separable blur), only considering pixels with a value greater than 1.0, and put the blur result in another texture.
Finally:
In a last shader you do the final presentation to the screen. You sample both the "luminance" (clamped to 1.0) and the "blurred excess luminance", add them, and obtain the so-called bloom effect.
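A sketch of that final combine pass as an HLSL pixel shader stored in a C++ string (the texture and register names are illustrative, not taken from the answer):

    // Final bloom composition: clamp the base luminance to 1.0, add the
    // blurred excess luminance, and write the result to the screen.
    static const char* kBloomCombinePS = R"hlsl(
    Texture2D luminanceTex     : register(t0);
    Texture2D blurredExcessTex : register(t1);
    SamplerState samp          : register(s0);

    float4 main(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
    {
        float3 base  = saturate(luminanceTex.Sample(samp, uv).rgb);  // clamp to 1.0
        float3 bloom = blurredExcessTex.Sample(samp, uv).rgb;
        return float4(base + bloom, 1.0);
    }
    )hlsl";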

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant to solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, but from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which if you're hijacking an existing app will probably be a 2D pixel image of the frame plus some other buffers that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as a render target within a framebuffer. So the rendering code is exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to produce their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered into the original texture, obviously. That is probably what you'd get in your Quake example if you had just one of the sides of the cube available rather than all six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
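A sketch of that setup, assuming desktop OpenGL 3.x; the names, the resolution handling and the particular fisheye mapping in the shader are illustrative choices, not taken from the question (a real setup would usually also attach a depth buffer):

    #include <GL/glew.h>
    #include <cstdio>

    // Create an FBO whose color attachment is a texture, so the application's
    // rendering ends up in 'colorTex' instead of the default framebuffer.
    GLuint createRenderTarget(int width, int height, GLuint* colorTex)
    {
        glGenTextures(1, colorTex);
        glBindTexture(GL_TEXTURE_2D, *colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, *colorTex, 0);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            std::fprintf(stderr, "FBO incomplete\n");
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return fbo;
    }

    // Second pass: draw a full-screen quad that samples the FBO texture with a
    // simple radial distortion to approximate a fisheye lens.
    static const char* kFisheyeFragmentShader = R"glsl(
    #version 330 core
    uniform sampler2D sceneTex;
    uniform float strength;   // e.g. 0.5
    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        vec2 centered = uv * 2.0 - 1.0;                    // map to [-1, 1]
        float r = length(centered);
        vec2 distorted = centered * (1.0 - strength * r * r);
        fragColor = texture(sceneTex, distorted * 0.5 + 0.5);
    }
    )glsl";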
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software framebuffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can point shaders at an FBO. The link above gives an overview of the procedure.