I have just started learning OpenGL and I am confused about images and textures.
Is an image only used to shade a 2D scene, while vertices and textures are used to shade a 3D scene? (I mean in the order of operations from the OpenGL Programming Guide: first we have vertex data and image data, and we can either use the image data as a texture or not. When it is not used as a texture, can it only be used as a background for the scene?)
Are texture operations faster than image operations?
In terms of OpenGL, an image is an array of pixel data in RAM. You could, for instance, load a smiley.tga into RAM using standard C functions; that would be an image. A texture is what you get when the image data is loaded into video memory by OpenGL. That can be done like this:
GLuint texID;                          /* a texture name, not a pointer */
glGenTextures(1, &texID);              /* ask OpenGL for one texture name */
glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
/* upload the image data from RAM into the texture in video memory */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imagedata);
After the image has been loaded into video memory, the original image data in RAM can be free()d. The texture can now be used by OpenGL.
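As a rough sketch of the surrounding steps (the raw file name, the dimensions, and the tightly packed 24-bit RGB layout are assumptions for illustration; it needs <stdio.h> and <stdlib.h>, and error checking is omitted), loading the pixel data with standard C functions and releasing it after the upload could look like this:
int width = 256, height = 256;                        /* assumed dimensions */
unsigned char *imagedata = malloc(width * height * 3);
FILE *f = fopen("smiley.raw", "rb");                  /* assumed raw 24-bit RGB file */
fread(imagedata, 1, (size_t)width * height * 3, f);
fclose(f);

/* ...glGenTextures / glBindTexture / glTexImage2D as shown above... */

free(imagedata);                                      /* the texture now lives in video memory */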
I am new to OpenGL too, and not really good with low-level computation or GPU architecture, but this is something I have just learned, so I will explain it as I understand it.
In OpenGL you don't have an "image" object like in some other frameworks (QImage in Qt, BufferedImage in Java Swing, the Mat object in OpenCV, etc.). Instead, you have buffers into which you can render through the OpenGL pipeline or in which you can store images.
If you open an OpenGL context, you are rendering into the default framebuffer (the screen). But you can also render off-screen into other framebuffers, for example into a renderbuffer. And there are buffers that store images on the GPU and can be used as textures.
These OpenGL buffers (renderbuffers or textures) live in GPU memory, and the transfer from GPU to CPU or vice versa has a cost. The transfer from the GPU to the CPU is an operation called a pixel transfer.
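For example, a pixel transfer from the GPU back to the CPU is typically done with glReadPixels; the sizes below are just placeholders:
/* Read the current framebuffer back into CPU memory (a pixel transfer). */
unsigned char *pixels = malloc((size_t)width * height * 4);   /* width/height assumed to match the framebuffer */
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* ...save the pixels to a file, process them on the CPU, etc... */
free(pixels);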
Some examples:
You can load an image (a bitmap from a file, for example) and use it as a texture. What are you doing here? You load an image (a matrix of pixels) from disk on the CPU, then pass it to the GPU and store it in a texture buffer. Then you can tell OpenGL to render a cube with that texture.
You can render a complex scene into a renderbuffer (off-screen rendering) on the GPU, read it back to the CPU through a pixel transfer operation, and save it to a file.
You can render a scene into a texture (render to texture), then change the camera parameters, render to the screen, and use the previously rendered texture on a mirror object.
Operations on the GPU are faster than the ones on the CPU. I think (but I'm not sure) that operations on textures are a little slower than operations on a renderbuffer (because, for example, a texture can have mipmaps and a renderbuffer cannot).
I think that for a background of the scene you have to use a texture (or a shader that loads pixels from a buffer).
So, in the end, you don't have an image object in OpenGL; you have buffers. You can store data in them (transfer from CPU to GPU), render into them (off-screen rendering), and read pixels back from them (pixel transfer from GPU to CPU). Some of them can be used as textures, which adds a few more capabilities: they can be scaled to different resolutions with mipmaps, filtered, antialiased in different ways, clamped, and so on.
Hope this helps a little bit. Sorry if there are some mistakes but I'm learning OpenGL too.
I have an RGBA pixmap (e.g., an antialiased circular 4x4 dot) that I want to draw over a texture in a way similar to a brush stroke. The obvious solution of using glTexSubImage2D just overwrites a rectangular area with no respect to the alpha value. Is there a better solution than the obvious one of maintaining a mirrored version of the texture in local RAM, doing the blending there, and then using glTexSubImage2D to upload it, preferably an OpenGL/GPU-based one? Is an FBO the way to go?
Also, is using an FBO for this efficient, both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation, etc.) and in terms of speed? With a 4x4 object in RAM, doing the blending on the CPU is basically transforming a 4x4 matrix with basic float arithmetic, totalling 16 simple math iterations and one glTexSubImage2D call... Is setting up an FBO, switching rendering contexts, and doing the rendering still faster?
Benchmarking data would be much appreciated, as would MCVEs/pseudocode for proposed solutions.
Note: creating separate alpha-blended quads for each stroke is not an option, mainly due to the very large number of strokes used. Go science!
You can render to a texture with a framebuffer object (FBO).
At the start of your program, create an FBO and attach the texture to it. Whenever you need to draw a stroke, bind the FBO and draw the stroke as if you were drawing it to the screen (with triangles). The stroke gets written to the attached texture.
For your main draw loop, unbind the FBO, bind the attached texture, and draw a quad over the entire screen (from -1,-1 to 1,1 without using any matrices).
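A rough sketch of that setup, assuming the stroke texture already exists and that DrawStroke / DrawFullscreenQuad are your own placeholder drawing routines:
/* One-time setup: create an FBO and attach the existing texture to it. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, strokeTexture, 0);    /* strokeTexture: assumed existing texture */
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* Whenever a new stroke arrives: render it into the texture. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, texWidth, texHeight);                      /* assumed texture size */
DrawStroke();                                               /* placeholder: alpha-blended quad for the stroke */
glBindFramebuffer(GL_FRAMEBUFFER, 0);

/* Main draw loop: display the accumulated strokes. */
glBindTexture(GL_TEXTURE_2D, strokeTexture);
DrawFullscreenQuad();                                       /* placeholder: quad from (-1,-1) to (1,1) */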
Also, is using an FBO for this efficient, both in terms of maintaining 1:1 graphics quality (no artifacts, interpolation, etc.) and in terms of speed?
Yes.
If the attached texture is as big as the window, then there are no artifacts.
You only need to switch to the FBO when adding a new stroke, after which you can forget about the stroke since it's already rendered to the texture.
The GPU does all of the sampling, interpolation, blending, etc., and it is much better at it than the CPU (after all, that is what the GPU is designed for).
Switching FBOs isn't that expensive. Modern games can switch FBOs for render-to-texture several times a frame while still pumping out thousands of triangles; one FBO switch per frame isn't going to kill a 2D app, even on a mobile platform.
I'm trying to implement volumetric billboards in OpenGL 3.3+, as described here and in the video here.
The problem I'm facing now (quite basic) is: how do I render a 3D object into a 3D texture (as described in the paper) efficiently? Assuming the object could be stored in a 256x256x128 texture, creating 256*256*128*2 framebuffers (since it is said that the object should be rendered twice along each axis: +X, -X, +Y, -Y, +Z, -Z) would be insane, and as far as I know there are too few texture units to process that many textures (not to mention the amount of time it would take).
Does anyone have any idea how to deal with something like that?
A slice of a 3D texture can be attached directly to the current framebuffer. So create one framebuffer and a 3D texture, and then render like this:
glFramebufferTexture3D( GL_FRAMEBUFFER, Attachment, GL_TEXTURE_3D,
TextureID, 0, ZSlice );
...render to the slice of 3D texture...
So you need only one framebuffer, which you re-attach for each Z slice of your target 3D texture.
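A sketch of that loop, assuming fbo, volumeTexture, and numSlices are already set up and RenderSliceOfObject is your own drawing code:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
for (int z = 0; z < numSlices; ++z)
{
    /* Attach slice z of the 3D texture as the color target. */
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_3D, volumeTexture, 0, z);
    glClear(GL_COLOR_BUFFER_BIT);
    RenderSliceOfObject(z);   /* placeholder: render the geometry clipped to this slice */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);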
In DirectX you can have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer, and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap layer of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses one texture's image for its color buffer and another texture's image for its depth buffer. Nor is there anything stopping you from later sampling that texture as a depth texture (as long as you are not rendering to that FBO while doing so). The FBO does not suddenly own these images exclusively.
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's default framebuffer. The default framebuffer (i.e. the window) is unchangeable: you can't take its depth buffer and use it as a texture. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
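As an illustrative sketch (texture creation omitted; colorTex and depthTex are assumed to be existing textures, with depthTex using a GL_DEPTH_COMPONENT format):
/* Attach a color texture and a depth texture to one FBO. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

/* First pass: render the scene into colorTex/depthTex. */
/* ...draw calls... */

/* Later: stop rendering to the FBO, then sample depthTex like any other texture. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, depthTex);
/* ...draw with a shader that samples the depth texture... */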
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever it is required.
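A minimal sketch of multiple render targets on one FBO (fbo, tex0, and tex1 are assumed to exist):
/* Attach two color textures and tell OpenGL to write to both. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, tex1, 0);

GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, buffers);   /* fragment shader outputs 0 and 1 go to tex0 and tex1 */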
I want to draw fullscreen frames of a sequence and switch between them fast. I saw that I can attach multiple color attachments to a framebuffer.
I'm wondering whether it would be far cheaper to use renderbuffer attachments instead of my current textured-quad method.
How can I switch between attachments, by the way? Is there a maximum number of attachments?
I want to draw fullscreen frames of a sequence and switch between them fast.
Drawing images always means uploading the data to a texture and drawing a quad using that texture. Look into Pixel Buffer Objects for asynchronous data uploads, and into glTexStorage (an OpenGL 4.2 feature) for how to nail the memory layout down.
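A rough sketch of that combination, assuming an OpenGL 4.2 context and placeholder names (tex, width, height, frameData):
/* Immutable texture storage (OpenGL 4.2): the memory layout is fixed up front. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

/* Pixel Buffer Object for asynchronous uploads. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW);

/* Copy the next frame into the PBO (glMapBuffer would also work). */
glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, width * height * 4, frameData);

/* With a PBO bound, the data argument is an offset into the buffer,
   so the transfer into the texture can proceed asynchronously. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (void*)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);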
I saw that I can attach multiple color attachments to a framebuffer.
The framebuffers this applies to are off-screen framebuffer objects, not the on-screen framebuffer, so this won't help you in any way.
I need to display an image in an OpenGL window.
The image changes on every timer tick.
I've checked on Google how to do this, and as far as I can see it can be done using either glBitmap or glTexImage2D.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, the current raster color is written to the framebuffer. If the bit is 0, the pixel in the framebuffer is left untouched.
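A tiny illustrative sketch of that (the 8x8 cross pattern, the color, and the raster position are made up; these are legacy fixed-function calls):
/* Each byte is one row of 8 pixels; set bits get the current raster color. */
static const GLubyte cross[8] = { 0x18, 0x18, 0x18, 0xFF, 0xFF, 0x18, 0x18, 0x18 };
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows here are packed one byte each */
glColor3f(1.0f, 0.0f, 0.0f);             /* raster color used for the set bits */
glRasterPos2i(10, 10);                   /* latches the raster position and color */
glBitmap(8, 8, 0.0f, 0.0f, 0.0f, 0.0f, cross);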
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
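For the per-tick update, the two approaches look roughly like this (fixed-function style; tex, width, height, and imagedata are assumed to exist):
/* Option 1: draw the image directly into the framebuffer each tick. */
glRasterPos2i(0, 0);
glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, imagedata);

/* Option 2: update an existing texture, then draw a textured quad. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGB, GL_UNSIGNED_BYTE, imagedata);
/* ...draw a fullscreen quad (or two triangles) with this texture bound... */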
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective.