Choosing the smallest EGLConfig for offscreen rendering

I'm developing an application which runs on different EGL implementations and platforms, including Windows (ANGLE) and Linux (Mesa, NVIDIA). The application spends most of its time doing offscreen rendering in many threads, hence many EGLContexts with FBOs in them.
Is it necessary to choose an EGLConfig with the required color buffer size for an off-screen context and pbuffer if I'm only rendering to an FBO? Or is it sufficient to choose the smallest EGLConfig available?
Example: the system provides EGLConfigs with RGB565 and no depth/stencil (16-bit), RGB888 (24-bit), and RGBA8888 (32-bit). I create an EGLContext with the RGB565 config, then create an FBO and attach an RGBA texture to it together with depth and stencil renderbuffers.
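For concreteness, the setup I have in mind looks roughly like this (a sketch with illustrative attribute values, assuming an OpenGL ES 2 context; the 1x1 pbuffer exists only so the context can be made current, since all real rendering goes to the FBO):

EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(dpy, NULL, NULL);

/* Ask for the smallest pbuffer-capable config: RGB565, no depth/stencil. */
const EGLint cfgAttribs[] = {
    EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_BLUE_SIZE, 5,
    EGL_DEPTH_SIZE, 0, EGL_STENCIL_SIZE, 0,
    EGL_NONE
};
EGLConfig cfg;
EGLint numCfgs = 0;
eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfgs);

/* Tiny pbuffer just to bind the context. */
const EGLint pbAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
EGLSurface pbuf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctxAttribs);
eglMakeCurrent(dpy, pbuf, pbuf, ctx);
/* From here: create the FBO with an RGBA texture plus depth/stencil renderbuffers. */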

Related

Render at higher resolution than default framebuffer on OpenGL 2.1 without FBOs?

I am targeting an OpenGL 2.1 context which does not support framebuffer objects. My application originally used a texture framebuffer at double the native resolution to perform shader effects and some basic supersampling. This texture is later drawn to the screen as a simple quad and can also be saved as a higher-resolution image file.
Without framebuffer objects I can use glCopyTexSubImage2D to copy pixels from the default framebuffer to a texture; however, I am unable to find a way to render my scene at a larger scale than the initial framebuffer.
Given my limitation on framebuffer objects, and the smaller initial framebuffer size, is there any way to draw at a higher (2x scaled) resolution without additional FBOs?
One idea I had was to draw my scene in four higher-resolution sections and then stitch them together, but this seems overly complex and slow.
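Roughly, the tiling idea would look like this (setTileProjection and drawScene stand in for my own code, winW/winH is the window size, and the target texture is allocated at twice that size):

GLuint bigTex;
glGenTextures(1, &bigTex);
glBindTexture(GL_TEXTURE_2D, bigTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
/* Allocate a texture twice the window size (must still fit GL_MAX_TEXTURE_SIZE). */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2 * winW, 2 * winH, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

for (int ty = 0; ty < 2; ++ty) {
    for (int tx = 0; tx < 2; ++tx) {
        setTileProjection(tx, ty);   /* narrow the frustum to this quadrant of the view */
        drawScene();                 /* render into the default framebuffer as usual */
        glBindTexture(GL_TEXTURE_2D, bigTex);
        /* Copy the whole window into the matching quadrant of the big texture. */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, tx * winW, ty * winH,
                            0, 0, winW, winH);
    }
}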

Using layered rendering with default framebuffer

I know that we can attach a layered texture, which is:
A mipmap level of a 1D/2D texture array
A mipmap level of a 3D texture
A mipmap level of a cube map texture / cube map texture array
to an FBO and do layered rendering.
The OpenGL wiki also says "Layered rendering is the process of having the GS send specific primitives to different layers of a layered framebuffer."
Can the default framebuffer be a layered framebuffer? I.e., can I bind a 3D texture to the default FB and use a geometry shader to render to different layers of this texture?
I tried writing such a program, but the screen is blank and I am not sure whether this is even valid.
If it is not, what actually happens when I bind the default FB for layered rendering?
If you use the default framebuffer for layered rendering, then everything you draw will go straight to the default framebuffer; the primitives will behave as if they were all in the same layer.
OpenGL 4.4 Core Specification, section 9.8 "Layered Framebuffers", p. 296:
A framebuffer is considered to be layered if it is complete and all of its populated attachments are layered. When rendering to a layered framebuffer, each fragment generated by the GL is assigned a layer number.
[...]
A layer number written by a geometry shader has no effect if the framebuffer is not layered.
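For contrast, a layered render target has to be a user FBO with a layered attachment. A minimal sketch (GL 3.2+; the size and layer count here are arbitrary):

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 512, 512, 4,   /* 4 layers */
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Attaching the whole array (not a single layer) makes the attachment layered. */
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);
/* While this FBO is bound, the geometry shader selects the target layer via gl_Layer. */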

What's the difference between an image and a texture in OpenGL

I have just started learning OpenGL and I am confused about images and textures.
Is an image only used to shade a 2D scene, while vertices and textures are used to shade a 3D scene? (I mean in the order of operations from the OpenGL Programming Guide: first we have vertex data and image data, and we can either use the image data as a texture or not. When it is not used as a texture, can it only be used as a background for the scene?)
Are texture operations faster than image operations?
In terms of OpenGL, an image is an array of pixel data in RAM. You can, for instance, load a smiley.tga into RAM using standard C functions; that would be an image. A texture is what you get when the image data is loaded into video memory by OpenGL. This can be done like this:
GLuint texID;
glGenTextures(1, &texID);                 /* reserve a texture name */
glBindTexture(GL_TEXTURE_2D, texID);      /* make it the current 2D texture */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, imagedata); /* upload the pixels to video memory */
After the image has been loaded into video memory, the original imagedata in RAM can be free()ed. The texture can now be used by OpenGL.
I am new to OpenGL too, and not really good with low-level computation/GPU architecture, but this is something I have just learned and I'll explain it to you.
In OpenGL you don't have an "image" object like in some other frameworks (QImage in Qt, BufferedImage in Java Swing, the Mat object in OpenCV, etc.); instead you have buffers, into which you can render through the OpenGL pipeline or in which you can store images.
If you open an OpenGL context you are rendering into the default framebuffer (on the screen). But you can also do off-screen rendering into other framebuffers, for example into a renderbuffer. Then there are other buffers on the GPU that store images and can be used as textures.
Those OpenGL buffers (renderbuffers or textures) are stored inside the GPU, and transfers between GPU and CPU (in either direction) have a cost. The transfer from the GPU to the CPU is an operation called a pixel transfer.
Some examples:
You can load an image (a bitmap from a file, for example) and use it as a texture. What are you doing here? You load an image (a matrix of pixels) from disk on the CPU, then pass it to the GPU and store it in a texture buffer. Then you can tell OpenGL to render a cube with that texture.
You can render a complex scene to a renderbuffer (off-screen render) on the GPU, read it back to the CPU through a pixel transfer operation, and save it to a file.
You can render a scene into a texture (render to texture), then change the camera parameters, render to the screen, and use the previously rendered texture on a mirror object (see the sketch below).
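The render-to-texture example could look roughly like this with FBOs (the sizes are just placeholders):

GLuint mirrorTex, depthRb, fbo;
glGenTextures(1, &mirrorTex);
glBindTexture(GL_TEXTURE_2D, mirrorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mirrorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

/* 1) Render the reflected view into the FBO here ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* 2) Render to the screen and bind mirrorTex when drawing the mirror object. */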
Operations on the GPU are faster than the ones on the CPU. I think (but I'm not sure) that operations on textures are a little bit slower than operations on renderbuffers (because, for example, a texture can have mipmaps while a renderbuffer cannot).
I think that for the background of a scene you have to use a texture (or a shader that loads pixels from a buffer).
So, in the end, you don't have an image object in OpenGL. You have buffers: you can store data in them (transfer from CPU to GPU), you can render into them (off-screen render), and you can read pixels back (pixel transfer from GPU to CPU). Some of them can be used as textures, which adds some extra capabilities: they can be sampled at different resolutions through mipmaps, filtered in various ways, antialiased, clamped, and so on.
Hope this helps a little bit. Sorry if there are some mistakes but I'm learning OpenGL too.

Separate Frame Buffer and Depth Buffer in OpenGL

In DirectX you are able to have separate render targets and depth buffers, so you can bind a render target and a depth buffer, do some rendering, remove the depth buffer and then do more rendering using the old depth buffer as a texture.
How would you go about this in OpenGL? From my understanding, you have a framebuffer object that contains both the color buffer(s) and an optional depth buffer. I don't think I can bind several framebuffer objects at the same time; would I have to recreate the framebuffer object every time it changes (probably several times a frame)? How do normal OpenGL programs do this?
A Framebuffer Object is nothing more than a series of references to images. These can be images in Textures (such as a mipmap level of a 2D texture) or Renderbuffers (which can't be used as textures).
There is nothing stopping you from assembling an FBO that uses one texture's image for its color buffer and another texture's image for its depth buffer. Nor is there anything stopping you from later sampling from that texture as a depth texture (so long as you're not rendering to the FBO while doing so). The FBO does not suddenly own these images exclusively.
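A rough sketch of that setup (GL 3.x style; the sizes and formats are just placeholders):

GLuint colorTex, depthTex, fbo;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,  GL_TEXTURE_2D, depthTex, 0);

/* ... render the pass that fills the depth buffer ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, depthTex);   /* now sample the old depth as a plain texture */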
In all likelihood, what has happened is that you've misunderstood the difference between an FBO and OpenGL's default framebuffer. The default framebuffer (i.e. the window) is unchangeable. You can't take its depth buffer and use it as a texture. What you do with an FBO is your own business, but OpenGL won't let you play with its default framebuffer in the same way.
You can bind multiple render targets to a single FBO, which should do the trick. Also, since OpenGL is a state machine, you can change the bindings and the number of targets whenever required.

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to make use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that, all operations that use or affect textures will use the texture specified by the name you passed to glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage2D you can also free the system memory holding your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
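A minimal usage sketch in old fixed-function style (texID being the name created in the steps above):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();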
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE, which is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because the texture has to fit into it together with other resources such as the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped out.
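You can query the limit at run time, for example:

GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   /* commonly 4096, 8192 or 16384 */
/* Any texture with width or height above maxSize cannot be created. */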
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored, because the driver normally does a very good job with that. Textures that are used often are also more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can also check where textures currently reside with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
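A minimal sketch of that idea in GL 2.x style C (for brevity, the shader computes a trivial checkerboard from the built-in gl_TexCoord[0] rather than the Mandelbrot set):

const char *fsSrc =
    "void main() {\n"
    "    vec2 uv = gl_TexCoord[0].xy;   /* texture coordinates from the vertex stage */\n"
    "    float c = mod(floor(uv.x * 8.0) + floor(uv.y * 8.0), 2.0);\n"
    "    gl_FragColor = vec4(c, c, c, 1.0);\n"
    "}\n";

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSrc, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, fs);   /* fragment-only program: vertex processing stays fixed-function */
glLinkProgram(prog);
glUseProgram(prog);         /* from now on, each fragment's color is computed on the fly */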
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in windowed mode. As I mentioned, the textures you create this way never actually occupy space in texture memory; they are created on the fly. One could probably think of a way to capture and cache the generated texture, but this can be somewhat complex and requires multiple rendering passes.
You can learn more about it if you look up "GLSL" (the OpenGL Shading Language) on Google.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using Frame Buffer Objects. PBuffer render-to-texture is the older method and has wider support; Frame Buffer Objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be HW accelerated. In fact, OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a top-level window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code and employ a simpler, faster buffer-swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hardware and software the effect is very small compared to other influences.