Example for rendering with Cg to an offscreen frame buffer object - C++

I would like to see an example of rendering with nVidia Cg to an offscreen frame buffer object.
The computers I have access to have graphics cards but no monitors (or X server), so I want to render my stuff and write it out as images on disk. The graphics cards are GTX 285s.

You need to create an off-screen buffer and render to it the same way as you would render to a window.
See here for an example (but without Cg):
http://www.mesa3d.org/brianp/sig97/offscrn.htm
Since you have a Cg shader, just enable it the same way as you would when rendering to a window.
EDIT:
For an FBO example, take a look here:
http://www.songho.ca/opengl/gl_fbo.html
but note that FBOs are not supported by all graphics cards.
You could also render to a texture and then copy the texture to main memory, but that is not very good performance-wise.
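For reference, a rough C++ sketch of that FBO path (assuming a valid headless GL context and an extension loader such as GLEW for the FBO entry points; the size and the drawing code are placeholders, and the Cg program is enabled exactly as it would be for a window):

    #include <GL/glew.h>   // assumed extension loader providing the FBO entry points
    #include <vector>

    void renderOffscreen()
    {
        const int width = 1024, height = 768;          // placeholder size

        // Color texture and depth renderbuffer for the FBO.
        GLuint color, depth, fbo;
        glGenTextures(1, &color);
        glBindTexture(GL_TEXTURE_2D, color);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        glGenRenderbuffers(1, &depth);
        glBindRenderbuffer(GL_RENDERBUFFER, depth);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, color, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depth);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            return;                                    // handle the error properly in real code

        glViewport(0, 0, width, height);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... enable the Cg program and draw the scene exactly as for a window ...

        // Read the result back and write it out with whatever image library you use.
        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }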

Related

OpenGL: Objects are smooth if normally drawn, but edged when rendering to FBO

I have a problem with different visual results when using an FBO compared to the default framebuffer:
I render my OpenGL scene into a framebuffer object, because I use it for color picking. The thing is that if I render the scene directly to the default framebuffer, the output on the screen is quite smooth, meaning the edges of my objects look a bit as if they were anti-aliased. When I render the scene into the FBO and afterwards use the output to texture a quad that spans the whole viewport, the objects have very hard edges where you can easily see every single colored pixel that belongs to them.
(Screenshots omitted: "Good" shows the smooth direct rendering, "Bad" the hard-edged FBO rendering.)
At the moment I have no idea what the reason for this could be. I am not using any kind of anti-aliasing myself.
System:
Fedora 18 x64
Intel HD Graphics 4000 and Nvidia GT 740M (same result)
Edit1:
As stated by Damon and Steven Lu, there is probably some kind of anti-aliasing enabled by the system by default. So far I haven't been able to figure out how to disable this feature.
What I was really curious about is why this setting only has an effect on the default framebuffer and not on the one handled by the FBO. To get anti-aliased edges for the FBO too, I will probably have to implement my own AA method.
Once you draw your scene into a custom FBO, the externally defined MSAA level no longer applies. You must configure your FBO to have multisample texture or renderbuffer attachments, setting the number of sample levels along the way. Here is a reference.
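A sketch of what such a multisampled FBO might look like (GL 3.0 / ARB_framebuffer_object assumed; the size, sample count and resolveFbo are placeholders). The multisampled attachments must be resolved into a normal single-sample FBO with glBlitFramebuffer before the result can be sampled as a texture:

    // Multisampled FBO: render into MSAA renderbuffers, then resolve by blitting
    // into a regular single-sample FBO whose color texture you can sample.
    const int width = 800, height = 600, samples = 4;      // placeholders
    GLuint msaaColor, msaaDepth, msaaFbo;

    glGenRenderbuffers(1, &msaaColor);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

    glGenRenderbuffers(1, &msaaDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepth);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColor);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepth);

    // ... render the scene into msaaFbo ...

    // Resolve into resolveFbo, a normal single-sample FBO with a texture color
    // attachment created elsewhere (placeholder name).
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);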

Blend FBO onto default framebuffer

To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've got the blitting part working fine, but I can't see how to blend my 2D FBO on top of it. Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad textured with the source image and set the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit contents of the framebuffer object to the application framebuffer (or to any other). Although, as the spec states, it is not possible to use blending:
The pixel copy bypasses the fragment pipeline. The only fragment operations which affect the blit are the pixel ownership test and the scissor test.
So any blending means using the fragment pipeline (a textured full-screen quad) as suggested. One full-screen pass with blending should be pretty cheap; I believe there is nothing to worry about.
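For illustration, a legacy fixed-function sketch of that full-screen pass; fbo2DTexture is a placeholder name for the color texture attached to the 2D FBO, and the 3D FBO is assumed to have already been blitted to the default framebuffer:

    // Blend the 2D layer over the default framebuffer with a textured quad.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);             // back to the default framebuffer

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, fbo2DTexture);       // placeholder: 2D FBO color attachment

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDisable(GL_DEPTH_TEST);

    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

    glBegin(GL_QUADS);                                // quad covering all of clip space
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glMatrixMode(GL_MODELVIEW);  glPopMatrix();
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glDisable(GL_BLEND);

Note that this requires the 2D FBO's color attachment to be a texture (not a renderbuffer) and to contain meaningful alpha values.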
You can also use a shader to read back from the framebuffer. This is an OpenGL ES extension and is not supported by all hardware:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt

How to paint the same OpenGL drawing in two different Borland Builder windows?

I have one Borland Builder window form in which an OpenGL item is drawn on a timer.
I want to draw this OpenGL item simultaneously in another Borland Builder window.
Should I use pixel buffer objects or frame buffer objects?
With glReadPixels? Or glBindFramebuffer?
When do I need to call these functions: before my drawing or after?
Or is it simpler to use the RC or DC of my first form in the second form?
If that is possible, how may I do it?
Create additional OpenGL contexts for the other windows and share their objects using wglShareLists, which also shares textures.
If the same view (same resolution, rendering, etc.) shall be visible:
Use a texture as a framebuffer object's color buffer attachment and draw to this FBO. Then draw textured quads using this texture in all the windows (see the sketch below).
If a different view is needed: render each window individually.
Please note that there is no such thing as an "OpenGL item". OpenGL deals with only a single primitive (triangle, quad, point, line) at a time, and there is no kind of persistency in a rendering.
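A rough outline of that approach under Windows (WGL); hDC1, hDC2, sceneTexture and drawTexturedQuad are placeholder names, pixel formats are assumed to already be set, and error handling is omitted:

    // Two windows, two contexts, one shared object namespace (textures, display
    // lists, buffer objects). Call wglShareLists before creating objects in rc2.
    HGLRC rc1 = wglCreateContext(hDC1);
    HGLRC rc2 = wglCreateContext(hDC2);
    wglShareLists(rc1, rc2);

    // Render the scene once into sceneTexture via an FBO in the first context.
    wglMakeCurrent(hDC1, rc1);
    // ... bind the FBO with sceneTexture attached, draw, unbind ...

    // Draw a textured quad with that texture in each window.
    // drawTexturedQuad(sceneTexture);                 // first window
    SwapBuffers(hDC1);

    wglMakeCurrent(hDC2, rc2);
    // drawTexturedQuad(sceneTexture);                 // texture is shared via wglShareLists
    SwapBuffers(hDC2);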

How to create textures within GPU

Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to make use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that all operations that use or affect textures will use the texture specified by the texture name you used as parameter for glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can also free the system memory with your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
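The steps above in code form (a minimal sketch; the tiny checkerboard is just example data):

    GLuint texName;
    glGenTextures(1, &texName);                     // 1. get a texture name
    glBindTexture(GL_TEXTURE_2D, texName);          // 2. bind it

    // 3. set parameters (no mipmaps here, so use a non-mipmapped min filter)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // 4. upload the image data (a 2x2 RGBA checkerboard as example data)
    GLubyte pixels[2 * 2 * 4] = {
        255, 255, 255, 255,    0,   0,   0, 255,
          0,   0,   0, 255,  255, 255, 255, 255
    };
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // The system-memory copy (pixels) can be freed after this call.

    // Later, to use the texture:
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texName);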
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because the texture has to fit into it together with other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored because the driver normally does a very good job with that. Textures that are used often are more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can check where textures currently reside with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even if you're in window mode. As I mentioned, the textures you create never actually occupy space in the texture memory, they are created on the fly. One could probably think of a way to capture and cache the generated texture but this can be somewhat complex and require multiple rendering passes.
You can learn more about it if you look up "GLSL" in google - the OpenGL shading language.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
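As a small sketch of the idea, a fragment shader along these lines colors every fragment from the Mandelbrot set using nothing but the interpolated texture coordinates (the source string would be compiled and linked with the usual glCreateShader/glCompileShader/glLinkProgram calls; the mapping constants are arbitrary):

    // Procedural "texture": the color is computed per fragment, nothing is stored
    // in texture memory. Uses the fixed-function texture coordinates (gl_TexCoord[0]).
    const char* mandelbrotFragmentSource =
        "void main()                                                       \n"
        "{                                                                 \n"
        "    vec2 c = gl_TexCoord[0].st * 3.0 - vec2(2.0, 1.5);            \n"
        "    vec2 z = vec2(0.0);                                           \n"
        "    int i;                                                        \n"
        "    for (i = 0; i < 100; ++i) {                                   \n"
        "        z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;     \n"
        "        if (dot(z, z) > 4.0) break;                               \n"
        "    }                                                             \n"
        "    float t = float(i) / 100.0;                                   \n"
        "    gl_FragColor = vec4(t, t * t, sqrt(t), 1.0);                  \n"
        "}                                                                 \n";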
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a PBuffer context as a texture, or using frame buffer objects. PBuffer render-to-texture is the older method and has wider support; frame buffer objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be HW accelerated. In fact OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a top-level window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass window masking and clipping code and employ a simpler, faster buffer swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hardware and software the effect is very small compared to other influences.

Fragment shader rendering to off-screen frame buffer

In a Qt based application I want to execute a fragment shader on two textures (both 1000x1000 pixels).
I draw a rectangle and the fragment shader works fine.
But now I want to render the output into the GL_AUX0 frame buffer so that the result can be read back and saved to a file.
Unfortunately, if the window size is less than 1000x1000 pixels the output is not correct: only an area the size of the window is rendered into the frame buffer.
How can I render into the frame buffer at the full texture size?
The recommended way to do off-screen processing is to use framebuffer objects (FBOs). These buffers act similarly to the render buffers you already know, but are not constrained by the window resolution or color depth. You can use the GPGPU Framebuffer Object Class to hide the low-level OpenGL commands and use the FBO right away. If you prefer doing this on your own, have a look at the extension specification.
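A sketch of that setup at the full 1000x1000 size, independent of the window, using the EXT entry points from the linked specification (the input textures and shader binding are left as placeholders, and an extension loader such as GLEW is assumed):

    #include <GL/glew.h>   // assumed extension loader for the FBO entry points
    #include <vector>

    void runFilterOffscreen()
    {
        // Render the fragment-shader pass into a 1000x1000 FBO and read it back.
        const int texW = 1000, texH = 1000;

        GLuint outTex, fbo;
        glGenTextures(1, &outTex);
        glBindTexture(GL_TEXTURE_2D, outTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texW, texH, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, outTex, 0);

        glViewport(0, 0, texW, texH);   // viewport must match the texture, not the window
        // ... bind the two input textures and the fragment shader, draw the rectangle ...

        std::vector<unsigned char> result(texW * texH * 4);
        glReadPixels(0, 0, texW, texH, GL_RGBA, GL_UNSIGNED_BYTE, &result[0]);

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }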