I'm using SDL and my program uses a lot of CPU

How can I use the graphics card instead of the CPU when I call SDL functions?

You cannot; SDL 1.2 is only a software renderer. However, you can use SDL to create a window and catch events, and then do your drawing with OpenGL.
If your program is eating 100% CPU, make sure you limit the FPS correctly, for example by adding SDL_Delay to the main loop.
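For example, a minimal SDL 1.2-style sketch (the 640x480 window and the ~16 ms frame budget are arbitrary choices, not taken from the question):

```c
#include <SDL/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);

    int running = 1;
    while (running) {
        Uint32 frame_start = SDL_GetTicks();

        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        /* ... draw to `screen` here ... */
        SDL_Flip(screen);

        /* Sleep for the remainder of a ~16 ms frame (about 60 FPS)
           instead of spinning and burning 100% of a CPU core. */
        Uint32 frame_time = SDL_GetTicks() - frame_start;
        if (frame_time < 16)
            SDL_Delay(16 - frame_time);
    }

    SDL_Quit();
    return 0;
}
```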

If possible, use SDL 2.0. It supports full hardware acceleration through functions very similar to SDL 1.2's:
http://wiki.libsdl.org/MigrationGuide

In SDL 2.0, create your renderer with the SDL_RENDERER_ACCELERATED and SDL_RENDERER_PRESENTVSYNC flags. The first requests hardware acceleration when possible, and the second limits your program to the monitor's refresh rate (typically 60 FPS), freeing the CPU from continuous work.
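A minimal sketch of that setup (window title, size, and clear color are arbitrary; error checking omitted):

```c
#include <SDL2/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    SDL_Window *window = SDL_CreateWindow("demo",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        640, 480, SDL_WINDOW_SHOWN);

    /* Hardware-accelerated renderer, with presentation synced to the
       monitor's refresh rate so the loop is not free-running. */
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

    int running = 1;
    while (running) {
        SDL_Event ev;
        while (SDL_PollEvent(&ev))
            if (ev.type == SDL_QUIT)
                running = 0;

        SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
        SDL_RenderClear(renderer);
        /* ... SDL_RenderCopy() your textures here ... */
        SDL_RenderPresent(renderer);   /* waits for vblank with PRESENTVSYNC */
    }

    SDL_DestroyRenderer(renderer);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
```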

Related

How does HWSURFACE differ from a texture?

When making a surface in SDL, there's an option to use the HWSURFACE flag, which I imagine means the surface is handled by the GPU instead of the CPU. But now SDL2 has textures, and I'm wondering: what's the difference? Will there be any performance difference using hardware surfaces instead of textures? Do they behave the same?
I've tried googling all over, but I can only find info on regular software surfaces.
There is no texture support in SDL 1, and there is no HWSURFACE (or any other surface flag) in SDL 2; the flags parameter of SDL_CreateRGBSurface in SDL 2 is documented as obsolete and should be set to 0. There is no sane way to mix the two.
Specifying SDL_HWSURFACE would cause the surface to be created using video memory. PCs without GPUs still have video memory and it is faster to put data in that memory area onto the screen than to copy it from system RAM.
Textures are uploaded to the GPU's own dedicated RAM and data must be passed through that memory in order to be put on the screen at all. SDL2 no longer has the SDL_HWSURFACE flag because the rendering subsystem uses the GPU via OpenGL or Direct3D and cannot use the old way to get graphics on the screen.
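To make the contrast concrete, here are the two creation calls side by side. These are fragments from two different SDL versions, not one program; the 640x480 size and the pixel format are arbitrary, and `renderer` is assumed to come from SDL_CreateRenderer elsewhere.

```c
/* SDL 1.2: SDL_HWSURFACE asks for the surface to live in video memory
   (a hint the driver may or may not honor). */
SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32,
                                       SDL_HWSURFACE | SDL_DOUBLEBUF);

/* SDL 2: the rough equivalent is a texture owned by the GPU-backed renderer. */
SDL_Texture *tex = SDL_CreateTexture(renderer,
                                     SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STATIC,
                                     640, 480);
```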

double buffering with FBO+RBO and glFinish()

I am using an FBO+RBO, and instead of regular double buffering on the default framebuffer, I draw to the RBO and then blit directly to the GL_FRONT buffer of the default framebuffer (0) in a single-buffered OpenGL context.
It works fine and I don't get any flickering, but if the scene gets a bit complex, I experience a HUGE drop in FPS, so weird that I knew something had to be wrong. And I don't mean from 1/60 to 1/30 because of a skipped sync; I mean a sudden 90% FPS drop.
I tried a glFlush() after the blit, which made no difference; then I tried a glFinish() after the blit, and got a 10x FPS boost.
So I went back to regular double buffering on the default framebuffer with SwapBuffers(), and the FPS got a boost as well, just as with glFinish().
I cannot figure out what is happening. Why does glFinish() make so much of a difference when it shouldn't? And is it OK to use an RBO and blit directly to the front buffer, instead of a SwapBuffers call in a double-buffered context? I know I'm missing vsync, but the compositor will sync anyway (in fact I'm not seeing any tearing); it is just as if the monitor is missing 9 out of 10 frames.
And just out of curiosity, does a native SwapBuffers() use glFinish() on either Windows or Linux?
I believe it is a sync-related issue.
When rendering directly to the RBO and blitting to the front buffer, there is simply no sync whatsoever. On complex scenes the GPU command queue fills quickly, then the CPU-side driver queue fills as well, until the driver forces a CPU sync during some OpenGL command. At that point the CPU thread is halted.
What I mean is that, without any form of sync, complex renderings (ones where one or more OpenGL commands end up queued) will always cause the CPU thread to halt at some point, because the CPU keeps issuing commands while the queues fill.
To get smooth (more constant) user interaction, a sync is needed (either a platform-specific SwapBuffers() or a glFinish()) to stop the CPU from making things worse by issuing more and more commands, which would eventually bring the CPU thread to a stop anyway.
Reference: OpenGL Synchronization
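For illustration, a per-frame sketch of the pattern being discussed. It assumes a single-buffered context, an `fbo` with a renderbuffer color attachment created elsewhere, and `width`/`height` holding the window size; glFinish() stands in for the sync a real SwapBuffers would provide.

```c
/* 1. Render the scene into the offscreen FBO/RBO. */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* ... draw calls ... */

/* 2. Blit the result onto the front buffer of the default framebuffer. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDrawBuffer(GL_FRONT);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

/* 3. Force a CPU/GPU sync so the command queues cannot grow without bound.
   In a double-buffered context the platform SwapBuffers call does this job
   implicitly. */
glFinish();
```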
There are separate issues here, that are also a little bit connected.
1) Re-implementing double buffering yourself, while the same thing on paper, is not the same thing to the driver. Drivers are highly optimized for the common case. For example, many chips have distinct 2D and 3D units; the swap in SwapBuffers is often handled by the 2D unit, while blitting a buffer is probably still done by the 3D unit.
2) glFlush (and glFinish) are ignored by many drivers. Flush is a relic of client-server rendering, and Finish was intended for profiling, but it got abused to work around driver bugs, so drivers now often ignore it to improve the performance of legacy code that used Finish as a workaround.
3) Just don't do single buffering. There is no performance benefit, and you are working off the "good" path of the driver. Window managers are heavily optimized for double-buffered OpenGL.
4) What you are seeing looks a lot like you are simply leaking resources. Do you allocate buffers without freeing them? A quick and dirty way to check is whether any glGen* functions return ever-increasing IDs.
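A minimal sketch of that check, assuming GL headers and a current context are already set up and `frame_counter` is a hypothetical counter maintained by the caller:

```c
#include <stdio.h>

/* If the IDs printed here keep climbing from frame to frame, objects are
   being generated somewhere without ever being deleted. */
void check_buffer_leak(int frame_counter)
{
    GLuint probe;
    glGenBuffers(1, &probe);
    printf("frame %d: next free buffer id = %u\n", frame_counter, probe);
    glDeleteBuffers(1, &probe);   /* delete the probe so the check itself doesn't leak */
}
```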

Does OpenGL fixed function use the GPU or the CPU?

I have a program that draws parallel coordinates using the OpenGL fixed-function pipeline.
The plot has 7 axes and draws 64k lines, so the output is cluttered, but when I run the code on my laptop (Intel i5, 8 GB DDR3 RAM) it runs fine. A friend of mine ran the same code on two different systems, both with an Intel i7, 8 GB DDR3 RAM and an NVIDIA GPU. On those systems the code stutters and the mouse pointer sometimes becomes unresponsive. If you can give me some idea why this is happening, it would be a great help. Initially I thought it would run even faster on those systems since they have a dedicated GPU. My laptop runs Ubuntu 12.04 and both other systems run Ubuntu 10.x.
The fixed-function pipeline is implemented on top of the GPU's programmable features in modern OpenGL drivers, which means most of the work is done by the GPU. Fixed-function OpenGL shouldn't be any slower than GLSL doing the same thing, just far less flexible.
What do you mean by the coordinates having 7 axes? Do you have screenshots of your application?
Mouse stuttering sounds like you are seriously taxing your display driver, which suggests you are making too many OpenGL calls. Are you using immediate mode (glBegin, glVertex, ...)? Some OpenGL drivers don't have a particularly good immediate-mode implementation. You should use vertex buffer objects for your data, for example:
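A rough sketch of both approaches. The `lines`, `vertices`, `line_count`, and `vertex_count` names are placeholders for the application's own data (two vertices per line, x/y pairs); the VBO path uses fixed-function client arrays to match the rest of the code.

```c
/* Immediate mode: one gl call per vertex -- for 64k lines that's hundreds of
   thousands of driver calls per frame. */
glBegin(GL_LINES);
for (int i = 0; i < line_count; ++i) {
    glVertex2f(lines[i].x0, lines[i].y0);
    glVertex2f(lines[i].x1, lines[i].y1);
}
glEnd();

/* VBO: upload the vertices once, outside the draw loop ... */
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertex_count * 2 * sizeof(GLfloat),
             vertices, GL_STATIC_DRAW);

/* ... then each frame the whole data set is drawn with a handful of calls. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_LINES, 0, vertex_count);
glDisableClientState(GL_VERTEX_ARRAY);
```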
Maybe I've misunderstood you, but here I go.
API calls such as glBegin and glEnd send commands to the GPU, so they use GPU horsepower, while other work (filling arrays, functions that have nothing to do with the API) runs on the CPU.
It's good practice to preload your models outside the draw loop by storing the data in buffers (glGenBuffers etc.) and then using those buffers (VBO/IBO) inside the draw loop.
Managed correctly, this decreases the load on your GPU/CPU. Hope this helps.
Oleg

Constant lag in OpenGL application

I'm getting recurring lag in my OpenGL application.
I'm using the Win32 API to create the window, and I'm also creating a 2.2 context.
So the main loop of the program is very simple:
Clearing the color buffer
Drawing a triangle
Swapping the buffers.
The triangle is rotating, which is how I can see the lag.
My frame time also isn't smooth, which may be the problem.
But I'm very sure the delta-time calculation is correct, because I've tried plenty of ways.
Do you think it could be a graphics driver problem?
A friend of mine runs almost exactly the same program, except that I do fewer calculations and use the standard OpenGL shader.
Also, his program uses more CPU power than mine, and his CPU usage is smoother than mine.
I should also add:
On my laptop I get the same lag roughly every second, so I can see some kind of pattern.
There are many reasons for a jittery frame rate. Off the top of my head:
Not calling glFlush() at the end of each frame
other running software interfering
doing things in your code that certain graphics drivers don't like
bugs in graphics drivers
Using the standard Windows time functions, with their terrible resolution
Try these:
kill as many running programs as you can get away with. Use the Processes tab in Task Manager (Ctrl+Shift+Esc) for this.
bit by bit, reduce the amount of work your program is doing and see how that affects the frame rate and the smoothness of the display.
if you can, try enabling/disabling vertical sync (you may be able to do this in your graphics card's settings) to see if that helps
add some debug code to output the time taken to draw each frame, and see if there are anomalies in the numbers, e.g. every 20th frame taking an extra 20 ms, or random frames taking 100 ms (a sketch of such timing code follows below).
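Something like this, as a Win32-specific sketch using QueryPerformanceCounter (since the standard timer functions mentioned above are too coarse); call it once per frame, e.g. right after SwapBuffers:

```c
#include <windows.h>
#include <stdio.h>

/* Logs how long each frame actually took, so periodic spikes stand out. */
void log_frame_time(void)
{
    static LARGE_INTEGER freq, last;
    static int initialized = 0;

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);

    if (!initialized) {
        QueryPerformanceFrequency(&freq);
        initialized = 1;
    } else {
        double ms = (now.QuadPart - last.QuadPart) * 1000.0 / freq.QuadPart;
        printf("frame time: %.2f ms\n", ms);   /* look for periodic spikes */
    }
    last = now;
}
```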

Will I use the power of the GPU when using an OpenGL program?

I've got a pretty old ATI HD 3400 video card, which has no OpenCL support, so I'm wondering if I can actually play around with the OpenGL libraries provided by the ATI Catalyst driver.
If my algorithm runs in glutDisplayFunc(displayFunc), that is, inside displayFunc(), is it actually costing CPU power or GPU power?
GLUT is just a library which manages platform-specific window and GL context creation. The function you pass to glutDisplayFunc is just called by GLUT at the appropriate time and context for the platform you're running on; it is not executed on the GPU.
It is not possible to have code that you've compiled in the normal fashion as part of a larger program run on the GPU.
However, the individual graphics operations run inside of your display func do of course perform the rendering on the GPU; the CPU is still computing which graphics operation to execute, but not actually rendering the results. Each gl function is a normal CPU function, but what it does is send a command through your system bus to your graphics card, which then does the actual rendering.
Furthermore, these operations are asynchronous; the gl functions don't wait for your GPU to finish the operation before letting your program continue. This is useful because your CPU and GPU can both be working simultaneously — the GPU draws graphics while the CPU figures out what graphics to draw. On the other hand, if you do need communication in the other direction — such as glReadPixels — then the CPU has to wait for the GPU to catch up. This is also the difference between glFlush and glFinish.
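To make the division of labor concrete, here is a minimal GLUT sketch (the window name and the triangle are arbitrary): the registered display callback runs entirely on the CPU, and each gl* call inside it merely queues work for the GPU.

```c
#include <GL/glut.h>

/* This callback runs on the CPU; the gl* calls below queue commands that the
   GPU executes asynchronously. */
static void displayFunc(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);          /* commands recorded by the driver ... */
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.0f,  0.5f);
    glEnd();

    glutSwapBuffers();              /* ... and presented once the GPU catches up */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("demo");
    glutDisplayFunc(displayFunc);   /* registered callback, executed on the CPU */
    glutMainLoop();
    return 0;
}
```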