I am using SDL 1.2 for a project. It renders things just fine, but I want to do some small pixel shader effects. All of the examples for this show using the OpenGL driver for SDL's video subsystem.
So, I start the video subsystem with opengl as the driver and tell SDL_SetVideoMode() to use SDL_OPENGL. When I run the program, it now crashes on the SDL_SetVideoMode() call, which worked fine before forcing OpenGL.
I went back and ran the program without forcing OpenGL, dumped out SDL_VideoDriverName(), and it says I am using the "directx" driver.
My question is two-pronged: what is wrong that it doesn't like the opengl driver, and how do I get SDL to use OpenGL without problems here? Or, how do I get the SDL surface into DirectX to apply pixel shader effects?
I would prefer to use OpenGL, as it would be easier to port the code to other platforms.
As an example, here is the code that breaks when I try to use OpenGL:
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <SDL.h>
INT WINAPI WinMain( HINSTANCE hInst, HINSTANCE, LPSTR strCmdLine, INT )
{
    SDL_putenv("SDL_VIDEODRIVER=opengl");
    SDL_Init( SDL_INIT_EVERYTHING );
    SDL_VideoInit("opengl", 0);
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 ); // crashes here
    SDL_Surface *mWindow = SDL_SetVideoMode(1024, 768, 32, SDL_HWSURFACE|SDL_HWPALETTE|SDL_DOUBLEBUF|SDL_OPENGL);
    SDL_Quit();
    return 0;
}
SDL without the OpenGL option uses DirectX to obtain direct access to a 3D drawing surface. Using OpenGL triggers a whole different code path in SDL, and with OpenGL you can no longer use the SDL surface for direct access to the pixels. It's very likely your program crashes because you're still trying to access the surface directly.
Anyway, if you want to use pixel shaders, you can no longer use the direct pixel buffer access provided by plain SDL. You have to do everything through OpenGL then.
Update
Some of the parameters you give to SDL are mutually exclusive. Also, the driver name given to SDL_VideoInit makes no sense when used together with OpenGL (it's only relevant together with DirectDraw, to select a specific output device).
Also, because you already called SDL_Init(SDL_INIT_EVERYTHING), the call to SDL_VideoInit is redundant and possibly even harmful.
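For reference, here is a minimal sketch of how the initialization could look with those issues removed (SDL 1.2 on Windows assumed; error handling kept to a bare minimum):

#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <SDL.h>

INT WINAPI WinMain( HINSTANCE, HINSTANCE, LPSTR, INT )
{
    if( SDL_Init( SDL_INIT_EVERYTHING ) < 0 )
        return 1;

    // Set GL attributes before creating the window/context
    SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );

    // SDL_OPENGL alone; SDL_HWSURFACE/SDL_HWPALETTE/SDL_DOUBLEBUF are 2D surface
    // flags and don't belong in an OpenGL video mode
    SDL_Surface *screen = SDL_SetVideoMode( 1024, 768, 32, SDL_OPENGL );
    if( !screen )
    {
        SDL_Quit();
        return 1;
    }

    // ... OpenGL rendering, SDL_GL_SwapBuffers() once per frame ...

    SDL_Quit();
    return 0;
}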
See this for a fully working OpenGL example:
http://sdl.beuc.net/sdl.wiki/Using_OpenGL_With_SDL
Related
I'm trying to make an SDL2 adapter for a graphics library. I believe this library assumes that everything it draws to the screen stays on the screen as long as nothing is drawn on top of it. (See the end of the question for details.)
What would be the best way to do it?
I've come across multiple solutions, including:
Hardware rendering without calling SDL_RenderClear. The problem with this is that SDL uses a double buffer, so the contents can end up in either of them, and I end up seeing only a subset of the render at a time.
Software rendering, which, if I'm understanding SDL correctly, would mean having a surface mapped to a texture: I would edit the surface's pixels field in main memory and then render the texture. This would be very slow, and since the library expects everything to be rendered instantly and has no notion of frames, it would also mean sending the data to the GPU on every update (even for single pixels).
I'm probably missing something about SDL2 and definitely about the library (Adafruit-GFX-Library). What does "transaction API" mean in this context? I haven't been able to find any information about it, and I feel like it could be something important.
In my understanding of SDL2 and rendering application programming in general, SDL2 is designed so that you draw a complete frame every time, meaning you clear your "current window" either with OpenGL API calls
glClearColor(0, 1.0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
which clear the current OpenGL framebuffer (of which you have two when double buffering), or by using the SDL2 renderer, which I am not familiar with.
Then you would swap the buffers and repeat. (Which fits perfectly with the architecture proposal I am relying on.)
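To make that concrete, here is a rough sketch of the per-frame pattern (assuming an SDL2 window created with SDL_WINDOW_OPENGL and a current GL context; window and the clear color are just placeholders):

bool running = true;
while( running )
{
    SDL_Event event;
    while( SDL_PollEvent( &event ) )
        if( event.type == SDL_QUIT )
            running = false;

    // Clear the back buffer of the current GL context
    glClearColor( 0, 1.0, 0, 0 );
    glClear( GL_COLOR_BUFFER_BIT );

    // ... replay/issue all draw commands for this frame ...

    SDL_GL_SwapWindow( window );  // swap back and front buffers
}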
So either you would have to replay the draw commands from your library for the second frame somehow, or you could disable double buffering, at least for the OpenGL backend, with
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 0);
(For the rest of the OpenGL SDL2 setup code, see this GitHub repository with a general helper class of mine.)
I want to write a cross-platform 3D app (maybe a game, who knows) with SFML and OpenGL 3.3, with the main purpose of learning C++.
SFML provides a nice event model and handles textures, text, input, etc. I've made a simple demo with a cube (still in the old glBegin/glEnd way, but I'll fix that once I find a way to load OpenGL extensions).
The first problem I ran into is double buffering. As you know, the usual (and logical) way to render uses two buffers, a display buffer and a render buffer. The rendering cycle is performed on the render buffer, and when it ends the result is copied to the display buffer (or maybe the two buffers simply switch roles each cycle, I don't know). This prevents flickering and artifacts.
The trouble is that in every OpenGL example I see, the authors use GLUT and functions like glutSwapBuffers. If I understand correctly, double buffering is platform-specific (which is strange to me, because I think it should be handled by OpenGL itself), and things like GLUT just hide the platform-specific parts. But I'm already using SFML for context creation and OpenGL initialization.
Is there any cross-platform way to deal with OpenGL double buffering together with SFML? I'm not using the SFML graphics module in this project, but the target is a RenderWindow.
SFML can handle double buffering, but if you are not using the SFML Graphics library you must use an sf::Window instance.
Double buffering is handled by calling sf::Window::setActive to set the window as the OpenGL rendering target, drawing content using OpenGL functions, and then calling sf::Window::display to swap the front and back buffers. More information can be found in the SFML API documentation (the linked version is v2.3.2).
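A minimal sketch of that pattern (assuming SFML 2.x and a plain sf::Window; the window size and context settings are just placeholders):

#include <SFML/Window.hpp>
#include <SFML/OpenGL.hpp>

int main()
{
    // Request a double-buffered OpenGL context; no sfml-graphics needed
    sf::Window window( sf::VideoMode( 800, 600 ), "OpenGL",
                       sf::Style::Default, sf::ContextSettings( 24 ) );
    window.setActive( true );  // make this window the current OpenGL target

    bool running = true;
    while( running )
    {
        sf::Event event;
        while( window.pollEvent( event ) )
            if( event.type == sf::Event::Closed )
                running = false;

        glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

        // ... OpenGL draw calls ...

        window.display();  // swaps the back and front buffers
    }
    return 0;
}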
I'm trying to run an OpenGL program through gDEBugger (http://www.gremedy.com) and I'm seeing a couple of strange things:
1) The frames seem to be rendering MUCH faster with gDEBugger. For example, if I update some object's position every frame, it just flies across the screen really fast, but when the program is run without gDEBugger it moves at a much slower speed.
2) Strangely, gDEBugger reports 8 GL frames/second, which doesn't seem realistic: the FPS is clearly higher than 8 (by the way, I have checked every possible OpenGL Render Frame Terminator in the Debug Settings dialog).
My program uses SDL to create an OpenGL rendering context:
Uint32 flags = SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_OPENGL;
if(fullscreen) flags |= SDL_FULLSCREEN;
// Initialize SDL's video subsystem
if( SDL_Init(SDL_INIT_VIDEO) == -1 ) {
    // handle initialization failure (e.g. log SDL_GetError() and abort)
}
// Attempt to set the video mode
SDL_GL_SetAttribute( SDL_GL_DOUBLEBUFFER, 1 );
SDL_Surface* s = SDL_SetVideoMode(width, height, 0, flags);
I'm using Windows 7 and an NVidia graphics card (geforce gtx 660m).
My question is: how does one explain the strange behavior I'm seeing in 1) and 2)? Could it be that for some reason the rendering is being performed in software instead of on the graphics card?
Update: Obviously, I'm calling SDL_GL_SwapBuffers (which isn't listed as one of the render frame terminators) at the end of each frame, but I assume it just calls the Windows SwapBuffers function.
Regarding issue 1: apparently gDEBugger disables wait-for-vsync, which is why the framerate is much higher than 60 fps.
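If you want the application itself to request vsync rather than rely on the driver default, SDL 1.2 (1.2.10 and later) exposes a swap-control attribute; a sketch (it must be set before SDL_SetVideoMode, and the driver or debugger may still override it):

// Ask the GL driver for vsync on buffer swaps
SDL_GL_SetAttribute( SDL_GL_SWAP_CONTROL, 1 );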
Regarding issue 2: for some reason, when working with SDL, two OpenGL contexts are created. You can see the correct number by adding performance counters for the second context.
I'm trying to use OpenGL to help with processing Kinect depth map input into an image. At the moment we're using the Kinect as a basic motion sensor, and the program counts how many people walk by and takes a screen shot each time it detects someone new.
The problem is that I need to get this program to run without access to a display. We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea. Attaching a display to the machine running the program is a possibility, but we're wanting to avoid doing that for energy consumption reasons.
The program generates a 2D texture object for OpenGL and usually just uses GLUT to render it before reading the pixels back and writing them to a .PNG file using FreeImage. The issue I'm running into is that once the GLUT function calls are removed, all that gets written to the .PNG files is black boxes.
I'm using the OpenNI and NITE drivers for the Kinect. The programming language is C++, and I need to use Ubuntu 10.04 due to the hardware limitations of the target device.
I've tried using OSMesa and framebuffer objects, but I am a complete OpenGL newbie, so I haven't gotten OSMesa to render properly in place of the GLUT functions, and my compiler can't find any of the OpenGL framebuffer functions in GL/glext.h or GL/gl.h.
I know that textures can be read into a program from image files, and all I want to output is a single 2-D texture. Is there a way to skip the headache of off-screen rendering in this case and print a texture directly to an image file without needing OpenGL to render it first?
The OSMesa library is neither a drop-in replacement for GLUT, nor can the two work together. If you only need the offscreen rendering part without interaction, you have to implement a simple event loop yourself.
For example:
/* init OSMesa */
OSMesaContext mContext;
void  *mBuffer;
size_t mWidth  = 640;   // example dimensions
size_t mHeight = 480;

// Create RGBA context and specify Z, stencil, accum sizes
mContext = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );

// The color buffer must be allocated by the caller
mBuffer = malloc( mWidth * mHeight * 4 * sizeof(GLubyte) );
OSMesaMakeCurrent( mContext, mBuffer, GL_UNSIGNED_BYTE, mWidth, mHeight );
After this snippet you can use normal OpenGL calls to render, and after a glFinish() call the results can be accessed through the mBuffer pointer.
In your event loop you can call your normal onDisplay, onIdle, etc. callbacks.
We're wanting to run it remotely over SSH, and the network traffic from other services is going to be too much for X11 forwarding to be a good idea.
If you forward X11 and create the OpenGL context on that display, OpenGL traffic will go over the net no matter whether a window is visible or not. So what you actually need to do (if you want to make use of GPU-accelerated OpenGL) is start an X server on the remote machine and keep it the active VT (i.e. the X server must be the program that "owns" the display). Then your program can connect to this very X server only. But this requires using Xlib. Some time ago fungus wrote a minimalistic Xlib example; I extended it a bit so that it makes use of FBConfigs. You can find it here: https://github.com/datenwolf/codesamples/blob/master/samples/OpenGL/x11argb_opengl/x11argb_opengl.c
In your case you should render to an FBO or a PBuffer. Never use a visible window framebuffer to render stuff that's to be stored away! If you create an OpenGL window, like with the code I linked, use an FBO. Creating a GLX PBuffer is not unlike creating a GLX window, only that it will be off-screen.
The trick is not to use the default X display (of your SSH forward) but a separate connection to the local X server. The key is the line
Xdisplay = XOpenDisplay(NULL);
Instead of NULL, you'd pass the local server's display there. To make this work you'll also need to (manually) add an xauth entry for it, or disable xauth on the OpenGL rendering server.
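For example (":0" is only an assumption for whatever display the local X server actually runs on):

/* connect directly to the local X server instead of the forwarded $DISPLAY */
Display *Xdisplay = XOpenDisplay( ":0" );
if( !Xdisplay )
{
    /* connection refused -- most likely an xauth problem */
}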
You can use glGetTexImage to read a texture back from OpenGL.
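A short sketch of the readback (the texture name, width and height are assumed to come from your own code):

// Read an RGBA texture back into client memory, e.g. to hand it to FreeImage
std::vector<GLubyte> pixels( width * height * 4 );
glBindTexture( GL_TEXTURE_2D, textureID );
glGetTexImage( GL_TEXTURE_2D, 0 /* mip level */, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0] );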
I render this video stream in one OpenGL window (opened from the main window, UnitMainForm.cpp; I am using Borland C++ Builder 6.0).
In this first OpenGL window there is a timer; on each timer tick a boolean "lmutex" is toggled and a "DrawScene" function is called, followed by a "Yield" function.
In this "DrawScene" function, the video stream frames are drawn by a function called "paintgl".
How can I render this video stream in another borland builder window, preferably with the use of pixel buffers?
This second Borland Builder window is intended to be a preview window, so it can be smaller (mipmap?) and use a slower timer (or the same size and the same timer, that's fine too).
Here are the results I had with different techniques:
With pixel buffers, I managed (all in the DrawScene function) to draw paintgl's output into a back buffer and, with wglShareLists, to render that back buffer to a texture mapped onto a quad. But I can't manage to use this texture in another window: wglShareLists works in the first window but fails in the second window when I try to share the back buffer's objects with the new window's RC (a pixel-format problem? or perhaps a C++ problem: how do I keep the buffer from being released, and how do I render it onto a quad in a different DC, or the same RC?). I get:
an access violation on wglBindTexImageARB, apparently because WGL_FRONT_LEFT_ARB is not defined although wglext.h is included;
wglShareLists fails with error 6: ERROR_INVALID_HANDLE (the handle is invalid).
With two instances of the same class (the OpenGL window): one time out of three both video streams render correctly, one time out of three there is constant flicker on one or both windows, and one time out of three one window or the other stays blank or black. Should I synchronize the timers, or is there a way to avoid the flicker? This solution seems sketchy to me anyway: the video stream sometimes slows down in one of the two windows, and capturing the video stream twice seems heavy.
I tried to use FBOs, with GLEW or with the wgl functions, but I got stuck on access violations on glGenFramebuffers; perhaps Borland 6 (2002) is too old to support FBOs (~2004?). I updated the drivers for my fairly recent NVIDIA card (9800 GT) and downloaded the NVIDIA OpenGL SDK (which is just an exe file, strange):
Using Frame Buffer Objects (FBO) in Borland C++ Builder 6
Is there a C++ program skeleton, or pieces of code, that would clarify how I can display in a second window the video I already display perfectly in one window?
First of all, the left and right draw buffers are not meant to be used for rendering to two different render contexts; they exist to allow stereoscopic rendering in one render context, signalled to 3D hardware (e.g. shutter glasses) by the driver. Apart from that, your graphics hardware/driver does not support that extension, whether or not the identifiers are defined in GLEW.
What you want to do is render your video frames into a texture and share that texture between the two render contexts. A texture can be used both as a render target (attached to a render buffer/FBO) and as a render source when drawing.
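A rough sketch of the sharing setup (WGL only; hdcMain and hdcPreview are assumed to be the device contexts of your two windows, and both must use compatible pixel formats):

HGLRC rcMain    = wglCreateContext( hdcMain );
HGLRC rcPreview = wglCreateContext( hdcPreview );

// Share textures/display lists between the contexts *before* any objects are
// created in rcPreview; a mismatched pixel format is a typical reason for
// ERROR_INVALID_HANDLE here
if( !wglShareLists( rcMain, rcPreview ) )
{
    // handle the error, e.g. check GetLastError()
}

// In rcMain: render the video frame into a texture (via an FBO or glCopyTexSubImage2D).
// In rcPreview: bind the same texture name and draw a textured quad.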
There are numerous render-to-texture examples out there, though most are coded in plain C. If you can read German, you may want to check DelphiGL.com; the people there have very good OpenGL knowledge and quite a useful wiki with docs, examples and tutorials.