I am looking for the ability to open OpenGL contexts and draw with native OpenGL on Windows, macOS, Linux distros with X11, Android, and iOS.
I don't want to rely on the "native" device framework for the actual UI, and I don't need native components; all I want is an OpenGL context to draw in natively. Many of the cross-platform SDKs, like Marmalade and MoSync, focus on wrapping the native UI components and the like, but all I need is an OpenGL context to draw as I intend to: absolutely no native UI functionality is required. However, access to native hardware features like the microphone, camera, and other sensors is desired if possible, as well as access to audio/video/network.
I don't want to use Qt; I want to do something closer to the hardware, working at the low level. The general idea is to make a lightweight, cross-platform, hardware-accelerated GUI, written at a level low enough to be truly hardware native, without relying on any native software framework. I know that for Android I may have to use a Java wrapper to launch the native code, but the idea is to keep this wrapper minimal, so that very few modifications are needed to deploy the low-level, and thus hopefully TRULY cross-platform, code that depends only on OpenGL hardware and an OpenGL context to work with.
So I need a bare-minimum solution that avoids non-cross-platform features as much as possible.
At the moment the only library that comes to mind is SDL, but I am not sure it supports Android and iOS properly, so besides library recommendations, more information on how SDL handles Android and iOS devices and their hardware is welcome too.
How about:
GLFW
SDL
GLUT (FreeGLUT or OpenGLUT)
Blender's GHOST framework?
Essentially they open a window, create an OpenGL context on it, deliver the input events to you, and leave the rest up to you.
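For instance, a complete GLFW program looks roughly like this (a minimal sketch using the GLFW 3 API; everything drawn inside the loop is up to you):

    // Minimal GLFW 3 sketch: window + OpenGL context + event loop, nothing else.
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit())
            return -1;

        GLFWwindow* window = glfwCreateWindow(640, 480, "Raw GL", NULL, NULL);
        if (!window) {
            glfwTerminate();
            return -1;
        }

        glfwMakeContextCurrent(window);  // the context you draw with

        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT);
            // ...draw your entire GUI with plain OpenGL calls here...
            glfwSwapBuffers(window);
            glfwPollEvents();            // input events are delivered via callbacks
        }

        glfwTerminate();
        return 0;
    }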
What you want is nothing more than a platform-specific OpenGL context setup, which is quite simple and well documented: the NeHe OpenGL tutorial provides code that does just this for many environments here (the explanation is Windows-specific; scroll down for the code for the other OSes).
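To give a feel for what that per-platform setup looks like, here is a rough Windows/WGL sketch of the same steps the NeHe code performs (error handling omitted; it assumes an HWND hwnd created with the usual Win32 window code):

    // Sketch of Win32 OpenGL context setup; assumes an existing HWND `hwnd`.
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);   // pick a matching pixel format
    SetPixelFormat(hdc, format, &pfd);

    HGLRC context = wglCreateContext(hdc);       // create the GL context
    wglMakeCurrent(hdc, context);                // bind it to this thread

    // ...render with OpenGL, then present:
    SwapBuffers(hdc);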
Once you have the OpenGL context, nothing prevents you from creating a full GUI with all OpenGL elements.
If you want, you could use Qt just to set up an OpenGL context (i.e. don't use any QWidgets or anything other than the window showing your OpenGL scene). It takes care of the whole setup process, but for only that, Qt becomes a huge dependency, as it really only replaces at most about 100 lines of code per platform.
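For illustration, the Qt route could look like this (a sketch using the older QGLWidget API from the QtOpenGL module; newer Qt versions offer QOpenGLWidget instead):

    // Qt used only to obtain a window with an OpenGL context.
    #include <QApplication>
    #include <QGLWidget>

    class GLView : public QGLWidget {
    protected:
        void paintGL() {
            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // ...draw your own GUI in plain OpenGL here...
        }
    };

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        GLView view;
        view.show();        // one window, no QWidget-based UI
        return app.exec();
    }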
With regards to SDL+Android, have you checked the README?
And for iOS check the same file here.
Related
Is there not a simple SDK for OpenGL? Why are there GLUT or GLEW, and what are they written using? Which one should I use? There are so many frameworks out there that it is confusing. Why isn't there just one SDK for OpenGL?
Why are there GLUT or GLEW, and what are they written using?
GLUT, GLFW and similar frameworks exist because the system-level interfaces for creating a window and an OpenGL context are extremely versatile in what they can do, but it takes a lot of code just to create a window, then some more to create the OpenGL context, and then lots more to also do things like UI event management and so on.
These system-level APIs are that verbose because, at the system level, you don't want to pin down what can and cannot be done with them.
So the three calls
glutInitDisplayMode(…);
glutCreateWindow(…);
glutMainLoop();
encapsulate about 1.5k lines of code dealing with just the system. But then with GLUT you don't get nice buttons, menu bars, and the like. For that you'd use a different framework, like Qt, which has even more code but uses the same system-level APIs as GLUT does. The same goes for GLFW and so on.
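Fleshed out into a runnable program (a minimal sketch; FreeGLUT assumed), those three calls plus a display callback are all it takes:

    #include <GL/glut.h>

    void display(void) {
        glClear(GL_COLOR_BUFFER_BIT);
        // ...your drawing here...
        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);                      // must precede the calls below
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
        glutCreateWindow("GLUT window");
        glutDisplayFunc(display);                   // GLUT needs a display callback
        glutMainLoop();                             // never returns in classic GLUT
        return 0;
    }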
GLEW we have because OpenGL can be extended, and the system-level interface for extending it is again pared down for minimalism and flexibility.
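Concretely, here is what GLEW saves you from doing by hand (a sketch; Windows shown, where the PFN... typedef comes from glext.h; other platforms use glXGetProcAddress or eglGetProcAddress instead):

    // Without GLEW: fetch each extension entry point yourself, one at a time.
    PFNGLGENBUFFERSPROC glGenBuffersPtr =
        (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
    // ...and so on for every single function you need.

    // With GLEW: one call, made after the context is current, resolves them all.
    if (glewInit() != GLEW_OK) {
        /* handle the error */
    }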
Why isn't there just one SDK for OpenGL?
Because in the end OpenGL is a system-level API, and the "SDK" comes ready to use with your regular system default compiler.
If you want to, you can create OpenGL programs without GLEW, GLUT and so on. But it is tedious.
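To illustrate the tedium, this is roughly what the raw Xlib/GLX route looks like on Linux (a sketch with error handling omitted; it is the kind of boilerplate GLUT hides behind glutCreateWindow):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main() {
        Display* dpy = XOpenDisplay(NULL);
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
        XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

        // The window must be created with a colormap matching the GL visual.
        Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                        vi->visual, AllocNone);
        XSetWindowAttributes swa;
        swa.colormap   = cmap;
        swa.event_mask = ExposureMask | KeyPressMask;
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 640, 480,
                                   0, vi->depth, InputOutput, vi->visual,
                                   CWColormap | CWEventMask, &swa);
        XMapWindow(dpy, win);

        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
        glXMakeCurrent(dpy, win, ctx);

        // ...event loop, GL drawing, glXSwapBuffers(dpy, win), teardown...
        return 0;
    }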
I have been using a few cross-platform GUI libraries (such as FLTK, wxWidgets, GTK+), but I feel like none fulfils my needs, as I would like to create something that looks the same regardless of the platform. (I understand that some people are against building GUIs that don't have a native look on their platform, but that's not the issue here.)
To build my controls, I usually rely on the basic shapes provided by the library and make my way up, binding and coding everything together...
So I decided to give it a try and do some OpenGL for 2D GUI programming (as it would still be cross-platform). With that in mind, I couldn't help noticing that the applications I have written using wxWidgets and FLTK usually have an average RAM consumption of 1/2 MB, whereas a very basic OpenGL window with a simple background ranges from 6 to 9 MB.
This brings me to the actual question for this thread:
I thought that all rendering of the screen was done using either OpenGL or DirectX (under the covers).
Could someone please explain, or link me to some sort of article that could give me some insight into, how these things actually work?
Thanks for reading!
These multiplatform toolkits usually support quite a lot of backends which do the drawing. Even though some of the toolkits support OpenGL as a backend, the default is usually the "native" backend.
Take a look, e.g., at Qt. On Windows its native backend uses GDI for drawing. On Linux it uses XRender, I think, and similarly on Symbian and Mac. Qt also has its own software rasterizer. And of course there is an OpenGL backend.
So why can an application using one of these GUI toolkits take less memory than a simple OpenGL application? If the toolkit uses the "native" backend, everything is already loaded in memory, because it is very likely that all the visible GUIs use the same drawing API. The native APIs can also use just one buffer representing the whole screen, in which all applications draw.
However, when using OpenGL you have your own buffer that represents the application window. Not to mention that an OpenGL application usually has several framebuffers, like the z-buffer, stencil buffer, and back buffer, which are not essential for 2D drawing but take up some space (even though it's probably space in graphics card memory). Finally, when using OpenGL, it is possible that the necessary libraries are not yet loaded.
Your question is exceedingly vague, but it seems like you're asking about why your GL app takes up more memory than a basic GUI window.
It's because it's an OpenGL application. This means it has to store all of the machinery needed to make OpenGL work. It means it needs a hefty-sized framebuffer: back buffer, z-buffer, etc. It needs a lot of boilerplate to function.
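As a rough back-of-the-envelope illustration (my numbers, not the asker's measurements): a 1280x1024 window with a 32-bit color buffer needs 1280 x 1024 x 4 bytes, roughly 5 MB, per buffer. Add a back buffer and a 24/8 depth-stencil buffer of the same dimensions and a handful of such buffers easily accounts for the 6-9 MB gap observed above, even before counting the driver and library code itself.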
Really, I wouldn't worry about it. It's something every application does.
I was wondering: what is the difference between windows that render images to the screen (such as with SDL, SFML, or OpenGL) and the classic GUI window (with the gray background by default) where you can place buttons, as in Qt for C++ or AWT/Swing in Java?
What is going on in the code behind them? Are they the same kind of window? Is there a rendering layer over the graphics window that allows it to display such images?
Well, first of all, they are different APIs. SDL and SFML are libraries aimed at making games and quite possibly other applications. OpenGL is a graphics API; it is not a full suite of libraries.
Note also that SFML pretty much uses OpenGL to render to the window. The actual window itself is created via platform-specific functions: the Win32 API is used on Windows, and the X11 Window System is generally used on Linux.
The "classic GUI window" is pretty much the platform specific APIs. The differences in background code is really just defined by the purpose of the API. Note that in the end of the line Qt/SFML/SDL all go down to the platform specific API. OpenGL even requires you to interface with the platform specific API. SFML/SDL/QT essentially do the lower level work for you.
I hope I gave you what you were looking for, as this question really has a wide range of answers.
I have a simulation program (C++) that runs on the command line. It is platform-independent (currently compiling and running on Windows, Mac OS X, and Linux). When the simulation ends, I visualize the result with SDL; it is a very basic 2D view, mainly colored squares next to each other.
I would like to have a user interface on top of the simulation so that I can start and pause it, and change the parameters on the fly. Something pretty simple, I guess. Well, ideally I would also add a grapher somewhere to see the evolution of some parameters over time.
Now, I am wondering what direction I should go.
Should I try to use one of the UI libraries for SDL?
Or maybe wxWidgets in conjunction with SDL?
Or simply wxWidgets and get rid of SDL?
Do you have any experience with this?
Thanks in advance
Barth
PS: I tried to use AGAR, an SDL UI library. It seemed very promising, but I couldn't get it working. Not even the hello world.
It may be worth your time to look into Qt. It is generally the most mature free GUI framework available. It is cross-platform, and it has hardware-accelerated rendering if your drawing needs some speed.
Here is a comparison posted on the wxWidgets site.
In the end, if your windowing needs are minimal, you should choose the framework you are most comfortable with.
Probably using wxWidgets without SDL would be the easiest way to go. SDL is a media layer: it's meant for cross-platform media application development. As you only need a graphical display, you only need wxWidgets, and it will be a lot easier too! (See the sketch after the list below.)
You would benefit from SDL if:
you'd need very fast blitting of a very large number of surfaces (we're talking the 60 fps range here)
you'd use RLE, color keying or other graphics operations
you'd use other media (sound, advanced real-time input, etc)
you'd need to run the software on embedded systems (handheld consoles, etc)
If the answer to all 4 is "no", then you won't benefit from SDL, and using wx alone will be much easier.
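For reference, the "wx alone" route could start out like this (a hedged sketch using the wxWidgets 3.x API; the start/pause hook is a hypothetical placeholder for your simulation code):

    #include <wx/wx.h>

    class SimFrame : public wxFrame {
    public:
        SimFrame() : wxFrame(NULL, wxID_ANY, "Simulation") {
            wxButton* toggle = new wxButton(this, wxID_ANY, "Start / Pause");
            toggle->Bind(wxEVT_BUTTON, &SimFrame::OnToggle, this);
        }
    private:
        void OnToggle(wxCommandEvent&) {
            // ...start or pause the simulation here (hypothetical hook)...
        }
    };

    class SimApp : public wxApp {
    public:
        bool OnInit() { (new SimFrame())->Show(true); return true; }
    };

    wxIMPLEMENT_APP(SimApp);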
I've been challenged with a C++ 3D application project that will use 3 displays, each one rendering from a different camera.
Recently I learned about Ogre3D, but it's not clear whether it supports output from different cameras to different displays/GPUs.
Does anyone have experience with a similar setup, with Ogre or another engine?
At least on most systems (e.g., Windows, MacOS) the windowing system creates a virtual desktop, with different monitors mapped to different parts of the desktop. If you want to, you can (for example) create one big window that will cover all three displays. If you set that window up to use OpenGL, almost anything that uses OpenGL (almost certainly including Ogre3D) will work just fine, though in some cases producing that much output resolution can tax the graphics card to the point that it's a bit slower than usual.
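As a sketch of that "one big window" approach on Windows (my example, not from Ogre3D; className and hInstance are assumed to come from the usual window-class registration):

    // Size a borderless window to the whole virtual desktop, spanning all monitors.
    int x = GetSystemMetrics(SM_XVIRTUALSCREEN);
    int y = GetSystemMetrics(SM_YVIRTUALSCREEN);
    int w = GetSystemMetrics(SM_CXVIRTUALSCREEN);
    int h = GetSystemMetrics(SM_CYVIRTUALSCREEN);

    HWND hwnd = CreateWindowEx(0, className, "Spanning GL window",
                               WS_POPUP | WS_VISIBLE,
                               x, y, w, h, NULL, NULL, hInstance, NULL);
    // Then set up the OpenGL context on this window as usual, and carve the
    // viewport into three regions with glViewport, one per camera.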
If you want to deal with a separate window on each display, things might be a bit more complex. OpenGL itself doesn't (even attempt to) define how to handle display in multiple windows; that's up to a platform-specific set of functions. On Windows, for example, you have a rendering context for each window and use wglMakeCurrent to pick which rendering context you draw to at any given time.
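A sketch of that per-window flow (the hdc/hglrc arrays, one pair per window, and the drawCameraView helper are hypothetical names for illustration):

    // Render each camera into its own window by switching contexts.
    for (int i = 0; i < 3; ++i) {
        wglMakeCurrent(hdc[i], hglrc[i]);   // select window i's context
        drawCameraView(i);                  // draw that camera's view
        SwapBuffers(hdc[i]);                // present window i
    }
    wglMakeCurrent(NULL, NULL);             // unbind when done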
If memory serves, the Windows port of Ogre3D supports multiple rendering contexts, so this shouldn't be a problem either. I'd expect it can work with multiple windows on other systems as well, but I haven't used it on any other systems, so I can't say with any certainty.
My immediate guess, however, is that the triple monitor support will be almost inconsequential in your overall development effort. Of course, it does mean that you (can tell your boss) need a triple monitor setup for development and testing, which certainly isn't a bad thing! :-)
Edit: OpenGL itself doesn't specify anything about full-screen windows vs. normal windows. If memory serves, on Windows you get a full-screen application using ChangeDisplaySettings with CDS_FULLSCREEN. After that, it treats essentially the entire virtual desktop as a single window. I don't recall having done that with multiple monitors, though, so I can't say much with any great certainty.
There are several things to be said about multihead support in the case of OGRE3D. In my experience, a working solution is to use the source version of Ogre 1.6.1 and apply this patch.
Using this patch, users have managed to render an Ogre application on a six-monitor configuration.
Personally, I've successfully applied this patch and used it with the StereoManager plugin to hook up Ogre applications to a 3D projector. I only used the Direct3D9 backend. The StereoManager plugin comes with a modified demo (Fresnel_Demo), which can help you set up your first multihead application.
I should also add that the multihead patch is now part of the Ogre core as of version 1.7. Ogre 1.7 was recently released as an RC1, so this might be the quickest and easiest way to get it working.