Creating a GUI in OpenGL, is it possible? [closed] - c++

I'm trying to create a custom GUI in OpenGL from scratch in C++, and I'm wondering whether that's possible or not.
I'm getting started on some code right now, but I'm going to stop until I get an answer.

YES.
In video games, the UI is generally implemented with the same graphics API as the rest of the game: OpenGL, Direct3D, Metal or Vulkan. The game's rendering surface runs at a much higher frame rate than the OS UI toolkit, so mixing the two would slow the game down.
Start by writing a view class as a base class, then implement the actual UI classes (button, table and so on) as subclasses of it.
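A minimal sketch of that hierarchy (the names and members here are illustrative, not from any particular library):

    #include <functional>
    #include <vector>

    // Base class: every widget knows its rectangle and how to draw itself.
    struct View {
        float x = 0, y = 0, width = 0, height = 0;
        std::vector<View*> children;

        virtual ~View() = default;
        virtual void draw() { for (View* c : children) c->draw(); }

        // Returns true if the click was consumed by this view or a child.
        virtual bool onClick(float px, float py) {
            for (View* c : children)
                if (c->onClick(px, py)) return true;
            return false;
        }
        bool contains(float px, float py) const {
            return px >= x && px < x + width && py >= y && py < y + height;
        }
    };

    // Concrete widget inheriting from the base class.
    struct Button : View {
        std::function<void()> onPress;

        void draw() override {
            // Issue the OpenGL calls for a textured quad, label, etc. here.
            View::draw();
        }
        bool onClick(float px, float py) override {
            if (contains(px, py)) { if (onPress) onPress(); return true; }
            return false;
        }
    };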
Building a UI with a graphics API is similar to building a game, in that you use the same graphics techniques: texture compression, mipmaps, MSAA, various special effects and so on. Handling fonts, however, is a huge chunk of work on its own, which is why many game developers reach for a game engine or an existing UI library.

https://www.twitch.tv/heroseh
They work on a pure C + OpenGL user interface library daily, at about 9 AM (EST).
Here is their GitHub repo for the project:
https://github.com/heroseh/vui
I myself am in the middle of stubbing in a half-assed user interface that is just a list of clickable buttons. ( www.twitch.com/kanjicoder )
The basic idea I ran with is that both the GPU and CPU need to know about your data, so I store all the required variables for my UI in a texture and then sync that texture with the GPU every time it changes. On the CPU side it's a uint8 array of bytes. On the GPU side it's an unsigned 32-bit texture. I have getters and setters on both the GPU (GLSL) and CPU (C99) sides that manage packing and unpacking variables into and out of the pixels of that texture.
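In sketch form, the scheme looks something like this (the names and exact texture format are illustrative):

    #include <cstdint>
    #include <cstring>
    #include <glad/glad.h>   // any OpenGL loader, assumed initialized

    constexpr int TEX_W = 512, TEX_H = 512;
    static uint8_t uiState[TEX_W * TEX_H * 4];   // CPU copy: raw bytes
    static bool    uiDirty = false;

    // CPU-side setter: pack one 32-bit value into one texel's four bytes.
    void setU32(int texelIndex, uint32_t value) {
        std::memcpy(&uiState[texelIndex * 4], &value, sizeof(value));
        uiDirty = true;
    }

    // Once per frame: re-upload only if something changed. The texture is
    // GL_R32UI, so GLSL reads a texel back with texelFetch() and unpacks it.
    void syncUiTexture(GLuint tex) {
        if (!uiDirty) return;
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                        GL_RED_INTEGER, GL_UNSIGNED_INT, uiState);
        uiDirty = false;
    }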
It's a bit crazy, but I wanted the "lowest-common-denominator" method of creating a UI so I can easily port it to any graphics library of my choice in the future. For example, eventually I might want to switch from OpenGL to Vulkan. So if I keep most of my logic as manipulations of a big 512x512 array of pixels, I shouldn't have too much refactoring work ahead of me.

Related

Want to write a screen capture recording app, where do I start? [closed]

I want to start writing an application that can capture screen content, or capture specific full screen app content, but I am not sure where to start.
Ideally this would be written using OpenGL but I don't know the capabilities for OpenGL to capture application screen content. If I could use OpenGL to capture, let's say World of Warcraft, that would be perfect.
"the capabilities for OpenGL to capture application screen content"
...are nonexistent. OpenGL is an API for getting things onto the screen. There is exactly one function for retrieving pixels back from OpenGL (glReadPixels), and it is only specified to work for things that were drawn by the very OpenGL context from which glReadPixels is called; even that is highly unreliable for anything but off-screen FBOs, since the operating system is at liberty to clobber, clear or otherwise alter the main window's framebuffer contents at any time.
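The one reliable case, reading back your own off-screen rendering, looks roughly like this (a sketch assuming an FBO you rendered into and a loader such as glad):

    #include <cstdint>
    #include <vector>
    #include <glad/glad.h>

    // Read back a w x h RGBA image from a framebuffer object you own.
    std::vector<uint8_t> readBackFbo(GLuint fbo, int w, int h) {
        std::vector<uint8_t> pixels(static_cast<size_t>(w) * h * 4);
        glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        return pixels;                         // note: bottom-up row order
    }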
Note that you can find several tutorials scattered around the internet on how to take screenshots with OpenGL. None of them works on modern systems, because the undefined behaviour they relied on (all windows on a screen sharing one large contiguous region of the GPU's scanout framebuffer) no longer holds: every window owns its own independent set of framebuffers, and the on-screen image is composited from those.
Capturing screen content is a highly operating-system-dependent task, and there is no silver bullet. Some systems provide ready-to-use screen capture APIs, though depending on your performance requirements those APIs may not be the best choice. Some capture programs inject a DLL into each and every process to tap into the rendering right at the source of the generated images. And some screen capture systems install a custom kernel driver to access the graphics card's video scanout buffer (which is usually mapped into system address space), bypassing the graphics card's driver to copy out the contents.

Is OpenGL the right choice for highest quality renders, without time constraints? [closed]

Background: I'm writing a program that creates generative art. I care about creating one final static image, and I don't need to render a bunch of frames per second. So far it's been 2D, and I'm on a Mac, so I've been using the Core Graphics (aka Quartz) 2D drawing API. I've reached its limits, so I started messing with OpenGL, but I'm not happy with the antialiasing so far.
I'm wondering if I should invest in learning it, or whether it's not built for what I want. Is OpenGL more about creating moving graphics as fast as possible, mainly for games? If I want the highest quality rendering (high resolution, smooth curves, best antialiasing, arbitrary lighting and shading algorithms) do I need to write my own renderer, or does it make sense to learn OpenGL? Will I be able to use it as a base?
OpenGL is not a general purpose graphics library.
OpenGL is an API designed around controlling GPUs for the purpose of drawing realtime graphics. If you know how to use it, you can produce high quality, close-to-photorealistic images with it. But it takes a lot of effort to do so.
Antialiasing is actually rather easy to do with high quality: select a multisampled framebuffer format with a high subsample density, enable multisampling and render.
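With GLFW, for example, that request is just a window hint (a rough sketch; the sample count you actually get is driver-dependent):

    #include <glad/glad.h>
    #include <GLFW/glfw3.h>

    int main() {
        glfwInit();
        glfwWindowHint(GLFW_SAMPLES, 8);   // ask for an 8x multisampled framebuffer
        GLFWwindow* win = glfwCreateWindow(1024, 768, "AA", nullptr, nullptr);
        glfwMakeContextCurrent(win);
        gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);
        glEnable(GL_MULTISAMPLE);          // usually on by default, but be explicit
        // ...render as usual; edges are resolved from 8 subsamples per pixel
    }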
However, your use case sounds more like a task for an offline renderer such as RenderMan, Pixie or YafaRay.
You want RenderMan, Pixar's CGI rendering software. Your program could either generate RIB files, which are intended to be the 3D equivalent of PostScript files; or you could use the RenderMan C API directly.
RM has a richer set of built-in primitives, for instance quadrics and subdivision surfaces, and since it's designed for film work you can do everything from toon shading to photorealism.
3Delight have a free/low cost RM renderer you can use on single systems, and Pixar announced a month or so back that they will be providing a free version of RenderMan for individual use Real Soon Now.
The RenderMan Companion by Steve Upstill is the classic guide to programming RM. A more recent book is Rendering for Beginners by Saty Raghavachary.
Hope this helps.
Is OpenGL the right choice for highest quality renders, without time constraints?
No. As datenwolf has explained, OpenGL is designed to take full advantage of the capabilities of the GPU to do real time rendering. There are existing products designed for extremely high quality renders.
Is OpenGL the right choice for your project?
Maybe. None of those capabilities requires a dedicated high quality renderer, and all of them can be done fairly easily in OpenGL. The main thing OpenGL does not support is ray tracing.* If you need your renders to be ray traced, you would be better off looking at other options.
*It is theoretically possible to do ray tracing in OpenGL, but it would be a lot of work.
For quality? No.
Sure, you can get far, but as fintelia stated, OpenGL does not support raytracing (you could do that with OpenCL, but that's not really OpenGL).
There are indeed some impressive OGL/D3D renderers out there, but most of the renderers in use today are software ones (CPU parallelism, CUDA, compute): V-Ray, Mental Ray, RenderMan.

Need advice on 3D and 2D C++ framework selection [closed]

We are a small company with only 2 programmers. We currently make small 2D and 3D games for desktop and mobile using Adobe Flash/Air. We want to stop using that framework and start learning and developing in C++, because there are many more, and better, libraries and frameworks available for C++.
I'm not sure which libraries to use for rendering. I know that Ogre3D is a great rendering engine for 3D content, but sometimes we need to make 2D or "2.5D" games, sometimes with video playback, and all of that needs to be mixed with 3D scenes.
I know there are 2D frameworks like cocos2d-x and SFML that work on top of OpenGL (I don't know much about OpenGL) and can do all the 2D things I need, but can those frameworks be combined with Ogre3D? And can that be done without knowing how all of Ogre3D's internals, or OpenGL, work?
If Ogre3D can be combined with any 2D engine, what do I need to learn to merge the frameworks?
Given that you have been using Flash, I am guessing you are not porting old C++ code. Also, since you don't want to deal with the internals of the framework you're using, or with how OpenGL actually works, you don't need a low-level language like C++. An abundance of open source libraries is not a very good reason to program your game in C++ either.
Unity3D has a free basic license, and provides everything you need out of the box.
For now, you can use planes with textures to do your 2D work, but Unity will also be coming out with a set of Native 2D Tools in the near future. Also, a new GUI system is being created.
For any C++ library you think you may need, there is probably something already built into Unity that does what you want. If there isn't, there is probably a .NET port you can use. And if all else fails, you can write a C interface for any library you need and use it as a plugin in Unity.
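That C interface is just an extern "C" shim, so the exported symbols are unmangled and Unity can bind them; a hypothetical example (the "mylib" names are made up):

    // mylib_plugin.cpp -- compiled into a shared library (.dll/.so/.dylib)
    #include "mylib.hpp"   // the hypothetical C++ library you want to expose

    #if defined(_WIN32)
      #define PLUGIN_API extern "C" __declspec(dllexport)
    #else
      #define PLUGIN_API extern "C"
    #endif

    // Plain-C entry point: no C++ name mangling, only C-compatible types,
    // so C# can bind it with [DllImport("mylib_plugin")].
    PLUGIN_API float MyLib_Evaluate(float input) {
        return mylib::evaluate(input);   // forward to the real C++ API
    }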
One big problem with Unity, though, is that you need Unity Pro to use plugins. Unity licenses are per-platform, so if you decide to use plugins and release your game for multiple platforms, you could end up paying a lot of money in licensing fees.
Finally, it's not just an application framework you'll need. You'll also need a level editor. Building a 3D level editor is not a trivial task, and given that your team consists of only two people, this fact alone should be enough to seriously consider using Unity.
So unless you are porting old code, need low level access to hardware, or have specific needs for native code, my advice is don't use C++, just use Unity.
Yes, Ogre3D can handle "2D tasks" such as playing a video: the video simply gets projected onto a plane in 3D space. For pure 2D projects, however, a 3D rendering engine such as Ogre3D is usually overkill. If you are talking about 2.5D, though, Ogre3D is back in play.
Regarding integrations: I'm not completely sure, but I would guess those other 2D frameworks need an OpenGL rendering context, which you can get from Ogre.
EDIT: The same question has been asked in the official Ogre3D forums.
We are another small team working on game development.
We tried many rendering engines and finally settled on the Irrlicht rendering engine. Irrlicht is in no way better than Ogre3D, and I am not trying to prove that; we simply found Irrlicht more flexible for our needs. It also supports 2D rendering, and it is quite fast with batching. Irrlicht is easy to port to other platforms; it took us a week to port it to Google Chrome NaCl.
Irrlicht is a very basic rendering system that supports OpenGL and OpenGL ES, so it's easy for you to go mobile. You can add advanced features without much effort. Some of our games are available for iOS, Android, Windows PC, Mac OS X, Linux and Google Chrome Native Client.

OpenGL Core and Compatibility [closed]

I'm trying to learn OpenGL. I've got experience with C and C++, setting up a build environment, and all that jazz, but I'm trying to figure out a good starting point.
I'm aware of the fixed-function pipeline that was prominent in OpenGL <= 2.1, and it seems relatively easy to get started with. However, the core profile that OpenGL pushes in versions >= 3.1 makes me want to stay away from the FFP due to deprecation. But I'm confused as to how it all works in 3.1 and above. In 2.1 and below you have your glBegin(GL_WHATEVER) and glEnd() when you're drawing shapes. The first thing I noticed when looking through the core profile API is that those two calls are gone. I realize there's probably a simple replacement, but it's quite shocking to see something so seemingly useful taken out of such a basic task. It almost seems like deprecating printf() from the C standard library. And the newest Red Book still uses the old deprecated code, which further muddles my thinking.
When reading through various answers to similar questions I see the typical "shader based" or "it's all done with shaders" responses. If I want to draw a simple white square onto a black background (the first example in the newest Red Book), I don't understand how a shader is relevant to drawing a box at all. Shouldn't they do... well... shading? I've looked into buying the Orange Book and the Blue Book, but I don't want to spend any more money on something that's going to hide it all behind a library (the Blue Book) or talk about programming a shader to perform some lighting task in a 3D environment (the Orange Book).
So where do I begin? How do I draw a box (or a cube, or a pyramid, or whatever) using nothing but the core profile? I'm not asking for a code snippet here; I'm looking for an extensive tutorial or a book that someone could point me to. If this has been answered previously and I didn't find it, please redirect me.
The reason for the sudden "complexity" of the core profile is that the fixed-function pipeline was not representative of what the GPU actually does for you. Much of the functionality ran on the CPU, and only the actual drawing happened on the GPU. The other problem with the fixed pipeline is that it's a losing battle: it has so many knobs and switches! Not only is it painfully complicated already, it could never keep up with the endless demand for new ways to draw scenes. Enter GLSL, and you have the ability to tell the GPU precisely how you want to draw your scene. This shifts the power to the developer and frees everyone from having to wait for OpenGL updates for new switches and knobs.
Now, regarding your frustration at the sudden loss of glBegin and glEnd: there are simple frameworks that mimic their behavior on the new core profile, and that is a good thing. Again, the power shifts to developers to choose how they approach the pipeline. However, there is nothing wrong with practicing 3D on the FFP. You need to learn 3D math and concepts first anyway, and those concepts apply regardless of API. (Matrix math will save your life in both OpenGL and Direct3D.) So first you practice with simple triangles and colors. Then you move on to textures (with texture coordinates). Then you add normals (with lighting). Then, once you understand all those concepts, you stop using glBegin/glEnd and start batching large amounts of vertex data into buffers. You will not understand glDrawElements all that well if you do not understand glBegin/glEnd anyway, so it's OK to learn on those tools.
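To make that endpoint concrete, here is a rough core-profile sketch of the "white square" from the question, assuming a 3.3 context and a loader such as glad are already set up (error checking omitted):

    #include <glad/glad.h>

    // Minimal shaders: the vertex shader passes positions through and the
    // fragment shader outputs plain white -- "shading" can be this trivial.
    const char* vsSrc = R"(#version 330 core
    layout(location = 0) in vec2 pos;
    void main() { gl_Position = vec4(pos, 0.0, 1.0); })";

    const char* fsSrc = R"(#version 330 core
    out vec4 color;
    void main() { color = vec4(1.0); })";

    GLuint compile(GLenum type, const char* src) {
        GLuint s = glCreateShader(type);
        glShaderSource(s, 1, &src, nullptr);
        glCompileShader(s);                 // check GL_COMPILE_STATUS in real code
        return s;
    }

    void setupAndDraw() {
        GLuint prog = glCreateProgram();
        glAttachShader(prog, compile(GL_VERTEX_SHADER, vsSrc));
        glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fsSrc));
        glLinkProgram(prog);

        // Two triangles forming a square: this replaces glBegin/glVertex/glEnd.
        const float quad[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.5f,  0.5f,
                               -0.5f, -0.5f,  0.5f,  0.5f, -0.5f,  0.5f };
        GLuint vao, vbo;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(0);

        glUseProgram(prog);
        glDrawArrays(GL_TRIANGLES, 0, 6);   // the batched equivalent of glBegin/glEnd
    }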

Modeling in OpenGL [closed]

I'm trying to model a seashell using a bunch of polygons, for example as shown in: link text
But I am new to OpenGL. How would I get started modeling this, and what would the procedure be like?
I am assuming you are going to generate the geometry procedurally rather than "sculpting" it.
What you need to do is generate your geometry just as in the mathematics example and store it in vertex buffer objects (VBOs). There are multiple ways of doing this, but generally you will want to store your vertex information (position, normal, texture coords if any) in one buffer, and the way those vertices are grouped into faces in another (called an index array).
You can then bind these buffers and draw them with a single call to glDrawElements().
Be careful that the vertices in the faces are all in the same winding order (counter-clockwise or clockwise) and that the winding order is specified correctly to OpenGL, or you will get your shell inside out!
VBOs are supported in OpenGL 1.5 and up. In the extremely unlikely event that your target platform does not support them (update your drivers first!), you can use vertex arrays instead. They do pretty much the same thing, but are slower because the data gets sent over the bus every frame.
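A rough sketch of that pipeline, with a toy spiral formula standing in for a real seashell equation (all names here are illustrative):

    #include <cmath>
    #include <cstdint>
    #include <vector>
    #include <glad/glad.h>

    // Evaluate a growing spiral tube on an nu x nv parameter grid.
    void buildShell(std::vector<float>& verts, std::vector<uint32_t>& idx,
                    int nu = 64, int nv = 256) {
        const float PI = 3.14159265f;
        for (int j = 0; j < nv; ++j) {
            float v = 6.0f * PI * j / (nv - 1);     // along the spiral
            float s = std::exp(0.12f * v);          // growth factor
            for (int i = 0; i < nu; ++i) {
                float u = 2.0f * PI * i / (nu - 1); // around the tube
                verts.push_back(s * (1.0f + 0.5f * std::cos(u)) * std::cos(v));
                verts.push_back(s * (1.0f + 0.5f * std::cos(u)) * std::sin(v));
                verts.push_back(s * (0.5f * std::sin(u) + 0.3f));
            }
        }
        // Index array: two triangles per grid cell, all in the same winding
        // order (double-check it against your glFrontFace/culling settings).
        for (int j = 0; j + 1 < nv; ++j)
            for (int i = 0; i + 1 < nu; ++i) {
                uint32_t a = j * nu + i, b = a + 1, c = a + nu, d = c + 1;
                idx.insert(idx.end(), { a, b, d,  a, d, c });
            }
    }

    void uploadAndDraw(const std::vector<float>& verts,
                       const std::vector<uint32_t>& idx) {
        GLuint vbo, ibo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                     verts.data(), GL_STATIC_DRAW);
        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx.size() * sizeof(uint32_t),
                     idx.data(), GL_STATIC_DRAW);
        // ...set up the attribute pointers for your context, then:
        glDrawElements(GL_TRIANGLES, (GLsizei)idx.size(), GL_UNSIGNED_INT, nullptr);
    }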
While modelling objects procedurally (i.e. generating the coordinates as numbers in the code) may be OK for learning purposes, it's usually not what you want: it gets very impractical for anything more complicated than a few triangles or a cylinder. Some people consider procedural generation an art, but you need a lot of practice to achieve nice-looking (not to mention realistic) results with that approach.
If you want to display a more complex, realistic model, the approach is to:
create the model in a modelling tool (like the free and powerful Blender)
save it to a file in a given format,
in your program, load the object from the file to memory (either to your RAM to display using Vertex Arrays or to your GPU memory directly using a Vertex Buffer Object) and display it.
A common format (though an old and inconvenient one) is .obj (Wavefront OBJ); Blender can save to it, and you can easily find an OpenGL OBJ loader with a search (or roll your own: not trivial, but manageable).
An alternative is to write an export script for Blender (very easy if you know Python) and save the model as a simple binary file containing vertices etc.; then you can load it in your application code very easily.
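The application side of such a scheme might look like this (the binary layout is a made-up example: a vertex count followed by raw xyz floats):

    #include <cstdint>
    #include <fstream>
    #include <vector>

    // Hypothetical format written by the Blender export script:
    // [uint32 vertexCount][vertexCount * 3 floats: x, y, z]
    std::vector<float> loadModel(const char* path) {
        std::ifstream in(path, std::ios::binary);
        uint32_t count = 0;
        in.read(reinterpret_cast<char*>(&count), sizeof(count));
        std::vector<float> verts(count * 3);
        in.read(reinterpret_cast<char*>(verts.data()),
                static_cast<std::streamsize>(verts.size() * sizeof(float)));
        return verts;   // ready to hand straight to glBufferData()
    }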