Many small classes for Vulkan structs or one BIG class? (C++)

I'm learning Vulkan and followed the tutorial on its homepage up to the point where you see the first triangle on the screen. Within the tutorial everything is put into main.cpp, which easily grows past 1000 LOC.
I want to emphasize that my goal is to learn more about C++ and how it behaves "under the hood".
Now my question is whether I should refactor this giant class into multiple small ones, and what that does to the compiled code.
Assuming I want the following:
VkInstance
For this I need:
VkInstanceCreateInfo (the struct used to create the instance, which in turn needs)
std::vector<VkLayerProperties> (an array of layer properties, for example to support GLFW windows)
std::vector<const char*> (an array of extension names, for example debug utils to write to the console)
My idea would be to create the following structure of my own classes:
class Instance {
    InstanceLayers layers;
    InstanceExtensions extensions;
};
Within the constructor I would have the following:
Instance::Instance() {
    ...
    const auto requiredExtensions = extensions.getExtensions();
    ...
}
Would the compiler replace the whole function call with the function's code here (inlining), or would it emit a jump? Are jumps bad if they happen often, and where can I read up on this topic?
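A minimal sketch of the accessor in question (the member names are hypothetical), assuming the getter is defined inside the class body so its definition is visible at every call site:

    #include <vector>

    class InstanceExtensions {
        std::vector<const char*> names;
    public:
        // Definition visible in the header: with optimizations enabled,
        // the compiler will usually inline this call, so neither a call
        // nor a jump instruction is emitted at the call site.
        const std::vector<const char*>& getExtensions() const { return names; }
    };

You can inspect the generated assembly on Compiler Explorer (godbolt.org); for reading on the cost of calls, jumps and branch prediction, Agner Fog's optimization manuals are the standard reference.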

Since 3D graphics does not end with rendering a single triangle, here is one piece of advice.
Try rendering multiple complex meshes, textured with multiple images and/or lit with different lighting models. Along the way you will encounter lots of challenges in keeping the Vulkan code to a minimum. E.g., you would not want to copy'n'paste shader module loading/creation code, texture loading code, etc. Render passes and framebuffers (for offscreen rendering, not only the swapchain and implicit image views from the tutorial), pipelines and descriptor sets: these are the items on which you will concentrate the most and which require management. It is unlikely that the initialization process and the representation of extension lists will be your biggest problem.
Good luck with mastering C++ (no matter whether you use classes or not). Be practical; don't try to optimize everything from the start. Usually generalization comes from extracting similar parts from already-written code covering lots of cases, not from designing it from the ground up.

Related

C++ OpenGL wrapper: interface similar to fixed pipeline, can export .collada

I have OpenGL code that uses the fixed pipeline.
Killing two birds with one stone, I need a wrapper that can help me with the following tasks:
Convert the code to the new shader-based pipeline with minimal effort.
I have a class that calls OpenGL functions, such as glBegin(triangles/lines), glVertex, glPushMatrix, glTranslate, glColor, gluSphere.
Ideally, I'd like it to derive from a class that supplies these functions in the base class. Behind the scenes, it would use the same high-level logic as the fixed pipeline.
I'd like to export an OpenGL scene to .collada to load in an external renderer.
OpenGL is low-level rendering, and it doesn't have the concept of a scene. For example, from this reddit post:
"You realize that you have to write a shim to capture all API calls you are interested in to do that. Then, when finally a draw call is emitted, you have to parse every single vertex and collect the data from all over the memory, from the buffers that you have recorded from the API calls that set up VAOs, VBOs and IBOs. Then you have to parse the shader source code so that you can see which uniforms and vertex attributes contribute to vertex clip coordinate generation. Then you also have to synthesize/guess which outputs are normal, color, texture coordinate and so on from the shader source, if the resulting program even has those, .obj-file-format-wise.
This gets even more complicated if Compute is used to generate data inside the GPU for any of the buffers. If the geometry or tessellation stage is used, then you also have to implement one of those so that you get accurate outputs from the vertex processing. TL;DR - you have to write your own OpenGL 4.5 driver that does exactly the same things a real hardware driver would do. Good luck with that."
However, my scene is simple, using the fixed pipeline operations above.
I'd like the wrapper to keep track and construct a scene that can be exported.
--
EDIT: Since library recommendations are off-topic, I'll ask the following question instead.
What I need above seems like something obvious that many people should have found useful. Since I can't find a library that accomplishes it, I'm wondering: is my approach unreasonable?
More specifically, how do people port their legacy OpenGL code? Do they rewrite the relevant parts from scratch, or does everyone implement their own wrapper, as I suggested?
What about constructing a scene to export to collada?
Posted also:
https://community.khronos.org/t/c-opengl-wrapper-interface-similar-to-fixed-pipeline-can-export-collada/105829
Although there are some parts in legacy OpenGL that are not optimized in current drivers (like glDrawPixels, the raster drawing operations and indexed color mode), between modern hardware and the modest requirements of legacy applications, legacy OpenGL stuff runs well enough on modern systems.
The main reason to "modernize" legacy OpenGL code is if one wants to make use of modern features. Any sort of "wrapper" will just run into the same kind of design problems that the OpenGL API ran into between OpenGL-1.5 and OpenGL-2.1: lots of built-in variables, default state, implicit actions, etc. This is difficult to document properly, and even more difficult to make use of reliably. Which is the reason you usually don't find these kinds of wrappers.
If you find yourself in the situation that you absolutely must port your legacy code to modern OpenGL, e.g. to be interoperable with core contexts, then your best course of action will be a proper rewrite. Replace immediate mode calls with code that fills vertex buffers; replace calls to glTexEnv…, glMaterial…, glLight… with loading appropriate shaders and setting their uniforms.
Or, if you want a quick and dirty method: just create two contexts, a modern one and a legacy one, and switch between them; often you can establish "list" sharing between them.
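A quick sketch of that route using GLFW (GLFW is my assumption here, not part of the answer); the last parameter of glfwCreateWindow requests object sharing with an existing context, and whether sharing succeeds across profiles is driver-dependent:

    #include <GLFW/glfw3.h>

    int main() {
        glfwInit();

        // Legacy-profile context.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
        GLFWwindow* legacy = glfwCreateWindow(640, 480, "legacy", NULL, NULL);

        // Modern core-profile context, sharing objects with the legacy one.
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        GLFWwindow* modern = glfwCreateWindow(640, 480, "modern", NULL, legacy);

        glfwMakeContextCurrent(legacy);   // switch between them as needed
        glfwMakeContextCurrent(modern);

        glfwTerminate();
    }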

Simple 3D scene visualisation data format

My PhD project revolves around simulating the paths of photons through objects with different optical properties. My code has classes which create CCD images etc., but it would be much more useful to be able to create a simple rendering of the 3D objects and the paths the photons take through them.
I've written an OpenGL system for viewing such a scene, but it would be much better if I could use something much more lightweight, where I could simply specify the vertices of an object and then a photon path as a list of connected vertices.
Exporting all the data and then visualising it in another program isn't ideal, as things like mesh transformations need to be taken into account, and I'd rather avoid exporting several new mesh objects just to import them all into another program.
What I essentially need is to be able to create the three-dimensional equivalent of an SVG image. Does such a '3D scene' file format with a simple visualiser exist?
I write in C++ on MacOS, though I'd prefer to avoid using a visualisation library. I appreciate that what I'm asking for is rather niche and picky, but that's why I'm asking the internet, as someone might have come across a similar need for such a tool.

Best way to wrap OpenGL models

In short: what is the "preferred" way to wrap OpenGL's buffers, shaders and/or matrices, as required for a more high-level "model" object?
I am trying to write a tiny graphics engine in C++ built on core OpenGL 3.3, and I would like to implement as clean a solution as possible for wrapping a higher-level "model" object, which would contain its vertex buffer, global position/rotation, textures (and maybe also a shader?) and potentially other information.
I have looked into an open-source engine called GamePlay3D and don't quite agree with many aspects of its solution to this problem. Is there any good resource that discusses this topic for modern OpenGL? Or is there some simple and clean way to do this?
That depends a lot on what you want to be able to do with your engine. Also note that these concepts are the same in DirectX (or any other graphics API), so don't focus your search too much on OpenGL. Here are a few concepts that are very common in a 3D engine (names can differ):
Mesh:
A mesh contains submeshes, each submesh contains a vertex buffer and an index buffer. The idea being that each submesh will use a different material (for example, in the mesh of a character, there could be a submesh for the body and one for the clothes.)
Instance:
An instance (or mesh instance) references a mesh, a list of materials (one for each submesh in the mesh), and contains the "per instance" shader uniforms (world matrix etc.), usually grouped in a uniform buffer.
Material: (This part changes a lot depending on the complexity of the engine). A basic version would contain some textures, some render states (blend state, depth state), a shader program, and some shader uniforms that are common to all instances (for example a color, but that could also be in the instance depending on what you want to do.)
More complex versions usually separate the materials into passes (or sometimes techniques that contain passes) that contain everything in the previous paragraph. You can check the Ogre3D documentation for more info about that, and to take a look at one possible implementation. There's also a very good article called Designing a Data-Driven Renderer in GPU PRO 3 that describes an even more flexible system based on the same idea (but also more complex).
Scene: (I call it a scene here, but it could really be called anything.) It provides the shader parameters and textures from the environment (lighting values, environment maps, that kind of thing).
And I think that's it for the basics. With that in mind, you should be able to find your way around the code of any open-source 3D engine if you want the implementation details.
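To make that concrete, a minimal sketch of how these concepts might map onto C++ types; every name and field here is an illustrative guess, not a fixed API:

    #include <cstdint>
    #include <vector>

    struct SubMesh { uint32_t vertexBuffer, indexBuffer, indexCount; };

    struct Mesh { std::vector<SubMesh> submeshes; };

    struct Material {                            // basic version, no passes
        uint32_t shaderProgram;
        std::vector<uint32_t> textures;          // plus blend/depth state...
    };

    struct Instance {                            // one object placed in the world
        const Mesh* mesh;
        std::vector<const Material*> materials;  // one per submesh
        float world[16];                         // per-instance uniforms
    };

    struct Scene {                               // environment-level parameters
        float ambientColor[3];                   // lighting values, env maps...
        std::vector<Instance> instances;
    };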
This is in addition to Jerem's excellent answer.
At a low level, there is no such thing as a "model", there is only buffer data and the code used to process it. At a high level, the concept of a "model" will differ from application to application. A chess game would have a static mesh for each chess piece, with shared textures and materials, but a first-person shooter could have complicated models with multiple parts, swappable skins, hit boxes, rigging, animations, et cetera.
Case study: chess
For chess, there are six pieces and two colors. Let's over-engineer the graphics engine to show how it could be done if you needed to draw, say, thousands of simultaneous chess games on the same screen instead of just one. Here is how you might do it.
Store all models in one big buffer. This buffer has all of the vertex and index data for all six models clumped together. This means that you never have to switch buffers / VAOs when you're drawing pieces. Also, this buffer never changes, except when the user goes into settings and chooses a different style for the chess pieces.
Create another buffer containing the current location of each piece in the game, the color of each piece, and a reference to the model for that piece. This buffer is updated every frame.
Load the necessary textures. Maybe the normals would be in one texture, and the diffuse map would be an array texture with one layer for white and another for black. The textures are designed so you don't have to change them while you're drawing chess pieces.
To draw all the pieces, you just have to update one buffer, and then call glMultiDrawElementsIndirect()... once per frame, and it draws all of the chess pieces. If that's not available, you can fall back to glDrawElements() or something else.
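For reference, a hedged sketch of that indirect draw (the loader header, buffer name and function name are my assumptions; the command layout is the one OpenGL specifies for glMultiDrawElementsIndirect):

    #include <GL/glew.h>   // assumption: any loader exposing GL 4.3 works

    // Layout required by glMultiDrawElementsIndirect.
    struct DrawElementsIndirectCommand {
        GLuint count;          // index count for this piece model
        GLuint instanceCount;  // how many pieces currently use this model
        GLuint firstIndex;     // model's start in the shared index buffer
        GLuint baseVertex;     // model's start in the shared vertex buffer
        GLuint baseInstance;   // first entry of its per-piece data
    };

    void drawAllPieces(GLuint indirectBuffer) {
        // One command per model (six for chess); per-piece positions and
        // colors come from a separate instanced attribute buffer.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr,  // commands read from bound buffer
                                    6,        // drawcount: one per piece model
                                    0);       // stride 0 = tightly packed
    }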
Analysis
You can see how this kind of design won't work for everything.
What if you have to stream new models into memory, and remove old ones?
What if the models have different size textures?
What if the models are more complex, with animations or forward kinematics?
What about translucent models?
What about hit boxes and physics data?
What about different LODs?
The problem here is that your solution, and even the very concept of what a "model" is, will be very different depending on what your needs are.

What design pattern to mimic vertex/fragment shaders in a software renderer?

I have a software renderer that is designed similarly to the OpenGL 2.0+ rendering pipeline; however, my software renderer is quite static in its functionality. I would like to design it so I can plug in custom vertex and fragment "shaders" (written as C++ "functions", not in the OpenGL shading language), but I'm not sure how to implement a good, reusable, extensible solution.
Basically I want a custom "function" that is then called in my renderer to process every vertex (or fragment). Maybe I could work with a function object passed to the renderer, or work out some inheritance-based solution, or this could be a case for a template-based solution.
I imagine it like this:
for every vertex
    // call the vertex-shading function given by the user, with the standard
    // arguments plus the custom ones given in user code. May produce some custom
    // output that has to be fed into the fragment shader below
end

// do some generic rendering-stuff like clipping etc.

for every triangle
    for every pixel in the triangle
        // call the fragment-shading function given by the user, with the standard
        // arguments, plus the custom ones from the vertex shader and the ones
        // given in user code
    end
end
I can program C++ quite well; however, I don't have much practical experience with templates and the more advanced stuff, though I have read a lot and watched a lot of videos.
There are a few requirements. One of these "shader functions" can have multiple (different) input and output variables. There are 2-3 parameters that are not optional and always the same (for example, the input to a vertex shader is obviously the vertex, and the output is the transformed position), but one shader could, e.g., also require an additional weight parameter or barycentric coordinates as input. Also, it should be possible to feed one of the custom outputs of the vertex shader into a corresponding fragment shader (like in OpenGL, where a variable output by the vertex shader is fed into the fragment shader).
At the same time, I would prefer a simple solution; it shouldn't be too advanced (I don't want to mimic the GLSL compiler, or have my own DSL). It should just be something like: write VertexShaderA and VertexShaderB and be able to plug either into my renderer, along with some parameters depending on the shader.
I would like the solution to use "modern" C++, as in basically everything that compiles with VS2013 and gcc 4.8.
So to rephrase my main question:
How can I accomplish this "passing of custom functions to my renderer", with the additional functionality mentioned?
If possible, I would welcome a bit of example code to help get me started.
TinyRenderer is a very simple but rather elegant implementation of around 500 lines, and it has a wiki with a tutorial. See https://github.com/ssloy/tinyrenderer and the actual shaders in https://github.com/ssloy/tinyrenderer/blob/master/main.cpp
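TinyRenderer routes its shaders through a small virtual interface. As an alternative, here is a minimal template-based sketch (all types here are made up) of how plugging in a shader pair could look; a user-defined Varyings struct carries the custom vertex-to-fragment outputs asked for above:

    struct Vec3 { float x, y, z; };
    struct Vec4 { float x, y, z, w; };

    // The renderer is parameterized on a Shader type that provides both
    // stages plus a Varyings struct for custom vertex->fragment outputs.
    template <typename Shader>
    void drawTriangle(const Vec3 verts[3], Shader& shader) {
        typename Shader::Varyings va[3];
        Vec4 clip[3];
        for (int i = 0; i < 3; ++i)
            clip[i] = shader.vertex(verts[i], va[i]);  // user vertex stage
        // ... clip, project, rasterize; for every covered pixel interpolate
        // the three Varyings and call shader.fragment(interpolated) ...
    }

    // Example shader: forwards a per-vertex weight to the fragment stage.
    struct WeightShader {
        struct Varyings { float weight; };
        Vec4 vertex(const Vec3& p, Varyings& out) {
            out.weight = 1.0f;                         // custom output
            return Vec4{p.x, p.y, p.z, 1.0f};          // mandatory position
        }
        Vec3 fragment(const Varyings& in) {            // returns a color
            return Vec3{in.weight, 0.0f, 0.0f};
        }
    };

Calling drawTriangle(tri, myWeightShader) lets the compiler inline both stages; the trade-off versus the virtual-interface approach is that each shader type produces a separate template instantiation.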

What has happened with OpenGL? What kind of nightmare is it now?

I used OpenGL 2 years ago. In one afternoon I read a tutorial, drew a cube (and then learned how to load any 3D model) and learned how to move the camera around with the mouse. It was easy, less than 100 lines of code. I didn't get the pipeline completely, but I was able to do something.
Now I need to refresh OpenGL for some basic stuff: basically, I need to load a 3D model (any model) and move it around, with the camera fixed. Something I thought would take another afternoon.
I have spent a day and have nothing working. I am reading the recommended tutorial http://www.arcsynthesis.org/gltut/ and I don't get anything; now, to draw just a cube, you need a lot of lines, working with lots of buffers, and some special syntax for shaders... what the hell, I only want to draw a cube. Before, it was just defining 6 sides.
What is going on with OpenGL? Some would argue that it is now great; I think it is screwed.
Is there any easy library to work with? Something that would make my life easier?
GLUT - http://www.opengl.org/resources/libraries/glut/
ASSIMP - http://assimp.sourceforge.net/
These two libraries are all you need to make a simple application where you import a model (in various formats). Read their documentation and examples to get a better understanding of how you can "glue" OpenGL and ASSIMP together.
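As a rough sketch of the ASSIMP side (the file name and postprocess flags are just examples):

    #include <assimp/Importer.hpp>
    #include <assimp/postprocess.h>
    #include <assimp/scene.h>

    // Load a model file and walk its meshes; positions, normals and faces
    // can then be fed to OpenGL vertex arrays or buffers.
    void loadModel(const char* path) {
        Assimp::Importer importer;
        const aiScene* scene = importer.ReadFile(path,
                aiProcess_Triangulate | aiProcess_GenSmoothNormals);
        if (!scene) return;                  // see importer.GetErrorString()
        for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
            const aiMesh* mesh = scene->mMeshes[m];
            // mesh->mVertices[i] and mesh->mNormals[i] for i < mesh->mNumVertices,
            // mesh->mFaces[f].mIndices for the (triangulated) index data.
        }
    }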
As to whether OpenGL has become harder to comprehend: no. What I've learned in recent years from OpenGL is that graphics programming is never simple or done in a few lines of code; you have to be organised and careful, and even a simple primitive (e.g. a cube) needs more than 100 lines of code to make it decent and flexible (for example, if you want more subdivisions on your polygons, or texturing).
If you learned it only two years ago, then the tutorials you used were extremely outdated. Immediate mode has been known to be deprecated for a very, very long time; actually, the first plans to abandon it and display lists date back to 2003.
Vertex arrays have been around since version 1.1, and they have been the preferred method for sending geometry to OpenGL ever since; in immediate mode, every vertex causes several function calls, so for any seriously complex object you spend more time managing the function call stack than doing actual rendering work. If you have used vertex arrays consistently since their introduction, switching over to vertex buffer objects is as simple as inserting or replacing a few lines, as the sketch below shows.
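To illustrate that claim, a hedged sketch (the vertex data and counts are assumed to come from elsewhere); the glVertexPointer call survives, only its last argument changes from a client pointer to an offset into the bound buffer:

    // Before: client-side vertex array, OpenGL 1.1 style.
    void drawWithClientArray(const GLfloat* vertices, GLsizei vertexCount) {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);      // CPU-side memory
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }

    // After: the same drawing through a VBO. Insert a bind + upload,
    // and the pointer argument becomes an offset into the buffer.
    void drawWithVBO(const GLfloat* vertices, GLsizeiptr bytes, GLsizei vertexCount) {
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bytes, vertices, GL_STATIC_DRAW);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (void*)0);      // offset into the VBO
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    }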
The biggest hurdle to using OpenGL-3 is on Windows, where one has to use a proxy context to get access to the extension functions required to select OpenGL-3 capabilities for context creation. Again, though, no big hurdle: 20 lines of code, tops. And some programs, mine for example, create a proxy GL context anyway, to which all shareable data is uploaded; this allows visible contexts to be quickly destroyed and recreated while keeping full access to textures, VBOs and the like. (You can share VBOs, which is another reason for using them instead of plain vertex arrays. This might not look like much, at least not if the context is used from a single process; however, on platforms like X11/GLX, OpenGL contexts can be shared between X11 clients, which may even run on different machines!)
Also, the existence of functions like the matrix manipulation stack led people to the misconception that OpenGL was some kind of matrix math library; some even believed it was a particularly fast one. Neither is true. The removal of the matrix manipulation functions was a very important and right thing to do. Every serious OpenGL application will implement its very own matrix math anyway. For example, any modern game using some kind of physics engine used to feed the transform matrix spat out by the physics calculation directly to OpenGL (glLoadMatrix, or glUniformMatrix; both upload paths are sketched below), completely bypassing the rest of the matrix functions. This also means that the sole reason to have multiple matrix stacks (GL_PROJECTION, GL_MODELVIEW, GL_TEXTURE, GL_COLOR), namely being able to use the same set of manipulation functions on several matrices, was obsolete and could have been replaced by something like glLoadMatrixSelected{f,d}v(GLenum target, GLfloat *matrix). However, uniforms and shaders were already around, so the logical step was not to introduce a new function, but to reuse the existing API that had already been used for this task anyway, and instead remove what was no longer needed.
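A sketch of those two upload paths; the matrix and uniform names are assumptions, and the matrix is a column-major float[16] in both cases:

    // Fixed-function path: hand the physics engine's matrix straight to GL.
    void uploadFixedFunction(const GLfloat* physicsMatrix) {
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(physicsMatrix);
    }

    // Shader path: the same matrix goes into a uniform instead.
    void uploadToShader(GLuint program, const GLfloat* physicsMatrix) {
        GLint loc = glGetUniformLocation(program, "u_modelview"); // assumed name
        glUniformMatrix4fv(loc, 1, GL_FALSE, physicsMatrix);
    }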
TL;DR: The new OpenGL-3 API greatly simplifies using it. It's a lot clearer, has fewer pitfalls, and IMHO is also more newbie-friendly.
You don't have to use buffer objects. You can use the deprecated immediate mode. It will be slower, but if you don't really care then go ahead and use OpenGL the way you used to. NeHe has some excellent tutorials on OpenGL 1.x stuff.
Swiftless has some good tutorials (only a few very basic ones) on OpenGL 3.x and 4.x, but the learning curve is, as you've found, very steep.
Does it have to be OpenGL? XNA offers the ability to draw 3D models without breaking your back. Could be worth a look.