Store static environment in chunks [closed] - c++

I have terrain separated into chunks, and I would like to place environment objects (for example rocks, trees, etc.) randomly in each chunk.
My question is about how to implement such a system in OpenGL.
What I have tried:
Solution: Draw the environment with instancing once for all the terrain (not a specific chunk)
Problem: I expect a chunk to sometimes take a while to load, and because I am loading chunks on threads, the environment objects would show up floating in mid-air above terrain that has not appeared yet.
Solution: Draw the environment with instancing for each chunk.
Problem: To draw each chunk, I will need to bind the VBO for the chunk, draw the chunk, then bind the VBO (and probably the VAO) for the environment and draw that.
I don't want to issue so many glBindBuffer calls, because I have heard it is slow (please correct me if I am wrong).
(Not tried) Solution: Somehow merge the vertices of the terrain with its environment and draw them together.
Problem: My terrain is drawn with GL_TRIANGLE_STRIP, which is a first obstacle to merging; the second problem is that I don't know how well it would perform.
I tried looking up solutions on the internet but couldn't find any that deal with chunks.
Does anyone know how other games that use chunks do this? Is there a way to do it without a big performance cost?
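For reference, here is roughly what I imagine solution 2 would look like per frame (a sketch only; the Chunk struct and all member names are placeholders for my actual code):

```cpp
// Sketch of "solution 2": per-chunk terrain draw followed by an instanced
// draw of that chunk's props. All names (Chunk, members) are placeholders.
#include <glad/glad.h>
#include <vector>

struct Chunk {
    GLuint terrainVao = 0;          // terrain vertices (GL_TRIANGLE_STRIP)
    GLsizei terrainVertexCount = 0;
    GLuint propVao = 0;             // rock/tree mesh + per-instance model matrices
    GLsizei propIndexCount = 0;
    GLsizei propInstanceCount = 0;
};

void drawChunks(const std::vector<Chunk>& chunks,
                GLuint terrainProgram, GLuint propProgram)
{
    // Pass 1: all terrain, so the shader program is switched only once.
    glUseProgram(terrainProgram);
    for (const Chunk& c : chunks) {
        glBindVertexArray(c.terrainVao);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, c.terrainVertexCount);
    }

    // Pass 2: one instanced draw of the props per chunk.
    glUseProgram(propProgram);
    for (const Chunk& c : chunks) {
        if (c.propInstanceCount == 0) continue;
        glBindVertexArray(c.propVao);
        glDrawElementsInstanced(GL_TRIANGLES, c.propIndexCount,
                                GL_UNSIGNED_INT, nullptr,
                                c.propInstanceCount);
    }
}
```

From what I have read, one VAO bind per draw call is fairly normal, but I am not sure whether doing it per chunk like this scales.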

Related

How to render text in OpenGL menu-system smartly? [closed]

I want to make a simple windowing system in an OpenGL app, rendering menus with text boxes, buttons, check boxes, etc. How do I render this smartly?
So far I have 2 ideas:
In each frame I render every character of the menu to the screen.
I keep the menu/window in a texture, and render only this texture each frame. (and only update the parts of the texture that have changed.)
What are the downsides of each?
Start with the first bullet, then maybe implement the second bullet later as an optimization. The second bullet is sometimes known as "framebuffer caching". Note that Dear ImGui (a very popular GUI library that can use OpenGL for rendering) does not bother with framebuffer caching.
If you decide to implement framebuffer caching, the work you did in the beginning will not be wasted, since you will use it to update the cache.
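If you do implement it later, the core of framebuffer caching is just an FBO that renders the menu into a texture, plus a dirty flag. A minimal sketch, assuming you already have a drawMenuContents() (your per-character rendering from the first bullet) and a drawTexturedQuad() helper; both names are placeholders:

```cpp
// Sketch of "framebuffer caching": render the menu once into a texture,
// then just draw that texture each frame until the menu changes.
// Error checking (framebuffer completeness, etc.) is omitted.
#include <glad/glad.h>

void drawMenuContents();          // assumed: your per-character rendering
void drawTexturedQuad(GLuint);    // assumed: draws a full-screen/menu quad

GLuint menuFbo = 0, menuTex = 0;
bool menuDirty = true;            // set to true whenever a widget changes

void createMenuCache(int w, int h) {
    glGenTextures(1, &menuTex);
    glBindTexture(GL_TEXTURE_2D, menuTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &menuFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, menuFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, menuTex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void drawMenu(int w, int h) {
    if (menuDirty) {                       // re-render only when needed
        glBindFramebuffer(GL_FRAMEBUFFER, menuFbo);
        glViewport(0, 0, w, h);
        glClear(GL_COLOR_BUFFER_BIT);
        drawMenuContents();                // the expensive per-character path
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        menuDirty = false;
    }
    drawTexturedQuad(menuTex);             // cheap: one quad per frame
}
```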

Vulkan advanced render design [closed]

I've finally managed to make it through the basic concepts of Vulkan and have some knowledge of primary/secondary command buffers, render passes, etc. Now I am wondering how one would design a more advanced render engine (not like the simple ones from the samples).
Let me ask it more specifically:
If I wanted to render a small village (a few houses) containing some villagers, who are animated using skeletal animation, what would the design approach look like? Would you create two Renderer classes, one for the villagers and one for the houses, each generating a secondary command buffer for each object? If so, how would one make proper use of multithreading? And if, let's say, every villager has its own pipeline (different textures, bone layouts, etc.), how would one manage those?
Edit: I'm not searching for "the render engine"; I just want to know how one could make effective use of multithreading when rendering a scene with different "types" of models (NPCs, houses, terrain).
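To make the multithreading part of the question concrete, the pattern I have in mind is roughly the following (heavily abbreviated sketch; all names are placeholders, error handling is omitted, and each worker thread owns its own VkCommandPool since pools are not thread-safe):

```cpp
// Abbreviated sketch: a worker thread records a secondary command buffer
// for "its" objects; the main thread later executes all of them inside
// one render pass.
#include <vulkan/vulkan.h>

VkCommandBuffer recordVillagerBuffer(VkDevice device, VkCommandPool threadPool,
                                     VkRenderPass renderPass,
                                     VkFramebuffer framebuffer,
                                     VkPipeline villagerPipeline)
{
    VkCommandBufferAllocateInfo alloc{};
    alloc.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    alloc.commandPool = threadPool;
    alloc.level = VK_COMMAND_BUFFER_LEVEL_SECONDARY;
    alloc.commandBufferCount = 1;
    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(device, &alloc, &cmd);

    // Secondary buffers recorded inside a render pass must declare which
    // pass/framebuffer they will be executed in.
    VkCommandBufferInheritanceInfo inherit{};
    inherit.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
    inherit.renderPass = renderPass;
    inherit.framebuffer = framebuffer;

    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
    begin.pInheritanceInfo = &inherit;

    vkBeginCommandBuffer(cmd, &begin);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, villagerPipeline);
    // ... bind descriptor sets / vertex buffers, vkCmdDrawIndexed, ...
    vkEndCommandBuffer(cmd);
    return cmd;
}

// Main thread: begin the render pass with
// VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS, collect the buffers the
// workers produced, then:
//   vkCmdExecuteCommands(primaryCmd, count, secondaryCmds);
```

Is this the intended usage, or do engines structure it differently?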

How to handle backbuffer in Direct3D11? [closed]

Recently I have been working through the book Tricks of the 3D Game Programming Gurus. It uses DirectDraw to implement a software rendering engine, but DirectDraw is too old, so I want to use Direct3D 11 to do the same thing. I got the texture of the main backbuffer and tried to update it, but it didn't work. What should I do?
You don't have direct access to the true frontbuffer/backbuffer even with DirectDraw on modern platforms.
If you want to do all your rendering into a block of CPU memory without using the GPU, then your best bet for fast presentation is to use a Direct3D 11 texture with D3D11_USAGE_DYNAMIC, and then do a simple full-screen quad render of that texture onto the presentation backbuffer. For that step, you can look at DirectX Tool Kit and the SpriteBatch class.
That said, performance-wise this is likely to be pretty poor, because you are doing everything on the CPU and the GPU is basically doing nothing 99% of the time.
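For concreteness, the per-frame update might look roughly like this (a sketch only; it assumes your software renderer outputs 32-bit pixels in a layout matching the texture's DXGI format, and that the texture matches the window size):

```cpp
// Sketch: copy a CPU-rendered framebuffer into a D3D11_USAGE_DYNAMIC
// texture each frame, then draw it with DirectX Tool Kit's SpriteBatch.
#include <d3d11.h>
#include <SpriteBatch.h>   // DirectX Tool Kit
#include <cstring>
#include <cstdint>

void presentSoftwareFrame(ID3D11DeviceContext* context,
                          ID3D11Texture2D* dynamicTex,
                          ID3D11ShaderResourceView* dynamicSrv,
                          DirectX::SpriteBatch* spriteBatch,
                          const uint32_t* cpuPixels,
                          UINT width, UINT height)
{
    // Map with WRITE_DISCARD so the driver can hand back fresh memory
    // instead of stalling on the GPU's previous use of the texture.
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(dynamicTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    for (UINT y = 0; y < height; ++y) {
        // Copy row by row, respecting the driver-chosen row pitch.
        std::memcpy(static_cast<uint8_t*>(mapped.pData) + y * mapped.RowPitch,
                    cpuPixels + y * width,
                    width * sizeof(uint32_t));
    }
    context->Unmap(dynamicTex, 0);

    // Draw the texture over the presentation backbuffer.
    spriteBatch->Begin();
    spriteBatch->Draw(dynamicSrv, DirectX::XMFLOAT2(0.f, 0.f));
    spriteBatch->End();
}
```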

Converting text to mesh [closed]

I need to convert text (string + font) into a mesh (vertices, indices, triangles, etc.), but I don't need to draw anything. I'll just get a string from one API and push it as vertices and indices to another. What's the simplest/easiest/best way of doing this? Font metrics and text placement are of course available, and no other transforms are needed.
I'm currently working with VC++. However, any kind of open-source (C/C++, C#, VB, ...) or "non-open but free" COM/.NET library would be great.
I've heard of FreeType. Does it answer my prayers or is there something even better?
EDIT: As Nico Schertler commented, there seems to be a Mesh.TextFromFont function in the DirectX libraries that probably does the trick. Thank you Nico! I'll update when I have had time to test this in practice.
Mesh.TextFromFont sounded good, but it didn't save the day, since I couldn't figure out how to get the actual point/triangle data out of the mesh object.
But then I found this. In that project, GraphicsPath is used to create a point path from a glyph. The points are then converted into polygons, and the polygons are tessellated into triangles using Poly2Tri.
After a quick browse through the source code, some small modifications, and some code stripping, I ended up with a nice .NET DLL with one simple static function that does everything I need.
To convert text into a mesh you can use the ttf2mesh library. The library consists of just one C file and lets you open a TrueType font (.ttf) and convert its glyphs to mesh objects in 2D or 3D space. There are examples in the repository.
An interesting feature is the lack of dependency on any third-party library (such as libfreetype). The examples also include a ttf2obj program that converts a font file to an OBJ file.
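Regarding the FreeType option mentioned in the question: FreeType does not triangulate for you, but it will hand you each glyph's vector outline, which you can then tessellate yourself (e.g. with Poly2Tri, as in the GraphicsPath approach above). A minimal sketch of extracting the outline contours (error handling omitted; the Bezier callbacks here are simplified placeholders that would need real subdivision):

```cpp
// Sketch: use FreeType to get a glyph's vector outline; the resulting
// contours are what you would feed to a tessellator (e.g. Poly2Tri).
#include <ft2build.h>
#include FT_FREETYPE_H
#include FT_OUTLINE_H
#include <vector>

struct Point2 { float x, y; };
using Contours = std::vector<std::vector<Point2>>;

static int moveTo(const FT_Vector* to, void* user) {
    auto* c = static_cast<Contours*>(user);
    c->emplace_back();                              // start a new contour
    c->back().push_back({to->x / 64.f, to->y / 64.f});  // 26.6 fixed point
    return 0;
}
static int lineTo(const FT_Vector* to, void* user) {
    auto* c = static_cast<Contours*>(user);
    c->back().push_back({to->x / 64.f, to->y / 64.f});
    return 0;
}
// Real code must subdivide conic/cubic Bezier segments; for brevity these
// callbacks just emit the segment end point.
static int conicTo(const FT_Vector*, const FT_Vector* to, void* user) {
    return lineTo(to, user);
}
static int cubicTo(const FT_Vector*, const FT_Vector*, const FT_Vector* to,
                   void* user) {
    return lineTo(to, user);
}

Contours glyphOutline(FT_Face face, FT_ULong ch) {
    FT_Load_Char(face, ch, FT_LOAD_NO_BITMAP);      // keep the vector outline
    Contours contours;
    FT_Outline_Funcs funcs{moveTo, lineTo, conicTo, cubicTo, 0, 0};
    FT_Outline_Decompose(&face->glyph->outline, &funcs, &contours);
    return contours;
}
```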

Modeling in OpenGL [closed]

I'm trying to model a seashell using a bunch of polygons, for example as shown in: link text
But I am new to OpenGL. How would I get started modeling this, and what would the procedure be like?
I am assuming you are going to generate the geometry procedurally instead of "sculpting" it.
What you need to do is generate your geometry just like in the mathematics example and store it in vertex buffer objects (VBOs). There are multiple ways of doing this, but generally you will want to store your vertex information (position, normal, texture coordinates if any) in one buffer, and the way these vertices are grouped into faces in another (called an index array).
You can then bind these buffers and draw them with a single call to glDrawElements().
Be careful that the vertices in the faces are all in the same winding order (counter-clockwise or clockwise) and that the winding order is specified correctly to OpenGL, or you will get your shell inside out!
VBOs are supported in OpenGL 1.4 and up. In the extremely unlikely event that your target platform does not support them (update your drivers first!), you can use vertex arrays. They do pretty much the same thing, but are slower, since the data gets sent over the bus every frame.
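To make this concrete, here is a compressed sketch that samples a parametric surface on a grid, uploads it into a VBO and an index buffer, and sets it up for a single glDrawElements() call. The surface function is a placeholder spiral, not real seashell equations; substitute the formula from your link:

```cpp
// Sketch: evaluate a parametric surface on an (N x M) grid, upload the
// positions to a VBO and the triangle indices to an index buffer.
#include <glad/glad.h>
#include <cmath>
#include <vector>

const int N = 128, M = 64;          // samples along and around the spiral

void shellPoint(float u, float v, float out[3]) {
    float r = 0.2f + 0.15f * u;     // placeholder: radius grows along u
    out[0] = r * std::cos(6.0f * u) * (1.0f + 0.3f * std::cos(v));
    out[1] = r * std::sin(6.0f * u) * (1.0f + 0.3f * std::cos(v));
    out[2] = 0.4f * u + 0.3f * r * std::sin(v);
}

GLuint buildShell(GLsizei* indexCount) {
    std::vector<float> verts;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < M; ++j) {
            float p[3];
            shellPoint(i / float(N - 1) * 6.28318f,
                       j / float(M - 1) * 6.28318f, p);
            verts.insert(verts.end(), p, p + 3);
        }

    std::vector<unsigned> idx;      // two triangles per grid cell,
    for (int i = 0; i + 1 < N; ++i) // consistent winding throughout
        for (int j = 0; j + 1 < M; ++j) {
            unsigned a = i * M + j, b = a + M;
            idx.insert(idx.end(), {a, b, a + 1,  a + 1, b, b + 1});
        }
    *indexCount = (GLsizei)idx.size();

    GLuint vao, vbo, ibo;
    glGenVertexArrays(1, &vao); glBindVertexArray(vao);
    glGenBuffers(1, &vbo); glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float),
                 verts.data(), GL_STATIC_DRAW);
    glGenBuffers(1, &ibo); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx.size() * sizeof(unsigned),
                 idx.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    return vao;  // later: glBindVertexArray(vao); glDrawElements(...);
}
```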
While modelling objects procedurally (i.e. generating coordinates as numbers in the code) may be OK for learning purposes, it's definitely not what you want to do in general, as it gets very impractical for anything more complicated than a few triangles or a cylinder. Some people consider procedural generation an art, but you need a lot of practice to achieve nice-looking (not to mention realistic) results with that approach.
If you want to display a more complex, realistic model, the approach is to:
create the model in a modelling tool (like the free and powerful Blender)
save it to a file in a given format,
in your program, load the object from the file into memory (either into RAM, to display using vertex arrays, or directly into GPU memory using a vertex buffer object) and display it.
A common format (though an old and inconvenient one) is .obj (Wavefront OBJ); Blender can save to it, and you are likely to find an OpenGL OBJ loader by googling (or you can roll your own: not trivial, but still easy).
An alternative is to create an export script for Blender (very easy if you know Python) and save the model as a simple binary file containing vertices, etc.; then you can load it in your application code very easily.
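If you do roll your own OBJ loader, the minimal core is small. This sketch reads only "v" position lines and triangular "f" faces, ignoring normals, texture coordinates, negative indices, and the format's many other features:

```cpp
// Deliberately minimal OBJ loader sketch: reads only "v x y z" lines and
// triangular "f a b c" faces (1-based indices; any "/..." suffix after an
// index is ignored). A production loader needs far more care.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct ObjMesh {
    std::vector<float> positions;   // x, y, z per vertex
    std::vector<unsigned> indices;  // 3 per triangle
};

bool loadObj(const std::string& path, ObjMesh& out) {
    std::ifstream file(path);
    if (!file) return false;
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ss(line);
        std::string tag;
        ss >> tag;
        if (tag == "v") {
            float x, y, z;
            ss >> x >> y >> z;
            out.positions.insert(out.positions.end(), {x, y, z});
        } else if (tag == "f") {
            std::string vert;
            for (int i = 0; i < 3 && ss >> vert; ++i) {
                // stoul parses the leading digits, stopping at any '/';
                // OBJ indices are 1-based, so convert to 0-based.
                out.indices.push_back(
                    static_cast<unsigned>(std::stoul(vert)) - 1);
            }
        }
    }
    return true;
}
```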