Modeling in OpenGL [closed] - opengl

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 6 years ago.
I'm trying to model a seashell using a bunch of polygons, for example as shown in: link text
But I am new to OpenGL. How would I get started modeling this, and what would the procedure be like?

I am assuming you are going to generate the geometry procedurally instead of "sculpting it".
What you need to do is generate your geometry just like in the mathematics example and store it in vertex buffer objects (VBOs). There are multiple ways of doing this, but generally you will want to store your vertex information (position, normal, texture coordinates if any) in one buffer, and the way these vertices are grouped into faces in another (called an index buffer).
You can then bind these buffers and draw them with a single call to glDrawElements().
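To make the "generate your geometry" step concrete, here is a minimal sketch of a procedural seashell: a tube whose radius grows as its centre line spirals upward. The surface function and all the constants (turn count, radii, pitch) are made-up illustration values, not the parameterization from the linked example.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Interleaved vertex layout as it would be uploaded into a VBO.
struct Vertex { float px, py, pz, nx, ny, nz; };

// Build a seashell as a grid over (u, v): u runs along the spiral,
// v runs around the tube cross-section.
void buildSeashell(int uSteps, int vSteps,
                   std::vector<Vertex>& vertices,
                   std::vector<std::uint32_t>& indices)
{
    const float pi = 3.14159265358979f;
    for (int i = 0; i <= uSteps; ++i) {
        float u = 3.0f * 2.0f * pi * i / uSteps;   // three turns of the spiral
        float coilR = 0.5f + 0.12f * u;            // spiral widens as it goes
        float tubeR = 0.10f + 0.05f * u;           // tube thickens as it goes
        float cx = coilR * std::cos(u);            // centre of the tube at u
        float cy = coilR * std::sin(u);
        float cz = 0.15f * u;                      // rise along the axis
        for (int j = 0; j <= vSteps; ++j) {
            float v = 2.0f * pi * j / vSteps;
            // Offset from the tube centre: radially outward and along z.
            float ox = std::cos(v) * std::cos(u);
            float oy = std::cos(v) * std::sin(u);
            float oz = std::sin(v);
            // The offset is already unit length, so it doubles as the normal.
            vertices.push_back({ cx + tubeR * ox, cy + tubeR * oy, cz + tubeR * oz,
                                 ox, oy, oz });
        }
    }
    // Two triangles per grid quad, referencing the shared vertices.
    for (int i = 0; i < uSteps; ++i) {
        for (int j = 0; j < vSteps; ++j) {
            std::uint32_t a = i * (vSteps + 1) + j;
            std::uint32_t b = a + vSteps + 1;
            indices.insert(indices.end(), { a, b, a + 1, a + 1, b, b + 1 });
        }
    }
}
```

These two arrays are exactly what you would hand to glBufferData (one call for the vertex buffer, one for the index buffer) and then render with a single glDrawElements(GL_TRIANGLES, ...) call.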
Be careful that the vertices in the faces are all in the same winding order (counter-clockwise or clockwise) and that the winding order is specified correctly to OpenGL, or you will get your shell inside out!
VBOs are core in OpenGL 1.5 and up. In the extremely unlikely event that your target platform does not support that (update your drivers first!) you can use vertex arrays. They do pretty much the same thing, but are slower because the data gets sent over the bus every frame.

While modelling objects procedurally (i.e. generating coordinates as numbers in the code) may be OK for learning purposes, it's usually not what you want to do, as it gets very impractical for anything more complicated than a few triangles or a cylinder. Some people consider procedural generation an art, but you need a lot of practice to achieve nice-looking (not to mention realistic) results with that approach.
If you want to display a more complex, realistic model, the approach is to:
create the model in a modelling tool (like the free and powerful Blender)
save it to a file in a given format,
in your program, load the object from the file to memory (either to your RAM to display using Vertex Arrays or to your GPU memory directly using a Vertex Buffer Object) and display it.
A common format (though an old and inconvenient one) is .obj (Wavefront OBJ). Blender is able to save to that, and you are likely to find an OpenGL OBJ loader by googling (or you can roll your own - not trivial, but still easy).
An alternative is to create an export script for Blender (very easy if you know Python) and save the model as a simple binary file containing vertices, etc.; then load it in your application code very easily.
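The loading side of that "simple binary file" idea can be sketched as follows. The layout here is invented for illustration (a vertex count followed by raw xyz floats); a Blender export script would write the same bytes with Python's struct.pack.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Made-up minimal binary layout: a uint32 vertex count, then
// count * 3 floats (x, y, z). Extend with normals/UVs as needed.
bool saveModel(const char* path, const std::vector<float>& positions) {
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::uint32_t count = static_cast<std::uint32_t>(positions.size() / 3);
    std::fwrite(&count, sizeof(count), 1, f);
    std::fwrite(positions.data(), sizeof(float), positions.size(), f);
    std::fclose(f);
    return true;
}

bool loadModel(const char* path, std::vector<float>& positions) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    std::uint32_t count = 0;
    if (std::fread(&count, sizeof(count), 1, f) != 1) { std::fclose(f); return false; }
    positions.resize(count * 3u);
    bool ok = std::fread(positions.data(), sizeof(float),
                         positions.size(), f) == positions.size();
    std::fclose(f);
    return ok;
}
```

The loaded float array can then go straight into a VBO with glBufferData. Note this layout assumes the writer and reader agree on endianness and float size.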

Related

Creating a GUI in OpenGL, is it possible? [closed]

Closed 1 year ago.
I'm trying to create a custom GUI in OpenGL from scratch in C++, but I was wondering whether that is possible or not?
I'm getting started on some code right now, but I'm going to stop until I get an answer.
YES.
If you look at a video game, in general every UI is implemented with an API like OpenGL, Direct3D, Metal or Vulkan. The rendering surface runs at a higher frame rate than the OS UI APIs, and mixing the two would slow the game down.
Start by making a view class as a base class, then implement the actual UI classes such as button, table and so on, inheriting from that base class.
Making UIs with a graphics API is similar to making a game in that it uses the same graphics techniques, such as texture compression, mipmaps, MSAA and various special effects. However, font handling is a huge part of the work on its own, which is why many game developers use a game engine or UI libraries.
https://www.twitch.tv/heroseh
Works on a pure C + OpenGL user interface library daily at about 9 AM (EST).
Here is their github repo for the project:
https://github.com/heroseh/vui
I myself am in the middle of stubbing in a half-assed user interface that
is just a list of clickable buttons. ( www.twitch.com/kanjicoder )
The basic idea I ran with is that both the GPU and CPU need to know about your
data. So I store all the required variables for my UI in a texture and then
sync that texture with the GPU every time it changes.
On the CPU side it's a uint8 array of bytes.
On the GPU side it's an unsigned 32-bit texture.
I have getters and setters in both the GPU (GLSL) and CPU (C99) code that manage the packing and unpacking of variables into and out of the pixels of the texture.
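A CPU-side sketch of that packing scheme, with illustrative names and a little-endian byte order that the matching GLSL unpacking would have to agree on (none of this is the streamer's actual code):

```cpp
#include <cstdint>
#include <vector>

// UI state stored as a 512x512 RGBA8 "texture": on the CPU it is just a
// byte array, and each 32-bit UI variable occupies the 4 channels of one pixel.
struct UiStateTexture {
    static const int width  = 512;
    static const int height = 512;
    std::vector<std::uint8_t> bytes;

    UiStateTexture() : bytes(width * height * 4, 0) {}

    // Pack a 32-bit value into the RGBA channels of pixel (x, y).
    void setU32(int x, int y, std::uint32_t value) {
        std::size_t i = (static_cast<std::size_t>(y) * width + x) * 4;
        bytes[i + 0] = static_cast<std::uint8_t>(value & 0xFF);
        bytes[i + 1] = static_cast<std::uint8_t>((value >> 8) & 0xFF);
        bytes[i + 2] = static_cast<std::uint8_t>((value >> 16) & 0xFF);
        bytes[i + 3] = static_cast<std::uint8_t>((value >> 24) & 0xFF);
    }

    // Unpack it again; the GLSL getter would do the mirror image of this.
    std::uint32_t getU32(int x, int y) const {
        std::size_t i = (static_cast<std::size_t>(y) * width + x) * 4;
        return  static_cast<std::uint32_t>(bytes[i + 0])
             | (static_cast<std::uint32_t>(bytes[i + 1]) << 8)
             | (static_cast<std::uint32_t>(bytes[i + 2]) << 16)
             | (static_cast<std::uint32_t>(bytes[i + 3]) << 24);
    }
};
```

After changing the bytes you would re-upload the dirty region with glTexSubImage2D so the shader side sees the new state.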
It's a bit crazy. But I wanted the "lowest-common-denominator" method of creating
a UI so I can easily port this to any graphics library of my choice in the future.
For example... Eventually I might want to switch from OpenGL to Vulkan. So if I keep most of my logic as just manipulations of a big 512x512 array of pixels, I shouldn't have too much refactoring work ahead of me.

Store static environment in chunks [closed]

Closed 5 years ago.
I have terrain separated by chunks and I would like to put environment (For example, rocks, trees, etc..) in each chunk randomly.
My question is related to how to implement such system in OpenGL.
What I have tried:
Solution: Draw the environment with instancing once for all the terrain (not a specific chunk)
Problem: I expect a chunk to sometimes take a moment to load, and because I am using threads the environment would appear to float (over terrain that hasn't loaded yet).
Solution: Draw the environment with instancing for each chunk.
Problem: To draw each chunk, I will need to bind the VBO for the chunk, draw the chunk, bind the VBO for the environment (and the VAO probably) and draw it.
I don't want to call glBindBuffer so many times because I heard it is slow (please correct me if I am wrong).
(Not tried) Solution: Somehow merge the vertices of the terrain with its environment and draw them together.
Problem: My terrain is drawn with GL_TRIANGLE_STRIP, so that is a first problem; the second problem(?) is that I don't know how well it would perform (speed-wise).
I tried looking up solutions on the internet but didn't seem to find any that relate to chunks.
Does anyone know how other games that use chunks do this? Is there a way to do it without a big speed decrease?
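Whichever draw strategy is used, the "randomly, but stable per chunk" part is usually solved by seeding a small RNG from the chunk coordinates, so a chunk always produces the same placements no matter when its loading thread finishes. The hash constants and counts below are arbitrary illustration choices:

```cpp
#include <cstdint>
#include <vector>

struct Placement { float x, z; };

// Hash the chunk coordinates into a seed so the same chunk always
// gets the same rocks/trees.
std::uint32_t chunkSeed(int cx, int cz) {
    std::uint32_t h = static_cast<std::uint32_t>(cx) * 73856093u
                    ^ static_cast<std::uint32_t>(cz) * 19349663u;
    h ^= h >> 16; h *= 0x45d9f3bu; h ^= h >> 16;
    return h;
}

// Scatter `count` objects uniformly inside chunk (cx, cz), deterministically.
std::vector<Placement> scatterChunk(int cx, int cz, int count, float chunkSize) {
    std::uint32_t s = chunkSeed(cx, cz);
    std::vector<Placement> out;
    for (int i = 0; i < count; ++i) {
        s = s * 1664525u + 1013904223u;        // LCG step
        float fx = (s >> 8) / 16777216.0f;     // top 24 bits -> [0, 1)
        s = s * 1664525u + 1013904223u;
        float fz = (s >> 8) / 16777216.0f;
        out.push_back({ cx * chunkSize + fx * chunkSize,
                        cz * chunkSize + fz * chunkSize });
    }
    return out;
}
```

These positions would feed one instanced draw per chunk; a couple of glBindVertexArray/glBindBuffer calls per chunk is normal and is rarely the bottleneck.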

is there a good tutorial on terrain editor? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
I am new to 3D Game programming, now studying a lot on DirectX and OpenGL, on Windows.
But I came up with the idea of making a terrain editor, and I cannot find any open tutorial or ideas on the web.
Is there a good tutorial or open source code for learning this?
A simple one is fine; I just wonder how to elevate or lower terrain, or put a tree on the map, like in the following video: http://www.youtube.com/watch?v=oaAN4zSkY24
First I want to say that if you haven't chosen between OpenGL and DirectX, it would be a good idea to do so. My choice is OpenGL, since OpenGL is cross-platform and works on Windows, Linux, Solaris, Mac, smartphones, etc., whereas DirectX only supports Windows machines.
I can't give you a tutorial or open source code, since this is kind of big; even just a "simple terrain editor" is still a very complex thing. What I can give you are the points you need to know and read about - if you know these, you will be able to create a terrain editor.
Things you need to be able to do:
VBOs
Shaders
Multi Texturing
Picking / Ray Picking / 3D Picking
VBOs
A VBO, or Vertex Buffer Object, is a way to upload vertex data (positions, normals, texture coordinates, colors, etc.) to the GPU itself. This allows for really fast rendering and is also currently the best way to render. Be aware that this is an OpenGL feature, though DirectX has an equivalent as well.
Shaders & Multi Texturing
Shaders are for shading/coloring the vertices and fragments of all primitives. OpenGL uses GLSL whereas DirectX uses HLSL; the two are very similar.
Multi texturing is basically where you bind multiple textures and then, in a shader, calculate which texture to use for the current vertex/fragment. This way you can achieve what you saw in the video.
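The per-fragment decision is usually a weighted blend rather than a hard switch. Here is the same logic written on the CPU for illustration, blending hypothetical grass/rock/snow layers by terrain height (the band thresholds are invented values):

```cpp
#include <algorithm>
#include <cmath>

struct Blend { float grass, rock, snow; };

// Weights for three terrain layers chosen from the height h.
// The weights always sum to 1, so the blended color stays normalized.
Blend heightBlend(float h) {
    // Same shape as GLSL's smoothstep(e0, e1, x).
    auto smooth = [](float e0, float e1, float x) {
        float t = std::min(std::max((x - e0) / (e1 - e0), 0.0f), 1.0f);
        return t * t * (3.0f - 2.0f * t);
    };
    float rockW = smooth(10.0f, 20.0f, h);   // fade grass -> rock over [10, 20]
    float snowW = smooth(30.0f, 40.0f, h);   // fade rock -> snow over [30, 40]
    Blend b;
    b.grass = 1.0f - rockW;
    b.rock  = rockW * (1.0f - snowW);
    b.snow  = rockW * snowW;
    return b;
}
```

In the fragment shader, each weight would multiply a texture() sample from the corresponding bound texture unit.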
Picking / Ray Picking / 3D Picking
Picking is the process of "shooting" a ray from the camera (3D space) or the mouse (2D screen space); each time the ray hits/collides with something, those things are returned to the user. In your case, you would use the mouse (2D screen space) to create a picking ray, and the point on the terrain where the ray hits is the point where you would want to change the terrain.
If you know nothing about picking, try googling. I found it can be really hard to find good results for 3D-related things, so if you want you can read a question I posted some time ago here on Stack Overflow (click here to see the post); the post covers 3D camera picking and 2D screen-space picking, there is code, and I added my final code to the post itself as well.
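The last step of picking on a flat piece of terrain is just a ray-plane test. A minimal sketch, assuming the mouse ray has already been unprojected into world space (the full unproject multiplies the NDC mouse position by the inverse projection-view matrix):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Intersect a world-space picking ray with the ground plane y = 0.
// Returns false for rays parallel to the plane or pointing away from it.
bool rayHitsGround(Vec3 origin, Vec3 dir, Vec3& hit) {
    if (std::fabs(dir.y) < 1e-6f) return false;   // parallel to the plane
    float t = -origin.y / dir.y;                  // distance along the ray
    if (t < 0.0f) return false;                   // plane is behind the ray
    hit = { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
    return true;
}
```

For real terrain you would step the ray across the heightfield instead of a single plane test, but the structure is the same.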
Extra
If you combine all these things you will be able to create a "terrain editor".
Some of the things I've explained might be OpenGL-related, but there are surely features in DirectX that can do the same kinds of things.

Converting text to mesh [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Want to improve this question? Add details and clarify the problem by editing this post.
Closed 1 year ago.
I need to convert text (string+font) into mesh (vertices, indices, triangles etc), but I don't need to draw anything. I'll just get a string from one API and push it as vertices & indices to another. What's the simplest/easiest/best way of doing this? Font metrics and text placing are of course available and no other transforms are needed.
I'm currently working with VC++. However, any kind of OpenSource (C/C++, C#, VB,...) and "non-open but free" COM/.NET -libraries would be great.
I've heard of FreeType. Does it answer my prayers or is there something even better?
EDIT: As Nico Schertler commented, there seems to be a Mesh.TextFromFont function in the DirectX libs that probably does the trick. Thank you Nico! I'll update when I have time to test this in practice.
Mesh.TextFromFont sounded good, but it didn't save the day since I couldn't figure out how to get the actual point/triangle data out of the mesh object.
But then I found this. In this project, GraphicsPath is used to create a point path from a glyph. The points are then converted into polygons, and the polygons are tessellated into triangles using Poly2Tri.
A quick browse through the source code, some small modifications and some code stripping later, I ended up with a nice .NET DLL with one simple static function that does everything I need.
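To give a feel for the tessellation step, here is the degenerate-simple case: fanning a convex contour into triangles. Real glyph outlines are concave and contain holes, which is exactly why the project above relies on Poly2Tri instead of something this naive:

```cpp
#include <cstddef>
#include <vector>

// Fan-triangulate a CONVEX contour of `pointCount` outline points,
// producing index triples into the point list. A convex contour of
// n points always yields n - 2 triangles.
std::vector<std::size_t> fanTriangulate(std::size_t pointCount) {
    std::vector<std::size_t> tris;
    for (std::size_t i = 1; i + 1 < pointCount; ++i) {
        tris.push_back(0);       // every triangle shares the first point
        tris.push_back(i);
        tris.push_back(i + 1);
    }
    return tris;
}
```

The resulting index triples, paired with the contour points as vertices, are already in the vertices-plus-indices form the other API expects.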
To convert text into a mesh you can also use the ttf2mesh library. The library consists of just one C file and lets you open a TrueType font (.ttf) and convert its glyphs to mesh objects in 2D or 3D space. There are examples in the repository.
An interesting feature is the lack of dependency on any third-party library (like libfreetype). Also, among the examples there is a ttf2obj program that converts a font file to an OBJ file.

OpenGL, screen doesn't update after adding Assimp 3D model [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
Closed 9 years ago.
I am using OpenGL 4.0, I have 3 things in my scene, they are-
1- VBO Plane
2- Cube maps
3- 3D models [3ds/obj]
I am using the Assimp library to import 3D models; the code I built to import models was done with the help of a YouTube tutorial from "TheCPlusPlusGuy".
Here is the issue I am facing: I can render the plane in my scene, I can render the cube maps (a.k.a. skyboxes), and I can render them together.
But when I render any 3D model, be it .3ds or .obj, the screen doesn't update. Even if I resize the window it doesn't get updated.
This only happens when I render a 3D model. I used flags to enable drawing the 3D models at runtime; the program runs fine until I render the models, and once I do, the models themselves do not appear on screen and the screen freezes again.
I googled it, but no one else seems to be having an issue like this.
My primary diagnosis is that I am having this issue because I am using VBOs for the planes, cubemaps, and 3D models.
Here's a list of suggestions:
Using VBOs is not the problem. Nor is using Assimp.
Make sure you've specified the proper number of indices and primitives in your buffer and draw calls, and that they are properly formatted. The OpenGL docs can be vague on what these numbers need to be (bytes, indices, triangles?) so make sure that's done well. The Wiki does a better job of explaining this.
Does your model actually get past the loading stage? Have you tried a very simple model?
Make sure you are only loading the model once (i.e. not in the rendering loop - or if it is, that there's a mechanism ensuring it only loads once). Repeatedly telling your program to load a model will make it run very slowly and runs the risk of eating up all your memory.
Make sure you've translated the model properly from Assimp's data structures to your own. Check that values are being set properly. Load an OBJ and print the values you're copying - do they line up with the .obj file?
Do you have a valid OpenGL context at the time you're loading the model? Loading from Assimp doesn't require one, but going from that data structure to a VBO does.
I'm sure you've done a number of these things but I've had tricky times doing this task, too. Going through step by step will help you narrow down the problem.
I am using Assimp to import models in my editor, but Assimp is only used to read the models and mesh data, and the values are stored in my own model/mesh format. I assume we all do this? I have had no problems with Assimp, and I am also led to believe that skyboxes etc. should be rendered after all other opaque objects, so you can use a few tricks to minimize rendering time (skyboxes count among the most distant objects).
I am inclined to agree with Bartek. Assimp seems to be irrelevant to the problem you are having, and I would consider redesigning your rendering methods.
I forgot to do this after rendering the plane:
glBindVertexArray(0);
After that, the program worked like a charm.