I am new to 3D game programming and am currently studying DirectX and OpenGL on Windows.
Now I want to make a terrain editor, but I cannot find any open tutorials or ideas on the web.
Is there a good tutorial or open-source code for learning this?
A simple one is fine. I just want to know how to raise or lower terrain, or place a tree on the map, like in the following video: http://www.youtube.com/watch?v=oaAN4zSkY24
First I want to say that if you haven't chosen between OpenGL and DirectX yet, it would be a good idea to do so. My choice is OpenGL, since it is cross-platform and works on Windows, Linux, Solaris, Mac, smartphones, etc., whereas DirectX only supports Windows machines.
I can't give you a tutorial or open source code, since even a "simple terrain editor" is still a very complex thing. What I can give you are the topics you need to know and read about; once you understand them, you will be able to create a terrain editor.
Points you need to know about:
VBOs
Shaders
Multi Texturing
Picking / Ray Picking / 3D Picking
VBOs
A VBO, or Vertex Buffer Object, is a way to upload vertex data (positions, normals, texture coordinates, colors, etc.) to the GPU itself. This allows for really fast rendering and is currently the preferred way to render. Be aware that this is an OpenGL feature, though DirectX has an equivalent as well.
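As a minimal sketch (assuming an OpenGL 3+ context and a loader such as GLEW are already set up; the function name is made up), creating and filling a VBO looks roughly like this:

```cpp
// Minimal VBO sketch -- assumes an OpenGL 3+ context and a loader (e.g. GLEW) are initialized.
#include <GL/glew.h>

GLuint createTriangleVBO()
{
    // Three vertices, position only (x, y, z).
    const GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);              // ask the driver for a buffer name
    glBindBuffer(GL_ARRAY_BUFFER, vbo); // make it the current vertex buffer
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); // upload to the GPU
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbo;
}
```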
Shaders & Multi Texturing
Shaders shade/color the vertices and fragments of all primitives. OpenGL uses GLSL whereas DirectX uses HLSL; the two are very similar.
Multi texturing is basically where you bind multiple textures and then, in a shader, calculate which texture to use (or how to blend them) for the current vertex/fragment. This way you will be able to achieve what you saw in the video.
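To make the multi-texturing idea concrete, here is a rough sketch of a fragment shader that blends a grass and a rock texture by terrain height; the uniform and varying names (grassTex, rockTex, vHeight) are made up for illustration:

```cpp
// Sketch of a multi-texturing fragment shader, stored as a C++ string for glShaderSource().
// All names below are illustrative, not from any particular engine.
const char* multiTexFrag = R"(
#version 330 core
uniform sampler2D grassTex;   // texture unit 0
uniform sampler2D rockTex;    // texture unit 1
in vec2 vTexCoord;
in float vHeight;             // terrain height passed from the vertex shader
out vec4 fragColor;

void main()
{
    vec4 grass = texture(grassTex, vTexCoord);
    vec4 rock  = texture(rockTex,  vTexCoord);
    // Blend by height: low ground is grass, high ground is rock.
    float t = clamp(vHeight / 10.0, 0.0, 1.0);
    fragColor = mix(grass, rock, t);
}
)";

// On the C++ side each sampler is bound to a texture unit, e.g.:
//   glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, grassTextureId);
//   glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, rockTextureId);
//   glUniform1i(glGetUniformLocation(program, "grassTex"), 0);
//   glUniform1i(glGetUniformLocation(program, "rockTex"),  1);
```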
Picking / Ray Picking / 3D Picking
Picking is the process of "shooting" a ray from the camera (3D space) or from the mouse (2D screen space); whatever the ray hits/collides with is returned to you. In your case, you would use the mouse (2D screen space) to create a picking ray, and the point on the terrain where the ray hits is the point where you would want to change the terrain.
If you know nothing about picking, try Googling it. I found that it can be really hard to find good results for 3D-related things, so if you want you can read a question I posted some time ago here on Stack Overflow (click here to see the post). The post covers 3D camera picking and 2D screen-space picking, there is code, and I added my final code to the post itself as well.
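For reference, a minimal sketch of building a picking ray from the mouse position using GLM's unProject (assuming you already have your view and projection matrices and the window size) could look like this:

```cpp
// Sketch: build a world-space picking ray from a 2D mouse position using GLM.
// Assumes existing view/projection matrices and window size; names are illustrative.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Ray { glm::vec3 origin, direction; };

Ray mousePickingRay(float mouseX, float mouseY,
                    const glm::mat4& view, const glm::mat4& projection,
                    int width, int height)
{
    glm::vec4 viewport(0.0f, 0.0f, float(width), float(height));
    // glm::unProject expects window coordinates with y going up, so flip the mouse y.
    glm::vec3 nearPoint = glm::unProject(glm::vec3(mouseX, height - mouseY, 0.0f),
                                         view, projection, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(mouseX, height - mouseY, 1.0f),
                                         view, projection, viewport);
    return { nearPoint, glm::normalize(farPoint - nearPoint) };
}
// The ray can then be intersected with the terrain (e.g. by stepping along it and
// comparing against the heightmap) to find the point to raise or lower.
```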
Extra
If you combine all these things you will be able to create a "terrain editor".
Some of the things I've explained might be OpenGL-specific, but DirectX certainly has features that can do the same kinds of things.
I'm a newbie trying to learn shaders. I feel it's not that easy to code and debug them. The examples and tutorials I see on the internet write them in text files and read them as char pointers to pass to the shader compiler. Are there any best practices for coding them, with IntelliSense, compilation, and debugging (printing a value somewhere, at least to a console)? And how do you maintain and organize multiple shaders in a giant source tree?
I'm using Visual Studio. Are there any tools or extensions for it that support OpenGL shaders?
If not, is there any dedicated IDE just for OpenGL shaders? If so, how do you organize the shaders alongside other C++ source files in Visual Studio?
How do they actually do it?
Thanks.
The absolute minimum is to have access to the GLSL compile and link logs. Here is an example:
complete GL+GLSL+VAO/VBO C++ example
Just look for glsl_log in the code.
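If that example is not at hand, here is a minimal sketch of fetching those logs with the standard GL calls:

```cpp
// Minimal sketch: fetch GLSL compile and link logs after glCompileShader/glLinkProgram.
#include <GL/glew.h>
#include <cstdio>
#include <vector>

void printShaderLog(GLuint shader)
{
    GLint length = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length);
    if (length > 1) {
        std::vector<char> log(length);
        glGetShaderInfoLog(shader, length, nullptr, log.data());
        std::printf("shader log:\n%s\n", log.data());
    }
}

void printProgramLog(GLuint program)
{
    GLint length = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length);
    if (length > 1) {
        std::vector<char> log(length);
        glGetProgramInfoLog(program, length, nullptr, log.data());
        std::printf("program log:\n%s\n", log.data());
    }
}
```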
For really hard stuff it's best to write your shader as CPU-side C++ code and, once it is fully debugged, run it as a shader. For that you need to write some kind of framework that emulates shaders. For starters you need the 1/2/3/4D vector and matrix arithmetic from GLSL. It is not that easy to code, but there are other options such as GLM. I use my own template, but just to enable the vec data types it is around 228 KByte of hideous code due to its getters and setters; for more info look here:
GLSL like vec template for C++
Another absolute minimum is handling input and output. As the fun part is usually in the fragment shader, it is often enough to emulate only the fragment shader this way, so you just need two nested loops over a test image, calling your shader's main function and setting each pixel according to the result (a rough sketch follows after the links below). I usually add a #define texture(...) to emulate texture access, and that is it in a nutshell. For example, these shaders would not have been possible for me to debug otherwise:
raytrace through 3D mesh
raytrace through 3D volume
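As a very rough sketch of the idea (using GLM for the vec types; the Image type and the shader body are placeholders), the two nested loops could look like this:

```cpp
// Sketch: emulate a fragment shader on the CPU over a test image.
// GLM supplies vec2/vec4; the Image type and shaderMain body are placeholders.
#include <glm/glm.hpp>
using namespace glm;

struct Image {                     // hypothetical render target
    int width = 512, height = 512;
    void setPixel(int x, int y, const vec4& rgba) { /* write to a buffer / file */ }
};

// Your "fragment shader" as ordinary C++ -- debuggable with breakpoints and prints.
vec4 shaderMain(vec2 fragCoord, vec2 resolution)
{
    vec2 uv = fragCoord / resolution;            // the same math you would write in GLSL
    return vec4(uv.x, uv.y, 0.0f, 1.0f);
}

void renderOnCPU(Image& img)
{
    vec2 resolution(float(img.width), float(img.height));
    for (int y = 0; y < img.height; ++y)         // the two nested loops
        for (int x = 0; x < img.width; ++x)
            img.setPixel(x, y, shaderMain(vec2(x + 0.5f, y + 0.5f), resolution));
}
```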
As was mentioned in the comments, for simpler shaders you can write them in any C++ IDE just to get syntax highlighting. But it's a good idea to write/debug the stuff in your target app, where you simply add a subwindow that renders the GLSL info logs, because opening a text file after each compilation is dull click-work.
When I started coding shaders I got tired of that and wrote my own IDE for shaders. It's buggy and far from perfect, but it fulfills my needs.
A link to it is in (edit3) here:
How do I get textures to work in OpenGL?
There are some similar tools out there, but having my own with source code allows a whole lot of extra stuff, like using the target app's CPU-side code, etc.
Another very useful thing is being able to print a variable's value from the fragment shader. I was able to do exactly that like this:
GLSL debug prints
You just need to choose the right conditions for the entire area of the print (so the value you are printing is the same while printing all of its pixels). This helped me a lot, especially when using 3D textures and geometry encoded in textures for the first time...
However, in most cases you can use a color output instead of debug prints, though having the ability to print actual numbers can save you a lot of time.
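For completeness, the colour-output fallback is just a matter of writing the value of interest into the fragment colour; a sketch with made-up variable names:

```cpp
// Sketch: the "colour output" fallback -- map a value of interest to the fragment colour.
// Stored as a C++ string for glShaderSource(); the variable names are illustrative.
const char* debugFrag = R"(
#version 330 core
in float vDepth;          // whatever value you want to inspect
out vec4 fragColor;
void main()
{
    // Scale the value into [0,1] and show it as a grey level; the bands/gradients
    // on screen then tell you roughly what the value is doing.
    fragColor = vec4(vec3(clamp(vDepth / 100.0, 0.0, 1.0)), 1.0);
}
)";
```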
I'm using Visual Studio. Are there any tools or extensions for it that support OpenGL shaders?
There is a GLSL integration extension in the Visual Studio Marketplace. I generally make a separate folder for shaders to keep them organized.
It's a bit hard and frustrating to debug shader code, but with practice and multiple debugging sessions it becomes more natural. I have started experimenting with my shaders on Shadertoy before porting them back into my own GLSL code, which is surprisingly trivial.
Having said that, there are some cool tools you can use to debug your graphics, namely RenderDoc or AMD CodeXL.
A friend suggested that I add vogl to the list of debuggers too. I have not used it, but have heard it praised.
Background: I'm writing a program that creates generative art. I care about producing one final static image, and I don't need to render many frames per second. So far it's been 2D, and since I'm on a Mac I've been using the Core Graphics (aka Quartz) 2D drawing API. I've reached its limits, so I started messing with OpenGL, but I'm not happy with the antialiasing so far.
I'm wondering if I should invest in learning it, or whether it's not built for what I want. Is OpenGL more about creating moving graphics as fast as possible, mainly for games? If I want the highest quality rendering (high resolution, smooth curves, best antialiasing, arbitrary lighting and shading algorithms) do I need to write my own renderer, or does it make sense to learn OpenGL? Will I be able to use it as a base?
OpenGL is not a general purpose graphics library.
OpenGL is an API designed around controlling GPUs for the purpose of drawing realtime graphics. If you know how to use it, you can use OpenGL to generate high-quality, close-to-photorealistic images, but it takes a lot of effort to do this.
Antialiasing is actually rather easy to do with high quality: select a multisampled framebuffer format with a high subsample density, enable multisampling, and render.
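For example, with GLFW as the window/context library (a sketch; every windowing framework has an equivalent pixel-format setting), that amounts to:

```cpp
// Sketch: request a multisampled framebuffer and enable MSAA (using GLFW here;
// any windowing framework with an equivalent pixel-format setting will do).
#include <GL/glew.h>
#include <GLFW/glfw3.h>

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_SAMPLES, 8);                       // ask for 8x multisampling
    GLFWwindow* window = glfwCreateWindow(1024, 768, "AA", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();

    glEnable(GL_MULTISAMPLE);                              // usually on by default, but be explicit

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw the scene as usual; edges come out antialiased ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
}
```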
However, your use case sounds more like a task for an offline renderer such as RenderMan, Pixie, YafaRay, or similar.
You want RenderMan, Pixar's CGI rendering software. Your program could either generate RIB files, which are intended to be the 3D equivalent of PostScript files; or you could use the RenderMan C API directly.
RM has a richer set of built-in primitives, for instance quadrics and subdivision surfaces, and since it's designed for film work you can do everything from toon shading to photorealism.
3Delight has a free/low-cost RenderMan-compliant renderer you can use on single systems, and Pixar announced a month or so ago that they will be providing a free version of RenderMan for individual use Real Soon Now.
The RenderMan Companion by Steve Upstill is the classic guide to programming RM. A more recent book is Rendering for Beginners by Saty Raghavachary.
Hope this helps.
Is OpenGL the right choice for highest quality renders, without time constraints?
No. As datenwolf has explained, OpenGL is designed to take full advantage of the capabilities of the GPU to do real time rendering. There are existing products designed for extremely high quality renders.
Is OpenGL the right choice for your project?
Maybe. None of the capabilities you list require a high-quality offline renderer, and all of them can be achieved fairly easily in OpenGL. The main thing it does not support is ray tracing.* If you need your renders to be ray traced, then you would be better off looking at other options.
*It is theoretically possible to do ray tracing in OpenGL, but would be a lot of work.
For quality? No.
Sure, you can get far, but as fintelia stated, it does not support ray tracing (you could do that with OpenCL, but that's not really OpenGL).
There are indeed some impressive OGL/D3D renderers out there, but most of the renders seen today come from software renderers (CPU/parallelism/CUDA/compute): V-Ray, Mental Ray, RenderMan.
I'm trying to learn OpenGL. I've got experience with C and C++, setting up a build environment, and all that jazz, but I'm trying to figure out a good starting point.
I'm aware of the fixed-function pipeline that was prominent in OpenGL <= 2.1, and it seems relatively easy to get started with. However, the core profile that OpenGL pushes in OpenGL >= 3.1 makes me want to stay away from the FFP due to deprecation, and I'm confused about how it all works in 3.1 and above. In 2.1 and below you have glBegin(GL_WHATEVER) and glEnd() when you're drawing shapes. The first thing I noticed when looking through the core profile API is that those two function calls are gone. I realize there's probably a simple replacement, but it's quite shocking to see something so seemingly useful taken out of such a basic task. It almost seems like deprecating printf() from the C standard library. And when I work through the newest Red Book, they still use the old deprecated code, which further muddles my thinking.
When reading through various answers to similar questions I see the typical "shader based" or "it's all done with shaders", etc. If I want to draw a simple white square onto a black background (the first example in the newest Red Book), I don't understand how a shader is relevant to drawing a box at all. Shouldn't they do... well... shading? I've looked into buying the Orange Book and the Blue Book, but I don't want to spend any more money on something that's going to hide it all behind a library (the Blue Book) or something that's going to talk about programming a shader to perform some lighting task in a 3D environment (the Orange Book).
So where do I begin? How do I draw a box (or a cube, or a pyramid, or whatever) using nothing but the core profile? I'm not asking for a code snippet here; I'm looking for an extensive tutorial or a book that someone could point me to. If this has been answered previously and I didn't find it, please redirect me.
The reason for the sudden "complexity" in the core profile is the fact that the fixed functionality pipeline was not representative of what the GPU actually does for you. Much of the functionality was done on the CPU, and only the actual drawing happened on the GPU. The other problem with the fixed pipeline is that it's a losing battle. The fixed pipeline has sooooooo many knobs and switches! So, not only is it painfully complicated already, it will never keep up with the endless demand of new ways to draw scenes. Enter GLSL, and you have the ability to tell the GPU precisely how you want to draw your scene. This shifts the power to the developer and frees everyone from having to wait for OpenGL updates for new switches/knobs.
Now, regarding your frustration with the sudden loss of glBegin and glEnd... there are simple frameworks that mimic their behavior on the new core profile, and that is a good thing. Again, it shifts the power to developers to choose how they approach the pipeline. However, there is nothing wrong with practicing 3D on the FFP. You need to learn 3D math and concepts first anyway. Those concepts apply regardless of API. (Matrix math will save your life both in OpenGL and Direct3D.) So, first you practice with simple triangles and colors. Then you move onto textures (with texture coordinates). Then you add normals (with lighting). Then, after you understand all those concepts, you stop using glBegin/glEnd, and you start batching large amounts of vertex data into buffers. You will not understand glDrawElements all that well if you do not understand glBegin/glEnd anyway. So, it's OK to learn on those tools.
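To make the jump concrete, here is a sketch of the same triangle submitted both ways, immediate mode versus a buffer drawn core-profile style; shader and VAO setup are omitted:

```cpp
// Sketch: the same triangle in immediate mode vs. with a buffer (core-profile style).
// Shader/VAO creation is omitted; this only contrasts how the vertex data is submitted.
#include <GL/glew.h>

const GLfloat tri[] = { -0.5f,-0.5f,0.0f,   0.5f,-0.5f,0.0f,   0.0f,0.5f,0.0f };

void drawImmediate()                   // legacy (compatibility profile only)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

GLuint uploadTriangle()                // core profile: the data lives in a GPU buffer
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);
    return vbo;
}

void drawBuffered(GLuint vbo)          // assumes a shader program and a VAO are bound
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```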
For a school project, I have to be able to move inside a 3D scene like a room and implement collision detection with its walls.
I'm looking for tutorials and bibliography that deals with the subject.
I already have the Red Book and the OpenGL SuperBible.
Simplest thing that comes to mind is using a Colour Map of the top view of the room.
Basically you create a bitmap using only 2 colours:
One that will determine your 'walls'
One for 'everything else'
Here are a few articles found by googling:
2D Collision Detection using a Color Map
Collision Detection and Bounce Calculation using Colour Maps
They use different languages, but that's irrelevant, the principle is the same.
Once you've got the colour map, you will have a ratio to convert from (x, z) in your 3D world to (x, y) in the 2D colour map. In theory, if you want, you could even generate the colour map at runtime by rendering an orthographic top view; you would render just the walls, using the fact that the walls will probably be the 'tallest' objects in your scene.
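A small sketch of that lookup (the world extents and the map representation are made up for illustration):

```cpp
// Sketch: map a 3D world position (x, z) onto the 2D colour map and test for a wall.
// The world extents and the map representation are made-up for illustration.
#include <vector>

struct ColourMap {
    int width, height;
    std::vector<bool> wall;          // true where the bitmap used the 'wall' colour
    bool isWall(int x, int y) const { return wall[y * width + x]; }
};

bool collidesWithWall(const ColourMap& map, float worldX, float worldZ)
{
    const float worldWidth = 100.0f, worldDepth = 100.0f;    // size of the room in 3D units
    int mapX = int(worldX / worldWidth * map.width);          // the "ratio" conversion
    int mapY = int(worldZ / worldDepth * map.height);
    if (mapX < 0 || mapY < 0 || mapX >= map.width || mapY >= map.height)
        return true;                                          // treat out-of-bounds as solid
    return map.isWall(mapX, mapY);
}
```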
HTH
These guys are pretty good at it: http://www2.imm.dtu.dk/visiondag/
You can try to contact them. I took a course there, but I don't have the exact references here.
Here is a course page with links and tutorials:
http://www2.imm.dtu.dk/~bdl/virtualreality.html
There is an entire book on Real-Time Collision Detection.
Before you write your own collision detector from scratch, you should consider implementing the rest of your setup and plugging in an existing library. It is much easier to develop a program, if you have a correct result to compare to.
The GAMMA research group has developed a number of collision detection packages that are popular in robotics and more. You or your institution may ask them for a package for non-commercial or academic use. One of these packages, PQP, is the inspiration for Yaobi, an open-source C++ library.
Yaobi and PQP are both easy to use, requiring only a bunch of triangles to model a geometry.
I'm trying to model a seashell using a bunch of polygons, for example as shown in: link text
But I am new to OpenGL. How would I get started modeling this, and what would the procedure be like?
I am assuming you are going to generate the geometry procedurally instead of "sculpting" it.
What you need to do is generate your geometry just like in the mathematics example and store it in vertex buffer objects (VBOs). There are multiple ways of doing this, but generally you will want to store your vertex information (position, normal, texture coords if any) in one buffer, and the way these vertices are grouped into faces in another (called an index array).
You can then bind these buffers and draw them with a single call to glDrawElements().
Be careful that the vertices in the faces are all in the same winding order (counter-clockwise or clockwise) and that the winding order is specified correctly to OpenGL, or you will get your shell inside out!
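A rough sketch of that flow: generate a parametric surface on a grid, fill a vertex buffer and an index buffer, then draw with glDrawElements. The surface function here is a placeholder (a torus), to be replaced by the seashell equations from your reference:

```cpp
// Sketch: fill a VBO + index buffer from a parametric surface and draw it.
// surfacePoint() is a placeholder -- substitute the seashell equations from your reference.
#include <GL/glew.h>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 surfacePoint(float u, float v)                 // placeholder parametric function (a torus)
{
    return { std::cos(u) * (1.0f + 0.3f * std::cos(v)),
             std::sin(u) * (1.0f + 0.3f * std::cos(v)),
             0.3f * std::sin(v) };
}

void buildMesh(int nu, int nv, GLuint& vbo, GLuint& ibo, GLsizei& indexCount)
{
    std::vector<Vec3>   verts;
    std::vector<GLuint> indices;
    const float TWO_PI = 6.2831853f;

    for (int i = 0; i <= nu; ++i)
        for (int j = 0; j <= nv; ++j)
            verts.push_back(surfacePoint(TWO_PI * i / nu, TWO_PI * j / nv));

    for (int i = 0; i < nu; ++i)
        for (int j = 0; j < nv; ++j) {              // two triangles per grid cell, same winding
            GLuint a = i * (nv + 1) + j, b = a + nv + 1;
            indices.insert(indices.end(), { a, b, a + 1,   a + 1, b, b + 1 });
        }

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vec3), verts.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(GLuint),
                 indices.data(), GL_STATIC_DRAW);
    indexCount = GLsizei(indices.size());
}

// At draw time (with the buffers and vertex attribute 0 bound):
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
```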
VBOs are core in OpenGL 1.5 and up (and available as an ARB extension before that). In the extremely unlikely event that your target platform does not support them (update your drivers first!), you can use vertex arrays. They do pretty much the same thing, but are slower, since the data gets sent over the bus every frame.
While modelling objects procedurally (i.e. generating coordinates as numbers in the code) may be OK for learning purposes, it's definitely not the thing you want to do in general, as it gets very impractical for anything more complicated than a few triangles or a cylinder. Some people consider procedural generation an art, but you need a lot of practice to achieve nice-looking (not to mention realistic) results with that approach.
If you want to display a more complex, realistic model, the approach is to:
create the model in a modelling tool (like the free and powerful Blender)
save it to a file in a given format,
in your program, load the object from the file to memory (either to your RAM to display using Vertex Arrays or to your GPU memory directly using a Vertex Buffer Object) and display it.
A common format (though an old and inconvenient one) is .obj (Wavefront OBJ); Blender is able to save to it, and you are likely to find an OpenGL OBJ loader by googling (or you can roll your own - not trivial, but still fairly easy).
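For illustration, a bare-bones loader for just the 'v' and triangular 'f' records of an OBJ file (ignoring normals, texture coordinates, materials, and quads) could be sketched like this:

```cpp
// Sketch: minimal Wavefront OBJ loader -- positions and triangular faces only.
// Real files also carry vt/vn records, quads, materials, etc.; handle those as needed.
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Mesh {
    std::vector<float>    positions;   // x, y, z per vertex
    std::vector<unsigned> indices;     // 0-based, three per triangle
};

bool loadObj(const std::string& path, Mesh& mesh)
{
    std::ifstream file(path);
    if (!file) return false;

    std::string line;
    while (std::getline(file, line)) {
        std::istringstream in(line);
        std::string tag;
        in >> tag;
        if (tag == "v") {                                   // vertex position
            float x, y, z;
            in >> x >> y >> z;
            mesh.positions.insert(mesh.positions.end(), { x, y, z });
        } else if (tag == "f") {                            // face: "f 1 2 3" or "f 1/1/1 ..."
            for (int i = 0; i < 3; ++i) {
                std::string vert;
                in >> vert;
                unsigned idx = std::stoul(vert.substr(0, vert.find('/')));
                mesh.indices.push_back(idx - 1);            // OBJ indices are 1-based
            }
        }
    }
    return true;
}
```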
An alternative is to create an export script for Blender (very easy if you know Python) and save the model as a simple binary file containing the vertices, etc.; then load it in your application code very easily.