I'm trying to learn OpenGL. I've got experience with C and C++, setting up a build environment, and all that jazz, but I'm trying to figure out a good starting point.
I'm aware of the fixed-function pipeline that was prominent in OpenGL <= 2.1, and it seems relatively easy to get started with. However, the core profile that OpenGL pushes in OpenGL >= 3.1 makes me want to stay away from the FFP due to deprecation. But I'm confused as to how it all works in 3.1 and above. In 2.1 and below you have your glBegin(GL_WHATEVER) and glEnd() when you're drawing shapes. The first thing I noticed when looking through the core profile API is that those two function calls are gone. I realize there's probably a simple replacement, but it's quite shocking to see something so seemingly useful taken out of such a basic task. It almost seems like deprecating printf() from the C standard library. And when I work through the newest Red Book, they still use the old deprecated code, which further muddles my thinking.
When reading through various answers to similar questions I see the typical "shader based" or "it's all done with shaders" etc. If I want to draw a simple white square onto a black background (the first example in the newest Red Book), I don't understand how a shader is relevant to drawing a box at all. Shouldn't they do... well... shading? I've looked into buying the Orange Book and the Blue Book, but I don't want to spend any more money on something that's going to hide it all behind a library (the Blue Book) or something that's going to talk about programming a shader to perform some lighting task in a 3D environment (the Orange Book).
So where do I begin? How do I draw a box (or a cube or a pyramid or whatever) using nothing but the core profile? I'm not asking for a code snippet here; I'm looking for an expansive tutorial or a book or something that someone could point me to. If this has been answered previously and I didn't find it, please redirect me.
The reason for the sudden "complexity" of the core profile is that the fixed-function pipeline was not representative of what the GPU actually does for you. Much of the functionality ran on the CPU, and only the actual drawing happened on the GPU. The other problem with the fixed pipeline is that it's a losing battle: it has so many knobs and switches! Not only is it painfully complicated already, it could never keep up with the endless demand for new ways to draw scenes. Enter GLSL, and you have the ability to tell the GPU precisely how you want to draw your scene. This shifts the power to the developer and frees everyone from having to wait for OpenGL updates for new switches/knobs.
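For concreteness, here is roughly the smallest shader pair the "white square on black" example needs; this is an illustrative sketch, not the Red Book's code:

    // --- vertex shader (one source string) ---
    #version 330 core
    layout(location = 0) in vec3 position;   // runs once per vertex
    void main() {
        gl_Position = vec4(position, 1.0);   // just pass the position through
    }

    // --- fragment shader (a second source string) ---
    #version 330 core
    out vec4 fragColor;                      // runs once per covered pixel
    void main() {
        fragColor = vec4(1.0);               // plain white; no lighting needed
    }

So a shader is "relevant to drawing a box" simply because, in the core profile, it is the part of the pipeline that says where the vertices go and what color the pixels get.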
Now, regarding your frustration with the sudden loss of glBegin and glEnd... there are simple frameworks that mimic their behavior on the new core profile, and that is a good thing. Again, it shifts the power to developers to choose how they approach the pipeline. However, there is nothing wrong with practicing 3D on the FFP. You need to learn 3D math and concepts first anyway. Those concepts apply regardless of API. (Matrix math will save your life in both OpenGL and Direct3D.) So first you practice with simple triangles and colors. Then you move on to textures (with texture coordinates). Then you add normals (with lighting). Then, after you understand all those concepts, you stop using glBegin/glEnd and you start batching large amounts of vertex data into buffers. You will not understand glDrawElements all that well if you do not understand glBegin/glEnd anyway, so it's OK to learn on those tools.
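To make the comparison concrete, here is a hedged sketch of both styles side by side; vao/vbo are just local names, and in the core-profile case a compiled shader program is assumed to be bound:

    // Old immediate mode (deprecated, but fine for learning):
    glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();

    // Core-profile equivalent: batch the same vertices into a buffer once...
    GLfloat verts[] = { -0.5f,-0.5f,0.0f,  0.5f,-0.5f,0.0f,  0.0f,0.5f,0.0f };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
    glEnableVertexAttribArray(0);

    // ...then draw the whole batch with a single call each frame.
    glDrawArrays(GL_TRIANGLES, 0, 3);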
I'm a newbie trying to learn shaders, and I feel like they're not that easy to code and debug. The examples and tutorials I see on the internet have you write shaders in text files and read them in as char pointers to pass to the shader compiler. Are there any best practices for coding them, with IntelliSense, compiling, and debugging (printing a value somewhere, in a console at least)? And how do you maintain and organize multiple shaders in a giant source tree?
I'm using Visual Studio. Are there any tools or extensions for it that support OpenGL shaders?
If not, is there any dedicated IDE just for OpenGL shaders? If so, how do you organize them alongside the other C++ source files in Visual Studio?
How do they actually do it?
Thanks.
The absolute minimum is to have access to the GLSL compile and link logs. Here is an example:
complete GL+GLSL+VAO/VBO C++ example
Just look for glsl_log in the code.
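As a minimal sketch of what fetching those logs looks like (assuming shader is the id you just called glCompileShader on):

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok == GL_FALSE) {
        char log[4096];
        GLsizei len = 0;
        glGetShaderInfoLog(shader, sizeof(log), &len, log);
        fprintf(stderr, "GLSL compile log:\n%.*s\n", (int)len, log);
    }
    // The same pattern works after glLinkProgram, using glGetProgramiv with
    // GL_LINK_STATUS and glGetProgramInfoLog.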
For really hard stuff it's best to write your shader as CPU-side C++ code and, once it is fully debugged, run it as a shader. For that you need to write some kind of framework that emulates shaders. For starters you need the 1/2/3/4D vector and matrix arithmetic from GLSL. It is not that easy to code, but there are other options, like GLM. I am using my own template, but just enabling the vec datatypes takes around 228 KB of hideous code due to all the getters and setters; for more info look here:
GLSL like vec template for C++
Another absolute minimum is handling input and output. As the fun part is usually in the fragment shader, it's often enough to process only the fragment shader this way: you just need two nested loops looping over a test image, calling your shader's main function, and setting each pixel according to its result (a minimal sketch follows the links below). I usually add a #define texture(...) to emulate texture access, and that is in a nutshell all. For example, I would not have been able to debug these shaders otherwise:
raytrace through 3D mesh
raytrace through 3D volume
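Here is the promised sketch of that emulation loop, using GLM for the GLSL-like types; fragment_main and its UV-gradient body are placeholders for the shader you are actually debugging:

    #include <cstdio>
    #include <glm/glm.hpp>   // supplies vec2/vec4 so the code reads like GLSL
    using namespace glm;

    // #define texture(s, uv) ... would go here to emulate texture fetches.

    vec4 fragment_main(vec2 uv) {              // stand-in for the GLSL main()
        return vec4(uv.x, uv.y, 0.0f, 1.0f);   // simple UV gradient for testing
    }

    int main() {
        const int w = 256, h = 256;
        static unsigned char img[256 * 256 * 3];
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                vec4 c = fragment_main(vec2((x + 0.5f) / w, (y + 0.5f) / h));
                unsigned char *p = img + 3 * (y * w + x);
                p[0] = (unsigned char)(255.0f * c.r);
                p[1] = (unsigned char)(255.0f * c.g);
                p[2] = (unsigned char)(255.0f * c.b);
            }
        FILE *f = fopen("out.ppm", "wb");      // PPM: viewable almost anywhere
        fprintf(f, "P6\n%d %d\n255\n", w, h);
        fwrite(img, 1, sizeof(img), f);
        fclose(f);
        return 0;
    }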
As was mentioned in the comments, for simpler shaders you can write them in any C++ IDE just to get syntax highlighting. But it's a good idea to write/debug the stuff in your target app, where you simply add a subwindow in which you render the GLSL info logs, because opening a text file after each compilation is dull click-work.
When I started coding shaders I got tired of all that and wrote my own IDE for shaders. It's buggy and far from perfect, but it fulfills my needs. A link to it is in (edit3) here:
How do I get textures to work in OpenGL?
There are some similar tools out there, but having my own, with source code, allows a whole lot of extra stuff, like using the target app's CPU-side code, etc.
Another very useful thing is being able to print a variable's value from the fragment shader. I managed to do exactly that like this:
GLSL debug prints
You just need to choose the correct conditions for the entire area of the print (so the value you are printing is the same while rendering all of its pixels). This helped me a lot, especially when using 3D textures and geometry encoded in textures for the first time.
However, in most cases you can use a color output instead of debug prints, though having the ability to print actual numbers can save you a lot of time.
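In its simplest form, that color-output trick is just a couple of lines in the fragment shader. A hedged sketch, where depth stands for whatever intermediate value you are inspecting and the 0.9/10.0 windowing constants are made up:

    // Map the value under inspection onto a gray ramp and read it off the screen.
    float v = clamp((depth - 0.9) * 10.0, 0.0, 1.0);  // window the interesting range
    fragColor = vec4(v, v, v, 1.0);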
I'm using Visual Studio. Are there any tools or extensions for it that support OpenGL shaders?
There is a GLSL integration extension on the Visual Studio Marketplace. I generally make a separate folder for shaders to keep them organized.
It's a bit hard and frustrating to debug shader code, but with practice and multiple iterations of debugging sessions it becomes more natural. I have started to experiment with my shaders on Shadertoy before porting them back into my application, which is surprisingly trivial.
Having said that, there are some cool tools you can use to debug your graphics, namely RenderDoc or AMD CodeXL.
A friend suggested that I add vogl to the list of debuggers too. I have not used it, but I have heard good things about it.
Background: I'm writing a program that creates generative art. I care about creating one final static image, and I don't need to render a bunch of frames per second. So far it's been 2D, and I'm on a Mac, so I've been using the Core Graphics (aka Quartz) 2D drawing API. I've reached its limits, so I started messing with OpenGL, but I'm not happy with the antialiasing so far.
I'm wondering if I should invest in learning it, or whether it's not built for what I want. Is OpenGL more about creating moving graphics as fast as possible, mainly for games? If I want the highest quality rendering (high resolution, smooth curves, best antialiasing, arbitrary lighting and shading algorithms) do I need to write my own renderer, or does it make sense to learn OpenGL? Will I be able to use it as a base?
OpenGL is not a general-purpose graphics library.
OpenGL is an API designed around controlling GPUs for the purpose of drawing realtime graphics. If you know how to use it, you can use OpenGL to generate high-quality, close-to-photorealistic images, but it takes a lot of effort to do this.
Antialiasing is actually rather easy to do with high quality: select a multisampled framebuffer format with a high subsampling density, enable multisampling, and render.
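For instance, with a GLUT-style windowing setup (one option among many; GLFW and SDL expose the equivalent as window hints), the whole thing is roughly:

    // Request a multisampled framebuffer at window creation time...
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("antialiased");
    // ...and make sure multisample rasterization is enabled before drawing.
    glEnable(GL_MULTISAMPLE);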
However, your use case sounds more like a task for an offline renderer such as RenderMan, Pixie, YafaRay, or similar.
You want RenderMan, Pixar's CGI rendering software. Your program could either generate RIB files, which are intended to be the 3D equivalent of PostScript files; or you could use the RenderMan C API directly.
RM has a richer set of built-in primitives, for instance quadrics and subdivision surfaces, and since it's designed for film work you can do everything from toon shading to photorealism.
3Delight has a free/low-cost RM renderer you can use on single systems, and Pixar announced a month or so back that they will be providing a free version of RenderMan for individual use Real Soon Now.
The RenderMan Companion by Steve Upstill is the classic guide to programming RM. A more recent book is Rendering for Beginners by Saty Raghavachary.
Hope this helps.
Is OpenGL the right choice for highest quality renders, without time constraints?
No. As datenwolf has explained, OpenGL is designed to take full advantage of the capabilities of the GPU to do real time rendering. There are existing products designed for extremely high quality renders.
Is OpenGL the right choice for your project?
Maybe. None of the capabilities you list require a high-quality offline renderer, and all of them can be done fairly easily in OpenGL. The main thing it does not support is ray tracing.* If you need your renders to be ray traced, then you would be better off looking at other options.
*It is theoretically possible to do ray tracing in OpenGL, but it would be a lot of work.
For quality? No.
Sure, you can get far, but as fintelia stated, OpenGL does not support ray tracing (you could do that with OpenCL, but that's not really OpenGL).
There are indeed some impressive OGL/D3D renderers out there, but most of the renderers used today are software ones (CPU parallelism, CUDA, compute): V-Ray, Mental Ray, RenderMan.
I am new to 3D game programming, now studying a lot of DirectX and OpenGL, on Windows.
I want to make a terrain editor, but I cannot find any open tutorials or ideas on the web.
Is there a good tutorial or open source code for learning this?
A simple one is fine; I just wonder how to raise or lower terrain, or put a tree on the map, like in the following video: http://www.youtube.com/watch?v=oaAN4zSkY24
First I want to say that if you haven't chosen between OpenGL and DirectX yet, it would be a good idea to do so. My choice is OpenGL, since it is cross-platform and works on Windows, Linux, Solaris, Mac, smartphones, etc., whereas DirectX only supports Windows machines.
I can't give you a tutorial or open source code, since this is kinda big; even a "simple terrain editor" is still a very complex thing. What I can give you, though, are the topics you need to know about and read up on; if you know these, you will be able to create a terrain editor.
The things you need to be able to do:
VBOs
Shaders
Multi-texturing
Picking / Ray Picking / 3D Picking
VBOs
A VBO, or Vertex Buffer Object, is a way to upload vertex data (positions, normals, texture coordinates, colors, etc.) to the GPU itself. This allows for really fast rendering, and it is currently the preferred way to render. Be aware that VBOs are an OpenGL feature, though DirectX has an equivalent in its vertex buffers.
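A hedged sketch of what that looks like for terrain-style data; the interleaved layout, the attribute locations 0..2, and the buildTerrainVertices() helper are all illustrative assumptions:

    #include <cstddef>   // offsetof
    #include <vector>

    // One interleaved vertex: position (3 floats) + normal (3) + texcoord (2).
    struct Vertex { float px, py, pz, nx, ny, nz, u, v; };
    std::vector<Vertex> verts = buildTerrainVertices();  // hypothetical helper

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_STATIC_DRAW);

    // Describe the layout once; locations must match the vertex shader.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, px));
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, nx));
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, u));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);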
Shaders & Multi-texturing
Shaders are for shading/coloring the vertices and fragments of all primitives. OpenGL uses GLSL where DirectX uses HLSL; they are both very similar.
Multi-texturing is basically where you bind multiple textures and then, in a shader, calculate which texture to use for the current vertex/fragment. This way you will be able to achieve what you saw in the video.
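In GLSL that calculation can be as small as a mix() driven by a blend map (classic terrain "splatting"). A sketch; the sampler names and the blend-map convention are assumptions, not a fixed API:

    #version 330 core
    uniform sampler2D grassTex;
    uniform sampler2D rockTex;
    uniform sampler2D blendMap;   // red channel = how much rock to show here
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        float t = texture(blendMap, uv).r;
        fragColor = mix(texture(grassTex, uv), texture(rockTex, uv), t);
    }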
Picking / Ray Picking / 3D Picking
Picking is the process of "shooting" a ray from the camera (3D space) or the mouse (2D screen space); each time the ray hits/collides with something, those things are returned to the user. In your case, you would use the mouse (2D screen space) to create a picking ray, and the point on the terrain where the ray hits is the point where you would want to change the terrain.
If you know nothing about picking, then try Googling. I found that it can be really hard to find good results for 3D-related things, so if you want, you can read a question I posted some time ago here on Stack Overflow (click here to see the post). The post covers 3D camera picking and 2D screen-space picking; there is code, and I added my final code to the post itself as well.
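If you already use GLM, constructing the ray is short. A hedged sketch, assuming view, proj, and a (x, y, width, height) viewport come from your camera code:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>  // provides glm::unProject

    // Turn a mouse click into a world-space picking ray.
    glm::vec3 pickRay(float mouseX, float mouseY, const glm::mat4 &view,
                      const glm::mat4 &proj, const glm::vec4 &viewport)
    {
        float winY = viewport.w - mouseY;  // GL's window origin is bottom-left
        glm::vec3 p0 = glm::unProject(glm::vec3(mouseX, winY, 0.0f), view, proj, viewport);
        glm::vec3 p1 = glm::unProject(glm::vec3(mouseX, winY, 1.0f), view, proj, viewport);
        return glm::normalize(p1 - p0);    // direction; the ray's origin is p0
    }

You would then march this ray (or intersect it analytically) against the heightmap to find the terrain point to edit.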
Extra
If you combine all these things you will be able to create a "terrain editor".
Some of the things I've explained might be OpenGL-specific, but there are surely features in DirectX that can do the same kinds of things.
I never got to take a computer graphics course in university, but I want to have a thorough understanding of everything you'd learn in that course. I figure the best way to learn is through practice (programming).
If I were to start an OpenGL program from scratch, and slowly build it up to add new features to understand all of the concepts, what would I cover? And what would be a good project to demonstrate this?
I'm the author of a series of tutorials on the subject:
http://www.opengl-tutorial.org/
Regarding your question about the concepts you'll learn, the table of contents can give you a good idea:
Basic OpenGL
Tutorial 1 : Opening a window
Tutorial 2 : The first triangle
Tutorial 3 : Matrices
Tutorial 4 : A Colored Cube
Tutorial 5 : A Textured Cube
Tutorial 6 : Keyboard and Mouse
Tutorial 7 : Model loading
Tutorial 8 : Basic shading
Intermediate Tutorials
Tutorial 9 : VBO Indexing
Tutorial 10 : Transparency
Tutorial 11 : 2D text
Tutorial 13 : Normal Mapping
Tutorial 14 : Render To Texture
Miscellaneous
Math Cheatsheet
Useful Tools & Links
Tutorial 12 is by somebody else and is not crucial; Tutorials 15 & 16 are on their way (baked & real-time shadows).
Of course, you have to keep in mind that you can't get a 'thorough understanding of everything' without a big book. I suggest Real-Time Rendering 3 for a broad, but often not in-depth discussion of, well, almost everything, and an additional math book like Essential Mathematics for Games and Interactive Applications.
What's more, I second walkytalky's suggestion of making an FPS from scratch. That's basically what we all did.
If you want to start with OpenGL, you can try Angel's books. Another very popular extensible graphics project is a ray tracer. For that, Suffern's Ray Tracing from the Ground Up is very nice and you'll learn lots of concepts.
If you do intend to build a single project and slowly expand it, I would not suggest any form of FPS. Indeed, I would look at any kind of game with suspicion.
The reason being that most games involve a lot of things that you're not interested in dealing with. Collision detection, physics, AI, real interactivity, etc. These are important for making a game, but the more time you spend on them, the less time you spend on anything else.
Honestly, I would suggest not taking your approach at all. That is, you should not start with a single project and build on it. What you want is to understand graphics. Unless you are going to rebuild graphics theory from first principles, that means at least initially learning from someone else. Whether it is a book, a website, etc., you need a firm foundation of math and basic skills to be able to start doing projects on your own. Learning from the right materials will help you avoid many of the pitfalls that programmers often fall into.
The OpenGL Wiki has links to many tutorials and other how-to guides for learning the basics. In the interests of full disclosure, I am writing one of the guides linked there.
What you should do, once you have a firm foundation, is say, "I want to learn about graphics technique X." So then you do some research on the subject. Maybe you ask some people on a forum or something of the like. Once you understand it, you build a new application to test it out and make it work. Then, you move on to graphics technique Y. And so on.
Once you have a solid grasp on how to draw something in particular, then you'll be ready to start on your single project that you slowly expand. That way, you will know something about how to structure your rendering system and so forth.
If you do insist on taking this approach, I recommend starting with some kind of top-down-esque game with limited interaction. Perhaps a puzzle game or something similar. The ground shouldn't be flat, but you shouldn't need to do significant work on collision detection either. That way, you don't have to do much with the "game" part of the game, and you can instead focus on the rendering.
I used OpenGL 2 years ago. In one afternoon I read a tutorial, drew a cube (and then learned how to load any 3D model), and learned how to move the camera around with the mouse. It was easy, less than 100 lines of code. I didn't completely get the pipeline, but I was able to do something.
Now I need to refresh OpenGL for some basic stuff; basically I need to load a 3D model (any model) and move it around, with the camera fixed. Something I thought would be another afternoon.
I have spent a day and have nothing working. I am reading the recommended tutorial http://www.arcsynthesis.org/gltut/ and I don't get it: now, to draw just a cube, you need a lot of lines, work with lots of buffers, and use some special syntax for shaders... what the hell, I only want to draw a cube. Before, it was just defining 6 sides.
What is going on with OpenGL? Some would argue that it is great now; I think it is screwed.
Is there any easy library to work with? Something that would make my life easier?
GLUT - http://www.opengl.org/resources/libraries/glut/
ASSIMP - http://assimp.sourceforge.net/
These two libraries are all you need to make a simple application that imports a model (in various formats). Read their documentation and examples to get a better understanding of how you can "glue" OpenGL and ASSIMP together.
Documentation
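To give a feel for the ASSIMP side, here is a hedged sketch using its C API inside your load function; "model.obj" is a placeholder path, and the post-process flags are just a common choice:

    #include <assimp/cimport.h>       // aiImportFile / aiReleaseImport
    #include <assimp/scene.h>
    #include <assimp/postprocess.h>
    #include <cstdio>

    const aiScene *scene = aiImportFile("model.obj",
                                        aiProcess_Triangulate | aiProcess_GenSmoothNormals);
    if (!scene) {
        fprintf(stderr, "ASSIMP: %s\n", aiGetErrorString());
        return;
    }
    for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
        const aiMesh *mesh = scene->mMeshes[m];
        // mesh->mVertices, mesh->mNormals, and mesh->mFaces are plain arrays
        // that you can copy straight into VBOs for OpenGL to draw.
    }
    aiReleaseImport(scene);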
As to whether OpenGL has become harder to comprehend: no. What I've learned in recent years from OpenGL is that graphics programming is never simple or done in a few lines of code. You have to be organized and careful; even a simple primitive (e.g. a cube) needs more than 100 lines of code to make it decent and flexible (for example, if you want more subdivisions on your polygons, or texturing).
If you learned it only two years ago, then the tutorials you used were extremely outdated. Immediate mode has been known to be deprecated for a very, very long time; the first plans to abandon it and display lists date back to 2003.
Vertex arrays have been around since version 1.1, and they have been the preferred method for sending geometry to OpenGL ever since: in immediate mode every vertex causes several function calls, so for any seriously complex object you spend more time on function-call overhead than on actual rendering work. If you had used vertex arrays consistently since their introduction, switching over to vertex buffer objects would be as simple as inserting or replacing a few lines.
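To illustrate how small that switch is, a hedged sketch (vertices and count are whatever your mesh provides):

    // Client-side vertex array, OpenGL 1.1 style: the pointer is your own memory.
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glDrawArrays(GL_TRIANGLES, 0, count);

    // With a VBO you bind a buffer, upload once, and the "pointer" becomes
    // an offset into that buffer; the draw call itself does not change.
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, count * 3 * sizeof(GLfloat), vertices, GL_STATIC_DRAW);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0);
    glDrawArrays(GL_TRIANGLES, 0, count);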
The biggest hurdle in using OpenGL-3 is on Windows, where one has to create a proxy context to get access to the extension functions required to select OpenGL-3 capabilities for context creation. Again, though, it is no big hurdle: 20 lines of code, tops. And some programs, like mine for example, create a proxy GL context anyway, to which all shareable data is uploaded; this allows you to quickly destroy/recreate visible contexts yet keep full access to textures, VBOs, and the like. (You can share VBOs, which is another reason for using them instead of plain vertex arrays. This might not look like a big deal, at least not if the context is used from a single process; however, on platforms like X11/GLX, OpenGL contexts can be shared between X11 clients, which may even run on different machines!)
Also, the existence of functions like the matrix manipulation stack led people to the misconception that OpenGL is some kind of matrix math library; some even believed it was a particularly fast one. Neither is true. The removal of the matrix manipulation functions was a very important and right thing to do. Every serious OpenGL application implements its very own matrix math anyway. For example, any modern game using some kind of physics engine would take the transform matrix spit out by the physics calculation and use it in OpenGL directly (glLoadMatrix, or glUniformMatrix), completely bypassing the rest of the matrix functions. This also means that the sole reason to have multiple matrix stacks (GL_PROJECTION, GL_MODELVIEW, GL_TEXTURE, GL_COLOR), namely being able to use the same set of manipulation functions on several matrices, was obsolete; it could have been replaced by something like glLoadMatrixSelected{f,d}v(GLenum target, GLfloat *matrix). However, uniforms and shaders were already around, so the logical step was not to introduce a new function but to reuse the existing API, which had already been used for this task anyway, and instead remove what was no longer needed.
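In practice that looks something like the following sketch with GLM, where mvpLocation, aspect, eye, center, and modelFromPhysics are placeholders for your own state:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>  // perspective, lookAt
    #include <glm/gtc/type_ptr.hpp>          // value_ptr

    // Build the full transform with your own matrix math and hand the result
    // straight to a shader uniform; no GL matrix stack involved.
    glm::mat4 proj  = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
    glm::mat4 view  = glm::lookAt(eye, center, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 model = modelFromPhysics;  // e.g. the matrix the physics engine produced
    glm::mat4 mvp   = proj * view * model;
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));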
TL;DR: The new OpenGL-3 API greatly simplifies using OpenGL. It's a lot clearer, has fewer pitfalls, and IMHO is also more newbie-friendly.
You don't have to use buffer objects. You can use the deprecated immediate mode. It will be slower, but if you don't really care then go ahead and use OpenGL the way you used to. NeHe has some excellent tutorials on OpenGL 1.x stuff.
Swiftless has some good tutorials (only a few very basic ones) on OpenGL 3.x and 4.x, but the learning curve is, as you've found, very steep.
Does it have to be OpenGL? XNA offers the ability to draw 3D models without breaking your back. Could be worth a look.