Vertex buffer object won't render, other primitives will - OpenGL

I'm loading some scenes/objects from files using Assimp, and I had them displaying properly earlier, but then I rewrote my MVP matrix setup (the old version was terribly written and incomprehensible).
Now, most primitives that I draw through the standard rendering pipeline appear just fine. I have a wireframe cube around the origin and can also put in a triangle. But no matter what I do, my Assimp-loaded object refuses to render, either as a wireframe or as a solid.
I suspect the mistake I'm making is terribly obvious. I've tried to reduce the code to a minimal example.
The object should look like a rock and it should show up within the wireframe box.
Since I haven't much altered the mesh code, I'm guessing the problem is in scene.h or main.cpp.
The old version had GLSL programs, but I eliminated all mention of those here. My understanding from the OpenGL Superbible is that shaders aren't required, though. So that can't be it, right?

The old version had GLSL programs, but I eliminated all mention of those here. My understanding from the OpenGL Superbible is that shaders aren't required, though.
They are if you want to use generic vertex attributes via glVertexAttribPointer(). Without a shader, OpenGL has no way of knowing that attribute 0 is a position or that attribute 1 contains a texture coordinate.
Use glVertexPointer() and friends if you don't want to use shaders.
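For reference, a minimal sketch of what that looks like for an indexed VBO. The Vertex struct, the vbo/ibo handles and indexCount are assumptions standing in for whatever your mesh class actually stores; the usual OpenGL headers are assumed to be included.

    #include <cstddef>   // offsetof

    struct Vertex { float position[3]; float normal[3]; };

    void drawMeshFixedFunction(GLuint vbo, GLuint ibo, GLsizei indexCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

        glEnableClientState(GL_VERTEX_ARRAY);              // fixed-function position array
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, position));

        glEnableClientState(GL_NORMAL_ARRAY);              // optional, used by fixed-function lighting
        glNormalPointer(GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, normal));

        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);

        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }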

Related

OpenGL - Fixed pipeline shader defaults (Mimic fixed pipeline with shaders)

Can anyone provide me with shaders that are similar to the fixed-function pipeline?
I need the default fragment shader the most, because I found a similar vertex shader online. But if you have a pair, that would be fine!
I want to use the fixed pipeline but have the flexibility of shaders, so I need similar shaders that let me mimic the functionality of the fixed pipeline.
Thank you very much!
I'm new here so if you need more information tell me:D
This is what I would like to replicate: (texture unit 0)
functionality of glTranslatef
functionality of glColor4f
functionality of glTexCoord2f
functionality of glVertex2f
functionality of glOrtho (I know it does some magic stuff behind the scenes with the shader)
That's it. That is all the functionality I would like to replicate from the fixed-function pipeline. Can anyone show me an example of how to replicate those things with shaders?
You have a couple of issues here that will make implementing this using shaders more difficult.
First and foremost, in addition to using fixed-function features you are also using immediate mode. Before you can make the transition to shaders, you should switch to vertex arrays. If you absolutely need to structure your software this way, you could write a class that takes immediate-mode-like commands (the ones that would come between glBegin (...) and glEnd (...)) and pushes them into a vertex array.
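A hypothetical sketch of that recording class (everything here is made up for illustration; the recorded array still has to be uploaded to a VBO and drawn by you):

    #include <vector>

    struct ImmediateBuffer {
        struct Vtx { float x, y, z, r, g, b, a, s, t; };
        std::vector<Vtx> data;

        float r = 1, g = 1, b = 1, a = 1;   // "current" color, like glColor4f
        float s = 0, t = 0;                 // "current" texcoord, like glTexCoord2f

        void color4f(float R, float G, float B, float A) { r = R; g = G; b = B; a = A; }
        void texCoord2f(float S, float T)                { s = S; t = T; }
        void vertex3f(float X, float Y, float Z)         { data.push_back({X, Y, Z, r, g, b, a, s, t}); }
    };

Afterwards data.data() goes into glBufferData and the whole batch is drawn with a single glDrawArrays call.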
As for glTranslatef (...) and glOrtho (...), these are nothing particularly special. They create translation and orthographic projection matrices and multiply the "current" matrix by them. It is unclear what language you are using, but one possible replacement for these functions is a library like glm (C++).
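For example, here is a hedged glm sketch of what glOrtho and glTranslatef used to build behind the scenes; the window size, the "u_MVP" uniform name and the program handle are made up for illustration.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::ortho, glm::translate
    #include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

    // glOrtho(0, 800, 600, 0, -1, 1) followed by glTranslatef(100, 50, 0), done by hand:
    glm::mat4 projection = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -1.0f, 1.0f);
    glm::mat4 model      = glm::translate(glm::mat4(1.0f), glm::vec3(100.0f, 50.0f, 0.0f));
    glm::mat4 mvp        = projection * model;

    // "u_MVP" is simply whatever mat4 uniform your own vertex shader declares:
    glUniformMatrix4fv(glGetUniformLocation(program, "u_MVP"), 1, GL_FALSE, glm::value_ptr(mvp));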
The biggest obstacle will be getting rid of the "current" state mentality that comes with thinking in terms of the fixed-function pipeline. With shaders you have full control over just about every state, and you don't have to use functions that multiply the "current" matrix or set the "current" color; you simply pass the exact matrix or color value that you need to your shader. This is a better way of approaching these problems, and it is why I honestly think you should ditch the fixed-function approach altogether instead of trying to emulate it.
This is why your desire to "use the fixed-function pipeline but have the flexibility of shaders" fundamentally makes very little sense.
Having said all that, in OpenGL compatibility mode, there are reserved words in GLSL that refer to many of the fixed-function constructs. These include things like gl_MultiTexCoord<N>, gl_ModelViewProjectionMatrix, etc. They can be used as a transitional aid, but really should not be relied upon in the long run.
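As a rough illustration (a sketch, not a complete fixed-function replacement), a compatibility-profile pair built from those built-ins covers the glColor4f / glTexCoord2f / glVertex2f plus matrix behaviour with texture unit 0. The GLSL is shown as C++ string literals since it has to be compiled from the application anyway.

    const char* vs = R"(
        #version 120
        void main() {
            gl_FrontColor  = gl_Color;                                  // value set by glColor4f
            gl_TexCoord[0] = gl_MultiTexCoord0;                         // value set by glTexCoord2f
            gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;  // glOrtho/glTranslatef applied to glVertex2f
        }
    )";

    const char* fs = R"(
        #version 120
        uniform sampler2D tex0;   // set this sampler uniform to 0 for texture unit 0
        void main() {
            gl_FragColor = gl_Color * texture2D(tex0, gl_TexCoord[0].st);
        }
    )";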
See also this question: OpenGL Fixed function shader implementation, where they point to a few web resources.
The OpenGL ES 2 book contains an implementation of the OpenGL ES 1.1 fixed function pipeline in Chapter 8 (vertex shader) and Chapter 10 (fragment shader).
Unfortunately, these shaders do not seem to be included in the book's sample code. On the other hand, reading the book and typing in the code is certainly worthwhile.

Why use GLSL along with OpenGL?

While trying to get the gist of OpenGL, I eventually ran into GLSL. I have used OpenGL before for minimal things, like triangles and colors (since I haven't learnt much yet), but when I found out about deprecated functions like glBegin and glEnd, I had to unlearn the things I had just learnt.
Now, I have come across vertex buffers, vertex buffer objects, vertex and fragment shaders... One thing I never understood, though, is why one should use GLSL. Why use GLSL along with OpenGL? What are the things you can't do using pure OpenGL? To me, integrating GLSL shaders into programs adds complexity, since you have to deal with external files or you have to embed shaders into programs, which causes more work.
I have very little experience. I'd like to learn more about the subject, but because of this contradiction, which I can't make sense of, I'm unable to progress.
So, why use GLSL along with OpenGL?
Your question is not about GLSL; it's about shaders in general. GLSL is just the sanctioned way to provide shaders in OpenGL. Your question is really, "Why use shaders along with OpenGL?"
I'm not going to get into all of the details of what shaders can do that fixed function cannot. But here are just a few of the things that you cannot do with fixed-function OpenGL:
GPU-powered vertex weighted skinning.
Dual-quaternion-based skinning.
Proper tangent-space bump mapping.
Various deferred rendering techniques.
On-GPU frustum culling for instanced rendering.
Arbitrary lighting models, including BRDFs or other complex illumination models.
Complex shadow mapping techniques.
High-dynamic range rendering (OpenGL FFP doesn't let you exceed 1.0 on any color calculations).
Arbitrary tone mapping techniques.
I can keep going (for a long time), but I think you get the point. If you actually care about little things like "visual fidelity", you should be using shaders.
The question isn't "why use shaders;" it's "why not use shaders?"

Point rendering in OpenGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized it's hard to know whether this is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which is very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only touch the pixels enclosed by polygons built from my points. I'm sure there is a way around this (it would be crazy for there not to be), but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
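A hedged sketch of that setup; attribute and uniform names such as a_Position, u_MVP, u_PointSize and particleCount are made up, and the shader source is shown as C++ string literals.

    const char* pointVS = R"(
        #version 150
        in vec3 a_Position;
        uniform mat4  u_MVP;
        uniform float u_PointSize;
        void main() {
            gl_Position  = u_MVP * vec4(a_Position, 1.0);
            gl_PointSize = u_PointSize;        // honoured only with GL_PROGRAM_POINT_SIZE enabled
        }
    )";

    const char* pointFS = R"(
        #version 150
        out vec4 fragColor;
        void main() {
            // gl_PointCoord runs from (0,0) to (1,1) across the point sprite
            float d    = length(gl_PointCoord - vec2(0.5));
            float halo = 1.0 - smoothstep(0.0, 0.5, d);   // bright centre, soft edge
            fragColor  = vec4(1.0, 0.9, 0.6, halo);
        }
    )";

    // Client side:
    glEnable(GL_PROGRAM_POINT_SIZE);
    glDrawArrays(GL_POINTS, 0, particleCount);   // particleCount: however many points you have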
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; the "halos" should then be visible. Blending is responsible for the "halos", so look for some more info about that topic.
You also have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
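The state this answer describes boils down to something like the following sketch: additive blending so overlapping halos brighten, and depth writes off so the particles don't occlude each other.

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // src alpha / one, i.e. additive blending
    glDepthMask(GL_FALSE);               // keep depth testing, but stop writing depth

    // ... draw the GL_POINTS here ...

    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);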

Fixed-Function Vs. Shaders - help understand the conceptual differences

My background: I first started experimenting with OpenGL some months ago, for no particular purpose, just fun. I started reading the OpenGL Red Book and got as far as making a planetary system with a lot of different lighting. That lasted for a month, and then my interest in OpenGL went away. It awoke again a week or so ago, and as I gathered from some SO posts, the Red Book is outdated and the OpenGL Superbible is a better source for learning. So I started reading it. I like the concept of shaders, but there's a real mess going on in my brain because of the transition from my old memories of the fixed pipeline to the new concept of shaders.
Question: I would like to write some statements which I think are true and I am asking OpenGL experts to verify them (i.e. whether I am understanding correctly, not quite correctly or absolutely incorrectly). So...
1) If we don't use any shader program, nothing changes. We have current color, current normal, current transformation matrix, current everything, and as soon as we call glVertex**(...) these current values are taken and the vertex is fed to ... I don't know what. The fact is that it's transformed with the current matrix, the current color and normal are applied to it etc.
2) As soon as we use a shader program, all of the above stops working. That is, glColor, glRotate etc. make no sense (do they?). I mean, glColor still sets the current color and glRotate still multiplies the current matrix by the rotation matrix, but these aren't used at all. Instead, we feed vertex attributes with glVertexAttrib. Which attribute means what depends entirely on our vertex shader and the in variable binding. We also find and set the values of the uniforms and then call glVertex, and the shader is executed (I don't know whether immediately or after glEnd() is called). The actual vertex and fragment processing is done entirely manually in the shader program.
3) Shaders don't add anything to depth testing. That is, I don't need to take care of it in a shader. I just call glEnable(GL_DEPTH_TEST). Neither is face culling affected.
4) Alpha blending and antialiasing need not be taken care of in shaders. glEnable calls will suffice.
5) Is it a good idea to use gluPerspective, glRotate, glPushMatrix and the other matrix functions, and then retrieve the current matrix and feed it as a uniform to a shader? That way there would be no need to use a 3rd-party matrix library.
It depends on what version of OpenGL you're talking about. Up through OpenGL 3.0, all the fixed functionality is still present, so yes, if you decide to just use fixed functionality it continues to work like it always did. Starting from 3.0, quite a bit of the fixed pipeline was deprecated, and as of 3.1 it disappears completely. With those versions, you no longer really have the option to just use the fixed pipeline.
Again, it depends. For example, up through OpenGL 3.0, glColor is still supported, even when you use a shader. The difference is that instead of automatically being applied to what gets drawn, its value is supplied to your shader, which can use it unchanged, modify it as it sees fit, or ignore it completely. So, your vertex shader reads the current color as gl_Color and typically forwards it through gl_FrontColor (or gl_BackColor), and your fragment shader reads the interpolated gl_Color and writes the actual fragment color to gl_FragColor. If you're using OpenGL 3.1 or newer, however, glColor (for example) simply no longer exists; a color is just another value you supply to your shader like you could/would anything else.
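A tiny compatibility-profile sketch (GLSL 1.20) of that route, purely to show how the built-ins fit together; the strings are only illustrative:

    const char* vs = R"(
        #version 120
        void main() {
            gl_FrontColor = gl_Color;                                   // the glColor* value
            gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;   // the "current" matrices
        }
    )";

    const char* fs = R"(
        #version 120
        void main() {
            gl_FragColor = gl_Color;   // interpolated front/back color, chosen by facing
        }
    )";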
That's correct, at least up to OpenGL 3.1. As of 4.0, there's a new compute shader that (I believe) can get involved in things like depth testing (but I haven't used it, so I'm a bit uncertain about that).
Yes, you can still use built-in alpha blending. Depending on your hardware, you may also want to consider using the ARB_draw_buffers_blend extension (which is mandatory as of OpenGL 4, if I recall correctly).
Yet again, it depends on the version of OpenGL you're talking about. Current core-profile OpenGL completely eliminates all support for matrices, so you have no choice but to use some other matrix library. Older versions exposed things like gl_ModelViewMatrix and gl_NormalMatrix to your shader as built-in uniforms, so you could go that route if you chose.
2) In modern OpenGL, there is no glColor, glBegin, glVertex, glRotate etc. so they don't make sense.
5) In modern OpenGL there are no built-in matrices, so you have to use a 3rd party library or write your own. So to answer your question, no, it's not a good idea.
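To make that concrete, here is a hedged glm sketch of roughly what the old gluPerspective / glRotatef calls produced; the "u_MVP" uniform name, angle and program are assumptions. If you need the glPushMatrix-style hierarchy, a std::vector<glm::mat4> makes a perfectly good stack.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::perspective, glm::lookAt, glm::rotate
    #include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

    glm::mat4 proj  = glm::perspective(glm::radians(60.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 view  = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0), glm::vec3(0, 1, 0));
    glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(angle), glm::vec3(0, 1, 0));
    glm::mat4 mvp   = proj * view * model;

    // "u_MVP" is whatever mat4 uniform your own vertex shader declares:
    glUniformMatrix4fv(glGetUniformLocation(program, "u_MVP"), 1, GL_FALSE, glm::value_ptr(mvp));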

OpenGL 3.1-4.1 new and deprecated features

I've been working with OpenGL for about a year now and have learned a lot of stuff. Unfortunately, the way I learned it was the old pre-3.x way, meaning immediate mode, default shaders, matrix stacks, etc. I more or less have an idea of what has changed from then to now by looking at the OpenGL specs, but I don't totally understand some of the new ways of doing things.
From my understanding they got rid of the matrix stacks, meaning you have to keep track of your own transformation matrices, which doesn't seem too complicated. They also got rid of immediate mode, meaning you now need to use VBOs or VAOs (I never know which one, maybe both...) to send the position/normal/texture etc. information to the shader program. I don't really get the way these objects work; I think you need to put all the info into them and provide an offset of some sort to mark where the position, normal and texture coordinates are separated. Could someone briefly explain how this actually works (or send me a link that explains it)? I tried Wikipedia and googling it, but found myself still not quite understanding them.
Another point I would like to know more about is shaders, as I've never used them. I'm not going to ask how to code them or anything, just what needs to go in there and what OpenGL still does for you. More specifically, what would you need to do in the shaders to get a basic rendering program? I know you need to do all the lighting calculations and use your matrices to calculate the real vertex position. But does OpenGL still take care of backface culling, line clipping, polygon filling and other lower-level issues, or do you have to code them yourself into the shaders (or don't they even belong in the shaders)?
Since immediate mode is deprecated, doing a "hello triangle" application is a bit more involved. There is a good tutorial on modern OpenGL here:
http://arcsynthesis.org/gltut/
You should read it thoroughly. Bear in mind that it doesn't use VAOs, so you'll have to read about them somewhere else afterwards. VAOs don't change things much, so you won't have to unlearn anything from the mentioned tutorial to use them.
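To make the stride/offset idea from the question concrete, here is a hedged sketch of one interleaved VBO whose layout is recorded in a VAO. The Vertex layout, the vertices vector and attribute indices 0/1/2 are assumptions and must match your shader.

    #include <cstddef>   // offsetof
    #include <vector>

    struct Vertex { float pos[3]; float normal[3]; float uv[2]; };   // interleaved layout
    std::vector<Vertex> vertices;                                    // filled elsewhere

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);                        // the VAO records the attribute setup below
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);

    // stride = size of one whole vertex, offset = where each attribute starts inside it
    glEnableVertexAttribArray(0);   // position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, pos));
    glEnableVertexAttribArray(1);   // normal
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
    glEnableVertexAttribArray(2);   // texture coordinate
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, uv));

    glBindVertexArray(0);

    // later, per frame:
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertices.size());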
And about your second question... Your vertex shader will be executed by OpenGL for every vertex. Your job is to calculate the final position of the vertex and prepare data (like normals, lighting data...) to be sent to the fragment shader, given the attributes of the vertex and the other data you send to the shader (uniforms; you'll read about them in the tutorial). The fragment shader will be executed per fragment, and in the fragment shader you calculate the final color of each fragment.
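A minimal sketch of that division of labour (GLSL 3.30, shown as C++ string literals); every name here is made up, and the attribute locations are just assumptions that must match your vertex setup:

    const char* basicVS = R"(
        #version 330 core
        layout(location = 0) in vec3 a_Position;
        layout(location = 1) in vec3 a_Normal;
        uniform mat4 u_MVP;
        out vec3 v_Normal;
        void main() {
            v_Normal    = a_Normal;                        // data passed on to the fragment shader
            gl_Position = u_MVP * vec4(a_Position, 1.0);   // your job: the final vertex position
        }
    )";

    const char* basicFS = R"(
        #version 330 core
        in  vec3 v_Normal;
        out vec4 fragColor;
        void main() {
            // your job: the final fragment color; here, a crude directional light
            float diffuse = max(dot(normalize(v_Normal), normalize(vec3(0.5, 1.0, 0.3))), 0.0);
            fragColor = vec4(vec3(0.2 + 0.8 * diffuse), 1.0);
        }
    )";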
You can see here:
http://www.opengl.org/sdk/docs/man4/
that things like glPolygonMode and glCullFace are still there.