Switching from glDrawElements to glDrawArrays

I'm using LWJGL and an icosahedron-subdivide-algorithm to create icosphere meshes.
I was using glDrawElements to render the spheres, and they looked like this, but I was hoping for a more low-poly look.
After some research I found that glDrawArrays can achieve a low-poly look, so I switched to it. Without changing anything about the spheres themselves (vertex array, index array, etc.), they now look like this.
Even the primary icosahedron is completely off. I've played around with the icosahedron's base vertices and arrays (which the subdivide algorithm uses to create the spheres) to see where things go amiss, but I can't wrap my head around what's going wrong.
It might also be worth mentioning I use GL_TRIANGLES in both glDrawElements and glDrawArrays.
Any insight would be great.
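For what it's worth, glDrawArrays ignores the index buffer entirely and just walks the vertex array in order, so indexed icosphere data normally has to be expanded ("de-indexed") first. A minimal sketch of that step, assuming a float[] of positions (3 floats per vertex) and an int[] of indices (hypothetical names):

```java
// De-indexing: glDrawArrays ignores the index buffer and reads vertices in
// array order, so each index is expanded into its own copy of the vertex.
// 'vertices' (3 floats per vertex) and 'indices' are hypothetical names for
// the icosphere data described above.
static float[] expandForDrawArrays(float[] vertices, int[] indices) {
    float[] expanded = new float[indices.length * 3];
    for (int i = 0; i < indices.length; i++) {
        int v = indices[i] * 3;              // offset of the referenced vertex
        expanded[3 * i]     = vertices[v];
        expanded[3 * i + 1] = vertices[v + 1];
        expanded[3 * i + 2] = vertices[v + 2];
    }
    // Upload 'expanded' to the VBO, then: glDrawArrays(GL_TRIANGLES, 0, indices.length)
    return expanded;
}
```

Once every triangle owns its own three vertices, per-face normals become possible, which is what gives the flat, low-poly look.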

Related

OpenGL/GLSL shader animation and lighting issue

So lately I've taken my first serious steps (or at least I think so) into OpenGL/GLSL and shaders in general.
I've managed to construct and render VBOs, create and compile shaders, and also mess with them to some extent.
I'm using a vertex shader to fix my OpenGL view (correct the aspect ratio) and also to perform animation. This is achieved with various matrix manipulations.
One might ask why I'm using vertex shaders for animation, but from reading articles around the globe I got the impression it's best to keep VBOs static rather than updating them constantly. Some sort of GPU-vs-CPU battle.
Now, I may be wrong about that, which is why I'm reaching out here for advice on the matter. My view is that in the future I might make a game which (for instance) will have a lot of coins for the player to grab, and I would like them to be stored statically on the GPU side and then use the shader to rotate them.
Moving on: "Let there be light."
I've also managed to use my normals in the vertex shader to reproduce lighting. It all worked fine, except that the light rotates with my cube (currently I'm using a cube as a test dummy). Now, I know what's wrong here: it's my vertex shader transforming absolutely everything (even my light source, I guess). I can think of a way or two to solve this problem. One would be to apply a reverse/negative transformation to my light source so I can keep it static while everything else rotates.
And here's where everything blurs. I'm reaching out to Stack Overflow for guidance on how to move forward. I'm trying to think bigger, in a way: what if, in the future, I have plenty of objects I'd like to perform basic animations on (such as rotation, scaling, translation)? Would that require me to have different shaders, or even a single packed one with every function in it? And how would I even use that? Would I pass different values before rendering each object with the same shader?
Right now, to be honest, I want to handle the lighting issue. But I have a feeling that the way I'm about to approach it will set my general approach to animating with shaders. Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before every VBO rendering. I have my concerns about whether that's efficient enough, but I definitely like the idea.
Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before every VBO rendering.
Which question/answer was this? Because you should normally avoid switching shaders where possible. Maybe the person meant uniforms, which are parameters to shaders and can be changed cheaply.
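A minimal sketch of the uniform approach, assuming LWJGL 3 and a vertex shader that declares "uniform mat4 model;" (the uniform name, matrices, and draw callback are all illustrative):

```java
import static org.lwjgl.opengl.GL20.glGetUniformLocation;
import static org.lwjgl.opengl.GL20.glUniformMatrix4fv;
import static org.lwjgl.opengl.GL20.glUseProgram;

// 'program' is a compiled/linked shader with "uniform mat4 model;" in its
// vertex shader; 'modelMatrices' holds one column-major 4x4 matrix (16 floats)
// per object, e.g. a different rotation for every coin.
void drawAll(int program, float[][] modelMatrices, Runnable drawOneMesh) {
    int modelLoc = glGetUniformLocation(program, "model");
    glUseProgram(program);
    for (float[] model : modelMatrices) {
        glUniformMatrix4fv(modelLoc, false, model);  // cheap per-object update
        drawOneMesh.run();                           // the glDrawArrays/glDrawElements call
    }
}
```

A uniform update like this is much cheaper than a glUseProgram switch, which is why one program with per-object uniforms usually beats swapping shaders for every VBO.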
Also, your question is far too broad and not very concise (all that backstory hides the actual issue). I strongly suggest you split your doubts into a number of smaller questions that can be answered separately.
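For the lighting part of the question, a common approach is to transform the normals with the normal matrix and keep the light position out of the model transform entirely, so the light stays fixed while the cube spins. A minimal vertex-shader sketch, written as an LWJGL-style Java string (all names are illustrative):

```java
// The model matrix rotates the cube; the light position is already in world
// space and is deliberately NOT multiplied by it.
String lightingVert =
    "#version 150 core\n" +
    "in vec3 position;\n" +
    "in vec3 normal;\n" +
    "uniform mat4 model;\n" +
    "uniform mat4 view;\n" +
    "uniform mat4 projection;\n" +
    "uniform vec3 lightPosWorld;\n" +
    "out vec3 worldNormal;\n" +
    "out vec3 toLight;\n" +
    "void main() {\n" +
    "    vec4 worldPos = model * vec4(position, 1.0);\n" +
    "    worldNormal = mat3(transpose(inverse(model))) * normal; // normal matrix\n" +
    "    toLight = lightPosWorld - worldPos.xyz;\n" +
    "    gl_Position = projection * view * worldPos;\n" +
    "}\n";
```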

Point rendering in OpenGL and GLSL

Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized it is hard to know whether it's right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL program; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only grab the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this (it would be crazy for there not to be), but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
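A minimal sketch of that setup, with the two shaders written as LWJGL-style Java strings (the "mvp" uniform, attribute name, point size, and halo colour are all illustrative):

```java
// Vertex shader: gl_PointSize only takes effect with GL_PROGRAM_POINT_SIZE
// enabled (see the EDIT above).
String pointVert =
    "#version 150 core\n" +
    "in vec3 position;\n" +
    "uniform mat4 mvp;\n" +
    "void main() {\n" +
    "    gl_Position = mvp * vec4(position, 1.0);\n" +
    "    gl_PointSize = 32.0;\n" +
    "}\n";

// Fragment shader: gl_PointCoord runs from (0,0) to (1,1) across the sprite,
// so a radial falloff from the centre gives a procedural halo.
String pointFrag =
    "#version 150 core\n" +
    "out vec4 fragColor;\n" +
    "void main() {\n" +
    "    float d = length(gl_PointCoord - vec2(0.5));\n" +
    "    float halo = 1.0 - smoothstep(0.0, 0.5, d);\n" +
    "    fragColor = vec4(1.0, 0.9, 0.6, halo);\n" +
    "}\n";
```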
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; then the "halos" should be visible. Blending is what produces the "halos", so look for some more info on that topic.
Also, you have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
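A sketch of that state setup in LWJGL-style Java (the drawPoints callback stands in for the actual GL_POINTS draw call):

```java
import static org.lwjgl.opengl.GL11.*;

// Additive blending so overlapping halos brighten each other; depth writes
// off so one sprite's depth value doesn't clip the halo of the one behind it.
void drawParticlesBlended(Runnable drawPoints) {
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);
    glDepthMask(false);    // keep depth *testing* if solid geometry should still occlude points
    drawPoints.run();
    glDepthMask(true);     // restore state for the rest of the frame
    glDisable(GL_BLEND);
}
```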

Occlusion with octrees

I just started learning OpenGL and writing a first-person shooter, but I'm getting horrible framerates when I draw 5000 cubes. So now I'm attempting to perform occlusion and culling using an octree. What I'm confused about is where to cast the rays from. Do I only cast them from the frustum's near plane? It seems like I would miss the part of the frustum that expands. Any help is appreciated.
If 5000 cubes already gives bad framerates, you should consider changing the way you render your cubes.
It's very unclear to us what you are drawing the cubes for. If they are static (i.e. don't move), then it's best to pack them all into a single vertex buffer. If the cubes are supposed to move, then you should go for instancing. If you're going for a landscape made of cubes like Minecraft, then you should create vertex buffers but only put in the faces of cubes that are actually visible.
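A sketch of the instancing route, assuming an OpenGL 3.3 context, LWJGL, and a VAO that already holds the cube mesh in attribute 0 and one offset per cube in attribute 1 (names and counts are assumptions):

```java
import static org.lwjgl.opengl.GL11.GL_TRIANGLES;
import static org.lwjgl.opengl.GL30.glBindVertexArray;
import static org.lwjgl.opengl.GL31.glDrawArraysInstanced;
import static org.lwjgl.opengl.GL33.glVertexAttribDivisor;

void drawCubes(int cubeVao, int cubeCount) {
    glBindVertexArray(cubeVao);
    // Attribute 1 (the per-cube offset) advances once per instance instead of
    // once per vertex, so a single draw call places every cube.
    glVertexAttribDivisor(1, 1);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, cubeCount);  // 36 vertices in a cube mesh
}
```

The glVertexAttribDivisor call really belongs in the one-time VAO setup; it is only shown here to keep the sketch self-contained.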
I'd like to help more, but I'm unsure what you're doing.

Textures and vertex arrays with OpenGL?

Basically, what I'd like to do is make textured n-gons. I also want to use a tessellator (GLU) to make concave and multi-contour objects.
I was wondering how the texture comes into play, though. I think the tessellator will return vertices, so I will add these to my array; that's fine. But my vertex array will contain more than one polygon object, so how can I tell it when to bind the texture, like in immediate mode? Right now I feel stuck with one call to bind.
How can this be done?
Thanks
If you're going to use glDrawArrays or glDrawElements, you'll have to draw your vertices in pieces, one piece per texture. The same texture is used for the entire call. (These calls are like a potentially more efficient version of submitting the same data by hand within glBegin and glEnd, and you can't change texture inside a glBegin...glEnd block, either.)
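For example, with the vertices sorted so that everything using one texture sits contiguously in the array (texture handles, counts, and the primitive type below are placeholders), in LWJGL-style Java:

```java
import static org.lwjgl.opengl.GL11.*;

void drawBatches(int textureA, int countA, int textureB, int countB) {
    // Vertices [0, countA) use textureA, the next countB vertices use textureB.
    glBindTexture(GL_TEXTURE_2D, textureA);
    glDrawArrays(GL_TRIANGLES, 0, countA);

    glBindTexture(GL_TEXTURE_2D, textureB);
    glDrawArrays(GL_TRIANGLES, countA, countB);
}
```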
You could alternatively stick with glBegin and glEnd, and use glArrayElement to submit vertices whose attributes are taken out of the vertex arrays.

OpenGL texture mapping on the sides of a cube using GL_QUADS

I am trying to map a different texture onto each side of a cube using GL_QUADS. My first problem is that I cannot even get a texture to display on the side of a quad. I can, however, get a texture to display using GL_TRIANGLES, but I do not understand how to draw things very well using triangles, and I want to use quads. I can also only use GLUT for this. I need an example that works, because I do not know enough about OpenGL for someone to simply explain this to me. Someone please help. Thanks!
Oops, I didn't realize I had forgotten to use glTexCoord2f. It works now.
If you post the code that you are having trouble with, perhaps I can help you. Most likely, you need to set the appropriate texture coordinates per-vertex.
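A minimal immediate-mode sketch of one textured face, in LWJGL-style Java (the same calls exist in C under GLUT; the texture handle is a placeholder):

```java
import static org.lwjgl.opengl.GL11.*;

void drawTexturedFace(int textureId) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glBegin(GL_QUADS);
    // Each vertex needs its own texture coordinate, set *before* the vertex.
    glTexCoord2f(0f, 0f); glVertex3f(-1f, -1f, 1f);
    glTexCoord2f(1f, 0f); glVertex3f( 1f, -1f, 1f);
    glTexCoord2f(1f, 1f); glVertex3f( 1f,  1f, 1f);
    glTexCoord2f(0f, 1f); glVertex3f(-1f,  1f, 1f);
    glEnd();
}
```

Binding a different texture before each face's glBegin/glEnd block is what puts a different texture on each side of the cube.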