I have recently started learning Vulkan and have been working on a project that requires migrating OpenGL code to Vulkan. I have been searching for the Vulkan equivalent of OpenGL's GL_LINE_LOOP to migrate the following piece of code:
glColor3f(0,0,0);
glBegin(GL_LINE_LOOP);
glVertex2f(m_rCircFit.left(),m_rCircFit.top()); //(x,y)
glVertex2f(m_rCircFit.right(),m_rCircFit.top()); //(x+width,y)
glVertex2f(m_rCircFit.right(),m_rCircFit.bottom()); //(x+width,y+height)
glVertex2f(m_rCircFit.left(),m_rCircFit.bottom()); //(x,y+height)
glEnd();
I am able to draw a rectangle using VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP, but the output looks like the following:
But I don't want the diagonal line connecting the top-left and bottom-right vertices. Can anyone guide me on what to use in Vulkan to achieve the same functionality as GL_LINE_LOOP?
A line loop is a line strip with one additional line that connects the first and last vertices. So the direct translation is VK_PRIMITIVE_TOPOLOGY_LINE_STRIP; since a line strip does not close itself, you just submit the first vertex a second time at the end to close the loop.
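A minimal sketch of the vertex data (the names here are illustrative, not from your code): the same four corners you passed to glVertex2f, plus a copy of the first one, drawn with the line-strip topology.

```c
#include <assert.h>
#include <stddef.h>

typedef struct { float x, y; } Vertex2D;

/* Fill 'out' with the closed outline of a rectangle: the four corners of the
   old GL_LINE_LOOP plus the first corner repeated, because a line strip does
   not connect its last vertex back to its first. Returns the vertex count
   you would pass to vkCmdDraw. */
static size_t rect_outline_strip(float left, float top,
                                 float right, float bottom,
                                 Vertex2D out[5])
{
    out[0] = (Vertex2D){ left,  top    };
    out[1] = (Vertex2D){ right, top    };
    out[2] = (Vertex2D){ right, bottom };
    out[3] = (Vertex2D){ left,  bottom };
    out[4] = out[0];   /* repeated first vertex closes the loop */
    return 5;
}
```

Upload these five vertices to a vertex buffer and set `topology = VK_PRIMITIVE_TOPOLOGY_LINE_STRIP` in your `VkPipelineInputAssemblyStateCreateInfo`.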
However, note that you may be wasting your time if this is the granularity you're working at. Vulkan uses an entirely different methodology from modern OpenGL, and you're not even using modern OpenGL here: you're using the ancient glBegin immediate mode.
If you have more than a handful of draw calls per frame, code written at this granularity can in the most literal sense be thrown away; otherwise, just keep using OpenGL. Vulkan is all about batching all of your data together and rendering it all at once.
I'm trying to invert an image for a project, which should be as simple as scaling the projection matrix by (1, -1, 1). However, the screen is drawn inside an API that I have no access to and no documentation on (this project is VERY old). So performing that scaling essentially does nothing (I assume because the projection matrix is reset inside the API's call).
Do I still have access to the drawing information after a draw call, or is it cleared? If the information still exists, how do I obtain it?
P.S. - I'm using OpenGL 1.1
From the programmer's point of view, OpenGL is the worst kind of short-term amnesiac imaginable. Once a drawing call returns, as far as the programmer is concerned, OpenGL has already turned everything into coloured pixels and completely forgotten what it just did.
So…
Do I still have access to the drawing information after a draw call, or is it cleared?
No, you don't have access to it after drawing; for all practical purposes it is cleared.
However, you mentioned that the legacy code is old, so there's a good chance it knows nothing about shaders, i.e. it will neither disable nor load its own custom shader. What I'd try is loading a shader written against one of the older GLSL version profiles, which still expose the built-in variables that map to the old fixed-function pipeline state, and writing the shader so that it matches the legacy code's needs.
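As an example, a minimal sketch of such a shader in GLSL 1.20 (which still has the fixed-function built-ins); this is untested against your API, so treat it as a starting point, not a drop-in fix:

```glsl
// GLSL 1.20 vertex shader: uses the fixed-function built-ins, so it can be
// bound around legacy OpenGL 1.1-style code without changing that code.
#version 120
void main() {
    vec4 p = gl_ModelViewProjectionMatrix * gl_Vertex;
    p.y = -p.y;                       // mirror vertically in clip space
    gl_FrontColor = gl_Color;         // pass through the legacy per-vertex colour
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = p;
}
```

Bind the program before the API's draw call returns control to it; as long as the legacy code never calls glUseProgram itself, the shader stays active and the flip applies regardless of what the API does to the projection matrix.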
I am quite new to OpenGL programming. My goal was to build object-oriented graphics code, and I can proudly say I've made some progress. Now I have a different problem.
Let's say we have a working program that can draw one, two, or many rotating teapots. I did this using a list inside my class. The raw code for the drawing function is here:
void Draw(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    for (list<teapot>::iterator it = teapots.begin(); it != teapots.end(); ++it) {
        glTranslatef(it->pos.x, it->pos.y, it->pos.z);
        glRotatef(angle, it->ang.x, it->ang.y, it->ang.z);
        glutSolidTeapot(it->size);
        glRotatef(angle, -it->ang.x, -it->ang.y, -it->ang.z); // undo the rotation
        glTranslatef(-it->pos.x, -it->pos.y, -it->pos.z);     // undo the translation
    }
    glPopMatrix();
    glutSwapBuffers();
}
Everything works, but when I draw a large number of teapots, say 128 in two rows, my FPS drops. I don't know whether this is just a hardware limit or whether I'm doing something wrong. Maybe glPushMatrix() and glPopMatrix() should be called more often? Or less often?
You're using an old, deprecated part of OpenGL (called "immediate mode") in which all the graphics data is sent from the CPU to the GPU every frame: inside glutSolidTeapot() is code that does something like glBegin(GL_TRIANGLES) followed by lots of glVertex3f(...) and finally glEnd(). The reason that's deprecated is because it's a bottleneck. GPUs are highly parallel and are capable of processing many triangles at the same time, but they can't do that if your program is sending the vertices one-at-a-time with glVertex3f.
You should learn about the modern OpenGL API, in which you start by creating a "buffer object" and loading your vertex data into it — basically uploading your shape into the GPU's memory once, up-front — and then you can issue lots of calls telling the GPU to draw triangles using the vertices in that buffer object, instead of having to send all the vertices again every time.
(Unfortunately, this means you won't be able to use glutSolidTeapot(), since that draws in immediate mode and doesn't know how to produce vertex data for a buffer object. But I'm sure you can find a teapot model somewhere on the web.)
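A rough sketch of that buffer-object workflow (assuming a GL 3.x context, a compiled shader program with the position attribute at location 0, and hypothetical `teapotVerts`/`teapotVertexCount` data loaded from a model file; not a complete program):

```c
/* One-time setup: upload the vertex data to GPU memory. */
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(teapotVerts), teapotVerts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);                        /* attribute 0 = position */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

/* Every frame: no vertex data crosses the bus, just one small command. */
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, teapotVertexCount);
```

The point is that the per-frame cost is one draw command per mesh instead of thousands of glVertex3f calls.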
Open.gl is one decent tutorial for modern-style OpenGL, and I'm sure there are others as well.
Wyzard is partially right. Besides using the old deprecated API, where on each draw call you resubmit all your data from the CPU to the GPU, you are also expecting to maintain a decent frame rate while rendering the same geometry many times over. Keeping that approach while moving to the programmable pipeline will not gain you much either; you will start noticing an FPS drop after roughly 40-60 objects (depending on your GPU). What you really need is called batched drawing. Batching comes in different techniques, all of which imply using modern OpenGL, since we are talking about data buffers (arrays of vertices, in your case, which you upload to the GPU). You can either push all the geometry into a single vertex buffer or use instanced rendering commands. In your case, if all you are after is drawing the same mesh multiple times, the second technique is a perfect fit. There are more complex techniques, like multi-draw indirect commands, which let you draw very large quantities of different geometry with a single draw call, but those are pretty advanced for a beginner. The bottom line is that you must move to modern OpenGL and start batching your geometry if you want to keep your app's FPS high while drawing large numbers of meshes.
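For the instancing technique, a sketch (assuming a GL 3.3+ context, a mesh already set up in a hypothetical `teapotVao`, and `instanceVbo`/`offsets` holding one position per teapot; the vertex shader would read attribute 1 as the per-instance offset):

```c
/* Per-instance data: one vec3 offset per teapot, advanced once per
   instance rather than once per vertex. */
glBindVertexArray(teapotVao);
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(offsets), offsets, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribDivisor(1, 1);   /* attribute 1 advances once per instance */

/* One call draws all 128 teapots; gl_InstanceID and attribute 1
   distinguish them inside the vertex shader. */
glDrawArraysInstanced(GL_TRIANGLES, 0, teapotVertexCount, 128);
```

The per-instance rotation could be handled the same way, as additional instanced attributes or a uniform buffer indexed by gl_InstanceID.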
I'm loading some scenes/objects from files using assimp, and I had them displaying properly earlier — but rewrote my MVP matrix setup (which had been terribly written and was incomprehensible).
Now, most primitives which I draw in the standard rendering pipeline seem to be appearing just fine. I have a wireframe cube around the origin and can also put in a triangle. But no matter what I do, my ASSIMP-loaded object refuses to be rendered, as a wireframe or as a solid.
I suspect the mistake I'm making is terribly obvious. I've tried to reduce the code to a minimal example.
The object should look like a rock and it should show up within the wireframe box.
Since I haven't much altered the mesh code, I'm guessing the problem is in scene.h or main.cpp.
The old version had GLSL programs, but I eliminated all mention of those here. My understanding from the OpenGL Superbible is that shaders aren't required, though. So that can't be it, right?
The old version had GLSL programs, but I eliminated all mention of those here. My understanding from the OpenGL Superbible is that shaders aren't required, though.
They are if you want to use generic vertex attributes via glVertexAttribPointer(). Without a shader, OpenGL has no way of knowing that attribute 0 is a position or that attribute 1 contains a texture coordinate.
Use glVertexPointer() and friends if you don't want to use shaders.
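To illustrate the difference (the `Vertex` struct here is an assumed example, not from the question's code):

```c
/* With a shader program bound: generic attributes, meanings defined by
   the locations in your shader. */
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, pos));

/* Fixed-function pipeline (no shader): named, typed arrays instead. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, pos));
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (void*)offsetof(Vertex, uv));
```

Use one style or the other consistently; mixing generic attributes with the fixed-function pipeline is exactly the situation that produces "nothing renders" with no GL error.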
I'm developing an application for a virtual reality environment using OGRE, Bullet and Equalizer. My rendering function looks like this:
root->_fireFrameStarted();
root->_fireFrameRenderingQueued();
root->_fireFrameEnded();
_window->update(false);
The window does not do the buffer swap, because Equalizer does that. This function works fine, we can even use particle systems and all the other fancy stuff OGRE offers.
However, since the projection area in our lab is curved, we have a GLSL module (let's call it Warp) we use in all our applications to distort the rendering output so that it fits our projection wall. We accomplish this by creating a texture, copying the contents of the back buffer to it and applying our warping shader when rendering the distorted texture covering the entire window.
You can find the source code here: pastebin.com/TjNJuHtC
We call this module in eq::Window::frameDrawFinish() and it works well with pure OpenGL applications but not with OGRE.
This is the output without the module and its shader:
http://s1.directupload.net/images/130620/amr5qh3x.png
If I enable the module, this is the rather strange output:
http://s14.directupload.net/images/130620/74qiwgyd.png
And if I disable the two calls to glBindTexture, the texture used by the sun particle effect (flare.png) is rendered onto the screen (I would show it to you but I can only use two links).
Every GL state variable I consider relevant has the same value as in our other applications.
I checked GL_READ_BUFFER, GL_DRAW_BUFFER, GL_RENDER_MODE, GL_CULL_FACE_MODE and GL_DOUBLEBUFFER.
This raises some questions: Why does my call to glTexCopySubImage2D() seem to have no effect at all? And why does my shader not do anything even if I tell it to just make every fragment red?
Supplemental: Putting the entire shader into a material script and letting OGRE handle it is not an option.
Solved the problem: I created my shaders before Ogre::Root created my windows. I have changed the order and now it works.
Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized it is hard to know whether it is right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on WikiBooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only touch the pixels enclosed by polygons created by my points. I'm sure there is a way around this, it would be crazy for there not to be, but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
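A sketch of such a shader pair (GLSL 1.50 for a GL 3.2 core context; the `mvp` uniform and `position` attribute are illustrative names, and the two shaders go in separate source strings):

```glsl
// --- vertex shader ---
#version 150
uniform mat4 mvp;
in vec3 position;
void main() {
    gl_Position = mvp * vec4(position, 1.0);
    gl_PointSize = 32.0;   // size in pixels; needs GL_PROGRAM_POINT_SIZE enabled
}

// --- fragment shader: procedural radial halo ---
#version 150
out vec4 fragColor;
void main() {
    float d = length(gl_PointCoord - vec2(0.5));  // 0 at centre, ~0.707 at corner
    float halo = clamp(1.0 - d * 2.0, 0.0, 1.0);  // 1 at centre, 0 at the edge
    fragColor = vec4(1.0, 0.9, 0.6, halo);        // blending merges overlapping halos
}
```

With additive blending enabled (see the other answer), overlapping halos brighten each other, which gives the glow-blending effect you describe.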
Simply draw point sprites (using GL_POINT_SPRITE) with the blend functions GL_SRC_ALPHA and GL_ONE, and the "halos" should become visible. Blending is what produces the halos, so look for some more info on that topic.
You also have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
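Putting the state setup together as a sketch (the `particleCount` name is illustrative; note GL_POINT_SPRITE only exists in pre-3.2/compatibility contexts and is always on in core profiles):

```c
glEnable(GL_POINT_SPRITE);          /* compatibility contexts only */
glEnable(GL_PROGRAM_POINT_SIZE);    /* honour gl_PointSize from the vertex shader */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);  /* additive: overlapping halos add up */
glDepthMask(GL_FALSE);              /* disable depth writes while drawing halos */
glDrawArrays(GL_POINTS, 0, particleCount);
glDepthMask(GL_TRUE);               /* restore depth writes afterwards */
```

Depth writes are disabled so that one halo's transparent corners don't occlude the halos drawn behind it.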