How to improve speed when drawing over 50k spheres with OpenGL - C++

Right now I use glutSolidSphere to draw multiple spheres (50k+), and the speed is extremely low.
Is there any method or suggestion to increase the speed?
Below is my code...
void COpenGlWnd::OnPaint()
{
    CPaintDC dc(this);
    ::wglMakeCurrent(m_hDC, m_hRC);
    for (int k = 0; k < m_nCountZ; k++)
    {
        for (int j = 0; j < m_nCountY; j++)
        {
            for (int i = 0; i < m_nCountX; i++)
            {
                ::glPushMatrix();
                ........
                ::glutSolidSphere(Size[i][j][k], 36, 36);
                ........
                ::glPopMatrix();
            }
        }
    }
    ::SwapBuffers(m_hDC);
}
For more information:
the spheres are always at fixed locations, but the user can rotate with the mouse to see all the spheres from different views.

Here are a couple of suggestions:
Create a vertex buffer object (VBO) containing the sphere and render this instead of using glutSolidSphere.
Look into instancing, that is, drawing many spheres with a single draw call.
The following article does almost exactly what you want: http://sol.gfxile.net/instancing.html
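To make the first suggestion concrete, here is a minimal sketch (an illustration, not the original poster's code): tessellate one unit sphere once, upload it to a VBO plus index buffer, and reuse that mesh for every sphere. buildUnitSphere is a hypothetical helper that fills the vertex and index arrays.

#include <GL/glew.h>
#include <vector>

struct SphereInstance { float x, y, z, radius; };

static GLuint  g_vbo = 0, g_ibo = 0;
static GLsizei g_indexCount = 0;

// Build and upload the unit-sphere mesh once, at startup.
void InitSphereVBO()
{
    std::vector<float>        verts;    // x, y, z per vertex of a unit sphere
    std::vector<unsigned int> indices;  // triangle indices
    buildUnitSphere(36, 36, verts, indices);  // hypothetical tessellator

    g_indexCount = (GLsizei)indices.size();

    glGenBuffers(1, &g_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, g_vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), verts.data(), GL_STATIC_DRAW);

    glGenBuffers(1, &g_ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), indices.data(), GL_STATIC_DRAW);
}

// Per frame: one draw call per sphere, no re-tessellation.
void DrawAllSpheres(const std::vector<SphereInstance>& spheres)
{
    glBindBuffer(GL_ARRAY_BUFFER, g_vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, g_ibo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    for (const SphereInstance& s : spheres)
    {
        glPushMatrix();
        glTranslatef(s.x, s.y, s.z);
        glScalef(s.radius, s.radius, s.radius);  // unit sphere -> this radius
        glDrawElements(GL_TRIANGLES, g_indexCount, GL_UNSIGNED_INT, 0);
        glPopMatrix();
    }

    glDisableClientState(GL_VERTEX_ARRAY);
}

This alone removes the per-frame tessellation cost; combining it with the instancing suggestion above (a single glDrawElementsInstanced call with per-instance position and radius data) removes the 50k+ draw calls as well.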

If you really want efficiency and are only dealing with spheres, you can draw a sphere with effectively infinite resolution using only a single quad and a shader: use math to work out the sphere. Start with an untextured circle, then add depth, normals, lighting, texturing and so on.
This calculates the sphere per pixel, making it as high-resolution as required.
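As a rough illustration of that idea, here is the core per-pixel math written as plain C++ (in a real renderer this runs in the fragment shader; the function and names are made up for this sketch):

#include <cmath>

struct Vec3 { float x, y, z; };

// For a fragment at (u, v) on the quad, in [-1, 1]^2: reject fragments
// outside the silhouette, otherwise reconstruct the sphere's surface.
bool ShadeSphereFragment(float u, float v, Vec3& normal, float& depth)
{
    float r2 = u * u + v * v;
    if (r2 > 1.0f)
        return false;               // outside the circle: shader would discard
    float z = std::sqrt(1.0f - r2); // height of the unit sphere at (u, v)
    normal = { u, v, z };           // unit surface normal, ready for lighting
    depth  = z;                     // used to offset the quad's depth value
    return true;
}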

Related

Multiple framebuffer pass in Metal

I am trying to add blur capability to my application. I found an example of blur and am now trying to implement it using Metal. Here is what the pipeline looks like in my case:
I draw the objects, using raycasting, to an offscreen texture;
then I take this texture and do a horizontal blur, writing the result into texture A;
I take texture A and do a vertical blur, writing the result into texture B;
I draw texture B on the screen.
To write to textures A and B, I use the [[color(m)]] attribute on the fragment function's return value. Then I ran into a problem: in OpenGL, in order to apply blur to a texture, say, 10 times, you can do it like this (using ping-pong framebuffers):
bool horizontal = true, first_iteration = true;
int amount = 10;
shaderBlur.use();
for (unsigned int i = 0; i < amount; i++)
{
    glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[horizontal]);
    shaderBlur.setInt("horizontal", horizontal);
    glBindTexture(
        GL_TEXTURE_2D, first_iteration ? colorBuffers[1] : pingpongBuffers[!horizontal]
    );
    RenderQuad();
    horizontal = !horizontal;
    if (first_iteration)
        first_iteration = false;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
But how can this be done in Metal? I tried it like this, but it gave no results: with the loops or without them the output is the same, as if there were only one blur pass:
blur_pass_ = 1; // horizontal blur pass
[render_encoder setFragmentBytes:&blur_pass_ length:sizeof(blur_pass_) atIndex:0];
for (std::size_t x = 0; x < 9; ++x) { // how many times I want to apply the blur
    [render_encoder setFragmentTexture:render_target_texture_ atIndex:0];
    [render_encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:num_vertices];
}
blur_pass_ = 0; // vertical blur pass
[render_encoder setFragmentBytes:&blur_pass_ length:sizeof(blur_pass_) atIndex:0];
for (std::size_t y = 0; y < 9; ++y) { // how many times I want to apply the blur
    [render_encoder setFragmentTexture:x_blurry_texture_ atIndex:0];
    [render_encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:num_vertices];
}
[render_encoder endEncoding];
Can you tell me how to do this correctly in Metal? Also, I build for iPhones, so performance is important to me.
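For context, one common way to structure this in Metal (a hedged sketch, not an answer from the original thread): within a single render command encoder every draw targets the same attachments, so each blur iteration needs its own render pass and encoder, alternating which texture is the attachment and which is sampled. commandBuffer, blurPipeline, sceneTexture, texA and texB are assumed names.

// Assumed names: commandBuffer, blurPipeline, sceneTexture, texA, texB.
uint32_t horizontal = 1;
id<MTLTexture> src = sceneTexture;  // output of the raycast pass
id<MTLTexture> dst = texA;

for (int i = 0; i < amount; ++i) {
    // One render pass per iteration: 'dst' is the attachment being
    // written, 'src' (last pass's output) is sampled as a texture.
    MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];
    pass.colorAttachments[0].texture     = dst;
    pass.colorAttachments[0].loadAction  = MTLLoadActionDontCare;
    pass.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLRenderCommandEncoder> enc = [commandBuffer renderCommandEncoderWithDescriptor:pass];
    [enc setRenderPipelineState:blurPipeline];
    [enc setFragmentBytes:&horizontal length:sizeof(horizontal) atIndex:0];
    [enc setFragmentTexture:src atIndex:0];
    [enc drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:num_vertices];
    [enc endEncoding];

    // Ping-pong: this pass's output becomes the next pass's input.
    src = dst;
    dst = (dst == texA) ? texB : texA;
    horizontal = !horizontal;
}
// 'src' now holds the final blurred texture; draw it to the screen.

On iPhone-class tile-based GPUs each extra pass costs a full tile load/store, so fewer iterations with a wider kernel (or MPSImageGaussianBlur from Metal Performance Shaders) may end up cheaper than many ping-pong passes.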

Issues turning loaded meshes into cloth simulation

I'm having a bit of an issue trying to get meshes I import into my program to have cloth-simulation physics using a particle/spring system. I'm kind of a beginner at graphics programming, so sorry if this is super obvious and I'm just missing something. I'm using C++ with OpenGL, as well as Assimp to import the models. I'm fairly sure my code to calculate the constraints/springs and step each particle is correct, as I tested it with generated meshes (with quads instead of triangles) and it looked fine, but I'm not certain.
I've been using this link to study up on how to actually do this: https://nccastaff.bournemouth.ac.uk/jmacey/MastersProjects/MSc2010/07LuisPereira/Thesis/LuisPereira_Thesis.pdf
What it looks like in-engine: https://www.youtube.com/watch?v=RyAan27wryU
I'm pretty sure it's an issue with the connections/springs, as the imported model that's just a flat plane seems to work fine, for the most part. The other model, though, seems to just fall apart. I keep looking at papers on this, and from what I understand everything should be working: I connect the edge/bend springs seemingly correctly, and the physics side seems to work on the flat planes. I really can't figure it out for the life of me! Any tips/help would be GREATLY appreciated! :)
Code for processing the mesh into cloth:
// Container to temporarily hold faces while we process springs.
std::vector<Face> faces;
// Go through the indices and take the ones making up a triangle.
// Indices come from Assimp, so I think this is the right way to get each face?
for (int i = 0; i < this->indices.size(); i += 3)
{
    std::vector<unsigned int> faceIds = { this->indices.at(i), this->indices.at(i + 1), this->indices.at(i + 2) };
    Face face;
    face.vertexIDs = faceIds;
    faces.push_back(face);
}
// Iterate through faces and add constraints when needed.
for (int l = 0; l < faces.size(); l++)
{
    // Adding edge springs.
    Face temp = faces[l];
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[1]));
    makeConstraint(particles.at(temp.vertexIDs[0]), particles.at(temp.vertexIDs[2]));
    makeConstraint(particles.at(temp.vertexIDs[1]), particles.at(temp.vertexIDs[2]));
    // We need to get the bending springs as well, and I've written a function to do that.
    for (int x = 0; x < faces.size(); x++)
    {
        Face temp2 = faces[x];
        if (l != x)
        {
            verticesShared(temp, temp2);
        }
    }
}
And here's the code where I process the bending springs:
// Container for any indices the two faces have in common.
std::vector<glm::vec2> traversed;
// Loop through both faces' indices to see if they match each other.
for (int i = 0; i < a.vertexIDs.size(); i++)
{
    for (int k = 0; k < b.vertexIDs.size(); k++)
    {
        // If we get a match, we push a vector holding the two positions within the faces, so we know which ones are equal.
        if (a.vertexIDs.at(i) == b.vertexIDs.at(k))
        {
            traversed.push_back(glm::vec2(i, k));
        }
    }
    // If we're here, it means we have an edge in common, aka two vertices shared between the two faces.
    if (traversed.size() == 2)
    {
        // Get the adjacent vertices.
        int face_a_adj_ind = 3 - ((traversed[0].x) + (traversed[1].x));
        int face_b_adj_ind = 3 - ((traversed[0].y) + (traversed[1].y));
        // Turn the stored positions from earlier into the ACTUAL indices from the face. Indices of indices, eh.
        unsigned int adj_1 = a.vertexIDs[face_a_adj_ind];
        unsigned int adj_2 = b.vertexIDs[face_b_adj_ind];
        // And finally, make a bending spring between the two adjacent particles.
        makeConstraint(particles.at(adj_1), particles.at(adj_2));
    }
}
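As an aside, a common alternative (a hedged sketch, not from the original post) is to build bending springs from an edge-to-opposite-vertex map: each shared edge is visited exactly once, which avoids both the O(n²) face comparison and any duplicate constraints. Face, particles and makeConstraint are assumed to be the poster's types.

#include <algorithm>
#include <map>
#include <utility>
#include <vector>

void BuildBendSprings(const std::vector<Face>& faces)
{
    // Key: an undirected edge (smaller vertex index first).
    // Value: the vertex opposite that edge in the first face that used it.
    std::map<std::pair<unsigned int, unsigned int>, unsigned int> edgeToOpposite;

    for (const Face& f : faces)
    {
        for (int e = 0; e < 3; ++e)
        {
            unsigned int a = f.vertexIDs[e];
            unsigned int b = f.vertexIDs[(e + 1) % 3];
            unsigned int opposite = f.vertexIDs[(e + 2) % 3];
            std::pair<unsigned int, unsigned int> key = std::minmax(a, b);

            auto it = edgeToOpposite.find(key);
            if (it == edgeToOpposite.end())
                edgeToOpposite[key] = opposite;  // first face on this edge
            else
                // Second face on the same edge: connect the two vertices
                // opposite the shared edge with a bending spring.
                makeConstraint(particles.at(it->second), particles.at(opposite));
        }
    }
}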

Draw polygon wire in Maya using OpenGL

I'm looking for a fast way of drawing polygon wireframes in Maya using OpenGL. I have a working solution, however it's very slow for complex scenes.
I also have a fast solution using MGeometry and MGeometryPrimitive; however, it gives me triangles and I can't see a way to get the polygon definition.
I am only interested in points and the polygon definition; I don't care about normals, UVs and such.
Here's my working (but slow) solution:
MPointArray points;
for (MItMeshPolygon oPolyIter(object); !oPolyIter.isDone(); oPolyIter.next())
{
    gGLFT->glBegin(MGL_LINE_LOOP);
    oPolyIter.getPoints(points);
    for (unsigned int i = 0; i < points.length(); i++)
        gGLFT->glVertex3d(points[i].x, points[i].y, points[i].z);
    gGLFT->glEnd();
}
Any ideas or pointers?
After some research, I came up with this solution, which runs considerably faster.
gGLFT->glPolygonMode(MGL_FRONT_AND_BACK, MGL_LINE);
MIntArray verts;
UintArray vertIds;
for (int i = 0; i < mesh.numPolygons(); i++)
{
    mesh.getPolygonVertices(i, verts);
    vertIds.convert(verts);
    gGLFT->glDrawElements(GL_POLYGON, verts.length(), GL_UNSIGNED_INT, vertIds.data());
}

Add sin wave to triangle mesh

Can someone help me add a sine wave to my triangle mesh to get a wave effect?
for (int i = 0; i < 150; i++) {
    for (int j = 0; j < 150; j++) {
        grid[i][j] = 0;
        glBegin(GL_LINE_LOOP);
        glVertex3f(i * 3, grid[i][j], j * 3);
        glVertex3f(i * 3, grid[i][j], j * 3 + 3);
        glVertex3f(i * 3 + 3, grid[i][j], j * 3);
        glEnd();
        glBegin(GL_LINE_LOOP);
        glVertex3f(i * 3, grid[i][j], j * 3 + 3);
        glVertex3f(i * 3 + 3, grid[i][j], j * 3 + 3);
        glVertex3f(i * 3 + 3, grid[i][j], j * 3);
        glEnd();
    }
}
If I've got it right, all I should need to do is add a sine value to grid[i][j], am I right?
Are all the y values to be set to the same grid[i][j]?
It really depends on what you are trying to accomplish.
Are you trying to set up a surface that, when viewed edge-on, looks like a sine wave?
If that is the case, then assuming that you are modulating the y-axis and the z-axis plays no role, you need to determine the frequency you want to use,
i.e. y = A * sin(w * x + p), where A is amplitude, w is angular frequency, and p is phase.
You will also have to take into account the number of sample points on the x-axis so that it doesn't look too aliased. Sine is a continuous function, but you are taking only 150 samples.
Also, you may want to reconsider how you calculate and draw your final triangle mesh. Your current code is not the most efficient because you are recalculating your mesh every frame.
You may want to consider initializing grid once, then drawing triangle strips, etc. There is a lot online that discusses that.
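For illustration, a minimal sketch under the question's setup (the 150x150 grid with an x spacing of 3 units; A, w and t are made-up parameters): fill grid from the sine formula, then draw from the stored heights.

// Minimal sketch: y = A * sin(w * x + p) evaluated over the question's grid.
// A, w and t are assumed parameters; t acts as the phase p, advanced each
// frame to animate the wave.
#include <cmath>

const float A = 2.0f;   // amplitude
const float w = 0.25f;  // angular frequency along x
float       t = 0.0f;   // phase, incremented each frame

void UpdateGrid(float grid[150][150])
{
    for (int i = 0; i < 150; i++)
        for (int j = 0; j < 150; j++)
            grid[i][j] = A * std::sin(w * (i * 3) + t);  // x = i * 3 in the question
}

Note that each vertex of a quad should then read its own entry (grid[i][j], grid[i][j + 1], grid[i + 1][j], ...) rather than the same grid[i][j] for all six vertices; otherwise the surface comes out stepped instead of smooth.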

2D tile-based game shows gaps between the tile sprites when I zoom in with the camera?

I am using the D3DXSPRITE method to draw my map tiles to the screen. I just added a zoom function which zooms in when you hold the up arrow, but noticed you can now see gaps between the tiles. Here are some screenshots:
normal size (32x32 per tile)
zoomed in (you can see white gaps between the tiles)
zoomed out (even worse!)
Here's the code snippet with which I translate and scale the world.
D3DXMATRIX matScale, matPos;
D3DXMatrixScaling(&matScale, zoom_, zoom_, 0.0f);
D3DXMatrixTranslation(&matPos, xpos_, ypos_, 0.0f);
device_->SetTransform(D3DTS_WORLD, &(matPos * matScale));
And this is how I draw the map (tiles are in a vector of vectors of tiles, and I haven't done culling yet):
LayerInfo *p_linfo = NULL;
RECT rect = {0};
D3DXVECTOR3 pos;
pos.x = 0.0f;
pos.y = 0.0f;
pos.z = 0.0f;
for (short y = 0; y < BottomTile(); ++y)
{
    for (short x = 0; x < RightTile(); ++x)
    {
        for (int i = 0; i < TILE_LAYER_COUNT; ++i)
        {
            p_linfo = tile_grid_[y][x].Layer(i);
            if (p_linfo->Visible())
            {
                p_linfo->GetTextureRect(&rect);
                sprite_batch->Draw(
                    p_engine_->GetTexture(p_linfo->texture_id),
                    &rect, NULL, &pos, 0xFFFFFFFF);
            }
        }
        pos.x += p_engine_->TileWidth();
    }
    pos.x = 0;
    pos.y += p_engine_->TileHeight();
}
Your texture indices are wrong. 0,0,32,32 is not the correct value; it should be 0,0,31,31. A zero-based index into your texture atlas of 256 pixels yields values of 0 to 255, not 0 to 256, so a 32x32 tile should be addressed as 0,0,31,31. In this case, the colour of the incorrect pixels depends on the colour of the next texture along to the right and below.
That's the problem of magnification and minification. Your textures should have an invisible border populated with part of the adjacent texture; then the magnification and minification filters will use that border to calculate the colour of edge pixels rather than the default (white) colour. At least, I think so.
I also had a similar problem with texture mapping. What worked for me was changing the texture address mode in the sampler state description. The texture address mode controls what Direct3D does with texture coordinates outside the [0.0f, 1.0f] range: I changed the ADDRESS_U, ADDRESS_V and ADDRESS_W members to D3D11_TEXTURE_ADDRESS_CLAMP, which clamps all out-of-range texture coordinates into the [0.0f, 1.0f] range.
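For reference, a minimal sketch of such a sampler description in Direct3D 11 (the point filter is an assumption, common for pixel-art tiles; the answer above only specifies the address modes):

#include <d3d11.h>

// Creates a sampler that clamps out-of-range texture coordinates to the
// edge texel, as described above. 'device' is assumed to be a valid
// ID3D11Device*.
ID3D11SamplerState* CreateClampSampler(ID3D11Device* device)
{
    D3D11_SAMPLER_DESC sd = {};
    sd.Filter         = D3D11_FILTER_MIN_MAG_MIP_POINT;  // crisp tile edges
    sd.AddressU       = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.AddressV       = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.AddressW       = D3D11_TEXTURE_ADDRESS_CLAMP;
    sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
    sd.MaxLOD         = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&sd, &sampler);  // check the HRESULT in real code
    return sampler;  // bind with context->PSSetSamplers(0, 1, &sampler)
}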
After a long time searching and testing other people's solutions, I found these to be the most complete rules I've ever read:
pixel-perfect-2d from the official Unity website.
Plus, from my own experience, I found that if a sprite's PPI is, for example, 72, you should try to use a higher PPI for that image (96 or more). It makes the sprite denser and leaves no room for white gaps to show up.
Welcome to the world of floating point. Those gaps exist due to imperfections in floating-point arithmetic.
You might be able to improve the situation by being really careful with your floating-point math, but those seams will be there unless you make one whole mesh out of your terrain.
It's the rasterizer that, given the view and projection matrices as well as the vertex positions, ends up slightly off. You may be able to improve on that, but I don't know how successful you'll be.
Instead of drawing separate quads, you can index only the visible vertices that make up your terrain and use texture-tiling techniques to paint different tiles onto it. I believe that won't give you the ugly seam because, in that case, there technically isn't one.
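To illustrate that last point, a minimal sketch (assumed names and layout, not the poster's engine) of a single indexed vertex grid where adjacent tiles share the very same vertices, so no seam can open between them:

#include <vector>

struct Vertex { float x, y; float u, v; };

// Build one shared grid of (tilesX + 1) * (tilesY + 1) corner vertices
// and index two triangles per tile into it. Because neighbouring tiles
// reference identical vertices, the rasterizer cannot leave a gap.
void BuildTerrain(int tilesX, int tilesY, float tileSize,
                  std::vector<Vertex>& verts, std::vector<unsigned int>& indices)
{
    for (int y = 0; y <= tilesY; ++y)
        for (int x = 0; x <= tilesX; ++x)
            verts.push_back({ x * tileSize, y * tileSize, (float)x, (float)y });

    int stride = tilesX + 1;
    for (int y = 0; y < tilesY; ++y)
        for (int x = 0; x < tilesX; ++x)
        {
            unsigned int i0 = y * stride + x;  // top-left corner
            unsigned int i1 = i0 + 1;          // top-right
            unsigned int i2 = i0 + stride;     // bottom-left
            unsigned int i3 = i2 + 1;          // bottom-right
            indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
        }
}

Which tile image appears on each cell is then selected in the shader (for example from a texture atlas or texture array) rather than by drawing separate quads.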