Multiple draw calls per ModelInstance in LibGDX?

I have about thirty ModelInstances on screen, each an animated chicken. When I profile, I see about five draw calls per model instance. Is there a way to combine these, at least within each instance? I've tried merging the meshes, but I don't know how to handle the bone transformations or the per-mesh-part transforms. Where should I begin to lower the draw call count?
Each is drawn using:
batch.begin(cam);
batch.render(instances, environment);
batch.end();
Other facts: each model has five materials and three bones.
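To see why the count is five per instance: in LibGDX, ModelBatch submits roughly one render call per visible node part, and parts are split wherever the material changes, so the totals multiply. A minimal sketch of that arithmetic, using the numbers from the question (the class and method names here are my own, not LibGDX API):

```java
public class DrawCallEstimate {
    // Rough model: one render call per visible node part, and a model
    // is split into one part per material.
    static int estimateDrawCalls(int instances, int partsPerInstance) {
        return instances * partsPerInstance;
    }

    public static void main(String[] args) {
        // Numbers from the question: ~30 chickens, 5 materials each.
        int before = estimateDrawCalls(30, 5); // 150 calls
        // Merging the five materials into one texture atlas would leave
        // a single part (and a single call) per instance:
        int after = estimateDrawCalls(30, 1);  // 30 calls
        System.out.println(before + " -> " + after);
    }
}
```

So the most effective first step is usually reducing the material count (e.g. baking the five textures into one atlas and remapping UVs), which shrinks the per-instance call count before you touch bones or mesh merging at all.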

Related

How can I use instancing to render to two different textures?

For example, I have two transform matrices: WVP_Left and WVP_Right.
Can I render geometry (like a rabbit) using instancing to generate a left texture and a right texture?
The left texture should contain only one rabbit transformed by WVP_Left, and the right texture only one rabbit transformed by WVP_Right.
Right now I get two textures that each contain two overlapping rabbits.
How can I fix it?
I don't want to render the left and right scenes into one texture and then split it into two textures in another pass.
I also don't want to use a geometry shader to do this, because a geometry shader would add GPU workload.

How to put 2D frame-by-frame animation on 3d model (hybrid animation)

I'd like to do a cartoony 3D character, where the facial features are flat-drawn and animated in 2D. Sort of like the Bubble Guppies characters.
I'm struggling with finding a good method to do it. I'm using Libgdx, but I think the potential methodologies could apply to any game engine.
Here are ideas I thought of, but each has drawbacks. Is there a way this is commonly done? I was just playing a low-budget Wii game with my kids (a Nickelodeon dancing game) that uses this type of animation for the faces.
Ideas:
UV animation - Is there a way to set up a game model (FBX format) so that certain UVs are stored in various skins? Then the UVs could jump around to various places in a sprite map.
Projected face - This idea is convoluted. Use a projection of a texture onto the model with a vertex shader uniform that shifts the UVs of the projected texture around. So basically, you'd need a projection matrix that's set up to move the face projection around with the model. But you'd need enough padding around the face frame sprites to keep the rest of the model clear of other parts of the sprite map. And this results in a complicated fragment shader that would not be great for mobile.
Move flat 3D decal with model - Separately show a 3D decal that's lined up with the model and batched as a separate mesh in the game. The decal could just be a quad where you change the UV attributes of the vertices on each frame of animation. However, this method won't wrap around the curvature of a face. Maybe it could be broken down to separate decals for each eye and the mouth, but still wouldn't look great, and require creating a separate file to go with each model to define where the decals go.
Separate bone for each frame of animation - Model a duplicate face in the mesh for every frame of animation, and give each a unique bone. Animate the face by toggling bone scales between zero and one. This idea quickly breaks down if there are more than a few frames of animation.
Update part of skin each frame - Copy the skin into an FBO. Draw the latest frame of animation into the part of the FBO color texture that contains the face. Downsides to this method are that you'd need a separate copy of the texture in memory for every instance of the model, and the FBO would have to either do a buffer restore every frame (costly) or you'd have to redraw the entire skin into the FBO each frame (also costly).
I have other ideas that are considerably more difficult than these. It feels like there must be an easier way.
Edit:
One more idea... Uniform UV offset and vertex colors - This method would use vertex colors, since they are easily supported in all game engines and modeling packages but in many cases are unused. In the texture, create a strip of the frames of animation. Set up the face UVs for the first frame. Color all vertices with alpha 0 except the face vertices, which are colored alpha 1. Then pass a UV face offset uniform to the vertex shader, and multiply it by a step function on the vertex colors before adding it to the UVs. This avoids the downsides of all the above methods: everything could be wrapped into one texture shared by all instances of the model, and there would be no two-pass pixels on the model except possibly where the face is. The downside here is a heftier model (four extra attributes per vertex, although perhaps the color could be baked down to a single byte).
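The per-vertex math of that last idea is simple enough to sketch outside the shader. The plain-Java helper below mirrors what the proposed vertex shader would compute (the names are mine; in GLSL this would be `uv.x + step(0.5, a_color.a) * u_faceOffset`):

```java
public class FaceUvOffset {
    // step(edge, x) as in GLSL: 0.0 if x < edge, else 1.0.
    static float step(float edge, float x) {
        return x < edge ? 0f : 1f;
    }

    // Only vertices painted with alpha 1 (the face) get their U shifted
    // by the uniform offset; body vertices (alpha 0) stay put.
    static float offsetU(float u, float vertexAlpha, float faceOffsetU) {
        return u + step(0.5f, vertexAlpha) * faceOffsetU;
    }
}
```

With a frame strip of, say, four frames across the texture, the uniform would advance in increments of 0.25 per frame, and only the alpha-1 face vertices follow it.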
Your shader could receive two textures, one for the body and one for the face. The face texture would be transparent, so you could overlay it on top of the body one. Then you just need to send a different face texture based on the animation frame.
I am struggling with the same problem, implementing a 2D animation on a background billboard in my 3D scene.
I believe that using Decals is the simplest solution, and implementing the animation is as easy as updating the decal's TextureRegion according to an Animation object:
TextureRegion frame = animation.getKeyFrame(currentFrameTime, true);
decal.setTextureRegion(frame);
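For looping animations, the index math inside a call like the one above boils down to a division and a modulo. A plain-Java sketch of that selection (my own helper, mirroring what a looping getKeyFrame lookup computes):

```java
public class KeyFrameIndex {
    // Looping key-frame selection: which frame of an n-frame animation
    // is showing at the given state time.
    static int loopingFrame(float stateTime, float frameDuration, int frameCount) {
        return (int) (stateTime / frameDuration) % frameCount;
    }
}
```

For example, with four frames of 0.25 s each, a state time of 0.6 s lands on frame 2, and the sequence wraps back to frame 0 once the state time passes one second.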
I guess the real problem in your case is positioning the decal inside the scene.
One solution could be using your 3D modeling software to model a "phantom" mesh that stores the position of the decal.
The "phantom" mesh would not be rendered with the other 3D elements; instead, it would be used to determine the positions of the decal's vertices. The only thing you'd need to do is copy the "phantom" vertex positions onto the decal.
I haven't implemented this solution yet, but in theory it should be relatively easy.
I hope this idea helps, and I'd appreciate you sharing other solutions or code for this problem if you find any.

OpenGL + multiple cameras

Let me see if I understood. If I use multiple viewports, I can create several "cameras" in my OpenGL application, right?
Well, say I have an object that is visible in viewport 1 but not in viewport 2. If I want the object to appear in both viewports, then... I must draw it twice!
That means that if I have two objects and two "cameras", I have to draw those objects twice. So everything in my scene must be drawn double.
Is this okay? Is there another way to split the screen without duplicating objects?
Is this okay?
Yes, that's how it goes.
Is there another way to split the screen without duplicating objects?
You're not duplicating objects. You can't because there's no such thing as an "object" in OpenGL. OpenGL is just a sophisticated kind of pencil to draw on a framebuffer. There is no scene, there are no objects, there are just points, lines and triangles drawn to a framebuffer.
All you do is draw several pictures of the same thing from different points of view, just as you would with a pencil on paper.
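In practice that means one scene-drawing routine called twice, with a different viewport and camera each time. A hedged sketch of the split (the helper names are mine; the rectangles are what you would hand to glViewport before each pass):

```java
public class SplitScreen {
    // Viewport rectangles {x, y, width, height} for a vertical split
    // down the middle of the screen.
    static int[] leftViewport(int screenW, int screenH) {
        return new int[] {0, 0, screenW / 2, screenH};
    }

    static int[] rightViewport(int screenW, int screenH) {
        // The right half gets whatever is left after integer division,
        // so odd widths still cover the full screen.
        return new int[] {screenW / 2, 0, screenW - screenW / 2, screenH};
    }

    // Per frame (pseudo-OpenGL):
    //   glViewport(left);  loadCameraA();  drawScene();
    //   glViewport(right); loadCameraB();  drawScene();
    // Same drawScene() both times; nothing is duplicated in memory.
}
```

The geometry lives once in your buffers; only the draw commands are issued twice.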

Multiple base vertex per instanced draw

I am searching for a way to do a glDrawElementsInstancedBaseVertex, but with a different base vertex for each instance.
Basically I have to render a lot of cubes (bounding boxes) which have different model space coordinates.
Each cube has its own modelToCamera matrix, which is passed to the vertex shader via instanced array attributes.
The problem is that I have a list of 16 indices to render, which are the same for each cube except for their baseVertex part, and I want to render every cube in a single draw call without needing 16 * numberOfCubes indices.
So is there a way to change the baseVertex for each instance?
No, you can't do that. Furthermore, there's no reason to want to in your case either. If each cube has its own "model space coordinates" and its own "modelToCamera matrix", then you have redundant information.
All cubes are similar to one another. The only difference between one cube and another is the transform of it: scale, rotation, translation. You can take a unit cube and apply a transform to it to turn it into any other cube.
Since you're applying a different transform to each cube, then all your cubes could be just the same initial vertices repeated over and over.
However, don't expect this to give you much performance increase; instancing for tiny objects like a cube generally doesn't help much.
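The answer's point can be made concrete: any axis-aligned box is a unit cube under a scale plus a translation, so one vertex buffer of 8 unit-cube vertices plus a per-instance transform is all the instanced draw needs. A sketch of computing that per-instance data (the class and method names are mine, for illustration):

```java
public class CubeTransform {
    // Map a unit cube spanning [0,1]^3 onto an arbitrary axis-aligned
    // bounding box. Returns {scaleX, scaleY, scaleZ, txX, txY, txZ},
    // the kind of per-instance data you would feed an instanced attribute
    // (or fold into each cube's modelToCamera matrix).
    static float[] unitCubeToBox(float[] min, float[] max) {
        return new float[] {
            max[0] - min[0], max[1] - min[1], max[2] - min[2], // scale
            min[0], min[1], min[2]                             // translation
        };
    }
}
```

With this, every instance shares the same 16 indices and the same base vertex of 0; the only thing that varies per instance is the transform, which is exactly what instanced attributes are for.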

How to create a textured wall that continuously updates while the player moves up in OpenGL?

I was wondering how to create a wall in OpenGL that continuously appears at the top of the screen and disappears at the bottom. I am able to construct the wall from GL_QUADS with texture mapping, but I don't know how to generate it dynamically as the player climbs up.
You have several possibilities.
Create one quad for, say, one meter. Render it 100 times, from floor(playerPos.z) to 100 meters ahead. Repeat for the opposite wall.
Create one quad for 100 meters. Set the U texture coordinates of the quad to playerPos.z and playerPos.z + 100, and set the texture wrap mode to GL_REPEAT.
The second one is faster (only 2 quads) but doesn't let you choose different textures for different parts of the wall.
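The second option works because GL_REPEAT tiles any coordinate outside [0, 1], so sliding both ends of the U range by the player's position scrolls the texture with no geometry changes. A small sketch of that coordinate update (assuming one texture repeat per meter; the helper name is mine):

```java
public class WallTexCoords {
    // U coordinates for the near and far end of a single long wall quad.
    // With GL_REPEAT, values beyond 1.0 simply tile, so moving playerZ
    // scrolls the texture along the quad each frame.
    static float[] wallURange(float playerZ, float wallLength) {
        return new float[] {playerZ, playerZ + wallLength};
    }
}
```

Each frame you rewrite just these two texture coordinates (e.g. via glTexCoord2f or a small buffer update) while the quad's vertices never change.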
You don't have to make a "dynamic wall" (i.e. change glVertex* values every frame). Just change your camera position (the modelview matrix) with glTranslatef.
(I hope I understood your question correctly)