Given a human 3D model, I want to change its shape by supplying parameters such as height, waist, bust, etc.
From what I gathered, the 3D model should have some 'hooks' around the areas I can change.
Any pointers for this would be very helpful, whether through OpenGL, Three.js, or any other means. I don't want to do it in Blender or other 3D manipulation tools; I want it done programmatically.
Here's a sample 3D model.
What you should do is "tag" a group of vertices together.
Then apply a vertex shader to those groups, which changes the position of the vertices to shrink/expand the mesh.
One way to do this is to place a point inside the mesh, and give it a radius. This pretty much means you're creating a sphere.
Run the shader on all the vertices inside the sphere.
What the shader should do is "inflate" the sphere - moving the vertices away from the center point.
Just transform each vertex away from the center by a certain amount.
(Make a vector from the center to the current vertex, extend it, and move the vertex to the extended endpoint.)
This should work well for the belly.
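A minimal CPU-side sketch of that idea in C++ (the same math would normally live in a vertex shader; the falloff term is my own addition so vertices near the sphere's boundary blend smoothly into the rest of the mesh):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };   // minimal stand-in for your vertex type

// Push every vertex inside the sphere (center, radius) away from the center.
// 'amount' is the maximum displacement, applied at the center itself.
void inflate(std::vector<Vec3>& vertices, Vec3 center, float radius, float amount)
{
    for (Vec3& v : vertices) {
        Vec3 d = { v.x - center.x, v.y - center.y, v.z - center.z };
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        if (dist > 0.0f && dist < radius) {
            // Assumed falloff: full push at the center, none at the boundary.
            float push = amount * (1.0f - dist / radius);
            v.x += d.x / dist * push;   // unit vector away from the center,
            v.y += d.y / dist * push;   // scaled by the push distance
            v.z += d.z / dist * push;
        }
    }
}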
Another shader you can write stretches the mesh vertically (for the person's height).
This is more straightforward.
Just run over all vertices and add to their height.
How much to add is what you need to figure out. My intuition says it can't be a constant; I think it's a linear function of the vertex's height, but I'm not sure.
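For what it's worth, a plain scale in y is such a linear function: the height added to each vertex is proportional to how high it already sits, so the feet (y = 0) stay put and the head moves the most. A sketch, assuming the model stands on the y = 0 plane:

#include <vector>

struct Vec3 { float x, y, z; };

// Stretch the model from currentHeight to targetHeight by scaling y.
// The height added per vertex, y * (scale - 1), is linear in y: zero at
// the feet, maximal at the top of the head.
void stretchHeight(std::vector<Vec3>& vertices, float currentHeight, float targetHeight)
{
    float scale = targetHeight / currentHeight;
    for (Vec3& v : vertices)
        v.y *= scale;
}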
What's the best way to draw part of a sphere in, for example, OpenGL, given that I have the vertices of the boundary of the region that should be rendered?
I'm drawing a sphere using an octahedron transformation (described here: https://stackoverflow.com/a/7687312/1840136), and I can draw the arcs that represent the boundaries in the same way, by creating intermediate vertices and then "normalizing" them.
To create triangles from a planar region I can use something from this answer: https://math.stackexchange.com/a/1814637, but the thing is, the result will still be flat. To get part of a sphere, I definitely need another batch of intermediate vertices for additional triangles. What is the algorithm for such a task? And since I may already have the triangles forming the original sphere, can I use that data somehow?
I'd like to do a cartoony 3D character, where the facial features are flat-drawn and animated in 2D. Sort of like the Bubble Guppies characters.
I'm struggling with finding a good method to do it. I'm using Libgdx, but I think the potential methodologies could apply to any game engine.
Here are ideas I thought of, but each has drawbacks. Is there a way this is commonly done? I was just playing a low-budget Wii game with my kids (a Nickelodeon dancing game) that uses this type of animation for the faces.
Ideas:
UV animation - Is there a way to set up a game model (FBX format) so that certain UVs are stored in various skins? Then the UVs could jump around to various places in a sprite map.
Projected face - This idea is convoluted. Use a projection of a texture onto the model, with a vertex shader uniform that shifts the UVs of the projected texture around. So basically, you'd need a projection matrix that's set up to move the face projection around with the model. But you'd need enough padding around the face frame sprites to keep the rest of the model clear of other parts of the sprite map. And this results in a complicated fragment shader that would not be great for mobile.
Move flat 3D decal with model - Separately show a 3D decal that's lined up with the model and batched as a separate mesh in the game. The decal could just be a quad where you change the UV attributes of the vertices on each frame of animation. However, this method won't wrap around the curvature of a face. Maybe it could be broken down into separate decals for each eye and the mouth, but it still wouldn't look great, and it would require creating a separate file to go with each model to define where the decals go.
Separate bone for each frame of animation - Model a duplicate face in the mesh for every frame of animation, and give each a unique bone. Animate the face by toggling bone scales between zero and one. This idea quickly breaks down if there are more than a few frames of animation.
Update part of skin each frame - Copy the skin into an FBO. Draw the latest frame of animation into the part of the FBO color texture that contains the face. Downsides to this method are that you'd need a separate copy of the texture in memory for every instance of the model, and the FBO would have to either do a buffer restore every frame (costly) or you'd have to redraw the entire skin into the FBO each frame (also costly).
I have other ideas that are considerably more difficult than these. It feels like there must be an easier way.
Edit:
One more idea... Uniform UV offset and vertex colors - This method would use vertex colors, since they are easily supported in all game engines and modeling packages but in many cases go unused. In the texture, create a strip of the frames of animation. Set up the face UVs for the first frame. Color all vertices with alpha 0, except the face vertices, which get alpha 1. Then pass a UV face offset uniform to the vertex shader, and multiply it by a step function on the vertex color's alpha before adding it to the UVs. This avoids the downsides of all the above methods: everything could be wrapped into one texture shared by all instances of the model, and there would be no two-pass pixels on the model except possibly where the face is. The downside here is a heftier model (four extra attributes per vertex, although perhaps the color could be baked down to a single byte).
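For concreteness, here is the per-vertex logic described above, written as plain C++ rather than actual shader code; the names (uv, vertexAlpha, uvOffset) are illustrative:

struct Vec2 { float u, v; };

// uv:          the vertex's base texture coordinate (set up for frame 0)
// vertexAlpha: the vertex color's alpha (1 on face vertices, 0 elsewhere)
// uvOffset:    the per-frame uniform selecting a frame in the sprite strip
Vec2 animatedUV(Vec2 uv, float vertexAlpha, Vec2 uvOffset)
{
    // The step function: 1 when alpha >= 0.5, else 0, so only the face
    // vertices get shifted to the current frame of the strip.
    float mask = (vertexAlpha >= 0.5f) ? 1.0f : 0.0f;
    return { uv.u + mask * uvOffset.u, uv.v + mask * uvOffset.v };
}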
Your shader could receive two textures, one for the body and one for the face. The face texture would be transparent outside the face, so you could overlay it on top of the body texture. Then you just need to send a different face texture based on the animation.
I am struggling with the same problem, implementing a 2D animation on a background billboard in my 3D scene.
I believe that using Decals is the simplest solution, and implementing the animation is as easy as updating the decal's TextureRegion according to an Animation object:
// Fetch the current key frame (looping) and apply it to the decal
TextureRegion frame = animation.getKeyFrame(currentFrameTime, true);
decal.setTextureRegion(frame);
I guess the real problem in your case is positioning the decal inside the scene.
One solution could be to use your 3D modeling software to model a "phantom" mesh that stores the position of the decal.
The "phantom" mesh would not be rendered with all the other 3D elements; instead, it would be used to determine the positions of the decal's vertices. The only thing you'd need to do is copy the "phantom" mesh's vertex positions over to the decal.
I haven't gotten around to implementing this solution yet, but in theory it should be relatively easy.
Hope this idea helps you, and I would appreciate you sharing other solutions/code for this problem if you find any.
Suppose I want to render a pyramid in Direct3D. I have the following vertices in my vertex buffer:
Vertex vertices[] = {
    { XMFLOAT3(+1.0f, 0.0f, +1.0f), (const float*)&Colors::Green },
    { XMFLOAT3(+1.0f, 0.0f, -1.0f), (const float*)&Colors::Green },
    { XMFLOAT3(-1.0f, 0.0f, -1.0f), (const float*)&Colors::Green },
    { XMFLOAT3(-1.0f, 0.0f, +1.0f), (const float*)&Colors::Green },
    { XMFLOAT3( 0.0f, 1.5f,  0.0f), (const float*)&Colors::Blue  }
};
Where Vertex is a simple struct with a position and color value.
Now in my index buffer, what is the proper order to specify these vertices to draw the pyramid so all of its triangles are front facing? Whenever I try what seems logical to me, I end up with some triangles drawn facing the wrong way.
Here's how I usually do this:
draw your model on a piece of paper, in a 3D mesh editor, or just Google up an image
divide non-triangles (such as the square at the base of the pyramid) into triangles
assign consecutive numbers to the vertices and write them next to each vertex
start indexing from the front-facing, visible triangles, in whatever order your renderer expects (for example, clockwise)
then index the back-facing triangles in the opposite order (counterclockwise in our example) -- or -- mentally rotate the mesh 180 degrees (or mentally walk around it), "look at its back side", and index those triangles in the straight order (clockwise).
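Applied to the pyramid in the question, a sketch of the resulting index buffer, assuming a triangle list with D3D's default state (clockwise front faces, back-face culling) and the vertex order from the question's buffer (0-3 around the base, 4 at the apex):

// 0:(+1,0,+1)  1:(+1,0,-1)  2:(-1,0,-1)  3:(-1,0,+1)  4:(0,1.5,0) apex
UINT indices[] = {
    // four sides: apex first, then two base vertices, wound clockwise
    // as seen from outside the pyramid
    4, 0, 1,   // +x side
    4, 1, 2,   // -z side
    4, 2, 3,   // -x side
    4, 3, 0,   // +z side
    // base: viewed from below (outside), so the winding flips
    // relative to a top-down view
    0, 2, 1,
    0, 3, 2,
};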
As alternatives:
get a 3D mesh editor, draw your model, save it in any text format (.obj, for example), then open the file with a text editor and read off the index data
Google for it
Of course, your renderer must be properly set up to draw the model the right way, especially the input assembler's primitive type (such as triangle list or triangle strip) and the culling mode.
Hope it helps!
I am trying to create the effect of the water surface thickness with a vertex-fragment shader.
I am in a 3D game environment, but it's a scrolling view, so effectively a "2D" view.
Here is a good tutorial on creating such an effect in true 2D using a fragment shader.
But I don't think it can be used in my case.
For the moment I have only a plane where I apply refraction.
And I want to apply the water-thickness effect, but I don't know how to do it.
I am not trying to create water deformation/displacement with the vertices for the moment; that is not the point.
I don't know if it's possible with a simple quad, or whether I should use an object like this.
Here are some examples.
Thanks a lot!
[EDIT] Added the Rayman water effect as a better reference for the intended effect.
I am trying to create a 2D water effect with a vertex-fragment shader on a simple quad.
Your first misconception is thinking in 2D. What you see in the picture on the right is the interaction of light with a 2D surface in 3D space. A simple quad will not suffice.
For water you need some surface displacement. You can either simulate this by solving a wave equation, or use a Fourier-transform-based approach; I suggest the second. Next, you render your scene "regular" for everything above the water, then "murky and refracted" for everything below the water line. Render both to textures.
Then you render the water surface. When looking at the Air→Water interface (i.e. from above), use a Fresnel reflection term, i.e. mix between the top reflection and the see-through refraction depending on the angle of incidence, and for too small an angle emulate Brewster reflection. For the Water→Air interface (i.e. from below) you do something similar, only you don't need the Fresnel term, just the Brewster term, to account for total internal reflection.
Since you do all mixing in the fragment shader, you don't need blending, hence no need to sort drawing operations for the water depth.
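A small sketch of that Fresnel mix using Schlick's approximation, written as plain C++ for clarity (the same math would go in the fragment shader; the indices of refraction and the final blend are illustrative assumptions):

#include <algorithm>
#include <cmath>

// Schlick's approximation of the Fresnel reflectance at an interface
// between media with refractive indices n1 (incident side) and n2.
// cosTheta = dot(surfaceNormal, viewDir).
float fresnelSchlick(float cosTheta, float n1, float n2)
{
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    cosTheta = std::clamp(cosTheta, 0.0f, 1.0f);
    return r0 + (1.0f - r0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Air -> Water from above (n1 = 1.0, n2 = 1.33): the final color would be
//   color = mix(refractionTexture, reflectionTexture, f)
// with f = fresnelSchlick(cosTheta, 1.0f, 1.33f); grazing angles
// (cosTheta near 0) come out as almost pure reflection.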
Yes, rendering water is not trivial.
I have a struct QUAD that stores 4 pointers to VECTOR3Ds (each containing 3 floats) so that I can draw the quad mesh.
From what I understand, whenever I draw a mesh I also need normals to light/shade it properly, and that's relatively easy when the mesh lies in a plane, using one normal per face.
When I have a 2-by-2 arrangement of quads lying in the XZ plane and raise its centre (0,0,0) to a certain point, say (0,4,0), it starts to form a real 3D shape, and then I need to calculate the normals again. I'm having a hard time understanding how, and what, to calculate for the normals. As expected, the 3D shape is shaded as if it were still a flat mesh, so it does not look like the real shape. One explanation says I need to calculate normals per vertex instead of per face.
Does that mean I need to calculate normals for all corners of the mesh? And once I have the normals, what do I do with them? I'm still using the old glBegin/glEnd methods, but now I feel like I need to use the DrawArrays method. I'm deeply confused and I'm pretty sure I'm not making much sense, but I'd much appreciate your help.
If you need a flat-looking surface, your normals are simply the normals of each quad's plane. If you need a "soft-looking" surface, you need to blend the normals (read this and watch this cool, simple video) - that will add a sort of gradient.
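A minimal C++ sketch of that blending, assuming a QUAD that holds 4 vertex pointers as described in the question (the layout, helper functions, and counterclockwise corner order are assumptions):

#include <cmath>
#include <vector>

struct VECTOR3D { float x, y, z; };
struct QUAD { VECTOR3D* v[4]; };   // assumed layout: 4 corner pointers, CCW

VECTOR3D sub(const VECTOR3D& a, const VECTOR3D& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
VECTOR3D cross(const VECTOR3D& a, const VECTOR3D& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
VECTOR3D normalize(const VECTOR3D& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Face normal of one quad, from two of its edges.
VECTOR3D faceNormal(const QUAD& q) {
    return normalize(cross(sub(*q.v[1], *q.v[0]), sub(*q.v[3], *q.v[0])));
}

// Per-vertex normal: sum (blend) the face normals of every quad sharing
// the vertex, then renormalize. With glBegin/glEnd you would then call
// glNormal3f(n.x, n.y, n.z) right before the matching glVertex3f.
VECTOR3D vertexNormal(const VECTOR3D* vertex, const std::vector<QUAD>& quads) {
    VECTOR3D n = { 0.0f, 0.0f, 0.0f };
    for (const QUAD& q : quads)
        for (const VECTOR3D* corner : q.v)
            if (corner == vertex) {           // this quad shares the vertex
                VECTOR3D f = faceNormal(q);
                n.x += f.x; n.y += f.y; n.z += f.z;
                break;
            }
    return normalize(n);
}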