I want to animate a model (for example a human walking) in OpenGL. I know there is skeletal animation (with tricky math), but what about this...
Create a model in Blender
Create a skeleton for that model in Blender
Now do a walking animation in Blender with that model and skeleton
Take some "keyframes" of that animation and export every keyframe as a single model (for example as an OBJ file)
Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
Use a VBO to draw that animated model in OpenGL (and come up with some tricky ideas for how to change the current "keyframe"/model in the VBO ... perhaps something with glMapBufferRange)
OK, I know this idea is only a rough outline, but is it worth looking into further?
What is a good approach for swapping the "keyframe"/models in the VBO?
I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
The method you are referring to, animating between static keyframes, was very popular in early 3D games (Quake, etc.) and is now often referred to as "blend shape" or "morph target" animation.
I would suggest implementing it slightly differently than you described. Instead of exporting a model for every possible frame of animation, export models only at keyframes and interpolate the vertex positions between them. This allows much smoother playback with significantly less memory usage.
There are various implementation options:
Create a dynamic/streaming VBO. Each frame, find the previous and next keyframe models, calculate the interpolated model between them, and upload it to the VBO (see the sketch after these options).
Create a static VBO containing the mesh data from all frames, plus an additional "next position" or "displacement" attribute at each vertex. Use the range arguments of glDrawArrays to select the current frame, and interpolate in the vertex shader between the position and the next position.
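A minimal sketch of the first option, assuming LWJGL-style GL bindings; the keyframe arrays, the scratch buffer, and the timing value are placeholders for illustration:

    import java.nio.FloatBuffer;
    import static org.lwjgl.opengl.GL15.*;

    // Linearly interpolate between the previous and next keyframe meshes
    // on the CPU, then upload the result into a streaming VBO.
    // 'prev' and 'next' are flat x,y,z arrays of equal length;
    // 't' is the normalized time (0..1) between the two keyframes.
    static void uploadInterpolatedFrame(int vbo, float[] prev, float[] next,
                                        float t, FloatBuffer scratch) {
        scratch.clear();
        for (int i = 0; i < prev.length; i++) {
            scratch.put(prev[i] + (next[i] - prev[i]) * t);
        }
        scratch.flip();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Orphan the old storage so the driver doesn't have to stall,
        // then upload the freshly interpolated frame.
        glBufferData(GL_ARRAY_BUFFER, (long) prev.length * 4, GL_STREAM_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0, scratch);
    }

The second option trades memory for CPU time: the blend factor becomes a uniform, the mix happens per vertex in the shader, and nothing is re-uploaded at all.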
You can actually set up Blender to export every frame of a scene as an OBJ. A custom tool could then compile these files into a nice animation format.
Read More:
http://en.wikipedia.org/wiki/Morph_target_animation
http://en.wikipedia.org/wiki/MD2_(file_format)
http://tfc.duke.free.fr/coding/md2-specs-en.html
I need help with rendering a .vox model in OpenGL.
The .VOX file format is described here.
Here is an example VOX file reader.
And here is where I run into the problem: how would I go about rendering a .vox model in OpenGL? I know how to render standard .obj models with textures using the Phong reflection model, but how do I handle voxel data? What kind of data should I pass to the shaders? Should I parse the data somehow, to get the index of each individual voxel? How should I create vertices based on the voxel data (should I even do that)? Should I pass all the chunks, or is there a simple way to filter out those that won't be visible?
I tried searching for information on this topic, but came up empty. What I am trying to accomplish is something like MagicaVoxel Viewer, but much simpler, without all those customizable options and with only a single light source.
I'm not trying to look for a ready solution, but if anyone could even point me in the right direction, I would be very grateful.
After some more searching I decided to render the cubes in one of two ways:
1) Based on the voxel data, I will generate vertices and feed them to the pipeline.
2) Using the geometry shader, I'll emit vertices based on the indices of the voxels I feed to the pipeline, storing the entire model as a 3D texture.
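As a rough sketch of the first approach (the voxel array layout and the addQuad helper, which would append a face's six vertices, are hypothetical): walk the grid and emit a face only where a filled cell borders an empty one, so fully hidden faces never reach the pipeline:

    import java.util.List;

    // voxels[x][y][z] != 0 means the cell is filled.
    static boolean filled(byte[][][] v, int x, int y, int z) {
        if (x < 0 || y < 0 || z < 0
                || x >= v.length || y >= v[0].length || z >= v[0][0].length) {
            return false; // outside the chunk counts as empty
        }
        return v[x][y][z] != 0;
    }

    // Emit a quad only for faces that border an empty cell.
    static void emitVisibleFaces(byte[][][] v, List<Float> out) {
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (int x = 0; x < v.length; x++)
            for (int y = 0; y < v[0].length; y++)
                for (int z = 0; z < v[0][0].length; z++) {
                    if (!filled(v, x, y, z)) continue;
                    for (int[] d : dirs)
                        if (!filled(v, x + d[0], y + d[1], z + d[2]))
                            addQuad(out, x, y, z, d); // hypothetical helper
                }
    }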
If you click in the model viewer of a 3D modeler (such as Blender or 3ds Max), it will select the vertex that the mouse was over or near. How does it know which one to pick efficiently? How can it implement a lasso or circle tool efficiently? Does it use screen-space coordinates for the vertices, or simple ray tracing?
I am trying to make a simple 3D modeling tool (for fun) and I can't imagine how a circle tool would work. How can it pick the nearest vertex to the mouse coordinates without a sort?
There are a lot of ways to approach this problem.
If you have only a few thousand vertices, it can be very fast to just iterate over all of them.
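For example, a minimal brute-force sketch (the project helper, standing in for the model-view-projection and viewport transform, is hypothetical); a single pass tracking the running minimum needs no sort:

    // Project each vertex to screen space and keep the closest one
    // to the mouse; squared distances avoid a sqrt per vertex.
    static int nearestVertex(float[][] verts, float mouseX, float mouseY) {
        int best = -1;
        float bestD2 = Float.MAX_VALUE;
        for (int i = 0; i < verts.length; i++) {
            float[] p = project(verts[i]); // hypothetical world -> screen helper
            float dx = p[0] - mouseX, dy = p[1] - mouseY;
            float d2 = dx * dx + dy * dy;
            if (d2 < bestD2) { bestD2 = d2; best = i; }
        }
        return best;
    }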
If you are just clicking on a vertex (or other object) in one of the views, you can render the scene into another buffer using a different "color" for each object in the scene. To figure out which object you clicked on, you just read the color back at that pixel.
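A sketch of the read-back step, assuming LWJGL-style bindings and that the picking pass encoded each object's ID in the red channel:

    import java.nio.ByteBuffer;
    import org.lwjgl.BufferUtils;
    import static org.lwjgl.opengl.GL11.*;

    // Read the single pixel under the mouse after the picking pass.
    // Note that OpenGL's y axis starts at the bottom of the window.
    static int pickObject(int mouseX, int mouseYFromBottom) {
        ByteBuffer pixel = BufferUtils.createByteBuffer(4);
        glReadPixels(mouseX, mouseYFromBottom, 1, 1,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel);
        return pixel.get(0) & 0xFF; // red channel carries the object ID here
    }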
In other circumstances, you can store the vertex data in a spatial index such as an octree.
Remember: Blender is open-source, so you can just read the source code if you want to find out how Blender does it.
I'd like to do a cartoony 3D character, where the facial features are flat-drawn and animated in 2D. Sort of like the Bubble Guppies characters.
I'm struggling with finding a good method to do it. I'm using Libgdx, but I think the potential methodologies could apply to any game engine.
Here are ideas I thought of, but each has drawbacks. Is there a way this is commonly done? I was just playing a low-budget Wii game with my kids (a Nickelodeon dancing game) that uses this type of animation for the faces.
Ideas:
UV animation - Is there a way to set up a game model (FBX format) so that certain UVs are stored in various skins? Then the UVs could jump around to various places in a sprite map.
Projected face - This idea is convoluted. Use a projection of a texture onto the model, with a vertex shader uniform that shifts the UVs of the projected texture around. So basically, you'd need a projection matrix that's set up to move the face projection around with the model. But you'd need enough padding around the face-frame sprites to keep the rest of the model clear of other parts of the sprite map, and this results in a complicated fragment shader that would not be great for mobile.
Move flat 3D decal with model - Separately show a 3D decal that's lined up with the model and batched as a separate mesh in the game. The decal could just be a quad where you change the UV attributes of the vertices on each frame of animation. However, this method won't wrap around the curvature of a face. Maybe it could be broken down into separate decals for each eye and the mouth, but it still wouldn't look great, and it would require creating a separate file for each model to define where the decals go.
Separate bone for each frame of animation - Model a duplicate face in the mesh for every frame of animation, and give each a unique bone. Animate the face by toggling bone scales between zero and one. This idea quickly breaks down if there are more than a few frames of animation.
Update part of skin each frame - Copy the skin into an FBO. Draw the latest frame of animation into the part of the FBO color texture that contains the face. Downsides to this method are that you'd need a separate copy of the texture in memory for every instance of the model, and the FBO would have to either do a buffer restore every frame (costly) or you'd have to redraw the entire skin into the FBO each frame (also costly).
I have other ideas that are considerably more difficult than these. It feels like there must be an easier way.
Edit:
One more idea... Uniform UV offset and vertex colors - This method would use vertex colors, since they are easily supported in all game engines and modeling packages but in many cases go unused. In the texture, create a strip of the frames of animation. Set up the face UVs for the first frame. Color all vertices with alpha 0 except the face vertices, which get alpha 1. Then pass a UV face-offset uniform to the vertex shader, and multiply it by a step function of the vertex color before adding it to the UVs. This avoids the downsides of all the above methods: everything can be wrapped into one texture shared by all instances of the model, and there would be no two-pass pixels on the model except possibly where the face is. The downside here is a heftier model (four extra attributes per vertex, although perhaps the color could be baked down to a single byte).
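A minimal sketch of that vertex shader, written as a Java string the way LibGDX's ShaderProgram consumes it; the attribute and uniform names are assumptions:

    // step() on the vertex color's alpha gates the uniform offset,
    // so only face vertices (alpha == 1) shift their UVs.
    static final String FACE_VERT =
        "attribute vec3 a_position;\n" +
        "attribute vec2 a_texCoord0;\n" +
        "attribute vec4 a_color;\n" +
        "uniform mat4 u_projViewTrans;\n" +
        "uniform vec2 u_faceUvOffset; // current frame's offset in the strip\n" +
        "varying vec2 v_uv;\n" +
        "void main() {\n" +
        "    v_uv = a_texCoord0 + step(0.5, a_color.a) * u_faceUvOffset;\n" +
        "    gl_Position = u_projViewTrans * vec4(a_position, 1.0);\n" +
        "}\n";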
Your shader could receive two textures, one for the body and one for the face, with the face texture transparent so you can overlay it on top of the body. Then you just send a different face texture based on the animation.
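As a sketch, the fragment shader for that overlay could mix the two samples by the face texture's alpha (the sampler names here are assumptions):

    static final String OVERLAY_FRAG =
        "#ifdef GL_ES\n" +
        "precision mediump float;\n" +
        "#endif\n" +
        "varying vec2 v_uv;\n" +
        "uniform sampler2D u_bodyTexture;\n" +
        "uniform sampler2D u_faceTexture; // swapped per animation frame\n" +
        "void main() {\n" +
        "    vec4 body = texture2D(u_bodyTexture, v_uv);\n" +
        "    vec4 face = texture2D(u_faceTexture, v_uv);\n" +
        "    // face is transparent except where drawn\n" +
        "    gl_FragColor = mix(body, face, face.a);\n" +
        "}\n";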
I am struggling with the same problem while applying a 2D animation to a background billboard in my 3D scene.
I believe that using Decals is the simplest solution, and implementing the animation is as easy as updating the decal's TextureRegion according to an Animation object:
    TextureRegion frame = animation.getKeyFrame(currentFrameTime, true);
    decal.setTextureRegion(frame);
I guess the real problem in your case is positioning the decal inside the scene.
One solution could be to use your 3D modeling software to model a "phantom" mesh that stores the position of the decal.
The "phantom" mesh would not be rendered with all the other 3D elements; instead it would be used to determine the positions of the decal's vertices. The only thing you'd need to do is copy the "phantom" vertex positions over to the decal.
I haven't gotten around to implementing this solution yet, but in theory it should be relatively easy.
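Untested, but the copy itself could look something like this (assuming the phantom mesh stores position as the first three floats of each vertex; the names are illustrative):

    // Pull the phantom mesh's vertices and place the decal at the
    // first vertex position.
    float[] verts = new float[phantomMesh.getNumVertices()
                              * phantomMesh.getVertexSize() / 4];
    phantomMesh.getVertices(verts);
    decal.setPosition(verts[0], verts[1], verts[2]);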
I hope this idea helps, and I'd appreciate you sharing other solutions/code for this problem if you find any.
I have been working on my low-level OpenGL understanding, and I've finally come to how to animate 3D models. Nowhere I look explains how to do skeletal animation; most resources use some kind of 3D engine and just say "load the skeleton" or "apply the animation", but not how to load a skeleton or how to actually move the vertices.
I'm assuming each bone has a 4x4 translation/rotation/scale matrix for the vertices it's attached to, so that when the bone moves, the attached vertices move by the same amount.
For skeletal animation, I was guessing that you would pass the bone(s) to the shader, so that in the vertex shader I can move the current vertex before it goes on to the fragment shader. If I have a keyframed animation, I send the current bone and the new bone to the shader, along with the current time between frames, and interpolate the points between bones based on how much time there is between keyframes.
Is this the correct way to animate a mesh, or is there a better way?
Well, the method of animation depends on the format and the data that's written in it. Some formats supply the data as vectors, some use matrices. I have to admit I came to this site to ask a similar question, but I specified the format (I was using *.x files; you can check that topic) and I got an answer.
Your idea of the subject is correct. If you want a sample implementation, you can find one on the OpenGL wiki.
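For concreteness, here is a minimal linear-blend-skinning vertex shader sketch as a Java string; it assumes the keyframe interpolation already happened on the CPU, so u_bones holds the blended matrices for the current time, and all names are illustrative:

    // Up to four bone influences per vertex, weighted and summed.
    static final String SKIN_VERT =
        "attribute vec3 a_position;\n" +
        "attribute vec4 a_boneWeights;\n" +
        "attribute vec4 a_boneIndices;\n" +
        "uniform mat4 u_bones[64];\n" +
        "uniform mat4 u_projViewTrans;\n" +
        "void main() {\n" +
        "    mat4 skin =\n" +
        "        u_bones[int(a_boneIndices.x)] * a_boneWeights.x +\n" +
        "        u_bones[int(a_boneIndices.y)] * a_boneWeights.y +\n" +
        "        u_bones[int(a_boneIndices.z)] * a_boneWeights.z +\n" +
        "        u_bones[int(a_boneIndices.w)] * a_boneWeights.w;\n" +
        "    gl_Position = u_projViewTrans * skin * vec4(a_position, 1.0);\n" +
        "}\n";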
I have a model in Blender. I'd like to:
Connect a few different textures into one and save it as a bitmap
Make UV mapping for these connected textures
I need to solve this problem for textured models in OpenGL. I have a data structure that lets me bind one texture to one model, so I'd like to have one texture per model. I'm aware that I could use a GL_TEXTURE_xD_ARRAY texture, but I don't want to complicate my project. I know how to do simple UV mapping in Blender.
My questions:
Can I do phases 1 and 2 exclusively in Blender?
Is Blender's Bake technique what I'm searching for?
Are there tutorials showing how to do it? (for this one specific problem)
Maybe somebody can advise me on another Blender technique (or an OpenGL solution)?
Connect a few different textures into one and save it as a bitmap
Make UV mapping for these connected textures
You mean generating a texture atlas?
Can I do phases 1 and 2 exclusively in Blender?
No. But it would surely be a well-received add-on.
Is Blender's Bake technique what I'm searching for?
No. Blender's Bake generates texture contents using the rendering process. For example, you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame in a flythrough, the texture is used as the source for the illumination terms (it acts like a cache). Another application is generating textures for a game engine from Blender's procedural materials.
Maybe somebody can advise me on another Blender technique (or an OpenGL solution)?
I think a texture array would really be the best solution, as it also won't cause problems with wrapped/repeated textures.
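For reference, a minimal allocation sketch for such an array texture, assuming LWJGL-style bindings; the layer size and layerCount are placeholders:

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL12.glTexImage3D;
    import static org.lwjgl.opengl.GL30.GL_TEXTURE_2D_ARRAY;

    // One 256x256 RGBA layer per source texture; each layer can
    // wrap/repeat independently, which an atlas cannot do.
    int tex = glGenTextures();
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, 256, 256, layerCount,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Each layer would then be filled with glTexSubImage3D, and the shader samples it through a sampler2DArray.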
Another possibility is to use projection painting. An object in Blender can have multiple UV maps; if importing the model doesn't create each UV map, you may need to align each one by hand. Then you create a new UV map that lays the entire model out onto one image.
In Texture Paint mode, you can use projection painting to use the material from one UV map as the paint brush for painting onto the new image.