I have some model in Blender. I'd like to:
Connect a few different textures into one and save it as bitmap
Make UV mapping for these connected textures
I need to solve this problem for textured models in OpenGL. I have a data structure that lets me bind one texture to one model, so I'd like to have one texture per model. I'm aware of the fact that I could use a GL_TEXTURE_xD_ARRAY texture, but I don't want to complicate my project. I know how to do simple UV mapping in Blender.
My questions:
Can I do phases 1 and 2 exclusively in Blender?
Is the Blender Bake technique what I'm searching for?
Are there any tutorials showing how to do it? (for this one specific problem)
Maybe somebody can advise me on another Blender technique (or OpenGL solution).
Connect a few different textures into one and save it as bitmap
Make UV mapping for these connected textures
You mean generating a texture atlas?
Can I do phases 1 and 2 exclusively in Blender?
No. But it would surely be a well-received add-on.
Is the Blender Bake technique what I'm searching for?
No. Blender Bake generates texture contents using the rendering process. For example, you might have a texture on a static object into which you bake global illumination; then, instead of recalculating GI for each and every frame in a flythrough, the texture is used as the source for the illumination terms (it acts like a cache). Another application is generating textures for a game engine from Blender's procedural materials.
Maybe somebody can advise me on another Blender technique (or OpenGL solution)
I think a texture array would really be the best solution, as it also won't cause problems with wrapped/repeated textures.
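For reference, here is a minimal sketch of how such an array texture could be built, assuming a GL 3.0+ context and that all layers share the same dimensions; the pixels parameter and its RGBA8 layout are assumptions for illustration:

    #include <GL/glew.h>

    // Build a GL_TEXTURE_2D_ARRAY with one layer per source image.
    GLuint createTextureArray(int width, int height, int layerCount,
                              const unsigned char* const* pixels)  // pixels[i]: RGBA8 data
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);

        // Allocate storage for every layer at once.
        glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, layerCount,
                     0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Upload each image into its own layer.
        for (int layer = 0; layer < layerCount; ++layer)
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                            width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels[layer]);

        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Repeat/wrap works per layer, which is the advantage mentioned above.
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_WRAP_T, GL_REPEAT);
        return tex;
    }

In the shader you would then sample it through a sampler2DArray, with the layer index as the third texture coordinate.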
Another possibility is to use projection painting. An object in Blender can have multiple UV maps; if importing it doesn't create each UV map, you may need to align each one by hand. Then you create a new UV map that lays the entire model out onto one image.
In Texture Paint mode you can use projection painting to use the material from one UV map as the paint brush for painting onto the new image.
So I have a simple OpenGL viewer where you can draw any number of boxes that the user wants. I've also added the ability to take a PNG or JPG image and texture map it onto a primitive.
I want to be able to have the user specify any of the cubes on the screen and apply different textures to them. I'm fairly new to OpenGL. Right now I can easily map an image to a single primitive, but I'm wondering what's the best way to map two separate images (which may be different sizes) to two separate primitives.
I've done a fair amount of reading up on 2D texture arrays, and it would seem this is the way I want to go, since I can store multiple textures in one texture unit, but I'm not sure if this is possible considering what I mentioned above. If the images have different dimensions then I don't think I can do this (at least I don't think so). I know I can just store each image in a separate texture unit, but doing it with an array seemed like the cleaner way.
What would be the best way to do this? Can you in fact store different-sized images in a 2D texture array? And if so, how? Or am I better off just storing them in separate texture units?
Texture arrays are mainly meant for when you want to draw a single primitive (or a whole mesh) with the shader able to select between images without exhausting the number of available texture sampling units. You can use them in the way you thought, but I doubt it will benefit you. Another approach (which is similar to texture arrays) is using a texture atlas, i.e. creating a patchwork of images that constitutes a single texture and using appropriate texture coordinates to select the subimage.
In your case, I suggest simply loading each picture into a separate texture and binding the appropriate texture before drawing the cube.
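A rough sketch of that suggestion, assuming an existing OpenGL context; Cube, cubes and drawCube() are placeholders for whatever your viewer already has:

    #include <GL/glew.h>

    // One GL texture per picture; the images may all have different sizes.
    GLuint createTexture(int width, int height, const unsigned char* rgbaPixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }

    // Per frame: bind this cube's texture right before drawing it.
    // for (const Cube& cube : cubes) {
    //     glBindTexture(GL_TEXTURE_2D, cube.textureId);
    //     drawCube(cube);
    // }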
Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And 2. can this be done solely in shaders (e.g. the geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques, with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction. This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique as it's easy to implement, can be dynamic and fairly cheap, and is easy to integrate into an existing engine (a rough setup sketch is at the end of this answer).
Screen-space ray-marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for that, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
There are probably many other ways to render mirrors, but these are the three main ways (at least as far as I know) of doing reflections.
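To make the cubemap option a bit more concrete, here is a rough setup sketch. It assumes the six face images have already been rendered or loaded (facePixels[0..5], each size x size RGBA8); the sampling shown in the trailing comment is ordinary GLSL:

    #include <GL/glew.h>

    GLuint createReflectionCubemap(int size, const unsigned char* const* facePixels)
    {
        GLuint cubemap;
        glGenTextures(1, &cubemap);
        glBindTexture(GL_TEXTURE_CUBE_MAP, cubemap);
        for (int face = 0; face < 6; ++face) {
            // The six face targets are consecutive enums: +X, -X, +Y, -Y, +Z, -Z.
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                         size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, facePixels[face]);
        }
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        return cubemap;
    }

    // The mirror's fragment shader then samples along the reflection vector, e.g.:
    //   uniform samplerCube envMap;
    //   vec3 r = reflect(-viewDir, normal);   // viewDir points surface -> camera
    //   vec4 reflected = texture(envMap, r);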
I'm learning OpenGL on the Linux platform. Recently, I tried to use text created by glutBitmapCharacter() as the texture of some quadric objects provided by GLU or GLUT. However, glutBitmapCharacter() does not return a pointer, so I can't feed it to glTexImage2D(). I googled it for quite a while, but all I found were topics related to the Android SDK, which I have no experience with.
All I can think of is to render the text and read it from the buffer using glReadPixels(), then save it to a file. Next, read the pixels back from the file and refer to them through a pointer. Finally, draw the 3D objects with the text as the texture (i.e. feed the pointer to glTexImage2D()).
However, that's kind of silly. What I want to ask is: is there some alternative way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL. GLUT does not provide any tools for that. One possible option would be to implement your own text rendering, possibly loading glyphs using FreeType, then creating a texture with the glyphs and applying that texture to the polygons. Freetype-GL is a tiny helper library that would help a lot if you were to do that.
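A minimal sketch of the FreeType route (not Freetype-GL itself): rasterize one glyph and upload it as a texture. It assumes a legacy GLUT-style context; a core profile would use GL_RED instead of GL_LUMINANCE, and "font.ttf" is a placeholder path:

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include <GL/gl.h>

    // Rasterize a single character with FreeType and copy it into a GL texture.
    GLuint glyphToTexture(char c)
    {
        FT_Library ft;
        FT_Face face;
        if (FT_Init_FreeType(&ft) || FT_New_Face(ft, "font.ttf", 0, &face))
            return 0;                                  // init or font loading failed
        FT_Set_Pixel_Sizes(face, 0, 64);               // glyphs 64 pixels tall
        FT_Load_Char(face, c, FT_LOAD_RENDER);         // render into face->glyph->bitmap

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);         // glyph rows are tightly packed
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                     face->glyph->bitmap.width, face->glyph->bitmap.rows, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        FT_Done_Face(face);
        FT_Done_FreeType(ft);
        return tex;                                    // apply to the quadric as usual
    }

A full string would normally be packed into a single atlas texture rather than one texture per glyph, which is essentially what Freetype-GL manages for you.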
Another option would be to again load the text glyphs into a texture and then apply them as decals over the geometry. That way you could still simulate 2D text drawn on a flat surface (the decal) and then apply that on top of a 3D object.
I want to animate a model (for example a human walking) in OpenGL. I know there is stuff like skeletal animation (with tricky math), but what about this...
Create a model in Blender
Create a skeleton for that model in Blender
Now do a walking animation in Blender with that model and skeleton
Take some "keyFrames" of that animation and export every "keyFrame" as a single model
(for example as obj file)
Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
Use a VBO to draw that animated model in OpenGL (and get some tricky ideas for how to change the current "keyframe"/model in the VBO ... perhaps something with glMapBufferRange)
Ok, I know this idea is only a little script, but is it worth looking into further?
What is a good concept to change the "keyFrame"/models in the VBO?
I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
The method you are referring to of animating between static keyframes was very popular in early 3D games (quake, etc) and is now often referred to as "blend shape" or "morph target" animation.
I would suggest implementing it slightly differently than you described. Instead of exporting a model for every possible frame of animation, export models only at "keyframes" and interpolate the vertex positions. This will allow much smoother playback with significantly less memory usage.
There are various implementation options:
Create a dynamic/streaming VBO. Each frame, find the previous and next keyframe models, calculate the interpolated model between them, and upload it to the VBO.
Create a static VBO containing the mesh data from all frames and an additional "next position" or "displacement" attribute at each vertex. Use the range options on glDrawArrays to select the current frame, and interpolate in the vertex shader between the position and the next position (see the sketch after this list).
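Along the lines of the second option, a rough sketch: it points the two attributes at consecutive keyframes via attribute-pointer offsets (which serves the same purpose as the glDrawArrays range) and blends them in the vertex shader. Names and layout are illustrative, and a GL 3.3 context with a bound VAO is assumed:

    #include <GL/glew.h>
    #include <cstdint>

    // Vertex shader: blend each vertex between the current and the next keyframe.
    const char* kMorphVS = R"(
        #version 330 core
        layout(location = 0) in vec3 position;        // vertex in keyframe k
        layout(location = 1) in vec3 nextPosition;    // same vertex in keyframe k+1
        uniform float t;                               // 0..1 between the two keyframes
        uniform mat4 mvp;
        void main() {
            gl_Position = mvp * vec4(mix(position, nextPosition, t), 1.0);
        }
    )";

    // The static VBO holds all keyframes back to back: frame 0's vertices, then
    // frame 1's, and so on. Point attribute 0 at frame k and attribute 1 at k+1.
    void bindKeyframePair(GLuint vbo, int frame, int vertexCount)
    {
        const std::uintptr_t frameBytes =
            std::uintptr_t(vertexCount) * 3 * sizeof(float);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0,
                              (const void*)(frame * frameBytes));
        glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0,
                              (const void*)((frame + 1) * frameBytes));
        glEnableVertexAttribArray(0);
        glEnableVertexAttribArray(1);
        // Each frame: update the uniform t, then glDrawArrays(GL_TRIANGLES, 0, vertexCount).
    }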
You can actually set up Blender to export every frame of a scene as an OBJ. A custom tool could then compile these files into a nice animation format.
Read More:
http://en.wikipedia.org/wiki/Morph_target_animation
http://en.wikipedia.org/wiki/MD2_(file_format)
http://tfc.duke.free.fr/coding/md2-specs-en.html
I would like to know if there is a way to generate a single static image of a 3D object (one object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles of the object were used to generate each of the pixels forming the rendered image. I've cited OpenGL and DirectX because they are widely used graphics APIs, but if somebody knows other ways of achieving this that work at high speed, I would be interested in those answers as well. I currently use my own software implementation of the rendering pipeline to keep track of the relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin rendering techniques) that graphics cards offer.
Thanks very much for your help
Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms, this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multi-sampling, then your multi-sample buffer for that render target becomes a list of the primitives that contribute to each pixel.
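A minimal GLSL sketch of that idea, written here as a C++ string constant; it assumes a GL 3.2+ context with a second, integer-format colour attachment (e.g. GL_R32I) bound as GL_COLOR_ATTACHMENT1:

    // Fragment shader for the MRT approach: attachment 0 gets the normal image,
    // attachment 1 (an integer target) gets the id of the covering triangle.
    const char* kPrimitiveIdFS = R"(
        #version 330 core
        layout(location = 0) out vec4 fragColor;    // the rendered picture
        layout(location = 1) out int  primitiveId;  // triangle index for this pixel
        void main() {
            fragColor   = vec4(1.0);                // replace with your real shading
            primitiveId = gl_PrimitiveID;           // index of the current triangle
        }
    )";
    // Read the id buffer back with glReadBuffer(GL_COLOR_ATTACHMENT1) and
    // glReadPixels(..., GL_RED_INTEGER, GL_INT, ...).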
Draw each triangle in a different colour. R8G8B8 offers you about 16.7 million possible colours, so you can index that number of triangles with it. You don't have to draw to an on-screen buffer: you could render the picture as usual and render to a second target, indexing the triangles in an off-screen buffer.
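For completeness, a tiny sketch of the packing and unpacking that the colour-id approach relies on (plain C++, no GL calls; names are illustrative). Note that multisampling and blending must be disabled for this pass, otherwise the averaged colours no longer decode to valid indices:

    #include <cstdint>

    // Triangle index -> RGB8 colour (up to 2^24, about 16.7 million triangles).
    void indexToRGB(std::uint32_t index, std::uint8_t rgb[3])
    {
        rgb[0] = (index >> 16) & 0xFF;   // R: high byte
        rgb[1] = (index >> 8)  & 0xFF;   // G: middle byte
        rgb[2] =  index        & 0xFF;   // B: low byte
    }

    // Colour read back from the off-screen buffer -> triangle index.
    std::uint32_t rgbToIndex(const std::uint8_t rgb[3])
    {
        return (std::uint32_t(rgb[0]) << 16) |
               (std::uint32_t(rgb[1]) << 8)  |
                std::uint32_t(rgb[2]);
    }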