cocos2d-x repeating textures in 3d - c++

I would very much like to create a repeating texture on a 3D object.
I tried exporting from Maya to .obj. The material file (.mtl) looks like this:
newmtl lambert10SG
illum 4
Kd 0.00 0.00 0.00
Ka 0.00 0.00 0.00
Tf 1.00 1.00 1.00
map_Kd -s 0.1 0.1 grass.jpg
Ni 1.00
the line "map_Kd -s 0.1 0.1 grass.jpg" should indicate that the texture is repeating. However this doesn't work at all. The texture doesn't show until I remove "-s 0.1 0.1". Then it gets stretched.
I also tried exporting to .fbx and then converting to .c3b. Same result: the texture gets stretched.
Then I tried creating my own texture. I know that in OpenGL I would have to set texture coordinates to >1 to make the texture repeat itself. This seems to be equivalent to maxS and maxT on the texture(?).
This is my texture setup:
// Load the image and create a texture from it.
cocos2d::Image *textImage = new (std::nothrow) cocos2d::Image();
textImage->initWithImageFile("grass.jpg");
cocos2d::Texture2D *texture = new (std::nothrow) cocos2d::Texture2D();
texture->initWithImage(textImage);

// Ask for repeat wrapping in both directions.
cocos2d::Texture2D::TexParams texParam;
texParam.wrapS = GL_REPEAT;
texParam.wrapT = GL_REPEAT;
texParam.minFilter = GL_LINEAR;
texParam.magFilter = GL_LINEAR;
texture->setTexParameters(texParam);

// Supposedly the equivalent of texture coordinates > 1.
texture->setMaxS(10.0f);
texture->setMaxT(10.0f);

sprite->getMesh()->setTexture(texture);
Texture is still stretching.
From searching the internet it seems I would be able to set texture coordinates on a 2D sprite in Cocos with the setTextureRect function. However, this doesn't seem to exist for Sprite3D.
Any ideas will be very much appreciated!

UPDATE:
I managed to get a texture tiling by editing the .obj file manually.
Obviously the CCObjLoader doesn't understand the line in the material file (.mtl):
map_Kd -s 0.1 0.1 grass.jpg
Removing "-s 0.1 0.1" makes the loader recognize the texture (still stretched though).
After this I had to manually change all vt coordinates in the .obj file by multiplying them by 10. The texture still didn't repeat until I changed the texture parameters to GL_REPEAT instead of GL_CLAMP_TO_EDGE.
cocos2d::Texture2D::TexParams texParam;
texParam.wrapS = GL_REPEAT;
texParam.wrapT = GL_REPEAT;
texParam.minFilter = GL_LINEAR;
texParam.magFilter = GL_LINEAR;
sprite->getMesh()->getTexture()->setTexParameters(texParam);
This is not a solution to my problem as such, as I would need the app to recognize when a texture should repeat automatically.
I haven't yet deciphered where texture coordinates are kept in the cocos2d structures, so I haven't been able to change them after the sprite has been loaded. A solution could be to fix the objLoader, but that would not survive cocos updates well. Or maybe I could make a small .obj file fixer (a sketch follows below). Neither seems like an ideal solution...
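For what it's worth, a minimal sketch of such an .obj fixer, assuming the usual "vt u v" line layout; the file names and the factor of 10 are only illustrative:

#include <fstream>
#include <sstream>
#include <string>

// Multiplies every "vt" texture coordinate in an .obj file by `scale`,
// emulating the "-s 0.1 0.1" option that CCObjLoader ignores (0.1 -> x10).
int main()
{
    std::ifstream in("grass.obj");        // hypothetical input file
    std::ofstream out("grass_tiled.obj"); // hypothetical output file
    const float scale = 10.0f;            // 1 / 0.1 from the .mtl "-s" option

    std::string line;
    while (std::getline(in, line)) {
        if (line.rfind("vt ", 0) == 0) {  // texture coordinate line
            std::istringstream iss(line.substr(3));
            float u = 0.0f, v = 0.0f;
            iss >> u >> v;
            out << "vt " << u * scale << " " << v * scale << "\n";
        } else {
            out << line << "\n";          // copy everything else verbatim
        }
    }
    return 0;
}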

Related

PovRay conversion

According to my recollection, once an object or a scene was described in PovRay, there was no output other than the generated rendering. In other words, once exported to *.pov, you were no longer able to convert it into another 3D file format.
Hence I was very surprised to learn about pov2mesh, which aims to generate a point cloud that, with the help of meshlab, may eventually be suitable for 3D printing.
As I have a number of scenes defined only as *.pov describing molecules (so, spheres and sticks) and colour-encoded molecular surfaces from computation, I wonder if there is a way to convert / rewrite such a scene into a format like vrml 2.0, preserving both shape and colour.
Performing the computation again and saving the result directly as vrml is not an option: besides the binary output understood by the software, the only choices for saving the results are *.png or *.pov.
Or is there a povray editor that is able to understand a *.pov produced by other software and offers to export the scene as *.vrml (or a different 3D file format)?
I don't think there is an editor that converts from .pov to .vrml, but both formats are text-based. Since your pov file is only made of spheres and cylinders, you could convert it by hand, or write a simple program to do it for you (see the sketch after the two snippets below). Here is a red sphere in Povray (http://www.povray.org/documentation/view/3.6.2/283/):
sphere {
  <0, 0, 0>, 1
  pigment {
    color rgb <1, 0, 0>
  }
}
I don't know much about vrml, but this should be the equivalent (found here: https://www.siggraph.org/special-projects/com97/vrmlexample1.html):
Shape {
  appearance Appearance {
    material Material {
      diffuseColor 1.0 0.0 0.0
      transparency 0.0
    }
  }
  geometry Sphere {
    radius 1.0
  }
}
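Building on that, a minimal sketch of such a converter for the sphere case, assuming spheres appear in exactly the simple form shown above; a real .pov parser would need to handle much more:

#include <cstdio>

// Emits a VRML 2.0 Shape equivalent to a PovRay sphere with a plain rgb
// pigment. The position goes into a Transform node, because VRML's Sphere
// geometry is always centered at the origin.
void emitSphere(float x, float y, float z, float radius,
                float r, float g, float b)
{
    std::printf("Transform {\n");
    std::printf("  translation %g %g %g\n", x, y, z);
    std::printf("  children Shape {\n");
    std::printf("    appearance Appearance {\n");
    std::printf("      material Material { diffuseColor %g %g %g }\n", r, g, b);
    std::printf("    }\n");
    std::printf("    geometry Sphere { radius %g }\n", radius);
    std::printf("  }\n");
    std::printf("}\n");
}

int main()
{
    emitSphere(0, 0, 0, 1, 1, 0, 0); // the red unit sphere from the example
    return 0;
}

Cylinders would get the same treatment with a VRML Cylinder node, plus a rotation to map PovRay's two end points onto VRML's y-aligned axis.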

LWJGL Animated Texture by coordinates is off

I'm trying to animate a texture in LWJGL using texture coordinates, by dividing 1.0f by the number of frames (glTexCoords range from 0 to 1), which gives the height of one frame in the image, like this:
float f = 1.0f / mat.getFrames();         // height of one frame in texture space
GL11.glBegin(GL11.GL_QUADS);              // 7 == GL11.GL_QUADS
GL11.glTexCoord2f(0.0F, f * curFrame);
GL11.glVertex2d(rX, rY);
GL11.glTexCoord2f(1.0F, f * curFrame);
GL11.glVertex2d(rX + this.size, rY);
GL11.glTexCoord2f(1.0F, (f * curFrame) + f);
GL11.glVertex2d(rX + this.size, rY + this.size);
GL11.glTexCoord2f(0.0F, (f * curFrame) + f);
GL11.glVertex2d(rX, rY + this.size);
GL11.glEnd();
Note this is mat.getFrames():
public int getFrames(){
return numOfFrames;
}
But when I use 1.0f/10 (the image is 64x640), the drawn region is off (see first screenshot). But if I use 16 as the number of frames, it seems to add some black frames (see second screenshot). I don't see what's wrong here: it loads the texture, divides 1 by the number of frames, uses the current frame number times that value for y1, and y1 plus that value for y2, which should theoretically work, but it doesn't.
It looks very much like your texture is not the size you think it is. You are probably using an image/texture loading library that pads sizes to powers of 2 during import.
This means that the loaded image has a size of 64x1024, where 64x640 contain your original image (10 frames of 64x64 each), and the remaining 64x384 are black. This explains all of the symptoms:
When you draw 1/10 of the image, you see more than a frame. Since 1/10 of 1024 is 102.4, but the frame height is 64, you're drawing 102.4 / 64 = 1.6 frames.
When you pretend to have 16 frames, you get whole frames, but some of them are black. Since 1/16 of 1024 is 64, this matches your actual frame height. But only 10 of the 16 frames are valid; the remaining 6 show parts of the black padding.
To fix this, you have a few options:
Check if the image/texture loading library has an option to prevent rounding up to powers of 2. Use it if there is.
Use a different library that does not have these archaic restrictions.
Actually use 1/16 as the multiplier for the texture coordinates, but draw only the number of frames (10) you actually have (see the sketch below).
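A minimal sketch of that last option in C++ (the calls map one-to-one to LWJGL's GL11.* functions); the 64 and 1024 are assumptions taken from the padding explanation above:

#include <GL/gl.h>

// Draws frame curFrame (0..9) of a 10-frame strip that occupies the top
// 64x640 of a texture padded to 64x1024.
void drawFrame(int curFrame, float rX, float rY, float size)
{
    const float frameHeight  = 64.0f;
    const float paddedHeight = 1024.0f;          // power-of-two padding
    const float f  = frameHeight / paddedHeight; // 1/16, not 1/10
    const float v0 = f * curFrame;
    const float v1 = v0 + f;

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, v0); glVertex2f(rX,        rY);
    glTexCoord2f(1.0f, v0); glVertex2f(rX + size, rY);
    glTexCoord2f(1.0f, v1); glVertex2f(rX + size, rY + size);
    glTexCoord2f(0.0f, v1); glVertex2f(rX,        rY + size);
    glEnd();
}

As long as curFrame stays in 0..9, the 6 black padding frames are simply never addressed.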

Skeletal animation of DirectX files in OpenGL

I'm trying to import *.x files into my engine and animate them using OpenGL (without shaders for now, but that isn't really relevant here). I've found the format reference on MSDN, but it doesn't help much with the problem.
So, basically, I've created a file containing a simple animation of a demon-like being with 7 bones (main, 2 for the tail, and 4 for the legs), of which only 2 (the ones in the right leg) are animated at the moment. I've tested the mesh in DXViewer, and it seems to work perfectly there, so the problem must be on my side of the code.
When I export the mesh, I get a file containing lots of information, from which there are 3 important places for the skeletal animation (all the below matrices are for the RLeg2 bone):
SkinWeights - matrixOffset
-0.361238, -0.932141, -0.024957, 0.000000,
0.081428, -0.004872, -0.996669, 0.000000,
0.928913, -0.362066, 0.077663, 0.000000,
0.139213, -0.057892, -0.009323, 1.000000
FrameTransformMatrix
0.913144, 0.000000, -0.407637, 0.000000,
0.069999, 0.985146, 0.156804, 0.000000,
0.401582, -0.171719, 0.899580, 0.000000,
0.000000, -0.000000, 0.398344, 1.000000
AnimationKey matrix in bind pose
0.913144, 0.000000, -0.407637, 0.000000,
0.069999, 0.985146, 0.156804, 0.000000,
0.401582, -0.171719, 0.899580, 0.000000,
0.000000, -0.000000, 0.398344, 1.000000
My question is: what exactly do I do with these matrices? I've found an equation on the Newcastle University site (http://research.ncl.ac.uk/game/mastersdegree/graphicsforgames/skeletalanimation/), but there's only one matrix there. How do I combine these matrices to get the vertex transform matrix?
This post doesn't pretend to be a full answer, but rather a set of helpful links.
How to get all information needed for animation
The question is how you import your mesh, and why you do it this way. You can fight with .x meshes for months, but it doesn't make much sense, because .x is a very basic, old format that simply isn't good enough. You won't find many fans of the .x format on StackOverflow. =)
The .x file stores animation data in a tricky way. It was intended to be loaded via a set of D3DX*() functions. But to get bones and weights from it manually, you must preprocess the loaded data. That's a lot to code. Here is a big post explaining how:
Loading and displaying .X files without DirectX
A good way to do things is to just switch to a mesh loading library. The most popular and universal one is Assimp. At the very least, look at its docs and/or source code to see how it handles loading and preprocessing, and what it produces as output. Also, here is a good explanation:
Tutorial 38 - Skeletal Animation With Assimp
So, with Assimp you can stop fighting and start animating right now. And maybe later, when you have an idea of how it all works, you can write your own loader.
When you've got all the information needed for animation
Skeletal animation is a basic topic that is explained in detail all around the web.
You can find basic vertex shader for animation here:
OpenGL Wiki: Skeletal Animation
Here is an explanation of how it all works (implemented in fixed-function style): Basic Bones System
Hope it helps!
Since Drop provided links that talk about the problem and give clues on how to solve it, but don't quite provide a simple answer, I feel obliged to leave the solution here, in case someone else stumbles on the same problem.
To get the new vertex position in "bind pose"
v'(i) = v(i) * Σ(transform(bone) * W(bone, i))
where:
v'(i) - the new vertex position,
v(i) - the old vertex position, and
W(bone, i) - the weight of the transformation
(and Σ is, of course, the sum over all bones in the skeleton).
The transform(bone) is equal to sw(bone) * cM(bone), where sw is the matrix found inside the SkinWeights tag, and cM(bone) is calculated using a recursive function:
cM(bone)
{
    if (bone->parent)
        return localTransform * cM(bone->parent);
    else
        return localTransform;
}
The localTransform is the matrix located inside the FrameTransformMatrix tag.
To get the new vertex position in a certain animation frame
Do the exact same operation as above, but instead of the matrix in FrameTransformMatrix, use one of the matrices inside the appropriate AnimationKey tag. Note that while an animation is playing, the matrix inside the FrameTransformMatrix tag goes unused, which means you'll probably end up ignoring it most of the time.
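To make the recipe concrete, here is a minimal self-contained sketch of the whole transform; the Mat4 and Bone types are made up for illustration, and the row-vector convention matches the row-major matrices listed above:

#include <cstddef>
#include <vector>

// Minimal row-major 4x4 matrix, just enough for this sketch.
struct Mat4 {
    float m[16] = {}; // zero-initialized
};

Mat4 operator*(const Mat4 &a, const Mat4 &b)
{
    Mat4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i * 4 + j] += a.m[i * 4 + k] * b.m[k * 4 + j];
    return r;
}

struct Bone {
    Bone *parent = nullptr;
    Mat4  localTransform; // FrameTransformMatrix, or the AnimationKey matrix
    Mat4  skinOffset;     // matrix from the SkinWeights tag
};

// cM(bone): combine local transforms up the hierarchy.
Mat4 combinedTransform(const Bone *bone)
{
    return bone->parent ? bone->localTransform * combinedTransform(bone->parent)
                        : bone->localTransform;
}

// v'(i) = v(i) * sum over bones of (skinOffset * cM(bone) * W(bone, i)),
// with v treated as a row vector (x, y, z, 1).
void skinVertex(const float v[4], const std::vector<Bone *> &bones,
                const std::vector<float> &weights, float out[4])
{
    Mat4 sum; // accumulates the weighted transforms, starts at zero
    for (std::size_t b = 0; b < bones.size(); ++b) {
        Mat4 t = bones[b]->skinOffset * combinedTransform(bones[b]);
        for (int i = 0; i < 16; ++i)
            sum.m[i] += t.m[i] * weights[b];
    }
    for (int j = 0; j < 4; ++j) {
        out[j] = 0.0f;
        for (int i = 0; i < 4; ++i)
            out[j] += v[i] * sum.m[i * 4 + j];
    }
}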

Opacity correction in Raycasting Volume Rendering

I want to implement high-quality raycasting volume rendering using OpenGL, GLSL and C++, doing image-order volume rendering. During the compositing step of raycasting I use the usual front-to-back formulas:
C := C + (1 − α) * C_i
α := α + (1 − α) * α_i
When I read the book "Real-Time Volume Graphics", page 16, I see that we need to do opacity correction if the sample rate changes:
α' = 1 − (1 − α)^(Δx' / Δx)
and use this corrected opacity in place of the old value. In this formula, Δx' is the new sample distance and Δx is the old sample distance.
My question is: how do I determine Δx in my program?
Say your original volume has a resolution of V = (512, 256, 128); then, when casting a ray in the direction (rx, ry, rz), the sample distance is 1/|r·V|. However, say your raycaster is largely oversampling, sampling at 3× that rate, V' = (1536, 768, 384); the oversampled sample distance is 1/|r·V'|, and hence the ratio of sample distances is 1/3. This is the exponent to use.
Note that the exponent is only noticeable with low-density volumes, i.e., in the case of medical images, low-contrast soft tissue. If you're imaging bones or dense tissue, it makes almost no difference (BTDT).
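As a sketch, that ratio could be computed like this; r is assumed to be a normalized ray direction, V and Vp the two per-axis resolutions:

#include <cmath>

// Sample distance along a normalized ray direction r through a volume of
// per-axis resolution V, following the 1/|r.V| formula above.
float sampleDistance(const float r[3], const float V[3])
{
    return 1.0f / std::fabs(r[0] * V[0] + r[1] * V[1] + r[2] * V[2]);
}

// Ratio of the oversampled distance to the original one; this is the
// exponent for the opacity correction (1/3 in the example above).
float distanceRatio(const float r[3], const float V[3], const float Vp[3])
{
    return sampleDistance(r, Vp) / sampleDistance(r, V);
}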
datenwolf is correct, but there is one piece of missing information. A transfer function is a mapping between scalar values and colors. In RGBA, the range of values is between 0 and 1. You, as the implementer, get to choose what the actual value of opacity translates to, and that is where the original sample distance comes in.
Say you have the unit cube and you choose 2 samples, which translates to a sample distance of 0.5 if you trace a ray in a direction orthogonal to a face. If the alpha value is 0.1 everywhere in the volume, the resulting alpha is 0.1 + 0.1 * (1.0 − 0.1) = 0.19. If you choose 3 samples instead, the resulting alpha is one more composite on top of the previous one: 0.19 + 0.1 * (1 − 0.19) = 0.271. Thus, your choice of the original sample distance influences the outcome. Choose wisely.
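A minimal sketch tying the two answers together: opacity correction with the distance ratio as exponent, plus one front-to-back compositing step; the names are illustrative only:

#include <cmath>

// Opacity correction: adapt an alpha defined for sample distance oldDist
// to the actual sample distance newDist.
float correctOpacity(float alpha, float newDist, float oldDist)
{
    return 1.0f - std::pow(1.0f - alpha, newDist / oldDist);
}

// One front-to-back compositing step (colors assumed premultiplied).
void compositeFrontToBack(float dst[4], const float src[4])
{
    const float t = 1.0f - dst[3]; // remaining transparency
    for (int i = 0; i < 3; ++i)
        dst[i] += t * src[i];
    dst[3] += t * src[3];
}

Plugging in the numbers above: compositing alpha 0.1 twice yields 0.1 + 0.1 * (1 − 0.1) = 0.19, and a third step yields 0.271, exactly as in the example.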

opengl and mtl parameters

I am trying to parse line items from an .mtl file and use the values as parameters to OpenGL functions.
I can use the ambient (Ka), specular (Ks) and diffuse (Kd) values with glMaterialfv. But I don't know what to do with the Ni (optical density), d (dissolve) and illum (illumination) values given in the mtl file.
Which OpenGL function should be used with these values?
Any help with these line items?
....
Ni 1.000000
d 1.000000
illum 2
...
Dissolve means transparency: 1.0 means a fully opaque object, 0.0 means fully transparent. You can control the rendering of transparent objects using functions like glBlendFunc(), as sketched below.
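For instance, a minimal sketch of feeding d into standard alpha blending; the kd/dissolve variables are assumed to have been parsed from the .mtl file:

#include <GL/gl.h>

// Uses the "d" (dissolve) value as the alpha of the diffuse material color
// and enables standard alpha blending for non-opaque materials.
void applyDissolve(const float kd[3], float dissolve)
{
    const GLfloat diffuse[4] = { kd[0], kd[1], kd[2], dissolve };
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);

    if (dissolve < 1.0f) {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    } else {
        glDisable(GL_BLEND);
    }
}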
For a full definition of mtl files, including illum, please see http://people.sc.fsu.edu/~jburkardt/data/mtl/mtl.html.
Ni seems to be unsupported and can be ignored.