Three.js ColladaLoader bumpScale/weighting? A way to adjust bump map intensity

In the current ColladaLoader.js I don't see anything that reads the Collada standard's "weighting" value (0.0-1.0), which indicates bump intensity, and applies it as "bumpScale" on the Three.js Phong material. I noticed that when I export my Collada from Blender, three.js picks up the bump materials instantly (which is amazingly simple - YAY!), but my materials always get the default bumpScale of 1.0, which gives them an exaggerated bumpiness.
I managed to edit my ColladaLoader a bit and try out my ideal value (0.05), but I wonder if I'm missing something or doing this wrong. Has anybody else tried this? Note that I've not had good luck with JSON exports, so I'm sticking with Collada for now.
Thanks

You can set custom properties in the Collada callback. Use a pattern like this one:
loader.load( 'collada.dae', function ( collada ) {

    var dae = collada.scene;
    var value = 0.05; // the bump intensity you want (the question's ideal value)

    dae.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material.bumpScale = value;
        }
    } );

    scene.add( dae );

} );
three.js r.71

Related

(UE4) How to properly render after transforming the bones of UPoseableMeshComponent

I am trying to transform bones within UE4 (4.25) using UPoseableMeshComponent. (image of initial state)
However, after I transform the bones using SetBoneTransformByName, the rendering gets into a weird state; the image below is not motion blur, just a pose after SetBoneTransformByName was applied (image after transform: blurred rendering). The Unlit view mode looks fine, though.
After I call AActor::SetActorHiddenInGame(true) to hide the actor and then AActor::SetActorHiddenInGame(false) to show it again, the rendering is fixed. (image after hide/show)
The code is purely in C++ (no Blueprints). I first create a custom Character with a SkeletalMesh and add a UPoseableMeshComponent in code, something like below:
void AMyCharacter::CreatePoseableMesh() {
    USkeletalMeshComponent* skeletalMesh = GetMesh();
    UPoseableMeshComponent* poseMesh =
        NewObject<UPoseableMeshComponent>(this, UPoseableMeshComponent::StaticClass());
    if (poseMesh) {
        poseMesh->RegisterComponent();
        poseMesh->SetWorldLocation(location); // location and rotation are computed elsewhere
        poseMesh->SetWorldRotation(rotation);
        poseMesh->AttachToComponent(GetRootComponent(), FAttachmentTransformRules::KeepRelativeTransform);
        poseMesh->SetSkeletalMesh(skeletalMesh->SkeletalMesh);
        poseMesh->SetVisibility(true);
        skeletalMesh->SetVisibility(false); // hide the original skeletal mesh
    }
}
Is there something I am missing when setting up UPoseableMeshComponent?
I might be wrong, but I think this happens because setting bone transforms manually doesn't write to the velocity buffer, so temporal AA doesn't know that anything moved, causing the ugly blur.
If you switch to FXAA and the problem disappears, there's your hint.
There is a material node called Previous Frame Switch; you can control the velocity buffer through it using a custom parameter.
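If I remember correctly, you can test this quickly by forcing the anti-aliasing method from the console (a stock UE4 console variable; 0 = off, 1 = FXAA, 2 = TemporalAA, 3 = MSAA):
r.DefaultFeature.AntiAliasing 1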
Self-solved (sort of). I tried with Blueprints first, and even there the PoseableMeshComponent needs SetVisibility(false) followed by SetVisibility(true) to render properly. Maybe a minor bug within UE4.
TMap<FString, FTransform> const& transforms; // given: map of bone name to its transform

poseMesh->SetVisibility(false); // hide the PoseableMeshComponent once
for (auto& x : transforms) {
    poseMesh->SetBoneTransformByName(FName(*x.Key), x.Value, EBoneSpaces::WorldSpace);
}
poseMesh->SetVisibility(true); // show it again
This seems to be the workaround for now.

Getting IfcPolyline from IfcSpace

I am quite new to xBim, and I am struggling to find the information I need. I have been able to iterate through all the IfcSpaces on each storey, and I would like to find each space's IfcPolyline so that I know its boundaries. But how?
using (IfcStore model = IfcStore.Open(filename, null))
{
    List<IfcBuildingStorey> allstories = model.Instances.OfType<IfcBuildingStorey>().ToList();
    for (int i = 0; i < allstories.Count; i++)
    {
        IfcBuildingStorey storey = allstories[i];
        var spaces = storey.Spaces.ToList();
        for (int j = 0; j < spaces.Count; j++)
        {
            var space = spaces[j];
            var spaceBoundaries = space.BoundedBy.ToList();
            for (int u = 0; u < spaceBoundaries.Count; u++)
            {
                // IfcPolyline from here??
            }
        }
    }
}
This is quite an old question, but in case you are still looking for the answer: IfcSpace.BoundedBy is an inverse relation and will give you a list of IfcRelSpaceBoundary objects. Each has a RelatedBuildingElement attribute, which gives you the bounding building element (a wall, a door, etc.), and a ConnectionGeometry, which is essentially an interface, because the geometry of the connection might be a curve, a point, a surface, or a volume. If you drill further down the object model, you will see that the boundary can be any kind of curve, not just a polyline.
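A minimal sketch of that drill-down, assuming the common case where the boundary is stored as an IfcConnectionSurfaceGeometry whose surface is an IfcCurveBoundedPlane with a polyline outer boundary (your file may use other subtypes, so check for null at every step):
foreach (var rel in space.BoundedBy)
{
    var surfaceGeometry = rel.ConnectionGeometry as IfcConnectionSurfaceGeometry;
    var boundedPlane = surfaceGeometry?.SurfaceOnRelatingElement as IfcCurveBoundedPlane;
    var polyline = boundedPlane?.OuterBoundary as IfcPolyline;
    if (polyline == null)
        continue; // the boundary is some other kind of geometry

    foreach (var point in polyline.Points)
        Console.WriteLine($"{point.X}, {point.Y}");
}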
An entirely different approach would be to access the space geometry through IfcSpace.Representation. This could have a 2D representation, which would likely be a polygon, or it might be a 3D extrusion with a profile; that profile would again be what you are looking for. But be aware that it can be any other kind of geometry representation, depending on the authoring software and the model author.
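Again just a sketch, assuming the space is modelled as an IfcExtrudedAreaSolid over an IfcArbitraryClosedProfileDef (a very common pattern, but by no means guaranteed):
var solid = space.Representation?.Representations
    .SelectMany(r => r.Items)
    .OfType<IfcExtrudedAreaSolid>()
    .FirstOrDefault();

var profile = solid?.SweptArea as IfcArbitraryClosedProfileDef;
var footprint = profile?.OuterCurve as IfcPolyline; // the footprint, if it is a polyline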

PovRay conversion

According to my recollection, once an object or a scene was described in POV-Ray, there was no output other than the generated rendering. In other words, once exported to *.pov, you were no longer able to convert it into another 3D file format.
Hence I was very surprised to learn about pov2mesh, which aims to generate a point cloud that, with the help of meshlab, may eventually be suitable for 3D printing.
As I have a number of scenes defined only as *.pov, describing molecules (so, spheres and sticks) and colour-coded molecular surfaces from computation, I wonder if there is a way to convert / rewrite such a scene into a format like VRML 2.0, preserving both shape and colour.
Performing the computation again and saving the result directly as VRML is not an option: besides the binary output understood by the software, the only formats for saving the results are *.png and *.pov.
Or is there a POV-Ray editor that can read a *.pov produced by other software and offers to export the scene as *.vrml (or a different 3D file format)?
I don't think there is an editor that converts from .pov to .vrml, but both formats are text-based. Since your pov file is made only of spheres and cylinders, you could convert it by hand, or write a simple program to do it for you. Here is a red sphere in POV-Ray (http://www.povray.org/documentation/view/3.6.2/283/):
sphere {
    <0, 0, 0>, 1
    pigment {
        color rgb <1, 0, 0>
    }
}
I don't know much about VRML, but this should be the equivalent (found here: https://www.siggraph.org/special-projects/com97/vrmlexample1.html):
Shape {
    appearance Appearance {
        material Material {
            diffuseColor 1.0 0.0 0.0
            transparency 0.0
        }
    }
    geometry Sphere {
        radius 1.0
    }
}
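If you go the small-program route, here is a rough sketch in Python (untested scaffolding of my own: it assumes spheres are written in the simple one-pigment form shown above, and remember that POV-Ray uses a left-handed coordinate system while VRML is right-handed, hence the z flip; cylinders could be handled the same way):
import re

# matches: sphere { <x, y, z>, radius ... color rgb <r, g, b> ... }
SPHERE_RE = re.compile(
    r"sphere\s*\{\s*<([^>]+)>\s*,\s*([0-9.eE+-]+)"
    r"[^}]*?color\s+rgb\s*<([^>]+)>",
    re.DOTALL)

VRML_SPHERE = """Transform {{
  translation {x} {y} {z}
  children [
    Shape {{
      appearance Appearance {{
        material Material {{ diffuseColor {r} {g} {b} }}
      }}
      geometry Sphere {{ radius {rad} }}
    }}
  ]
}}"""

def pov_spheres_to_vrml(pov_text):
    """Convert every simple POV-Ray sphere statement to a VRML 2.0 node."""
    chunks = ["#VRML V2.0 utf8"]
    for m in SPHERE_RE.finditer(pov_text):
        x, y, z = (float(v) for v in m.group(1).split(","))
        r, g, b = (float(v) for v in m.group(3).split(","))
        chunks.append(VRML_SPHERE.format(
            x=x, y=y, z=-z,  # flip z: POV-Ray is left-handed, VRML right-handed
            r=r, g=g, b=b, rad=float(m.group(2))))
    return "\n".join(chunks)

print(pov_spheres_to_vrml(open("scene.pov").read()))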

(Kinect v2) Alternative Kinect Fusion Pipeline (texture mapping)

I am trying a different pipeline, without success so far.
The idea is to use the classic pipeline (as in the Explorer example), but additionally to use the last color image for the texture.
So the idea (after clicking SAVE MESH) is:
1. Save the current image as a BMP.
2. Get the current transformation [m_pVolume->GetCurrentWorldToCameraTransform(&m_worldToCameraTransform);]; let's call it M.
3. Transform all mesh vertices v into the last camera-space coordinate system (M * v).
4. The current m_pMapper now refers to the latest frame, which is the one we want to use [m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);].
In theory I should now have a texture coordinate for every point of the fusion mesh. I want to use them to export the mesh as an OBJ file (with a texture, not only vertex colours), roughly as sketched below.
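Steps 2-4 in code look roughly like this (just a sketch: I am assuming the Fusion Matrix4 is row-major with the translation in M41..M43, and a 1920x1080 color frame):
// step 3: transform a Fusion mesh vertex into the last camera space
CameraSpacePoint TransformPoint(const Matrix4& m, const Vector3& v)
{
    CameraSpacePoint p;
    p.X = v.x * m.M11 + v.y * m.M21 + v.z * m.M31 + m.M41;
    p.Y = v.x * m.M12 + v.y * m.M22 + v.z * m.M32 + m.M42;
    p.Z = v.x * m.M13 + v.y * m.M23 + v.z * m.M33 + m.M43;
    return p;
}

// step 4: map the vertex into the latest color frame, normalized to OBJ texture coordinates
bool VertexToUV(ICoordinateMapper* mapper, const Matrix4& worldToCamera,
                const Vector3& v, float& u, float& texV)
{
    ColorSpacePoint colorPoint;
    CameraSpacePoint camPoint = TransformPoint(worldToCamera, v);
    if (FAILED(mapper->MapCameraPointToColorSpace(camPoint, &colorPoint)))
        return false;
    u    = colorPoint.X / 1920.0f;        // color frame width
    texV = 1.0f - colorPoint.Y / 1080.0f; // OBJ "vt" origin is bottom-left
    return true;
}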
What am I doing wrong?
The 3D transformations seem to be correct: when I visualize the resulting OBJ file in MeshLab, the world coordinate system is equal to the latest recorded position.
Only the texture is not set correctly.
I would be very, very happy if anyone could help me; I have been trying for a long time :/
Thank you very much :)

Can't render Quake 3 bsp format

I am writing a loader and a renderer of Quake 3 *.bsp files for my 3D engine. I support format version 46 (0x2e). Everything renders well as long as I use very simple maps. The geometry of simple maps renders correctly both in my engine and in a renderer I found on the Internet (at http://www.paulsprojects.net/opengl/q3bsp/q3bsp.html). Here is the screenshot:
I tried rendering more complicated maps (from http://lvlworld.com/) with my renderer and with the renderer I found, to compare the results. Both renderers suffer from the same problem: there are holes in the scene (missing triangles here and there).
I have no clue what may be causing these problems, as I checked the maps and they are all of the same version. Has anybody encountered this problem?
EDIT: Some of the very complicated maps render correctly, which confuses me even more :).
The creator of this BSP loader made a mistake; I fixed it.
Simply edit the LoadData function and put all face data (except meshes and patches) into one array, then render that. Works for me, no more "holes". Here's a piece of code:
int currentFace = 0;
for (int i = 0; i < facesCount; i++) {
    if (faceData[i].type != SW_POLYGON)
        continue; // skip meshes and patches

    m_pFaces[i].texture          = faceData[i].texture;
    m_pFaces[i].lightmapIndex    = faceData[i].lightmapIndex;
    m_pFaces[i].firstVertexIndex = faceData[i].firstVertexIndex;
    m_pFaces[i].vertexCount      = faceData[i].vertexCount;
    m_pFaces[i].numMeshIndices   = faceData[i].numMeshIndices;
    m_pFaces[i].firstMeshIndex   = faceData[i].firstMeshIndex;

    f_bspType[i].faceType       = SW_FACE; // custom one
    f_bspType[i].typeFaceNumber = currentFace;
    currentFace++;
}
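Rendering is then a single pass over that array. A sketch of what the draw loop can look like (the texture and lightmap binding depend on your engine; Q3 polygon faces can be drawn as triangle fans over their vertex range):
// draw every face collected above as one triangle fan
for (int i = 0; i < facesCount; i++) {
    if (f_bspType[i].faceType != SW_FACE)
        continue;
    // bind m_pFaces[i].texture and m_pFaces[i].lightmapIndex here
    glDrawArrays(GL_TRIANGLE_FAN,
                 m_pFaces[i].firstVertexIndex,
                 m_pFaces[i].vertexCount);
}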