(UE4) How to properly render after transforming the bones of UPoseableMeshComponent - c++

I am trying to transform bones within UE4 (4.25) using UPoseableMeshComponent. (image of initial state)
However, after I transform the bones using SetBoneTransformByName, the rendering gets into a weird state; the image below shows not motion blur but simply the pose after SetBoneTransformByName was applied (image after transform, blurred rendering). The Unlit view mode, though, looks just fine.
After I call AActor::SetActorHiddenInGame(true) to hide the actor and then AActor::SetActorHiddenInGame(false) to show it again, the rendering is fixed. (image after hide/show)
The code is purely C++ (no Blueprints). I first create a custom Character with a SkeletalMesh and add a UPoseableMeshComponent in code, something like this:
void AMyCharacter::CreatePoseableMesh() {
    USkeletalMeshComponent* skeletalMesh = GetMesh();
    UPoseableMeshComponent* poseMesh =
        NewObject<UPoseableMeshComponent>(this, UPoseableMeshComponent::StaticClass());
    if (poseMesh) {
        poseMesh->RegisterComponent();
        poseMesh->SetWorldLocation(location); // 'location' and 'rotation' are given elsewhere
        poseMesh->SetWorldRotation(rotation);
        poseMesh->AttachToComponent(GetRootComponent(), FAttachmentTransformRules::KeepRelativeTransform);
        // reuse the mesh asset from the regular skeletal mesh component
        poseMesh->SetSkeletalMesh(skeletalMesh->SkeletalMesh);
        poseMesh->SetVisibility(true);      // show the poseable copy
        skeletalMesh->SetVisibility(false); // hide the original
    }
}
Is there something I am missing to set in UPoseableMeshComponent?

I might be wrong, but I think this happens because setting bone transforms manually doesn't write to the velocity buffer, so temporal AA doesn't know that something moved, causing the ugly blur.
If you switch to FXAA and the problem disappears, that's your hint.
There is a material node called Previous Frame Switch; you can control the velocity buffer through it using a custom parameter.

Self-solved (sort of). I tried with Blueprints first, where even BP needs SetVisibility(false) followed by SetVisibility(true) on the PoseableMeshComponent to render properly. Maybe a minor bug within UE4.
TMap<FString, FTransform> transforms; // given: filled elsewhere with bone name -> transform pairs
poseMesh->SetVisibility(false); // PoseableMeshComponent: hide once
for (auto& x : transforms) {
    poseMesh->SetBoneTransformByName(FName(*x.Key), x.Value, EBoneSpaces::WorldSpace);
}
poseMesh->SetVisibility(true); // show it again
That seems to be the workaround for now.
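An untested idea of my own (an assumption, not something verified in this thread): toggling visibility forces the component's render state to be recreated, so explicitly dirtying the render state after posing might achieve the same refresh without the hide/show round trip:
for (auto& x : transforms) {
    poseMesh->SetBoneTransformByName(FName(*x.Key), x.Value, EBoneSpaces::WorldSpace);
}
// UActorComponent::MarkRenderStateDirty queues the render proxy for recreation
poseMesh->MarkRenderStateDirty();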

Related

Is using Scale effect in render loop faster than pre-scaling bitmap?

Currently I draw images the following way:
1. During load, using WIC, I obtain the original bitmap and store it as a property of the object that represents an image (ID2D1Bitmap *imageOriginal).
2. Then (still at load time), I create a compatible render target with the size I need the image to be.
3. I draw the image to the compatible target using the scale effect.
4. I allocate a new bitmap as a property of the object that represents the image (ID2D1Bitmap *imageScaled).
5. I copy from the compatible target to imageScaled.
6. I free the compatible target. Here image loading ends.
When an already-created image object needs to be resized, I repeat steps 2-6. As a result, in the render loop I only have to draw imageScaled.
I am now thinking about removing steps 2-6 and instead drawing the scale effect with imageOriginal, passed from each image object, in the render loop every time.
I do not know what exactly the Direct2D Scale effect does. If it actually does something similar to steps 2-6 every time, then I probably don't need to do those steps myself.
On the other hand, my render loop has a basic skip algorithm for objects that are outside the parent view, so they are not drawn at all. With the current implementation I may spend time pre-scaling objects that are out of view and will not be drawn; scaling in the render loop would avoid that waste entirely.
Does anyone know which solution will be the fastest?
After rewriting my code, it currently seems that using the Scale effect in the render loop is faster, at least for a single image.
Before that, when the setImage method of the object that represents a UI image was called, something like this happened:
void ImageObject::setImage(const wchar_t *path)
{
    if (!wcscmp(this->path, path))
        return; // same image, nothing to do
    SafeRelease(&this->originalImage);
    SafeRelease(&this->scaledImage); // also drop the old pre-scaled copy
    // ... load originalImage from 'path' via WIC ...
    this->scaledImage = RescaleImage(this->originalImage, this->width, this->height);
}
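RescaleImage is my helper implementing steps 2-6 above. A simplified sketch of it (error handling omitted; it assumes Direct2D 1.1, where a render target created from a device context can itself be queried for the ID2D1DeviceContext interface that is required to draw effects):
ID2D1Bitmap *RescaleImage(ID2D1Bitmap *source, float width, float height)
{
    // Step 2: compatible render target with the final size.
    // 'renderTarget' is the application's main render target.
    ID2D1BitmapRenderTarget *compatible = nullptr;
    renderTarget->CreateCompatibleRenderTarget(D2D1::SizeF(width, height), &compatible);

    ID2D1DeviceContext *ctx = nullptr;
    compatible->QueryInterface(IID_PPV_ARGS(&ctx));

    // Step 3: draw the source through the scale effect.
    ID2D1Effect *scale = nullptr;
    ctx->CreateEffect(CLSID_D2D1Scale, &scale);
    scale->SetInput(0, source);
    scale->SetValue(D2D1_SCALE_PROP_SCALE,
                    D2D1::Vector2F(width / source->GetSize().width,
                                   height / source->GetSize().height));
    ctx->BeginDraw();
    ctx->Clear(D2D1::ColorF(0, 0.0f));
    ctx->DrawImage(scale);
    ctx->EndDraw();

    // Steps 4-6: keep the result and free everything else. GetBitmap hands out
    // its own reference, so the bitmap stays valid after the target is released.
    ID2D1Bitmap *result = nullptr;
    compatible->GetBitmap(&result);
    SafeRelease(&scale);
    SafeRelease(&ctx);
    SafeRelease(&compatible);
    return result;
}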
And in the main render loop:
void ImageObject::Render()
// the render loop iterates through the ImageObject array and calls each object's Render method
{
    // 'skip' is a cached flag, roughly:
    // (this->x > this->parent->width || this->y > this->parent->height || etc.)
    if (skip)
        return;
    // draw the pre-scaled bitmap (not the original)
    renderTarget->DrawBitmap(this->scaledImage, rectangle);
}
Now it is like this:
void ImageObject::setImage(const wchar_t *path)
{
    if (!wcscmp(this->path, path))
        return;
    SafeRelease(&this->originalImage);
    // ... obtain originalImage via WIC, and that's it: no pre-scaling ...
}
void ImageObject::Render()
{
    if (skip)
        return;
    // scale on the fly through a shared D2D scale effect
    globalScale->SetInput(0, this->originalImage);
    globalScale->SetValue(D2D1_SCALE_PROP_SCALE, ...);
    renderTarget->DrawImage(globalScale, point);
}
The first method was actually supposed to be faster, because in the render loop I just draw a plain bitmap.
As I wrote in the post, I thought the second method would only win with a large number of images, when some of them are off-screen; but as it turns out, even drawing a single image is faster with the render-loop Scale method than with pre-scaling.

Opacity overlap within the same OpenGl TriStrip in LibGdx

While my problem lies strictly in the opacity of the tristrip, I'd like to give some context first.
Recently I started developing a game with LibGdx that involves 2D circles bouncing around the screen. To provide a neat graphical effect, I created a small system that gives each actor a "tail" which fades over time. Visually, it looks like this:
Nice Trail Example
Now that ended up looking satisfactory. My problem, however, lies in situations where parts of the "trail" effect overlap, creating an ugly artifact which I would guess is the sum of the opacities of the overlapping points.
Ugly Trail Example
I believe this problem lies in the way the tristrip is drawn, specifically in the blending methods used.
The code used to generate the trail is as follows:
Array<Vector2> tristrip = new Array<Vector2>(); // vertex positions for OpenGL to build the strip
Array<Vector2> texcoord = new Array<Vector2>(); // opacity information for each corresponding strip point
// ... code here ... //
gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
for (int i = 0; i < tristrip.size; i++) {
    // flush and restart the strip when a batch fills up
    if (i == batchSize) {
        gl20.end();
        gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
    }
    Vector2 point = tristrip.get(i);
    Vector2 textcoord = texcoord.get(i);
    gl20.color(color.r, color.g, color.b, color.a); // Color.WHITE
    gl20.texCoord(textcoord.x, 0f);
    gl20.vertex(point.x, point.y, 0);
}
gl20.end();
It is also important to note that the draw function for the strip is called within another class, in this fashion:
private void renderFX() {
    Gdx.gl.glEnable(GL20.GL_BLEND);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) { // draws the trail for each actor
        balls.get(i).drawFX();
    }
}
Is this problem a rookie mistake on my part, or was my implementation of drawing the tristrip vector array flawed from the start? How can I fix the blending so that the trails stay smooth even around sharp curves?
Thanks in advance...
Edit: Since originally asking this question, I've experimented with some possible solutions, including Deniz Yılmaz's suggestion of using an FBO to facilitate blending. With that, my render function currently looks like this:
private void renderFX() {
    frameBuffer.begin();
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.gl20.glClearColor(0f, 0f, 0f, 0);
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
    // increment the stencil wherever a trail fragment lands
    Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
    Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_INCR, GL20.GL_INCR);
    Gdx.gl20.glStencilMask(0xFF);
    Gdx.gl20.glClear(GL20.GL_STENCIL_BUFFER_BIT);
    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) {
        // only draw where the stencil is still 0, i.e. skip already-covered areas
        Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0, 0xFF);
        balls.get(i).drawFX(1f, Color.RED);
    }
    frameBuffer.end();
}
As shown, I've also experimented with stencils so as to try and mask the overlapping portion of the trail. This approach, however, results in the following visuals:
Stenciled Version
Again, this is not ideal, and it has made me realize that approaching this problem by masking is not a good idea: the opacity gradient will never be smooth in the corners, as there will always be a sharp line between the two overlapping opacity values, even if the logic somehow prevents blending.
Given that, how else could I approach this problem? Should I scrap this method entirely if I plan to achieve a smooth gradient for this trail effect?
Thanks again.
glBlendFunc() is useless in this case because, by default, the values produced by the blend function are added together.
So something like glBlendEquation(GL_MAX) is needed.
BUT
blending alone won't work, since it can't tell the difference between the background and the overlapping shapes.
Instead, use a FrameBuffer: draw the trail into it with that blend equation, then draw the FrameBuffer's texture over the scene with ordinary alpha blending.
https://github.com/mattdesl/lwjgl-basics/wiki/FrameBufferObjects
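A rough sketch of that pass in raw GL calls (libgdx exposes the same functions through Gdx.gl20/Gdx.gl30; note that GL_MAX needs GLES 3.0 or the EXT_blend_minmax extension, and 'trailFbo' is a placeholder for your framebuffer handle):
glBindFramebuffer(GL_FRAMEBUFFER, trailFbo);   // offscreen buffer for the trail
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendEquation(GL_MAX);      // overlaps keep the maximum alpha instead of summing
glBlendFunc(GL_ONE, GL_ONE);
/* ... draw all trail tristrips here ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBlendEquation(GL_FUNC_ADD); // restore normal blending for the composite
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* ... draw the FBO's color texture over the scene as a single quad ... */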

(Kinect v2) Alternative Kinect Fusion Pipeline (texture mapping)

I am trying a different pipeline, so far without success.
The idea is to use the classic pipeline (as in the Explorer example), but additionally to use the last color image for the texture.
So the idea (after clicking SAVE MESH) is:
1. Save the current image as a BMP.
2. Get the current transformation [m_pVolume->GetCurrentWorldToCameraTransform(&m_worldToCameraTransform);]; let's call it M.
3. Transform all mesh vertices v into the last camera-space coordinate system (M * v).
4. The current m_pMapper now refers to the latest frame, which is the one we want to use [m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);].
In theory I should now have a texture coordinate for every point of the fusion mesh; I want to use them to export an OBJ file (with a texture, not only per-vertex color). Per vertex, I am doing roughly what the sketch below shows.
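(A sketch of the per-vertex step; the row-vector layout of the Fusion Matrix4, with the translation in M41..M43, and the 1920x1080 color-frame size are my assumptions.)
// Step 3: transform one fusion-mesh vertex v into the last camera space.
CameraSpacePoint camPoint;
camPoint.X = v.x * M.M11 + v.y * M.M21 + v.z * M.M31 + M.M41;
camPoint.Y = v.x * M.M12 + v.y * M.M22 + v.z * M.M32 + M.M42;
camPoint.Z = v.x * M.M13 + v.y * M.M23 + v.z * M.M33 + M.M43;

// Step 4: map it into the latest color frame.
ColorSpacePoint colorPoint;
m_pMapper->MapCameraPointToColorSpace(camPoint, &colorPoint);

// Normalize the pixel coordinates to OBJ texture coordinates;
// the Kinect v2 color frame is 1920x1080 and OBJ's v axis points up.
float texU = colorPoint.X / 1920.0f;
float texV = 1.0f - colorPoint.Y / 1080.0f;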
What am I doing wrong?
The 3D transformations seem to be correct: when I visualize the resulting OBJ file in MeshLab, I can see that the transformation is right; the world coordinate system equals the latest recorded position.
Only the texture is not set correctly.
I would be very, very happy if anyone could help me. I have been trying for a long time :/
Thank you very much :)

Lib Cinder method setup{} in CINDER_APP_BASIC

When my program starts, it must display a circle on a background, and I must control all displayed circles. I use the classes VertexController and Vertex for that purpose. In Vertex I have the constructor:
Vertex::Vertex(const ci::Vec2f &CurrentLoc) {
    vColor = Color(Rand::randFloat(123.0f), Rand::randFloat(123.0f), Rand::randFloat(123.0f));
    vRadius = Rand::randFloat(23.0f);
    vLoc = CurrentLoc;
}
and in VertexController I have:
VertexController::VertexController()
{
    Vertex CenterVertex = Vertex(getWindowCenter());
    CenterVertex.draw(); // member function draw() renders a solid circle with a random color
}
and in the setup() method I wrote:
void TutorialApp::setup(){
    gl::clear(Color(255, 204, 0));
    mVertexController = VertexController();
}
Unfortunately, my way didn't work: I see only the background.
So the main question: in CINDER_APP_BASIC, is drawing only possible directly from draw(), update(), and setup()? If yes, please advise a solution; if not, tell me where my mistake is.
This line of code does not make any sense to me:
mVertexController=VertexController();
Anyway, you should use the draw() function just for drawing circles to the window. That is why, by default, there is gl::clear(Color(0,0,0)); to clear the background and start drawing each new frame from scratch (this is how drawing works in OpenGL, which Cinder uses by default).
I suggest using a vector container for storing all circles (that way you can add and remove circles on the fly with some effort), adding the first one in the VertexController constructor, and making a separate VertexController::draw() function that draws all circles in a for loop, roughly as sketched below.
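A minimal sketch of that structure (names follow the question; the drawing calls assume the classic AppBasic-era Cinder API used in the question):
class VertexController {
public:
    VertexController() {
        mVertices.push_back(Vertex(getWindowCenter())); // the first circle
    }
    void draw() {
        for (size_t i = 0; i < mVertices.size(); ++i)
            mVertices[i].draw(); // e.g. gl::color(vColor); gl::drawSolidCircle(vLoc, vRadius);
    }
private:
    std::vector<Vertex> mVertices; // circles can be added/removed on the fly
};

void TutorialApp::draw() {
    gl::clear(Color(255, 204, 0)); // repaint the background every frame
    mVertexController.draw();      // then draw all circles on top
}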

Positioning Circle Shapes within a Body in Box2D Web

I've had to completely revamp this question as I don't think I was explicit enough about my problem.
I'm attempting to learn the ropes of Box2D Web. I started having problems when I wanted to learn how to put multiple shapes in one rigid body (to form responsive concave bodies). One of the assumptions I made was that this kind of feature is only really useful if I can change the positions of the shapes (so that I am in control of what the overall rigid body looks like). An example would be creating an 'L'-shaped body with two rectangle shapes, one of which is positioned below and to the right of the other.
I've gotten that far, in so far as I've found the SetAsOrientedBox method, where you can pass the box its position in the 3rd argument (center).
All well and good. But when I tried to create two circle shapes in one rigid body, I found undesirable behaviour. My instinct was to use the SetLocalPosition method (found in the b2CircleShape class). This seems to work to an extent: in the debug draw, the body responds physically as it should, but visually the shapes are not drawn at their local positions; both circles are simply drawn at the body's centre. I'm aware that this is probably a problem with Box2D's debug-draw logic, but it seems strange to me that there is no online patter about this issue. One would think that creating two circle shapes at different positions in the body's coordinate space would be a popular and well-documented phenomenon. Clearly not.
Below is the code I'm using to create the bodies. Assume that the world has been passed to this scope effectively:
// first circle shape and def
var fix_def1 = new b2FixtureDef;
fix_def1.density = 1.0;
fix_def1.friction = 0.5;
fix_def1.restitution = .65;
fix_def1.bullet = false;
var shape1 = new b2CircleShape();
fix_def1.shape = shape1;
fix_def1.shape.SetLocalPosition(new b2Vec2(-.5, -.5));
fix_def1.shape.SetRadius(.3);
// second circle def and shape
var fix_def2 = new b2FixtureDef;
fix_def2.density = 1.0;
fix_def2.friction = 0.5;
fix_def2.restitution = .65;
fix_def2.bullet = false;
var shape2 = new b2CircleShape();
fix_def2.shape = shape2;
fix_def2.shape.SetLocalPosition(new b2Vec2(.5, .5));
fix_def2.shape.SetRadius(.3);
// creating the body
var body_def = new b2BodyDef();
body_def.type = b2Body.b2_dynamicBody;
body_def.position.Set(5, 1);
var b = world.CreateBody( body_def );
b.CreateFixture(fix_def1);
b.CreateFixture(fix_def2);
Please note that I'm using Box2D Web ( http://code.google.com/p/box2dweb/ ) with the HTML5 canvas.
It looks like you are not actually using the standard debug draw at all, but a function that you have written yourself, which explains the lack of online patter about it (pastebin for posterity).
Take a look in the box2dweb source and look at these functions for a working reference:
b2World.prototype.DrawDebugData
b2World.prototype.DrawShape
b2DebugDraw.prototype.DrawSolidCircle
You can use the canvas context's 'arc' function to avoid calculating points with sin/cos and drawing individual lines to make a circle. It also lets the browser use the most efficient way it knows to render the curve, e.g. hardware support on some browsers.
Since it seems like you want to do custom rendering, another pitfall to watch out for is the different call signatures of DrawCircle and DrawSolidCircle. The second of these takes a parameter for the axis direction, so if you mistakenly use the three-parameter version, JavaScript will silently use the color parameter as the axis, leaving you with an undefined color parameter. Hours of fun!
DrawCircle(center, radius, color)
DrawSolidCircle(center, radius, axis, color)