AWheeledVehicle movement in C++

Using Unreal Engine 4, I am attempting to move my vehicle (AWheeledVehicle) in the forward direction. I am referencing the correct vehicle, but it still won't move.
Not sure what I'm doing wrong.
Attached below are my vehicle and controller classes.
AAIWheeledVehicle
AAIWheeledVehicle::AAIWheeledVehicle()
{
    AIControllerClass = AMyAIVehicleController::StaticClass();
}
AMyAIVehicleController
void AMyAIVehicleController::Possess(APawn *pawn)
{
    Super::Possess(pawn);
    //FVector location2 = pawn->GetActorLocation(); // -11310, 8910, 0

    // initialize location of target point
    location.X = -9620.0f;
    location.Y = 8910.0f;
    location.Z = 0.0f;
    scaleValue = 1.0f;

    target = GetWorld()->SpawnActor<ATargetPoint>(location, FRotator::ZeroRotator);
    target->SetActorLocation(location);

    // get AI vehicle reference
    vehicle = Cast<AWheeledVehicle>(pawn);

    // add forward movement to vehicle, scale = 1
    vehicle->AddMovementInput(GetActorForwardVector(), scaleValue);
    //vehicle->GetVehicleMovement()->Velocity.X = 1.0f;
    //vehicle->GetVehicleMovement()->SetThrottleInput(1.0f);
    //vehicle->GetVehicleMovement()->SetSteeringInput(1.0f);
    //vehicle->GetVehicleMovement()->SetHandbrakeInput(false);

    // set rotation of vehicle to rotation of the target point
    vehicle->SetActorRotation(target->GetActorRotation());
}

Vehicle setup in UE4 is tricky; it has a few moving parts that work together. Make sure the initial vehicle setup is working, otherwise the vehicle may not move at all regardless of the code involved. (By the way, the code looks fine, but I recommend testing in Blueprints to verify the setup is correct.)
I made a video about vehicle setup in UE4 on my channel; it might solve your problem. Go through it and make sure your setup matches, since you need to do this whether you are using C++ or Blueprints. Here's the link:
UE4 - How to Make Vehicles
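If the setup checks out and the C++ still does nothing, it may also be worth driving the vehicle through its movement component instead of AddMovementInput, which (as far as I know) the wheeled vehicle movement component never consumes. A minimal sketch inside Possess, reusing the names from the question, not a guaranteed drop-in fix:

    // Drive the vehicle through its movement component rather than AddMovementInput.
    AWheeledVehicle* vehicle = Cast<AWheeledVehicle>(pawn);
    if (vehicle && vehicle->GetVehicleMovement())
    {
        vehicle->GetVehicleMovement()->SetThrottleInput(1.0f);   // full throttle forward
        vehicle->GetVehicleMovement()->SetSteeringInput(0.0f);   // keep the wheels straight
        vehicle->GetVehicleMovement()->SetHandbrakeInput(false); // make sure the handbrake is off
    }

In practice you would probably reapply these inputs every tick (or from your AI logic) rather than only once in Possess.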

Related

How are you supposed to use a controller with a URDF

I'm trying to use the LQR controller to move the wheels on this really simple URDF linked here.
I'm trying to understand how to use the LQR controller for a model loaded from a URDF, so this is a mix of the LQR example and the PR2 example.
systems::DiagramBuilder<double> builder;
auto pair = AddMultibodyPlantSceneGraph(
    &builder,
    std::make_unique<MultibodyPlant<double>>(
        FLAGS_mbp_discrete_update_period));
MultibodyPlant<double>& plant = pair.plant;

const std::string full_name = "doublependulum/fetch/urdf/freight.urdf";
auto parser = multibody::Parser(&plant);
parser.package_map().PopulateFromFolder("/root/");
parser.AddModelFromFile(full_name);

// Add model of the ground.
const double static_friction = 0.5;
const Vector4<double> green(0.5, 1.0, 0.5, 1.0);
plant.RegisterVisualGeometry(plant.world_body(), RigidTransformd(),
                             geometry::HalfSpace(), "GroundVisualGeometry",
                             green);
// For a time-stepping model only static friction is used.
const multibody::CoulombFriction<double> ground_friction(static_friction,
                                                         static_friction);
plant.RegisterCollisionGeometry(plant.world_body(), RigidTransformd(),
                                geometry::HalfSpace(),
                                "GroundCollisionGeometry", ground_friction);
plant.Finalize();
plant.set_penetration_allowance(FLAGS_penetration_allowance);
// Set the speed tolerance (m/s) for the underlying Stribeck friction model.
// For two points in contact, this is the maximum allowable drift speed at the
// edge of the friction cone, an approximation to true stiction.
plant.set_stiction_tolerance(FLAGS_stiction_tolerance);

const drake::multibody::Body<double>& base = plant.GetBodyByName("base_link");

ConnectContactResultsToDrakeVisualizer(&builder, plant);
geometry::DrakeVisualizer::AddToBuilder(&builder, pair.scene_graph);
auto diagram = builder.Build();

/// Need to add actuators to the models in this form
// Create a context for this system:
std::unique_ptr<systems::Context<double>> diagram_context =
    diagram->CreateDefaultContext();
systems::Context<double>& plant_context =
    diagram->GetMutableSubsystemContext(plant, diagram_context.get());

Eigen::MatrixXd Q(2, 2);
Q << 10, 0, 0, 1;
Eigen::MatrixXd R(1, 1);
R << 1;
auto controller =
    builder.AddSystem(systems::controllers::LinearQuadraticRegulator(
        plant, plant_context, Q, R));
builder.Connect(plant.get_state_output_port(),
                controller->get_input_port());
builder.Connect(controller->get_output_port(), plant.get_input_port());
However, I get a runtime error:
what(): System::FixInputPortTypeCheck(): expected value of type drake::geometry::QueryObject<drake::AutoDiffXd> for input port 'geometry_query' (index 0) but the actual type was drake::geometry::QueryObject<double>. (System ::plant)
Can someone explain how to use the LQR controller for a model like this?
There are a few problems here. The first, and the one generating the error, is the fact that you've passed a plant_context (only) into the LinearQuadraticRegulator. Because your system has collision geometry, you need a SceneGraph to be connected for the dynamics to be evaluated. You would want to pass the diagram containing both the MultibodyPlant and the SceneGraph into the LQR call.
But the real problem is going to be deeper than that. LQR is not going to work out of the box for you on this model. That's not a Drake issue, it's a math issue. The system you've described not only has collision dynamics, but is also not going to be controllable in the linearization. When people use LQR to stabilize wheeled robots, they do it in minimal coordinates, which assume the wheels are attached to the ground at a point. In Drake, that would mean writing your own LeafSystem with the vehicle dynamics.
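For the first point (linearizing the plant together with its SceneGraph), a rough sketch of the shape of that fix is below. It is only illustrative: the zero time step, the exported port, and the sizing of Q and R are assumptions, and as noted above LQR will still not stabilize this particular model.

    // Sketch only (not drop-in code): linearize the *diagram* that contains both
    // the MultibodyPlant and the SceneGraph, so the geometry_query port is wired
    // up when Drake converts the system to AutoDiff for linearization.
    systems::DiagramBuilder<double> builder;
    auto [plant, scene_graph] =
        AddMultibodyPlantSceneGraph(&builder, 0.0);      // continuous-time plant assumed
    multibody::Parser parser(&plant);
    parser.package_map().PopulateFromFolder("/root/");   // as in the question
    parser.AddModelFromFile(full_name);
    plant.Finalize();

    // Expose the actuation port so the LQR call has an input to linearize about.
    builder.ExportInput(plant.get_actuation_input_port());
    auto diagram = builder.Build();

    auto diagram_context = diagram->CreateDefaultContext();
    diagram->get_input_port(0).FixValue(
        diagram_context.get(), Eigen::VectorXd::Zero(plant.num_actuators()));

    // Q and R must now be sized for the diagram's state and actuation dimensions,
    // not the 2x2 / 1x1 used above.
    auto lqr = systems::controllers::LinearQuadraticRegulator(
        *diagram, *diagram_context, Q, R);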

Problems with cameras in networking with Unreal Engine 4

I am currently working on a project where all my players use different cameras.
I first thought of using a UCameraComponent, but each camera has to orbit a fixed point rather than move with its pawn.
So I decided to spawn a camera actor in the BeginPlay() of my pawn.
void AMyCharacter::BeginPlay()
{
    Super::BeginPlay();
    if (!hasCamera) // Camera not set yet
    {
        FVector vectpos; // Target position of the camera
        vectpos.X = -1130;
        vectpos.Y = 10;
        vectpos.Z = 565;

        FRotator rotation;
        rotation.Pitch = -22;
        rotation.Yaw = 0;
        rotation.Roll = 0;

        APlayerController* controller = Cast<APlayerController>(GetController());
        if (controller == NULL) // On a client, GetController() returns NULL.
        {
            // Trying to find the controller of my client
            for (FConstPlayerControllerIterator Iterator = GetWorld()->GetPlayerControllerIterator(); Iterator; ++Iterator)
            {
                controller = *Iterator;
                // On a client, there is only 1 controller according to the documentation.
            }
        }
        if (controller != NULL)
        {
            controller->SetViewTarget(Camera); // Set the view with the new camera
        }
        SetCamera(true); // Call an RPC function to update the hasCamera variable
    }
}
This works for the first player, but after that it depends. Sometimes the second player gets a camera that works fine, but sometimes he is viewing through the wrong camera, and the Camera variable is not the same one he is looking through. Sometimes, when a new player joins the game, it makes the first/second player look through the wrong camera.
Here is the GameInstance Blueprint we use to make the LAN connection between the clients and the server (the first client to create the game).
If someone can find out why the camera is not working as expected, it would be very nice! Thank you all in advance for your help.
It looks like you chose the wrong approach.
In UE4, 'ACharacter' (APawn, to be precise) is a character's representation in the world, so you will have one for every single player. Thus, it is strange to put your camera code in it.
You should make your own controller (e.g. 'AMyPlayerController') and control the camera from it, and obviously only for the local player.
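A minimal sketch of that approach (the class, include names and values here are assumptions for illustration, not the asker's code): a custom player controller that spawns and assigns the camera only on the machine where it is the local controller.

    // MyPlayerController.h -- illustrative only.
    #include "CoreMinimal.h"
    #include "GameFramework/PlayerController.h"
    #include "Camera/CameraActor.h"
    #include "MyPlayerController.generated.h"

    UCLASS()
    class AMyPlayerController : public APlayerController
    {
        GENERATED_BODY()

    protected:
        virtual void BeginPlay() override
        {
            Super::BeginPlay();

            // Only the controller running on this client should set its own view.
            if (IsLocalController())
            {
                const FVector Location(-1130.f, 10.f, 565.f); // values from the question
                const FRotator Rotation(-22.f, 0.f, 0.f);     // Pitch, Yaw, Roll
                ACameraActor* Camera = GetWorld()->SpawnActor<ACameraActor>(Location, Rotation);
                SetViewTarget(Camera);
            }
        }
    };

Any orbit-around-a-point behaviour would then live in this controller (or the camera actor) as well, instead of in the pawn.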

OpenGL object real movement simulation

I need to simulate the movement of a rowing oar. The oar object is loaded in Eclipse with the min3D library, which works with OpenGL.
At the moment I can make the oar rotate around the three axes x, y and z, but I'm not able to control this movement and make the oar move in the desired way.
(Don't mind the values; they aren't the real ones.)
This is the class which loads the oar, places it on the screen and moves it:
public class Obj3DView extends RendererFragment {
    private Object3dContainer rowObject3D;

    /** Called when the activity is first created. */
    @Override
    public void initScene() {
        scene.lights().add(new Light());
        scene.lights().add(new Light());
        Light myLight = new Light();
        myLight.position.setZ(150);
        scene.lights().add(myLight);

        IParser myParser = Parser.createParser(Parser.Type.OBJ, getResources(), "com.masermic.rowingsoft:raw/row_obj", true);
        myParser.parse();

        rowObject3D = myParser.getParsedObject();
        rowObject3D.position().x = rowObject3D.position().y = rowObject3D.position().z = 0;
        rowObject3D.scale().x = rowObject3D.scale().y = rowObject3D.scale().z = 0.28f;
        scene.addChild(rowObject3D);
    }

    // THIS MAKES THE OAR MOVE
    @Override
    public void updateScene() {
        rowObject3D.rotation().x += 1;   // pitch
        rowObject3D.rotation().z += 1;   // roll
        rowObject3D.rotation().y += 0.5; // yaw
    }
}
The rotation() method is documented as: X/Y/Z Euler rotation of the object, using Euler angles. Units should be in degrees, to match OpenGL usage.
So the question is: how can I define the values that make the oar simulate a real rowing movement?
This looks more like a mathematics question.
I'll present some general tips:
On positioning:
The fixed point of the oar is where the oar is held on the boat, so the oar's rotation is relative to that point, not the center of the oar.
And on top of that, the boat is moving, so the oar's "fixed" point moves as well.
The order for positioning should be:
Translate to the boat position.
Translate the oar so its center is relative to the correct spot on the boat.
Apply the oar rotation.
Draw it.
On animation:
It will be easier to animate if you alter your model so the origin is at the point where the oar is fixed, but it may complicate other animations/calculations if you later intend to do more complex manipulations of the oar.
Interpolation of Euler rotations is a mess; I suggest quaternions. You can grab the angles from that nice picture and interpolate. (If you still need Euler angles, you can convert the end result back.)
For simple animations (say you just want the oar to repeatedly rotate in some pattern), you will hardly find a better method than key-frames: create a list of coordinates/angles along the path you want the oar to follow, and iterate through them, interpolating.
With enough points, a simple linear interpolation will do just fine.
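To make the key-frame idea concrete, here is a small sketch (written in C++ for illustration; the structure maps directly onto the Java updateScene() above, and the pose values are placeholders): store a handful of poses along the stroke and linearly interpolate between consecutive ones each frame.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct OarPose { float pitch, roll, yaw; };   // Euler angles in degrees

    // Sample the stroke at t in [0, 1]; assumes at least two key-frames.
    OarPose SampleStroke(const std::vector<OarPose>& keys, float t)
    {
        const float scaled = t * (keys.size() - 1);
        const std::size_t i = static_cast<std::size_t>(scaled);
        const std::size_t j = std::min(i + 1, keys.size() - 1);
        const float a = scaled - i;               // blend factor between key i and j
        return { keys[i].pitch + a * (keys[j].pitch - keys[i].pitch),
                 keys[i].roll  + a * (keys[j].roll  - keys[i].roll),
                 keys[i].yaw   + a * (keys[j].yaw   - keys[i].yaw) };
    }

Each frame you advance t (wrapping back to 0 for a repeating stroke), sample the pose, and assign the three angles to rotation().x/y/z.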

Box2D - When creating a body at runtime, the body does not collide

I've been working on a game with destructible environments and I've come up with a solution where I check for possible destruction within my ContactListener object. Obviously because this is taking place within Step(), I postpone processing the destruction until the moment after the step. I do this by pooling "destruction events" that need to be processed within the contact listener, and then immediately after Step() calling something like contactListener->processDestructionEvents();.
The way I do this is by capturing the colliding fixtures within the beginContact event and then determining the angle of impact, then using that angle to raycast on the fixture itself. I then grab the vertices from the b2PolygonShape of the fixture, then generate two new shapes which are split based on the impact and exit points of the ray. The original fixture is destroyed on the body, and then a new fixture is generated for the first new shape and added to the original body. For the second shape, a new body is generated and that shape is added to this new body.
Anyway everything works great, in debug view I can see that the new shapes have been generated and are all in place, as they should be. However, I get really screwed up behavior at this point. As soon as this process is complete, neither the original nor the newly generated body will collide with anything. If I enable continuous physics, SOMETIMES a dynamic object will collide with one of the edges of these bodies/fixtures, but not always. I'm wondering if it's something I'm doing wrong in my approach to rebuilding bodies/fixtures at runtime. Here is the code for generating the new objects, any help would be greatly appreciated.
void PhysicsContactListener::processDestructionEvents() {
    if (!hasDestructionEvents) { return; }

    for (destructionEventsIterator = destructionEvents.begin(); destructionEventsIterator != destructionEvents.end(); ++destructionEventsIterator) {
        b2Filter f1, f2;
        f1.groupIndex   = destructionEventsIterator->originalFixture->GetFilterData().groupIndex;
        f1.categoryBits = destructionEventsIterator->originalFixture->GetFilterData().categoryBits;
        f1.maskBits     = destructionEventsIterator->originalFixture->GetFilterData().maskBits;
        f2.groupIndex   = destructionEventsIterator->originalFixture->GetFilterData().groupIndex;
        f2.categoryBits = destructionEventsIterator->originalFixture->GetFilterData().categoryBits;
        f2.maskBits     = destructionEventsIterator->originalFixture->GetFilterData().maskBits;

        b2PolygonShape newShape0 = destructionEventsIterator->newFixtures[0];
        b2FixtureDef fixture0Def;
        fixture0Def.shape = &newShape0;
        fixture0Def.density = 1.0f;
        fixture0Def.restitution = 0.2f;
        b2Fixture* fixture1 = destructionEventsIterator->hostBody->CreateFixture(&fixture0Def);
        fixture1->SetFilterData(f1);
        //destructionEventsIterator->hostBody->SetAwake(true);
        destructionEventsIterator->hostBody->ResetMassData();
        //destructionEventsIterator->hostBody->SetActive(true);
        destructionEventsIterator->hostBody->SetTransform(destructionEventsIterator->hostBody->GetPosition(), destructionEventsIterator->hostBody->GetAngle());

        b2BodyDef bd;
        bd.position = destructionEventsIterator->hostBody->GetPosition();
        bd.angle = destructionEventsIterator->hostBody->GetAngle();
        bd.type = destructionEventsIterator->hostBody->GetType();
        b2Body* newBody = destructionEventsIterator->hostBody->GetWorld()->CreateBody(&bd);

        b2PolygonShape* newShape1 = (b2PolygonShape*)(&destructionEventsIterator->newFixtures[1]);
        b2Fixture* fixture2 = newBody->CreateFixture(newShape1, destructionEventsIterator->hostBodyDensity);
        fixture2->SetFilterData(f2);
        newBody->SetAngularVelocity(destructionEventsIterator->hostBody->GetAngularVelocity());
        newBody->SetLinearVelocity(destructionEventsIterator->hostBody->GetLinearVelocity());
        //newBody->SetAwake(true);
        newBody->ResetMassData();
        //newBody->SetActive(true);
        newBody->SetTransform(destructionEventsIterator->hostBody->GetPosition(), destructionEventsIterator->hostBody->GetAngle());

        destructionEventsIterator->hostBody->DestroyFixture(destructionEventsIterator->originalFixture);
    }
}
The two pieces don't collide with each other? Take a look at the categoryBits and maskBits values that each fixture ends up with; it looks like each piece is given the same values for these. My guess is that you are just overlooking the fact that these masks are checked against each other both ways, e.g. from the Box2D source code:
bool collide =
    (filterA.maskBits & filterB.categoryBits) != 0 &&
    (filterA.categoryBits & filterB.maskBits) != 0;
On the other hand if you mean the pieces collide with nothing at all and simply fall through the ground and down forever except for SOMETIMES, then I might suspect an incorrect polygon winding.
btw a b2Filter holds only primitives so you could assign those directly:
b2Filter f1 = destructionEventsIterator->originalFixture->GetFilterData();
...also, the first SetTransform and the second ResetMassData are redundant.
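If it is the fall-through-everything case, a quick sanity check on the split polygons can confirm the winding suspicion. This is only a sketch; the vertex array and count are placeholders for however you store the split shape. Box2D expects counter-clockwise vertex order.

    // Returns the signed area of a polygon: negative means the vertices are in
    // clockwise order, i.e. the wrong winding for a b2PolygonShape.
    float SignedArea(const b2Vec2* v, int32 count)
    {
        float area = 0.0f;
        for (int32 i = 0; i < count; ++i)
        {
            const b2Vec2& p = v[i];
            const b2Vec2& q = v[(i + 1) % count];
            area += p.x * q.y - p.y * q.x;   // cross product of consecutive edges
        }
        return 0.5f * area;
    }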

Box2D creating rectangular bounding boxes around angled line bodies

I'm having a lot of trouble detecting collisions in a zero-G space game. Hopefully this image will help me explain:
http://i.stack.imgur.com/f7AHO.png
The white rectangle is a static body with a b2PolygonShape fixture attached, as such:
// Create the line physics body definition
b2BodyDef wallBodyDef;
wallBodyDef.position.Set(0.0f, 0.0f);

// Create the line physics body in the physics world
wallBodyDef.type = b2_staticBody; // Set as a static body
m_Body = world->CreateBody(&wallBodyDef);

// Create the vertex array which will be used to make the physics shape
b2Vec2 vertices[4];
vertices[0].Set(m_Point1.x, m_Point1.y); // Point 1
vertices[1].Set(m_Point1.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness), m_Point1.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 2
vertices[2].Set(m_Point2.x + (sin(angle - 90*(float)DEG_TO_RAD)*m_Thickness), m_Point2.y - (cos(angle - 90*(float)DEG_TO_RAD)*m_Thickness)); // Point 3
vertices[3].Set(m_Point2.x, m_Point2.y); // Point 4
int32 count = 4; // Vertex count

b2PolygonShape wallShape; // Create the line physics shape
wallShape.Set(vertices, count); // Set the physics shape using the vertex array above

// Define the dynamic body fixture
b2FixtureDef fixtureDef;
fixtureDef.shape = &wallShape; // Set the line shape
fixtureDef.density = 0.0f; // Set the density
fixtureDef.friction = 0.0f; // Set the friction
fixtureDef.restitution = 0.5f; // Set the restitution

// Add the shape to the body
m_Fixture = m_Body->CreateFixture(&fixtureDef);
m_Fixture->SetUserData("Wall");
You'll have to trust me that that makes the shape in the image. The physics simulation works perfectly; the player (small triangle) collides with the body with pixel-perfect precision. However, I run into a problem when I try to determine when a collision takes place so I can remove health and whatnot. The code I am using for this is as follows:
/*------ Check for collisions ------*/
if (m_Physics->GetWorld()->GetContactCount() > 0)
{
    if (m_Physics->GetWorld()->GetContactList()->GetFixtureA()->GetUserData() == "Player" &&
        m_Physics->GetWorld()->GetContactList()->GetFixtureB()->GetUserData() == "Wall")
    {
        m_Player->CollideWall();
    }
}
I'm aware there are probably better ways to do collisions, but I'm just a beginner and haven't found anywhere that explains how to do listeners and callbacks well enough for me to understand. The problem I have is that GetContactCount() shows a contact whenever the player body enters the purple box above. Obviously there is a rectangular bounding box being created that encompasses the white rectangle.
I've tried making the fixture an EdgeShape, and the same thing happens. Does anyone have any idea what is going on here? I'd really like to get collision nailed so I can move on to other things. Thank you very much for any help.
The bounding box is an AABB (axis-aligned bounding box), which means it will always be aligned with the Cartesian axes. AABBs are normally used for broad-phase collision detection because it's a relatively simple (and inexpensive) computation.
You need to make sure that you're testing against the OBB (oriented bounding box) for the objects if you want more accurate (but not pixel perfect, as Micah pointed out) results.
Also, I agree with Micah's answer that you will most likely need a more general system for handling collisions. Even if you only ever have walls and the player, there's no guarantee which object will be A and which will be B. And as you add other object types, this will quickly unravel.
Creating the contact listener isn't terribly difficult, from the docs (added to attempt to handle your situation):
class MyContactListener : public b2ContactListener
{
private:
    PlayerClass *m_Player;

public:
    MyContactListener(PlayerClass *player) : m_Player(player)
    { }

    void BeginContact(b2Contact* contact)
    { /* handle begin event */ }

    void EndContact(b2Contact* contact)
    {
        if (contact->GetFixtureA()->GetUserData() == m_Player
            || contact->GetFixtureB()->GetUserData() == m_Player)
        {
            m_Player->CollideWall();
        }
    }

    /* we're not interested in these for the time being */
    void PreSolve(b2Contact* contact, const b2Manifold* oldManifold)
    { /* handle pre-solve event */ }

    void PostSolve(b2Contact* contact, const b2ContactImpulse* impulse)
    { /* handle post-solve event */ }
};
This requires you to assign m_Player to the player's fixture's user data field. Then you can use the contact listener like so:
m_Physics->GetWorld()->SetContactListener(new MyContactListener(m_Player));
How do you know GetFixtureA is the player and B is the wall? Could it be reversed? Could there be a FixtureC? I would think you would need a more generic solution.
I've used a similar graphics framework (Qt), and it had something that let you grab any two objects and call something like 'hasCollided', which would return a bool. You could get away with not using a callback and just call it in drawScene() or check it periodically.
In Box2D the existence of a contact just means that the AABBs of two fixtures overlaps. It does not necessarily mean that the shapes of the fixtures themselves are touching.
You can use the IsTouching() function of a contact to check if the shapes are actually touching, but the preferred way to deal with collisions is to use the callback feature to have the engine tell you whenever two fixtures start/stop touching. Using callbacks is much more efficient and easier to manage in the long run, though it may be a little more effort to set up initially and there are a few things to be careful about - see here for an example: http://www.iforce2d.net/b2dtut/collision-callbacks
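For the stop-gap of polling contacts directly, a sketch of the IsTouching() check might look like the following. Member names follow the question's code; note that comparing string literals by pointer, as the question does, is fragile, and storing object pointers in the user data (as in the listener above) is more reliable.

    // Walk the contact list after Step() and react only to shapes that really touch.
    for (b2Contact* contact = m_Physics->GetWorld()->GetContactList();
         contact != nullptr; contact = contact->GetNext())
    {
        if (!contact->IsTouching())
            continue; // AABBs overlap, but the shapes themselves are not in contact

        void* a = contact->GetFixtureA()->GetUserData();
        void* b = contact->GetFixtureB()->GetUserData();

        // Check both orderings, since either fixture can be A or B.
        if ((a == "Player" && b == "Wall") || (a == "Wall" && b == "Player"))
        {
            m_Player->CollideWall();
        }
    }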