I'm new to OMNeT++, and I need to write a simulation of vehicles on a map.
I have a compound module called "Vehicle" which contains 2 simple modules:
one for movement (all the cars will move on a map)
one for the communication between all the vehicles
Does anybody know how to implement the movement part?
You could use the VEINS framework, which provides all you need to simulate vehicular networks. Alternatively, the INETMANET framework also contains several mobility and wireless communication models.
My objective is to use a finite horizon linear quadratic regulator (FHLQR) to follow a trajectory generated by a mathematical program and simulate it in Gazebo. As FHLQR needs a system as input, I'm using the QuadrotorPlant, but I'm not sure if this is ideal. My problems arise when I try to connect the state from Gazebo into Drake. In short, what would be the proper way of coupling state from Gazebo with a controller such as FHLQR?
I've thought about editing the context of QuadrotorPlant to mirror the state in Gazebo, but after this update I'm having trouble getting control output from the controller. I've also thought about coupling the Simulator between the output of the controller and input of the QuadrotorPlant, but haven't figured out how to modify the Simulator to mirror Gazebo.
As for your Gazebo interface, assuming you're all in C++, I'd imagine it will look something like this:
// setup
auto regulator = MakeFiniteHorizonLinearQuadraticRegulator(...);
auto context = regulator->CreateDefaultContext();
// during execution
context->SetTime(time_from_gazebo);
// state_from_gazebo is an Eigen::VectorXd holding the state read from Gazebo
context->FixInputPort(0, state_from_gazebo);
Eigen::VectorXd u = regulator->get_output_port(0).Eval(*context);
// then apply u to your gazebo interface
I think we should be able to help, but probably need a few more details.
My mental model of your setup is that you use QuadrotorPlant to design the trajectory and the MakeFiniteHorizonLinearQuadraticRegulator methods in drake... the result of that is a drake System that has an input port expecting a vector of doubles representing the state of the quadrotor, and it outputs a vector of doubles that represent the thrust commands for the quadrotor.
Separately, you have a Gazebo simulator with a quadrotor that accepts commands and simulates the dynamics.
Are you trying to connect them via message passing (as opposed to a single executable)? Presumably with ROS1 messages? (If yes, then we have little systems in drake that can send/receive ROS messages, which we can point you to)
FWIW, I would definitely recommend trying the workflow with just running an LQR controller in drake before trying the trajectory version.
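A minimal sketch of that plain-LQR workflow, using Drake's matrix-based LinearQuadraticRegulator (the A, B, Q, R matrices and the error-state convention here are placeholders for illustration, not from your setup):
#include <Eigen/Dense>
#include <drake/systems/controllers/linear_quadratic_regulator.h>

// Infinite-horizon LQR about a fixed point. A, B come from linearizing
// the quadrotor dynamics; Q, R are cost weights you choose.
Eigen::VectorXd ComputeLqrControl(const Eigen::MatrixXd& A,
                                  const Eigen::MatrixXd& B,
                                  const Eigen::MatrixXd& Q,
                                  const Eigen::MatrixXd& R,
                                  const Eigen::VectorXd& x_error) {
  auto result =
      drake::systems::controllers::LinearQuadraticRegulator(A, B, Q, R);
  // u = -K (x - x_nominal); x_error is the state error measured from Gazebo.
  return -result.K * x_error;
}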
I have a simulation that currently uses BonnMotionMobility, where I specify in advance where nodes will be during the simulation. But I want some nodes to change their positions during the simulation as a consequence of certain events.
Is there any function to "set a new position" that can be called somewhere in the middle of the running simulation (some reactive mobility model)?
I hope I was clear enough on my problem.
Thank you for your answers.
Not that way. If you want to implement your own logic for how nodes should move, you should implement your own mobility model (deriving from MovingMobilityBase or something appropriate). You should pass all the needed information to the mobility module, i.e. send events or signals there, and the movement logic should be handled inside the mobility module. In the current architecture, determining the location of the module is the sole responsibility of the mobility module.
What you are suggesting (by looking for a setCoordinates()-like function) is that you want to move that responsibility out into other, unrelated modules, which is usually not a good decision.
In short, you should write your own mobility module that does that. Obviously you can write a simple model that has a setCoordinates() function and call that from your other code.
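A rough sketch of what such a module could look like, assuming INET's MovingMobilityBase (class and method names may differ between INET versions; ReactiveMobility and setTargetPosition are invented here):
// ReactiveMobility.cc - hypothetical sketch, not a drop-in module
#include "inet/mobility/base/MovingMobilityBase.h"

using namespace inet;

class ReactiveMobility : public MovingMobilityBase
{
  protected:
    Coord targetPosition;

    // Called by the base class whenever the position must be updated.
    virtual void move() override {
        lastPosition = targetPosition;  // simplest possible "jump" logic
    }

  public:
    // Other modules call this (e.g. in reaction to an event or signal)
    // to request a new position; the movement logic itself stays in here.
    void setTargetPosition(const Coord& pos) {
        Enter_Method_Silent();  // needed for direct calls from other modules
        targetPosition = pos;
        scheduleUpdate();       // trigger a position update
    }
};

Define_Module(ReactiveMobility);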
What is the relationship between SDL_Joystick and SDL_GameController? These are the only things I know of right now:
SDL_GameController and related functions are all part of a new API introduced in SDL2.
SDL_GameController and related functions are built on top of the existing SDL_Joystick API.
(Working Draft) You can obtain an instance of SDL_Joystick by calling SDL_GameControllerGetJoystick() and passing in an instance of SDL_GameController.
(Working Draft) You can obtain an instance of SDL_GameController by first calling SDL_JoystickInstanceID() and passing in an instance of SDL_Joystick, then passing the resulting SDL_JoystickID to SDL_GameControllerFromInstanceID().
Even though SDL_Joystick and SDL_GameController seem interchangeable, it looks like SDL_GameController is here to replace and slowly succeed SDL_Joystick.
The reason is that when polling for SDL_Event, the SDL_Event instance contains both the SDL_Event::jbutton and SDL_Event::cbutton structs, representing the SDL_Joystick buttons and SDL_GameController buttons, respectively. I guess I can use either one, or both, button events for the player controls.
I could be wrong here.
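To illustrate what I mean, this is roughly how I imagine handling either event (handleJoyButton and handleControllerButton are placeholders of mine):
SDL_Event e;
while (SDL_PollEvent(&e)) {
    if (e.type == SDL_JOYBUTTONDOWN)              // raw joystick button
        handleJoyButton(e.jbutton.button);
    else if (e.type == SDL_CONTROLLERBUTTONDOWN)  // mapped controller button
        handleControllerButton(e.cbutton.button);
}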
I would like to ask:
What are the differences between SDL_Joystick and SDL_GameController?
Is SDL_Joystick now referring to this controller?
And the same for SDL_GameController?
What are the advantages/disadvantages of using SDL_Joystick over SDL_GameController (and vice versa)?
First of all, SDL game controllers are an extension of SDL joysticks (for the scope of this answer, when I say "controller" or "joystick" I mean SDL's implementation, not the hardware device category in general). As the wiki says,
This category contains functions for handling game controllers and for mapping joysticks to game controller semantics. This is built on top of the existing joystick API.
If you are running your game from Steam, the game controller mapping is automatically provided for your game.
Internally SDL uses joystick events and processes them to produce game controller events according to the controller mapping. Hence one may say that the joystick is the lower-level thing, while the game controller is a generalisation upon joysticks that produces more predictable/compatible (but more constrained) input for games that want gamepad-like input devices.
With a game controller, you can program input for just one xbox-like controller thing, and SDL will make the user's controller compatible with that (sometimes with the user's help - there are way too many different controllers, and we can't possibly expect SDL to have configurations for all of them). Of course, if the controller is very different (or not a controller at all - e.g. flight simulation sticks, wheels, etc.), that would be problematic.
Basically, the game controller provides xbox-like buttons and axes for the user side, freeing the application developer from the need to support controller remapping - as remapping is done in SDL itself. For some popular controllers SDL already has builtin mappings, and for others a user-defined mapping can be loaded via an environment variable.
There is also a configuration tool that simplifies remapping for the end user, including exporting the resulting configuration to said environment variable. Steam also has a builtin configuration tool, whose configuration it (supposedly - I've never used it) exports to SDL - essentially making users themselves responsible for configuring their controllers.
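To make this concrete, here is a minimal sketch of the usual pattern in SDL2: open a game controller wherever a mapping exists, and fall back to the raw joystick API otherwise.
#include <SDL.h>
#include <cstdio>

int main() {
    SDL_Init(SDL_INIT_GAMECONTROLLER);  // implies SDL_INIT_JOYSTICK

    for (int i = 0; i < SDL_NumJoysticks(); ++i) {
        if (SDL_IsGameController(i)) {
            // A mapping is known: use the higher-level, xbox-like API.
            SDL_GameController* pad = SDL_GameControllerOpen(i);
            // The underlying joystick is still reachable if you need it:
            SDL_Joystick* joy = SDL_GameControllerGetJoystick(pad);
            (void)joy;
            printf("controller: %s\n", SDL_GameControllerName(pad));
        } else {
            // No mapping known: fall back to raw axes/buttons.
            SDL_Joystick* joy = SDL_JoystickOpen(i);
            printf("joystick: %s\n", SDL_JoystickName(joy));
        }
    }
    SDL_Quit();
    return 0;
}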
Hello, I'm currently having a hard time with a project I'm working on.
It is a video game made using Unreal Engine 4, and I have to implement the network part of the project. First of all, we have multiple pawns possessed by players, and every player is allowed to spawn blocks (of different types). We are currently using a Grid (an actor) that stores all the blocks in the world, and a GridManager (a UObject) to manage the grid.
The GridManager can create AActors: BuildingAction, PushAction, FallingAction, and all the other actions that can be applied to any block in the grid. I know that for spawning these actions and blocks I need to call RPCs from either the PlayerController or the pawn. The problem is that because the GridManager manages the grid and spawns the actions in the world, the actions don't work in multiplayer. I guess spawning them in the PlayerController would work, but it would need a function for every type of block and a function for every type of action on the PlayerController, and it makes no sense to store these functions there. If anyone could help me find a good way to make this work on the network, it would be very helpful.
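For illustration, the kind of generic shape I'd hope for (all names here are invented, not our project's API) is a single server RPC that forwards any block/action request to the grid, rather than one function per type:
// In AMyPlayerController (hypothetical names throughout):
UFUNCTION(Server, Reliable, WithValidation)
void ServerRequestGridAction(EGridActionType Type, FIntVector Cell);

bool AMyPlayerController::ServerRequestGridAction_Validate(
    EGridActionType Type, FIntVector Cell) { return true; }

void AMyPlayerController::ServerRequestGridAction_Implementation(
    EGridActionType Type, FIntVector Cell)
{
    // Runs on the server, which owns the authoritative Grid actor; the
    // GridManager then spawns the matching action there, so the spawned
    // actors replicate to all clients.
    if (AGrid* Grid = GetGrid())
        Grid->GetGridManager()->ApplyAction(Type, Cell);
}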
I'm designing a component-based system and everything works well, but a critical feature is missing: the ability to obtain a component of a given type from within a class of type Object, the class to which components can be added and removed. In the Object class there is a vector of components:
vector<Component*> pComponents;
And in the Component class, a Component has to have a name. So, a component such as Drawable would be added like so:
pPlayer->addComponent(new Drawable("Drawable"));
And that's all that's required for the player to be drawable. Now this is the problem: when it comes to adding loads of components that rely on other components, is there an established way for components to communicate with one another?
Currently, in my Game class (which is not of type Object, although I might have it derive from Object, though I'm not sure that's a good design decision) I have this code in the update function:
void Game::update()
{
    pPlayer->update(0);
    pSpriteLoader->getSprite()->move(pPlayer->getVelocity());
}
I'm using SFML2 solely because it's easy to use for 2D graphics. The player's update function calls the components' respective update functions. The sprite loader is also a component; it is in charge of loading sprites through textures/images that can be read from file or memory. If I were to omit this line of code, the sprite would not appear to move on the screen. As you can see, it's odd that pPlayer has a getVelocity() function, and that's because I haven't yet moved all the physics stuff into its own component. What is dreadful is that once I have moved the physics stuff out of the Player class into a Physical component class, how can I get these components to communicate with each other without having to resort to the lines of code shown above?
My solution is to create a component manager which registers each component, so that any component that requires another component consults this component manager rather than the other component directly. Would this be a viable solution, and how should I proceed with the logic of such a component manager?
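For reference, the name-based lookup I currently have in mind is roughly this (a sketch, assuming Component has a getName() accessor):
Component* Object::getComponent(const std::string& name)
{
    for (Component* c : pComponents)
        if (c->getName() == name)
            return c;
    return nullptr;  // no component with that name attached
}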
Well, I suppose you would start by designing a messaging system.
It appears that you want to heavily decouple code and create components as much as possible. This is fine, I suppose, but the answer to having dependencies without coupling is probably something along the lines of message passing.
Say you need an Achievement system. This is a perfect example of a system that needs to stick its hand into as many nooks and crannies as possible in order to allow for the greatest flexibility in designing achievements. But how would you be able to stick your hand into, say, the Physics, AI, and Input systems all at the same time and not write spaghetti code? The answer is to put listeners on event queues and then run them by certain criteria based on the contents of the messages.
So for each component, you might want to inherit a common message sending/receiving component, possibly with generics, in order to send data messages. For example, say you shoot a laser in an FPS game. The laser will most likely make a sound, and it will most likely need an animation. You will probably want to send a message to the sound system to play a certain sound, and then send a message to the physics system to simulate the effects of the laser.
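A very small sketch of that queue-and-listener idea (all names here are invented for illustration):
#include <functional>
#include <queue>
#include <string>
#include <unordered_map>
#include <vector>

struct Message {
    std::string type;    // e.g. "LaserFired"
    float x = 0, y = 0;  // payload kept trivial for the sketch
};

class MessageBus {
    std::unordered_map<std::string,
        std::vector<std::function<void(const Message&)>>> listeners;
    std::queue<Message> pending;
public:
    void subscribe(const std::string& type,
                   std::function<void(const Message&)> fn) {
        listeners[type].push_back(std::move(fn));
    }
    void post(Message m) { pending.push(std::move(m)); }
    // Call once per frame: deliver everything queued so far.
    void dispatch() {
        while (!pending.empty()) {
            Message m = std::move(pending.front());
            pending.pop();
            for (auto& fn : listeners[m.type]) fn(m);
        }
    }
};

// Usage: the sound/physics systems subscribe, the weapon code posts.
// bus.subscribe("LaserFired", [](const Message& m) { /* play sound */ });
// bus.post({"LaserFired", 3.0f, 4.0f});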
If you're interested, I have a really, really crude library for modeling an event system based on queues, listeners, and generic messages here: https://github.com/VermillionAzure/Flexiglass You might get some insight from fellow newbie code.
I really suggest taking a look at Boost.Signals2 or Boost.Asio as well. Knowledge of signals/networking can often help when designing communication systems, even if the system is just between game components.
I've recently been working on an entity-component system in SFML and came across the same problem.
Rather than creating some sort of messaging system that allows components to communicate with each other, I ended up adding 'System' objects to the mix instead. This is another popular method that is often used when implementing component systems, and it's the most flexible one that I've used so far.
Entities are nothing more than collections of components
Components are POD structs and have no methods (e.g. a PositionComponent would have nothing more than X and Y values)
Systems update any entities that have the required components
For example, my 'MovementSystem' iterates through each of my entities and checks whether they have a VelocityComponent and an InputComponent. If they do, it changes the velocity of the current entity according to the key currently being pressed.
This removes the issue with components communicating with each other because now all you need to do is access/modify the data stored in the current entity's components.
There are a few different ways that you can work out whether or not the current entity has the required components - I'm using bitmasks. If you decide to do the same, I highly suggest you take a look at this post for a more thorough explanation of the method: https://gamedev.stackexchange.com/questions/31473/role-of-systems-in-entity-systems-architecture
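To sketch the idea in code (the component names and the bitmask layout here are illustrative, not from any particular library):
#include <cstdint>
#include <vector>

constexpr std::uint32_t POSITION = 1 << 0;
constexpr std::uint32_t VELOCITY = 1 << 1;

struct PositionComponent { float x = 0, y = 0; };   // plain data, no methods
struct VelocityComponent { float dx = 0, dy = 0; };

struct Entity {
    std::uint32_t mask = 0;  // which components this entity has
    PositionComponent position;
    VelocityComponent velocity;
};

// MovementSystem: updates every entity that has the required components.
void movementSystem(std::vector<Entity>& entities, float dt) {
    const std::uint32_t required = POSITION | VELOCITY;
    for (auto& e : entities) {
        if ((e.mask & required) == required) {
            e.position.x += e.velocity.dx * dt;
            e.position.y += e.velocity.dy * dt;
        }
    }
}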