I am creating a game using the EntityX library, which implements the Entity-Component-System (ECS) model, together with SFML.
Right now, I just have four basic components: position, direction, velocity and renderable, plus two systems: a movement system and a render system.
I will separate my question into two parts:
How can I implement basic controls, such as moving my character? With SFML, I can get the events this way:
sf::Event event;
while (window.pollEvent(event))
{
    if (event.type == sf::Event::Closed)
        window.close();
    // other events ...
}
So I could create a ControllerSystem that targets entities with a Controller component, and give the system a reference to my window so that it can handle the events. But the only controllable entity would be my character, so is that really efficient? It also means I would have to loop through the events inside my system to catch KeyPressed events, so I couldn't, for example, handle the window-close condition anywhere except alongside the character's event handling.
Finally, when I get a KeyPressed event - wherever that is - and I want to perform an action, how and where should I dispatch that action so that it gets carried out? I mean, my controller system can move my character, and an AI system could move a mob. I'm not going to write the movement logic twice in my systems, even if it's just changing direction and velocity. I could also have more complex actions, such as casting spells.
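(Purely for illustration, here is a minimal sketch of the kind of system described above, assuming EntityX 1.x and the components listed earlier; the Controller tag, Direction layout and key bindings are hypothetical. The idea is that the controller system and an AI system both only write direction data, and the movement system alone turns it into motion.)

#include <entityx/entityx.h>
#include <SFML/Window/Keyboard.hpp>

struct Controller {};                          // hypothetical tag component
struct Direction { float x = 0.f, y = 0.f; };  // layout assumed from the question

struct ControllerSystem : public entityx::System<ControllerSystem> {
    void update(entityx::EntityManager &es, entityx::EventManager &events,
                entityx::TimeDelta dt) override {
        // Translate input into the same data an AI system would write;
        // the movement system alone applies direction/velocity.
        es.each<Controller, Direction>(
            [](entityx::Entity entity, Controller &, Direction &d) {
                d.x = d.y = 0.f;
                if (sf::Keyboard::isKeyPressed(sf::Keyboard::Left))  d.x = -1.f;
                if (sf::Keyboard::isKeyPressed(sf::Keyboard::Right)) d.x =  1.f;
            });
    }
};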
Coming to my second point: how do I handle different spells/weapons? Suppose I have three keys A, B and C, each launching a totally different spell. When my controller gets an event saying one of these keys has been pressed, I want to launch the corresponding spell. But where should these spells be "stored" in the code? The three spells must be coded differently since they behave differently, but there needs to be a common pattern between them, no?
So should spells be systems, or just plain classes outside the ECS model with access to the entity manager and components, so they can create projectiles and so on?
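(Again just as a sketch of what that "common pattern" could look like: one option is a plain interface outside the ECS that receives the entity manager so it can spawn projectile entities. All names here are hypothetical.)

#include <entityx/entityx.h>

class Spell {
public:
    virtual ~Spell() {}
    // Each spell builds whatever entities it needs via the manager.
    virtual void cast(entityx::EntityManager &entities,
                      entityx::Entity caster) = 0;
};

class Fireball : public Spell {
public:
    void cast(entityx::EntityManager &entities,
              entityx::Entity caster) override {
        entityx::Entity projectile = entities.create();
        // projectile.assign<Position>(...);  projectile.assign<Velocity>(...);
    }
};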
It all remains quite blurry to me, and I can't find tutorials that are specific enough. Thanks for your help.
I've created a simple game using SFML (the player can run, jump, etc. by pressing the arrow keys). How can I do the same for two players at the same time on one computer, with the second player using WASD to move?
You can query the pressed keys with the sf::Keyboard class. sf::Keyboard exposes only static functions, so you don't need to create an instance. Here is a quick example:
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W))
{
    Player2.Move(1, 1);   // second player on WASD
}
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::Left))
{
    Player1.Move(1, 1);   // first player on the arrow keys
}
I'm assuming you are currently using window.pollEvent(event). The problem with that approach is that events arrive one at a time through a queue, and a KeyPressed event only tells you a key just went down (or auto-repeated), not that it is still being held. sf::Keyboard reads the real-time keyboard state instead, so with the code above you can hold multiple keys at the same time and have both actions processed every frame.
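For completeness, a minimal sketch of a frame that combines both mechanisms (the Player objects and Move() call are carried over from the snippet above): pollEvent for one-shot window events, sf::Keyboard for held keys.

sf::Event event;
while (window.pollEvent(event))        // one-shot events: close, resize, text input...
{
    if (event.type == sf::Event::Closed)
        window.close();
}

// Real-time state: any number of keys can be down at once.
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W))
    Player2.Move(0, -1);
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::Left))
    Player1.Move(-1, 0);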
I am attempting to create a framework where I can have multiple events all use the same room.
For example, the player triggers an event and the event builds the room with the passed-in variables.
I am having trouble making the room dynamic. I want the room and the objects in it, including the buttons, to be reusable for every event.
Is this possible to do? Or
do I have to create separate rooms for each unique event I wish to create?
The game is mostly menu-based (like the game "Long Live The Queen"), if that helps.
To answer simply: yes, it is possible.
There are a lot of cases where I have been able to fit a lot of stuff into a single room in Game Maker. Here are a few ways to achieve this "dynamic" game creation:
Files and scripts. You can use a single room to hold a variable number of levels by storing walls, floors, player positions, events, etc. in a file. You can make a script that takes the filename (your "passed in" variable) and simply creates all of the level's instances in that room for you, and another that cleans up the room to prepare for the next level to load. The side effect, though, is that your uniqueness is limited to what information can be stored in those files. You can store menu options and text dialog too if you wish.
"Unique" Objects. Game Maker is an IDE. There is nothing stopping you from making new objects in the editor for a unique case and then adding a handler in another object to create it on demand. You have to manage switching between them though.
Make a "manager" object. It can handle all of the events of something happening in-game (and in that room, for that matter). Plus also it can be used by objects to store non-global variables before being destroyed. For instance, if a character dies, it can set a variable in a manager object to "true", which would trigger a boss to appear.
In terms of manipulating object events dynamically, though: unless you are running something like Game Maker 8, that is no longer possible. Prior to GameMaker:Studio, objects, sprites and other resources could be created dynamically in-game via functions like object_add(), but those functions are obsolete and can no longer be used. Nevertheless, there are always ways around it.
I am interested in porting applications that use a mouse-based interface to a touchscreen-based interface, and I wonder how best to avoid mis-moves when a user with "large and wiggly" fingertips uses my applications.
If the user touches a certain location and their pulse or impatience causes the fingertip to wiggle around by a few pixels, my application should not interpret this as a QEvent::MouseMove and scroll, e.g., a list underneath.
There seem to be two classes that deliver press-like events, namely QTouchEvent and QMouseEvent.
What is the difference between them with respect to the issue above, i.e. reliably detecting the user's intent? Does QTouchEvent solve that issue for me?
Do I need to add event handlers for QTouchEvent in parallel to QMouseEvent (in order to stay backward compatible with mouse users)?
I imagine it would not be a good idea to reimplement this kind of tolerance in every widget; there should be some global place that makes this work, e.g. QApplication or X11 directly.
Try using QApplication::startDragDistance(). It is accessible application-wide. If the check is only needed in one part of your application (e.g. small elements in a list), you can still use a self-defined distance. Then just reimplement QWidget::mouseMoveEvent(QMouseEvent *event) and include (from the docs):
if ((startPos - currentPos).manhattanLength() >= QApplication::startDragDistance())
    startTheDrag();
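A minimal sketch of how that check could sit in a widget (the class and member names are illustrative, not Qt API):

#include <QApplication>
#include <QMouseEvent>
#include <QWidget>

class TouchTolerantWidget : public QWidget
{
protected:
    void mousePressEvent(QMouseEvent *event) override
    {
        m_pressPos = event->pos();   // remember where the press started
        m_dragging = false;
    }

    void mouseMoveEvent(QMouseEvent *event) override
    {
        // Swallow small wiggles below the application-wide threshold.
        if (!m_dragging &&
            (event->pos() - m_pressPos).manhattanLength() < QApplication::startDragDistance())
            return;
        m_dragging = true;
        // ... actual scrolling / dragging goes here ...
    }

private:
    QPoint m_pressPos;
    bool m_dragging = false;
};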
I'm working on a Qt app, and at some point I have a class (I'll call it the "engine") that governs the program: a timer's timeout signal makes the class draw and advance the app logic. Moreover, it receives events that are caught from a QGraphicsScene.
Each engine "tick", the update() is called on the Scene, updating the drawing according to the app evolution.
Naturally, I want the drawing to be synchronized with the reactions to the events; otherwise an object could be drawn while the reaction to an event was destroying that same object, causing a segfault.
I've tried using a queue in the engine so that it would only react to those events at a specific point of an update, thus not interfering with the drawing.
Two problems arose:
I cannot make a copy of a QGraphicsSceneEvent. Apparently the copy constructor is private (which I assume is for a good reason anyway).
While the class is processing the queued events, before drawing, a new event can arrive, which can be "bad" because it is again unsynchronized.
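(To make the first problem concrete: since the event itself cannot be copied, such a queue would have to store data extracted from the event instead, along these lines; the Action struct is purely illustrative.)

#include <QPointF>
#include <QQueue>

// Store what the reaction needs, not the QGraphicsSceneEvent itself.
struct Action
{
    enum Type { Press, Release, Move } type;
    QPointF scenePos;
};

QQueue<Action> pendingActions;   // drained at a fixed point in each tick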
Taking this situation into account, is there an approach that solves it? Is there any standard procedure in Qt for this? I mean, how do I ensure the drawing is never desynchronized from the application's reactions to events?
Sorry for the ambiguous title. What I am wondering is: what is an efficient way to alternate rendering between, say, a main menu, an options menu, and "in the game"?
The only two ways I've come up with so far are to have one render function, with code for each part (menu, ...) and a variable to control what gets drawn, or to have multiple render functions and a function pointer that is set to the appropriate one, so I just call through the function pointer.
I always wonder how more professional games do it.
Try the state-machine / strategy OOP pattern. A game application is in different states: it renders different things and reacts to keyboard/mouse input differently when you are playing than when you are interacting with the menu.
Well, this is a bit more complicated if you want to do it right.
First I create a CScreen class that is the base class for all the screens. It's an abstract class (it uses pure virtual functions) with two functions: Update and Render. Then I derive the screens I need from it, such as CMainMenuScreen, COptionsScreen, CCreditsScreen, CGameScreen, etc., and let each of these classes take care of its own stuff. Each has its own interface, and when, for instance, the options button is pressed in the main menu screen, you switch the current screen to a COptionsScreen. For that you just keep one CScreen *screen variable somewhere and on every frame call screen->Update() and screen->Render(); adjust accordingly if you don't use pointers (though I'd recommend that you do).
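A minimal sketch of that hierarchy (names as above, bodies stubbed out):

class CScreen
{
public:
    virtual ~CScreen() {}
    virtual void Update(float dt) = 0;
    virtual void Render() = 0;
};

class CMainMenuScreen : public CScreen
{
public:
    void Update(float dt) override { /* handle menu input, request screen changes */ }
    void Render() override { /* draw the menu */ }
};

// In the main loop, with CScreen *screen pointing at the current screen:
//     screen->Update(dt);
//     screen->Render();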
If your controls are represented as classes, then a polymorphic render API solves the problem: depending on the object (menu type), the corresponding rendering happens.
class UIObject
{
public:
    virtual bool render() = 0;
    virtual ~UIObject() {}
};

class MainMenu : public UIObject
{
public:
    virtual bool render()
    {
        // rendering for main menu
        return true;
    }
};

class OptionMenu : public UIObject
{
public:
    virtual bool render()
    {
        // rendering for option menu
        return true;
    }
};
Games that I've shipped, that have sold lots of copies, have had a state machine and used switch statements to choose the appropriate functionality. While ostensibly less flexible than an "OOP" state machine, it was far easier to work with than the OOP designs I've subsequently been subjected to.
It actually may be appropriate to have only one render function, but that function shouldn't know specifics about what it's doing. It'll have 3D and 2D passes (at least, for a 3D game, since even those often have 2D UI elements), but it doesn't need to know what "mode" the game is in.
The magic happens in the UpdateMainMenu, UpdateGame or UpdateInGameMenu functions, as well as in the Start and Stop functions associated with switching states. Choose between them with a switch statement on an enum, and use it in two places: switching states (one switch to stop the old state, one more to start the new one) and updating; a sketch follows below.
As I write that, my alarm bells go off that this is a perfect opportunity to use OOP, but from experience I would advise against it. You don't want to end up in a situation where you have a million little states coming and going; you want to constrain things to the major "run modes", and each of those modes should be able to operate on data that tells it what to display. E.g. one state for the entire in-game menu, which "loads" data (usually, "updates its pointer to the data") to indicate what the behavior of the current screen is. There is nothing worse than having a hundred micro-classes and not knowing which one triggers when, not to mention the duplicated logic that often arises from such a design (game developers are very bad at reducing duplication through refactoring).
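A hedged sketch of that switch-on-enum shape (the enum values and functions are illustrative stubs, not from the answer above):

enum GameMode { MODE_MAIN_MENU, MODE_GAME, MODE_IN_GAME_MENU };

// Stubs standing in for the real per-mode logic.
void StartMainMenu() {}   void StopMainMenu() {}   void UpdateMainMenu(float) {}
void StartGame() {}       void StopGame() {}       void UpdateGame(float) {}
void StartInGameMenu() {} void StopInGameMenu() {} void UpdateInGameMenu(float) {}

GameMode g_mode = MODE_MAIN_MENU;

void SwitchMode(GameMode next)
{
    switch (g_mode) {                 // one switch to stop the old state
    case MODE_MAIN_MENU:    StopMainMenu();   break;
    case MODE_GAME:         StopGame();       break;
    case MODE_IN_GAME_MENU: StopInGameMenu(); break;
    }
    g_mode = next;
    switch (g_mode) {                 // one more to start the new one
    case MODE_MAIN_MENU:    StartMainMenu();   break;
    case MODE_GAME:         StartGame();       break;
    case MODE_IN_GAME_MENU: StartInGameMenu(); break;
    }
}

void Update(float dt)
{
    switch (g_mode) {                 // per-frame dispatch
    case MODE_MAIN_MENU:    UpdateMainMenu(dt);   break;
    case MODE_GAME:         UpdateGame(dt);       break;
    case MODE_IN_GAME_MENU: UpdateInGameMenu(dt); break;
    }
}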