Basic design of a multithreaded game server (C++)

How are multithreaded game servers written?
If there are 4 threads, is there one thread running the game loop, and 3 accepting and processing requests? Also: is information sent from the thread running the game loop?

Starkey already pointed out that it depends a great deal on the precise design.
For instance, in games with many clients you'd assign dedicated threads to handling input, but for games with only a few clients (say, 16 or fewer) there's no need for multiple threads.
Some games feature NPCs with considerable smarts. It can make sense to run those on their own threads, but if you have too many you'll need a thread pool so that a group of NPCs can share a single thread.
If you've got a persistent world, you'll need to write state out to a hard disk somewhere (probably via a DB). Since that has serious latencies, you won't want the main game loop waiting on that I/O. That will be another thread, then.
Finally, there's the question of whether you even have a single main game loop. Would an MMO have one loop, or many?
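The persistence point above (not letting the game loop wait on disk or DB latency) is usually handled with a dedicated writer thread draining a queue. A minimal sketch, assuming serialized snapshots as strings; none of these names come from the original post:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Queue of serialized state snapshots; the game loop pushes, a writer thread pops.
class SaveQueue {
public:
    void push(std::string snapshot) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(snapshot));
        }
        cv_.notify_one();
    }
    // Blocks until an item arrives or shutdown() is called; returns false
    // once the queue is drained after shutdown.
    bool pop(std::string& out) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty() || done_; });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
    bool done_ = false;
};
```

The writer thread runs `while (q.pop(s)) saveToDb(s);` while the game loop just calls `q.push(...)` and moves on; `saveToDb` is a placeholder for whatever persistence call you actually use.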

The key is to make sure your game logic is not affected by your threading model.
As such, most game servers look something like this:
main() {
    gGlobalReadOnlyStuff = LoadReadOnlyStuff();
    SpawnThreads(numCores); // cores could be another limiting resource...
    WaitForThreadsToBeReadyToGo();
    while (1) {
        WaitForNetworkInput(networkInput);
        switch (networkInput.msg) {
        case ADMIN_THING: // start/stop webserver, dump logs, whatever...
            DoAdminThing(networkInput.params);
            break;
        case SPAWN_GAME: // replace 'game' with 'zone' or 'instance' as needed
            idThread = ChooseBestThread(); // round robin, random, etc.
            PostStartGameMessageToThread(idThread, networkInput.msg);
            break;
        // ...
        }
    }
}
void ThreadUpdate() {
    threadLocalStuff = LoadThreadLocalStuff();
    SignalThreadIsReadyToGo();
    while (1) {
        lock(myThreadsMessageQueue);
        // copy messages out to keep the lock short
        localMessageQueue = myThreadsMessageQueue;
        myThreadsMessageQueue.clear();
        unlock(myThreadsMessageQueue);
        foreach (message in localMessageQueue) {
            switch (message.msg) {
            case SPAWN_GAME:
                threadLocalStuff.games.MakeNewGame(message.params);
                break;
            case ADMIN_THING__LET_EVERYONE_KNOW_ABOUT_SERVER_RESET:
                ...;
                break;
            // etc...
            }
        }
        foreach (game in threadLocalStuff.games) {
            game.Update(); // each game handles its own network communication
        }
    }
}
The two hard things, then, are coming up with a partition (game, zone, instance, whatever) appropriate for your game, and transitioning things (players, fireballs, epic lootz) across those boundaries. One typical answer is "serialize it through a database", but you could use sockets/messages/files/whatever. But yeah, where and how to make these partitions, and minimizing what crosses the boundaries, is intimately tied to your game design.
(And yes, depending on your setup, there may be a few shared systems, such as logging and memory, that need a multithreading treatment; or, even better, just have one logger/heap per thread.)
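The lock/copy/unlock idiom from ThreadUpdate() above can be made concrete in C++. A minimal sketch (the message layout and names are placeholders, not from the original post):

```cpp
#include <mutex>
#include <utility>
#include <vector>

struct Message { int msg; int param; };  // placeholder payload

class MessageQueue {
public:
    // Called by the main thread to post work to this worker.
    void post(Message m) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(m);
    }
    // Called by the worker: swap under the lock so the critical section
    // stays short, then process the local copy without holding the lock.
    std::vector<Message> drain() {
        std::vector<Message> local;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            local.swap(pending_);
        }
        return local;
    }
private:
    std::mutex mutex_;
    std::vector<Message> pending_;
};
```

Swapping instead of copying keeps the critical section short and leaves the pending queue empty in one step.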

Eclipse RAP Multi-client but single server thread

I understand how RAP creates scopes and has a specific thread for each client, and so on. I also understand how the application scope is unique across several clients; however, I don't know how to access that scope in a single-threaded manner.
I would like to have a server side (with access to databases and such) that runs as a single execution, to ensure it has global knowledge of all transactions and that requests from clients are executed in sequence rather than in parallel.
Currently I am accessing the application context as follows from the UI:
synchronized( MyServer.class ) {
    ApplicationContext appContext = RWT.getApplicationContext();
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
    myServer.doSomething(RWTUtils.getSessionID());
}
Even if I access the myServer object there and trigger requests, the execution will still run in the UI thread.
For now, the only way to ensure the sequence is to use synchronized as follows on my server:
public class MyServer {
    String text = "";

    public void doSomething(String string) {
        try {
            synchronized (this) {
                System.out.println("doSomething - start :" + string);
                text += "[" + string + "]";
                System.out.println("text: " + text);
                Thread.sleep(10000);
                System.out.println("text: " + text);
                System.out.println("doSomething - stop :" + string);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Is there a better way to not have to manage the thread synchronization myself?
Any help is welcome
EDIT:
To better explain myself, here is what I mean. Either I trust the database to handle multiple requests properly and also share other knowledge between clients in a synchronized manner (example A), or I find a solution where another thread handles both the knowledge and the database (example B). Of course, the problem with the latter is that one client may block the others, but that can be managed with background threads for long actions; most actions will be no problem. My initial question was: is there maybe already some specific application-scope thread that does example B, or is example A actually the way to go?
Conclusion (so far)
Basically, option A) is the way to go. Database access will require connection pooling, and shared information will require thoughtful synchronization of key objects. The main attention has to go into the database design and the synchronization of objects, to ensure that two clients cannot write incompatible data at the same time (e.g. contradicting entries that make the result depend on the write order).
First of all, the way you create MyServer in the first snippet is not thread-safe: you are likely to create more than one instance of MyServer.
You need to synchronize the creation of MyServer, for example like this:
synchronized( MyServer.class ) {
    MyServer myServer = (MyServer) appContext.getAttribute("myServer");
    if (myServer == null) {
        myServer = new MyServer();
        appContext.setAttribute("myServer", myServer);
    }
}
See also this post How to implement thread-safe lazy initialization? for other possible solutions.
Furthermore, your code is calling doSomething() on the client thread (i.e. the UI thread) which will cause each client to wait until pending requests of other clients are processed. The client UI will become unresponsive.
To solve this problem, your code should call doSomething() (or any other long-running operation, for that matter) from a background thread (see also Threads in RAP).
When the background thread has finished, you should use Server Push to update the UI.

periodic state machine with boost statechart

I want to implement a state machine that will periodically monitor some status data (the status of my system) and react to it.
This seems to be something quite basic for a state machine (I've had this problem many times before), but I could not find a good way to do it. Here is some pseudo code to explain what I'd like to achieve:
// some data that is updated from IOs, for example
MyData data;
int state = 0;
while( true ) {
    update( &data ); // read a packet from the serial port
                     // and update the data structure
    switch( state ) {
    case 0:
        if( data.field1 == 0 ) state = 1;
        else doSomething();
        break;
    case 1:
        if( data.field2 > 0 ) state = 2;
        else doSomethingElse();
        break;
    // etc.
    }
    usleep(100000); // 100ms
}
Of course on top of that, I want to be able to execute some actions upon entering and exiting a state, maybe do some actions at each iteration of the state, have substates, history, etc. Which is why this simplistic approach quickly becomes impractical, hence boost statechart.
I've thought about some solutions, and I'd like to get some feedback.
1) I could list all my conditions for transitions and create an event for each one. Then I would have a loop that monitors each of those booleans and posts the corresponding event when it toggles, e.g. for my first condition:
if( old_data.field1!=0 && new_data.field1==0 )
    // post an event of type Event1
but it seems that this would quickly become unwieldy.
2) Have a single event that all states react to. This event is posted whenever new status data is available. The current state then examines the data and decides whether to initiate a transition to another state.
3) Have all states inherit from an interface that defines a do_work(const MyData & data) method, called externally in a loop, that examines the data and decides whether to initiate a transition to another state.
Also, I am open to using another framework (e.g. Macho or Boost MSM).
Having worked with Boost MSM, Statechart and QP, my opinion is that you are on the right track with Statechart. MSM is faster, but if you don't have much experience with state machines or metaprogramming, the error messages from MSM are hard to understand when you do something wrong. Boost.Statechart is the cleanest and easiest to understand. As for QP, it's written in an embedded style (lots of preprocessor stuff, weaker static checking), although it also works in a PC environment; I also believe it's slower. It does have the advantage of working on a lot of small ARM and similar processors, but it's not free for commercial use, as opposed to the Boost solutions.
Making an event for every type of state change does not scale. I would make one event type, EvStateChanged, and give it a data member containing a copy of (or reference to) the dataset (and maybe one to the old data, if you need it). You can then use custom reactions to handle whatever you need from any state context. Although default transitions work quite well in a toaster-oven context (which is often used to demonstrate SM functionality), most real-world SMs I have seen have many custom reactions; don't be shy about using them.
I don't really understand enough about your problem to give a code example but something along the lines of:
while( true ) {
    update( &data ); // read a packet from the serial port
                     // and update the data structure
    if( data != oldData ) {
        sm.process_event( EvDataChanged(data, oldData) );
        oldData = data; // remember the snapshot so each change fires once
        timeout = 0;
    }
    else {
        timeout++;
        if( timeout > MAX_TIMEOUT )
            sm.process_event( EvTimeout() );
    }
    usleep(100000); // 100ms
}
and then handle your data changes in custom reactions, depending on the state, along these lines:
SomeState::~SomeState() {
    DoSomethingWhenLeaving();
}

sc::result SomeState::react( const EvDataChanged & e ) {
    if( e.oldData.Field1 != e.newData.Field1 ) {
        DoSomething();
        return transit<OtherState>();
    }
    if( e.oldData.Field2 != e.newData.Field2 ) {
        return transit<ErrorState>(); // Field2 is not allowed to change in this state
    }
    if( e.oldData.Field3 == 4 ) {
        return forward_event(); // the superstate should handle this
    }
    return discard_event(); // don't care about anything else in this context
}

Process Manager

I'm trying to make a kernel simulation as my DSA (data structures and algorithms) project in C++. There will be different modules (process manager, memory manager, etc.) in it. Right now I have to make a process manager, and I only have a little idea about it (e.g. that I can use a queue). Can anyone help me figure out how to make a process manager in C++?
First, make a scheduler (unless you understand "process manager" to mean what is commonly known as a "scheduler"). You must decide upon a multitasking model: cooperative vs. preemptive. Preemptive may be difficult: it requires some kind of interrupts and so on, and may be unnecessarily complex for a school project.
If you don't know which model to pick, I strongly suggest cooperative multitasking. In this model each process takes a certain small slice of time, then returns control to the scheduler by itself, say after going through one iteration of its "main loop". This is usually done by the main loop calling some kind of "task()" function on the process class, with task() ending in a 'return' and containing no long loops.
Start with a model of a "task/process". Should it be loadable (say, as a shared object file), or predefined at startup (a class)? It needs an entry point, persistent state storage, and a "main loop" routine with a finite state machine (usually implemented as a switch that moves between various states). The task works by repeatedly launching the "entry point" routine.
The states to be implemented will likely be:
init, launched on startup, once
idle - check for requests for activity, if none, return control
various "work" states.
Once you have that, prepare a dynamic queue of such tasks. Adding, removing, iterating, elevated priority = call out of order, and so on. The "scheduler" iterates through all the tasks and starts the "startup routine" of each of them.
When you have that ready, you can write what is commonly known as "task manager" - a program that edits the list. Remove a program from the queue, add a new one, change priority, pause etc.
To help you imagine, you currently usually write:
int main()
{
    do_something1();
    do_something2();
}

void do_something1()
{
    //initialize
    ...perform stuff
    int x = 0;
    //main loop
    do {
        if(condition...) {
            ...perform stuff
        } else {
            ...perform other stuff
            blargh(x);
            x++;
        }
    } while(!end);
    //ending
    //finish...
    ...mop up.
}
What you need to write:
int main()
{
    //main loop
    do {
        do_something1();
        do_something2();
    } while(!global_end);
}

void do_something1()
{
    static state_enum state = STATE_INI;
    static int x = 0;
    switch(state)
    {
    case STATE_INI:
        //initialize
        ...perform stuff
        state = STATE_WORK1;
        x = 0;
        break;
    case STATE_WORK1:
        //main loop, mode 1
        ...perform stuff
        if(condition)  state = STATE_WORK2;
        if(condition2) state = STATE_END;
        if(condition4) state = STATE_IDLE;
        break;
    case STATE_WORK2:
        //main loop, mode 2
        ...perform stuff
        blargh(x);
        x++;
        if(condition3) state = STATE_WORK1;
        if(condition4) state = STATE_IDLE;
        break;
    case STATE_IDLE:
        //do nothing, don't perform any stuff
        if(any_condition) state = STATE_WORK1;
        break;
    case STATE_END:
        //finish...
        ...mop up.
        break;
    }
    return;
}
...and your process manager will be replacing what constitutes static calls to
do_something1();
do_something2();
with a dynamic list of functions to call.
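That dynamic list can be as simple as a vector of callables that the scheduler walks once per pass. A minimal sketch (names are illustrative, not part of the original answer):

```cpp
#include <functional>
#include <utility>
#include <vector>

// Cooperative scheduler: each task gets one short slice per pass
// and is expected to return quickly, as described above.
class Scheduler {
public:
    void add(std::function<void()> task) { tasks_.push_back(std::move(task)); }

    // One iteration of the kernel's main loop: run every task once.
    void runOnePass() {
        for (auto& task : tasks_) task();
    }
private:
    std::vector<std::function<void()>> tasks_;
};
```

Each task keeps its own state (like the static `state` variable in do_something1() above) and must return quickly so the other tasks get their slices; a "task manager" would then just add, remove, or reorder entries in that vector.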
FYI, writing apps for a preemptive scheduling system is much easier: you just write them like the first version, never worrying about preserving state between calls (statics), returning control, or keeping each case statement short and sweet with only very short loops inside (unrolling the bigger ones). But writing the scheduler itself, which must interrupt a program, save its state, then restore it and resume from where it was interrupted, is much, much harder.
A process manager manages processes. Obviously, to refine that, you first need to define what constitutes a process in your OS. There's no reason for a process manager to deal with threads when all your processes are single-threaded, for instance. And if you don't have virtual memory, that doesn't need to be managed either.
You did note that you'd have a memory manager. This can certainly live outside the process manager, but you would need to define the interface between them. For instance, the process manager needs to allocate memory to load the program code at startup; the program itself cannot do that (a chicken-and-egg problem).

Symbian C++ - synchronous Bluetooth discovery with timeout using RHostResolver

I am writing an application in Qt to be deployed on Symbian S60 platform. Unfortunately, it needs to have Bluetooth functionality - nothing really advanced, just simple RFCOMM client socket and device discovery. To be exact, the application is expected to work on two platforms - Windows PC and aforementioned S60.
Of course, since Qt lacks Bluetooth support, it has to be coded in native API - Winsock2 on Windows and Symbian C++ on S60 - I'm coding a simple abstraction layer. And I have some problems with the discovery part on Symbian.
The discovery call in the abstraction layer should work synchronously - it blocks until the end of the discovery and returns all the devices as a QList. I don't have the exact code right now, but I had something like that:
RHostResolver resolver;
TInquirySockAddr addr;
// OMITTED: resolver and addr initialization
TRequestStatus err;
TNameEntry entry;
resolver.GetByAddress(addr, entry, err);
while (true) {
    User::WaitForRequest(err);
    if (err == KErrHostResNoMoreResults) {
        break;
    } else if (err != KErrNone) {
        // OMITTED: error handling routine, not very important right now
    }
    // OMITTED: entry processing, adding to result QList
    resolver.Next(entry, err);
}
resolver.Close();
Yes, I know that User::WaitForRequest is evil, that coding Symbian-like, I should use active objects, and so on. But it's just not what I need. I need a simple, synchronous way of doing device discovery.
And the code above does work. There's one quirk, however - I'd like to have a timeout during the discovery. That is, I want the discovery to take no more than, say, 15 seconds - parametrized in a function call. I tried to do something like this:
RTimer timer;
TRequestStatus timerStatus;
timer.CreateLocal();
RHostResolver resolver;
TInquirySockAddr addr;
// OMITTED: resolver and addr initialization
TRequestStatus err;
TNameEntry entry;
timer.After(timerStatus, timeout*1000000);
resolver.GetByAddress(addr, entry, err);
while (true) {
    User::WaitForRequest(err, timerStatus);
    if (timerStatus != KRequestPending) { // timeout
        resolver.Cancel();
        User::WaitForRequest(err);
        break;
    }
    if (err == KErrHostResNoMoreResults) {
        timer.Cancel();
        User::WaitForRequest(timerStatus);
        break;
    } else if (err != KErrNone) {
        // OMITTED: error handling routine, not very important right now
    }
    // OMITTED: entry processing, adding to result QList
    resolver.Next(entry, err);
}
timer.Close();
resolver.Close();
And this code kinda works. Even more, the way it works is functionally correct - the timeout works, the devices discovered so far are returned, and if the discovery ends earlier, then it exits without waiting for the timer. The problem is - it leaves a stray thread in the program. That means, when I exit my app, its process is still loaded in background, doing nothing. And I'm not the type of programmer who would be satisfied with a "fix" like making the "exit" button kill the process instead of exiting gracefully. Leaving a stray thread seems a too serious resource leak.
Is there any way to solve this? I don't mind rewriting everything from scratch, even using totally different APIs (as long as we're talking about native Symbian APIs), I just want it to work. I've read a bit about active objects, but it doesn't seem like what I need, since I just need this to work synchronously... In the case of bigger changes, I would appreciate more detailed explanations, since I'm new to Symbian C++, and I don't really need to master it - this little Bluetooth module is probably everything I'll need to write in it in foreseeable future.
Thanks in advance for any help! :)
The code you have looks OK to me; you've avoided the usual pitfall of not consuming all the requests that you've issued. Assuming that you also cancel the timer and do a User::WaitForRequest(timerStatus) inside your error handling condition, it should work.
I'm guessing that what you're worrying about is that there's no way for your main thread to request that this thread exit. You can do this roughly as follows:
(1) Pass a pointer to a TRequestStatus into the thread when it is created by your main thread. Call this exitStatus.
(2) When you do the User::WaitForRequest, also wait on exitStatus.
(3) The main thread will do a bluetoothThread.RequestComplete(exitStatus, KErrCancel) when it wants the subthread to exit, where bluetoothThread is the RThread object that the main thread created.
(4) In the subthread, when exitStatus is signalled, exit the loop to terminate the thread. Make sure you cancel and consume the timer and Bluetooth requests.
(5) The main thread should do a bluetoothThread.Logon and wait for the resulting signal to know when the Bluetooth thread has exited.
There will likely be some more subtleties to deal correctly with all the error cases and so on.
I hope I'm not barking up the wrong tree altogether here...
The question is already answered, but... if you were to use active objects, I'd suggest a nested active scheduler (class CActiveSchedulerWait). You could pass it to your active objects (CPeriodic for the timer and some other CActive for Bluetooth), and one of them would stop this nested scheduler in its RunL() method. Better still, with this approach your call becomes synchronous for the caller, and your thread will be gracefully closed after performing the call.
If you're interested in this solution, search for examples of CActiveSchedulerWait, or just ask me and I'll give you some sample code.

Handling Interrupt in C++

I am writing a framework for an embedded device which has the ability to run multiple applications. When switching between apps how can I ensure that the state of my current application is cleaned up correctly? For example, say I am running through an intensive loop in one application and a request is made to run a second app while that loop has not yet finished. I cannot delete the object containing the loop until the loop has finished, yet I am unsure how to ensure the looping object is in a state ready to be deleted. Do I need some kind of polling mechanism or event callback which notifies me when it has completed?
Thanks.
Usually if you need to do this type of thing you'll have an OS/RTOS that can handle the multiple tasks (even if the OS is a simple homebrew type thing).
If you don't already have an RTOS, you may want to look into one (there are hundreds available) or look into incorporating something simple like protothreads: http://www.sics.se/~adam/pt/
So you have two threads: one running the kernel and one running the app? You will need a function in your kernel, say ReadyToYield(), that the application can call when it's happy for you to close it down. ReadyToYield() would flag the kernel thread to give it the good news and then sit and wait until the kernel thread decides what to do. It might look something like this:
volatile bool appWaitingOnKernel = false;
volatile bool continueWaitingForKernel;

On the app thread call:

void ReadyToYield(void)
{
    continueWaitingForKernel = true;
    appWaitingOnKernel = true;
    while(continueWaitingForKernel == true);
}

On the kernel thread call:

void CheckForWaitingApp(void)
{
    if(appWaitingOnKernel == true)
    {
        appWaitingOnKernel = false;
        if(needToDeleteApp)
            DeleteApp();
        else
            continueWaitingForKernel = false;
    }
}
Obviously, the actual implementation here depends on the underlying O/S but this is the gist.
John.
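One caveat on the handshake above: in standard C++, `volatile` does not guarantee inter-thread visibility or ordering; `std::atomic` does. The same idea rewritten with atomics (still a sketch; the app-deletion branch is left as a comment since DeleteApp() is platform-specific):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> appWaitingOnKernel{false};
std::atomic<bool> continueWaitingForKernel{false};

// App thread: announce readiness, then spin until the kernel releases us.
void ReadyToYield() {
    continueWaitingForKernel.store(true);
    appWaitingOnKernel.store(true);
    while (continueWaitingForKernel.load()) {
        std::this_thread::yield();  // don't burn a full core while waiting
    }
}

// Kernel thread: notice the waiting app and decide its fate.
void CheckForWaitingApp(bool needToDeleteApp) {
    if (appWaitingOnKernel.load()) {
        appWaitingOnKernel.store(false);
        if (!needToDeleteApp)
            continueWaitingForKernel.store(false);
        // else: tear the app down while it is parked in ReadyToYield()
    }
}
```

The shape of the protocol is unchanged; only the flag types differ, so the compiler and CPU are obliged to make the stores visible to the other thread.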
(1) You need to write thread-safe code. This is not specific to embedded systems.
(2) You need to save state away when you do a context switch.