C++ frameworks for distributed computer applications - c++

I have a C++/MFC application ("MyApp") that has always run standalone, but which now needs to be run simultaneously on at least two PCs, and eventually on perhaps up to 20 PCs. What is desirable is:
The system will be deployed with fixed PC names ("host1", "host2", etc) and IP addresses (192.168.1.[host number]) on an isolated network
Turn on the first PC, and it will start all the others using wake-on-lan
One or more instances of "MyApp" automatically start on each node, configure themselves, and do their stuff more-or-less independently (there is no critical inter-node communication once started)
The first node in the system also hosts a user interface that provides some access to other nodes. Communication to/from the other nodes only occurs at the user's request, which is generally sporadic for normal users, and occasionally intense for users who have to test that new features are working as designed across the system.
For simulation/test purposes, MyApp should also be able to be started on a specified subset of arbitrarily-named computers on a LAN.
The point of the last requirement is so that when trying to reproduce the problem that is occurring on a 20-PC system on the other side of the world, I don't have to find 20 PCs from somewhere and hook them up, I can instead start by (politely) stealing some spare CPU cycles from other PCs in my office.
I can imagine building this from the ground up in C++/MFC along these lines:
write a new service that automatically starts with Windows, and which can start/stop instances of MyApp when it receives the corresponding command via the network, or when configured to automatically start MyApp in non-test deployments.
have MyApp detect which node it is in the wider system, and if it is the first node, ensure all other nodes are turned on via wake-on-lan
design and implement a communications protocol between the various local and remote services, and MyApp instances
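The wake-on-lan part, at least, is small enough to sketch. A magic packet is just six 0xFF bytes followed by the target MAC repeated 16 times, sent as a UDP broadcast (the MAC, broadcast address, and port below are placeholders for my isolated network):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A magic packet is 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local segment (port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

So the first node would loop over the known MACs of host2..hostN and call wake() for each.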
This really seems quite basic, but also like it would be reinventing wheels if I did it from the ground up. So the central question is are there frameworks with C++ classes that make this kind of thing easy? What options should I investigate? And is referring to this as "distributed computing" misleading? What else would you call it? (Please suggest additional or different tags)

Related

Multiprocess web server with ocaml

I want to make a webserver with OCaml. It will have a REST interface and no dependencies (just searching constant data loaded into RAM at process startup), and it will serve read-only queries (which can be served from any node; the result will be the same).
I love OCaml; however, I have one problem: it can only run one thread at a time.
I think of scaling just by having nginx in front of it and load balance to multiple process instances running on different ports on the same server.
I don't think I'm the only one running into this issue. What would be the best tool to keep a few OCaml processes running at a time, ensure that if any of them crashes it gets restarted, and give each a different port from the others (to load balance between them)?
I was thinking about a standard Linux service, but I don't want to create something like 4 hardcoded records and call "service start webserver1" for each of them.
Is there a strong requirement for multiple operating system processes? Otherwise, it seems like you could just use something like cohttp with either lwt or async to handle concurrent requests in the same OS process, using multiple threads (and an event-loop).
As you mentioned REST, you might be interested in ocaml-webmachine which is based on cohttp and comes with well-commented examples.
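If multiple OS processes do turn out to be the requirement, a systemd template unit avoids the hardcoded records: one unit file parameterised by port, instantiated once per instance, with automatic restart on crash. The paths and names below are hypothetical:

```ini
# /etc/systemd/system/webserver@.service  ("%i" is the instance name, here the port)
[Unit]
Description=OCaml web server instance on port %i

[Service]
ExecStart=/usr/local/bin/webserver --port %i
Restart=always

[Install]
WantedBy=multi-user.target
```

Then "systemctl enable --now webserver@8001 webserver@8002" starts two instances on different ports, and systemd restarts whichever one crashes.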

How to restrict the resources within one JVM

I'm experimenting with WSO2 products, and I'm thinking about a scenario where bad code could take up all the CPU time (e.g. an infinite loop or similar). I tried it with WSO2 AS and 2 tenants, A and B. A's bad code does affect B: B's app has a very long response delay or even gets stuck. Is there a way to restrict the CPU usage of a tenant? Thanks!
At the moment, you will have to setup your environment in what is known as private jet mode, where each tenant gets its own JVM, if you need total isolation.
In a shared environment, we have stuck thread detection which will ensure that critical threads will not run for more than a specified time period. We have plans for CPU usage limiting on per tenant basis. This would be available in a future release.
My suggestion would be to not run two tenants in one application server. Run two separate processes on the same machine. Better yet, run two separate processes in separate OS-level containers (like a jail or an lxc container). Or separate virtual machines if you can't use containers.
Operating systems give you tools for controlling CPU use - rlimit and nice for processes, and implementation-specific facilities for containers and VMs. Because they're implemented in the OS (or virtual machine manager), they are capable of doing this job correctly and reliably. There's no way an application server can do it anywhere near as well.
In any case, having separate applications share an application server and JVM is a terrible idea which should have been put to death in the '90s. There's just no need for it, and it introduces so many potential headaches.
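The rlimit approach mentioned above can be sketched in a few lines: launch the tenant as a child process with a hard CPU-time cap, so the kernel, not the application server, enforces the limit (the one-second cap below is arbitrary, and this is Unix-specific):

```python
import resource
import subprocess
import sys

def run_with_cpu_limit(cmd, cpu_seconds):
    """Run cmd with RLIMIT_CPU set in the child, so the kernel kills it
    with a signal if it spins past the limit, no matter what the code does."""
    def set_limit():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=set_limit)

# An infinite loop capped at 1 CPU second is terminated by the kernel:
result = run_with_cpu_limit([sys.executable, "-c", "while True: pass"], 1)
```

No stuck-thread detector in the application server can make that guarantee, which is the point of the answer above.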

Redundancy without central control point?

Is it possible to provide a service to multiple clients whereby, if the server providing this service goes down, another one takes its place, without some sort of centralised "control" which detects that the main server has gone down and redirects the clients to the new server?
Is it possible to do without having a centralised interface/gateway?
In other words, it's a bit like asking: can you design a load balancer without having a centralised control point to direct clients?
Well, you are not giving much information about the "service" you are asking about, so I'll answer in a generic way.
For the first part of my answer, I'll assume you are talking about a "centralized interface/gateway" involving ip addresses. For this, there's CARP (Common Address Redundancy Protocol), quoted from the wiki:
The Common Address Redundancy Protocol or CARP is a protocol which
allows multiple hosts on the same local network to share a set of IP
addresses. Its primary purpose is to provide failover redundancy,
especially when used with firewalls and routers. In some
configurations CARP can also provide load balancing functionality. It
is a free, non patent-encumbered alternative to Cisco's HSRP. CARP is
mostly implemented in BSD operating systems.
Quoting NetBSD's "Introduction to CARP":
CARP works by allowing a group of hosts on the same network segment to
share an IP address. This group of hosts is referred to as a
"redundancy group". The redundancy group is assigned an IP address
that is shared amongst the group members. Within the group, one host
is designated the "master" and the rest as "backups". The master host
is the one that currently "holds" the shared IP; it responds to any
traffic or ARP requests directed towards it. Each host may belong to
more than one redundancy group at a time.
This might solve your question at the network level, by having the slaves takeover the ip address in order, without a single point of failure.
Now, for the second part of the answer (the application level): with distributed Erlang, you can have several nodes (a cluster) that will give you fault tolerance and redundancy (so you would not use IP addresses here, but "distributed Erlang", a cluster of Erlang nodes, instead).
You would have a number of nodes with your distributed application started, and your application resource file would contain an ordered list of the nodes where the application can run.
Distributed erlang will control which of the nodes is "the master" and will automagically start and stop your application in the different nodes, as they go up and down.
Quoting (as little as possible) from http://www.erlang.org/doc/design_principles/distributed_applications.html:
In a distributed system with several Erlang nodes, there may be a need
to control applications in a distributed manner. If the node, where a
certain application is running, goes down, the application should be
restarted at another node.
The application will be started at the first node, specified by the
distributed configuration parameter, which is up and running. The
application is started as usual.
For distribution of application control to work properly, the nodes
where a distributed application may run must contact each other and
negotiate where to start the application.
When started, the node will wait for all nodes specified by
sync_nodes_mandatory and sync_nodes_optional to come up. When all
nodes have come up, or when all mandatory nodes have come up and the
time specified by sync_nodes_timeout has elapsed, all applications
will be started. If not all mandatory nodes have come up, the node
will terminate.
If the node where the application is running goes down, the
application is restarted (after the specified timeout) at the first
node, specified by the distributed configuration parameter, which is
up and running. This is called a failover.
distributed = [{Application, [Timeout,] NodeDesc}]
If a node is started, which has higher priority according to
distributed, than the node where a distributed application is
currently running, the application will be restarted at the new node
and stopped at the old node. This is called a takeover.
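As a concrete (made-up) illustration of those kernel parameters, a sys.config for this scheme might look roughly like the following; the node and application names are hypothetical. It declares that myapp starts on cp1@host1 and, after a 5-second timeout, fails over to cp2@host2 or cp3@host3:

```erlang
%% sys.config (hypothetical node and application names)
[{kernel,
  [{distributed, [{myapp, 5000, [cp1@host1, {cp2@host2, cp3@host3}]}]},
   {sync_nodes_mandatory, [cp2@host2, cp3@host3]},
   {sync_nodes_timeout, 30000}]}].
```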
Ok, that was meant as a general overview, since it can be a long topic :)
For the specific details, I highly recommend reading the Distributed OTP Applications chapter of Learn You Some Erlang (and of course the previous link: http://www.erlang.org/doc/design_principles/distributed_applications.html).
Also, your "service" might depend on other external systems like databases, so you should consider fault tolerance and redundancy there, too. The whole architecture needs to be fault tolerant and distributed for "the service" to work this way.
Hope it helps!
This answer is a general overview to high availability for networked applications, not specific to Erlang. I don't know too much about what is available in the OTP framework yet because I am new to the language.
There are a few different problems here:
Client connection must be moved to the backup machine
The session may contain state data
How to detect a crash
Problem 1 - Moving client connection
This may be solved in many different ways and on different layers of the network architecture. The easiest thing is to code it right into the client, so that when a connection is lost it reconnects to another machine.
If you need network transparency you may use some technology to sync TCP states between different machines and then reroute all traffic to the new machine, which may be entirely invisible for the client. This is much harder to do than the first suggestion.
I'm sure there are lots of things to do in-between these two.
Problem 2 - State data
You obviously need to transfer the session state from the crashed machine to the backup machine. This is really hard to do reliably, and you may lose the last few transactions because the crashed machine may not be able to send the last state before the crash. You can use a synchronous call in this way to be really sure about not losing state:
Transaction/message comes from the client into the main machine.
Main machine updates some state.
New state is sent to backup machine.
Backup machine confirms arrival of the new state.
Main machine confirms success to the client.
This may potentially be expensive (or at least not responsive enough) in some scenarios since you depend on the backup machine and the connection to it, including latency, before even confirming anything to the client. To make it perform better you can let the client check with the backup machine upon connection what transactions it received and then resend the lost ones, making it the client's responsibility to queue the work.
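The five synchronous steps above can be sketched like this; the two classes are in-process stand-ins for the two machines, and real code would perform steps 3-4 over the network:

```python
class Backup:
    """Stand-in for the backup machine."""
    def __init__(self):
        self.state = {}

    def replicate(self, key, value):
        self.state[key] = value
        return True                        # step 4: confirm arrival of new state

class Main:
    """Stand-in for the main machine."""
    def __init__(self, backup):
        self.state = {}
        self.backup = backup

    def handle(self, key, value):          # step 1: transaction arrives from client
        self.state[key] = value            # step 2: update local state
        acked = self.backup.replicate(key, value)  # step 3: send state to backup
        return acked                       # step 5: confirm to client only after ack
```

The latency cost is visible in the structure: handle() cannot return until replicate() has, which is exactly the trade-off discussed above.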
Problem 3 - Detecting a crash
This is an interesting problem because a crash is not always well-defined. Did something really crash? Consider a network program that closes the connection between the client and server, but both are still up and connected to the network. Or worse, makes the client disconnect from the server without the server noticing. Here are some questions to think about:
Should the client connect to the backup machine?
What if the main server updates some state and sends it to the backup machine while the backup has the real client connected? Will there be a data race?
Can both the main and backup machine be up at the same time or do you need to shut down work on one of them and move all sessions?
Do you need some sort of authority on this matter, some protocol to decide which one is master and which one is slave? Who is that authority? How do you decentralise it?
What if your nodes lose the connection between them but both continue to work as expected (this is called a network partition)?
See Google's paper "Chubby lock server" (PDF) and "Paxos made live" (PDF) to get an idea.
Briefly, this solution involves using a consensus protocol to elect a master among a group of servers, and that master handles all the requests. If the master fails, the protocol is used again to elect the next master.
Also, see gen_leader for an example in leader election which works with detecting failures and transferring service ownership.
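To make the "authority" question concrete, here is a toy deterministic election, far simpler than Paxos or gen_leader: every node applies the same rule (lowest-id reachable node wins), so no central coordinator is needed. The node names and liveness check are invented, and agreeing on who is alive is precisely the hard part that real consensus protocols solve:

```python
def elect_leader(node_ids, is_alive):
    """Every node runs this same rule, so all nodes that share a view of
    liveness agree on the leader without any central authority."""
    candidates = sorted(n for n in node_ids if is_alive(n))
    return candidates[0] if candidates else None

nodes = ["host1", "host2", "host3"]
leader = elect_leader(nodes, lambda n: n != "host1")  # pretend host1 crashed
```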

Game engine design: Multiplayer and listen servers

My game engine right now consists of a working singleplayer part. I'm now starting to think about how to do the multiplayer part.
I have found that many games don't actually have a real singleplayer mode: when playing alone, you are still hosting a local server, and almost everything runs as if you were in multiplayer (except that the data packets can be passed over an alternate route for better performance).
My engine would need major refactoring to adapt to this model. There would be three possible modes: Dedicated client, Dedicated server and Client-Server (listen mode)
How often is the listen-server model used in the gaming industry?
What are the (dis)advantages of it?
What other options do I have?
I'll see if I can answer this the best I can:
How often is the listen-server model used in the gaming industry?
When it comes to most online games, you'll find that a large majority of games use a client-server architecture, though not always in the way you think. Take any Source game, for instance. Most will use a standard client-server with a master server architecture (to list games available), in that one person will host a dedicated server and anyone with a client can join it.
However, you have some games and services, take for instance Left 4 Dead, League of Legends, and some XBox Live games, that take a slightly different approach. These all use a client-server architecture with a controlling server. The main idea here is that someone creates a dedicated server that isn't "running" any game. The controlling server will create a "lobby" of sorts, and when the game is started, the controlling server will add them to a queue, and when it is that lobby's turn, it will select a matching dedicated server (in terms of location/speed, availability, numerous factors), and assign the players to that server. Only then will the server actually "run" the game. It's the same idea, but a little simplified, as the client doesn't need to "pick" a server, only join a game and let the controlling server do the work.
Of course, the biggest client-server model is the MMO model, where one or many servers runs a persistent world that handles almost all data and logic. Some of the more famous games using this model are World of Warcraft, Everquest, anything like that.
So where does a listen server fit in here? To be honest, not really that well, however, you will still find many games using it. For instance, most Source games allow listen servers to be created, and many XBox Live games do (it's been a while, but I believe Counter Strike did, as well as Quake 4, and many others). In general though, they seem to be frowned upon due to the advantages of the client-server model, which brings us to our next point.
What are the (dis)advantages of it?
First and foremost: performance. In a client-server model, the client will handle local changes (such as input, graphics, sounds, etc) on each cycle of the game. At the end of the cycle, it will package up relevant data (such as, did the player move? If so, where to? Where is s/he looking now? Velocity? Did they shoot? If so, information on the bullet. Etc) and send that to the server for processing. The server will take this data and determine if every thing is valid such as, is the user moving in a way that indicates hacking (more on that later), is the move valid (anything in the way?), did the bullet from player 1 hit player 2?, and more. Then the server packages this up, and sends it to the clients, which then update whatever necessary, such as adjusting health if the player was shot, kicking the player if it is determined that they are hacking, etc.
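The server-side validation step described above can be sketched for the movement case; the speed cap is an invented game constant:

```python
import math

MAX_UNITS_PER_TICK = 5.0   # invented game rule: fastest legal move per cycle

def move_is_valid(old_pos, new_pos):
    """Server-side sanity check: a move faster than the game allows
    suggests teleport/speed hacking and is rejected."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    return math.hypot(dx, dy) <= MAX_UNITS_PER_TICK
```

Real checks are much richer (collision, line of sight, rate limits), but they all follow this shape: the server never trusts the client's claimed result, only its inputs.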
A listen server, however, must deal with all of this at the same time. Since I assume you are familiar with programming, you probably realize how much power a game can rob from a computer, especially a poorly designed one. Adding on network processing, security processing, and more as well as the client's game, you can see where performance would take a serious hit, at least as far as just standard processing goes. Furthermore, most servers run on fast networks, and are servers designed to withstand network traffic. If a listen server's network is slow, the entire game will suffer.
Second, security: as stated earlier, one of the main things a server will do is determine whether a player is exploiting the game. You may have seen these systems as PunkBuster, VAC, etc. A very complicated set of rules drives these programs, for instance to determine the difference between a hacker and just a very good player. It would be very bad for your game if you weren't able to catch hackers, but even worse if you took action against a falsely accused one.
A listen server will generally not be able to handle the client's game, the server processing, and the hack detection all at once. In most cases, detectors like PunkBuster are very hard, if not impossible, to get running on a listen server, because they cannot function correctly without the necessary processing power: the game logic is generally prioritized over security, and if the detector is not allowed to process for one frame it may lose the data it needed to convict someone.
Lastly, gameplay. The biggest thing about servers is that they are persistent, meaning that even if everyone leaves, the server will continue to run. This is useful if you have a popular server that doesn't have much activity in the night time, people can still join when they are ready to play and not have to wait for it to be brought back online.
In a listen server, the main disadvantage is that as soon as the client hosting the listen server leaves, the game must either be transferred to another player (creating a lull in the game that can last minutes in some cases) or end completely. This is not preferable on a big server, as the host must either stay online (wasting a slot in the server and his/her computer's power, which could also slow the game) or end the game for everyone.
However, despite these problems, listen servers do have a few advantages.
Easy to set up: Most listen servers are nothing more than hitting "New game" and letting people join. This is easy for people who just want to play with their friends, and don't wish to have to try to find an empty dedicated server, or play with other people.
Good for testing: If one owns a dedicated server and wishes to change its configuration, it is generally a better idea to test the configuration first. The user would either have to create a backup of the dedicated server and make the changes blindly, with rolling back as the only option if something goes wrong; create a new dedicated server to test them; or just create a simple listen server to test them. As with point 1, listen servers are generally easier to start up and configure. This is especially true because most dedicated servers are not within the administrator's immediate access (most are rented in a remote location). It takes much longer to push configuration changes, restart commands, etc. to a remote location than to a machine the administrator is currently on.
Less resources: With most dedicated servers, a user with the same IP cannot connect to the server (meaning the client must either host the server or play; they cannot do both). If clients wish to play on their own server, they will usually need a second machine to host it, or will have to buy or rent a dedicated server so that they can actually play on it. A listen server requires only one machine, which may be the only one the client can use.
In either case, both have advantages and disadvantages, and you need to weigh them with what you're willing to design and implement. From my experience, I believe that if you were to implement a listen server it would get used, if for nothing else than for a few users wishing to play around with friends, or test settings.
Lastly:
What other options do I have?
This is an industrial can of worms. In reality, any type of network architecture can be applied to video games. However, from what I've seen, like most internet communication, most boil down to some form of client-server model.
Please let me know if I didn't answer your question, or if you need something expanded, and I'll see what I can do.

How are Massively Multiplayer Online RPGs built?

How are Massively Multiplayer Online RPG games built?
What server infrastructure are they built on, especially with so many clients connected and communicating in real time?
Do they manage with scripts that execute on page requests, or with installed services that run in the background and manage communication with connected clients?
Do they use other protocols, given that HTTP does not allow servers to push data to clients?
How do the "engines" work, to centrally process hundreds of conflicting gameplay events?
Thanks for your time.
Many roads lead to Rome, and many architectures lead to MMORPG's.
Here are some general thoughts to your bullet points:
The server infrastructure needs to support the ability to scale out... add additional servers as load increases. This is well-suited to Cloud Computing by the way. I'm currently running a large financial services app that needs to scale up and down depending on time of day and time of year. We use Amazon AWS to almost instantly add and remove virtual servers.
MMORPG's that I'm familiar with probably don't use web services for communication (since they are stateless) but rather a custom server-side program (e.g. a service that listens for TCP and/or UDP messages).
They probably use a custom TCP and/or UDP based protocol (look into socket communication)
Most games are segmented into "worlds", limiting the number of players that are in the same virtual universe to the number of game events that one server (probably with lots of CPUs and lots of memory) can reasonably process. The exact event-processing mechanism depends on the requirements of the game designer, but generally I expect that incoming events go into a priority queue (prioritized by time received and/or time sent, and probably other criteria along the lines of "how bad is it if we ignore this event?").
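The incoming-event queue in that last point might look like this minimal sketch (the timestamps and event names are invented):

```python
import heapq

queue = []   # heap ordered by time received

heapq.heappush(queue, (10.7, "spell cast"))
heapq.heappush(queue, (10.2, "player moved"))
heapq.heappush(queue, (10.5, "item dropped"))

# Each tick, the server drains pending events in time order:
processed = [heapq.heappop(queue)[1] for _ in range(len(queue))]
```

A real server would use richer keys (priority class, then timestamp) and might drop low-priority events under load, per the "how bad is it if we ignore this?" criterion above.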
This is a very large subject overall. I would suggest you check over on Amazon.com for books covering this topic.
What server infrastructure are they built on? especially with so many clients connected and communicating in real time.
I'd guess the servers will be running on Linux, BSD or Solaris almost 99% of the time.
Do they manage with scripts that execute on page requests? or installed services that run in the background and manage communication with connected clients?
The server your client talks to will be running a daemon or service that sits idle listening for connections. For instances (dungeons), a new process is usually launched for each group, which would mean there is a dispatcher service somewhere managing this (analogous to a thread pool).
Do they use other protocols? because HTTP does not allow servers to push data to clients.
UDP is the protocol used. It's fast because it makes no guarantee that a packet will be received. You don't care if a bit of packet loss causes the client to momentarily lose their world position.
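A minimal illustration of that fire-and-forget style, run here over loopback (the payload format is invented; a real game would use a compact binary protocol):

```python
import socket

# "Server" side: bind a UDP socket; port 0 lets the OS pick a free one.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server.settimeout(2)
addr = server.getsockname()

# "Client" side: send a position update with no handshake, no retransmission.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"POS:12.5,7.3", addr)

# On a real network this datagram could simply be lost;
# the next update would supersede it anyway.
data, _ = server.recvfrom(1024)
```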
How do the "engines" work, to centrally process hundreds of conflicting gameplay events?
Most MMOs have zones which limit this to a certain number of people. For those that do have hundreds of people in one area, there is usually high latency. The server has to deal with hundreds of spells being sent its way, and it must calculate damage amounts for each one. For the big five MMOs I imagine there are teams of 10-20 very intelligent, mathematically gifted developers working on this daily, and there isn't an MMO out there that has got it right yet; most break after 100 players.
--
Have a look for Wowemu (there's no official site and I don't want to link to a dodgy site). This is based on ApireCore, which is an MMO simulator, or basically a reverse engineering of the WoW protocol. This is what the private WoW servers run on. From what I recall, Wowemu uses
mySQL
Python
However, ApireCore itself is C++.
The backend for Wowemu is amazingly simple (though I last tried it in 2005) and probably a complete oversimplification of the database schema. It does give you a good idea of what's involved.
Because MMOs by and large require the resources of a business to develop and deploy, at which point they are valuable company IP, there isn't a ton of publicly available information about implementations.
One thing that is fairly certain is that since MMOs by and large use a custom client and 3D renderer they don't use HTTP because they aren't web browsers. Online games are going to have their own protocols built on top of TCP/IP or UDP.
The game simulations themselves will be built using the same techniques as any networked 3D game, so you can look towards resources for that problem domain to learn more.
For the big daddy, World of Warcraft, we can guess that their database is Oracle because Blizzard's job listings frequently cite Oracle experience as a requirement/plus. They use Lua for user interface scripting. C++ and OpenGL (for Mac) and Direct3D (for PC) can be assumed as the implementation languages for the game clients because that's what games are made with.
One company that is cool about discussing their implementation is CCP, creators of Eve online. They have published a number of presentations and articles about Eve's infrastructure, and it is a particularly interesting case because they use Stackless Python for a lot of Eve's implementation.
http://www.disinterest.org/resource/PyCon2006-StacklessInEve.wmv
http://us.pycon.org/2009/conference/schedule/event/91/
There was also a recent Game Developer Magazine article on Eve's architecture:
https://store.cmpgame.com/product/3359/Game-Developer-June%7B47%7DJuly-2009-Issue---Digital-Edition
The Software Engineering radio podcast had an episode with Jim Purbrick about Second Life which discusses servers, worlds, scaling and other MMORPG internals.
Traditionally MMOs have been based on C++ server applications running on Linux communicating with a database for back end storage and fat client applications using OpenGL or DirectX.
In many cases the client and server embed a scripting engine which allows behaviours to be defined in a higher level language. EVE is notable in that it is mostly implemented in Python and runs on top of Stackless rather than being mostly C++ with some high level scripts.
Generally the server sits in a loop reading requests from connected clients, processing them to enforce game mechanics and then sending out updates to the clients. UDP can be used to minimize latency and the retransmission of stale data, but as RPGs generally don't employ twitch gameplay TCP/IP is normally a better choice. Comet or BOSH can be used to allow bi-directional communications over HTTP for web based MMOs and web sockets will soon be a good option there.
If I were building a new MMO today I'd probably use XMPP, BOSH and build the client in JavaScript as that would allow it to work without a fat client download and interoperate with XMPP based IM and voice systems (like gchat). Once WebGL is widely supported this would even allow browser based 3D virtual worlds.
Because the environments are too large to simulate in a single process, they are normally split up geographically between processes each of which simulates a small area of the world. Often there is an optimal population for a world, so multiple copies (shards) are run which different sets of people use.
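The geographic split can be sketched as a simple mapping from world coordinates to the process responsible for simulating that region (the zone size is arbitrary; real engines also need handoff logic at zone borders):

```python
ZONE_SIZE = 512.0   # arbitrary width/height of one simulated region

def zone_for(x, y):
    """Map a world position to the (column, row) of the simulator process
    responsible for it; each zone would run in its own process."""
    return (int(x // ZONE_SIZE), int(y // ZONE_SIZE))
```

Sharding is the orthogonal axis: the same zone grid is duplicated per shard, and players are assigned to one copy.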
There's a good presentation about the Second Life architecture by Ian Wilkes who was the Director of Operations here: http://www.infoq.com/presentations/Second-Life-Ian-Wilkes
Most of my talks on Second Life technology are linked to from my blog at: http://jimpurbrick.com
Take a look at Erlang. It's a concurrent programming language and runtime system, and was designed to support distributed, fault-tolerant, soft-real-time, non-stop applications.