How do I take an existing Phaser game and make it multiplayer?
Can I use the Lance library for this purpose? Both libraries control their own game objects, so I don't know how to use the two frameworks together in the same game.
Disclaimer: I am one of the co-creators of Lance
Unfortunately, Phaser's 2.* architecture makes it hard to pair with Lance to build real-time JavaScript multiplayer games.
The issue is that Phaser makes a lot of assumptions that don't hold in a multiplayer setting. For example, the rendering and game loop are tied together, but the server obviously doesn't need to render anything.
Phaser also assumes the existence of the DOM and the window object, neither of which exists on the server. In addition, all of the data structures that hold the game world state (objects, sprites, etc.) are stored on instances of extended PIXI objects, which don't make sense in a server context. These limitations and this tight coupling aren't compatible with Lance's modular approach.
It is entirely possible to run Phaser on the server using libraries that emulate the DOM and Canvas, like JSDOM and Node Canvas. However, running PIXI on the server means a significant performance degradation, and you still have the problem of syncing PIXI data structures to contend with.
The good news is that Phaser 3.0 is an ongoing, complete rewrite of Phaser 2.0 with a much more modular approach, which will hopefully make it much easier to integrate with Lance. We have plans to make this integration easier ourselves in the near future.
Related
I'm currently working on a C++/SDL/OpenGL game. I've already made a few small games, but only local ones (no netcode). So I know how to make the engine, but I'm unsure about the netcode.
Can I firstly create the full engine for split-screen play and later on add the netcode or will this make everything complicated? Do I already have to take netcode into consideration while programming the basic game engine or is it also okay to just put it on top of the game after it runs fine on one machine?
It's a 2D shooter type game, if that matters. And no, I'd rather not change my choice of programming language/window manager/API because I already implemented the bare bones of the game. I'm just curious how this issue is best approached.
In theory, all you need is a good enough design. Write enough abstract classes and BAM! you can swap one user interface (i.e. local-only) for another (networked). I wouldn't believe the theory, though.
It's possible to do what you want, but it involves taking into consideration all of the new issues you must address when dealing with networked gameplay: syncing views for multiple users, what to do when one user drops their network link (and, of course, how to detect that they dropped it), network latency in receiving user input, and handling lag on one side but not the other. Networked programming is completely different, and some of its aspects (largely the ones dealing with synchronization) may impact your core engine itself. Even "just showing two views" gets a lot tougher, because you now have data on two completely different machines, and the data isn't necessarily the same.
My suggestion would be to do the opposite of what you're hoping for. Get the networking code working first with minimal graphics. In fact, console messages will be far more important than pretty graphics. You already have experience with making the graphics of other games - work the most questionable technology first. Get a good feel of all the things the networked code will ask of you, then focus on the graphics afterwards.
Normally, for a network-oriented game, there are five concepts to keep in mind:
events
dispatcher
synchronization
rendering
simulation
Events. A game engine is event-driven software: based on the state of each generic object in the game (a unit, the GUI, etc.), you perform an action, which means you call a function or do nothing.
Dispatcher. The dispatcher takes each event change and dispatches it to the other subsystems.
Synchronization means that when an event changes something, all clients on the network must be notified of that change through their dispatchers. This way, all players can see each other's changes, and everyone renders and simulates the same things at the same time.
Rendering. The renderer reads the parameters and relevant states of each object and draws them on screen. For example, if each unit has a property named life_points, you can draw a normal unit if life_points > 50, a damaged unit if 0 < life_points < 50, and a destroyed unit if life_points = 0. The renderer doesn't make changes to objects; it just draws what it reads from them.
Simulation reads every object and performs tasks based on its states and properties. For example, if a unit has zero life points, you mark its state as DEAD (say) or update the GUI; or if a unit gets close to one from an enemy team, you change its state from idle to moving toward that other unit. On top of this, this is where you run the physics for units, changing positions, rotations, etc. Since all objects are synchronized over the network, everybody will be watching the same thing.
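To make these five concepts concrete, here is a minimal C++ sketch of how they might fit together. All the type and function names are illustrative, not from any particular engine:

```cpp
// Minimal sketch of the five concepts above; all names are illustrative.
#include <functional>
#include <map>
#include <vector>

struct Unit { int id; float x, y; int life_points; bool dead = false; };

// Event: something happened to an object's state.
struct Event { int unit_id; float new_x, new_y; int new_life; };

// Dispatcher: hands each event to the interested subsystems
// (including, in a networked game, the code that sends it to peers).
struct Dispatcher {
    std::vector<std::function<void(const Event&)>> subscribers;
    void subscribe(std::function<void(const Event&)> fn) { subscribers.push_back(std::move(fn)); }
    void dispatch(const Event& e) { for (auto& fn : subscribers) fn(e); }
};

struct World {
    std::map<int, Unit> units;

    // Synchronization: apply an event, whether it came from local
    // simulation or from a remote player's dispatcher over the network.
    void apply(const Event& e) {
        Unit& u = units[e.unit_id];
        u.x = e.new_x; u.y = e.new_y; u.life_points = e.new_life;
    }

    // Rendering: a read-only pass; draws what it sees, changes nothing.
    void render() const {
        for (const auto& [id, u] : units) {
            if (u.life_points == 0)      { /* draw destroyed sprite */ }
            else if (u.life_points < 50) { /* draw damaged sprite   */ }
            else                         { /* draw normal sprite    */ }
        }
    }

    // Simulation: inspect states, update them, and emit events so
    // every client ends up watching the same thing.
    void simulate(Dispatcher& d) {
        for (auto& [id, u] : units) {
            if (u.life_points == 0) u.dead = true;  // mark state
            // ... physics: positions, rotations, etc. ...
            d.dispatch({u.id, u.x, u.y, u.life_points});
        }
    }
};
```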
Best regards.
Add in netcode as soon as you can. If you don't do this you may have to overhaul a lot of the engine later in the dev cycle, so better to do it early.
It also depends on how complex the game is, but the same principles still stand. It's best not to tack it on at the last second.
Hope this helps!
I am curious to know if there is a way of connecting a Flash front-end to a C++-driven backend. I'm not currently working on a project that involves this, but I found out about an application used in the gaming industry, called Scaleform, that requires knowledge of such things in order to create menus in games.
Another way to solve the problem would be to create web services in C++ and consume them on the Flash side. It is a more naive approach and certainly not as capable as Scaleform, but it is certainly simpler to implement.
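To make the idea concrete, here is a minimal sketch of such a service: a bare-bones POSIX-sockets HTTP endpoint returning an XML payload that a Flash client could fetch (e.g. with URLLoader). The port and payload shape are made up for illustration; a real service would want a proper framework and error handling:

```cpp
// Hypothetical minimal "web service" a Flash client could call.
// POSIX sockets; error handling deliberately omitted for brevity.
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

int main() {
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};                 // zero-initialized address
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);        // illustrative port
    bind(server, (sockaddr*)&addr, sizeof(addr));
    listen(server, 8);

    for (;;) {
        int client = accept(server, nullptr, nullptr);
        char buf[4096];
        recv(client, buf, sizeof(buf), 0);  // read (and ignore) the request

        // Illustrative XML payload the Flash side would parse.
        std::string body = "<score player=\"1\" value=\"42\"/>";
        std::string resp =
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/xml\r\n"
            "Content-Length: " + std::to_string(body.size()) + "\r\n"
            "\r\n" + body;
        send(client, resp.c_str(), resp.size(), 0);
        close(client);
    }
}
```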
You could also have a look at FluorineFx, which is an implementation of Flex/Flash remoting services for the .NET framework. The project is open source, so it could help you get started. Basically, FluorineFx catches the remote calls in AMF from the Flash player and pipes them to the corresponding .NET method with the corresponding arguments. It also helps you convert .NET objects such as ArrayList, DataTable, and even lists of typed objects back to Flash-native types.
Scaleform implements its own custom Flash player, so that's probably not what you had in mind.
What you can do in a reasonable amount of time is pretend to Adobe's Flash player that you are an NPAPI-compatible browser - see, for example, how screenweaver-hx does it.
We have created an OpenGL application in C++ which visualizes some physical simulations. The basic application is contained within a DLL which is used by a simple GUI. It currently runs on a desktop PC, but we have the idea to turn it into a web service.
Since the simulations require dedicated hardware, the idea is that a user, through his/her browser can interact with our application as a service and this service then renders the result to an image (jpg or anything appropriate) that can then be displayed/updated in the browser.
My question:
How can I "easily" turn a C++ application as described into a web service that runs on some server so that I can access it over the web? What kind of technologies/APIs should I look at? And are there any real-life examples that tackle a similar problem?
This is possible, but one major difficulty you'll have is using OpenGL from a web service. You'll need to port it to do 100% offscreen rendering and work without a windowing context. That would be my first step, and it's not always a trivial one.
Also, it is very difficult to maintain and manage your OpenGL contexts correctly from a web service, and the overhead involved can be quite painful. Depending on the number and types of renderings, you may run into some issues there.
If you need it to scale up well, CGI would probably be somewhat slow and hacky.
There are some C++ web frameworks out there, see this question.
As far as the OpenGL goes, you'll probably need to use the framebuffer extension, as Jay said.
You could then render your image to a texture and use glGetTexImage() to grab the pixel data. From there, just save it into whatever image format you want with the accompanying library.
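For illustration, here is a rough sketch of that approach. It is a fragment, not a full program: it assumes the GL headers are included, a valid context is already current (created offscreen, e.g. via a pbuffer or hidden window), and the framebuffer object entry points are available:

```cpp
// Sketch: render offscreen into a texture via an FBO, then read the
// pixels back for JPEG encoding. Assumes a current GL context and
// GL 3.0 / EXT_framebuffer_object entry points; no error handling.
#include <vector>

GLuint tex = 0, fbo = 0;
const int W = 640, H = 480;

// Texture that will receive the rendering.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, W, H, 0, GL_RGB, GL_UNSIGNED_BYTE, nullptr);

// Framebuffer object with the texture as its color attachment.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, W, H);
    // ... draw the simulation here ...

    // Pull the rendered texture back into client memory.
    std::vector<unsigned char> pixels(W * H * 3);
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    // pixels now holds the frame; hand it to a JPEG library (e.g. libjpeg).
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```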
I had a similar project/question; although it wasn't a web service, it required OpenGL rendering in a Windows service. I had tons of problems getting it to work on Vista, although it did eventually work on XP with regular OpenGL.
I finally tried using Mesa, which I built to work as a private DLL for my service. It was a great decision because I could now actually step into the OpenGL calls and see where things were going wrong. It ran fine in software mode under the service, and while it wasn't hardware accelerated, it worked very well.
If your user-interaction needs are simple, I'd just look at CGI. It should be pretty easy to understand.
As far as getting the output of the OpenGL program, I'd take a look at the framebuffer extension. That should make it easy for you to render into memory, which could then be fed into a JPEG compressor.
I guess you already have access to some web server, e.g. Apache or similar. In that case, you could try out CppServ.
You just need to configure your web server so it can communicate with CppServ. I'd then propose that you develop a servlet (check the documentation on the site) which in turn communicates with your existing DLL. Since the DLL knows everything about OpenGL, it should be no problem to create JPEG images, etc.
How about using something like Flex to create a web-enabled 'control interface', with a server backend that streams the OpenGL rendering as video? Basically, you redirect keyboard/mouse input via the Flex app and display the 'realtime' 3D activity using a standard movie component.
The devil's in the details, of course....
You could try Google's O3D API and skip any sort of service; it would be much simpler.
This answer might sound very basic and elementary, but have you tried this approach?
Send the vector data from the server to the client or vice versa (just co-ordinates and so on)
Render it on the client side.
This means that the server is only doing the calculations and the numbers are passed to and fro.
I agree that vectorizing every object/model and texture is not trivial, but it is very fast: instead of heavy graphical images going across, only vector data is sent.
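As a rough illustration of how small such updates can be, the wire format only needs object ids and coordinates. The record layout below is hypothetical:

```cpp
// Hypothetical wire format: the server sends only ids and coordinates,
// and the client re-renders locally. Each record is a fixed 16 bytes.
#include <cstdint>
#include <cstring>
#include <vector>

#pragma pack(push, 1)
struct ObjectUpdate {
    uint32_t object_id;   // which object to move
    float    x, y, z;     // new position; rotations etc. would follow
};
#pragma pack(pop)

// Serialize a batch of updates into a byte buffer for the socket.
std::vector<uint8_t> serialize(const std::vector<ObjectUpdate>& updates) {
    std::vector<uint8_t> buf(updates.size() * sizeof(ObjectUpdate));
    std::memcpy(buf.data(), updates.data(), buf.size());
    return buf;
}
```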
I have a significant codebase written in MFC and am tasked with creating a port for Mac OS X. I know that I'm going to have to roll up my sleeves at some point and do alot of grunt work to get everything working correctly, but are there any tools out there that might get me partway?
I'm working on one.
From the GUI point of view, the new version of AppMaker is based around an import/generate model. Most of the commercial work I've done with AppMaker has been the other way, porting Macintosh applications to Windows. However, there's no reason why the same principles can't be applied in reverse.
AppMaker v2 had a very good importer for PowerPlant UI resources and traditional Mac dialogs. As it was only able to run on Classic, that code base has been discarded (you really don't want to know), and the final generator language I wrote for AppMaker v2 is an XML exporter which dumps the entire object model to an extended XAML.
I already have a XAML UI generator and am currently working on a Cocoa xib generator - one of the reasons for going to WWDC in June. The focus at this time is on import/generator suites before returning my attention to a GUI editor.
I wrote PP2MFC to allow PowerPlant applications to be compiled for Windows - a cross-platform solution needed because no other framework or cross-platform tool at the time (1997) would perform well enough for the hardware requirements. I've since discussed doing the opposite with someone I could chase up, and I'm sure an MFC portability layer could be created to map to Cocoa objects. While many developers have a poor opinion of MFC's message-map architecture, the heavily macro-based API sits on top of a reasonably clean OO framework.
This is the kind of project where you need to think about long-term maintainability: do you want something which ends up as large chunks of MFC code working with Cocoa, or do you want to migrate to an idiomatic Cocoa program?
Any further discussion should probably be taken off SO - contact me at dent at oofile.com.au but I'm happy to debate technicalities and feasibility on here. The combination of code generation and skinny framework adaptor layers works better than most people expect.
Honestly, the models are so different that I suspect you're going to need to do a nearly complete re-code at least of most of the UI parts.
No. Such a tool would be pretty much impossible to write.
MFC and Cocoa are such fundamentally different platforms that there is no easy way to convert between the two.
Depending on how you've written your application, you will need to rewrite either the GUI portion of your code or possibly the whole codebase.
I need a 2d political map of the world on which I will draw icons, text, and lines that move around. Users will interact with the map, placing and moving the icons, and they will zoom in and out of the map.
The Google Maps interface isn't very far from what I need, but this is NOT web related; it's a Windows MFC application and I want to talk to a C++ API for a map that lives in the application, not a web interface. Ideally I don't want a separate server, either, and any server MUST run locally (not on the Internet). What canned map package or graphics library should I use to do this? I have no graphics programming experience.
This is strictly 2D, so I don't think something like Google Earth or WorldWind would be appropriate. Good vector graphics support would be cool, and easy drawing of bitmaps is important.
All the canned options seem web-oriented. SDL is about all I know of for flexible canvas programming, but making my own map seems like a lot of work for what is probably a common problem. Is there anything higher level? Maybe there's a way to interact with an Adobe Flash object? I'm fairly clueless.
Perhaps:
http://www.codeplex.com/SharpMap
ESRI MapObjects
http://www.esri.com/software/mapobjects/index.html
ESRI MapObjects LT
http://www.esri.com/software/mapobjectslt/index.html
See
http://www.esri.com/software/mapobjectslt/about/mo_vs_lt.html
for a comparison of the two MapObjects feature sets.
ESRI may have a replacement for the MapObjects libraries.
You could extend your search by using the term GIS (Geographic Information System). I'm sure it's going to be easier; there's a lot of stuff out there on that subject.
Here's a page I found: http://www.ucancode.net/Gis-Source-Code.htm
or: http://opensourcegis.org/
You might want to try the Mapnik C++/Python GIS Toolkit.
You can take a look at MarbleWidget, which is part of KDE's Marble project. There are Windows binaries for this too, but they might depend on Qt.
Yes, Marble also has the advantage that it provides a kind of ready-made solution in a single control (called a "widget" in Qt's terminology).
The dependency on Qt (which is the only dependency, by the way) might also be seen as an advantage: Qt's upcoming version is licensed under the LGPL, so even if you plan to use this in a proprietary application, there shouldn't be any real worries. And of course, Qt and Marble are cross-platform and provide an API that is very intuitive and easy to understand. Unlike common GIS solutions, the Marble API and the usage of the widget are focused on people who don't know much about GIS, so it is quite easy to understand even if you are put off by the technical terms used in GIS.
Marble offers several interfaces to programming:
You can either create your own Marble plugins and paint inside those, or you can subclass the MarbleWidget control. For a simple Hello World application, see:
http://techbase.kde.org/Projects/Marble/MarbleCPlusPlus
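The tutorial linked above boils down to just a few lines. Roughly (written against the Qt 4-era API, so include paths and the theme id may have shifted since):

```cpp
// Minimal MarbleWidget application, roughly as in the linked tutorial.
// Assumes Qt and the Marble development packages are installed.
#include <QtGui/QApplication>
#include <marble/MarbleWidget.h>

using namespace Marble;

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    // Create the map control and pick a 2D map theme to display.
    MarbleWidget* mapWidget = new MarbleWidget;
    mapWidget->setMapThemeId("earth/openstreetmap/openstreetmap.dgml");
    mapWidget->show();

    return app.exec();
}
```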