I want to collect statistics from an RPG game. This data should be stored in one place, online, so I can analyse it later.
Examples of events:
Player achieved something
Player won the game in a particular way
So the question is: what is the best way, with minimum effort, to implement this functionality?
I understand that I will have to implement the message-sending functionality on the game's side anyway.
I know that this can be implemented using Amazon SQS, but this doesn't seem to be the easiest way.
Ideally it would work like this: I just send data from the game in the form of messages, and later I retrieve the data from cloud storage and parse/analyse it.
P.S. I don't want to run a server at home.
If you don't want to use an existing service, you can build a simple web app and receive events via REST URLs.
Example:
http://my.statistics.server/achieve?player=123&category=456&level=789
http://my.statistics.server/win?player=123&score=12345
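For illustration, here is a minimal sketch of such a receiving endpoint, assuming a Python server (any web stack would do; the URL paths and parameter names follow the examples above, and the log file name is made up). It just appends each event to a file for later analysis:

    # Minimal event-collection endpoint (sketch only, not production code).
    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class StatsHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            event = {
                "type": url.path.lstrip("/"),   # e.g. "achieve" or "win"
                "params": parse_qs(url.query),  # e.g. {"player": ["123"], ...}
                "ts": time.time(),
            }
            with open("events.log", "a") as f:  # later: parse/analyse this file
                f.write(json.dumps(event) + "\n")
            self.send_response(204)             # no response body needed
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), StatsHandler).serve_forever()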
You could also package the events up and append a keyed hash if you want to make falsification a bit more difficult.
Example:
http://my.statistics.server/record?packet=<base64 data..., plus HMAC>
See:
https://en.wikipedia.org/wiki/Hash-based_message_authentication_code
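As a rough sketch of that idea on the game's side (Python again, with an invented shared key and the placeholder host name from the examples above), the client base64-encodes the event, appends an HMAC over it, and sends the result as a single parameter; the server recomputes the HMAC and discards packets whose signature doesn't match:

    # Building and sending an HMAC-signed event packet (illustrative sketch).
    import base64
    import hashlib
    import hmac
    import json
    import urllib.parse
    import urllib.request

    SECRET_KEY = b"key-shared-between-game-and-server"  # assumption: shipped with the game

    def send_event(event: dict) -> None:
        payload = base64.urlsafe_b64encode(json.dumps(event).encode("utf-8"))
        signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        packet = payload.decode("ascii") + "." + signature
        query = urllib.parse.urlencode({"packet": packet})
        urllib.request.urlopen("http://my.statistics.server/record?" + query)

    send_event({"type": "win", "player": 123, "score": 12345})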
It depends on the platform you are going to write the game for, but I have used Flurry Analytics and it worked well for my mobile games:
Flurry Analytics
I tried to find the best way to store data in a local persistent store, but I did not find many resources about this.
I found only:
MotionModel
But what is the best gem/way to make an offline app? I mean, I sync with the remote store once, and after that my application uses local storage (Core Data, SQLite...) to read the data.
Thank you
I use MotionModel (heck, I wrote MotionModel) but I'm biased. It's supposed to be for use-cases where you don't want to set up the Core Data stack. That said, InfiniteRed has done a great job with RMQ so it's likely they did a great job with CDQ, which wraps Core Data.
I suggest you build a small test app with each and decide for yourself.
I prefer the way HipByte suggests in the Locations sample.
Check out the LocationsStore class and how they use Core Data in a very simple way.
You could also use CouchbaseLite and leverage its sync capabilities to make the data available offline. I created a CouchbaseLite RubyMotion example, which is a port of the TodoLite-iOS version of the app. I'm currently working on making the integration nicer and more Ruby-like, but it works as is.
I have a final project about database design this semester, and my teacher gives us many alternative tasks to choose from, such as a Student Information Management System, an Airline Reservation System, and so on. However, I want to design a player that allows users to upload their own works and share them with each other; of course, it would also provide a download service. I'm a sophomore this year. I'm familiar with C++ programming, but I do not know much about network programming. Furthermore, I learned T-SQL this semester and also did some work with MySQL in Java (also a course this semester). My idea is here (I have drawn a picture): http://tmjfzy.blog.163.com/blog/static/66447025201242553045/
I need some advice about network programming. Could you give me some to help me realize this idea? Thank you :-).
So, you're basically reinventing YouTube, but with a dedicated client?
Actually it's very easy to start without any clients or C++: all you need is a server with MySQL, Apache and PHP. I recommend WAMP server if you're on Windows. FlowPlayer is a Flash video player that is quite easy to integrate, but today using the HTML5 video features is the better idea. I believe you can have the reference web system up and running in about 3 to 6 days.
(I'm also a C++ programmer, and I had no problem learning enough HTML, PHP and JavaScript to do a very similar thing.)
Once you have a system up and running (possibly with limitations on the video file format), you can design and implement an API. On the server side it's nothing more than PHP files that return data in your format of choice (e.g. JSON or XML) instead of generating HTML.
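For example, an endpoint that lists videos does nothing more than query the database and emit JSON. Here is a rough sketch of the shape of such an endpoint, written in Python/Flask only for brevity (the answer assumes plain PHP, and the table and column names are invented):

    # JSON API endpoint sketch: return data, not HTML; the client decides presentation.
    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/videos")
    def list_videos():
        conn = sqlite3.connect("videos.db")
        rows = conn.execute("SELECT id, title, uploader, url FROM videos").fetchall()
        conn.close()
        return jsonify([
            {"id": r[0], "title": r[1], "uploader": r[2], "url": r[3]}
            for r in rows
        ])

    if __name__ == "__main__":
        app.run()

In PHP the equivalent is a file that runs the same query and echoes the json_encode() of the result.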
With the server-side API done, you can start working on the client. Registration, login, upload from file and download to file should come first. Once the client can fetch the video files, you can implement a player, and a streaming player after that. If you encounter problems playing back the video files, this is a good moment to break compatibility with the web version and change the video file format. Once you know which playback formats are supported, you might implement conversion before upload. (Conversion makes sense if you want to have all files on the server in one format. Otherwise it's not really useful: if a client can convert a video, it should be able to play it back, which means all the other clients should also understand the format and be able to play it back.)
At this point you can consider rewriting the server, or another student can work on the server while you're busy with the client.
Having the working Apache/PHP reference available the whole time makes such parallelism a breeze.
All of the above uses HTTP as the underlying protocol. I think Qt has built-in support for it; if not, you can use a library (like cURL) or implement it from scratch on sockets, as sketched below.
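The request side of HTTP is small enough to hand-roll if you go the from-scratch route; here is a bare-bones GET over a raw TCP socket (shown in Python rather than C++ just to keep it short; a real client also needs chunked transfer, redirects, timeouts and so on):

    # Minimal HTTP GET over a plain TCP socket, for illustration only.
    import socket

    def http_get(host: str, path: str, port: int = 80) -> bytes:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        ).encode("ascii")
        with socket.create_connection((host, port)) as sock:
            sock.sendall(request)
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)  # status line + headers + body

    print(http_get("example.com", "/")[:200])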
Eventually, streaming (e.g. RTP+RTSP) can be added for playback.
If you feel really adventurous, you can start designing your own protocol, but that is the very last step, after having both your own client and your own server working flawlessly over hand-implemented HTTP.
I am starting to evaluate frameworks for an HTML5 app. I really like the Enyo model for developing an app. However, my app needs an object-relational mapper (ORM) for local storage and some way to update the UI based on changes in the ORM data.
It looks like Ember has some great linkages for the ORM and update parts.
Has anyone used these two together? Does it makes sense or do either of them (by themselves) already address the entire problem space?
Thanks in advance,
Charlie
I've not tried to integrate them directly yet, but I think that the Enyo event model would work well here. Have the ORM live as a top-level component in your application and have it broadcast data change messages into your tree of components using enyo.waterfall() or enyo.waterfallDown().
I do something similar in a cryptogram game I'm working on: I use that mechanism to broadcast information about the player's guesses into the view tree, where individual cells use it to update their display.
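The underlying pattern is simply a data layer broadcasting change notifications to whichever components are listening. A language-agnostic sketch of that idea (plain Python, not the Enyo or Ember API) looks roughly like this:

    # "Data layer broadcasts changes into the view tree" pattern, sketched generically.
    class DataStore:
        def __init__(self):
            self._records = {}
            self._listeners = []              # view components that care about changes

        def subscribe(self, listener):
            self._listeners.append(listener)

        def save(self, key, value):
            self._records[key] = value
            for listener in self._listeners:  # "waterfall" the change downwards
                listener.on_data_changed(key, value)

    class ScoreLabel:
        def on_data_changed(self, key, value):
            if key == "score":
                print(f"ScoreLabel now shows: {value}")

    store = DataStore()
    store.subscribe(ScoreLabel())
    store.save("score", 42)                   # the label updates itself in response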
I have a Windows Phone 7 app that (currently) calls an OData service to get data and throws the data into a listbox. It is horribly slow right now. The first thing I can think of is that OData returns way more data than I actually need.
What are some suggestions/best practices for speeding up the fetching of data in a Windows Phone 7 app? Is there anything I could be doing in the app to speed up the retrieval of data and get it in front of the user faster?
Sounds like you've already got some clues about what to chase.
Some basic things I'd try are:
Make your HTTP requests as small as possible - if possible, only fetch the entities and fields you absolutely need (see the sketch after this list).
Consider using multiple HTTP requests to fetch the data incrementally instead of fetching everything in one go (this can, of course, actually make the app slower, but it generally makes the app feel faster).
For large text transfers, make sure that the content is being compressed for transfer (this should happen at the HTTP level, e.g. gzip).
Be careful that the XAML rendering the data isn't too bloated - a large XAML structure repeated in a list can cause slowness.
When optimising, never assume you know where the speed problem is - always measure first!
Be careful when inserting images into a list - the MS MarketPlace app often seems to stutter on my phone - and I think this is caused by the image fetch and render process.
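On the first two points, OData's standard query options ($select, $top, $skip) let you ask the service for only the fields and pages you actually need. Here is a rough sketch of the kind of request that results (shown as a plain HTTP call in Python rather than via the WP7 client library; the service URL, entity set and field names are invented):

    # Trimming an OData request with $select / $top / $skip (illustrative only).
    import json
    import urllib.parse
    import urllib.request

    base = "http://example.com/MyService.svc/Items"
    query = urllib.parse.urlencode({
        "$select": "Id,Name,Price",   # only the fields the listbox actually shows
        "$top": "20",                 # first page only; fetch more as the user scrolls
        "$skip": "0",
        "$format": "json",            # JSON is smaller on the wire than ATOM/XML
    })
    with urllib.request.urlopen(base + "?" + query) as resp:
        page = json.loads(resp.read().decode("utf-8"))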
In addition to Stuart's great list, also consider the format of the data that's sent.
Check out this blog post by Rob Tiffany. It discusses performance based on data formats. It was written specifically with WCF in mind but the points still apply.
As an extension to Stuart's list:
In fact there are three areas - communication, parsing, and UI. Measure them separately (a small sketch follows this list):
Do just the communication with the processing switched off.
Measure the parsing of a fixed OData-formatted string.
Believe it or not, it can also be the UI.
For example, bad usage of a ProgressBar can result in a dramatic decrease in processing speed. (In general you should not use any UI animations, as explained here.)
Also, make sure that the UI processing does not block the data communication.
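As a rough illustration of measuring the first two areas separately (plain Python rather than Windows Phone code, with a placeholder URL), time the raw download and the parse step independently before blaming either one:

    # Time the download and the parse step separately to see which one actually hurts.
    import json
    import time
    import urllib.request

    t0 = time.perf_counter()
    raw = urllib.request.urlopen("http://example.com/MyService.svc/Items?$format=json").read()
    t1 = time.perf_counter()
    data = json.loads(raw.decode("utf-8"))
    t2 = time.perf_counter()

    print(f"download: {t1 - t0:.3f}s, parse: {t2 - t1:.3f}s, bytes: {len(raw)}")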
I have been developing a Mac Desktop app with an iOS device counterpart.
Basically I want to upload event information (music gigs etc.) from the Desktop to an online database and be able to read (only) the information whilst mobile.
I've got both apps working, using Core Data (with an SQLite store - I was going to use XML, but iOS doesn't seem to let me do that), but I'm at a loss when it comes to the web services part.
I've been googling and checking docs involving SQLite, XML, JSON, NSXMLParser (do I need RESTful services?) and umpteen other things, and I'm just getting nowhere fast.
Could someone explain the principle involved? Do I actually need Core Data? Do I have to convert the SQLite data to XML and back again to read it on an iOS mobile device?
I feel I'm making this out to be way more complicated than it should be - or is it?
Hoping someone can put me straight. Hope I've given enough information.
Here is what I do, and I have done many web-service iOS apps: I expose the data as JSON from a web page, call it, and then use SBJsonParser, which parses the JSON into native objects like a dictionary or an array of dictionaries; then I display the data. It really is very simple.
Then, at a specific time such as viewDidLoad, I fetch the JSON file. Remember, the JSON document can come from a web service or just be a static text file - whatever you need. JSON doesn't need extra code, is extremely lightweight, and parses cleanly into native objects. Less work for you.
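To make the principle concrete, here is the same flow sketched in Python (the answer uses SBJsonParser in Objective-C; the URL and keys here are invented): fetch a JSON document over HTTP and you immediately have native dictionaries and arrays to display:

    # Fetch JSON, parse into native objects, display - the whole web-services part.
    import json
    import urllib.request

    with urllib.request.urlopen("http://example.com/gigs.json") as resp:
        gigs = json.loads(resp.read().decode("utf-8"))   # a list of dictionaries

    for gig in gigs:
        print(gig["band"], "at", gig["venue"], "on", gig["date"])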