Is it possible to tell Bullet that something has happened in the past, so it can take this information and adjust its internal interpolation to show the change in the present?
There would never be a need to go back in time more than 1-5 seconds (5 being a very rare occasion); more realistically, most of the changes would fall between 1.5 and 2.5 seconds.
The ability to query an object's position, rotation, and velocities at a specific time in the past would be needed as well, but this can easily be accomplished.
The reason behind all of this would be to make it easier to synchronize two physics simulations, specifically in a networked environment.
A server constantly running the simulation in real time would send position, rotation, and velocity updates to client simulations at periodic intervals. Due to network latency, these updates would arrive at the client "in the past" from the client simulation's perspective. Hence the need to query the updated objects' values in the past to see whether they differ, and, if they do, to change those values in the past as well. During the next simulation step, Bullet would take these past changes into account and update the objects accordingly.
Is this ability present in Bullet, or would it need to be emulated somehow? If emulation is needed, could someone point me in the right direction to get started on this "rewind and replay" feature?
If you are unfamiliar with "rewind and replay", this article goes into great detail about the theory behind an implementation, aimed at someone creating their own physics library: http://gafferongames.com/game-physics/n ... d-physics/
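To make the idea concrete, here is a minimal sketch of how such a history might be emulated on top of Bullet, assuming a fixed 60 Hz timestep. The Snapshot/History bookkeeping is invented for this example; only the btRigidBody accessors and stepSimulation are actual Bullet API, and a real implementation would have to snapshot every dynamic object rather than a single body:

```cpp
#include <btBulletDynamicsCommon.h>
#include <deque>

// One saved state per fixed step; 300 frames is ~5 s of history at 60 Hz.
struct Snapshot {
    btTransform transform;
    btVector3   linearVel;
    btVector3   angularVel;
};

struct History {
    std::deque<Snapshot> frames;   // oldest at the front
    static const size_t kMaxFrames = 300;

    void record(const btRigidBody& body) {
        frames.push_back({body.getWorldTransform(),
                          body.getLinearVelocity(),
                          body.getAngularVelocity()});
        if (frames.size() > kMaxFrames) frames.pop_front();
    }
};

// A server update arrived stepsAgo fixed steps in the past: rewind the body
// to the corrected state, then re-simulate step by step back to the present.
void rewindAndReplay(btDiscreteDynamicsWorld& world, btRigidBody& body,
                     History& hist, size_t stepsAgo, const Snapshot& serverState) {
    const btScalar fixedDt = btScalar(1.0 / 60.0);
    body.setWorldTransform(serverState.transform);
    body.setLinearVelocity(serverState.linearVel);
    body.setAngularVelocity(serverState.angularVel);
    hist.frames.resize(hist.frames.size() - stepsAgo);  // drop the stale frames
    for (size_t i = 0; i < stepsAgo; ++i) {
        world.stepSimulation(fixedDt, 0);               // one exact fixed step
        hist.record(body);
    }
}
```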
I'm doing some tests with an HTML map in conjunction with Leaflet. Server-side I have a Ruby Sinatra app serving JSON markers fetched from a MySQL table. What are the best practices for working with 2k-5k, and potentially more, markers?
Load all the markers in the first place and then delegate everything to Leaflet.markercluster.
Load the markers every time the map viewport changes, sending the southWest & northEast points to the server, do the clipping server-side, and then sync the client-side marker buffer with the server-fetched entries (what I'm doing right now).
A mix of the two above approaches.
Thanks,
Luca
A few months have passed since I originally posted the question and I made it through!
As @Brett DeWoody correctly noted, the right approach is strictly related to the number of DOM elements on the screen (I'm referring mainly to markers): the faster the device (CPU especially), the more it can handle. Since the app I was developing targets both desktop and tablet devices, CPU was a relevant factor, just like the marker density of different geo-areas.
I decided to separate database querying/fetching from map representation/displaying. Basically, the user adjusts controls/inputs to filter the whole dataset, then the matching records are fetched and Leaflet.markercluster does the job of representation. When a filter is modified, the cycle starts over. Users can choose the zoom level at which clustering occurs depending on their CPU power.
In my particular scenario, the approach described above turned out to be the best one (verified with console.time). I found that viewport optimization only paid off in lower marker-density areas (a pity).
Hope it may be helpful.
Cheers,
Luca
Try options and optimize when you see issues rather than optimizing early. You can probably get away with just Leaflet.markercluster unless your markers have lots of data attached to them.
I have been learning about AWS recently and I had an idea. My personal laptop is small; I use it for coding and keep all my files in the cloud. My work laptop is huge, but I can't game on it.
I have a few games I'm looking at on Steam that just wouldn't fit on my personal laptop. Do you think it would be cost-effective to download them onto a server through AWS and play them like that?
To continue my comment: cost-effectiveness is not really the issue; rendering and lag are.
On your PC/laptop you want to go for 60 frames per second (fps), meaning at most ~17 ms of rendering per frame. Depending on the type of game you can get by with 30 fps, but e.g. for first-person shooters you might even try to go for 144+ fps.
The key thing to understand is that your action, e.g. moving around or clicking something, has to be rendered as soon as possible, i.e. at most ~17 ms after you do it. You will immediately notice high input lag, meaning a "long" time between an input being issued and that input taking effect. If you move the rendering to the server, the frame does not only need to be rendered, it also needs to be transferred to your machine. Depending on your bandwidth and the image size, this can naively take a second, which is extremely far beyond anything that is actually playable. And that does not yet account for sending your interaction to the server so the server can render it.
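To put rough numbers on that (purely illustrative figures, assuming an uncompressed 1080p frame and a 50 Mbit/s link):

```cpp
#include <cstdio>

int main() {
    // Back-of-the-envelope estimate; the link speed is an assumption.
    const double frameBytes  = 1920.0 * 1080.0 * 3.0;          // ~6.2 MB raw
    const double linkBitsSec = 50e6;                           // 50 Mbit/s
    const double transferSec = frameBytes * 8.0 / linkBitsSec; // ~1 s per frame
    std::printf("frame: %.1f MB, naive transfer: %.2f s "
                "(frame budget at 60 fps: ~0.017 s)\n",
                frameBytes / 1e6, transferSec);
}
```

Real streaming services get around this with heavy video compression and dedicated encoding hardware, but the orders of magnitude show why a naive setup won't feel playable.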
Long story short: it is not feasible for you to implement. As already mentioned, Google Stadia tries to achieve this, but you can be sure there are A LOT of optimizations in place to make it work.
I'm trying to build a two-wheeled balancing robot for fun. I have all of the hardware built and put together, and I think I have it coded as well. I'm using an IMU with gyro and accelerometers to find my tilt angle, with a complementary filter for smoothing the signal. The input signal from the IMU seems pretty smooth, with less than 0.7 of variance around the actual tilt angle.
My IMU sampling rate is 50 Hz and I do a PID calculation at 50 Hz too, which I think should be fast enough.
Basically, I'm using the PID library found at PID Library.
When I set the P value to something low, the wheels go in the right direction.
When I set the P value to something large, I get an output like the one in the graph.
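For reference, here is a minimal sketch of the kind of 50 Hz filter-plus-PID loop I described above; the gains and the 0.98 filter coefficient are illustrative placeholders, not values from my actual code:

```cpp
// 50 Hz complementary filter + PID update, plain C++ sketch.
struct ComplementaryFilter {
    float alpha = 0.98f;   // trust the gyro short-term, the accel long-term
    float angle = 0.0f;    // estimated tilt, degrees

    float update(float gyroRateDegSec, float accelAngleDeg, float dt) {
        angle = alpha * (angle + gyroRateDegSec * dt)
              + (1.0f - alpha) * accelAngleDeg;
        return angle;
    }
};

struct Pid {
    float kp = 1.0f, ki = 0.0f, kd = 0.0f;   // placeholder gains
    float integral = 0.0f, prevError = 0.0f;

    float update(float error, float dt) {
        integral += error * dt;
        const float derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Every 20 ms tick (50 Hz):
//   float tilt  = filter.update(gyroRate, accelAngle, 0.02f);
//   float motor = pid.update(0.0f - tilt, 0.02f);   // setpoint = upright
```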
From the graph it looks like your system is not stable.
I hope you have tested each subsystem of your robot before going straight to tuning, i.e. that both sensors and actuators are responding properly and with acceptable error. Once each subsystem is properly calibrated against external error, you can start tuning.
Once this is done, you can start with a reasonable value of P (maybe 0.5) to first achieve a proper response time; you will need to do some trials here. Then increment I slowly to cut down any steady-state error, and use D only when required (in case of oscillation).
I would suggest handling P, I, and D one by one instead of tweaking them all at the same time.
Also, during testing you will need to continuously monitor your sensor and actuator data to see that they stay in an acceptable range.
As Praks wrote, your system looks as if it is either unstable or perhaps marginally stable.
Generally, two-wheeled robots can be quite difficult to control, as they are inherently unstable without a controller.
I would personally try a PD controller first, and if you have problems with setpoint accuracy I would use a PID. Just remember that if you want a derivative gain in your controller (the D part), it is extremely important that you have a very smooth signal.
Also, the values of the controller depend greatly on your hardware setup (weight and weight distribution of the robot, motor coefficients, and voltage levels) and on the units you use internally in your software for the control signals (e.g. mV vs. V, degrees vs. radians). This means it will be almost impossible for anybody to guess the correct parameters for you.
What a control engineer could do would be to make a mathematical model of the robot and analyse the pole/zero locations.
If you have any experience with control theory you can take a look at the following paper, and see if it makes sense to you.
http://dspace.mit.edu/bitstream/handle/1721.1/69500/775672333.pdf
There are many heuristic rules for PID tuning out there, but what most people fail to realize is that PID tuning should not be a heuristic process; it should be based on math and science.
What @Sigurd V said is correct: "What a control engineer could do would be to make a mathematical model...", and this can get as complicated as you want. But nowadays there are many software tools that can help you automate all the math and get your desired PID gains quite easily.
Assuming all your hardware is in good shape, you can use a free online tool like PidTuner to input your data and get near-optimal PID gains. I have personally used it and achieved good results. Use these as a starting point and then tune manually if required.
If you haven't already, I'd suggest you do a search on the terms Arduino PID (obvious suggestion but lots of people have been down this road). I remember when that PID library was being written, the author posted quite a bit with tutorials, etc. (example). Also I came across this PIDAutotuneLibrary.
I wrote my own PID routines but also had a heck of a time tuning and never got it quite right.
I'm currently working on a C++/SDL/OpenGL game. I've already made a few small games, but only local ones (no netcode). So I know how to make the engine, but I'm unsure about the netcode.
Can I first create the full engine for split-screen play and add the netcode later on, or will that make everything complicated? Do I already have to take netcode into consideration while programming the basic game engine, or is it also okay to just put it on top of the game after it runs fine on one machine?
It's a 2D shooter type game, if that matters. And no, I don't want to change my choice of programming language/window manager/API, because I have already implemented the bare bones of the game. I'm just curious how this issue is best approached.
In theory, all you need is a good enough design: write enough abstract classes and BAM! you can swap one user interface (i.e. local-only) for another (networked). I wouldn't believe the theory, though.
It's possible to do what you want, but it involves taking into consideration all of the new issues that come with networked gameplay: syncing views for multiple users, what to do when one user drops their network link (and how to detect that in the first place, of course), network latency in receiving user input, handling lag on one side and not the other. Networked programming is completely different, and some of its aspects (largely the ones dealing with synchronization) may impact your core engine itself. Even "just showing two views" gets a lot tougher, because you now have data on two completely different machines, and the data isn't necessarily the same.
My suggestion would be to do the opposite of what you're hoping for. Get the networking code working first with minimal graphics. In fact, console messages will be far more important than pretty graphics. You already have experience with making the graphics of other games - work the most questionable technology first. Get a good feel of all the things the networked code will ask of you, then focus on the graphics afterwards.
Normally, for a network-oriented game there are five concepts to keep in mind:
events
dispatcher
synchronization
rendering
simulation
Events. A game engine is event-driven software: based on the state of each generic object in the game (a unit, the GUI, etc.), you take an action, i.e. you call a function or do nothing.
Dispatcher. The dispatcher takes each event change and routes it to the appropriate subsystem (see the sketch after these descriptions).
Synchronization means that when an event changes, all clients on the network must be notified of that change through their dispatchers; this way all players can see the changes of the other players, and everyone renders and simulates the same things at the same time.
Rendering. The renderer reads the parameters and relevant states of each object and draws them on screen. For example, if each unit has a property named life_points, you can draw a normal unit if life_points > 50, a damaged unit if 0 < life_points < 50, and a destroyed unit if life_points == 0. The renderer doesn't make changes to objects; it just draws what it reads from them.
Simulation. The simulation reads every object and performs tasks based on its states and properties. For example, if a unit has zero life points you mark its state as DEAD (say), or update the GUI; or if a unit gets close to a unit of an enemy team, you change its state from static to moving toward that other unit. On top of this, here you run the physics of the units, changing positions, rotations, and so on. Since all objects are synchronized over the network, everybody will be watching the same thing.
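A bare-bones sketch of how the event/dispatcher part might fit together; the types and names are made up for this example, not a prescribed API:

```cpp
#include <functional>
#include <unordered_map>
#include <vector>

enum class EventType { UnitMoved, UnitDamaged, UnitDied };

struct Event {
    EventType type;
    int       objectId;
    float     x = 0.0f, y = 0.0f;   // payload; a real engine would generalize this
};

class Dispatcher {
public:
    using Handler = std::function<void(const Event&)>;

    // Subsystems (simulation, rendering, network sync) subscribe to events.
    void subscribe(EventType type, Handler handler) {
        handlers_[type].push_back(std::move(handler));
    }

    // Each event is routed to every subsystem that registered for it.
    void dispatch(const Event& e) {
        for (auto& handler : handlers_[e.type]) handler(e);
    }

private:
    std::unordered_map<EventType, std::vector<Handler>> handlers_;
};
```

The network layer then becomes just another subscriber: it serializes each local event and forwards it to the other clients' dispatchers, which is what keeps everyone rendering and simulating the same thing.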
Best regards.
Add in netcode as soon as you can. If you don't do this you may have to overhaul a lot of the engine later in the dev cycle, so better to do it early.
It also depends on how complex the game is, but the same principles still stand. Best not to tack it on at the last second.
Hope this helps!
Since there's no complete BPM framework/solution in ColdFusion as of yet, how would you model a workflow into a ColdFusion app that can be easily extensible and maintainable?
A business workflow is more than a flowchart that maps nicely into a programming language. For example:
How do you model a task X that is followed by multiple tasks Y0, Y1, Y2 happening in parallel, where Y0 is a human process (it has to wait for input), Y1 is a web service that might go wrong and might need automatic retry, and Y2 is an automated process; followed by a task Z that should only be carried out when all the Y's are complete?
My thoughts...
Seems like I need to do a whole lot of storing / managing / keeping track of states, and frequent checking with cfschedule.
cfthread ain't going to help much since some tasks can take days (e.g. waiting for a user's confirmation).
I can already imagine the flow being spread around multiple UDFs, the DB, and CFCs.
Is there any open-source workflow engine in another language that we could maybe port over to CF?
Thank you for your brain power. :)
Study the Java Process Definition Language specification; JBoss has an execution engine for it. Using this Java-based engine may be your easiest solution, and it solves many of the problems you've outlined.
If you intend to write your own, you will probably end up modelling states and transitions, vertices and edges in a directed graph; these, as Ciaran Archer wrote, are the components of a state machine. The best persistence approach, IMO, is capturing versions of whatever data is being sent through the workflow via serialization, along with the current state and a history of transitions between states and changes to that data. The mechanism probably also needs a way to keep track of who or what is responsible for taking the next action against that workflow.
Based on your question, one thing to consider is whether or not you really need to represent parallel tasks in your solution. Instead, it might be possible to enqueue a set of messages and then specify a wait state for all of them to complete, as sketched below. Representing actual parallelism implies you are moving data simultaneously through several different processes, in which case you need an algorithm to resolve the deltas when the processes join again, which is very much a non-trivial task.
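As a sketch of that wait-state idea (hypothetical names; in a CF app the pending set would live in a database table rather than in memory):

```cpp
#include <set>
#include <string>
#include <utility>

// Task Z may only start once every Y task has reported completion.
class WaitState {
public:
    explicit WaitState(std::set<std::string> pending)
        : pending_(std::move(pending)) {}

    // Called when a Y task (human step, web service, automated job) finishes.
    // Returns true when the last one completes, i.e. when Z may be started.
    bool complete(const std::string& taskId) {
        pending_.erase(taskId);
        return pending_.empty();
    }

private:
    std::set<std::string> pending_;   // e.g. {"Y0", "Y1", "Y2"}
};
```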
In the context of ColdFusion and what you're trying to accomplish, a scheduled task may be necessary if the system you're writing needs to poll other systems. Consider WDDX as a serialization format; JSON, while seductively simple, has (as I recall) some edge cases around numbers and dates that can cause you grief.
Finally see my answer to this question for some additional thoughts.
Off the top of my head, I'm thinking about the State design pattern with state persisted to a database. Check out the Gumball Machine example in Head First Design Patterns.
Generally this will work if you have something (like a client / order / etc.) going through a number of changes of state.
Different things will happen to your object depending on what state you are in, and that might mean sitting in a database table waiting for a flag to be updated by a user manually.
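A rough sketch of the pattern (in C++ for brevity; the same shape translates to CFCs, with the current state name persisted as a column on the record):

```cpp
#include <memory>
#include <string>

class Workflow;

// Each state decides what "advance" means and which state comes next.
class State {
public:
    virtual ~State() = default;
    virtual std::string name() const = 0;   // this is what you persist
    virtual void advance(Workflow& wf) = 0;
};

class Workflow {
public:
    explicit Workflow(std::unique_ptr<State> s) : state_(std::move(s)) {}
    void setState(std::unique_ptr<State> s) { state_ = std::move(s); }
    void advance() { state_->advance(*this); }
    std::string current() const { return state_->name(); }
private:
    std::unique_ptr<State> state_;
};

class Approved : public State {
public:
    std::string name() const override { return "APPROVED"; }
    void advance(Workflow&) override { /* terminal state */ }
};

class AwaitingApproval : public State {
public:
    std::string name() const override { return "AWAITING_APPROVAL"; }
    void advance(Workflow& wf) override {
        // Fires when, say, a user finally updates the approval flag.
        wf.setState(std::make_unique<Approved>());
    }
};
```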
In terms of other languages I know Grails has a workflow module available. I don't know if you would be better off porting to CF or jumping ship to Grails (right tool for the job and all that).
It's just a thought, hope it helps.