I want to make a 2D game that is basically a physics-driven sandbox / activity game. There is something I really do not understand, though. From my research, it seems like updates from the server should only arrive about every 100ms. I can see how this works for the local player, since they can just concurrently simulate physics and do lag compensation through interpolation.
What I do not understand is how this works for updates from other players. If clients only get notified of player positions every 100ms, I do not see how that works because a lot can happen in 100ms. The player could have changed direction twice or so in that time. I was wondering if anyone would have some insight on this issue.
Basically how does this work for shooting and stuff like that?
Thanks
Typically, what will happen is that the player will view the game 100ms behind how it actually is. That, and you have to accept that some game designs just require faster server updates. Typically, games that need faster server updates use a client/server model where the server computes all the physics, not the client. You've seen the traditional RTS model: peer-to-peer, slow updates. But the traditional model for FPS games is client/server with fast updates, for this exact reason.
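To make that concrete, here is a minimal sketch of the usual snapshot interpolation for remote players, with a hypothetical Snapshot type: keep the last couple of server updates for each remote player and render a position blended between them, roughly one update interval in the past.

// One server update for a remote player (hypothetical fields).
struct Snapshot { double time; float x, y; };

// renderTime is "now" minus the interpolation delay (roughly one update interval, e.g. 100 ms).
Snapshot interpolate(const Snapshot& prev, const Snapshot& next, double renderTime) {
    double t = (renderTime - prev.time) / (next.time - prev.time);
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;   // clamp so we never extrapolate wildly
    Snapshot out;
    out.time = renderTime;
    out.x = prev.x + (next.x - prev.x) * (float)t;
    out.y = prev.y + (next.y - prev.y) * (float)t;
    return out;
}

Direction changes inside one interval are smoothed over rather than reproduced exactly, which is why the remote view is always a little behind.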
There is a forum entry and a whole multiplayer/network FAQ on GameDev on this topic that might be a good first read.
You have to design your game to take this into account. For example, if you're shooting a target that could move out of the way suddenly, just have shells with proximity fuses and make lots of noise and pretty colours for a while, hiding the fact that neither you nor your player actually knows yet whether they hit the target.
In fact this is pretty close to real physics: it takes time for the consequences of actions to be observed.
For the quick and concise: I want to find out how to work with .ogg files in Visual C++, i.e. put them in a format the PC can play, have tools to control how they're played, and be able to play multiple at once, all limited to a 2D scale. Preferably at no or very little cost. I've also heard about streaming audio, but I'm not sure what the details of that are. The rest is an explanation of my thought process on the whole thing, which is likely flawed in areas this shorthand doesn't convey, but getting the shorthand down will make that apparent eventually.
So I'm working on a game which will be wholly in 2D; for purposes of self-improvement I want to try to build the engine myself and work things out the best I can. For audio I'm thinking ahead about how to program it while I finish the visual side of things, and I have an idea of what I want to be able to do with sound, but I'm inexperienced and probably have a flawed sense of the basic process. So I'll list the functions I'll need and my current mindset of how to approach them; after that I'll clarify my issues with .ogg. The functions I know I'll need at the moment are:
Ability to play audio files
Ability to change volume of audio files mid playback
Ability to start playback at a specific time (this, along with the above, lets me fade a background song out and play a different version of the same background music from where the last one dropped off)
Ability to play them in a loop
Ability to play multiple audio files at the same time (sound effects with background music.)
Ability to pan sounds to one ear or the other when using headphones and such.
Now for my mindset of how to approach it:
When loading, decode the audio files into objects I can reference later in that same screen, scene, or event, until I no longer need them and can delete them.
Have events which call audio effects to play sounds or change things, e.g. the player gets hit, a sound plays, then it ends.
I think I'll need buffers or channels to play them: one channel for background music, another for a secondary background track that fades in as the current one fades out and is removed, which frees up the first channel for the next crossfade. Then a few more channels to make space for sound effects, say 10 that stay empty but let me play a sound in one, clear it, and play some other effect in that same channel later; if I need to play multiple sound effects at once, I can play up to 10 at the same time. What I'm not sure about is whether I can reuse channels freely by passing them whatever decoded sound I want, or whether each channel ends up tied to one particular sound.
Then every frame would progress the sound being played, though I'm not exactly sure I need to or will do this. I intend to work online play into my game using rollback netcode. From my experience, sounds don't freeze or progress based on connection, they just run on: if the game freezes while waiting for the opponent's connection, the background music plays as normal. If it freezes on a hit, the hit sound should just play out once and end, so desync there should be fine, but I am curious how this works and whether there is a good reason for it. I also heard the term "streaming" dropped; I'm trying to understand how to go about it, but I've struggled to get a good grasp on it.
That's about it for how I think I should go about it. If I had to come up with some rough pseudocode, it would probably be:
LoadScene(..., file backgroundSoundA, file backgroundSoundB, file soundEffectHit) {
    ...
    backSADecoded = new DecodedSound(backgroundSoundA)
    ...
    scene.setBackSoundA(backSADecoded)
    ...
    scene.makeBuffers(2, 10)
    // makeBuffers comes from a Buffer class and creates 2 background buffers and 10 effect
    // buffers that the scene can reference later for those purposes. A buffer uses a switch
    // statement on a number given later to decide how to play the sound, e.g. 1 plays the
    // sound once and then empties the buffer. The Buffer class also decides which effect or
    // background buffer to use by checking which ones aren't currently playing a sound.
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class Collision {
    strikeHit(..., scene) {
        ...
        scene.useEffectBuffer(scene.getSoundEffectHit(), 1)
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(The pseudocode above for handling the scene stuff could also be flawed, because I haven't gotten to making the scene system with load screens and such yet; first I want to get what I need working in one scene, then work on handling the transitions. Though if you have advice on that, I'm all ears too.)
Aside from any potential flaws in my plan, once I can play sound I'm not sure how or what to do about working with .ogg, which I planned to use since it's a smaller file size, and that seems wise given I'm planning to have 3 versions of every song, each of which can run upwards of 2 minutes. I've heard of OpenAL, but I'm confused about the licensing, and since I hope to sell the game one day and want to reduce costs, I'm trying to avoid fees where I can. Secondly, I hear it has quite a lot of 3D features, and since all my actions are intended for 2D and at most I'll pan sounds, I'm not sure whether it's worth it or whether I can manage something more streamlined and focused on my needs. Realistically, all I should need is a way to format the audio so the PC can play it, tools to handle how it's played, and an understanding of how to keep the PC playing it while the rest of the game updates logically and visually at 60 fps.
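For what it's worth, here is a minimal sketch of the kind of code involved, assuming OpenAL Soft (an LGPL implementation, generally usable in commercial games when dynamically linked) for playback and the public-domain stb_vorbis single-file decoder for .ogg; the file path and gain values are made up and error handling is omitted.

#include <AL/al.h>
#include <AL/alc.h>
#include "stb_vorbis.c"   // single-file .ogg decoder
#include <cstdlib>

void playOnce(const char* path) {
    // One-time setup in a real engine: open the default device and make a context current.
    ALCdevice* device = alcOpenDevice(NULL);
    ALCcontext* context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    // Decode the whole .ogg into 16-bit PCM (fine for short effects; long music is usually streamed).
    int channels = 0, sampleRate = 0;
    short* pcm = NULL;
    int frames = stb_vorbis_decode_filename(path, &channels, &sampleRate, &pcm);

    // Upload the PCM into an OpenAL buffer and play it from a source (a "channel").
    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alGenSources(1, &source);
    alBufferData(buffer, channels == 2 ? AL_FORMAT_STEREO16 : AL_FORMAT_MONO16,
                 pcm, frames * channels * (int)sizeof(short), sampleRate);
    free(pcm);

    alSourcei(source, AL_BUFFER, buffer);
    alSourcef(source, AL_GAIN, 0.8f);                    // volume, changeable mid-playback
    alSourcei(source, AL_LOOPING, AL_FALSE);             // AL_TRUE to loop background music
    alSource3f(source, AL_POSITION, -0.5f, 0.0f, 0.0f);  // crude left/right pan (mono sources only)
    alSourcePlay(source);
}

Long background tracks are usually "streamed" instead: decode a fraction of a second at a time and queue the chunks onto the source with alSourceQueueBuffers, so the whole song never has to sit decoded in memory.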
Edit based on first comment: I intend to keep it usable on multiple systems but limited to Windows, focusing on Windows 10 currently. I am working in Visual Studio and so far have just used what's available there for my needs, so I think the only framework would be .NET. As for video playback, I currently don't have a need for it, since I plan to work mostly with 2D images and animation and can't find a purpose for it. As for licenses, I'm not well versed on that topic, but I'm trying to stay free for commercial use if possible.
I am thinking about starting a new project with the purpose of mapping and navigating a maze with a cluster of robots. The number of robots I am thinking about is 2 or 3.
The following assumptions are made :
The robots are fitted with a camera each to help detect the walls of the maze
The size and shape of the maze is unknown and can be changed according to will
The way the robots should work is that they should communicate and efficiently divide the task of mapping and navigation among themselves.
I am studying Electrical Engineering and have no previous experience with maze planning/solving with robotics. I would like to know how to begin with this, and more importantly the resources I should be looking at. Any suggestions of books, websites, or forums are welcome.
The microcontroller I am planning to use is Arduino Uno. I am familiar with it and it has very good support online. Thus it seems to be a good choice. Also, I will have around 2 months to finish the project. Is that amount of time enough to accomplish the aforementioned things?
A single robot driving around a maze by simple sensor-to-motor rules is essentially a Braitenberg vehicle. A group of such robots is a multi-robot formation, which implies that the agents must coordinate their behavior. In the literature such setups are called "signaling games", because a sender has private access to an event and must share this information with the group. For example, robot1 has detected a wall and sends a status update to another robot.
In its easiest form, a signaling game is modeled with a lexicon, that is, a list of possible messages between the robots, for example: 0=detectwall, 1=walkahead, 2=stop. As a reaction to a received signal, a robot can adapt its behavior and update its map of the maze. Sometimes this idea is called a distributed map-building algorithm, because the information is only partially available to each robot.
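A minimal sketch of such a lexicon and a handler, assuming messages arrive as single bytes over whatever link the robots share (the message names simply mirror the example above):

enum Signal { DETECT_WALL = 0, WALK_AHEAD = 1, STOP = 2 };

// Called whenever a byte arrives from another robot (e.g. over a serial radio link).
void handleSignal(unsigned char msg) {
    switch (msg) {
        case DETECT_WALL: /* mark the reported wall in this robot's copy of the shared map */ break;
        case WALK_AHEAD:  /* the sender is moving on; keep exploring this branch */          break;
        case STOP:        /* halt the motors and wait for the next instruction */            break;
    }
}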
A basic example is the case where two robots are driving toward each other (the Chicken Game). They must communicate about their evasive strategy to prevent a collision; if both robots decide to swerve in the same direction, they will still collide. The problem is that neither knows what the other robot is planning.
I'm trying to build a screen-flashing application that flashes the screen according to the music (which will be frequencies, such as healing frequencies, etc.).
I already made the player and know how I will make the screen flash, but I need to make the screen flash very fast according to the music; for example, if the music speeds up, the screen should flash faster. I understand that I would achieve this with an FFT or DSP (I only need to know when the level rises above some frequency threshold, let's say 20 Hz, to change the colour and make the screen flash).
But I've found that I understand NOTHING about them, let alone how to implement them in my application.
Can somebody help me learn either of them? My email is sismetic_chaos#hotmail.com. I really need help; I've been stuck for about 3 days, not coding or doing anything at all, just trying to understand, but I don't.
PS: My application is written in C++ and Qt.
PS: Thanks for taking the time to read this and the willingness to help.
Edit: Thanks to all for the answers. The problem is in no way solved yet, but I appreciate all the responses; I didn't think I would get so many answers and so much info. Thanks to you all.
This is a difficult problem, requiring more than an FFT. I'll briefly describe how I implemented beat detection when I was writing software for professional DJ equipment.
First of all, you'll need to cut down the amount of data you're dealing with, since there are only two or three beats per second, but tens of thousands of samples. You'll also need to look at different frequency ranges, since some types of music carry the tempo in the bassline, and others in percussion or other instruments. So pass the signal through several band-pass filters (I chose 8 filters, each covering one octave, from low bass to high treble), and then downsample each band by averaging the power over a few hundred samples.
Every few seconds, you'll have a thousand or so samples in each band. Your next tool is an autocorrelation, to identify repetitive patterns in the music. The peaks of the autocorrelation tell you what the beat is more or less likely to be; but you'll need to make up some heuristics to compare all the frequency bands to find a beat that you can be confident in, and to avoid misleading syncopations. If you can manage that, then you'll have a reasonable guess at the tempo, but no idea of the phase (i.e. exactly when to flash the screen).
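For illustration, a minimal autocorrelation sketch over one band's downsampled power envelope; the lag search range and the idea of picking the single strongest lag are simplifications of the heuristics described above.

#include <vector>

// power: the downsampled per-band power envelope described above (a few hundred values per
// second of audio). Returns the lag, in envelope samples, whose autocorrelation is strongest
// within a plausible tempo range; that lag is the estimated beat period for this band.
int strongestLag(const std::vector<float>& power, int minLag, int maxLag) {
    int bestLag = minLag;
    float best = 0.0f;
    for (int lag = minLag; lag <= maxLag; ++lag) {
        float sum = 0.0f;
        for (std::size_t i = 0; i + lag < power.size(); ++i)
            sum += power[i] * power[i + lag];
        if (sum > best) { best = sum; bestLag = lag; }
    }
    return bestLag;
}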
Now you can look at a smoothed version of the audio data for peaks, some of which are likely to correspond to beats. Initially, look for the strongest peak over the course of a few seconds and take that as a downbeat. In conjunction with the tempo you estimated in the first stage, you can predict when the next beat is due, measure where you actually saw something like a beat, and adjust your estimate to more closely match the data. You can also maintain a confidence level based on how well the predicted beats match the measured peaks; if that drops too low, then restart the beat detection from scratch.
There are a lot of fiddly details to this, and it took me some weeks to get it working nicely. It is a difficult problem.
Or for a simple visualisation effect, you could simply detect peaks and flash the screen for each one; it will probably look good enough.
The output of an FFT will give you the frequency spectrum of an audio sample, but extracting the tempo from the FFT output is probably not the way you want to go.
One thing you can do is to use peak detection to identify the volume "spikes" that typically occur on the "down-beats" of the music. If you can identify the down-beats, then you can use a resource like bpmdatabase.com to find the tempo of the song. The tempo will tell you how fast to flash and the peaks you detected will tell you when to start flashing. Have your app monitor your flashes to make sure that they generally occur at the same time as a peak (if the two start to diverge, then the tempo may have changed mid-song).
That may sound straightforward, but this is actually a very non-trivial thing to implement. You might want to read this SO question for more information. There are some quality links in the answers there.
If I'm completely mis-interpreting what you are trying to do and you need to do FFTs for something different, then you might want to look at using one of the existing FFT libraries to do the heavy lifting for you. Some examples are FFTW and KissFFT.
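For example, here is a minimal sketch of computing a magnitude spectrum with FFTW3 (the function name and buffer handling are just illustrative; KissFFT is similar in spirit):

#include <fftw3.h>
#include <cmath>

// Fills magnitudes[0..n/2] with the magnitude spectrum of n real samples.
void magnitudeSpectrum(const double* samples, int n, double* magnitudes) {
    double* in = (double*)fftw_malloc(sizeof(double) * n);
    fftw_complex* out = (fftw_complex*)fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));
    for (int i = 0; i < n; ++i) in[i] = samples[i];

    fftw_plan plan = fftw_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);
    fftw_execute(plan);
    for (int k = 0; k <= n / 2; ++k)
        magnitudes[k] = std::sqrt(out[k][0] * out[k][0] + out[k][1] * out[k][1]);  // bin k is roughly k * sampleRate / n Hz

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
}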
It sounds like maybe you're trying to get your visualizer to flash the screen in time with the music somehow. I don't think calculating the FFT is going to help you here. At any given instant, there will be many simultaneous frequency components, all over the audio spectrum (roughly 20 Hz to 20 kHz). But you're likely to be a lot more interested in the musical tempo (beats per minute -- more like 5 Hz or below), and that's not going to show up anywhere in an FFT of the raw audio signal.
You probably need something much simpler -- some sort of real-time peak detection. Whenever you see a peak greater than some threshold above the average volume, make your screen flash.
Of course, more complicated visualizations might well take advantage of the FFT, but not the one you're describing.
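A minimal sketch of that idea, with made-up smoothing and threshold constants; "level" would be something like the RMS of the most recent block of samples:

// Returns true when the level jumps well above the recent average -- flash the screen then.
bool isPeak(float level) {
    static float average = 0.0f;
    const float smoothing = 0.995f;   // closer to 1 = slower-moving average
    const float threshold = 1.5f;     // how far above average counts as a "peak"
    bool peak = level > average * threshold && level > 0.01f;   // ignore near-silence
    average = smoothing * average + (1.0f - smoothing) * level;
    return peak;
}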
My recommendation would be to find a library that does this for you. Unless you have a lot of mathematics to back you up, I think you will be wasting a ton of your time trying to learn FFTs when all you really want is some sort of "bass hits per minute" number that you can adjust your graphics to accordingly.
Check out this similar post: here
It took me about three weeks to understand the mathematics behind FFTs and then another week to write something in Matlab using those concepts. If you are discouraged after three days, don't try and roll your own.
I hope this is helpful advice and not discouraging.
-Brian J. Stinar-
As previous answers have noted, an FFT is probably not the tool you need in order to solve your problem, which requires tempo detection rather than spectral analysis.
For an example of what can be done using FFT - and of how a particular FFT implementation was integrated into a Qt application, take a look at this blog post which describes the spectrum analyzer demo I developed. Code for the demo is shipped with Qt itself, in the demos/spectrum directory.
I was curious if anyone had any experience/knowledge about aim bots in online FPS games such as Counter-Strike. I am curious and would like to learn more about how the cursor knows how to lock on to an opposing player. Obviously if I wanted to cheat I could go download some cheats so this is more of a learning thing. What all is involved in it? Do they hook the users mouse/keyboard in order to move the cursor to the correct location? How does the cheat application know where exactly to point the cursor? The cheat app must be able to access data within the game application, how is that accomplished?
EDIT: regarding sid's answer, how do people obtain those known memory locations to grab the data from? EDIT2: Let's say I find some values that I want at location 0xbbbbbbbb using a debugger or some other means. How do I then access and use the data stored at that location from my application, since I don't own that memory, the game does? Or do I have access to it once I've injected into the process, so I can just copy the memory at that address using memcpy or something?
Anyone else have anything to add? Trying to learn as much about this as possible!
Somewhere in the game memory is the X,Y, and Z location of each player. The game needs to know this information so it knows where to render the player's model and so forth (although you can limit how much the game client can know by only sending it player information for players in view).
An aimbot can scan known memory locations for this information and read it out, giving it access to two positions: the player's and the enemy's. Subtracting the two positions (as vectors) gives the vector between them, and from there it's simple to calculate the angle from the player's current look vector to the desired aim vector.
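As an illustration of that last step, here is a small sketch with a hypothetical vector type; the exact axis and angle conventions depend on the engine:

#include <cmath>

struct Vec3 { float x, y, z; };

// Angles (in radians) needed to aim from 'me' at 'enemy'.
void aimAngles(const Vec3& me, const Vec3& enemy, float& yaw, float& pitch) {
    float dx = enemy.x - me.x, dy = enemy.y - me.y, dz = enemy.z - me.z;
    yaw   = std::atan2(dy, dx);                            // horizontal angle
    pitch = std::atan2(dz, std::sqrt(dx * dx + dy * dy));  // vertical angle
    // The bot then compares these with the player's current view angles and feeds
    // the difference in as synthetic mouse movement.
}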
By sending input directly to the game (this is trivial) and fine-tuning with some constants you can get it to aim automatically pretty quickly. The hardest part of the process is nailing down where the positions are stored in memory and adjusting for any dynamic data structure moving players around on you (such as frustum culling).
Note that these are harder to write when address randomization is used, although not impossible.
Edit: If you're wondering how a program can access another program's memory, the typical way to do it is through DLL injection.
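An injected DLL shares the game's address space, so it can dereference those addresses directly; a separate process would instead go through the Win32 ReadProcessMemory route. A rough sketch of the latter, reusing the 0xbbbbbbbb example address from the question:

#include <windows.h>

// pid is the game's process id (found with a tool or via EnumProcesses); 0xBBBBBBBB stands in
// for whatever address you located with a memory scanner or debugger.
float readEnemyX(DWORD pid) {
    float value = 0.0f;
    SIZE_T bytesRead = 0;
    HANDLE process = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (process) {
        ReadProcessMemory(process, (LPCVOID)0xBBBBBBBB, &value, sizeof(value), &bytesRead);
        CloseHandle(process);
    }
    return value;   // whatever float the game keeps at that address
}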
Edit: Since this is still getting some hits, there are other ways aimbots work that are more popular now, namely overwriting (or patching in place) the Direct3D or OpenGL DLL, examining the function calls that draw geometry, and then either inserting your own geometry (for things like wall-hacks) or extracting the positions of the models for an aimbot.
Interesting question - not exactly your answer but I remember in the early days of Counter-Strike people used to replace their opengl32.dll with a botched one that would render polygons as transparent so they could see through the walls.
The hacks improved and got more annoying, and people got more creative. Now Valve/Steam seems to do a good job of removing them. Just a bit of warning if you're planning on playing with this stuff: Steam does scan for "hacks", and if any are found, they'll ban you permanently.
A lot of "Aim bots" aren't aim bots at all but trigger bots. They're background processes that wait until your reticule is actually over a target and fire automatically. This can be accomplished in a number of different ways but a lot of games make it easy by displaying the name of someone whenever your target goes over them or some other piece of data in memory that a trigger bot can pin point.
This way, you play by waving the mouse at your target and as soon as you mouse over them it will trigger a shot without your having to actually fire yourself.
They still have to be able to pinpoint that sort of stuff in memory and have the same sort of issues that truer "Aim bots" do.
Another method that has been used in the past is to reverse engineer the network packet formatting. A man-in-the-middle attack on the packet stream (which can be done on the same system the game runs on) can provide player positions and other useful related information. Forged packets can be sent to the server to move the player, shoot, or do all kinds of things depending on the game.
Check out the tutorial series by Fleep here. His fully commented C# source code can be downloaded here.
In a nutshell:
Find your player's x y z coordinates and cursor x y coordinates, as well as all enemies' x y z coordinates. Calculate the distance between you and the nearest enemy. You are then able to calculate the cursor x y coordinates needed to get auto-aim.
Alternatively, you can exclude enemies who are dead (health is 0), in which case you also need to find each enemy's health address. Player-related data is usually close together in memory.
Again, check out the source code to see in detail how all of this works.
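As a rough illustration of the "nearest living enemy" part, assuming you've already read the positions and health values into a hypothetical Entity struct:

#include <cmath>
#include <vector>

struct Entity { float x, y, z; int health; };   // hypothetical layout read out of game memory

// Index of the closest enemy that is still alive, or -1 if there is none.
int nearestLivingEnemy(const Entity& me, const std::vector<Entity>& enemies) {
    int best = -1;
    float bestDist = 1e30f;
    for (std::size_t i = 0; i < enemies.size(); ++i) {
        if (enemies[i].health <= 0) continue;    // skip dead players, as described above
        float dx = enemies[i].x - me.x, dy = enemies[i].y - me.y, dz = enemies[i].z - me.z;
        float d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d < bestDist) { bestDist = d; best = (int)i; }
    }
    return best;
}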
Edit: I know this is off-topic, sorry, but I thought it would help out the asker.
Something the hacking scene hasn't really tried, but which I've been experimenting with, is socket hijacking. It may sound like a lot more than it actually is; basically it uses the WinPcap drivers to hook into the process's Internet connections over TCP (sockets), without ever going near the process's memory offsets.
Then you simply have to learn how the TCP messages are structured and store them in a hash table or a Multiplayer (Packet) class. After retrieving the information, overlay it on the window (nothing is hooked), just transparent labels and 2D boxes drawn over the screen of the windowed game.
I've been testing it on Call of Duty 4 and I have gotten the locations of 4 players via TCP. But, as Ron Warholic has said, none of these methods will work if the game developer wrote the server to only send a player's information when the current user should be able to see that player.
If the server cuts the transmission of a player's X Y Z location, that player is no longer stored or rendered, which stops the wallhack; aimbots will still work in a way, but not efficiently. So anyway, if you are looking into making a wallhack, don't hook into the process; learn WinPcap and hook into the network traffic instead, rather than scanning processes for Internet transmissions. If you want an example that uses this, look up Rust Radar, which shows players' locations on a map, including other players around you, purely from the TCP traffic, without hooking into the game.
I am programming a game using Visual C++ 2008 Express and the Ogre3D sdk.
My core gameplay logic is designed to run at 100 times/second. For simplicity, I'll say it's a method called 'gamelogic()'. It is not time-based, which means if I want to "advance" game time by 1 second, I have to call 'gamelogic()' 100 times. 'gamelogic()' is lightweight in comparison to the game's screen rendering.
Ogre has a "listener" logic that informs your code when it's about to draw a frame and when it has finished drawing a frame. If I just call 'gamelogic()' just before the frame rendering, then the gameplay will be greatly affected by screen rendering speed, which could vary from 5fps to 120 fps.
The easy solution that comes to mind is : calculate the time elapsed since last rendered frame and call 'gamelogic()' this many times before the next frame: 100 * timeElapsedInSeconds
However, I presume that the "right" way to do it is with multithreading: have a separate thread that runs 'gamelogic()' 100 times/sec.
The question is, how do I achieve this, and what can be done when there is a conflict between the two threads: game logic changing scene content (3D object coordinates) while Ogre is rendering the screen at the same time.
Many thanks in advance.
If this is your first game application, using multi-threading to achieve your results might be more work than you should really tackle on your first game. Synchronizing a game loop and a render loop in different threads is not an easy problem to solve.
As you correctly point out, rendering time can greatly affect the "speed" of your game. I would suggest that you do not make your game logic dependent on a set time slice (i.e. 1/100 of a second). Make it dependent on the current frametime (well, the last frametime since you don't know how long your current frame will take to render).
Typically I would write something like below (what I wrote is greatly simplified):
float Frametime = 1.0f / 30.0f;   // seed with a reasonable guess for the very first frame
while (1) {
    game_loop(Frametime);   // manipulate objects, etc., using the previous frame's duration
    render_loop();          // render the frame
    Frametime = calculate_new_frametime();   // measure how long this frame actually took
}
Where Frametime is the calculated time that the current frame took. When you process your game loop you are using the frametime from the previous frame (so set the initial value to something reasonable, like 1/30th or 1/15th of a second). Running on the previous frametime is close enough to get the results you need. Run your game loop using that time frame, then render your stuff. You might have to change the logic in your game loop so it doesn't assume a fixed time interval, but generally those kinds of fixes are pretty easy.
Asynchronous game/render loops may be something that you ultimately need, but that is a tough problem to solve. It involves taking snapshots of objects and their relevant data, putting those snapshots into a buffer, and then passing the buffer to the rendering engine. That memory buffer will have to be correctly partitioned around critical sections to avoid having the game loop write to it while the render loop is reading from it. You'll have to take care to copy all relevant data into the buffer before passing it to the render loop. Additionally, you'll have to write logic to stall either the game or render loop while waiting for the other to complete.
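A rough sketch of that snapshot/double-buffer idea, with a hypothetical RenderItem type; a real engine would avoid the full copies and add the stalling logic mentioned above:

#include <mutex>
#include <vector>

struct RenderItem { /* position, orientation, mesh id, ... */ };

std::vector<RenderItem> buffers[2];   // game thread writes one, render thread reads the other
int readIndex = 0;                    // which buffer the render thread should read
std::mutex swapMutex;

// Game thread: copy everything the renderer needs into the spare buffer, then swap.
void publishSnapshot(const std::vector<RenderItem>& latest) {
    int writeIndex = 1 - readIndex;   // safe to read unlocked: only this thread changes readIndex
    buffers[writeIndex] = latest;     // copy all relevant data before handing it over
    std::lock_guard<std::mutex> lock(swapMutex);
    readIndex = writeIndex;
}

// Render thread: take a private copy of the current snapshot, then draw from that copy.
std::vector<RenderItem> acquireSnapshot() {
    std::lock_guard<std::mutex> lock(swapMutex);
    return buffers[readIndex];
}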
This complexity is why I suggest writing it in a more serial manner first (unless you have experience, which you might). The reason is that doing it the "easy" way first will force you to learn how your code works, how the rendering engine works, what kind of data the rendering engine needs, and so on. Multithreading knowledge is definitely required in complex game development these days, but knowing how to do it well requires in-depth knowledge of how game systems interact with each other.
There's not a whole lot of benefit to your core game logic running faster than the player can respond. About the only time it's really useful is for physics simulations, where running at a fast, fixed time step can make the sim behave more consistently.
Apart from that, just update your game loop once per frame, and pass in a variable time delta instead of relying on the fixed one. The benefit you'll get from doing multithreading is minimal compared to the cost, especially if this is your first game.
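If you do want the fixed 100 Hz logic without a second thread, the usual single-threaded pattern is an accumulator, essentially the "100 * timeElapsedInSeconds" idea from the question; gamelogic(), render_frame(), and seconds_now() are hypothetical stand-ins here.

const double STEP = 1.0 / 100.0;   // gamelogic() runs at a fixed 100 Hz
double accumulator = 0.0;
double previous = seconds_now();   // seconds_now(): any high-resolution timer

while (running) {
    double now = seconds_now();
    accumulator += now - previous;
    previous = now;

    while (accumulator >= STEP) {  // run however many logic ticks this frame "owes"
        gamelogic();
        accumulator -= STEP;
    }
    render_frame();                // Ogre draws the frame
}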
Double-buffering your renderable objects is an approach you could explore: the rendering component uses one buffer, which is updated once all game actions have updated the relevant objects in the second buffer.
But personally I don't like it; I'd employ (and frequently have employed) Mark's approach.