I was curious if anyone had any experience/knowledge about aimbots in online FPS games such as Counter-Strike. I'd like to learn more about how the cursor knows to lock on to an opposing player. Obviously if I wanted to cheat I could just go download some cheats, so this is more of a learning thing. What all is involved in it? Do they hook the user's mouse/keyboard in order to move the cursor to the correct location? How does the cheat application know where exactly to point the cursor? The cheat app must be able to access data within the game application; how is that accomplished?
EDIT: regarding sid's answer, how do people obtain those known memory locations to grab the data from? EDIT2: Let's say I find some values that I want at location 0xbbbbbbbb using a debugger or some other means. How do I now access and use the data stored at that location within the application, since I don't own that memory, the game does? Or do I now have access to it because I have injected into the process, and can just copy the memory at that address using memcpy or something?
Anyone else have anything to add? Trying to learn as much about this as possible!
Somewhere in the game memory is the X,Y, and Z location of each player. The game needs to know this information so it knows where to render the player's model and so forth (although you can limit how much the game client can know by only sending it player information for players in view).
An aimbot can scan known memory locations for this information and read it out, giving it access to two positions: the player's and the enemy's. Subtracting the two positions (as vectors) gives the vector between them, and from there it's simple to calculate the angle from the player's current look vector to the desired aim vector.
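The vector math described above can be sketched as follows. Note the axis conventions (Z up, yaw measured from the X axis) are an assumption for illustration; real games differ.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Given our position and an enemy position, compute the yaw and pitch
// (in degrees) the view would need in order to point straight at the enemy.
void aimAngles(const Vec3& self, const Vec3& enemy, float& yaw, float& pitch) {
    float dx = enemy.x - self.x;
    float dy = enemy.y - self.y;
    float dz = enemy.z - self.z;
    float dist2d = std::sqrt(dx * dx + dy * dy);     // horizontal distance
    yaw   = std::atan2(dy, dx) * 180.0f / 3.14159265f;
    pitch = std::atan2(dz, dist2d) * 180.0f / 3.14159265f;
}
```

An aimbot would compute these angles each frame, subtract the player's current view angles, and feed the difference in as mouse input.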
By sending input directly to the game (this is trivial) and fine-tuning with some constants, you can get it to aim automatically pretty quickly. The hardest part of the process is nailing down where the positions are stored in memory and adjusting for any dynamic data structures that move players around on you (such as frustum culling).
Note that these are harder to write when address randomization is used, although not impossible.
Edit: If you're wondering how a program can access other programs memory, the typical way to do it is through DLL injection.
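To make the injection point concrete: once your DLL is running inside the game's process, the game's memory is your address space, so a plain pointer cast or memcpy against a known address just works. The sketch below stands in for "the game" with a local struct; in practice you'd find the address with a debugger or a tool like Cheat Engine, and the struct layout here is made up for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical layout of a player record in game memory.
struct PlayerState { float x, y, z; int health; };

// From inside an injected DLL, reading game memory is just a memcpy
// from the known address.
PlayerState readPlayer(std::uintptr_t address) {
    PlayerState out;
    std::memcpy(&out, reinterpret_cast<const void*>(address), sizeof(out));
    return out;
}
```

From outside the process (no injection) you would instead use the OS facilities for cross-process reads, e.g. OpenProcess plus ReadProcessMemory on Windows.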
Edit: Since this is still getting some hits: there are other ways aimbots work that are more popular now, namely overwriting (or patching in place) the Direct3D or OpenGL DLL, examining the function calls that draw geometry, and either inserting your own geometry (for things like wall-hacks) or pulling the positions of the models for an aimbot.
Interesting question - not exactly your answer but I remember in the early days of Counter-Strike people used to replace their opengl32.dll with a botched one that would render polygons as transparent so they could see through the walls.
The hacks improved and got more annoying, and people got more creative. Now Valve/Steam seems to do a good job of removing them. Just a bit of warning if you're planning on playing with this stuff: Steam does scan for hacks, and if any are found they'll ban you permanently.
A lot of "aimbots" aren't aimbots at all but trigger bots. They're background processes that wait until your reticle is actually over a target and fire automatically. This can be accomplished in a number of different ways, but a lot of games make it easy by displaying the name of a player whenever your crosshair is over them, or setting some other piece of data in memory that a trigger bot can pinpoint.
This way, you play by waving the mouse at your target and as soon as you mouse over them it will trigger a shot without your having to actually fire yourself.
They still have to be able to pinpoint that sort of thing in memory, and they have the same sort of issues that truer "aimbots" do.
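The decision logic of such a trigger bot is tiny; everything hard lives in finding the memory the game fills in. A sketch, with field names made up for illustration:

```cpp
#include <cassert>
#include <string>

// What the game "tells" us when the crosshair passes over a player:
// many games maintain exactly this kind of data for their own HUD.
struct CrosshairInfo {
    std::string targetName;   // empty when not over a player
    int targetHealth;
    bool targetIsTeammate;
};

// The whole trigger bot: poll this state and decide whether to fire.
bool shouldFire(const CrosshairInfo& info) {
    return !info.targetName.empty()
        && info.targetHealth > 0
        && !info.targetIsTeammate;
}
```

A real bot would poll this in a loop and synthesize a mouse click when shouldFire returns true.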
Another method that has been used in the past is to reverse engineer the network packet formatting. A man-in-the-middle attack on the packet stream (which can be done on the same system the game runs on) can provide player positions and other useful related information. Forged packets can be sent to the server to move the player, shoot, or do all kinds of things depending on the game.
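Reverse engineering the packet format boils down to figuring out field offsets and decoding them from raw bytes. A sketch with an entirely hypothetical wire format (real games use their own, often compressed or encrypted, layouts):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical position-update packet for illustration only:
// [uint8 playerId][float x][float y][float z], little-endian floats.
struct PositionUpdate { std::uint8_t playerId; float x, y, z; };

PositionUpdate parsePositionUpdate(const std::vector<std::uint8_t>& pkt) {
    PositionUpdate u{};
    u.playerId = pkt[0];
    std::memcpy(&u.x, &pkt[1], 4);   // memcpy avoids alignment issues
    std::memcpy(&u.y, &pkt[5], 4);
    std::memcpy(&u.z, &pkt[9], 4);
    return u;
}
```

A man-in-the-middle tool would run every captured payload through parsers like this and build up a live picture of the game state.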
Check out the tutorial series by Fleep here. His fully commented C# source code can be downloaded here.
In a nutshell:
Find your player's x, y, z coordinates, the cursor's x, y coordinates, as well as all enemies' x, y, z coordinates. Calculate the distance between you and the nearest enemy. You can then calculate the x, y cursor coordinates needed to get auto-aim.
Alternatively, you can exclude enemies who are dead (health is 0), in which case you also need to find each enemy's health address. Player-related data is usually close together in memory.
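The target-selection step described above (nearest enemy, skipping the dead ones) is a simple loop; this is a re-sketch of the idea, not Fleep's actual code:

```cpp
#include <cassert>
#include <vector>

struct Enemy { float x, y, z; int health; };

// Pick the index of the nearest enemy that is still alive; -1 if none.
int nearestLivingEnemy(float px, float py, float pz,
                       const std::vector<Enemy>& enemies) {
    int best = -1;
    float bestDist = 0.0f;
    for (int i = 0; i < (int)enemies.size(); ++i) {
        if (enemies[i].health <= 0) continue;          // skip dead enemies
        float dx = enemies[i].x - px;
        float dy = enemies[i].y - py;
        float dz = enemies[i].z - pz;
        float d = dx * dx + dy * dy + dz * dz;         // squared distance suffices
        if (best == -1 || d < bestDist) { best = i; bestDist = d; }
    }
    return best;
}
```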
Again, check out the source code to see in detail how all of this works.
Edit: I know this is off-topic, sorry, but I thought this would help out the asker.
One thing the hacking industry hasn't really tried out, but which I've been experimenting with, is socket hijacking. It may sound like a lot more than it actually is: basically it uses the WinPCap drivers to hook into the process's Internet connections via TCP (sockets), without even going near the process's offsets.
Then you simply have to learn how the TCP messages are laid out and store them in a hash table or a Multiplayer (Packet) class. After retrieving the information, you overlay it on the window (nothing hooked): just transparent labels and 2D boxes drawn over the screen of the windowed game.
I've been testing it on Call of Duty 4 and have gotten the locations of 4 players via TCP. But, as Ron Warholic said, all of these hacking methods stop working if the game developer writes the server to send player data only when the current user should actually be able to see that player.
Once the server cuts transmission of a player's X, Y, Z, that player is no longer stored client-side and is not rendered, which defeats the wallhack; aimbots will still work after a fashion, but not efficiently. So if you are looking into making a wallhack, don't hook into the process: learn WinPCap and hook into the network traffic instead. Don't search for game processes, listen for their Internet transmissions. If you want an example that utilizes this, look up Rust Radar, which outputs the player's location on a map, along with the other players around you, purely from the TCP transmissions, without hooking into the game.
Related
For the quick and concise: I want to find out how to work with .ogg files in Visual C++, to get them into a format the PC can play, with tools to control how they're played and the ability to play several at once, all limited to a 2D scale. Preferably with no or very little cost. Also, I've heard about streaming audio, but I'm not so sure what the details on that are. The rest is an explanation of my thought process on the whole thing, which is likely flawed in other areas this shorthand doesn't convey, but getting the shorthand down will make that apparent eventually.
So I'm working on a game which will be wholly in 2D space. For purposes of self-improvement I want to try to build the engine myself and work things out the best I can. For audio, I'm thinking ahead about how to program it while I finish the visual side of things. I have an idea of what I want to be able to do with the sound, but I'm inexperienced and probably have a flawed sense of the basic process. So I'll list the functions I'll need and my current mindset on how to approach them, and after that I'll clarify my issues with .ogg. The functions I know I'll need at current are:
Ability to play audio files
Ability to change volume of audio files mid playback
Ability to play them starting at specific times (this, along with the above, to allow me to fade a background song out and play a different version of the same background music from where the last one dropped off)
Ability to play them in a loop
Ability to play multiple audio files at the same time (sound effects with background music.)
Ability to pan sounds to one ear or the other when using headphones and such.
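Most of the items above (volume changes, fades, panning) come down to multiplying samples by a gain before mixing. As one concrete piece, here's a sketch of equal-power panning, which keeps perceived loudness roughly constant as a sound moves between ears:

```cpp
#include <cassert>
#include <cmath>

// Equal-power pan: pan = -1 is full left, 0 is center, +1 is full right.
// Each output sample is then (sample * left) and (sample * right).
void panGains(float pan, float& left, float& right) {
    const float angle = (pan + 1.0f) * 0.25f * 3.14159265f;  // maps to 0..pi/2
    left  = std::cos(angle);
    right = std::sin(angle);
}
```

At center, both gains are about 0.707 rather than 0.5, which is what keeps the total power constant across the pan range.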
Now for my mindset of how to approach it:
When loading, decode the audio files into objects I can reference later on that same screen, scene, or event, until I no longer need them and can delete them.
Have events which call audio effects to play them or change things. E.g.: player gets hit, sound plays, then ends.
I think I'll need buffers or channels to play them: one channel for background music, another for a secondary background track that fades in as I fade out and remove the current one; that frees up the first channel for the next crossfade. Then a few more for sound effects, say 10 that stay empty but let me play a sound in one, delete it, and later play some other effect in that same channel. If I need to play multiple sound effects at once, I can play up to 10 at the same time. However, I'm not sure whether I'll be able to clear the channels freely, or whether each channel has its own sound tied to it rather than me being able to pass any decoded sound into whichever channel is free.
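For what it's worth, the channel-pool idea above is how most mixers work: channels are interchangeable slots, and any decoded sound can be assigned to any free slot. A minimal sketch of the bookkeeping (the actual sample mixing is omitted; names are mine, not from any particular library):

```cpp
#include <cassert>
#include <vector>

// Fixed set of channels; each is either free (-1) or holds the id of a
// decoded sound. A real mixer would also sum samples per channel.
class ChannelPool {
public:
    explicit ChannelPool(int n) : sound_(n, -1) {}

    // Start a sound on the first free channel; returns the channel
    // index used, or -1 if every channel is busy.
    int play(int soundId) {
        for (int i = 0; i < (int)sound_.size(); ++i)
            if (sound_[i] == -1) { sound_[i] = soundId; return i; }
        return -1;
    }
    void stop(int channel) { sound_[channel] = -1; }
    int  playing(int channel) const { return sound_[channel]; }

private:
    std::vector<int> sound_;
};
```

So the answer to the worry in the paragraph above is: channels need not be tied to a particular sound; you pass whatever decoded sound you like into whichever channel is free.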
Then every frame progresses the sound being played. I'm not exactly sure if I need this or will do it. I intend to work online play into my game using rollback netcode, and from my experience sounds don't freeze or progress based on connection; they just run on. If the game freezes waiting for the opponent's connection, the background music plays like normal, and if it freezes on a hit, the hit sound should just play out once and end, so desync there should be fine. But I am kind of curious how it works and whether there is a good reason for doing it per-frame. I also heard a term dropped, streaming; I'm trying to understand how to go about it but I've struggled to get a good grasp on it.
That's about it for how I think I should go about it. If I had to come up with some sort of pseudocode, it would probably be:
LoadScene(..., file backgroundsounda, file backgroundsoundb, file soundeffecthit){
.
.
.
Backsadeco = new decodedsound(backgroundsounda)
.
.
.
LoadScene.setBackSoundA(backsadeco)
.
.
.
LoadScene.makeBuffers(2,10)
//makeBuffers from a Buffer class: it makes two background buffers and 10 effect buffers that the scene can reference later. The buffers use a switch statement to decide how to play the sound based on a number given later, e.g. 1 plays the sound once and then empties the buffer. The Buffer class also manages which effect or background buffer to use by checking which ones aren't currently playing sound.
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
class collision
strikeHit(...,scene){
.
.
.
scene.useEffectBuffer(scene.getSoundEffectHit(),1)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(The pseudocode above for handling the scene stuff could also be flawed, because I haven't gotten to making the scene system yet, with load screens and such. First I want to get what I need done and working in one scene, then work on handling the transitions. Though if you have advice on that, I'm all ears too.)
Aside from any potential flaws in my plan, once I can play sound I'm not sure how or what to do about working with .ogg, which I planned to use since it's a smaller file size, and since I plan to have 3 versions of every song, each possibly upwards of 2 minutes, that seemed wise. I've heard of OpenAL, but I'm confused about the license situation, and since I hope to sell the game one day I'm trying to reduce costs and avoid fees where I can. Secondly, I hear it has quite a lot of 3D features, and since all my actions are intended for 2D, and at most I'll pan sounds, I'm not sure if it's worth it or if I can manage something more streamlined and focused on my needs. Realistically, all I should need is a way to get the audio into a format the PC can play and then tools to control how it's played, plus an understanding of how to keep the PC playing it while the rest of the game updates logically and visually at 60 fps.
Edit based on first comment: I intend to keep it usable by multiple systems but limited to Windows, focusing on Windows 10 currently. I am working in Visual Studio, and so far I have just used what's available there for my needs, so I think the only framework involved is .NET. As for video playing, I currently don't have a need for that, since I plan to work mostly with 2D images and animation and can't find a purpose for it. As for licenses, I'm not well versed on that topic, but I'm trying to stay free for commercial use if possible.
I have built an autonomous RC car that is capable of tracking its location and scanning its environment for obstacles. I am stuck however on how to store/create a map of its environment. The environment would not be fixed in size, as the robot explores it may uncover new areas. My background is on navigation, so that part and determining where obstacles are is the easy part (for me). The hard part is how to store/access the data.
Originally I was going to use a 2D array/matrix with a resolution of 2-5 cm, i.e. each cell is 2-5 cm wide when projected onto the ground. This, however, could eat up a lot of memory (it is embedded, so I have limited resources: 512 MB of RAM right now, which may decrease when the prototype is finished). As well, I will eventually be moving the system to a quadcopter, so I will be working in 3D.
The next solution was to create an obstacle object with a position. However, I was unsure how to: 1. keep track of the size and orientation of the object, i.e. how to tell if the object is a wall or just a table leg (can't avoid a wall, can go around a leg), and 2. efficiently tell whether an obstacle is in the robot's way.
I am using C++, with a camera and an ultrasonic sensor integrated to detect the environment. As mentioned, I'm stuck on how to store the environment efficiently. I would like to keep the environment information, whether saved to a file or sent to a server, so I can later use/access the map. Would anyone have any suggestions on what to try, or where to even begin looking? This area is all new to me.
You can use an R-tree where your initial place is the root of the tree. Each time you can go somewhere, you add the place you went to your tree.
You could also merge neighbouring parts together where possible.
From a memory point of view, you are not bound by array/matrix limits, and you store only valuable data. It is up to you to choose what to store: obstacles or free space.
Searching in that kind of structure is also efficient.
As for implementations, you could take a look at the R-tree in the Boost library (boost::geometry::index::rtree).
Later on, you could even create a graph based on your tree to be used by path-finding algorithms if you expect your robot to go from one point to another.
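A lighter-weight alternative in the same "store only what you've seen" spirit is a sparse occupancy grid: a hash map keyed by cell coordinates, so memory grows with the explored area rather than with the bounding box. A sketch, assuming 5 cm cells (the cell size and class names are mine):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <unordered_map>

// Sparse occupancy grid: only observed cells take memory, so an
// unbounded, growing environment is fine. Cells are 5 cm squares.
class SparseGrid {
public:
    void mark(float xMeters, float yMeters, bool occupied) {
        cells_[key(xMeters, yMeters)] = occupied;
    }
    // True only if the cell has been observed AND is occupied;
    // unknown space is treated as free here (a design choice).
    bool blocked(float xMeters, float yMeters) const {
        auto it = cells_.find(key(xMeters, yMeters));
        return it != cells_.end() && it->second;
    }
private:
    static std::int64_t key(float x, float y) {
        // Pack the two cell indices into one 64-bit map key.
        std::int64_t cx = (std::int64_t)std::floor(x / 0.05f);
        std::int64_t cy = (std::int64_t)std::floor(y / 0.05f);
        return (cx << 32) ^ (cy & 0xffffffff);
    }
    std::unordered_map<std::int64_t, bool> cells_;
};
```

For the 3D quadcopter case, the same idea generalizes to a third index in the key, or to an octree (see e.g. OctoMap) if you also want multi-resolution queries.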
I am thinking about starting a new project with the purpose of mapping and navigating a maze with a cluster of robots. The number of robots I was thinking about are 2 or 3.
The following assumptions are made :
The robots are fitted with a camera each to help detect the walls of the maze
The size and shape of the maze is unknown and can be changed according to will
The way the robots should work is that they should communicate and efficiently divide the task of mapping and navigation among themselves.
I am studying Electrical Engineering and have no previous experience with maze planning/solving with robotics. I would like to know as to how to begin with this; and more importantly the resources I should be looking at. Any suggestions of books, websites, forums are welcome.
The microcontroller I am planning to use is Arduino Uno. I am familiar with it and it has very good support online. Thus it seems to be a good choice. Also, I will have around 2 months to finish the project. Is that amount of time enough to accomplish the aforementioned things?
A single robot in a maze is called a Braitenberg vehicle. A group of such robots is a multi-robot formation, which implies that the agents must coordinate their behavior. In the literature such games are called "signaling games", because a sender has private access to an event and must share this information with the group. For example, robot1 has detected a wall and sends the status update to another robot.
In its easiest form, signaling games are modeled with a lexicon, that is, a list of possible messages between the robots. For example: 0=detectwall, 1=walkahead, 2=stop. As a reaction to a received signal, a robot can adapt its behavior and change its map of the maze. Sometimes this idea is called a distributed map-building algorithm, because the information is only partially available to each robot.
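The lexicon idea can be sketched directly in code: messages are just small integers with an agreed meaning, and each robot folds received messages into its own partial knowledge of the maze. The message values follow the example above; the struct is mine, for illustration.

```cpp
#include <cassert>
#include <vector>

// The shared lexicon: the only messages robots can exchange.
enum Message { DETECT_WALL = 0, WALK_AHEAD = 1, STOP = 2 };

struct Robot {
    std::vector<Message> log;      // everything this robot has heard
    bool knowsWallAhead = false;   // partial map, built from signals

    void receive(Message m) {
        log.push_back(m);
        if (m == DETECT_WALL) knowsWallAhead = true;  // fold into local map
    }
};
```

Over Arduino-class hardware the messages would travel over serial, radio, or IR, but the coordination logic is the same.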
A basic example is the case where two robots are driving toward each other (the Chicken Game). They must communicate about their evasive strategy to prevent a collision; if both robots decide to swerve in the same direction, they will collide anyway. The problem is that neither knows what the other robot is planning.
I want to make a 2D game that is basically a physics driven sandbox / activity game. There is something I really do not understand though. From research, it seems like updates from the server should only be about every 100ms. I can see how this works for a player since they can just concurrently simulate physics and do lag compensation through interpolation.
What I do not understand is how this works for updates from other players. If clients only get notified of player positions every 100ms, I do not see how that works because a lot can happen in 100ms. The player could have changed direction twice or so in that time. I was wondering if anyone would have some insight on this issue.
Basically how does this work for shooting and stuff like that?
Thanks
Typically, what will happen is that the player views the game 100 ms behind how it actually is. That, and you have to accept that some game designs just require faster server updates. Typically, games that need faster updates use a client/server model where the server computes all the physics, not the client. You've seen the traditional RTS model: peer-to-peer, slow updates. But the traditional model for FPS games is client/server with fast updates, for this exact reason.
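"Viewing the game 100 ms behind" is usually implemented as entity interpolation: other players are rendered slightly in the past, blending between the two most recent snapshots, so motion looks smooth even though updates only arrive every 100 ms. A minimal 2D sketch:

```cpp
#include <cassert>
#include <cmath>

// One received position snapshot for a remote player.
struct Snapshot { double time; float x, y; };

// Blend between two known snapshots. renderTime is typically
// (now - interpolationDelay), i.e. about one update interval in the past.
void interpolate(const Snapshot& a, const Snapshot& b,
                 double renderTime, float& x, float& y) {
    double t = (renderTime - a.time) / (b.time - a.time);
    if (t < 0.0) t = 0.0;   // clamp: don't extrapolate backwards
    if (t > 1.0) t = 1.0;   // clamp: don't extrapolate past the newest data
    x = a.x + (float)t * (b.x - a.x);
    y = a.y + (float)t * (b.y - a.y);
}
```

This is also why hit detection for shooting is commonly done on the server with lag compensation: the server rewinds players to where the shooter saw them at fire time before testing the shot.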
There is a forum entry and a whole multiplayer/network FAQ on GameDev on this topic that might be a good first read.
You have to design your game to take this into account. For example, if you're shooting a target that could move out of the way suddenly, just give the shells proximity fuses and make lots of noise and pretty colours for a while, hiding the fact that neither you nor your player actually knows yet whether they hit the target.
In fact this is pretty close to real physics: it takes time for the consequences of actions to be observed.
I just finished essential part of my own personal 2D engine in C++ and I'm kinda deciding how to complete the part where it is actually supposed to display everything on the screen, namely when do I call that function which does the job.
I don't have much idea of how the graphics card works; my biggest experience is calling BIOS graphics services to write some stuff on the screen. Could you give me a hint on this please? Or maybe some keywords I should try to google?
Look up "render loop".
In a game, you will do it in a loop. You can also look up "game loop", which is a related concept if you're working on a game.
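The usual shape of such a loop is a fixed simulation timestep with one render per iteration. A sketch; time is fed in explicitly here so the example is deterministic, whereas a real loop would read a clock each frame and call your update/render/present functions where the comments indicate:

```cpp
#include <cassert>

// Classic fixed-timestep game loop: simulate in fixed DT steps,
// render once per frame, carry leftover time in an accumulator.
struct Loop {
    static constexpr double DT = 1.0 / 60.0;  // fixed simulation step
    double accumulator = 0.0;
    int updates = 0, frames = 0;

    void frame(double elapsedSeconds) {
        accumulator += elapsedSeconds;
        while (accumulator >= DT) {   // catch the simulation up
            ++updates;                // update(DT) would go here
            accumulator -= DT;
        }
        ++frames;                     // render() + present/flip would go here
    }
};
```

The payoff of this structure is that physics stays deterministic at 60 updates per second even when rendering runs faster or slower.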
Are you rendering to a back buffer and then trying to display that? Common terminology includes "flip" (as in page flipping) or "present". If you're copying from a back buffer to the screen, it might also be a "blit" (or blt), from "bit-block transfer".