I currently have a multiplayer game where players are starting to use memory editing to cancel the attack animation, making attack packets come in faster, i.e. making attacks much faster than normal.
Yes, a better design would be ideal, but that could take a while. I wanted a temporary fix that can be done quickly.
The ideas:
Check the time difference since the last attack packet and ignore anything that comes in too fast. (for the server; a rough sketch follows this list)
Use EnumWindows to check window classes and stop the game if a known memory editor is detected. EnumWindows would be executed each time an attack is made. (for the client)
Use ReadProcessMemory to read running processes and look for signatures of known memory editors.
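A minimal sketch of that server-side check, with illustrative names and a made-up minimum interval:

    #include <chrono>
    #include <cstdint>
    #include <unordered_map>

    // Hypothetical per-player state; field names are illustrative.
    struct PlayerState {
        std::chrono::steady_clock::time_point lastAttack{};
    };

    // Minimum time the server allows between two attack packets. Should match
    // (or slightly undercut) the real attack animation length.
    constexpr auto kMinAttackInterval = std::chrono::milliseconds(500);

    std::unordered_map<uint32_t, PlayerState> g_players;

    // Returns true if the attack packet should be processed, false if dropped.
    bool AcceptAttackPacket(uint32_t playerId) {
        auto now = std::chrono::steady_clock::now();
        PlayerState& p = g_players[playerId];
        if (now - p.lastAttack < kMinAttackInterval) {
            return false;   // too fast: ignore (or log/flag the player)
        }
        p.lastAttack = now;
        return true;
    }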
Well, the question really is whether any of the following could work and how it would be done:
Detour ReadProcessMemory or OpenProcess and exit when they are called? (Though I suspect this won't work, because these functions get called by the memory editor, not by my game.)
Use ReadProcessMemory on my own game process and check the addresses they are changing; if the values are outside the normal range, exit.
Any suggestions?
I know it is futile to do this, because cheaters who know what they're doing can still get around all of it. But my game only has about 600 active players, and I believe they are mostly script kiddies. I think these simple countermeasures should be enough for a small game like mine. But of course, the design will be corrected.
Detour ReadProcessMemory or OpenProcess and exit when they are called?
These are not being called in your process so hooking them locally wouldn't do anything. You would need to hook every running process, which is not recommended.
Use ReadProcessMemory on my own game process and check the addresses they are changing; if the values are outside the normal range, exit.
You don't need ReadProcessMemory; you're inside your own process. Just check the value normally.
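A minimal sketch of such an in-process sanity check; the variable name and the legal range below are purely illustrative:

    // Illustrative only: in a real game this is whatever variable your code
    // already uses for attack timing; the legal bounds are made up here.
    float g_attackCooldownMs = 600.0f;

    // True if the value has drifted outside anything the game itself can produce.
    bool CooldownLooksTampered() {
        return g_attackCooldownMs < 400.0f || g_attackCooldownMs > 2000.0f;
    }

If it returns true, exit, as you suggested.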
You should calculate these values on the server if you don't want the clients to be able to manipulate them, then just replicate that info to the clients and overwrite whatever they hold locally.
You can also add an anti-debug library to your client to prevent the majority of people from manipulating your process. Here is a decent one.
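If you want something immediate before wiring in a full library, the most basic documented Win32 checks look like this (a sketch only; anyone determined bypasses these easily):

    #include <windows.h>

    // Returns true if a debugger appears to be attached to this process.
    // These are the simplest documented checks; they only deter casual tampering.
    bool DebuggerLikelyAttached() {
        if (IsDebuggerPresent()) {
            return true;
        }
        BOOL remote = FALSE;
        if (CheckRemoteDebuggerPresent(GetCurrentProcess(), &remote) && remote) {
            return true;
        }
        return false;
    }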
Related
I have a large C++ function on Linux that calls a lot of other functions, making up an algorithm. At various points, given certain bad inputs, the algorithm can get "stuck" and run forever. Adding a timeout seems appropriate, since not all potential "stuck" points can be predicted. But despite scouring the Internet for timeout examples, I've only found how to apply timeouts when the thing you're timing is either a separate thread or is reading input. My code is a single thread and does not touch file descriptors, so I'm not having any luck. Do I basically have no choice but to thread it?
I am not sure about the situation; server applications and embedded applications often run for years in the background without stopping. One option is to let your program run in the background and log to a file (or the screen) periodically, and, if you really do want to stop the program after a certain time, use the timeout command or a script to kill it after that time, e.g. timeout 15s your-prog.
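If the timeout really has to live inside the single-threaded process itself rather than in an external timeout command, one classic (if blunt) Linux approach is alarm() plus sigsetjmp/siglongjmp; a rough sketch, reasonable only if the interrupted algorithm holds no locks or resources that would need unwinding:

    #include <setjmp.h>
    #include <signal.h>
    #include <unistd.h>

    static sigjmp_buf g_timeoutJump;

    static void OnAlarm(int) {
        // Jump back to the sigsetjmp point. Only do this if the interrupted
        // code holds no locks or resources that would be left dangling.
        siglongjmp(g_timeoutJump, 1);
    }

    // Runs algorithm(); returns false if it did not finish within `seconds`.
    bool RunWithTimeout(unsigned seconds, void (*algorithm)()) {
        signal(SIGALRM, OnAlarm);
        if (sigsetjmp(g_timeoutJump, 1) != 0) {
            return false;         // we landed here via SIGALRM: timed out
        }
        alarm(seconds);           // arm a one-shot timer
        algorithm();              // the long-running, possibly "stuck" call
        alarm(0);                 // finished in time: disarm the timer
        return true;
    }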
Solved: For when simple profiling isn't effective enough, I have written a tool to show me where performance hits occur. Basic information about how the tool works is in the accepted answer below. The source can be found here: http://pastebin.com/ETiW8hE8 (be sure to turn debugging symbols on in the program you're testing)
I've built a game engine in C++, and I have noticed that in one particular area of a level there is a brief performance hit. The game stops completely for about half a second and then continues on merrily. I've tried to profile this, but it's difficult to isolate the condition, since I also have to load the map and perform the in-game task that causes the hit. I can make a map load automatically, skip showing menus, etc., and compare those profile results against a set of similar control data (all the same steps but without actually triggering the performance hit), but that doesn't show anything obvious.
I'm using gprof (gmon.out) to profile.
This is a large application with many, many classes and functions. The performance hit only happens once, so there's no way to just trigger the problem many times during one execution to saturate my profiling results in order to make the offending functions more obvious in the profiling results.
What else can I do?
What I would do is try to grab a stack sample in that half second when it's frozen.
This would require an alarm clock timer set to go off some small time in the future, like 100ms.
Then in some loop, like the frame display loop, that normally takes less than 100ms to repeat, keep resetting the timer.
That way, it will act as a watchdog that barks if you don't keep petting it.
Then, stick a breakpoint in the timer interrupt handler.
When it gets there, you know you're in the bad slice of time.
Then just display the call stack, and it should show you what the problem is.
You might have to repeat the process a few times.
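A rough sketch of that watchdog on Linux, assuming the frame loop is the thing being petted; the 100 ms value and function names are illustrative, and the breakpoint goes in OnWatchdog:

    #include <signal.h>
    #include <sys/time.h>

    // Fires only if the current frame has taken longer than the watchdog interval.
    static void OnWatchdog(int) {
        // Put a breakpoint here (e.g. "break OnWatchdog" in gdb). When it hits,
        // the backtrace shows exactly where the slow frame is stuck.
    }

    // Re-arm a one-shot 100 ms timer; call this once per frame.
    static void PetWatchdog() {
        itimerval t{};
        t.it_value.tv_usec = 100 * 1000;
        setitimer(ITIMER_REAL, &t, nullptr);
    }

    void FrameLoop() {
        signal(SIGALRM, OnWatchdog);
        for (;;) {
            PetWatchdog();   // if this frame exceeds 100 ms, SIGALRM fires
            // ... update and render one frame ...
        }
    }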
You are not saying anything about whether your application is threaded, but I will assume that it is not.
As per Mike's suggestion, get insight by grabbing a stack trace while it is frozen and seeing where it is stuck. With a bit of luck you can do that using pstack:
while usleep 100000; do
pstack processid
done >/tmp/stack.log
That should give you some output to go on -- my guess is that you are blocking on an IO operation, like reading some assets from disk.
I've used FileSystemWatcher in the past. However, I am hoping someone can explain how it actually works behind the scenes.
I plan to utilize it in an application I am making and it would monitor about 5 drives and maybe 300,000 files.
Does FileSystemWatcher actually do "checking" on the drive - as in, will it cause wear and tear on the drive? Also, does it impact the hard drive's ability to "sleep"?
This is where I do not understand how it works - whether it scans the drives on a timer, etc., or whether it waits for some type of notification from the OS before it does anything.
I just do not want to implement something that is going to cause extra reads on a drive and keep the drive from sleeping.
Nothing like that. The file system driver simply matches the normal file operations requested by other programs on the machine against the filters you've selected. If there's a match, it adds an entry recording the operation and the filename to an internal buffer, which completes the driver request and causes an event to run in your program. You get the details of the operation passed to you from that buffer.
So nothing extra happens to the operations themselves; there is no additional disk activity at all. It is all just software that runs, and the overhead is minimal - nothing slows down noticeably.
The short answer is no. The FileSystemWatcher calls the ReadDirectoryChangesW API passing it an asynchronous flag. Basically, Windows will store data in an allocated buffer when changes to a directory occur. This function returns the data in that buffer and the FileSystemWatcher converts it into nice notifications for you.
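For the curious, a bare-bones synchronous call to that API from C++ looks roughly like this (FileSystemWatcher itself uses the asynchronous, overlapped form; the path below is illustrative):

    #include <windows.h>
    #include <iostream>
    #include <string>

    int main() {
        // Open the directory with the access and flag ReadDirectoryChangesW requires.
        HANDLE dir = CreateFileW(L"C:\\watched", FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 nullptr, OPEN_EXISTING,
                                 FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (dir == INVALID_HANDLE_VALUE) return 1;

        alignas(DWORD) BYTE buffer[64 * 1024];
        DWORD bytes = 0;

        // Blocks until the OS reports a change: no polling, no periodic disk reads.
        while (ReadDirectoryChangesW(dir, buffer, sizeof(buffer), TRUE,
                                     FILE_NOTIFY_CHANGE_FILE_NAME |
                                     FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytes, nullptr, nullptr)) {
            auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer);
            for (;;) {
                std::wcout << std::wstring(info->FileName,
                                           info->FileNameLength / sizeof(WCHAR))
                           << L"\n";
                if (info->NextEntryOffset == 0) break;
                info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                    reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
            }
        }
        CloseHandle(dir);
    }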
I am currently writing a game in C++ for Windows. The server counterpart creates two additional threads at the very start. One of them handles receiving new data, and the other handles movement calculation of the objects in the game. What I managed to find out is that the last thread function (called TickFunc) is the one that slows everything down. My music freezes, I can't open new tabs in my browser, everything is slow and freezes. Even if I comment everything within the TickFunc out (leaving an empty while loop that executes forever), it still freezes, but if I do not create that thread at all, it's fine. It seems as though it slows the system down regardless of the intensity of calculation performed within the TickFunc. I would really appreciate any hints concerning what may be causing this. Thank you.
Put a small delay inside your infinite loop. An empty while loop busy-waits and pins a CPU core at 100%, which is what starves everything else on the machine.
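Something along these lines, assuming the loop body is otherwise empty or very cheap:

    #include <chrono>
    #include <thread>

    void TickFunc() {
        while (true) {
            // ... movement calculations go here ...

            // Yield the CPU instead of busy-spinning; even 1 ms stops the
            // thread from pegging an entire core.
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }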
I have two applications running on my machine. One is supposed to hand in the work, and the other is supposed to do the work. How can I make sure that the first application/process is in a wait state? I can check the resources it's consuming, but that does not guarantee anything. What tools should I use?
Your two applications should communicate. There are a lot of ways to do that:
Send messages through sockets. This way the 2 processes can run on different machines if you use normal network sockets instead of local ones.
If you are using C you can use semaphores with semget/semop/semctl. There should be interfaces for that in other languages.
Named pipes block until there is both a read and a write operation in progress. You can use that for synchronisation (see the sketch after this list).
Signals are also good for this. In C you would send them with kill() and handle them with sigaction().
DBUS can also be used and has bindings for various languages.
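As one concrete illustration of the named-pipe option on Linux (the path and the roles are made up for the example), the waiting side simply blocks until the other side writes:

    // Waiting side: blocks until the worker signals it is done.
    // Create the pipe once beforehand with:  mkfifo /tmp/work_ready
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("/tmp/work_ready", O_RDONLY);  // blocks until a writer opens the pipe
        if (fd < 0) { std::perror("open"); return 1; }

        char token;
        if (read(fd, &token, 1) == 1) {              // blocks until the worker writes a byte
            std::puts("worker signalled: continue");
        }
        close(fd);
        return 0;
    }

The worker (or a shell script wrapped around it) just performs the matching write, e.g. echo x > /tmp/work_ready, when it wants the waiting side to proceed.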
Update: if you can't modify the processing application, it is harder. You have to rely on some signs that indicate progress. (I am assuming your processing application reads a file, does some processing, then writes the result to an output file.) Do you know the final size the result should be? If so, you can check the size repeatedly (or whenever it changes).
If you don't know the size but you know how the processing works, you may be able to use that. For example, the processing is done when the output file is closed. You can use strace to see all the system calls, including the close. You can also interpose your own close() function with the LD_PRELOAD environment variable (on Windows you would have to replace DLLs instead). This way you can, in effect, modify the processing program without recompiling it or even having access to its source.
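A minimal sketch of that LD_PRELOAD trick on Linux; the file name, the program name processing-app and the check you would perform are all illustrative:

    // close_watch.cpp -- a hypothetical interposer; build as a shared object:
    //   g++ -shared -fPIC -o close_watch.so close_watch.cpp -ldl
    // and run the (unmodified) processing program with:
    //   LD_PRELOAD=./close_watch.so ./processing-app
    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE 1        // for RTLD_NEXT
    #endif
    #include <dlfcn.h>
    #include <cstdio>

    extern "C" int close(int fd) {
        // Look up the real close() the first time we are called.
        using close_fn = int (*)(int);
        static close_fn real_close =
            reinterpret_cast<close_fn>(dlsym(RTLD_NEXT, "close"));

        // Here you could check whether fd is the output file (for example via
        // readlink on /proc/self/fd/<fd>) and signal the waiting process.
        std::fprintf(stderr, "close(%d) called\n", fd);

        return real_close(fd);
    }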
You can use named pipes: the first app reads from the pipe, and since the pipe is empty it blocks and keeps waiting. The second app writes into it when it wants the first one to continue.
Nothing can guarantee that your application is in a waiting state. You have to pass it some work and get back a response. It might be transactional or not: the application can confirm that it got the message before it starts to process it, or after it has processed it (successfully or not). If it is not waiting, passing it a piece of work should fail, whether with an error when writing to a TCP/IP socket (or other transport) or with a timeout. The details depend on the implementation, the kind of transport you are using, and other requirements.
There is actually a way of figuring out whether a process (thread) is in a blocked state waiting for data on a socket (or other source), but that requires the client to be on the same computer and to have the access privileges needed to do it, and it makes little sense outside of debugging, which you can do with any debugger anyway.
Overall, the idea of making sure that the application is waiting for data before trying to pass it that data smells bad. Not to mention the race condition: what if you checked and it was OK, but when you actually tried to send the data you found that the application was not waiting at that moment (even if the gap is only microseconds)?