Inject sleep() into a function of an external process - c++

I know how to inject a DLL into a running process and also how to utilize functions used internally by the process e.g.
void __stdcall remoteMethod(unsigned short id)
{
    // Cast the known address inside the target process to a callable
    // function pointer and invoke it.
    typedef void (__stdcall *pFunctionAddress)(unsigned short);
    pFunctionAddress pMyFunction = (pFunctionAddress)(0xCAFEBABE);
    pMyFunction(id);
}
Now I want to add a sleep() into an existing method in the running process - it is the main loop of the program, which never stops for a second and uses up all the processing power.
I know that with frameworks like Detours I could make a trampoline function which calls my function and then the original one - however, my problem is that the while(1) loop is somewhere within a function of the external process. I know the offset where the loop starts, and from there I would like to first call sleep() and then continue with the normal route of the loop.
The only alternative I have seen so far is binary-editing the program, but this is not a good solution.
Any suggestions? Thanks

I think you are trying to be too cute here. Just call SuspendThread/ResumeThread alternately on a timer. I know it's ugly, but you aren't going to enter your solution in any beauty pageant I suspect.
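A minimal sketch of that approach, run from a thread inside the injected DLL. The target thread ID is assumed to be known already (e.g. found via a toolhelp snapshot, which is omitted here), and the 50% duty cycle with 10 ms slices is an arbitrary choice:

#include <windows.h>

// Alternately suspends and resumes the busy thread so that it only gets
// roughly half of a core. The handle needs THREAD_SUSPEND_RESUME access.
DWORD WINAPI ThrottleLoop(LPVOID param)
{
    DWORD busyThreadId = (DWORD)(ULONG_PTR)param; // assumed to be known
    HANDLE hThread = OpenThread(THREAD_SUSPEND_RESUME, FALSE, busyThreadId);
    if (!hThread)
        return 1;

    for (;;)
    {
        SuspendThread(hThread); // pause the spinning loop
        Sleep(10);              // give the rest of the system some air
        ResumeThread(hThread);  // let the loop run again
        Sleep(10);
    }
}

Be careful not to suspend the thread while it holds a lock that your own code needs, or you can deadlock yourself.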

Post the name of the spin-waiting program.
Wait for SO-ers to send hate mail to the developer.
Install the update the developer sends you as a bribe to stop the hate mail.

In principle, as long as your code has been executed once within the address space of the other process, and you know that the loop isn't currently executing, you could enable writing to the text pages and patch the actual loop code in situ. You'll need a few redundant bytes to write a call to your function over (extending the function would require a lot of rewriting, as all relative offsets would break).
This is not, however, terribly easy nor terribly robust. Consider why you want to do this, and whether you can achieve the goal another way.
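To illustrate, a sketch of such an in-situ patch for 32-bit x86. It assumes there are at least 5 clobberable bytes of whole instructions at the patch site, and that the hook function compensates for whatever was overwritten:

#include <windows.h>
#include <cstring>

// Overwrites the first bytes at `target` with a relative CALL to `hook`
// and pads the remainder with NOPs. x86-32 only; len must be >= 5.
bool PatchCall(BYTE* target, void* hook, SIZE_T len)
{
    DWORD oldProtect;
    if (!VirtualProtect(target, len, PAGE_EXECUTE_READWRITE, &oldProtect))
        return false;

    target[0] = 0xE8; // CALL rel32, relative to the end of the instruction
    *(DWORD*)(target + 1) = (DWORD)((BYTE*)hook - (target + 5));
    memset(target + 5, 0x90, len - 5); // NOP out the leftover bytes

    VirtualProtect(target, len, oldProtect, &oldProtect);
    FlushInstructionCache(GetCurrentProcess(), target, len);
    return true;
}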

Related

Force foreground processing in WindowsAPI

I have an executable program that performs latency measurements. C++ pseudo-code below:
int main() {
    lock_priority();
    start_measurements();
    work();
    end_measurements();
}
The work() creates multiple threads and takes a long time to complete, so ideally I'd like to minimize the console window while the process is running, just to save screen space. This, however, degrades the output latency by around 50% compared to when the window is not minimized.
I'd like to implement the lock_priority() function so that even when minimized, the process does not go into PROCESS_MODE_BACKGROUND_BEGIN mode.
What I've tried so far
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); - did not work
Created a thread that every few seconds calls the function above - it did work, but, scientifically speaking, "it looks ugly" (a sketch of this workaround follows below this list)
I have tried to find a way to attach a callback to the SetPriorityClass() function so that after it finishes, if the priority class was anything but REALTIME_PRIORITY_CLASS, it would be re-set again (or at least PROCESS_MODE_BACKGROUND_END would be set). This sounds like a perfect solution, but I could not find anything about this in the docs.
I discovered there is a way to set the processor to prefer foreground/background tasks (reference) - however, even if this could be configured through code, I would still need a way to bind this theoretical function to the priority change.
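For what it's worth, the watchdog thread from the second bullet could look roughly like this (a sketch; the 2-second interval is arbitrary, and PROCESS_MODE_BACKGROUND_END simply fails harmlessly when the process is not in background mode):

#include <windows.h>

DWORD WINAPI PriorityWatchdog(LPVOID)
{
    for (;;)
    {
        // Leave background mode if minimizing put us there, then
        // re-assert the priority class we actually want.
        SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
        SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
        Sleep(2000);
    }
}

void lock_priority()
{
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    CreateThread(NULL, 0, PriorityWatchdog, NULL, 0, NULL);
}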
Any help would be very appreciated!
How about redirecting the program output from the console to a file, or just buffering it, like here:
Redirect both cout and stdout to a string in C++ for Unit Testing
This way, you don't have any console latency at all - if this is alright for your testing.
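A minimal sketch of that redirection (the file name is made up):

#include <fstream>
#include <iostream>

int main()
{
    // Send everything written to std::cout into a file instead of the
    // console, so console rendering cannot affect the measurements.
    std::ofstream out("measurements.log");
    std::streambuf* oldBuf = std::cout.rdbuf(out.rdbuf());

    std::cout << "latency sample\n"; // goes to measurements.log

    std::cout.rdbuf(oldBuf); // restore before std::cout is used again
}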

How would I update a variable continuously and also wait for input at the same time?

In my little project, I've decided to create a game that updates a counter of the user's experience points every second, while also printing a menu and allowing the user to navigate it simultaneously. The code to update the user's experience is as follows, and it works perfectly fine standalone.
double timerX = GetTickCount();
double timerY = GetTickCount();
while (true)
{
    double timerZ = GetTickCount() - timerX;
    double timerA = GetTickCount() - timerY;
    if (timerZ >= 1000) {
        userExperience = userExperience + 1;
        timerX = GetTickCount();
    }
    if (timerA >= 1100) {
        system("CLS");
        refreshExperience();
        timerY = GetTickCount();
    }
}
The function 'refreshExperience()' simply prints the 'userExperience' variable onto the screen using 'cout'.
At the same time, my program should be able to display the main menu GUI and ask for input from the user. However, I do not want asking for input to halt the program, especially the experience updater, as it is paramount that it is updated constantly. I have attempted multithreading by creating a thread for the 'refreshExperience' function and another thread for asking for input, but the problem remained - the counter would only update if the user was continually inputting (pressing keys). If he was not, it would stay the same.
Any help would be very much appreciated.
Getting input from the user with no discernible break in program execution is only possible in GUI programming. When working in the console, every request for user input will block for the obvious reason that the program has to wait to actually have the necessary data before proceeding.
This is also why you should initialize variables when you declare them; if you don't, stack-allocated variables will contain random (to you) data and the program will not function as intended. Conceptually, this is the same problem the console has, except it doesn't have the luxury of free will and can't simply choose to skip the wait.
Conceptually, programs that have a user interface work by operating with a loop. Every event that occurs, from a mouse movement to a button click, triggers an event in the Window procedure. In the Win32 API, it's just a switch statement that checks for each possible event against what actually happened. When there's a match, the system triggers that event handler.
It should be noted that it only seems like there is no lag, because usually graphical window procedures are fast enough to seem to respond instantaneously. In reality, any action on the window triggers a calculation by the computer to determine what part of the window was blocked and must be redrawn, as it is now called "invalid."
Lastly, I would highly recommend a different method for the scoreboard update. I know it's just a contrived example for you to experiment with, but that makes it just as good, if not better, for trying out some design patterns, namely the observer pattern. Having the program check for input on every possible clock cycle is just a waste. When you have a situation like this, it's common to use callback functions, which in C are just function pointers that you pass along. That way you don't have to keep checking to find out when the event is triggered; you can just have the event invoke the function that you passed in as a parameter. This is how Node.js works, by the way, and how it seems to do so much at once despite being single-threaded.
If you've heard anything about reactive programming lately - it's been getting talked about just about everywhere in the C# community these past few months - this is what that's about, and the reason I bring it up is that this is one of the more common, though trivial, textbook reactive programming scenarios.
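That said, the multithreaded attempt the asker describes can be made to work; the key is that the updater thread must not depend on the input loop at all. A minimal sketch with std::thread and std::atomic (the menu is reduced to a single blocking read, and all names are made up):

#include <atomic>
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

std::atomic<int> userExperience{0};

// Background updater: bumps the counter once per second, regardless of
// whether the main thread is blocked waiting for input.
void experienceLoop()
{
    for (;;)
    {
        std::this_thread::sleep_for(std::chrono::seconds(1));
        ++userExperience;
    }
}

int main()
{
    std::thread updater(experienceLoop);
    updater.detach(); // runs for the life of the program

    for (;;)
    {
        std::string choice;
        std::cin >> choice; // blocks, but the updater keeps ticking
        std::cout << "XP: " << userExperience << '\n';
    }
}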

Basic Protection of a Game Client

I currently have a multiplayer game, and players are starting to use memory editing to cancel the attack animation, making the attack packets come in faster, i.e. making attacks a lot faster than normal.
Yes, a better design would be ideal, but that could take a while. I want a temporary fix that can be done quickly.
The ideas:
Check the time difference since the last attack packet and ignore everything that comes in too fast. (for the server)
Use EnumWindows to check window classes and stop the game if a known memory editor is detected. EnumWindows would be executed each time an attack is made. (for the client; see the sketch after this list)
Use ReadProcessMemory to read running processes and find signatures of known memory editors.
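A sketch of the EnumWindows idea; the window class name checked here is hypothetical and would have to be replaced with the actual classes of known memory editors:

#include <windows.h>
#include <cstring>

// Called by EnumWindows for every top-level window; compares the window
// class against a known memory-editor class.
BOOL CALLBACK FindEditor(HWND hwnd, LPARAM found)
{
    char cls[256];
    if (GetClassNameA(hwnd, cls, sizeof(cls)) &&
        strcmp(cls, "MemoryEditorMainWnd") == 0) // hypothetical class name
    {
        *(bool*)found = true;
        return FALSE; // stop enumerating
    }
    return TRUE; // keep looking
}

bool MemoryEditorRunning()
{
    bool found = false;
    EnumWindows(FindEditor, (LPARAM)&found);
    return found;
}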
Well the question really is if any of the following could work and how it would be done:
Detour ReadProcessMemory or OpenProcess and exit when called? (Though I think this won't work, because these functions get called by the memory editor, not my game.)
ReadProcessMemory on myself (the game) and check the addresses that are being changed. If the values are not within the normal range, exit.
Any suggestions?
I know that it is futile to do this, because cheaters who know what they're doing can still get around all of it. But my game only has about 600 active players, and I believe they are just script kiddies. I think these simple countermeasures should be enough for a small game like mine. But of course, the design will be corrected.
Detour ReadProcessMemory or OpenProcess and exit when called?
These are not being called in your process so hooking them locally wouldn't do anything. You would need to hook every running process, which is not recommended.
ReadProcessMemory on myself (the game) and check the addresses that are being changed. Check if the values are not within the normal range, then exit.
You don't need ReadProcessMemory; you're inside your own process. Just check the value normally.
You should calculate these values on the server if you don't want the clients to be able to manipulate them, then just replicate this info to the clients and overwrite their local copies.
You can also add an anti-debug library to your client to prevent the majority of people from manipulating your process. Here is a decent one

Creating an update method with custom rate processing in c++

If you've ever used XNA Game Studio 4, you are familiar with the Update method. By default, the code within it is processed 60 times per second. I have been struggling to recreate such an effect in C++.
I would like to create a method where the code is only processed x times per second. Every way I've tried, it processes everything all at once, as loops do. I've tried for loops, while loops, and goto, and everything processes all at once.
If anyone could tell me how, and whether, I can achieve such a thing in C++, it would be much appreciated.
With your current level of knowledge this is as specific as I can get:
You can't do what you want with loops, fors, ifs and gotos, because we are no longer in the MS-DOS era.
You also can't have code running at precisely 60 frames per second.
On Windows, a typical application runs within something called an "event loop".
Typically, from within the event loop, most GUI frameworks call the "onIdle" event, which happens when an application is doing nothing.
You call update from within the onIdle event.
Your onIdle() function will look like this:
void onIdle() {
    currentFrameTime = getCurrentFrameTime();
    if ((currentFrameTime - lastFrameTime) < minUpdateDelay) {
        // Sleep for an amount much smaller than minUpdateDelay,
        // using Sleep() or similar; this reduces CPU load.
        sleepForSmallAmountOfTime();
        return;
    }
    update(currentFrameTime - lastFrameTime);
    lastFrameTime = currentFrameTime;
}
You will need to write your own update() function; it should take the amount of time passed since the last frame. You will also need to write a getFrameTime() function using GetTickCount, QueryPerformanceCounter, or some similar API.
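For example, a getFrameTime() built on QueryPerformanceCounter might look like this (a sketch; it returns milliseconds elapsed since the first call):

#include <windows.h>

double getFrameTime()
{
    static LARGE_INTEGER freq, start;
    static bool initialized = false;
    if (!initialized)
    {
        QueryPerformanceFrequency(&freq); // ticks per second
        QueryPerformanceCounter(&start);
        initialized = true;
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (now.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
}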
Alternatively you could use system timers, but that is a bad idea compared to the onIdle() event if your app runs too slowly.
In short, there's a long road ahead of you.
You need to learn some (preferably cross-platform) GUI framework, learn how to create a window and the concept of an event loop (you can't do anything without one today), then write your own update(), and get a basic idea of multithreaded programming and system events.
Good luck.
As you are familiar with XNA, I assume you are also familiar with Input and Draw. What you could do is assign independent threads to these three functions and have a timer to see if it's time to run a thread.
E.g. the input would probably trigger draw, and both draw and input would trigger the update method.
Another way to handle this is with message events. If you're using Windows, look into the Windows message loop. This will make your input, update and draw logic easier by executing on events triggered by the OS.
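For reference, the skeleton of that message loop (window-class registration and window creation are omitted):

#include <windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // ... register a window class and call CreateWindow here ...
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg); // e.g. turn key presses into WM_CHAR
        DispatchMessage(&msg);  // route the event to the window procedure
    }
    return (int)msg.wParam;
}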

Setting timeout for embedded Lua

I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take a practically infinite time to complete).
Basically, I want to be able to set a time interval and if the script fails to complete running at the end of that time interval, I want to be able to kill the Lua script engine (gracefully, if possible).
Does anyone know of a best-practice way to do this?
One way to control the amount of time a script takes is to set a count hook and then raise an error in the hook. But this does not work if the script can call C functions that may take a long time.
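For reference, a sketch of that count-hook approach against the Lua 5.x C API; the instruction count of 1000 and the use of wall-clock time are arbitrary choices:

#include <ctime>
extern "C" {
#include <lua.h>
#include <lauxlib.h>
}

static time_t deadline;

// Invoked every 1000 VM instructions; aborts the script once the
// deadline has passed by raising a Lua error.
static void timeoutHook(lua_State *L, lua_Debug *)
{
    if (time(NULL) > deadline)
        luaL_error(L, "script timed out");
}

// Runs a script with a timeout. Returns the load/pcall status code.
int runWithTimeout(lua_State *L, const char *script, int seconds)
{
    deadline = time(NULL) + seconds;
    lua_sethook(L, timeoutHook, LUA_MASKCOUNT, 1000);
    int status = luaL_loadstring(L, script);
    if (status == 0)
        status = lua_pcall(L, 0, 0, 0);
    lua_sethook(L, NULL, 0, 0); // remove the hook again
    return status;
}

As the answer notes, this only interrupts Lua bytecode; a C function that blocks for a long time will never hit the hook.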