I'm trying to read variables from memory, variables that don't belong to my own program. For instance, let's say I have an Adobe Shockwave (.dcr) application running in a browser and I want to read different variables from it. How is this done? Do I need to hook the process? But it's running under a virtual machine, so I don't know how to do it.
This task is pretty much trivial in normal Win32 applications (it is mainly just CBT hooking / subclassing), but as I mentioned, I've got no idea how it's done with Flash / Shockwave.
I'm using C++ (VS9) as my development environment, in case you wish to know.
Any hints would be highly appreciated, so thank you in advance.
Best regards,
nhaa123
If you're trying to do it manually just for one or two experiments, it's easy.
Try a tool like Cheat Engine, which is a free, quick, and simple process peeker. Basically it scans the process's memory space for a given key value. You can then filter those initial search hits further. You can also change the values you find, live. The link above shows a quick example of using it to find a score or money value in a game and editing it live as the game runs.
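For what it's worth, the core of such a peeker is small. A minimal Win32 sketch, assuming you already know the target's process ID (the PID and the search value below are placeholders):

    #include <windows.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main()
    {
        const DWORD pid = 1234;   // placeholder: the target's process ID
        const int needle = 1000;  // placeholder: the value you are scanning for

        HANDLE proc = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, FALSE, pid);
        if (!proc) return 1;

        MEMORY_BASIC_INFORMATION mbi;
        for (BYTE* addr = NULL;
             VirtualQueryEx(proc, addr, &mbi, sizeof(mbi)) == sizeof(mbi);
             addr = (BYTE*)mbi.BaseAddress + mbi.RegionSize)
        {
            // Only scan committed, readable regions.
            if (mbi.State != MEM_COMMIT || !(mbi.Protect & (PAGE_READONLY | PAGE_READWRITE)))
                continue;

            std::vector<BYTE> buf(mbi.RegionSize);
            SIZE_T got = 0;
            if (!ReadProcessMemory(proc, mbi.BaseAddress, &buf[0], buf.size(), &got))
                continue;

            for (SIZE_T i = 0; i + sizeof(needle) <= got; ++i)
                if (memcmp(&buf[i], &needle, sizeof(needle)) == 0)
                    printf("hit at %p\n", (void*)((BYTE*)mbi.BaseAddress + i));
        }

        CloseHandle(proc);
        return 0;
    }

You then narrow the hits down exactly the way Cheat Engine does: change the value in the game, rescan only the previous hit addresses, and repeat until one address remains.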
Without debug binaries/DLLs of the app, your only chance is asking some hackers.
Normally you can connect to a process with a debugger, but without the debugging symbols of the binaries you don't see any variable names - just memory addresses.
Further, the Flash/Shockwave code runs inside a sandbox inside the browser to prevent security holes caused by manipulated Flash code. So you don't have a real chance to get access to the running Flash code or to the plugin executing it, unless you have a manipulated version of such a plugin.
So your task is quite hard to solve without resorting to less legal methods. The next hard thing is the virtual machine. This could be solved by implementing your app as a client/server solution, where the "inspector"/watchdog runs as a server inside the virtual machine and the client requesting the variable status/content runs on your normal host. The communication could be done over a simple socket connection.
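If you try the client/server route, the watchdog half inside the virtual machine can be a very small Winsock server. A sketch, where the port and the reply format are made up and the actual inspection logic is left out:

    #include <winsock2.h>
    #include <cstdio>
    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET srv = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5555);          // placeholder port
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(srv, (sockaddr*)&addr, sizeof(addr));
        listen(srv, 1);

        for (;;)
        {
            SOCKET cli = accept(srv, NULL, NULL);
            int value = 42;                   // placeholder: whatever the watchdog inspected
            char reply[64];
            int len = sprintf(reply, "value=%d\n", value);
            send(cli, reply, len, 0);
            closesocket(cli);
        }
    }

The host-side client is then just a plain connect/recv loop, so the variable data crosses the VM boundary without any shared-memory tricks.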
If you have the chance to write your own Flash/Shockwave plugin, you might be able to see the contents of variables.
Sorry that I cannot help you any further.
ciao,
3DH
I have a .NET Core web service which seems to slowly increase its CPU usage, meaning on the first day it won't go past 10%, on the second day it can go up to 20%, and so on.
Using the top command in Linux, all my web services seem to show up there from time to time (probably when a request is made) and afterwards disappear.
This specific process, after running for a while, just stays there, constantly consuming CPU even when no request has been made.
The API is still working fine; it seems like there are some threads that just keep hanging and consuming CPU. Last time I checked, I had 5 threads that consumed 3-4% CPU each and didn't die for some reason.
My guess is that in some specific scenario a thread just stays alive consuming cpu.
The app runs on an Ubuntu machine. My first step was trying to create a dump file with ProcDump so I could analyze those threads and maybe find where they are hanging.
ProcDump generates a huge 21 GB file, and trying to analyze it with lldb throws an out-of-memory exception. I even tried transferring it to a Windows machine to debug with WinDbg, but no help there either, as it couldn't open the file.
As there is no specific exception or anything, I can't really share any piece of code, since I have no idea where the issue is... just kind of hoping for some suggestions that might help me get to a solution, or at least understand where the problem is.
Thanks a lot for reading, cheers
You could try using something like JetBrains' dotMemory; they also have a fairly high-level but helpful guide: https://www.jetbrains.com/help/dotmemory/How_to_Find_a_Memory_Leak.html. It's also worth checking your startup file and double-checking that the services you've registered are used in the correct way, i.e. not added as scoped when they should be transient or even singletons, etc.
So I've been at it for a while.
Eventually I found out that my problem was with HttpClient.
Probably some bad mix of a static class and creating new instances of HttpClient caused the issue I've explained above.
Solved it by utilizing HttpClientFactory, as explained here:
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/http-requests?view=aspnetcore-2.1
Lesson learned :)
A little late, but ProcDump for Linux just added .NET Core 3 support that generates much more manageably sized core dumps. It automatically detects if the target process is .NET Core and does the right thing (i.e., no need to specify switches).
Let's take an example here which is known everywhere in the IT world:
We have a game, for example Solitaire, and someone makes and releases a trainer for it so that your move count always stays '0'.
How do I programmatically determine which addresses and values that "hack" changes?
What way would be the best, if this is possible?
From within the game [injecting/loading my own DLL?]
By intercepting traffic between the hack and target process with my own process?
I ask this question because of 2 things:
Protect an application from being "hacked" (at least by the script kiddies)
Reverse engineer a trainer (so you don't have to reinvent the wheel / avoid NIH syndrome)
You can't. Some fragile attempts store the value at two addresses and then compare them (though the attacker will find the other address too). Or they can simply remove your compare call.
They can alter any protection function that you use to "programmatically determine" tampering so that it always returns a false result. They can do anything to your executable, so there is no way.
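Just to make the fragile two-copy idea concrete, a minimal sketch; the XOR mask and the abort-on-mismatch reaction are arbitrary placeholder choices:

    #include <cstdint>
    #include <cstdlib>

    // Store the value twice, the second copy XOR-masked, so a naive memory
    // scan only finds one of them. A determined attacker patches both.
    class GuardedInt
    {
        int32_t value;
        int32_t masked;                      // value ^ MASK, a second copy
        static const int32_t MASK = 0x5A5A5A5A;

    public:
        explicit GuardedInt(int32_t v) : value(v), masked(v ^ MASK) {}

        void set(int32_t v) { value = v; masked = v ^ MASK; }

        int32_t get() const
        {
            if ((masked ^ MASK) != value)    // copies disagree: memory was tampered with
                std::abort();                // placeholder reaction
            return value;
        }
    };

And, as said above, the attacker can just as easily patch get() itself to skip the comparison, which is exactly why this only deters script kiddies.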
Unless you hook the kernel functions that open your process to modify its memory. But that is also breakable, and if I'm not wrong, you need to get your "protection kernel driver" digitally signed now.
There is another way, in which you load a DLL into every running and newly spawned process (which will probably make antiviruses flag your program as a virus). With that DLL you hook the OpenProcess function (and any alternatives to it, if there are any) in each process, check whether the call is targeted at your program, and block it if so. Search for function hooking; I believe Microsoft's "Detours" library was made for exactly this.
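For illustration only, such a hook could look roughly like this with Microsoft Detours inside the injected DLL; the protected PID is a placeholder, and as said above this is not real protection:

    #include <windows.h>
    #include <detours.h>

    static HANDLE (WINAPI *TrueOpenProcess)(DWORD, BOOL, DWORD) = OpenProcess;
    static DWORD g_protectedPid = 1234;   // placeholder: PID of the game to protect

    static HANDLE WINAPI HookedOpenProcess(DWORD access, BOOL inherit, DWORD pid)
    {
        if (pid == g_protectedPid)        // refuse handles to the protected game
        {
            SetLastError(ERROR_ACCESS_DENIED);
            return NULL;
        }
        return TrueOpenProcess(access, inherit, pid);
    }

    BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH)
        {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueOpenProcess, HookedOpenProcess);
            DetourTransactionCommit();
        }
        else if (reason == DLL_PROCESS_DETACH)
        {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach(&(PVOID&)TrueOpenProcess, HookedOpenProcess);
            DetourTransactionCommit();
        }
        return TRUE;
    }

An attacker who unloads the DLL, or calls NtOpenProcess directly, walks straight past it.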
And still, the game will not even be close to safe.
To sum up, no way of protecting your game locally is good. If you are storing scores or something, you should create a server program, and the client should report every move to the server.
Even then, they can create a bot to automatically respond to the server. Then the best you can do is somehow verify that it is a human playing (maybe a captcha, or comparing the solving speed with the human average?).
I'm currently working on trying to resolve a crash/exception on an unmanaged C++ application.
The application crashes with some predictability. The program basically processes a high volume of files combined with running a bunch of queries through the Access DB.
It's definitely occurring during a file access. The error message is:
"failed reading. Network name is no longer available."
It always seems to be crashing in the same lower-level file access code. It's doing a lower-level library Seek(), then a Read(). The exception occurs during the read.
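One experiment on our list: if the library bottoms out in plain Win32 Seek/Read calls, wrap the read in a retry that looks specifically for ERROR_NETNAME_DELETED (64), which is the code behind that message. Purely a hypothetical sketch, since I haven't shown the real library code; the retry count and back-off are placeholders:

    #include <windows.h>

    // Hypothetical retry wrapper around the failing seek+read, assuming the
    // library bottoms out in SetFilePointerEx/ReadFile.
    bool ReadWithRetry(HANDLE file, LONGLONG offset, void* buf, DWORD size, DWORD* read)
    {
        const int kMaxAttempts = 3;          // placeholder retry policy
        for (int attempt = 0; attempt < kMaxAttempts; ++attempt)
        {
            LARGE_INTEGER pos;
            pos.QuadPart = offset;
            if (!SetFilePointerEx(file, pos, NULL, FILE_BEGIN))
                return false;

            if (ReadFile(file, buf, size, read, NULL))
                return true;                 // read succeeded

            if (GetLastError() != ERROR_NETNAME_DELETED)
                return false;                // some other failure: don't retry

            Sleep(250);                      // placeholder back-off before retrying
        }
        return false;
    }

If a simple retry makes the symptom disappear, that would at least point the finger at the storage device going briefly unreachable rather than at our file handling.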
To further complicate things, we can only get the errors to occur when we're running a disk-balancing utility. The utility essentially examines file access history and moves more frequently/recently used files to faster storage for quicker retrieval, while files that are used less frequently are moved to a slower retrieval area. I don't fully understand the architecture of this particular storage device, but essentially it's got an area for "fast" retrieval and one for "archived/slower."
The issues are more easily/predictably reproducible when the utility app is started and stopped several times. According to the disk manufacturer, we should be able to run the utility in the background without affecting the behaviour of the client's main application.
Any suggestions how to proceed here? There are theories floating around here that it's somehow related to latency on the storage device. Is there a way to prove/disprove that? We've written a small sample app that basically goes out and accesses/reads a whole mess of files on the drive. We've (so far) been unable to reproduce the issue, even running with SmartPools. My thought for pushing the latency theory further is to have multiple apps basically reading volumes of files from disk while running the utility application.
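Roughly what I have in mind for that stress test, as a sketch (the path, thread count, and buffer size are placeholders):

    #include <windows.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Hammer one file end to end, forever, and report the specific error.
    void ReadLoop(const wchar_t* path)
    {
        std::vector<char> buf(64 * 1024);
        for (;;)
        {
            HANDLE f = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ,
                                   NULL, OPEN_EXISTING, 0, NULL);
            if (f == INVALID_HANDLE_VALUE) continue;

            DWORD read = 0;
            BOOL ok = TRUE;
            while ((ok = ReadFile(f, &buf[0], (DWORD)buf.size(), &read, NULL)) && read > 0)
                ;                              // keep the storage busy
            if (!ok && GetLastError() == ERROR_NETNAME_DELETED)
                wprintf(L"reproduced on %s\n", path);
            CloseHandle(f);
        }
    }

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i < 8; ++i)            // placeholder thread count
            workers.emplace_back(ReadLoop, L"\\\\server\\share\\testfile.bin"); // placeholder path
        for (std::thread& t : workers) t.join();
    }

Run several instances of this against files spread across the "fast" and "slow" tiers while cycling the balancing utility, and see whether the error surfaces.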
The memory usage and CPU usage do not look out of line in the Task Manager.
Thoughts? This is turning into a bit of a hairball.
Thanks,
JohnB
Grab your debug binaries.
Set up Application Verifier and add your application to its list.
Hopefully wait for a crash dump.
Put that through WinDBG.
Try command: !avrf
See what you get....
Our app is run as SU or a normal user. We have a library linked into our project, and in that library there is a function we want to call. We have a folder called notRestricted in the directory we run the application from. We have created a new thread, and we want to limit that thread's access to the file system. What we want to do is simple: call that function, but limit its writes to that folder only (we would prefer to let it read from anywhere the app can read from).
Update:
So I see that there is no way to cut off just one thread from the whole FS except for one folder...
I read your propositions, dear SO users, and posted a follow-up to this question here; there they gave us a link to a sandbox with a decent API, but I do not really know if it would work on anything but Gentoo (anyway, such a script looks quite interesting combined with Boost.Process, using its command line facilities to run it and then launch the desired ex-thread, which has migrated into a separate application =)).
There isn't really any way you can restrict a single thread, because it's in the same process space as you, except for hacking methods like function hooking to intercept any kind of file system access.
Perhaps you might like to rethink how you're implementing your application: having native untrusted code run as SU isn't exactly a good idea. Perhaps use another process and communicate via RPC, or use an interpreted language that you can check against at run time.
In my opinion, the best strategy would be:
Don't run this code in a different thread, but run it in a different process.
When you create this process (after the fork but before any call to execve), use chroot to change the root of the filesystem.
This will give you some good isolation... However, doing so will make your code require root... Don't run the child process as root, since root can trivially work around this.
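A minimal sketch of that strategy, assuming the notRestricted folder from the question and a placeholder unprivileged UID; note that chroot confines reads as well as writes, which is stricter than what was asked for:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    // Run fn() in a forked child whose filesystem root is ./notRestricted.
    int run_sandboxed(void (*fn)())
    {
        pid_t pid = fork();
        if (pid < 0) return -1;

        if (pid == 0)                                      // child
        {
            if (chroot("./notRestricted") != 0) _exit(1);  // jail the filesystem view
            if (chdir("/") != 0) _exit(1);                 // step inside the new root
            if (setuid(1000) != 0) _exit(1);               // placeholder: drop root
            fn();                                          // the untrusted function
            _exit(0);
        }

        int status = 0;
        waitpid(pid, &status, 0);                          // parent waits for the worker
        return status;
    }

The answer's point about execve holds here too: do the chroot and privilege drop before handing control to anything untrusted, and never leave the child running as root.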
Inject a replacement for open(2) that checks the arguments and returns -EACCES as appropriate.
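At user level that means returning -1 with errno set to EACCES. A hedged LD_PRELOAD sketch of the idea; the path check is deliberately naive, and the __THROW decoration is there to match glibc's C++ declaration of open():

    #define _GNU_SOURCE 1
    #include <cstdarg>
    #include <cstring>
    #include <dlfcn.h>
    #include <errno.h>
    #include <fcntl.h>

    // Interpose open(2): refuse writes outside notRestricted, pass everything
    // else to the real implementation.
    // Build with: g++ -shared -fPIC interpose.cpp -ldl -o interpose.so
    extern "C" int open(const char* path, int flags, ...) __THROW
    {
        if ((flags & (O_WRONLY | O_RDWR)) && strstr(path, "/notRestricted/") == NULL)
        {
            errno = EACCES;       // user-level equivalent of the kernel's -EACCES
            return -1;
        }

        static int (*real_open)(const char*, int, ...) =
            (int (*)(const char*, int, ...))dlsym(RTLD_NEXT, "open");

        if (flags & O_CREAT)      // O_CREAT means a mode argument was passed
        {
            va_list ap;
            va_start(ap, flags);
            mode_t mode = va_arg(ap, mode_t);
            va_end(ap);
            return real_open(path, flags, mode);
        }
        return real_open(path, flags);
    }

Run the target as LD_PRELOAD=./interpose.so ./yourapp. Note that a determined caller can still bypass this via openat(2), creat(2), or raw syscalls, so it is a guard rail, not a sandbox.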
This doesn't sound like the right thing to do. If you think about it, what you are trying to prevent is a problem well known to the computer games industry. The most common approach to deal with this problem is simply encoding or encrypting the data you don't want others to have access to, in such a way that only you know how to read/understand it.
I am trying to make a program that records a whole bunch of things periodically.
The specific reason is that if it bluescreens, a developer can go back and check a lot of the environment and see what was going on around that time.
My problem: is there a way to cause a bluescreen?
Maybe with a Windows API call (ZeroMemory, maybe?).
Anywhoo, if you can think of a way to cause a bluescreen on demand, I would be thankful.
The computer I am testing this on is designed to take stuff like this haha.
By the way, the language I am using is C/C++.
Thank you
You can configure a machine to crash on a keystroke (Ctrl-ScrollLock)
http://support.microsoft.com/kb/244139
Since it appears that there are times when that won't work on some systems with USB keyboards, you can also get the Debugging Tools for Windows, install the kernel debugger, and use the ".crash" command to force a bugcheck.
http://www.microsoft.com/whdc/devtools/debugging/default.mspx
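If you would rather flip that switch from code instead of regedit, the KB boils down to one registry value. A sketch for PS/2 keyboards; USB keyboards use the kbdhid service key instead, and a reboot is required either way:

    #include <windows.h>

    // Enable the CrashOnCtrlScroll switch from KB244139 (PS/2 keyboards).
    // Needs admin rights; after a reboot, right-Ctrl + ScrollLock twice
    // bug-checks the machine.
    bool EnableCrashOnCtrlScroll()
    {
        HKEY key;
        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE,
                L"SYSTEM\\CurrentControlSet\\Services\\i8042prt\\Parameters",
                0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL) != ERROR_SUCCESS)
            return false;

        DWORD one = 1;
        LONG rc = RegSetValueExW(key, L"CrashOnCtrlScroll", 0, REG_DWORD,
                                 (const BYTE*)&one, sizeof(one));
        RegCloseKey(key);
        return rc == ERROR_SUCCESS;
    }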
In order to cause a BSOD, a driver running in kernel mode needs to cause it. If you really want to do this, you can write a driver which exposes KeBugCheck to usermode.
http://msdn.microsoft.com/en-us/library/ms801640.aspx
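A deliberately minimal sketch of the kernel side: this driver bug-checks the moment it is loaded, so starting the service blue-screens the box. A real KeBugCheck-to-usermode bridge would create a device object and trigger the call from an IOCTL handler instead.

    #include <ntddk.h>

    // 0xE2 is MANUALLY_INITIATED_CRASH, the same code the debugger's
    // .crash command uses.
    extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT, PUNICODE_STRING)
    {
        KeBugCheck(0xE2);        // bug-check immediately at driver load
        return STATUS_SUCCESS;   // never reached
    }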
Thanks to Andrew below for pointing this utility out:
http://download.sysinternals.com/files/NotMyFault.zip
If you kill the csrss process you'll get a blue-screen rather quickly.
If you want to simulate a hard crash such as a bluescreen, you'd pretty much have to yank the power cord. NOT recommended.
In case of a crash, anything not saved to persistent storage will be lost. If you want to simulate a crash for purposes of logging, write a "kill switch" into your logger, which stops the logging. Now you can simulate a crash by killing the logging and making sure you have the data you would have wanted in case of an actual crash.
First of all, I would advise you to use a Virtual Machine to test this BSOD on. This will allow you to keep a backup just in case the BSOD does some damage to the system. Here's a tip on how to generate a BSOD simply by pressing CTRL+SCROLLLOCK+SCROLLLOCK.
Is there a Windows API to generate one? No, according to this article. Still, if you call certain APIs with invalid data, they could cause a crash inside the kernel, which would result in your BSOD.
I'm not sure exactly what you'd be testing. Since your program runs periodically, surely it's enough to check that the information is being dumped at the frequency that you specify while the system is running? Are you checking that the information stays around after the blue screen? Depending on how you are dumping it (and whether you are flushing buffers), this may not be necessary.
If you don't want to write code (driver, IOCTL...), you can use DiskCryptor. Note that no disk encryption is needed.
Just need to install the driver:
dcinst.exe -setup
And then generate a bsod using the DC console:
dccon.exe -bsod
Run the process as critical and exit: http://waleedassar.blogspot.co.uk/2012/03/rtlsetprocessiscritical.html
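A hedged sketch of that trick: RtlSetProcessIsCritical is an undocumented ntdll export, so the signature below is taken from write-ups like the one linked, and SeDebugPrivilege must be enabled first:

    #include <windows.h>
    #pragma comment(lib, "advapi32.lib")

    // Signature as commonly reported for this undocumented ntdll export.
    typedef LONG (NTAPI *RtlSetProcessIsCritical_t)(BOOLEAN bNew, PBOOLEAN pbOld, BOOLEAN bNeedScb);

    int main()
    {
        // The call requires SeDebugPrivilege, so enable it on our own token.
        HANDLE token;
        if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token))
            return 1;
        TOKEN_PRIVILEGES tp = {0};
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        LookupPrivilegeValueW(NULL, L"SeDebugPrivilege", &tp.Privileges[0].Luid);
        AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL);

        // Mark ourselves critical; terminating a critical process bug-checks.
        RtlSetProcessIsCritical_t setCritical = (RtlSetProcessIsCritical_t)
            GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "RtlSetProcessIsCritical");
        if (!setCritical || setCritical(TRUE, NULL, FALSE) != 0)
            return 1;

        return 0;   // process exit now blue-screens the machine
    }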