Detect And Remove Rootkit [closed]

Closed. This question is off-topic. It is not currently accepting answers. Closed 11 years ago.
What is the best (hopefully free or cheap) way to detect and then, if necessary, remove a rootkit found on your machine?

Sysinternals stopped updating RootkitRevealer a couple of years ago.
The only sure way to detect a rootkit is to do an offline comparison of the installed files and filesystem metadata against a trusted list of known files and their parameters. Obviously, you need to trust the machine you run the comparison from.
For most people, booting a virus scanner from a CD-ROM does the trick.
Otherwise, you can start with a fresh install of whatever OS you use, boot it from CD-ROM, attach an external drive, and run a Perl script that finds every file and gathers its parameters (size, MD5, SHA-1), then store those parameters.
To check a machine later, run the same kind of script to gather the parameters again and compare them to the stored ones (a sketch follows below).
You'd also need a script to update the stored parameters after a system update.
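The answer above describes Perl scripts; as an illustration only, here is a minimal C++17 sketch of the same baseline-gathering idea. It assumes OpenSSL's EVP API is available for hashing (SHA-256 standing in for the MD5/SHA-1 above), and the mount point /mnt/system is a placeholder for wherever the offline system is mounted.

#include <filesystem>
#include <fstream>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <openssl/evp.h>

namespace fs = std::filesystem;

// Hash one file with SHA-256 via OpenSSL's EVP interface.
static std::string sha256_of(const fs::path& p) {
    std::ifstream in(p, std::ios::binary);
    EVP_MD_CTX* ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);
    char buf[8192];
    while (in.read(buf, sizeof buf) || in.gcount() > 0)
        EVP_DigestUpdate(ctx, buf, static_cast<size_t>(in.gcount()));
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, md, &len);
    EVP_MD_CTX_free(ctx);
    std::ostringstream hex;
    for (unsigned int i = 0; i < len; ++i)
        hex << std::hex << std::setw(2) << std::setfill('0') << int(md[i]);
    return hex.str();
}

int main() {
    // Walk the offline system mounted under a placeholder path and emit
    // "path size hash" records; redirect stdout to the trusted external drive.
    for (const auto& e : fs::recursive_directory_iterator(
             "/mnt/system", fs::directory_options::skip_permission_denied)) {
        if (!e.is_regular_file()) continue;
        std::cout << e.path() << ' ' << e.file_size()
                  << ' ' << sha256_of(e.path()) << '\n';
    }
}

Checking later is the same walk plus a comparison of the two record files; any file whose size or hash changed outside a known update is suspect.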
--Edit--
Updating this to reflect available techniques. If you get a copy of any bootable rescue CD (such as Trinity or RescueCD) with an up-to-date copy of the program "chntpasswd", you'll be able to browse and edit the Windows registry offline.
Coupled with a copy of the startup list from castlecops.com, you should be able to track down the most common run points for the most common rootkits. And always keep track of your driver files and which versions are known-good, too.
With that level of control, your biggest problem will be the mess of spaghetti your registry is left in after you delete the rootkit and trojans. Usually.
-- Edit --
And there are Windows tools too, but I described the tools I'm familiar with, which are free and better documented.

RootkitRevealer from Sysinternals

Remember that you can never trust a compromised machine. You may think you have found all signs of a rootkit, but the attacker may have created backdoors in other places: non-standard backdoors that the tools you use won't detect. As a rule, you should reinstall a compromised machine from scratch.

Related

Is it possible to save a variable internally and read/write to it after program shuts down? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 5 years ago.
I've been looking for some time now without any luck.
Basically I was wondering if it's possible to save a variable and read it back from inside the executable after the program exits.
I know that one can use fstream to store the variable in an external file, but I'm looking for a way to store it internally, i.e. in the .exe itself.
string l;
cin >> l;
// function to save it internally.....
Thanks in advance o.o
Here are a few hints on why it's not a good idea.
It is no better than using another file.
You cannot just write a big block of memory as "all my data" into a file, so you will have to serialize and deserialize properly. Once you have that code available, it is actually more work to mess with a complex file format (be it ELF or PE) than to write to a plain data file (see the sketch after this list).
It is actually worse:
Bugs in writing the data could make your program unworkable.
Multiple users cannot each have their own data.
Your executable file is normally not writable.
On Unix-based systems, binary files are typically installed to system directories and a normal user simply cannot change them.
Even if running as root, it's not uncommon for the system partition to be read-only (my own setup has / mounted read-only, for instance).
On Windows systems, although it is more common to run with admin rights, it's not universal and, in any case, the binary file of a running program is locked.
Even if you manage to work around all this, it prevents data portability:
Reinstall or update your program, and your data is gone.
There is no way to backup your data and restore it later (possibly on another system).
The only programs that modify executables these days are malware. For this reason, intercepting executable-modifying programs and shutting them down is one of the most basic features of anti-malware software.
Along those lines, on systems that implement signed binaries or any kind of trust system, your modified binary won't pass signature checks.
So: lots of quirks and lots of complex workarounds, both in your program and in the user experience (needing to request special permissions, tricky save and backup, very probable data loss), while on the other hand a simple save to a data file is easy to implement and user-friendly.
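For contrast, a minimal sketch of the "simple save to a data file" alternative; the file name saved.dat and the one-string format are arbitrary choices for illustration:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::string l;

    std::ifstream in("saved.dat");          // arbitrary data file
    if (in && std::getline(in, l))
        std::cout << "last run saved: " << l << '\n';  // restored value

    std::cin >> l;                          // the input step from the question

    std::ofstream out("saved.dat");
    out << l << '\n';                       // persisted for the next run
}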
As mentioned by @drescherjm and @Peter in the comments, such a practice is exactly what security software looks for, so it's not really the brightest of ideas.
I'm not sure of your intentions, but if you are trying to implement co-routine-like behaviour across runs of your program, here's what you can do:
Create a static variable, say static int state = 0;, and use that to implement co-routines on the scale of a program lifetime.
Use a file, say "Sys_Status.dat", to store those variables' values between runs.
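A hedged sketch of that suggestion; Sys_Status.dat comes from the answer above, while the stage numbering is invented for illustration:

#include <fstream>

static int state = 0;   // the static variable suggested in the answer

// Load the saved state from the file; stays 0 on the very first run.
static void load_state() {
    std::ifstream in("Sys_Status.dat");
    in >> state;
}

static void save_state() {
    std::ofstream out("Sys_Status.dat");
    out << state;
}

int main() {
    load_state();
    switch (state) {
        case 0:  /* first stage of the work  */ break;
        case 1:  /* second stage of the work */ break;
        default: /* all stages done          */ break;
    }
    ++state;        // advance so the next run resumes at the next stage
    save_state();
}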

Why should I use command-line arguments? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
I've recently learned about command line arguments and I understand how to use them. But I just don't get why I should use them at all. I mean, you could use any normal variable to do the same job as a command line argument.
Could someone explain or give a scenario of how a command line argument could be essential to a program?
edit myfile.txt
You could always make an editor that edits one specific file, but it makes more sense if the user can tell you which file they want to edit. Command-line arguments are one way of doing this.
The purpose of a command line argument is to allow you to pass information into your program without hard coding it into the program. For example
Foo -pages:10
Foo -pages:20
Here we've passed information into the program (in this case a pages setting). If you set a variable in your program you'd have to recompile it every time you wanted to change it!
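A minimal sketch of how that Foo -pages:N example could be parsed; Foo and -pages: are the hypothetical names from the answer above, not a real tool:

#include <iostream>
#include <string>

int main(int argc, char* argv[]) {
    int pages = 10;                            // default when not given
    for (int i = 1; i < argc; ++i) {
        std::string arg = argv[i];
        if (arg.rfind("-pages:", 0) == 0)      // starts with "-pages:"
            pages = std::stoi(arg.substr(7));  // number after the colon
    }
    std::cout << "Using " << pages << " pages\n";
}

Running Foo -pages:20 then changes the behaviour with no recompile.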
It means you don't have to edit the program to change something in it.
Say your program processes all files in a folder to remove icon previews. You could hardcode the folder to process in the program. Or you could specify it as a commandline argument.
I use this example because it describes a bash script I use on my Mac at home.
Automation.
You cannot script or use an application/tool in a headless (or unattended) environment if it requires interactive user input.
You can use "config files" and write and read from temporary files, but this can become cumbersome quickly.
Driving the application.
Almost every non-trivial application has some variation in what or how it does something; there is some level of control that can and must be applied. Similar to functions, accepting arguments is a natural fit.
The command line is a natural and intuitive environment; supporting and using a command line allows for better and easier adoption of the application (or tool).
A GUI can be used, sure, but unless your plan is to only support GUI environments and only support input via the GUI, the command line is required.
Consider echo, which repeats its arguments—it could hardly work without them.
Or tar—how could it tell whether to extract, create, decompress, list, etc. etc. without command line arguments?
Or git, with its options to pull, push, merge, branch, checkout, fetch, ...
Or literally any UNIX program except maybe true and false (although the GNU versions of those do take arguments.)
There are countless applications for passing arguments to main. For example, let's say you are a developer and you've designed an application for processing images. Normal users need only to pass the names of the images to your application for processing. The actual source files of your application are not available to them or they are probably not programmers.
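A sketch of that scenario, treating each argument as an image file name (the actual processing is left as a stub):

#include <iostream>

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " image...\n";
        return 1;
    }
    for (int i = 1; i < argc; ++i)
        std::cout << "processing " << argv[i] << '\n';  // real work here
}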

Is there a disassembler with modification and reassembly capabilities for a 32-bit executable? [closed]

Closed. This question does not meet Stack Overflow guidelines (we don't allow questions seeking recommendations for books, tools, software libraries, and more). It is not currently accepting answers. Closed 6 years ago.
I have a class project where we need to take a 32-bit executable written in C++ and disassemble it and modify the assembly code and then reassemble it. We're supposed to do things like hardcode cheats into the game.
I've been searching for hours and I can't find any software that will do this. I've looked at Ollydbg and spent about two hours with it and couldn't really figure out how to get it to work. I utilized Cheat Engine and that actually worked out really well for me - I was able to isolate the code modifying the addresses I cared about and replace it with code to have a favorable impact on the game, but as far as I can tell Cheat Engine has no ability to recompile the modified code.
This is a fairly low-level Computer Science class, so please take my ability level into account when making suggestions, but if there is any software or alternative approach that will allow me to do this, I would greatly appreciate it. Thanks!
Since you mentioned OllyDBG and Cheat Engine I'm going to assume you're using Windows.
First, you can use OllyDbg to save a patched file, though for some reason I can't find this option in OllyDbg 2, only in older versions (like 1.10): right-click on the code window, choose Copy to executable > All modifications, and when the new window opens, right-click on it and choose Save file.
An alternative that I really like is x64dbg. It's an open-source debugger/disassembler and has an option to save changes via "Patches".
Another option is to apply the changes with a hex editor, which lets you modify any file (including executables) in binary form. It is, of course, a bit harder, since you need to translate your changes to opcodes manually, but if your changes are small or only consist of modifying some constants, it can be a faster and easier solution. There are a lot of hex editors out there, but my favorite is XVI32.
What I personally like to do is modify the memory from code using the Windows API's WriteProcessMemory and ReadProcessMemory, since that allows you to do these things dynamically.
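A hedged sketch of that WriteProcessMemory approach; the process id, address, and patch bytes are placeholders you would first locate with a tool such as Cheat Engine:

#include <windows.h>
#include <cstdio>

int main() {
    DWORD pid = 1234;                    // placeholder: the game's process id
    LPVOID addr = (LPVOID)0x00401000;    // placeholder: address to patch
    BYTE patch[] = { 0x90, 0x90 };       // e.g. two NOPs over an instruction

    HANDLE h = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION,
                           FALSE, pid);
    if (!h) { std::fprintf(stderr, "OpenProcess failed\n"); return 1; }

    // Code pages are usually not writable, so lift the protection first.
    DWORD old = 0;
    VirtualProtectEx(h, addr, sizeof patch, PAGE_EXECUTE_READWRITE, &old);

    SIZE_T written = 0;
    if (WriteProcessMemory(h, addr, patch, sizeof patch, &written))
        std::printf("patched %zu bytes\n", written);
    else
        std::fprintf(stderr, "WriteProcessMemory failed\n");

    VirtualProtectEx(h, addr, sizeof patch, old, &old);  // restore protection
    CloseHandle(h);
    return 0;
}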

Is it possible to run .out files in Windows? [closed]

Closed. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. It is not currently accepting answers. Closed 7 years ago.
I have a file with the extension .out. I'm running Windows 10. From what I understand, .out files are generated when compiling C and C++ code on Linux. I was wondering if there is any way I could execute the file on Windows. Renaming its extension to .exe gave me an error saying the file was incompatible with the 64-bit version of Windows.
So is there any way I could execute the file, or better yet, view its contents as proper code so I can work with it, while using Windows?
There's no way of directly converting a Linux executable to Windows format.
You'll have to recompile it, or use Cygwin, which allows running Linux-style programs in a Windows environment.
a.out is not necessarily related to C or C++; it can be generated by any kind of compiler/assembler. It isn't even guaranteed that the file actually is in what you may think of as the a.out format.
To execute it, the only realistic option is to install a Unix OS, but even this won't guarantee that it can run, because there may be missing dependencies, the wrong OS version, etc.
To view the content of the file, there are different utilities on different platforms. For example, you can use objdump on Linux or Cygwin/Windows to take a look at it, or use a disassembler and see if you can make sense of it. On Windows you can use IDA, which covers a broad range of file formats and may be able to dissect it.
Once you have managed to look inside it, there is the next issue you asked about: converting it. This is a tedious process, though, because you must do it by hand. If IDA can identify it, you have a good start, because you then have an assembly listing as a starting point, but it will likely not assemble, and certainly not run, on your target platform (Windows).

Installation: what happens behind the scenes in Linux? [closed]

Closed. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. It is not currently accepting answers. Closed 9 years ago.
I'll focus on libraries, though this applies to general application installation as well.
A novice user like me probably expects that when we "install" a library (say, a C++ one), all of its source code gets copied somewhere, with a few flags and path variables set, so that we can directly use #include-style statements in our own code and start using it.
But by inspection I can say that the source files themselves are actually not copied; instead, pre-compiled object forms of the files are, except for the so-called *.h header files. (I say this simply because I cannot find the source files anywhere on the hard disk, only the header files.)
My Questions:
What is the behind-the-scenes procedure when we "install" something? What are all the typical locations that get affected in a Linux environment, and what is the typical importance/use of each of those locations?
What is the difference between "installing" a library and installing a new application into the Linux system via sudo apt-get or the like?
Finally, if I have a custom set of source files that are useful as a library and want to send them to another system, how would I "install" my own library there in the same way?
Just to clarify, my primary interest is to learn, from your answers and literature pointers, the bigger picture of a typical installation (an application or a library), to a level where I can cross-check, learn, and redo it if I want to.
(Question was removed; it addressed the difference between header and object files.) This is more a question of general programming. A header file is just the declaration of classes/functions/etc.; it does nothing by itself. All a header file does is say "hey, I exist, this is what I look like": it's a declaration of the signatures used later in the actual code. The object code is the compiled and assembled, but not yet linked, code (see the small example below). This diagram does a good job of explaining the steps of what we generally call the "compilation" process, though it would better be called the "compilation, assembling, and linking" process. Briefly, linking is pulling in all necessary object files, including those needed from the system, to create a running executable which you can use.
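A tiny illustration of that split (the file and function names are made up): the header declares, the source file defines, and only the compiled object form plus the header need to be installed.

// mylib.h -- the header: declarations only; this is what gets #included
int add(int a, int b);

// mylib.cpp -- the implementation; compiled into mylib.o and then into
// libmylib.a / libmylib.so, which is what an install actually copies
#include "mylib.h"
int add(int a, int b) { return a + b; }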
(Now question 1) When you think about it, what is installation except the creation and modification of the necessary files with the appropriate content? That's what installing is: placing the new files in the appropriate place and then modifying configuration files if necessary. As to what locations are typically affected, you usually see binaries placed in /bin, /usr/bin, and /usr/local/bin; libraries are typically placed in /lib or /usr/lib. Of course this varies. I think you'd find this page on Linux system directories to be an educational read. Remember, though, that anything can be placed pretty much anywhere and still work appropriately as long as you tell other things where to find it; these directories are just used because they keep things organized and allow for assumptions about where items, such as binaries, will be located.
(Now question 2) The only difference is that apt-get generally makes it easier: it installs the item you need, keeps track of installed items, and allows for their easy removal. In terms of the actual installation, if you do it correctly manually, the result should be the same. A package manager such as apt-get just makes life easier.
(Now question 3) You could create your own package, or, if it's less involved, just write a script that moves the files to the appropriate locations on the system. However you do it, what matters is getting the items where they need to be. Creating a package yourself would be a great learning experience, and there are plenty of tutorials online: find out what package system your flavor of Linux uses, then look for a tutorial on how to create packages of that type.
So, in my opinion, the really big picture of the installation process is just compilation (if necessary), then the moving of the necessary files to their appropriate places on the system, and the modification of existing files on the system if necessary: put your crap there, and let the system know it's there if you need to.