How do self-extracting programs work? - compression

Recently I've been working on writing a compressor for graphing calculator programs, where space is at a premium and the calculator OS doesn't care if you set the instruction pointer to an arbitrary address.
I implemented basic DEFLATE and then tried to google for compression algorithms that might work better on executable code.
But here's my question: wouldn't any modern OS with DEP prevent a program from executing extracted code (at least directly)? So are all "packed executables" limited to unzipping data, having an uncompressed interpreter run the extracted code, or something in between?

wouldn't any modern OS with DEP prevent a program from executing extracted code (at least directly)?
Less directly, there is of course no problem. Even with W^X, the unpacker can simply write the code to writable memory, and only then turn that memory into executable memory.
Though on many operating systems, even with DEP support, a program is allowed to allocate memory that is both writable and executable. DEP does not stop you from doing things that you have permission to do.
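For a concrete picture, here is a minimal sketch of the write-then-flip approach on Windows, assuming payload already holds the decompressed machine code (error handling omitted); the function name and calling setup are illustrative, not any particular packer's code:

    #include <windows.h>
    #include <string.h>

    /* Hypothetical entry point: 'payload' already holds the decompressed
       machine code (error handling omitted for brevity). */
    void run_unpacked(const unsigned char *payload, size_t len)
    {
        /* 1. Allocate ordinary writable (non-executable) memory and copy the code in. */
        void *buf = VirtualAlloc(NULL, len, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        memcpy(buf, payload, len);

        /* 2. Only now flip the pages to executable (and no longer writable),
              so W^X / DEP is never violated. */
        DWORD old;
        VirtualProtect(buf, len, PAGE_EXECUTE_READ, &old);
        FlushInstructionCache(GetCurrentProcess(), buf, len);

        /* 3. Jump into the freshly mapped code. */
        ((void (*)(void))buf)();
    }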

Related

Why is the first compilation of the day slower than the next ones?

I've been using GCC/G++ for almost all my C and C++ projects for years, and there is something I've never understood: the first compilation after the machine starts always takes one or two seconds longer than every subsequent compilation until you shut the computer down.
As it only happens once a day, it is not really troublesome, but I just wonder why. Does it initialize something and then reuse it?
If yes, does it save something somewhere?
If yes, why does it have to reinitialize it every day?
Notably because of the page cache. If you have enough RAM, most of the data (including your source code files and object files) tends to stay there.
If that bothers you, use an SSD and/or, while drinking your first coffee, run a command that reads all your source code before your first compilation, e.g. on Linux wc *.cc (you could even make it a crontab job with @reboot); read also http://linuxatemyram.com/
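If you prefer a tiny program over a shell one-liner, a cache-warming read loop is just this kind of sketch (the file names are whatever sources you pass on the command line):

    #include <stdio.h>

    /* Read every file named on the command line and throw the data away;
       the useful side effect is that the kernel's page cache ends up warm. */
    int main(int argc, char **argv)
    {
        char buf[1 << 16];
        for (int i = 1; i < argc; i++) {
            FILE *f = fopen(argv[i], "rb");
            if (!f)
                continue;
            while (fread(buf, 1, sizeof buf, f) > 0)
                ;                      /* discard the data, keep the cache */
            fclose(f);
        }
        return 0;
    }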
You might also use the suspend-to-disk facility instead of shutting down, but I don't think it is worth the pain (and I like /tmp/ being cleaned at boot time).
(I am guessing or hoping you are using Linux with a native filesystem, e.g. Ext4 or BTRFS)
The short answer is caching, possibly at multiple levels.
For example, a lot of modern hard drives have a built-in cache, so if some data is being read repeatedly (e.g. the gcc executables, libraries, makefiles, source files), the first access leaves that data in the device cache, and subsequent operations interact with the cache rather than reading from the platters. Operations on the cache are much faster than, for example, physically positioning the drive heads for a read.
A number of operating systems also implement some form of page cache, which does a similar thing (just using system RAM as a cache).
Most modern machines have various levels of cache (in devices, managed by the OS, multiple cache levels in the CPU, etc).
Caching can, depending on configuration, affect both read and write operations. Caches that affect writing (write operations actually write to cache and return, and are committed - such as to a drive - at a later time) are part of the reason operating systems require an explicit shutdown, so they commit all cached writes.

How does libc work?

I'm writing a MIPS32 emulator and would like to make it possible to use the whole Standard C Library (maybe with the GNU extensions) when compiling C programs with gcc.
As I understand it at this point, I/O is handled by syscalls on the MIPS32 architecture. To successfully run a program that uses libc/glibc, how can I tell which syscalls I need to emulate (without trial and error)?
Edit: See this for an example of what I mean by syscalls.
(You can check out the project here if you are interested, any feedback is welcome. Keep in mind that it's in a very early stage)
Very Short Answer
Read the much longer answer.
Short Answer
If you intend to provide a custom libc that uses some feature of your emulator to have the host OS execute your system calls, you have to implement all of them.
Much Longer Answer
Step back for a minute and look at the way things are typically layered in a real (non-emulated) system:
1. The peripherals have some I/O interface (e.g., numbered ports or memory mapping) that the CPU can tickle to make them do whatever they do.
2. The CPU runs software that understands how to manipulate the hardware. This can be a single-purpose program or an operating system that runs other programs. Since libc is in the picture, let's assume there's an OS and that it's something Unix-y.
3. Userspace programs run by the OS use a defined interface between themselves and the OS to ask for certain "system" functions to be carried out.
What you're trying to accomplish takes place between layers 3 and 2, where a function in libc or user code does whatever the OS defines as triggering a system call. This opens up numerous cans of worms:
What the OS defines as triggering a system call differs from OS to OS and (rarely) between versions of the same OS. This problem is mitigated on "real" systems by providing a dynamically-linkable libc that takes care of hiding those details. That aside, if you have a MIPS32 binary you want to run, does it use a system call convention that your emulator supports?
You would need to provide a custom libc that does something your emulator can recognize as making a particular system call and carry it out. Any program you wish to run will have to be cross-compiled to MIPS32 and statically linked with it, as would any other libraries the program requires (libm comes to mind). Alternately, your emulator package will need to provide a simulation of a dynamic linker plus dynamically-linkable copies of all required libraries, because opening those on the host won't work. If you have enough source to recompile the program from scratch, porting might be better than emulation.
Any code that makes assumptions about paths to files on a particular system or other assumptions about what they'll find in certain devices (which are themselves files) won't run correctly.
If you're providing layer 2, you're signing yourself up to provide a complete, correct simulation of the behavior of one particular version of an entire operating system. Some calls like read() and write() would be easy to deal with; others like fork(), uselib() and ioctl() would be much more difficult. There also isn't necessarily a one-to-one mapping of calls and behaviors your program uses with those your host OS provides. All of this assumes the host is Unix and the target program is, too. If the target is compiled for some other environment, all bets are off.
That last point is why most emulators provide just a CPU and the hardware behaviors of some target system (i.e., everything in layer 1). With those in place, you can run an original system's boot ROM, OS and user programs, all unaltered. There are a number of existing MIPS32 emulators that do just this and can run unaltered versions of the operating systems that ran on the hardware they emulate.
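If you do decide to have the emulator service system calls itself, the dispatch tends to look something like the sketch below. It assumes the Linux o32 MIPS convention (syscall number in $v0, arguments in $a0-$a3, result back in $v0) and a flat guest address space reachable through cpu->mem; the structure, names and numbers are illustrative, not taken from any particular emulator:

    #include <stdint.h>
    #include <unistd.h>

    /* Emulated CPU state; $v0 is register 2, $a0..$a3 are registers 4..7.
       'mem' is assumed to map the whole (flat) guest address space. */
    typedef struct { uint32_t r[32]; uint8_t *mem; } cpu_t;

    /* Illustrative Linux o32 syscall numbers (base 4000). */
    enum { SYS_exit_ = 4001, SYS_read_ = 4003, SYS_write_ = 4004 };

    /* Called by the emulator core whenever it decodes a SYSCALL instruction. */
    void handle_syscall(cpu_t *cpu)
    {
        uint32_t num = cpu->r[2];                    /* $v0: syscall number */
        uint32_t a0 = cpu->r[4], a1 = cpu->r[5], a2 = cpu->r[6];

        switch (num) {
        case SYS_write_:                             /* write(fd, buf, len) */
            cpu->r[2] = (uint32_t)write((int)a0, cpu->mem + a1, a2);
            break;
        case SYS_read_:                              /* read(fd, buf, len)  */
            cpu->r[2] = (uint32_t)read((int)a0, cpu->mem + a1, a2);
            break;
        case SYS_exit_:
            _exit((int)a0);                          /* ends the emulator too */
        default:
            cpu->r[2] = (uint32_t)-1;                /* not implemented      */
            break;
        }
    }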
HTH and best of luck on your project.
Most of the ISO standard C library can be written in straight C. Only a few portions need access to lower level OS functionality.
At a minimum, you'll need to emulate basic I/O at the block or character level for fopen, fread, and fwrite. You could take the Unix approach, though, and implement those on top of the lower-level open, read, and write calls.
And you'll have to manage dynamic memory allocation for malloc and free.
And setjmp and longjmp, which need access to the execution stack.
Also time and the signal.h functions.
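As an illustration of how small the OS-dependent part can be, here is a rough sketch of the kind of newlib-style stubs a libc port funnels everything through; the memory-mapped "debug console" address and the _end symbol are assumptions about the target and its linker script, not something your emulator necessarily provides:

    #include <stddef.h>

    /* Hypothetical: the emulator exposes a "debug console" as a memory-mapped
       register; writing a byte there prints a character on the host. */
    #define SIM_CONSOLE (*(volatile unsigned char *)0xFFFF0000)

    /* Symbol conventionally provided by the linker script (end of static data). */
    extern char _end;

    /* stdio ultimately funnels fwrite/printf through a stub like this. */
    int _write(int file, const char *ptr, size_t len)
    {
        (void)file;
        for (size_t i = 0; i < len; i++)
            SIM_CONSOLE = (unsigned char)ptr[i];
        return (int)len;
    }

    /* malloc() grows the heap by calling this. */
    void *_sbrk(ptrdiff_t incr)
    {
        static char *brk;
        if (!brk)
            brk = &_end;
        char *prev = brk;
        brk += incr;
        return prev;
    }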
I don't know exactly how MIPS works, but on Win32 the OS calls have to be explicitly imported into a process via the DLL/EXE import table. There could be something similar in the executable format used by the MIPS system.
The usual approach is to emulate not only the CPU, but also a representative set of standard peripherals. Then you start an operating system in your emulator which comes with a libc and hardware drivers included. Libc will invoke the OS's drivers, which invoke the virtual hardware in your emulator. For a popular example, see DosBox.
The other interpretation of your question is that you don't want to write a full emulator, but a binary compatibility layer that allows you to execute MIPS32 binaries on a non-MIPS32 system. A popular example of that is Mac OS X (Intel), which can also execute PowerPC applications.
In the latter scenario you need to emulate either the OS's ABI (application binary interface), or maybe you can get away with just libc's ABI. In both cases you need to implement stub code running on the emulator and proxy code running on the host:
The stub serializes the function call arguments
...and transmits them from emulator memory to host memory using some special virtual instructions
The proxy needs to patch the arguments (endianness, integer length, address space ...)
...and executes the function call on the host system
The proxy then patches and serializes the outgoing function arguments
...and transmits them back to the stub
...which returns the data to the caller
Most calls will not be able to work with a generic stub/proxy, but will need specific solutions.
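A very rough sketch of the proxy side, for a single hypothetical "get host time" service, might look like the following; the register layout, the trap mechanism and the big-endian-guest assumption are all illustrative:

    #include <stdint.h>
    #include <string.h>
    #include <time.h>
    #include <arpa/inet.h>   /* htonl() for the endianness fix-up */

    /* Emulated CPU state; the register layout is illustrative. */
    typedef struct { uint32_t r[32]; uint8_t *mem; } cpu_t;

    /* Host-side proxy, invoked when the emulator decodes the special
       "call host" instruction for a hypothetical get-time service.
       $a0 (r[4]) holds the guest address where the result should go;
       the guest is assumed to be big-endian on a little-endian host. */
    void proxy_host_time(cpu_t *cpu)
    {
        uint32_t guest_addr = cpu->r[4];
        uint32_t now = (uint32_t)time(NULL);    /* run the call on the host */

        uint32_t swapped = htonl(now);          /* patch endianness */
        memcpy(cpu->mem + guest_addr, &swapped, sizeof swapped);

        cpu->r[2] = 0;                          /* $v0: report success to the stub */
    }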
Good luck!

Creating a native application for X86?

Is there a way I could make a C or C++ program that would run without an operating system and that would draw something like a red pixel to the top left corner? I have always wondered how these types of applications are made. Since Windows is written in C I imagine there is a way to do this.
Thanks
If you're writing for a bare processor, with no library support at all, you'll have to get all the hardware manuals, figure out how to access your video memory, and perform whatever operations that hardware requires to get a pixel drawn onto the display (or a sound on the beeper, or a block of memory read from the disk, or whatever).
When you're using an operating system, you'll rely on device drivers to know all this for you. Programs are still written, every day, for platforms without operating systems, but rarely for a bare processor. Many small MPUs come with a support library, usually a set of routines that lets you manipulate whatever peripheral devices they support.
It can certainly be done. You typically write the code in C, and you pretty much have to do everything on your own, with no standard library. To set your pixel, you'd usually load a pointer to the physical address of the screen, and write the correct value to that pointer. Alternatively, on a PC you could consider using the VESA BIOS. In all honesty, it's fairly similar to the way most code for MS-DOS was written (most used MS-DOS to read and write data on disk, but little else).
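For the classic PC case, once the BIOS has been asked (via int 10h) to switch to VGA mode 13h (320x200, 256 colours), putting a red pixel in the top-left corner really is just a write through a pointer. A minimal sketch, assuming real mode or an identity-mapped framebuffer:

    #include <stdint.h>

    /* Assumes mode 13h is already set and the framebuffer at 0xA0000 is
       directly addressable (real mode, or identity-mapped memory). */
    #define VGA_FRAMEBUFFER ((volatile uint8_t *)0xA0000)

    static void put_pixel(int x, int y, uint8_t colour)
    {
        VGA_FRAMEBUFFER[y * 320 + x] = colour;
    }

    void demo(void)
    {
        put_pixel(0, 0, 4);   /* palette index 4 is red in the default VGA palette */
    }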
The core bootloader and the part of the Kernel that bootstraps the OS are written in assembly. See http://en.wikipedia.org/wiki/Booting for a brief writeup of how an operating system boots. There's no way I'm aware of to write a bootloader or Kernel purely in a higher level language such as C or C++ without using assembly.
You need to write a bootstrapper and a loader combination followed by a payload which involves setting the VGA mode manually by interrupt, grabbing a handle to the basic video buffer and then writing a value to the 0th byte.
Start here: http://en.wikipedia.org/wiki/Bootstrapping_(computing)
Without an OS it's difficult to have a loader, which means no dynamic libc. You'd have to link statically, as well as have a decent amount of bootstrap code written in assembly (although it could be provided as object files which you could then link with). Also, since you'd be at the mercy of whatever the system has, you'd be stuck with the VESA video modes (unless you want to write your own graphics driver and subsystem, which you don't).
There is, but not generally from within the OS. Initially, there is an asm stub that's executed from the MBR on the drive (see MBR). For x86 processors, this is generally 16-bit real-mode code; the operating system code is jumped into from there, and the CPU is switched up to 32-bit/64-bit mode depending on the operating system and chipset.

How can I create an executable to run on a certain processor architecture (instead of a certain OS)?

So I take my C++ program in Visual Studio, compile it, and it spits out a nice little EXE file. But EXEs will only run on Windows, and I hear a lot about how C/C++ compiles into assembly language, which runs directly on a processor. The EXE runs with the help of Windows, or I could have a program that makes an executable that runs on a Mac. But aren't I compiling C++ code into assembly language, which is processor specific?
My Insights:
I'm guessing I'm probably not. I know there's an Intel C++ compiler, so would it make processor-specific assembly code? EXEs run on Windows, so they take advantage of tons of things already set up, from graphics packages to the massive .NET framework. A processor-specific executable would be literally starting from scratch, with just the instruction set of the processor.
Would this executable be a file type? We could be running Windows and open it, but would control then switch over to the processor alone? I assume this executable would be something like an operating system, in that it would have to be run before anything else was booted up, and would have only the processor's instruction set to "use".
Let's think about what "run" means...
Something has to load the binary codes into memory. That's an OS feature. The .EXE or binary executable file or bundle or whatever, is formatted in a very OS-specific way so that the OS can load it into memory.
Something has to turn control over to those binary codes. There's the OS, again.
The I/O routines (in C++, but this is true in most places) are just a library that encapsulates OS APIs. Drat that OS, it's everywhere.
Reminiscing.
In the olden days (yes, I'm this old) I worked on machines that didn't have OS's. We also didn't have C.
We wrote machine codes using tools like "assemblers" and "linkers" to create big binary images that we could load into the machine. We had to load these binary images through a painful bootstrap process.
We'd use front panel keys to load enough code into memory to read a handy device like a punched paper-tape reader. This would load a small piece of fairly standard boot linking loader software. (We used mylar tape so it wouldn't wear out.)
Then, when we had this linking loader in memory, we could feed the tape we'd prepared earlier with the assembler.
We wrote our own device drivers. Or we used library routines that were in source form, punched on paper tapes.
A "patch" was actually patched pieces of paper tape. Plus, since there were also little bugs, we'd have to adjust the memory image based on hand-written instructions -- patches that hadn't been put into the tape.
Later, we had simple OS's that had simple API's, simple device drivers, and a few utilities like a "file system", an "editor" and a "compiler". It was for a language called Jovial, but we also used Fortran sometimes.
We had to solder serial interface boards so we could plug in a device. We had to write device drivers.
Bottom Line.
You can easily write C++ programs that don't require an OS.
Learn about the hardware BIOS (or BIOS-like) facilities that are part of your processor's chipset. Most modern hardware has a simple OS wired into ROM that does power-on self-test (POST), loads a few simple drivers, and locates boot blocks.
Learn how to write your own boot block. That is the first proper "software" thing that's loaded after POST. This isn't all that hard. You can use various partitioning tools to force your boot block program onto a disk and you'll have complete control over the hardware. No OS.
Learn how GRUB, LILO or BootCamp launch an OS. It's not complicated. Once they're booted, they can load your program and you're off and running. This is slightly simpler because you create the kind of partition that a boot loader wants to load. Base yours on the Linux kernel and you'll be happier. Don't try to figure out how Windows boots -- it's too complicated.
Read up on ELF. http://en.wikipedia.org/wiki/Executable_and_Linkable_Format
Learn how device drivers are written. If you don't use an OS, you'll need to write device drivers.
The problem is that the OS really does a lot to start your programs. The EXE file itself has header information on it that Windows recognizes, identifying itself as an EXE file. Your app does everything, from filesystem access to memory allocations, through the OS.
But yes, you CAN run apps compiled for Windows/Intel on other platforms without emulation. If you want to run your EXE on a Mac or UNIX, you will need to install a bit more software to do the work that Windows would do to run your program -- take a look at the "Wine" project.
What you're talking about is what's known in the embedded world as a "bare-metal" application. They're very common for things like a ARM Cortex-M3 that goes in (say) a debit-card validator box or an interactive toy, and doesn't have enough memory or capability to run a full operating system. So, instead of getting an "ARM/Linux" compiler that would compile an application to run on Linux on an ARM processor, you get an "ARM bare-metal" compiler that compiles things to run on an ARM processor without an operating system. (I'm using ARM rather than x86 as an example, because x86 bare-metal applications are really quite rare these days.)
As stated in your question and the other answers, your application will need to do some things that would otherwise be taken care of by the operating system.
First, it needs to initialize the memory system, the interrupt vectors, and various other bits of board goo. Typically this is something that a bare-metal compiler will do for you, though if you have a weird board, you may need to tell it how to do that. This gets things from the point where the board turns on to the point where your main() function starts.
Then, you need to interact with things outside the CPU and RAM. An operating system includes all sorts of functions for doing this -- disk I/O, screen output, keyboard and mouse input, networking, etc., so forth, and so on. Without an operating system, you have to get that from somewhere else. You may get some of that from libraries from your hardware manufacturer; for instance, a board I was recently playing with has a 40x200-pixel LED screen, and it came with a library with the code to turn that on and set individual pixel values on it. And there are several companies selling libraries to implement a TCP/IP stack and things like that, for doing networking or whatnot.
Consider, for example, that this makes it difficult to do even a basic printf. When you have an operating system, printf just sends a message to the operating system that says "put this string on the console", and the operating system finds the current cursor position on the console, and does all the stuff to figure out what pixels to change on the screen, and what CPU instructions to use to change those pixels, in order to do that.
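As a small illustration of what replacing that plumbing looks like, here is a sketch of the kind of low-level output routine a bare-metal program provides and then builds its printing on top of; the UART register addresses and bit meanings are made up and would come from your board's datasheet:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers; real addresses and status
       bits come from the hardware manual, not from here. */
    #define UART_TX       (*(volatile uint32_t *)0x10000000)
    #define UART_STATUS   (*(volatile uint32_t *)0x10000004)
    #define UART_TX_READY 0x1u

    static void uart_putc(char c)
    {
        while (!(UART_STATUS & UART_TX_READY))
            ;                          /* busy-wait until the transmitter is free */
        UART_TX = (uint32_t)(unsigned char)c;
    }

    void uart_puts(const char *s)
    {
        while (*s)
            uart_putc(*s++);           /* a printf replacement can sit on top of this */
    }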
Oh, and did we mention that you first have to figure out how to get the program into the CPU? A typical computer has a bit of programmable ROM that it will load instructions from when it starts up. On an x86, this is the BIOS, and it usually already contains a handy program that gets the CPU started, sets up the display, looks for disks, and loads a program off the disk that it finds. On an embedded system, that's typically where your program goes -- which means you need some way to put your program there. Often, that means you have a device called a "debugger" that's physically attached to your embedded board that loads the program -- and can also do things that allow you to pause the processor and determine what its state is, so that you can step through your program just as if you were running it in a software debugger on your computer. But I digress.
Anyway, to answer your second question, this executable that you'd create is something that gets stored in that ROM on your embedded board -- or perhaps you'd just store a bit of it in ROM (which is, after all, pretty small) and store the rest on a flash drive, and the bit in ROM would include the instructions to get the rest of it off the flash drive. It would probably be stored as a file on your main computer (that is, the Linux or Windows computer where you're creating it), but that's just for storage, it wouldn't run there.
You'll notice that when you've got a lot of these libraries together, they're doing a fair bit of what an operating system does, and there's sort of this space between the pile of libraries and a real operating system. In that space goes what's called an RTOS -- "real-time operating system". The smaller ones of these are really just collections of libraries that work together to do all the operating-systemy things, and sometimes also include stuff so you can run multiple threads at once (and then you can have different threads act like different programs) -- though all of this is all compiled into the same compiled "program", and the RTOS is really nothing more than a library you've included. Larger ones start storing parts of the code in separate places, and I think some of them can even load pieces of code off of disks -- just like Windows and Linux do when running a program. It's sort of a continuum, rather than an either/or.
The FreeRTOS system is an open-source RTOS that's towards the smaller end of the RTOS space; they might be a good place to look at some of this if you're more interested. They do have some examples of x86 applications, which would give you an idea of what sort of x86 systems would run a bare-metal or RTOS-based program and how you'd compile something to run on one; link here: http://www.freertos.org/a00090.html#186.
The computer is not the CPU. To do anything useful, the CPU has to be connected to memory and IO controllers and other devices. An OS takes care of abstracting all of that from running programs. So, if you want to write a program that runs without an OS, your program will have to replicate at least some features of an OS: Taking over from the BIOS during the boot process, initializing devices, communicating with the disk controller to load code and data, communicating with the display controller to show information to the user, communicating with the keyboard controller and the mouse controller to read user input etc etc etc.
Unless you are building an embedded system with specialized hardware, there is no point in doing this. Besides, running your program would mean the user would have to give up running other programs. While this may be acceptable for an ATM today or WordStar in 1984, these days people frown on not being able to check email while listening to music.
Sure, they exist. They are called cross compilers. For example, that's how I can program for the iPhone platform using Xcode.
A related type of compiler is one that compiles for a virtual platform. That's how Java works.
Any given compiler/toolset produces code for a particular processor/OS combination. So your Visual Studio compile example produces code for x86/Windows. That .EXE will only run on x86/Windows and not on (for example) ARM/Windows (as used by some cellphones).
To produce code for a processor/OS combination other than what you're running the compiler on requires what is generally referred to as a cross-compiler. If you have a full professional Visual Studio subscription, you can get the ARM cross compiler, which will allow you to produce ARM/Windows .EXE files which won't run on your desktop machine, but WILL run on an ARM/Windows based cellphone or palmtop.
Yes, you can make an executable that runs on the 'bare metal' of a processor. Obviously that's how operating system kernels work. The main thing you need to do is create an executable that uses no libraries whatsoever. However, the "no libraries" restriction includes the C standard library! So that means no malloc, no printf, etc. You have to basically be your own OS and manage memory and I/O yourself. This will inevitably require a fair bit of work directly in assembly at some stage.
You also lose several other luxuries, such as main(), which can't be the starting point of your program since main() is something that is invoked by the OS and the C runtime environment.
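A minimal sketch of such a bare, library-free program might look like this; the build flags and the _start entry symbol are typical of GCC-style toolchains, but the exact details depend on your linker script and how the image gets loaded:

    /* Built with something like:
         gcc -ffreestanding -nostdlib -static -o bare.elf start.c
       Flags, the entry symbol and the final packaging all depend on your
       toolchain and linker script; this only shows the shape of the thing. */

    void _start(void)
    {
        volatile int counter = 0;      /* no libc, no main(): we are on our own */

        for (;;)
            counter++;                 /* spin forever; there is nothing to return to */
    }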
Absolutely! That is what embedded programming is. As many have probably said already, the operating system does quite a bit for you. And even in the embedded world, without an operating system, a number of the development tools will provide the startup code to get the processor running enough to jump to your program. Some/many provide full-blown C/C++ libraries so that you can call functions like memcpy() and sometimes even malloc() and printf().
You are welcome to provide every line of code and every instruction and not use a development tool package, but still use a compiler like gcc, for example. Some of the binary formats are common to those run on operating systems, ELF for example. You can execute ELF files on Linux, but you can also have your embedded program result in an ELF binary. The processor cannot execute ELF in that format, but whatever programs the boot PROM (or RAM, in some cases) will extract the binary program from the ELF file, not unlike an operating system extracting the program to run from an ELF file. EXE is not one of those file formats.
Your favorite Windows application compiler is probably not an embedded compiler either, although you can sometimes use one to do the high-level language work and then use an alternative assembler and linker. That is usually more work than it is worth: you write a function in C (one that does NOT make any library or system calls) and compile it to an object; you write your own utility (or find one) to extract the compiled binary from that object and convert it to another object format or to assembler (disassemble); you add your startup code and other assembly to it; then you assemble and link everything together as an embedded program. I did it once with Microsoft's Embedded Visual C++ just to see how it measured up to other compilers; it wasn't horrible, but it certainly wasn't worth the effort of hacking to get at the output.
Every processor, from the one in your computer to the one in your cell phone or microwave, has to have some boot-up code. That code is not running on an operating system. That code uses the same or similar compilers as operating-system applications use. For some devices that code puts the processor, memory, and on- and off-chip peripherals into a state where the operating system can be started. From there the operating system takes over. On your computer this would be the BIOS followed by the bootloader, then eventually the operating system: DOS, Windows, Linux, etc.
The main problem is the file format. PE is very different from ELF (used in Unix-like systems). A valid PE program cannot be a valid ELF. So you either load the binary dynamically with different starters, or you have to give up.
Other than that, with knowledge of OS services, the values of registers at startup, etc., your code can probably detect easily and reliably which OS you are running under and act accordingly (some malware does just that). Another challenge is then reusing code instead of having two or more different programs in the same binary. Basically you would have to write an emulator, at least for the services that you need.
Also, don't forget about the Windows libraries. Look into Qt and GTK+.

Are there any downsides to using UPX to compress a Windows executable?

I've used UPX before to reduce the size of my Windows executables, but I must admit that I am naive to any negative side effects this could have. What's the downside to all of this packing/unpacking?
Are there scenarios in which anyone would recommend NOT UPX-ing an executable (e.g. when writing a DLL, Windows Service, or when targeting Vista or Win7)? I write most of my code in Delphi, but I've used UPX to compress C/C++ executables as well.
On a side note, I'm not running UPX in some attempt to protect my exe from disassemblers, only to reduce the size of the executable and prevent cursory tampering.
... there are downsides to using EXE compressors. Most notably:
Upon startup of a compressed EXE/DLL, all of the code is decompressed from the disk image into memory in one pass, which can cause disk thrashing if the system is low on memory and is forced to access the swap file. In contrast, with uncompressed EXE/DLLs, the OS allocates memory for code pages on demand (i.e. when they are executed).
Multiple instances of a compressed EXE/DLL create multiple instances of the code in memory. If you have a compressed EXE that contains 1 MB of code (before compression) and the user starts 5 instances of it, approximately 4 MB of memory is wasted. Likewise, if you have a DLL that is 1 MB and it is used by 5 running applications, approximately 4 MB of memory is wasted. With uncompressed EXE/DLLs, code is only stored in memory once and is shared between instances.
http://www.jrsoftware.org/striprlc.php#execomp
I'm surprised this hasn't been mentioned yet but using UPX-packed executables also increases the risk of producing false-positives from heuristic anti-virus software because statistically a lot of malware also uses UPX.
There are three drawbacks:
The whole code will be fully uncompressed in virtual memory, while in a regular EXE or DLL, only the code actually used is loaded in memory. This is especially relevant if only a small portion of the code in your EXE/DLL is used at each run.
If there are multiple instances of your DLL and EXE running, their code can't be shared across the instances, so you'll be using more memory.
If your EXE/DLL is already in cache, or on a very fast storage medium, or if the CPU you're running on is slow, you will experience reduced startup speed as decompression will still have to take place, and you won't benefit from the reduced size. This is especially true for an EXE that will be invoked multiple times repeatedly.
Thus the above drawbacks are more of an issue if your EXE or DLLs contain lots of resources, but otherwise they may not be much of a factor in practice, given the relative size of executables and available memory, unless you're talking about DLLs used by lots of executables (like system DLLs).
To dispel some incorrect information in other answers:
UPX will not affect your ability to run on DEP-protected machines.
UPX will not affect the ability of major anti-virus software, as they support UPX-compressed executables (as well as other executable compression formats).
UPX has been able to use LZMA compression for some time now (7zip's compression algorithm), use the --lzma switch.
The only time size matters is during download off the Internet. If you are using UPX you actually get a worse result than if you use 7-Zip (based on my testing, 7-Zip compresses about twice as well as UPX). And when the file is actually left compressed on the target computer, your performance is decreased (see Lars' answer). So UPX is not a good solution for file size. Just 7-Zip the whole thing.
As far as preventing tampering goes, it is a FAIL as well. UPX supports decompression too. If someone wants to modify the EXE, they will see it is compressed with UPX and simply uncompress it. The percentage of possible crackers you might slow down does not justify the effort and performance loss.
A better solution would be to use binary signing or at least just a hash. A simple hash verification system is to take a hash of your binary and a secret value (usually a guid). Only your EXE knows the secret value, so when it recalculates the hash for verification it can use it again. This isn't perfect (the secret value can be retrieved). The ideal situation would be to use a certificate and a signature.
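Purely as an illustration of that idea (not a recommendation of this particular hash or layout), a toy self-check could look like the sketch below; SECRET and EXPECTED are hypothetical, and in practice the expected value has to live outside the region being hashed (e.g. appended after the image):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy integrity check: hash the executable together with a secret value
       and compare against a stored constant. FNV-1a is used only because it
       is short; it is not tamper-proof. */
    static const char     SECRET[]  = "hypothetical-guid-value";
    static const uint64_t EXPECTED  = 0x0123456789abcdefULL;  /* filled in at build time */

    static uint64_t fnv1a_file(FILE *f, const char *secret)
    {
        uint64_t h = 1469598103934665603ULL;
        int c;
        while ((c = fgetc(f)) != EOF)
            h = (h ^ (uint8_t)c) * 1099511628211ULL;
        for (const char *p = secret; *p; p++)
            h = (h ^ (uint8_t)*p) * 1099511628211ULL;
        return h;
    }

    int self_check(const char *own_path)
    {
        FILE *f = fopen(own_path, "rb");
        if (!f)
            return 0;
        uint64_t h = fnv1a_file(f, SECRET);
        fclose(f);
        return h == EXPECTED;
    }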
The final size of the executable on disk is largely irrelevant these days. Your program may load a few milliseconds faster, but once it starts running the difference is indistinguishable.
Some people may be more suspicious of your executable just because it is compressed with UPX. Depending on your end users, this may or may not be an important consideration.
The last time I tried to use it on a managed assembly, it munged it so badly that the runtime refused to load it. That's the only time I can think of that you wouldn't want to use it (and, really, it's been so long since I tried that the situation may even be better now). I've used it extensively in the past on all types of unmanaged binaries, and never had an issue.
If your only interest is in decreasing the size of the executables, then have you tried comparing the size of the executable with and without runtime packages? Granted you will have to also include the sizes of the packages overall along with your executable, but if you have multiple executables which use the same base packages, then your savings would be rather high.
Another thing to look at would be the graphics/glyphs you use in your program. You can save quite a bit of space by consolidating them into a single TImageList in a global data module rather than having them repeated on each form. I believe each image is stored in the form resource as hex, so each byte takes up two bytes... you can shrink this a bit by loading the image from an RCData resource using a TResourceStream.
There are no drawbacks.
But just FYI, there is a very common misconception regarding UPX:
resources are NOT simply being compressed in place
Essentially you are building a new executable that acts as a "loader", while the "real" executable is section-stripped, compressed, and placed as binary data inside the loader executable (regardless of what types of resources were in the original executable).
Using reverse-engineering methods and tools, whether for educational purposes or otherwise, will show you information about the "loader executable", not meaningful information about the original executable.
IMHO, routinely UPXing is pointless; the reasons are spelled out above, mostly that memory is more expensive than disk.
Erik: the LZMA stub might be bigger. Even if the algorithm is better, it is not always a net win.
Virus scanners that look for 'unknown' viruses can flag UPX compressed executables as having a virus. I have been told this is because several viruses use UPX to hide themselves. I have used UPX on software and McAfee will flag the file as having a virus.
The reason UPX has so many false alarms is because its open licensing allows malware authors to use and modify it with impunity. Of course, this issue is inherent to the industry, but sadly the great UPX project is plagued by this problem.
UPDATE: Note that as the Taggant project is completed, the ability to use UPX (or anything else) without causing false positives will be enhanced, assuming UPX supports it.
I believe there is a possibility that it might not work on computers that have DEP (Data Execution Prevention) turned on.
When Windows loads a binary, the first thing it does is Import/Export Table resolution. That is, for whatever APIs and DLLs are indicated in the Import Table, it will first load each DLL at a randomly generated base address, and then, using the base address plus the offset of each function within the DLL, update the Import Table.
An EXE typically does not have an Export Table.
All of this happens even before execution jumps to the original entry point.
Then, after execution starts at the entry point, the EXE runs a small piece of code before starting the decompression algorithm. This small piece of code needs only a handful of Windows APIs of its own, resulting in a small Import Table.
But after the binary is decompressed, if it starts using any Windows API that was not resolved earlier, it will likely crash. So it is essential that the decompression routine resolves and updates the Import Table for every Windows API referenced by the decompressed code before executing that code.
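That resolution step is ordinary Win32 API usage; conceptually the stub does something like the following for every import the original code needs (the MessageBoxA example is only illustrative):

    #include <windows.h>

    /* The unpacker stub's own Import Table only needs LoadLibraryA and
       GetProcAddress; every API the original code used is then resolved
       by hand at run time, roughly like this. */
    typedef int (WINAPI *MessageBoxA_t)(HWND, LPCSTR, LPCSTR, UINT);

    void resolve_one_import(void)
    {
        HMODULE user32 = LoadLibraryA("user32.dll");
        MessageBoxA_t pMessageBoxA =
            (MessageBoxA_t)GetProcAddress(user32, "MessageBoxA");

        pMessageBoxA(NULL, "resolved at run time", "unpacker stub", MB_OK);
    }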
References:
https://malwaretips.com/threads/malware-analysis-2-pe-imports-static-analysis.62135/