I have a Linux executable that seems to have been written in C++ using a GNU compiler, and in debug mode. I'd like to know if my conclusion is correct and what my chances are of decompiling it into something at least somewhat readable. Here are a few telltale snippets from the executable:
Snippet 1
</heap>
<malloc version="1">
nclears >= 3 Arena %d:
system bytes = %10u
in use bytes = %10u
Total (incl. mmap):
max mmap regions = %10u
Snippet 2
__gnu_cxx::__concurrence_lock_error
Snippet 3
ELF file ABI version invalid
invalid ELF header
ELF file OS ABI invalid
Snippet 4
GCC: (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5
GCC: (Ubuntu/Linaro 4.4.4-14ubuntu1) 4.4.5 20100909 (prerelease)
Snippet 5
_dl_debug_vdprintf pid >= 0 && sizeof (pid_t) <= 4
...
_dl_debug_initialize (0, args->nsid)->r_state == RT_CONSISTENT
The file is full of readable text like this (but most of it is still gibberish). I don't think you should find this kind of text in an executable compiled in release mode (then again, my knowledge on the matter is very limited). What decompiler should I try to use on this executable?
Why don't you put a breakpoint in main and run the executable? If it was built in debug mode, you can see the full source code (if the source file is present at the recorded path). You can then step through it and see each function call with its exact arguments.
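For example, a minimal session looks like this (./myprog is a placeholder for your executable):
$ gdb ./myprog
(gdb) break main
(gdb) run
(gdb) list
(gdb) info args
If the binary carries debug info and the sources are where the debug info says they are, list shows the surrounding source and info args prints the current function's arguments; on a stripped release build, break main may not even resolve.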
I am a beginner at gcc command-line compilation.
I need help with the -m64 flag.
I installed the gcc compiler using MinGW.
I checked the gcc version with the gcc -v command, which shows Target: x86_64-w64-mingw32.
So I assume the 64-bit version of gcc is installed.
Objective: I wrote a small program to check whether the generated main.exe is 32- or 64-bit.
#include<stdio.h>
int main(void)
{
printf("The Size is: %lu\n", sizeof(long));
return 0;
}
I compiled using the following command: gcc -o main main.c. When I execute main.exe, it outputs The Size is: 4.
But I expected the output to be The Size is: 8.
So I modified the command to gcc -m64 -o main main.c. When I executed main.exe again, it still outputs The Size is: 4.
How do I compile a 64-bit exe?
As others have said in the comments, the size of long can be either 8 or 4 bytes on a 64-bit system. You can try sizeof(size_t) or sizeof(void*) instead. Even this might not be reliable on every system (but it should work for Windows, Linux, macOS).
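A minimal sketch of that check (the casts to unsigned are there because older MinGW printf implementations may not understand %zu):
#include <stdio.h>

int main(void)
{
    /* long stays 4 bytes on 64-bit Windows (LLP64), so check pointer-sized types */
    printf("sizeof(long)   = %u\n", (unsigned)sizeof(long));
    printf("sizeof(size_t) = %u\n", (unsigned)sizeof(size_t));
    printf("sizeof(void*)  = %u\n", (unsigned)sizeof(void *));
    return 0;
}
A 64-bit build should print 8 for the last two; a 32-bit build prints 4.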
Here is a better way of checking.
First download Sigcheck from Microsoft Sysinternals (https://learn.microsoft.com/en-us/sysinternals/downloads/sigcheck), then run it as below:
C:\Sigcheck>sigcheck64.exe -u -e "C:\Sublime C++ Projects\runtime_measure.exe"
Sigcheck v2.82 - File version and signature viewer
Copyright (C) 2004-2021 Mark Russinovich
Sysinternals - www.sysinternals.com
c:\sublime c++ projects\runtime_measure.exe:
Verified: Unsigned
Link date: 7:43 PM 12/8/2021
Publisher: n/a
Company: n/a
Description: n/a
Product: n/a
Prod version: n/a
File version: n/a
MachineType: 64-bit
As you can see, in this case, runtime_measure.exe is a 64-bit binary.
Don't forget to give the correct path so that the terminal can find and execute sigcheck64.exe from the directory you placed it in.
Also, notice the use of the two parameters -u and -e in the command.
x86_64-w64-mingw32:
mingw32 is the compiler that generates 32-bit executables.
The 64-bit references in your package name indicate that this compiler itself runs in 64-bit mode.
If you want to generate 64-bit executables, you will need the mingw-w64 compiler:
https://www.mingw-w64.org/
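With mingw-w64 installed, the 64-bit invocation typically looks like this (the exact command name depends on how the toolchain was packaged):
x86_64-w64-mingw32-gcc -o main main.c
You can then re-run the sizeof check above, or inspect main.exe with a tool such as Sigcheck.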
I'm using an unordered set for the first time for my Data Structures class. When I try to run this code on our school's server, it tells me it's the wrong architecture. Here is my main code (RAJ.cpp):
#include<iostream>
#include<tr1/unordered_set>
#include "nflData.h"
using namespace std;
using std::tr1::unordered_set;
struct ihash: std::unary_function<NFLData, std::size_t> {
std::size_t operator()(const NFLData& x) const
{
return x.getDown(); // Currently just returning a value; this will not be the actual hash function.
}
};
int main(){
string a = "20070906_NO#IND,1,46,42,IND,NO,2,6,27,(1:42) P.Manning pass deep left to M.Harrison for 27 yards TOUCHDOWN.,0,0,2007";
string b = "20070906_NO#IND,1,46,42,IND,NO,3,6,27,(1:42) P.Manning pass deep left to [88'] for 27 yards TOUCHDOWN.,0,0,2007";
string c = "20070906_NO#IND,1,46,42,IND,NO,,,27,A.Vinatieri extra point is GOOD Center-J.Snow Holder-H.Smith.,0,0,2007";
unordered_set<NFLData, ihash> myset;
cout << "\ninsert data a";
myset.insert(NFLData(a));
cout << "\ninsert data b";
myset.insert(NFLData(b));
}
And here is the main error I receive when trying to run after successfully compiling with g++:
./test: Exec format error. Wrong Architecture.
It should be noted that this same code works fine when templated for an integer type.
You need to compile the program for the type of machine you're going to run it on. The type of machine you compiled this for does not match your school's computer.
If the school has a compiler installed on its server, use it to compile your program.
You can see what type of executable you have with the file command under UNIX, Linux and MacOS X. For example:
$ file /bin/ls # on my Linux box
/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped
$ file /bin/ls # on my MacBook Pro
/bin/ls: Mach-O universal binary with 2 architectures
/bin/ls (for architecture x86_64): Mach-O 64-bit executable x86_64
/bin/ls (for architecture i386): Mach-O executable i386
Usually, different operating systems are able to at least minimally identify executables for foreign systems, but not always. That is, it'll identify that it's foreign, but might not be able to identify which foreign system.
If you are compiling the code on your school's server, then something else strange is afoot. The file command above should help rule out certain things. BTW, you might list out what compiler flags you're using, and the output of file for the version that works and the version that does not.
One other thing to check: make sure your final compile step does not include the -c flag to g++. That flag tells g++ that you're building an intermediate object file, not the final executable.
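If you build entirely on the server, a complete build and sanity check might look like this (nflData.cpp is an assumption for whichever file defines NFLData):
g++ RAJ.cpp nflData.cpp -o test
file ./test
./test
The file output should match the server's architecture; if it doesn't, the compiler or its flags are producing a foreign binary.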
I have created a small program as a proof of concept for a system that is to be implemented on an embedded platform. The program is written in C++11 using the standard library and compiled to run on a laptop. The final program is to be implemented later on an embedded system, and we do not have access to the embedded platform's compiler.
I would like to know if there is a way to determine a program's static memory footprint (the size of the compiled binaries) in a sensible and comparable way before it is ported to the embedded platform.
The requirement is that the size of the binary be less than 10 KB.
Our binary has a size of 700 KB when compiled and stripped with the following flags:
g++ options: -Os -s -ffunction-sections -fdata-sections
linker options: -s -Wl,--gc-sections
strip libmodel.a -s -R .comment -R .gnu.version --strip-unneeded -R .note
It took up 4 MB before we used the strip and optimization options.
I am still way off, and it is not really that big a program. How can I make any meaningful comparison with an equivalent program on an embedded platform?
Note that the size of the binary can be a little deceptive: uninitialised variables (the .bss section) will not necessarily take up physical space in the binary, as these are generally just noted as present without actually being given any space. That space is normally allocated by the OS loader when it runs your program.
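You can see this with a toy program (a sketch; the buffer size is arbitrary):
/* bss_demo.cpp: a large uninitialised array lands in .bss */
static volatile char big_buffer[1 << 20]; /* 1 MB, zeroed by the loader, not stored in the file */

int main()
{
    return big_buffer[0]; /* volatile read keeps the array from being optimized away */
}
Comparing ls -l (file size) against size a.out (per-section sizes) shows the file staying small while the .bss column reports the full megabyte.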
objdump (http://www.gnu.org/software/binutils/) or perhaps elfdump or the ELF tool chain (http://sourceforge.net/apps/trac/elftoolchain/) will help you determine the size of your various segments, data and text, as well as the sizes of individual functions and globals, etc. All these programs "look" into your compiled binary and extract a lot of information, such as the sizes of the .text and .data sections; they can list the various symbols, their locations and sizes, and can even disassemble the .text section...
An example of using elfdump on an ELF image test.elf might be elfdump -z test.elf > output.txt. This will dump everything, including a disassembly of the text section. For example, from an elfdump on my system I saw
Section #6: .text, type=NOBITS, addr=0x500, off=0x5f168
size=149404(0x2479c), link=0, info=0, align=16, entsize=1
flags=<WRITE,ALLOC,EXECINSTR>
Section #7: .text, type=NOBITS, addr=0x24c9c, off=0x5f168
size=362822(0x58946), link=0, info=0, align=4, entsize=1
flags=<WRITE,ALLOC,EXECINSTR,INCLUDE>
....
Section #9: .rodata, type=NOBITS, addr=0x7d5e4, off=0x5f168
size=7670(0x1df6), link=0, info=0, align=4, entsize=1
flags=<WRITE,ALLOC>
So I can see how much space my code (the .text sections) and my read-only data are taking up. Later in the file I then see...
Symbol table ".symtab"
Value Size Bind Type Section Name
----- ---- ---- ---- ------- ----
218 0x7c090 130 LOC FUNC .text IRemovedThisName
So I can see that my function IRemovedThisName takes 130 bytes. A quick script would allow you to list functions and variables sorted by size, which could point you at places to optimize...
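With GNU binutils you don't even need a script; nm can produce such a listing directly (assuming an ELF object or executable, here called model.o):
nm --print-size --size-sort --radix=d model.o
This prints each symbol with its size in decimal, smallest first, so the biggest offenders end up at the bottom of the output.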
For a good example of objdump usage, try http://www.thegeekstuff.com/2012/09/objdump-examples/, specifically section 3, which shows how to get the contents of the section headers using the -h option.
As to how the program will compare on two different platforms I think you will just have to compile on both platforms and compare the results you get from your obj/elfdump on each system - the results will depend on the system instruction set, how well each compiler can optimize, general hardware architecture differences etc.
If you don't have access to the embedded system, you might try using a cross-compiler, configured for your eventual target, on your laptop. That would give you a binary suited to the embedded platform, along with the tools to analyze it (i.e. the cross versions of objdump etc.), and hence some ball-park figures for how the program would look on the eventual embedded system.
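For instance, if the eventual target were ARM, the steps might look like this (the arm-none-eabi prefix is only an example; substitute your real target's toolchain):
arm-none-eabi-g++ -Os -ffunction-sections -fdata-sections -c model.cpp
arm-none-eabi-size model.o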
Hope this helps.
EDIT: This will also help How to get the size of a C function from inside a C program or with inline assembly?
It appeared that the included libraries took up an enormous amount of space (as was pointed out in the comments), and by removing them it was possible to reduce the size to nearly nothing, in combination with the following flags:
set(CMAKE_CXX_FLAGS "-Os -s -ffunction-sections -fdata-sections -DNO_STD -fno-rtti -fno-exceptions")
set(CMAKE_EXE_LINKER_FLAGS "-s -Wl,--gc-sections")
And stripping away any unnecessary code using:
strip libmodel.a -s -R .comment -R .gnu.version --strip-unneeded -R .note
The 4 MB could be reduced to 9.4 KB, which is below our limit.
In summary, the standard library takes up a tremendous amount of space.
I'm trying to port the FLAC encoder using Adobe Alchemy for use in Flash, but I can't figure out where the problem is.
I'm using Alchemy for Cygwin on Windows. It is properly installed and configured.
The following are the steps that I have followed in order to port FLAC encoder:
Download the latest version of the FLAC sources (1.2.1)
Configure the FLAC sources (./configure --enable-static=true --enable-shared=false) with Alchemy enabled (alc-on before configure)
Compile libFLAC with Alchemy enabled (make in the src/libFLAC folder)
Copy the header files and the compiled static library (libFLAC.a) to the Alchemy folders (${ALCHEMY_HOME}/usr/local/include and ${ALCHEMY_HOME}/usr/local/lib respectively)
Finally, compile the SWC this way:
gcc encodeflac.c -O3 -Wall -swc -lFLAC -o encodeflac.swc
or alternatively (I assumed the case of the library name didn't matter):
gcc encodeflac.c -O3 -Wall -swc -lflac -o encodeflac.swc
encodeflac.c is a modified version of the example included in the FLAC sources (examples/c/encode/file/main.c), adapted to work with ActionScript ByteArrays.
The SWC compiles without warnings or errors. But the final SWC size is only 85 KB, while the static library (libFLAC.a) is about 1 MB! Also, the encoding is not working.
I get the following error when trying to use it in AS:
[Fault] exception, information=Undefined sym: FLAC_stream_encoder_new
Does it mean that the static library is not included in the SWC? Why?
Thanks in advance.
Alchemy's SWC linker doesn't have very good error reporting, which makes debugging hard. What's happening is that the linker isn't finding the lib. How to fix it:
gcc is case-sensitive. You must use -lFLAC (not -lflac)
Alchemy needs the FLAC.l.bc file that was generated when you built libFLAC.a
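In practice that means copying it alongside the static library (a guess based on the folder layout in the question):
cp src/libFLAC/FLAC.l.bc ${ALCHEMY_HOME}/usr/local/lib/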
Unfortunately, getting it to actually link ends up producing a link-time error:
Cannot yet select: 0x198b960: i32 = ConstantPool < i64 6881500230622117888> 0
0 llc 0x00636dfe _ZNSt8_Rb_treeIN4llvm3sys4PathES2_St9_IdentityIS2_ESt4lessIS2_ESaIS2_EE13insert_uniqueERKS2_ + 6078
1 llc 0x006373a2 _ZNSt8_Rb_treeIN4llvm3sys4PathES2_St9_IdentityIS2_ESt4lessIS2_ESaIS2_EE13insert_uniqueERKS2_ + 7522
2 libSystem.B.dylib 0x9402f2bb _sigtramp + 43
3 ??? 0xffffffff 0x0 + 4294967295
4 libSystem.B.dylib 0x940a323a raise + 26
5 libSystem.B.dylib 0x940af679 abort + 73
6 llc 0x002f862b _ZN98_GLOBAL__N__Volumes_data_dev_FlaCC_llvm_2.1_lib_Target_AVM2_AVM2ISelDAGToDAG.cpp_00000000_F04616B616AVM2DAGToDAGISel10SelectCodeEN4llvm9SDOperandE + 187
7 llc 0x002fa193 _ZN98_GLOBAL__N__Volumes_data_dev_FlaCC_llvm_2.1_lib_Target_AVM2_AVM2ISelDAGToDAG.cpp_00000000_F04616B616AVM2DAGToDAGISel10SelectRootEN4llvm9SDOperandE + 819
8 llc 0x002e6a2c _ZN4llvm19X86_64TargetMachineD0Ev + 65116
9 llc 0x003de4ca _ZN4llvm11StoreSDNodeD1Ev + 1610
10 llc 0x0040d3fe _ZN4llvm11StoreSDNodeD1Ev + 193918
11 llc 0x0040f92e _ZN4llvm11StoreSDNodeD1Ev + 203438
12 llc 0x005d1926 _ZN4llvm12FunctionPassD1Ev + 20998
13 llc 0x005d1f3a _ZN4llvm12FunctionPassD1Ev + 22554
14 llc 0x005d20c5 _ZN4llvm12FunctionPassD1Ev + 22949
15 llc 0x00002e44 _mh_execute_header + 7748
16 llc 0x00001f36 _mh_execute_header + 3894
17 ??? 0x00000006 0x0 + 6
I saw this same error when trying to build libFLAC (v1.2.1) as a whole (not just the library). This error happens when there's some kind of C code that produces LLVM bytecode that Alchemy can't handle. (It's unclear if this is a problem with what LLVM produces or a bug with Alchemy.)
You have to figure out where the offending code is and change it into something that Alchemy likes (without actually changing the logic!). I seem to remember someone having a similar problem with ffmpeg:
http://forums.adobe.com/message/2905914#2905914
Took me a while, but I managed to track down the linking error to this assignment on line 956 of stream_encoder.c (version 1.2.1):
encoder->private_->local_fixed_compute_best_predictor = FLAC__fixed_compute_best_predictor_wide;
It actually seems to have something to do with the symbol name of the wide method. I haven't figured out a good solution yet; I'll amend my answer when I do. Do note that this is only an issue if the block size is too big (more than 4096 at 16 bits), which by default is never the case, so you can safely comment out the assignment and not deal with the real problem...
And just a heads-up: when you are actually using the FLAC library and all you're getting is zeros, check the SWAP_BE_WORD_TO_HOST macro in bitwriter.c. For some reason ntohl is only returning zeros there. Try defining your own endianness swapper like this:
#define SWAP_BE_WORD_TO_HOST(x) (((x) << 24) | (((x) & 0x0000FF00) << 8) | (((x) & 0x00FF0000) >> 8) | ((x) >> 24))
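A quick sanity check for a byte swapper like this (a standalone sketch; libFLAC's words are 32-bit unsigned):
#include <assert.h>
#include <stdint.h>

#define SWAP_BE_WORD_TO_HOST(x) (((x) << 24) | (((x) & 0x0000FF00) << 8) | (((x) & 0x00FF0000) >> 8) | ((x) >> 24))

int main(void)
{
    uint32_t x = 0x11223344u;
    /* the byte-reversed value of 0x11223344 is 0x44332211 */
    assert(SWAP_BE_WORD_TO_HOST(x) == 0x44332211u);
    return 0;
}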
Hope this helps anyone trying to get the FLAC lib to compile in Alchemy.
I have used two different versions of GDB; both give problems with the following code:
Trimmed down code in MyFile.h:
template<class T>
struct ABC: PQR<T> {
void flow(PP pp) {
const QX qx = XYZ<Z>::foo(pp); // Trying to set a breakpoint here, line no. 2533
ASSERTp(qx >= last_qx());
}
};
GDB 7.1:
Reading symbols from /path_to_exec/exec...done.
(gdb) break MyFile.h:2533
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x156.
Note: breakpoint 1 also set at pc 0x156.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x121.
Note: breakpoint 1 also set at pc 0x156.
Note: breakpoint 1 also set at pc 0x156.
Note: breakpoint 1 also set at pc 0x121.
Breakpoint 1 at 0x44e5c4: file PacketEngine.h, line 2533. (23 locations)
(gdb) run
Starting program: /path_to_exec/exec -options
Warning:
Cannot insert breakpoint 1.
Error accessing memory address 0x121: Input/output error.
Cannot insert breakpoint 1.
Error accessing memory address 0x156: Input/output error.
Why is it trying to set 23 breakpoints instead of one? And further down, why does it give an error on run?
GDB 6.3:
This GDB was configured as "x86_64-redhat-linux-gnu"...Using host libthread_db library "/lib64/tls/libthread_db.so.1".
(gdb) break MyFile.h:2533
No line 2533 in file "MyFile.h".
At the start of the program, it doesn't even accept the breakpoint.
If I break in the function ASSERTp, it breaks. Then, if I go up a frame and set the breakpoint again (break MyFile.h:2533), it is inserted successfully [thus it somehow finds the file/line after the program actually runs]. However, despite the breakpoint being set, on rerunning, the program does not stop at line 2533 but only at line 2534 (the breakpoint in function ASSERTp).
My questions:
1) Can someone please help me solve this?
2) I have often had problems with template code and GDB. Is there any good, free C++ debugger that handles templates well?
3) Not really important, but a side question if it matters: which version is preferable? 7.1 seems to be buggier, but I remember that on some runs it gives fewer problems.
System info:
uname -a
Linux ... 2.6.9-67.ELsmp #1 SMP Fri Nov 16 12:49:06 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
file /usr/bin/gdb #### GDB 6.3
/usr/bin/gdb: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), stripped
file ~/local/bin/gdb #### GDB 7.1
/home/user/local/bin/gdb: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped
file /path_to_exec/exec
/path_to_exec/exec: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.4.0, dynamically linked (uses shared libs), not stripped
I am not aware of any other debugger for Linux, but I have never experienced problems like the ones you describe.
You formulated your question really nicely (so you probably did), but did you compile your sources with debug symbols?
EDIT
By the way, I haven't tried GDB 7.1, only version 6.8. If you find 7.1 very buggy, try the latest release of the 6.x series.
I have seen something similar (using GDB 7.0), where a breakpoint set in a template function is never hit.
Our project is built using an old version of G++ (much older than the version shipped with my distro). I found that building GDB with the same compiler we develop with solved the problem.
gdb sets a different breakpoint for each instantiated template, i.e. for each different type assumed by T (and perhaps Z) in your program. However, the addresses at which it is trying to set breakpoints (0x121, 0x156) seem too low and probably correspond to some system locations; this is probably why gdb can't insert the breakpoints.
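A minimal illustration of why one source line turns into many breakpoint locations (hypothetical code):
template <class T>
T twice(T x) { return x + x; }  // a breakpoint on this line gets one location per instantiation

int main()
{
    twice(1);    // instantiates twice<int>
    twice(1.0);  // instantiates twice<double>
    return 0;
}
break on the body of twice and gdb reports two locations, one for each generated copy of the code, just as your 23 locations correspond to the instantiations of flow.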
You should try gdb 7.2, perhaps that will help.
Also, e2dbg is a different type of debugger for Linux, but it is not as mature as gdb.
http://www.eresi-project.org/wiki/TheEmbeddedELFDebugger