It is straightforward for a reverse engineer to attach a graphics debugger to an OpenGL application to extract the shader source code. It is my understanding that Vulkan, on the other hand, uses SPIR-V bytecode, rather than passing plaintext shaders to the graphics API.
Does SPIR-V bytecode obfuscate the shader source, or is it fairly easy to decompile?
There is an entire specification explaining, in explicit detail, the behavior of each and every SPIR-V opcode. That's kinda the opposite of obfuscation. But there's more to it than that.
SPIR-V, despite being "assembly", retains a rich amount of information about the source program. It contains structure definitions, function definitions with parameter and return types, looping and conditional constructs, etc. Writing a decompiler for SPIR-V is not at all difficult.
SPIR-V also can optionally contain fragments of text that annotate various SPIR-V definitions. This is more a function of the environment that compiled to SPIR-V, but the output SPIR-V can contain variable names, structure names, and so on. These OpName/OpMemberName annotations can all be easily stripped if you wish.
But even without names, all of the important structural information is there. So the security gain from SPIR-V compared to raw GLSL is rather minimal.
It doesn't do any real obfuscation. The only thing it could really do is strip the variable names.
If the application is not willing to complicate the actual calculations, then that's about it.
It cannot do much with control flow either, because Vulkan requires structured control flow, where each conditional branch must have a merge block and every loop follows a strict structure.
SPIR-V opcodes behave much like Java bytecode:
they form a neutral intermediate representation, closer to raw machine code, which makes it easy for the driver to translate SPIR-V into native GPU code;
as an advantage, the opcodes avoid distributing plain source code, and the compiled SPIR-V should not run into compile-time issues such as typos or syntax errors - it is already compiled;
a disadvantage is that the binary representation can be reversed back into readable source code.
There is no easy workaround for this reversibility. Some solutions used in the Java world are:
obfuscation - like ProGuard does for Java bytecode - I am not sure whether equivalent tooling exists for SPIR-V;
code encryption with a symmetric key - the key is hardcoded in your C/C++ code (a sketch of this idea follows below);
code encryption with asymmetric keys - the private key comes from the web, after logging in to a server.
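To make the symmetric-key option concrete, here is a minimal sketch of decrypting a shader blob in memory just before shader-module creation. The XOR "cipher" and the function name are illustrative assumptions only; a real implementation would use a proper cipher such as AES. The decrypted words are what you would hand to vkCreateShaderModule via VkShaderModuleCreateInfo::pCode.

// Toy illustration of the symmetric-key idea: decrypt the SPIR-V words
// in memory right before creating the shader module. XOR is NOT real
// encryption; substitute a proper cipher in practice.
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint32_t> decryptSpirv(const std::vector<uint32_t>& encrypted,
                                   uint32_t key /* hardcoded or fetched from a server */) {
    std::vector<uint32_t> words(encrypted.size());
    for (std::size_t i = 0; i < encrypted.size(); ++i)
        words[i] = encrypted[i] ^ key;
    // 'words' now holds plain SPIR-V; point VkShaderModuleCreateInfo::pCode
    // at words.data() and set codeSize to words.size() * sizeof(uint32_t).
    return words;
}

Note that the plaintext SPIR-V still ends up in process memory (and can be captured there), so this only raises the bar slightly.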
What I have is:
a hex file with the bytes of a C struct in it, ordered in big-endian
the struct definition as a *.h file
the struct information as DWARF2 debug info
My application has to be written in C/C++. Intermediate scripts using, for example, Python would be OK.
What I have to do is read the bytes of the hex file and cast them into the struct type on a system that is little-endian.
During this process, I will have to reverse the bytes of each struct member.
The obvious solution would be to write a conversion function that does byte swapping for each struct member, but since the struct has multiple layers and ~1200 members that change faster than I can update my conversion function, writing it by hand is not an option.
So I could generate the conversion function automatically by:
finding and parsing the types inside multiple *.h files
iterating over the members of all struct types and generating swaps for them (not that easy without some sort of reflection API)
loading the struct via the conversion function.
Since this solution seems like quite a bit of work, I was wondering whether there is an easier way, like telling the compiler to swap the bytes or using the debug info somehow.
Does anybody know a trick that might help in this case?
Thanks and greetings!
Remark:
Changing any of the processes leading to this, changing the input conditions, or delegating responsibilities to other developers involved is not possible.
Changing something about the hex-file as an input is not possible. This file comes out of some other system that will not change to fix this problem here.
Padding, data type sizes etc. are identical. This is ensured by other measures, too. So endianness is definitely the only problem. This is also why I see no reason against using DWARF2 info to identify the bytes of every struct member.
I agree that the layout of the struct is very bad. But there are reasons why it is that way and, to keep it short, I cannot / am not allowed to change it anyway because of process reasons and backwards compatibility.
To give some more scope:
The software that all of this is used in is deployed to multiple different embedded devices (multiple types). The hex file contains the calibration information of the software and is thus stored in a specific system that can only output this hex file.
I am now porting the software to a little-endian device and I have to use the hex file given from the "main" branch of the software, which is big-endian, as an input.
There is no way to tell a C or C++ compiler to swap bytes from LE to BE or vice versa automatically. You really have to do it yourself. If your data structs are really huge, probably the best way is to implement automatic conversion code generation.
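For illustration, here is roughly the shape of code such a generator could emit for a hypothetical two-level struct. The member names and layout are placeholders, and the code assumes, as stated in the question, that padding and type sizes already match between the two systems.

// Sketch of generated conversion code: one swap call per leaf member,
// derived from the headers or the DWARF info.
#include <cstdint>
#include <cstring>

static uint32_t bswap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}
static uint16_t bswap16(uint16_t v) { return (uint16_t)((v >> 8) | (v << 8)); }

struct Inner { uint16_t gain; uint32_t offset; };      // placeholder members
struct Calib { uint32_t version; Inner channel[2]; };  // placeholder layout

// Interpret the big-endian byte image and produce a native little-endian struct.
Calib fromBigEndian(const unsigned char* bytes) {
    Calib c;
    std::memcpy(&c, bytes, sizeof c);  // assumes identical padding and sizes
    c.version = bswap32(c.version);
    for (Inner& ch : c.channel) {
        ch.gain = bswap16(ch.gain);
        ch.offset = bswap32(ch.offset);
    }
    return c;
}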
The problem, as far as I understand it, is tricky but tractable. The data extraction won't be running on an embedded device, so it won't be resource constrained. I say: embrace the runtime inefficiency that desktop hardware allows, and go for easy to debug instead.
Instead of thinking of the source file as "almost what I need modulo a couple of minor adjustments", think of it as "generic binary file with an open ended, evolving schema". The schema description is the DWARF data.
What I would do: start a Python project. Use the pyelftools PyPI module to parse the DWARF. Iterate over the compile units (CUs). In each CU, go through the top-level entries (DIEs). Look for a DW_TAG_structure_type DIE with a specific value of DW_AT_name (I hope the struct name is known in advance). Then go through the DW_TAG_member sub-DIEs. DW_AT_data_member_location will give you the offset, letting you work around the padding. Look at DW_AT_type to detect the member type (you'd have to resolve the DIE reference for that). Recurse into struct- and array-typed members as necessary.
From that, generate a format string for the struct.unpack method - it can read big-endian ints seamlessly. Then use struct.pack to format it into whatever format the C++ consumer expects.
This depends on you being able to track the data file to the DWARF info of the generating executable, exactly the same build. I hope the processes of the organization allow for that.
Recent versions of GCC allow the declaration of the desired endianness irrespective of the target platform for a source code section using the pragma scalar_storage_order or a specific type using an attribute with the same identifier. The main catch: g++ does not support this. Also, this won't work in all cases. For example, taking a pointer to a member with transparent endianness conversion leads to an error. Unless you're okay with sticking to C for struct access (it all depends on your current codebase), this is not an option.
The persistence layout is based on the original struct layout - so be it. However, a more explicit approach to serializing the structs should be preferred, for exactly the reasons you bring up. Besides the endianness issue, struct packing also affects compatibility and should be specified explicitly. For persistence, a packing of 1 would be optimal. For in-memory data structures, that alignment is far from optimal in terms of performance and concurrency characteristics. Also, different platforms might have incompatible data types (e.g. sizeof(long) on 64-bit Linux/Windows - LP64 vs. LLP64). So, keeping the persistence layout separate from the in-memory data structures tends to have a long list of advantages and therefore usually outweighs the disadvantage of having to maintain the serialization code separately - particularly if portability is a major concern.
You could take advantage of C/C++-based reflection libraries or implement one yourself. In the case of C, this will definitely require macros (e.g. Metaresc). In the case of C++, you might actually get away with your original struct definitions (e.g. Boost Precise and Flat Reflection).
If reflection is not an option, you could generate the serialization code either by parsing the headers or debug symbols. Generally, parsing C/C++ is more complex. By moving the structs involved into dedicated headers, you might get away with a simple C/C++ parser. To make things easier, you could simplify parsing by processing the gdb output of ptype based on debug symbols. Or, you could parse debug symbols directly. With a scripting language like Python, both approaches should be feasible (pygccxml and pyelftools come to mind).
Rather than sticking to generating the serialization code as part of the build process, you could generate that code once and require updates whenever the structs change in the future. That's what I would do in a multi-platform scenario. Doing that would also spare you the pain of implementing a perfect parser that can deal with all kinds of C/C++ input, it would only have to be good enough for one-time generation.
I'm developing a new language in LLVM using the C++ API which compiles down to target the C ABI.
I would like to support modular compilation by allowing end users to build what are effectively static libraries. I noticed the LLVM C++ API has an llvm::Linker class that I can use during compilation to combine source files (llvm::Module); however, I want to guarantee library compatibility between separate compilation runs, via metadata version numbers or at least via the publicly exposed interface.
Much of the information available on metadata in LLVM suggests that it should only be used for extended information that would not break correctness when silently removed.
I wouldn't think this would be a deal breaker as it could be global metadata, but it would be good to get a second opinion on that point.
I also know there is a parseIRFile function in IRReader, so I can load previously built .bc files. I would be curious whether it would be reasonable practice to include size and CRC information for comparison when loading these files.
My language has concepts similar to C# including interfaces. I figure I could allow modular compilation by importing/exporting an interface type along with external functions (Much like C++, I don't restrict the language to only methods of classes).
This approach allows me to include language specific information in the interface without needing to encode it in the IR as both the library and the calling code would be required to build with the interface. This again requires the interfaces to be compatible.
One language feature that would require extended information would be named parameters in functions.
My language is very type-safe and also mandates named parameters so there is no predetermined function parameter order. This allows call sites to be more explicit, the compiler to catch erroneous parameter usage, and authors have more liberty in determining default parameters as they are not restricted to the last parameters to the function.
The compiler will need to know names, modifiers, defaults, etc. of these parameters to correctly map calls at compile time, so I figure the interface approach would work well here.
TL;DR
Does LLVM have any predefined facilities for building static libraries?
Is version number, size, and CRC information reasonable use cases for LLVM's metadata?
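For concreteness, the kind of thing I have in mind for the version number is a module flag along these lines (a sketch; the flag key is a placeholder name):

#include "llvm/IR/Module.h"
#include <cstdint>

// Sketch: tag a module with an ABI/interface version so mismatches can be
// detected when modules are combined.
void tagAbiVersion(llvm::Module &M, std::uint32_t version) {
    // With Module::Error behavior, llvm::Linker reports an error if two
    // modules carry different values for the same flag key.
    M.addModuleFlag(llvm::Module::Error, "mylang.abi.version", version);
}

// Reading it back after parseIRFile:
//   llvm::Metadata *MD = M.getModuleFlag("mylang.abi.version");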
This is probably not QUITE an answer... Or at least not a complete answer.
I like this question, as I'm going to need a solution in the future too (some time in the next few months or years) for my Pascal compiler. It supports "units", which are meant to be separately compiled objects, but currently what I do is simply drag in the source file and compile it into the main llvm::Module - that's neither efficient nor flexible (I can't use the linker to choose between the "Linux" and "Windows" version of some code, for example - not that I think there is a 5% chance that my compiler will work on Windows without modification anyway...).
However, I'm not sure storing the "object" file as LLVM IR would be the right thing to do. I was thinking that a better way would be to store your AST in some serialized form - then:
you don't depend on LLVM versions changing the IR format;
you can add whatever metadata you like. There won't be much difference between generating LLVM IR from this during your link phase and building the IR at compile time and then reading it back to check whether the metadata is correct. [The slow part, as you may have already found out, is the optimisation and MC generation, and you'd still have to do that either way.]
Like I started out, I'm not sure this is an answer, but it's my thoughts so far on the subject. Now I'll go back to adding debug symbol stuff to my Pascal compiler... Before Christmas, I couldn't see the source in GDB. Now I can step, but no viewing of variables yet...
When I watch demoscene videos on YouTube, the authors often boast of how their file sizes are 64 KB or less, some as small as just 4 KB. When I compile even a very basic program in C++, the executable is always at least 90 KB or so. Are these demos written entirely in assembly? It was my understanding that demo makers used C/C++ as well.
I'm one of the coders of Felix's Workshop and Immersion (64k intros by Ctrl-Alt-Test). Most 64k intros nowadays use C++ (exception: Logicoma uses Rust). Assembly may make sense for 4k intros (although most of them actually use C++), but not for 64k intros.
Here are the two most important things:
Compile without the standard library (in particular, the STL could make the binary quite large).
Compress your binary (kkrunchy for 64k intros on Windows, Crinkler for 4k intros on Windows).
Now you can write a ton of code before filling the 64 kB. How do you use it? Procedural generation.
For the music, the sheet music is compressed. Instruments are generated with a soft synth. A popular option, although a bit outdated, is to use v2 by Farbrausch.
If you need textures, generate them (a sketch follows after this list).
If you need 3d models, generate them.
Animations and effects are procedural.
For the camera, save some key positions and interpolate.
Shaders are heavily used in modern graphics. Minifying the shaders can save quite a lot of space.
Want to hear more about procedural generation and other techniques? Check IQ's articles.
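As a toy example of the "generate your textures" point above: the classic XOR pattern takes a few bytes of code instead of kilobytes of image data. The function name and RGBA8 layout are just assumptions for the sketch; real intros layer noise, filters and blends in the same spirit.

// Tiny procedural texture: fills a size x size RGBA8 buffer with the XOR pattern.
#include <cstdint>

void makeXorTexture(uint8_t* rgba, int size) {
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x) {
            uint8_t v = (uint8_t)(x ^ y);            // grayscale XOR value
            uint8_t* p = rgba + 4 * (y * size + x);  // RGBA8 pixel
            p[0] = v; p[1] = v; p[2] = v; p[3] = 255;
        }
}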
If you want to further optimise your code, here are some additional tricks:
You probably use lots of floats. Try to truncate the mantissa of your floats (it can save many kB; a sketch follows after this list).
Disable function inlining (it saved me 2kB).
Try the fastcall calling convention (it saved me 0.7kB).
Disable support for exceptions. You don't need them.
If you use classes, avoid inheritance.
Be careful if you use templates.
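Here is a sketch of the float-truncation trick from the list above; the function name is made up for illustration, and the number of bits to clear is something you tune per value.

// Zero the low mantissa bits of a float so the binary contains more repeated
// byte patterns, which compresses better under kkrunchy/Crinkler.
#include <cstdint>
#include <cstring>

float truncateMantissa(float f, int bitsToClear /* e.g. 12 */) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);      // grab the IEEE-754 bit pattern
    u &= ~((1u << bitsToClear) - 1u);   // clear the low mantissa bits
    std::memcpy(&f, &u, sizeof f);
    return f;
}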
In a typical 4k intro, the C++ code is used for the music and the initialisation. Graphics are done in a shader.
Those demos do not use the standard library (not C++ and not even the C standard lib), nor do they link with standard libraries (to avoid import table sizes). They dynamically link only the absolute minimum necessary.
The demo's "main function" is usually identical with the entry point (unlike in a normal program where the entry point is a CRT init function which does some OS-specific setup, initializes globals, runs constructors, and eventually calls main).
Usually the demo executables are not compliant with the specifications of the executable format (omitting minimum section sizes and alignments) and are compressed with an exe-packer. Technically these are "broken" programs, but they are only "broken" to a degree that still lets them run successfully.
Also, such demos rely heavily on procedurally generated content.
These ultra-small programs typically don't depend on any libraries or frameworks, as is usual in traditional application development. They typically access graphics/IO etc. directly.
I can't comment yet because I don't have 50 rep points, so I'm answering.
One way to create a smaller program is to use an older compiler, such as Microsoft Visual C/C++ 4.0, which produces a smaller .exe file than say Microsoft Visual Studio 2005.
It really depends on your environment, but if you don't instantiate any templates, and you link everything dynamically, it's fairly easy to achieve a very small size for your executable, since none of the code you actually execute will be in the executable.
I'm looking to get an AST for C++ that I can then parse with an external program. What programs are out there that are good for generating an AST for C++? I don't care what language it is implemented in or the output format (so long as it is readily parseable).
My overall goal is to transform a C++ unit test bed to its corresponding C# wrapper test bed.
You can use Clang, and especially libclang, to parse C++ code. It's a very high quality, hand-written library for lexing, parsing and compiling C++ code, and it can also generate an AST.
Clang also supports C, Objective-C and Objective-C++. Clang itself is written in C++.
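As a rough sketch of what a minimal libclang-based dumper looks like (compile and link against libclang, e.g. -lclang; the indented kind/name output format is just a choice for this example):

// Parse a source file with libclang and print an indented cursor kind/name tree.
#include <clang-c/Index.h>
#include <cstdint>
#include <cstdio>

static CXChildVisitResult visit(CXCursor cursor, CXCursor /*parent*/, CXClientData data) {
    int depth = (int)(intptr_t)data;
    CXString kind = clang_getCursorKindSpelling(clang_getCursorKind(cursor));
    CXString name = clang_getCursorSpelling(cursor);
    std::printf("%*s%s %s\n", 2 * depth, "", clang_getCString(kind), clang_getCString(name));
    clang_disposeString(kind);
    clang_disposeString(name);
    clang_visitChildren(cursor, visit, (CXClientData)(intptr_t)(depth + 1));
    return CXChildVisit_Continue;
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.cpp [clang args...]\n", argv[0]); return 1; }
    CXIndex index = clang_createIndex(0, /*displayDiagnostics=*/1);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, argv[1], argv + 2, argc - 2, nullptr, 0, CXTranslationUnit_None);
    if (tu) {
        clang_visitChildren(clang_getTranslationUnitCursor(tu), visit, (CXClientData)(intptr_t)0);
        clang_disposeTranslationUnit(tu);
    }
    clang_disposeIndex(index);
    return 0;
}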
Actually, GCC will emit the AST at any stage in the pipeline that interests you, including the GENERIC and GIMPLE forms. Check out the (plethora of) command-line switches beginning with -fdump-, e.g. -fdump-tree-original-raw
This is one of the easier (…) ways to work, as you can use it on arbitrary code; just pass the appropriate CFLAGS or CXXFLAGS into most Makefiles:
make CXXFLAGS=-fdump-tree-original-raw all
… and you get “the works.”
Update: I saw this neat little graphing system based on GCC ASTs while checking my flag name :-) Google FTW.
http://digitocero.com/en/blog/exporting-and-visualizing-gccs-abstract-syntax-tree-ast
Our C++ Front End, built on top of our DMS Software Reengineering Toolkit can parse a variety of C++ dialects (including C++11 and ObjectiveC) and export that AST as an XML document with a command line switch. See example ASTs produced by this front end.
As a practical matter, you will need more than the AST; you can't really do much with C++ (or any other modern language) without an understanding of the meaning and scope of each identifier. For C++, meaning/scope are particularly ugly. The DMS C++ front end handles all of that; it can build full symbol tables associating identifiers with explicit C++ types. That information isn't dumpable to XML with a command-line switch, but it is "technically easy" to code logic in DMS to walk the symbol table and spit out XML. (There is an option to dump this information, just not in XML format.)
I caution you against the idea of manipulating (or even just analyzing) the XML. First, XSLT isn't a particularly good way to understand the meaning of the ASTs, let alone transform them, because the ASTs represent context-sensitive language structures (that's why you want [nay, MUST HAVE] the symbol table). You can read the XML into a DOM-like tree if you like and write your own procedural code to manipulate it. But source-to-source transformations are an easier way; you can write your transformations using C++ notation rather than buckets of code goo climbing over a tree data structure.
You'll have another problem: how to generate valid C++ code from the transformed XML. If you don't mind spitting out raw text, you can solve this problem in purely ad hoc ways, at the price of having no guarantee other than sweat that the generated code is syntactically valid. If you want to generate a C++ representation of your final result as an AST, and regenerate valid text from that, you'll need a prettyprinter, which is not technically hard but still a lot of work to build, especially for a language as big as C++.
Finally, the reason that tools like DMS exist is to provide the vast amount of infrastructure it takes to process/manipulate complex structure such as C++ ASTs. (parse, analyse, transform, prettyprint). You can try to replicate all this machinery yourself, but this is usually a poor time/cost/productivity tradeoff. The claim is it is best to stay within the tool ecosystem rather than escape it and build bad versions of it yourself. If you haven't done this before, you'll find this out painfully.
FWIW, DMS has been used to carry out massive analysis and transformations on C++ source code. See Publications on DMS and check the papers by Akers on "Re-engineering C++ Component Models".
Clang is based on the same kind of philosophy; there's an ecosystem of tools.
YMMV, but I'd be surprised.
Why are the binaries that are generated when I compile my C++ programs so large (as in easily 10 times the size of the source code files)? What advantages does this offer over interpreted languages for which such compilation is not necessary (and thus the program size is only the size of the code files)?
Modern interpreted languages do typically compile the code to some manner of representation for faster execution... it might not get written out to disk, but there's certainly no guarantee that the program is represented in a more compact form. Some interpreters go the whole hog and generate machine code anyway (e.g. Java JIT). Then there's the interpreter itself sitting in memory which can be large.
A few points:
The more sophisticated the commands in the source code, the more machine code operations might be required to execute them. Thus, higher level language features tend to have a higher ratio of compiled-code to source code. That's not necessarily a bad thing: think of it as "I only have to say a little about what I want done and it infers all those necessary steps". The challenge in programming is to ensure they are necessary - that requires good library and program design.
The compiler often deliberately decides to trade some executable size for faster expected execution speed: inline vs out-of-line code is part of this compromise, though for small functions neither may be consistently more compact.
More sophisticated run-time environments (e.g. adding support for C++ exceptions) can involve a bit of extra code that runs when the program first starts to construct the necessary environment for that language feature.
Library features may not be comparable. As well as the sort of add-on libraries you're very likely to have had to track down yourself and be very aware of using (e.g. XML, PDF parsing, OpenGL), languages often quietly use supporting libraries for what seem like language features and functions. Any of these can be surprisingly large.
For example, many interpreters just expose the C library's printf() or something similar, while for output formatting C++ has ostream - a more complex, extensible and type-safe system with (for better or worse) persistent state across function calls, routines to query and set that state, an additional layer of customisable buffering, customisable character types and localisation, and generally a lot of small inline functions that can lead to smaller or larger programs depending on the exact use and compiler settings (a small illustration follows after this list). What's best depends on your application and memory vs performance goals.
Built-in language statements may be compiled differently: take a switch on an integer expression with 100 case labels spread randomly between 1 and 1000: one compiler/language might decide to "pack" the 100 cases and do a binary search for a match, another might use a sparsely populated array of 1000 elements and do direct indexing (which wastes space in the executable but typically makes for faster code). So, it's hard to draw conclusions based on executable size.
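To make the printf/ostream point above concrete, here are two versions of the same one-liner; which one produces the smaller binary depends heavily on the toolchain, standard library and settings, so treat this purely as an illustration of how much machinery a single line can pull in.

// Same output, different supporting machinery behind the scenes.
#include <cstdio>
#include <iostream>

void hello_c()   { std::printf("value: %d\n", 42); }        // thin call into the C runtime
void hello_cpp() { std::cout << "value: " << 42 << '\n'; }  // ostream: buffering, locales, formatting state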
Typically, memory usage and execution speed become increasingly important as the program gets larger and more complex. You don't see systems like Operating Systems, enterprise web servers or full-featured commercial word processors written in interpreted languages because they don't have the scalability.
Interpreted languages assume an interpreter is available while compiled programs are in most cases standalone.
Take a trivial case: Suppose you have a one line program
print("hello world")
what does that "print" do? Surely it's clear that your asking some other code to do some work? And that code isn't free, the sum total of what needs to run is much more than the lines of code you write. In more realistic programs you exploit many sophisticated libraries managing windows and other UI features, networks, databases and so on. Now whether that code is bundled into your application or loaded from DLLs or is present in the interpreter it's got to be somewhere.
There are plenty of trade-offs between compilation and interpretation, and intermediate solutions such as Java's compilation/bytecode-interpretation approach. For example, you might consider:
the run-time cost of interpreting the source every time you run versus running the compiled code
the portability advantage of interpreters - with compiled code you need to build separate versions of an app for different platforms.
Usually, programs are written in higher-level languages; for these programs to be executed by the CPU, they have to be converted to machine code. This conversion is done by a compiler or an interpreter.
A compiler makes the conversion just once, while an interpreter typically converts it every time a program is executed.
Interpreted programs run much slower than compiled programs because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined at compilation (which is also why the compiled binaries are larger).
Another disadvantage of interpreters is that they must be present in the environment as additional software in order to run the source code.