Learning to read GCC assembler output - c++

I'm considering picking up some very rudimentary understanding of assembly. My current goal is simple: VERY BASIC understanding of GCC assembler output when compiling C/C++ with the -S switch for x86/x86-64.
Just enough to do simple things such as looking at a single function and verifying whether GCC optimizes away things I expect to disappear.
Does anyone have/know of a truly concise introduction to assembly, relevant to GCC and specifically for the purpose of reading, and a list of the most important instructions anyone casually reading assembly should know?

You should use GCC's -fverbose-asm option. It makes the compiler output additional information (in the form of comments) that makes it easier to understand the assembly code's relationship to the original C/C++ code.
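For example (foo.cpp is a placeholder file name):

g++ -S -O2 -fverbose-asm foo.cpp -o foo.s

The resulting foo.s annotates instruction operands with the source-level names they correspond to, which makes it much easier to match assembly lines back to your variables.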

If you're using gcc or clang, the -masm=intel argument tells the compiler to generate assembly with Intel syntax rather than AT&T syntax, and the -save-temps argument tells the compiler to keep its temporary files (preprocessed source, assembly output, unlinked object file) in the directory it was called from.
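For example (again with a placeholder file name):

g++ -O2 -masm=intel -save-temps foo.cpp -o foo

This leaves foo.ii (preprocessed source), foo.s (Intel-syntax assembly), and foo.o (object file) next to the executable.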
Getting a superficial understanding of x86 assembly should be easy with all the resources out there. Here's one such resource: http://www.cs.virginia.edu/~evans/cs216/guides/x86.html .
You can also just use a disassembler (objdump -d, for example) and gdb to see what a compiled program is doing.

I usually hunt down the processor documentation when faced with a new device, and then just look up the opcodes as I encounter ones I don't know.
On Intel, thankfully the opcodes are somewhat sensible. PowerPC not so much in my opinion. MIPS was my favorite. For MIPS I borrowed my neighbor's little reference book, and for PPC I had some IBM documentation in a PDF that was handy to search through. (And for Intel, mostly I guess and then watch the registers to make sure I'm guessing right! heh)
Basically, the assembly itself is easy. It does three things: move data between memory and registers, operate on data in registers, and change the program counter. Mapping between your language of choice and the assembly will require some study (e.g. learning how to recognize a virtual function call), and for this an "integrated" source and disassembly view (like you can get in Visual Studio) is very useful.
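For example, recognizing a virtual call is mostly pattern matching. The comments below show one typical x86-64 GCC rendering at -O2 (register choices and exact shape vary):

struct Base {
    virtual void f();
};

void call_it(Base* b) {
    b->f();
    // Roughly:
    //   mov rax, QWORD PTR [rdi]   ; load the vtable pointer from the object
    //   mov rax, QWORD PTR [rax]   ; load vtable slot 0 (the address of f)
    //   jmp rax                    ; tail-call it
}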

"casually reading assembly" lol (nicely)
I would start by following along in gdb at run time; you get a better feel for what's happening. But then maybe that's just me. It will disassemble a function for you (disass func), and then you can single-step through it.
If you are doing this solely to check the optimizations - do not worry.
a) the compiler does a good job
b) you won't be able to understand what it is doing anyway (nobody can)

Unlike higher-level languages, there's really not much (if any) difference between being able to read assembly and being able to write it. Instructions have a one-to-one relationship with CPU opcodes -- there's no complexity to skip over while still retaining an understanding of what the line of code does. (It's not like a higher-level language where you can see a line that says "print $var" and not need to know or care about how it goes about outputting it to screen.)
If you still want to learn assembly, try the book Assembly Language Step-by-Step: Programming with Linux, by Jeff Duntemann.

I'm sure there are introductory books and websites out there, but a pretty efficient way of learning it is actually to get the Intel references and then try to do simple stuff (like integer math and Boolean logic) in your favorite high-level language and then look at the resulting binary code.
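For example, a minimal experiment along those lines (compile with gcc -O2 -S and read the resulting .s file):

/* Does the compiler turn the division into a shift? */
unsigned half(unsigned x) {
    return x / 2;   /* at -O2 this typically becomes a single shr instruction */
}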

Related

What is the relation between assembly-syntax and platform

In this youtube video the guy explains that there are two types of assembly syntax - Intel and AT&T. Is there a relation between assembly syntax and platform (for example, do assemblers on Intel platforms only accept Intel assembly syntax)?
No connection whatsoever. An assembly language syntax is defined by the assembler, the program that parses it. The machine code is the only real standard that has to be conformed to. Although folks like to talk about two x86 syntaxes, for example, there are many more than that, since the syntax encompasses all of the text, not just the ordering of operands. There are multiple syntaxes for ARM and other architectures for the same reason.
It is in the interest of the chip or IP vendor to define the instruction set, and at the same time in their interest to present it with an assembly language syntax, at least with respect to the instructions (there is a lot more to an assembly language than just the instructions; in the case of x86 there is variation within the instructions themselves beyond just the ordering of operands). These vendors make, or have made for them, at a minimum an assembler and a linker as needed (in their best interest). Their documentation and this specific assembler ideally match (in their best interest). But the standardization stops there: the next person or team to make an assembler only needs to conform to the machine code, and will often make at least subtle changes that render the code incompatible (intentionally or not; why bother making a perfect clone?). For whatever reason, gas implementations tend to create a language incompatible with the chip/IP vendor's, except in the cases where the gas version IS the vendor-driven initial version (say, RISC-V for example).
The syntax connection and the binary file format (ELF, COFF, etc.) are important with respect to the toolchain. For obvious reasons compilers output assembly language and then launch the assembler, so the code produced by the compiler needs to match the syntax of the assembler in that chain. Likewise, the object file format output by the assembler needs to match what the linker expects. I assume that linkers like GNU ld don't actually rely on the assembler for patching up machine code during linking; they just modify the machine code directly (or add machine code as needed, though in that case maybe they do use assembly language; I have not personally inspected/tested/confirmed this).
One can argue that, since the assembler comes before the compiler, the compiler conforms to the assembler. If the same folks build both, it is possible that items are added to the assembler to help the overall toolchain solution, but I would expect the assembler, and thus the assembly language as a whole, to come first.
And lastly, we as programmers expect the installed compiler to produce programs for the system/platform it runs on: gcc on a Linux machine makes working programs with just gcc hello.c -o hello, without us having to do the bootstrap and linking for that platform ourselves. That support usually comes from the C library, but it depends on the design of the toolchain; everyone from start to end may have to be system-aware.
I answer in case someone else makes the same mistake. As Cody Gray commented, Intel and AT&T are in fact both syntaxes for the same Intel x86 assembly language.
Resources:
https://en.wikipedia.org/wiki/X86_assembly_language#Syntax
https://stackoverflow.com/questions/tagged/x86
https://stackoverflow.com/questions/tagged/att
https://stackoverflow.com/questions/tagged/intel-syntax

Is it possible to convert C/C++ source code to assembly?

Is it possible to somehow convert simple C or C++ code (by simple I mean: taking some int as input, printing some simple shapes dependent on that int as output) to assembly language? If there isn't a way, I'll just do it manually, but since I'm gonna be doing it for processors like the Intel 8080, it just seemed a bit tedious. Can you somehow automate the process?
Also, if there is a way, how good (as in: elegant) would the output assembly file source code be when compared to just translating it manually?
Most compilers will let you produce assembly output. For a couple of obvious examples, Clang and gcc/g++ use the -S flag, and MS VC++ uses the /FA flag to do so.
A few compilers don't support this directly (e.g., if memory serves Watcom didn't). The ones I've seen like this had you produce an object file, and then included a disassembler that would produce an assembly language file from the object file. I don't remember for sure, but it wouldn't surprise me if this is what you'd need to do with the Digital Mars compiler.
To somebody who's accustomed to writing assembly language, the output from most compilers typically tends to look at least somewhat inelegant, especially on a CPU like an x86 that has quite a few registers that are now really general purpose, but have historically had more specific meanings. For example, if some piece of code needs both a pointer and a counter, a person would probably put the pointer in ESI or EDI, and the counter in ECX. The compiler might easily reverse those. That'll work fine, but an experienced assembly language programmer will undoubtedly find it more readable using ESI for the pointer and ECX for the counter.
Take a look at gcc -S:
gcc -S hello.c # outputs hello.s file
Other compilers that maintain at least partial gcc compatibility may also accept this flag. LLVM's clang, for example, does.
Well, yes, there is such a program. It's called a "compiler".
To answer your edit: The elegance of the output depends on the optimization of your compiler. Usually compilers do not generate code we humans would call "elegant".
Most folks here are right, but they seem to have missed the note about the 8080 (no wonder, it's not in the title :). However, Google comes to the rescue as always - searching for a compiler for the 8080 produces some nice results like these:
http://www.bdsoft.com/resources/bdsc.html
http://tack.sourceforge.net/
Most of these are pretty old and might be poorly maintained. You might also try the 8085, which is fairly similar.
(by simple I mean: taking some int as input, printing some simple shapes dependent on that int as output) to assembly language?
Looking at the output of an x86 compiler is not going to be very helpful, since inputting and outputting are done by a C or C++ library. With an 8080 there is no such library so you have to develop your own I/O routines for some particular hardware. That's lots and lots of additional work.

Where can I find a script that converts VC++ inline assembler to intrinsics?

I am porting inline assembler that uses SSE instructions to intrinsics. It takes a lot of work to find the appropriate intrinsic for each assembler instruction. Somewhere on the Internet I saw a Python script that simplifies the job, but I cannot find it now.
I don't think you will be happy with such a script.
First, in my opinion intrinsics are only useful for one- or two-liners; if you have more instructions, it is probably better to have a separate assembler file. Also, with a long listing of assembler instructions you will have to check the result anyway, which includes understanding each instruction and its effect, which basically means you could write it again in the same amount of time.
Second, I think you are looking for something like this because you want to port a piece of software from 32-bit to 64-bit, right? My experience tells me that you will run into some strange errors caused by unexpected type casts if you don't look at every line of code.
Third, are you talking about Visual Studio? Is there any other compiler which supports intrinsics? We had some strange errors while porting our software to intrinsics, because there are some ugly compiler bugs around intrinsics, mostly involving messing up the stack. We had a lot of trouble tracking these down and ended up writing those functions in assembler.
So my suggestion is to be careful with intrinsics!
I'm not aware of a script that will do exactly what you're asking. A lot of cases will also have non-SSE instructions interleaved in the assembly, and not every assembly instruction can be mapped to an intrinsic or a primitive C operation.
I suppose you can probably hack your way through it with find-and-replace. (This actually might not be that bad. How much code are you trying to port? Thousands of lines?)
Also, VC++ doesn't allow inline assembly at all on 64-bit. So everything needs to be done using intrinsics or a completely separate assembly file.
I won't go so far as to say that using intrinsics is completely inferior to assembly (assuming you know what you're doing), but writing good intrinsic code that compiles well and runs as fast as optimized assembly is a work of art in its own right. Still, intrinsics retain two advantages: portability, and ease of use (no need to manually allocate registers).
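For a taste of the mapping involved, here is the intrinsic form of a single SSE add (standard Intel intrinsic; the function itself is a made-up example):

#include <xmmintrin.h>

/* Rough intrinsic equivalent of the inline-asm instruction "addps xmm0, xmm1";
 * with intrinsics the compiler allocates the registers for you. */
__m128 add4(__m128 a, __m128 b) {
    return _mm_add_ps(a, b);
}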
I created my own script to convert inline assembler to intrinsics. It does a lot of the rough work.
https://github.com/KindDragon/Asm2Intrinsics

How is Assembly used in the modern day (with C/C++ for example)?

I understand how a computer works on the basic principles, such as how a program can be written in a "high-level" language like C# or C, then broken down into object code, and then into binary for the processor to understand. However, I really want to learn about assembly, and how it's used in modern day applications.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
How many assembly languages are there? How many work well with other languages?
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
How do we know the code we've written in assembly is the fastest it possibly can be?
Are there any recommended books on assembly languages/using them with modern programs?
Sorry for the quantity of questions, I do hope they're general enough to be useful for other people as well as simple enough for others to answer!
However, I really want to learn about assembly, and how it's used in modern day applications.
On "normal" PCs it's used just for time-critical processing, I'd say that realtime multimedia processing can still benefit quite a bit from hand-forged assembly. On embedded systems, where there's a lot less horsepower, it may have more areas of use.
However, keep in mind that it's not just "hey, this code is slow, I'll rewrite it in assembly and by magic it will go fast": it must be carefully written assembly, written knowing what's fast and what's slow on your specific architecture, and keeping in mind all the intricacies of modern processors (branch mispredictions, out-of-order execution, ...). Often, the assembly written by a beginner-to-intermediate assembly programmer will be slower than the final machine code generated by a good, modern optimizing compiler. Performance work on x86 is often really complicated and should be left to people who know what they're doing, and most of them are compiler writers. :) Have a look at this, for example: C++ code for testing the Collatz conjecture faster than hand-written assembly - why? gets into some of the specific x86 details for that case which you have to understand to match or beat a compiler with optimization enabled, for a single small loop.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
I think you're confusing some things here. Many (= all modern) x86 processors support additional instructions and instruction sets that were introduced after the original x86 instruction set was defined. Actually, almost all x86 software now is compiled to exploit post-Pentium features like cmovcc; you can query the processor to see if it supports certain features using the CPUID instruction. Obviously, if you want to use a mnemonic for some newer instruction-set instruction, your assembler (i.e. the software which translates mnemonics into actual machine code) must be aware of it.
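As an aside, you usually don't have to issue CPUID by hand; GCC, for instance, wraps it in a builtin. A minimal sketch (the feature string is one of the names GCC recognizes):

#include <stdio.h>

int main(void) {
    /* __builtin_cpu_supports consults CPUID-derived data on x86 */
    if (__builtin_cpu_supports("avx2"))
        puts("AVX2 supported");
    else
        puts("AVX2 not supported");
    return 0;
}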
Most C compilers have intrinsics like _mm_popcnt_u32 and/or command line options like -mpopcnt to enable them that let you take advantage of new instructions without hand-written asm. x86 -mbmi / -mbmi2 extensions have several instructions that compilers know how to use when optimizing ordinary C like x << y (shlx instead of the more clunky shl) or x &= x-1; (blsr / _blsr_u32()). GCC has a -march=native option to enable all the instruction sets your CPU supports, and to set the -mtune= option to optimize for your CPU in terms of how much loop unrolling is a good idea, or which instructions or sequences are faster on one CPU, slower on another.
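As a small illustration of that last point, the following plain C needs no intrinsic at all; compiled with gcc -O2 -mbmi it typically becomes a single blsr instruction (a sketch, not a guarantee for every GCC version):

/* Clears the lowest set bit of x. */
unsigned clear_lowest_set(unsigned x) {
    return x & (x - 1);   /* with -mbmi, GCC usually emits: blsr eax, edi */
}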
If, instead, you're talking about other (non-x86) instruction sets for other families of processors, well, each assembler should support the instructions that the target processor can run. Not all the instructions of one assembly language have a direct replacement in others, and in general porting assembly code from one architecture to another is usually hard, difficult work.
How many assembly languages are there?
Theoretically, at least one dialect for each processor family. Keep in mind that there are also different notations for the same assembly language; for example, the following two instructions are the same x86 stuff written in AT&T and Intel notation:
mov $4, %eax    # AT&T notation
mov eax, 4      ; Intel notation
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
If you want to embed a routine in an application written in another language, you should use the tools that the language provides; in C/C++ you'd use asm blocks.
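A minimal sketch of such a block, using GCC's extended-asm dialect (the function name is made up; the template is x86 AT&T syntax):

int add_two(int a, int b) {
    int result;
    asm("addl %2, %0"         /* add b into result */
        : "=r"(result)        /* output: any general-purpose register */
        : "0"(a), "r"(b));    /* inputs: a starts in result's register, b in any register */
    return result;
}

Note that MSVC's __asm blocks use a different, Intel-syntax form, so inline assembly is inherently compiler-specific.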
You can instead make stand-alone .s or .asm files using the same syntax a C compiler would output, for example gcc -O3 -S will compile to a .s file that you can assemble with gcc -c. Separate files are a good idea if you want to write whole functions in asm instead of wrapping one or a couple instructions. A few open source projects like x264 and x265 (video encoders) have extensive amounts of NASM source code for different versions of functions for different versions of SSE or AVX available.
If you, instead, wanted to write a whole application in assembly, you'd have to write it entirely in assembly, following the syntactic rules of the assembler you'd like to use.
How do we know the code we've written in assembly is the fastest it possibly can be?
In theory, because it is the nearest to the bare metal, you can make the machine do exactly what you want, without the compiler accounting for language features that in some specific case do not matter. In practice, since the machine is often much more complicated than what the assembly language exposes, as I said, assembly language will often be slower than compiler-generated machine code, which takes into account many subtleties that the average programmer does not know.
Addendum
I was forgetting: knowing how to read assembly, at least a little bit, can be very useful for debugging strange issues that can come up when the optimizer is broken, or broken only in the release build, or when you have to deal with heisenbugs, or when source-level debugging is not available, or other stuff like that; have a look at the comments here.
Intel and the x86 are big on backward compatibility, which certainly helped them out but at the same time hurts greatly. The internals of the 8088/8086, the 286, 386, 486, Pentium, Pentium Pro, etc., up to the present are somewhat of a redesign each time. Early on, protection mechanisms were added so operating systems could protect apps from each other and from the kernel; later the focus shifted to performance, adding execution units, superscalar designs and all that comes with them, multi-core processors, etc. What used to be a real, single AX register in the original processor turns into who knows how many different things in a modern processor. Originally your program was executed in the order written; today it is diced and sliced and executed in parallel, in such a way that the intent of the instructions as presented is honored, but the execution can be out of order and in parallel. There are lots and lots of new tricks buried behind what on the surface appears to be a very old instruction set.
The instruction set changed from its 8/16-bit roots to 32-bit, then to 64-bit, so the assembly language had to change as well: adding EAX on top of AX, AH, and AL, for example. Occasionally other instructions were added. But the original load, store, add, subtract, and, or, etc. instructions are all still there. I have not done x86 in a long time and was shocked to see that the syntax has changed, and/or that a particular assembler messed up the x86 syntax. There are a zillion tools out there, so if one doesn't match the book or web page you are using, there is one out there that will.
So thinking in terms of a single assembly language for this family is both right and wrong. The assembly language may have changed syntax and is not necessarily backward compatible, but the instruction set or machine language (the opcodes/bits the assembly represents) is such that much of the original instruction set is still supported on modern x86 processors. 286-specific nuances may not work, as with other features specific to particular generations, but the core instructions (load, store, add, subtract, push, pop, etc.) all still work and will continue to work. I feel it is better to "drive down the center of the lane": don't get into chip- or tool-specific gee-whiz features; use the basic, boring, been-working-since-the-beginning-of-time syntax of the language.
Because each generation in the family aims for certain features, usually performance, the way individual instructions are handed out to the various execution units changes... with each generation. Hand-tuning assembler for performance, trying to out-do a compiler, can be difficult at best; you need detailed knowledge about the specific processor you are tuning for. From the early x86 days to the present, unfortunately, what made code execute faster on one chip would often make the next generation run extra slow. Perhaps that was a marketing tool in disguise, I'm not sure: "Buy the hot new processor that costs twice as much as the one you have now, advertises twice the clock speed, but runs your same copy of Windows 30% slower. In a few years when the next version of Windows is compiled (and this chip is obsolete) it will then double in performance." Another side effect of this is that, at this point in time, you cannot take one C program and create one binary that runs fast on all x86 processors; for performance you need to tune for the specific processor, meaning you need to at least tell the compiler to optimize and which family to optimize for. And for something like Windows or Office, or anything you distribute as a binary, you likely cannot, or do not want to, bury several differently tuned copies of the same program in one package or in one binary... drive down the center of the road.
As a result of all the hardware improvements, it may be in your best interest not to tune the compiler output or hand assembler to any one chip in particular. On average the hardware improvements will compensate for the lack of compiler tuning, and your same program hopefully just runs a little faster each generation. One of the chip vendors used to aim at making today's popular compiled binaries run faster tomorrow; the other improved the internals such that if you recompiled today's source for the new internals you could run faster tomorrow. Those activities have not necessarily continued: each generation runs today's binaries slower, and tomorrow's recompiled source the same speed or slower, but it will run tomorrow's re-written programs faster, sometimes with the same compiler, sometimes requiring tomorrow's compiler. Isn't this fun!
So how do we know a particular compiled or hand-assembled program is as fast as it possibly can be? We don't; in fact, for x86 you can guarantee it isn't: run it on one chip in the family and it is slow, run it on another and it may be blazing fast. x86 or not, other than for very short programs or very deterministic programs like you would find on a microcontroller, you cannot definitively say "this is the fastest possible solution." Caches, for example, are very hard, if even possible, to tune for, and so is the memory behind them, particularly on a PC, where the user can choose various sizes, speeds, ranks, banks, etc. and adjust BIOS settings to change even more; you really cannot tell a compiler to tune for all that. So even on the same computer, with the same processor and the same compiled binary, you have the ability to turn some of the knobs and make that program run a lot faster or a lot slower. Change processor families, change chipsets, motherboards, etc., and there is no possible way to tune for so many variables. The nature of the x86 PC business has simply become too chaotic.
Other chip families are not nearly as problematic. Some perhaps, but not all. So these are not general statements; they are specific to the x86 chip family. The x86 family is the exception, not the rule, and probably the last assembler/instruction set you would want to bother learning.
There are tons of websites and books on the subject; I cannot say one is better than another. I learned from the original set of 8088/86 books from Intel and then the 386 and 486 books, and didn't look for Intel books after that (or any other books). You will want an instruction set reference, and an assembler like NASM or gas (the GNU assembler, part of binutils, which comes with most gcc-based compiler toolchains). As far as the C-to/from-assembler interface goes, you can if nothing else figure that out by experimenting: write a small C program with a few small C functions, disassemble it or compile it to assembler, and look at which registers and/or how the stack is used to pass parameters between functions. Keep your functions simple and use only a few parameters, and your assembler will likely work just fine; if not, look at the assembler of the function calling your code and figure out where your parameters are. It is all well documented somewhere, and these days probably much better than in the old days. In the early 8088/86 days you had tiny, small, medium, large, and huge compiler memory models, and the calling conventions could vary from one to the other, as well as from one compiler to the next: Watcom (formerly Zortech and perhaps other names) passed by register, while Borland and Microsoft passed on the stack and were pretty close to each other if not the same. Now, with 32- and 64-bit flat memory spaces and standards, you can use one model and not have to memorize all the nuances (just one set of nuances). Inline assembly is an option, but it varies from C compiler to C compiler, and getting it to work properly and effectively is more difficult than just writing assembler in its own file. gcc, and perhaps other compilers, will let you put the assembler file on the C compiler command line as if it were just another C file; it will figure out what you have given it and pass it to the assembler for you. That is, if you don't want to call the assembler program yourself and put the object file on the C compiler command line.
If nothing else, disassemble a lot of simple functions; add a few parameters and return them, etc. Change compiler optimization settings and see how that changes the instructions used, often dramatically. Even if you cannot write assembler from scratch, being able to read it is very valuable, both from a debugging and from a performance perspective.
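A concrete way to run that experiment (file names are placeholders):

gcc -g -O0 -c example.c && objdump -dS example.o > o0.txt
gcc -g -O2 -c example.c && objdump -dS example.o > o2.txt
# diff the two listings; -dS interleaves source lines when debug info is present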
Not all compilers for all processors are good. gcc, for example, is one-size-fits-all; just like a sock or a ball cap, that one size doesn't really fit anyone well. It does pretty well for most of its targets but is not really great at any of them. So it is quite possible to do better than the compiler with hand-tuned assembler, but on average, across lots of code, you are not going to win. That applies to most processors, which are more deterministic, not just the x86 family. It is not about fewer instructions; fewer instructions do not necessarily equate to faster. To outperform even an average compiler in the long run, you have to understand the caches, fetch, decode, the execution state machines, the memory interfaces, the memories themselves, etc. With compiler optimizations turned off it is very easy to produce faster code than the compiler, so you should just use the optimizer, while understanding that doing so increases the risk of the compiler making a mistake. You need to know the tool very well, which goes back to disassembling often in order to understand how your C code and the compiler you use today interact with each other. No compiler is completely standards-compliant, because the standards themselves are fuzzy, leaving some features of the language up to the discretion of the compiler (drive down the middle of the road and don't use those parts of the language).
Bottom line, from the nature of your questions: I would recommend writing a bunch of small functions or programs with some small functions, compiling them to assembler or compiling them to objects and disassembling those, to see what the compiler does. Be sure to use different optimization settings on each program. Gain a working reading knowledge of the instruction set (granted, the asm output of the compiler or disassembler has a lot of extra fluff that gets in the way of readability; you have to look past that, since you need almost none of it if you want to write assembler). Give yourself 5-20 years of studying and experimenting before you can expect to outperform the compiler on a regular basis, if that is your goal. By then you will have learned that, particularly with this chip family, it is a futile effort; you win a few but mostly lose. It would be to your benefit to compile (to assembler) the same code for other chip families like ARM and MIPS, get a general feel for what C code compiles well and what C code doesn't, and make your C programming better instead of trying to make the assembler better. Also try other compilers like llvm. gcc has a lot of quirks that many think are part of the C language standard but are instead nuances of or problems with that specific compiler. Being able to read and analyze the assembly output of the compilers under their various options will provide this knowledge. So I recommend you work on a reading knowledge of the instruction set, without necessarily having to learn to write it from scratch.
You need to look at it from the hardware's point of view; the assembly language is created with regard to what the CPU can do. Every time a new feature is added to a CPU, an appropriate assembly instruction is created so that the feature can be used.
Assembly is thus very dependent on the CPU. High-level languages like C++ provide abstractions from this so that we don't have to think about details like CPU instructions, while the compiler generates optimized assembly code.
EDIT:
How many assembly languages are there? How many work well with other languages?
As many as there are different types of CPU. The second question I didn't quite understand: assembly per se does not interact with any other language; its output, the machine code, does.
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
The principle is similar to writing in any other compiled language: you create a text file with the assembly instructions and use an assembler to compile it to machine code, then link it with whatever runtime libraries are needed.
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
C++ and C provide inline assembly, so there is no need to link, but if you want to link, you need to create the assembly object following the same or similar calling conventions as the host language. For instance, some languages, when calling a function, push the arguments onto the stack in a certain order, so you would have to do the same.
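A sketch of the C side of that arrangement (my_asm_add is a hypothetical routine that would live in a separate assembly file written against the platform's calling convention):

/* Defined in a separate .s file. On x86-64 System V, for example,
 * a and b arrive in edi and esi and the result is returned in eax. */
extern int my_asm_add(int a, int b);

int demo(void) {
    return my_asm_add(2, 3);   /* links and is called like any C function */
}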
How do we know the code we've written in assembly is the fastest it possibly can be?
Because it is closest to the actual hardware. When you are dealing with higher-level languages, you don't know what the compiler will do with your for loop. However, more often than not compilers do a good job of optimizing the code, better than a human can (of course, in very special circumstances you can probably get a better result by hand).
There are many, many different assembly languages out there. Usually there is at least one for every processor instruction set, which means one for every processor type. One thing that you should also keep in mind is that even for a single processor there may be several different assembler programs that use different syntaxes, which from a formal point of view constitutes a different language. (For x86 there are MASM, NASM, YASM, AT&T (what *nix assemblers like the GNU assembler use by default), and probably many more.)
For x86 there are lots of different instruction sets because there have been so many changes to the architecture over the years. Some of these changes could be viewed mostly as additional instructions, so they are a superset of the previous assembly. Other changes may actually remove instructions (none come to mind for x86, but I've heard of some on other processors). And other changes add modes of operation to processors that make things even more complicated.
There are also other processors with completely different instructions.
To learn assembly you will need to start by picking a target processor and an assembler that you want to use. I'm going to assume that you are going to use x86, so you would need to decide if you want to start with 16-bit segmented, 32-bit, or 64-bit. Many books and online tutorials go the 16-bit route where you write DOS programs. If you want to write parts of C programs in assembly, then you will probably want to go the 32- or 64-bit route.
Most of the assembly programming I do is inline in C to either optimize something, to make use of instructions that the compiler doesn't know about, or when I otherwise need to control the instructions used. Writing large amounts of code in assembly is difficult, so I let the C compiler do most of the work.
There are lots of places where assembly is still written by people. This is particularly common in embedded work, boot loaders (BIOS, u-boot, ...), and operating system code, though many developers in these areas never directly write any assembly. This code may be start-up code that has to run before the stack pointer is set to a usable value (or while RAM isn't usable yet for some other reason), code that needs to fit within small spaces, and/or code that needs to talk to hardware in ways that aren't directly supported in C or other higher-level languages. Other places where assembly is used in OSes are the implementation of locks (spinlocks, critical sections, mutexes, and semaphores) and context switching (switching from one thread of execution to another).
Other places where assembly is commonly written is in the implementation of some library code. Functions like strcpy are often implemented in assembly for different architectures because there are often several ways that they may be optimized using processor specific operations, while a C implementation might use a more general loop. These functions are also reused so often that optimizing them by hand is often worth the effort in the long run.
Another, related, place where lots of assembly is written is within compilers. Compilers have to know how to implement things and many of them produce assembly, so they have assembly templates (or something similar) built into them for use in generating output code.
Even if you never write any assembly, knowing the instructions and registers of your target system is often useful. They can aid in debugging, but they can also aid in writing code. Knowing the target processor can help you write better (smaller and/or faster) code for it (even in a higher-level language), and being familiar with a few different processors will help you to write code that is good for many processors, because you will know generally how CPUs work.
We do a fair bit of it in our real-time work (more than we should, really). A wee bit of assembly can also be quite useful when you are talking to hardware and need specific machine instructions executed (e.g.: all writes must be 16-bit writes, or you'll hose nearby registers).
What I tend to see today is assembly insertions in higher-level language code. How exactly this is done depends on your language and sometimes compiler.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
"Assembly language" is a kind of misnomer, at least in the way you are using it. Assemblers are less of a language (CS graduates may object) and more of a converter tool which takes textual representation and generates a binary image from it, with a close to 1:1 relationship between text elements (memnonics, labels and numbers) and binary elements. There is no deeper logic behind the elements of an assembler language because their possibilities to be quoted and redirected ends mostly at level 1; you can, for example, use EAX only in one instruction at a time - the next use of EAX in the next instruction bears no relationship with its previous use EXCEPT for the unwritten logical connection which the programmer had in mind - this is the reason why it is so easy to create bugs in assembler.
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
One would need to pin down the lowest common denominator of instruction sets and code the function once for each architecture the code is intended to run on. IOW, unless you are coding for a specific hardware platform that is defined at the time of writing (e.g. a game console, an embedded board), you no longer do this.
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
You need to declare them in your HLL; see your compiler's handbook.
How do we know the code we've written in assembly is the fastest it possibly can be?
There is no way to know. Be happy about that and code on.

Verifying compiler optimizations in gcc/g++ by analyzing assembly listings

I just asked a question related to how the compiler optimizes certain C++ code, and I was looking around SO for any questions about how to verify that the compiler has performed certain optimizations. I was trying to look at the assembly listing generated with g++ (g++ -c -g -O2 -Wa,-ahl=file.s file.c) to possibly see what is going on under the hood, but the output is too cryptic to me. What techniques do people use to tackle this problem, and are there any good references on how to interpret the assembly listings of optimized code or articles specific to the GCC toolchain that talk about this problem?
GCC's optimization passes work on an intermediary representation of your code in a format called GIMPLE.
Using the -fdump-* family of options, you can ask GCC to output intermediary states of the tree.
For example, feed this to gcc -c -fdump-tree-all -O3
unsigned fib(unsigned n) {
    if (n < 2) return n;
    return fib(n - 2) + fib(n - 1);
}
and watch as it gradually transforms from simple exponential algorithm into a complex polynomial algorithm. (Really!)
A useful technique is to run the code under a good sampling profiler, e.g. Zoom under Linux or Instruments (with Time Profiler instrument) under Mac OS X. These profilers not only show you the hotspots in your code but also map source code to disassembled object code. Highlighting a source line shows the (not necessarily contiguous) lines of generated code that map to the source line (and vice versa). Online opcode references and optimization tips are a nice bonus.
Instruments: developer.apple.com
Zoom: www.rotateright.com
Not gcc, but when debugging in Visual Studio you have the option to intersperse assembly and source, which gives a good idea of what has been generated for what statement. But sometimes it's not quite aligned correctly.
The output of the gcc tool chain and objdump -dS isn't at the same granularity. This article on getting gcc to output source and assembly has the same options as you are using.
Adding the assembler's -L option (e.g., -Wa,-L,-ahl) may provide slightly more intelligible listings.
The equivalent MSVC option is /FAcs (and it's a little better because it intersperses the source, machine language, and binary, and includes some helpful comments).
About one third of my job consists of doing just what you're doing: juggling C code around and then looking at the assembly output to make sure it's been optimized correctly (which is preferred to just writing inline assembly all over the place).
Game-development blogs and articles can be a good resource for the topic since games are effectively real-time applications in constant memory -- I have some notes on it, so does Mike Acton, and others. I usually like to keep Intel's instruction set reference up in a window while going through listings.
The most helpful thing is to get a good ground-level understanding of assembly programming generally first -- not because you want to write assembly code, but because having done so makes reading disassembly much easier. I've had a hard time finding a good modern textbook though.
In order to output the optimizations applied you can use:
-fopt-info-optimized
To see those that have not been applied:
-fopt-info-missed
Beware that the output is sent to the standard error stream, so to capture or pipe it you have to redirect it (hint: 2>&1).
Here is a nice example:
g++ -O3 -std=c++11 -march=native -mtune=native -fopt-info-optimized h2d.cpp -o h2d 2>&1
h2d.cpp:225:3: note: loop vectorized
h2d.cpp:213:3: note: loop vectorized
h2d.cpp:198:3: note: loop vectorized
h2d.cpp:186:3: note: loop vectorized
You can check the interleaved output, if you compiled with -g, using objdump -dS | c++filt, but that will not get you that far. Enjoy!
Zoom from RotateRight ( http://rotateright.com ) is mentioned in another answer, but to expand on that: it shows you the mapping of source to assembly in what they call the "code browser". It's incredibly handy even if you're not an asm expert because they have also integrated assembly documentation into the app. And the assembly listing is annotated with comments and timing for several CPU types.
You can just open your object or executable file with Zoom and take a look at what the compiler has done with your code.
Victor, in your case the optimization you are looking for is just a smaller allocation of local memory on the stack. You should see a smaller allocation at function entry and a smaller deallocation at function exit if the space used by the empty class is optimized away.
As for the general question, I've been reading (and writing) assembly language for more than (gulp!) 30 years and all I can say is that it takes practice, especially to read the output of a compiler.
Instead of trying to read through an assembler dump, run your program inside a debugger. You can pause execution, single-step through instructions, set breakpoints on the code you want to check, etc. Many debuggers can display your original C code alongside the generated assembly so you can more easily see what the compiler did to optimize your code.
Also, if you are trying to test a specific compiler optimization, you can create a short dummy function that contains the type of code that fits the optimization you are interested in (and not much else; the simpler it is, the easier the assembly is to read). Compile the program once with optimizations on and once with them off; comparing the generated assembly code for the dummy function between the two builds should show you what the compiler's optimizers did.
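A minimal sketch of such a dummy function (names are placeholders):

/* dummy.c - bait for dead-code elimination */
int dummy(int x) {
    int unused = x * 3;   /* with -O2 this dead computation should disappear */
    return x + 1;
}

Compile once with gcc -O0 -S dummy.c and once with gcc -O2 -S dummy.c, then compare the two dummy.s files.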