In which scenarios is it useful to use disassembly while debugging C++?

I have the following basic questions:
When should we involve disassembly in debugging?
How do I interpret disassembly? For example, in the listing below, what does each segment stand for?
00637CE3 8B 55 08 mov edx,dword ptr [arItem]
00637CE6 52 push edx
00637CE7 6A 00 push 0
00637CE9 8B 45 EC mov eax,dword ptr [result]
00637CEC 50 push eax
00637CED E8 3E E3 FF FF call getRequiredFields (00636030)
00637CF2 83 C4 0C add esp,0Ch
Language : C++
Platform : Windows

It's quite useful for estimating how efficient the code emitted by the compiler is.
For example, if you use std::vector::operator[] in a loop, without the disassembly it's quite hard to guess that each call to operator[] may in fact require two memory accesses (one to load the vector's data pointer, one to load the element), whereas using an iterator for the same loop requires only one memory access.
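A minimal sketch of the two loop styles being compared (the names sum_by_index and sum_by_iterator are my own; whether operator[] really costs the extra load depends on the compiler and the optimization level):

#include <cstddef>
#include <vector>

long sum_by_index(const std::vector<int>& v) {
    long sum = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        sum += v[i];        // may reload the data pointer and then the element
    return sum;
}

long sum_by_iterator(const std::vector<int>& v) {
    long sum = 0;
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        sum += *it;         // the iterator is essentially a pointer; one load per element
    return sum;
}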
In your example:
mov edx,dword ptr [arItem] // the value stored at address "arItem" is loaded into the register edx
push edx // that register is pushed onto the stack
push 0 // zero is pushed onto the stack
mov eax,dword ptr [result] // the value stored at address "result" is loaded into the register eax
push eax // that register is pushed onto the stack
call getRequiredFields (00636030) // the getRequiredFields function is called
this is a typical sequence for calling a function: parameters are pushed onto the stack and then control is transferred to that function's code (the call instruction).
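For reference, the real prototype of getRequiredFields is not shown in the question. Assuming an ordinary cdecl function taking three arguments, the source-level call that produces this push sequence would look roughly like the hypothetical sketch below (the parameter types are guesses):

struct Item {};

// Hypothetical prototype; the real signature is not shown in the question.
int getRequiredFields(void* result, int flags, Item* arItem) {
    (void)result; (void)flags; (void)arItem;
    return 0;   // stub body only so the sketch is self-contained
}

void caller(void* result, Item* arItem) {
    // Under cdecl, arguments are pushed right to left (arItem, then 0, then result),
    // which matches the push edx / push 0 / push eax sequence in the listing above.
    int rc = getRequiredFields(result, 0, arItem);
    (void)rc;
}

The add esp,0Ch immediately after the call is the caller removing the three 4-byte arguments from the stack, which is characteristic of the cdecl calling convention.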
Also, using disassembly is quite useful when participating in arguments about "how it works after compilation" - as caf points out in his answer to this question.

When you should involve disassembly: when you want to know exactly what the CPU is doing when it's executing your program, or when you don't have the source code in whatever higher-level language the program was written in (C++ in your case).
How to interpret assembly code: Learn assembly language. You can find an exhaustive reference on Intel x86 CPU instructions in Intel's processor manuals.
The piece of code that you posted prepares arguments for a function call (by getting and pushing some values on the stack and putting a value in the register eax), and then calls the function getRequiredFields.

1 - We should (or at least I would) involve disassembly in debugging only as a last resort. Generally, an optimizing compiler generates code that is not trivial for the human eye to follow. Instructions are reordered, some dead code is eliminated, some code is inlined, etc. So understanding disassembled code is rarely necessary, and not easy when it is necessary. For example, I sometimes look at the disassembly to see whether constants are encoded into the opcode or are stored in const variables.
2 - That piece of code calls a function like getRequiredFields(result, 0, arItem). You have to learn the assembly language for the processor you are interested in. For x86, go to www.intel.com and get the IA-32 manuals.

I started out in 1982 with assembly debugging of PL/M programs on CP/M-80 and later Digital Research OSes. It was the same in the early days of MS-DOS until Microsoft introduced SYMDEB, a command-line debugger where source and assembly were displayed simultaneously. SYMDEB was a leap forward, but not that great, since the earlier debuggers had forced me to learn to recognize what assembly code belonged to which source code line. Before CodeView the best debugger was pfix86 from Phoenix Technologies. NuMega's SoftICE was the best debugger (apart from pure hardware ICEs) I've ever come across, in that it not only debugged my application but effortlessly led me through the inner workings of Windows as well. But I digress.
Late in 1990 a consultant on a project I was working on approached me and said he had this (very early) C++ bug he'd been working on for days but couldn't understand what the problem was. He single-stepped through the source code (in a windowed, non-graphic DOS debugger) for me while I got all impatient. Finally I interrupted him and looked through the debugger options, and sure enough there was a mixed source/assembly mode with registers and everything. This made it easy to realize that the application was trying to free an internal pointer (for local variables) containing NULL. For this problem, the source code mode was of no help at all. Today's C++ compilers will probably no longer contain a bug such as this, but there will be others.
Knowing assembly-level debugging allows you to understand the source-compiler-assembly relationship to the extent of being able to predict what code the compiler will generate. Many people here on stackoverflow say "profile-profile-profile" but this goes a step further in that you learn what source-code constructs (I write in C) to use when and which to avoid. I suspect this is even more important with C++ which can generate a lot of code without the developer suspecting anything. For example there is a standard class for handling lists of objects which appears to be without drawbacks - just a few lines of code and this fantastic functionality! - until you look at the scores of strange procedure calls it generates. I'm not saying it's wrong to use them, I'm just saying that the developer should be aware of the pros and cons of using them. Overloading operators may be great functionality (somewhat weird to a WYSIWYG programmer like me) but what is the price in execution speed? If you say "nothing" I say "prove it."
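To make the "few lines of code, scores of generated calls" point concrete, here is a minimal sketch; I am assuming the standard list class the author has in mind is something like std::list, and the point is only that the generated calls (allocation, construction, destruction) are invisible in the source:

#include <list>
#include <string>

// Two innocent-looking push_back calls. Each one allocates a list node with
// operator new, constructs a std::string inside it and links it into the list;
// the destructor at the end of scope then walks the list again, destroying and
// freeing every node.
void demo() {
    std::list<std::string> names;
    names.push_back("alice");
    names.push_back("bob");
}   // ~list(): per-node destructor calls plus operator delete calls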
It is never wrong to use mixed or pure assembly mode when debugging. Difficult bugs will usually be easier to find and the developer will learn to write more efficient code. Developers from the interpreted camp (C# and Java) will say that their code is just as efficient as the compiled languages but if you know assembly you will also know why they are wrong, why they are dead wrong. You can smile and think "yeah, tell me about it!"
After you've worked with different compilers you will come across one with the most astonishing code-generation ability. One PowerPC compiler condensed three nested loops into one loop simply through the superior code interpretation of its optimizer. Next to the guy who wrote that I'm ... well, let's just say in a different league.
Up until about ten years ago I wrote quite a bit of pure assembly, but with multi-stage pipelines, multiple execution units and now multiple cores to contend with, the C compiler beats me hands down. On the other hand, I know what the compiler can do a good job with and what it shouldn't have to work with: Garbage In still equals Garbage Out. This is true for any compiler that produces assembly output.

Related

Why is C/C++ slower than assembly and other low-level languages?

I wrote some code that does nothing, in C++:
void main(void){
}
and Assembly.
.global _start
.text
_start:
mov $60, %rax
xor %rdi, %rdi
syscall
I compiled the C code, and assembled and linked the assembly code. Then I compared the two executables with the time command.
Assembly
time ./Assembly
real 0m0.001s
user 0m0.000s
sys 0m0.000s
C
time ./C
real 0m0.002s
user 0m0.000s
sys 0m0.000s
The assembly version is two times faster than the C version. I disassembled both executables: in the assembly one there were only the same four instructions. In the C one there was a ton of unnecessary code written to get from _start to main. In main itself there were four instructions, three of which are there to make it impossible to access 'local' variables (function variables) from outside the 'block' (the function block):
push %rbp ; push the base pointer
mov %rsp, %rbp ; copy the stack pointer into the base pointer; the base pointer is used to address local variables
pop %rbp ; restore the caller's base pointer; the 'local' frame is gone
retq ; ?
Why is that?
The amount of time required to execute the core of your program you've written is incredibly small. Figure that it consists of three or four assembly instructions, and at several gigahertz that will only require a couple of nanoseconds to run. That's such a small amount of time that it's vastly below the detection threshold for the time program, whose resolution is measured in milliseconds (remember that a millisecond is a million times slower than a nanosecond!) So in that sense, I would be very careful about making judgments about the runtime of one program as being "twice as fast" as the other; the resolution of your timer isn't high enough to say that for certain. You might just be seeing noise terms.
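If you want numbers you can actually trust, time the region of interest inside the process and repeat it enough times to rise above the timer's resolution and the noise. A minimal sketch with std::chrono (the loop body is just a stand-in for whatever work you are measuring):

#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const int iterations = 1000000;      // repeat the work so it rises above timer noise
    volatile long sink = 0;              // volatile so the loop is not optimized away

    auto start = clock::now();
    for (int i = 0; i < iterations; ++i)
        sink += i;                       // stand-in for the work being measured
    auto stop = clock::now();

    std::chrono::duration<double, std::nano> per_iteration = (stop - start) / iterations;
    std::printf("%.2f ns per iteration\n", per_iteration.count());
}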
Your question, though, was why there is all this automatically generated code if nothing is going to happen. The answer is "it depends." With no optimization turned on, most compilers generate assembly code that faithfully simulates the program you wrote, possibly doing more work than is necessary. Since most C and C++ functions actually do contain code that does something, need local variables, etc., a compiler wouldn't be too wrong in emitting code at the start and end of every function to set up the stack and frame pointers properly to support those variables. With optimization turned up to the max, an optimizing compiler might be smart enough to notice that this isn't necessary and remove that code, but it's not required to.
In principle, a perfect compiler would always emit the fastest code possible, but it turns out that it's impossible to build a compiler that will always do this (this has to do with things like the undecidability of the halting problem). Therefore, it's somewhat assumed that the code generated will be good - even great - but not optimal. However, it's a tradeoff. Yes, the code might not be as fast as it could possibly be, but by working in languages like C and C++ it's possible to write large and complex programs in a way that's (compared to assembly) easy to read, easy to write, and easy to maintain. We're okay with the slight performance hit because in practice it's not too bad and most optimizing compilers are good enough to make the price negligible (or even negative, if the optimizing compiler finds a better approach to solving a problem than the human!)
To summarize:
Your timing mechanism is probably not sufficient to make the conclusions that you're making. You'll need a higher-precision timer than that.
Compilers often generate unnecessary code in the interest of simplicity. Optimizing compilers often remove that code, but can't always.
We're okay paying the cost of using higher-level languages in terms of raw runtime because of the ease of development. In fact, it might actually be a net win to use a high-level language with a good optimizing compiler, since it offloads the optimization complexity.
All the extra time from C is dynamic linker and CRT overhead. The asm program is statically linked, and just calls exit(2) (the syscall directly, not the glibc wrapper). Of course it's faster, but it's just startup overhead and doesn't tell you anything about how fast compiler-emitted code that actually does anything will run.
i.e. if you wrote some code to actually do something in C, and compiled it with gcc -O3 -march=native, you'd expect it to be ~0.001 seconds slower than a statically linked binary with no CRT overhead. (Assuming your hand-written asm and the compiler output were both near-optimal, e.g. if you used the compiler output as a starting point for a hand-optimized version but didn't find anything major. It's usually possible to make some improvements to compiler output, but often just to code-size, and probably not much effect on speed.)
If you want to call malloc or printf, then the startup overhead is not useless; it's actually necessary to initialize glibc internal data structures so that library functions don't have any overhead of checking that stuff is initialized every time they're called.
From a statically linked hand-written asm program that links glibc, you need to call __libc_init_first, __dl_tls_setup, and __libc_csu_init, in that order, before you can safely use all libc functions.
Anyway, ideally you can expect a constant time difference from the startup overhead, not a factor of 2 difference.
If you're good at writing optimal asm, you can usually do a better job than the compiler on a local scale, but compilers are really good at global optimizations. Moreover, they do it in seconds of CPU time (very cheap) instead of weeks of human effort (very precious).
It can make sense to hand-craft a critical loop, e.g. as part of a video encoder, but even video encoders (like x264, x265, and vpx) have most of the logic written in C or C++, and just call asm functions.
The extra push/mov/pop instructions are because you compiled with optimization disabled, where -fno-omit-frame-pointer is the default, and makes a stack frame even for leaf functions. gcc defaults to -fomit-frame-pointer at -O1 and higher on x86 and x86-64 (since modern debug metadata formats mean it's not needed for debugging or exception-handling stack unwinding).
If you'd told your C compiler to make fast code (-O3), instead of to compile quickly and make dumb code that works well in a debugger (-O0), you would have gotten code like this for main (from the Godbolt compiler explorer):
// this is valid C++ and C99, but C89 doesn't have an implicit return 0 in main.
int main(void) {}
xor eax, eax
ret
To learn more about assembly and how everything works, have a look at some of the links in the x86 tag wiki. Perhaps Programming From the Ground Up would be a good start; it probably explains compilers and dynamic linking.
A much shorter article is A Whirlwind Tutorial on Creating Really Teensy ELF Executables for Linux, which starts with what you did, and then gets down to having _start overlap with some other ELF headers so the file can be even smaller.
Did you compile with optimizations enabled? If not, then this is invalid.
Did you consider that this is a completely trivial example that will have no real-life performance implications worth writing even a postcard about?
Please write clear maintainable code and (in 99% of cases) leave the optimization to the compiler. Please.

Making a JIT compiler

I've written a Brainfuck implementation (C++) that works like this:
Read input brainfuck file
Do trivial optimizations
Convert brainfuck to machine code for the VM
Execute this machine code in the VM
This is pretty fast, but the bottleneck is now at the VM. It's written in C++ and reads a token, executes an action (which aren't many at all, if you know Brainfuck) and so on.
What I want to do is strip out the VM and generate native machine code on the fly (so basically, a JIT compiler). This could easily be a 20x speedup.
This would mean step 3 gets replaced by a JIT compiler and step 4 with the executing of the generated machine code.
I don't know really where to start, so I have a few questions:
How does this work, how does the generated machine code get executed?
Are there any C++ libraries for generating native machine code?
Generated machine code is just jmp-ed to or call-ed like a usual function. Sometimes you also need to disable the no-execute flag (NX bit) on the memory containing the generated code. On Linux this is done with mprotect(addr, size, PROT_READ | PROT_WRITE | PROT_EXEC). On Windows the NX mechanism is called DEP.
There are some. E.g. http://www.gnu.org/software/lightning/ - GNU Lightning (universal) and https://developer.mozilla.org/En/Nanojit - Nanojit, which is used in Firefox's JavaScript JIT engine. A more powerful and modern JIT framework is LLVM: you just need to translate BF code into LLVM IR, and then LLVM can do optimisations and code generation for many platforms, or run the LLVM IR on an interpreter (virtual machine) with JIT capabilities. There is a post about BF & LLVM with a complete LLVM JIT compiler for BF: http://www.remcobloemen.nl/2010/02/brainfuck-using-llvm/
Another BF + LLVM compiler is here, in the svn of LLVM: https://llvm.org/svn/llvm-project/llvm/trunk/examples/BrainF/BrainF.cpp
LLVM is a complete C++ library (or set of libraries) for generating native code from an intermediate form, complete with documentation and examples, and which has been used to produce JITters.
(It also has a C/C++ compiler which uses the framework - however the framework itself can be used for other languages).
This might be late, but I am posting this answer for the sake of anyone else it might help.
A JIT compiler has all the steps that an AOT compiler has. The main difference is that an AOT compiler writes the machine-dependent code to an executable file (an .exe, etc.), while a JIT compiler loads the machine-dependent code into memory at run time (hence the performance overhead: every run it needs to recompile and load).
How does a JIT compiler load the machine code into memory at runtime?
I will not teach you about machine code because I assume you already know about it. For example, the assembly code
mov rax,0x1
is translated to
48 c7 c0 01 00 00 00
You dynamically generate the translated code and save it into a vector like this (a C++ std::vector):
std::vector<uint8_t> machineCode = {
0x48, 0xc7, 0xc0, 0x01, 0x00, 0x00, 0x00,
};
Then you copy this vector into memory. For this you need to know the amount of memory required by the code, which you can get with machineCode.size(); keep the page size in mind.
To copy the vector into executable memory you call the mmap function (on Linux).
Finally, cast a pointer to the beginning of your code to a function pointer and call it; you are good to go. A minimal sketch of these steps follows.
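Putting those steps together, here is a minimal Linux/x86-64 sketch (assuming a POSIX system with mmap; the cast from a data pointer to a function pointer is only conditionally-supported by the standard, but works on common toolchains). The byte sequence encodes mov rax,42 followed by ret, so the generated "function" just returns 42:

#include <sys/mman.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    std::vector<std::uint8_t> machineCode = {
        0x48, 0xc7, 0xc0, 0x2a, 0x00, 0x00, 0x00,   // mov rax, 0x2a
        0xc3                                        // ret
    };

    // Allocate a readable, writable and executable mapping and copy the code in.
    void* mem = mmap(nullptr, machineCode.size(),
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED)
        return 1;
    std::memcpy(mem, machineCode.data(), machineCode.size());

    // Point a function pointer at the buffer and call it.
    auto fn = reinterpret_cast<long (*)()>(mem);
    std::printf("%ld\n", fn());                     // prints 42

    munmap(mem, machineCode.size());
}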
Sorry if anything is not clear; you can always check out these posts for a simpler walkthrough:
https://solarianprogrammer.com/2018/01/10/writing-minimal-x86-64-jit-compiler-cpp/
https://github.com/spencertipping/jit-tutorial
GNU Lightning is a set of macros which can generate native code for a few different architectures. You will need a solid understanding of assembly code because your step 3 will involve using Lightning macros to emit machine code directly into a buffer you will later execute.

How is Assembly used in the modern day (with C/C++ for example)?

I understand how a computer works on the basic principles, such as: a program can be written in a "high"-level language like C# or C, and then it's broken down into object code and then binary for the processor to understand. However, I really want to learn about assembly, and how it's used in modern day applications.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
How many assembly languages are there? How many work well with other languages?
How would someone go about writing a routine in assembly, and then compiling it in to object/binary code?
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
How do we know the code we've written in assembly is the fastest it possibly can be?
Are there any recommended books on assembly languages/using them with modern programs?
Sorry for the quantity of questions, I do hope they're general enough to be useful for other people as well as simple enough for others to answer!
However, I really want to learn about assembly, and how it's used in modern day applications.
On "normal" PCs it's used just for time-critical processing, I'd say that realtime multimedia processing can still benefit quite a bit from hand-forged assembly. On embedded systems, where there's a lot less horsepower, it may have more areas of use.
However, keep in mind that it's not just "hey, this code is slow, I'll rewrite it in assembly and it by magic it will go fast": it must be carefully written assembly, written knowing what it's fast and what it's slow on your specific architecture, and keeping in mind all the intricacies of modern processors (branch mispredictions, out of order executions, ...). Often, the assembly written by a beginner-to-medium assembly programmer will be slower than the final machine code generated by a good, modern optimizing compiler. Performance stuff on x86 is often really complicated, and should be left to people who know what they do => and most of them are compiler writers. :) Have a look at this, for example. C++ code for testing the Collatz conjecture faster than hand-written assembly - why? gets into some of the specific x86 details for that case which you have to understand to match or beat a compiler with optimization enabled, for a single small loop.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
I think you're confusing some things here. Many (=all modern) x86 processors support additional instructions and instruction sets that were introduced after the original x86 instruction set was defined. Actually, almost all x86 software now is compiled to exploit post-Pentium features like cmovcc; you can query the processor to see if it supports some features using the CPUID instruction. Obviously, if you want to use a mnemonic for some newer instruction set instruction your assembler (i.e. the software which translates mnemonics in actual machine code) must be aware of them.
Most C compilers have intrinsics like _mm_popcnt_u32 and/or command line options like -mpopcnt to enable them that let you take advantage of new instructions without hand-written asm. x86 -mbmi / -mbmi2 extensions have several instructions that compilers know how to use when optimizing ordinary C like x << y (shlx instead of the more clunky shl) or x &= x-1; (blsr / _blsr_u32()). GCC has a -march=native option to enable all the instruction sets your CPU supports, and to set the -mtune= option to optimize for your CPU in terms of how much loop unrolling is a good idea, or which instructions or sequences are faster on one CPU, slower on another.
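As a small sketch of the intrinsics route (assuming a CPU with POPCNT and a GCC/Clang-style compiler invoked with something like -mpopcnt or -march=native):

#include <cstdio>
#include <nmmintrin.h>    // _mm_popcnt_u32

int main() {
    unsigned int x = 0xB4u;

    // Compiles to a single popcnt instruction when the target supports it.
    int bits = _mm_popcnt_u32(x);

    // Plain C that a BMI1-aware compiler can turn into a single blsr:
    unsigned int lowest_cleared = x & (x - 1);

    std::printf("%d set bits; %#x after clearing the lowest set bit\n",
                bits, lowest_cleared);
}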
If, instead, you're talking about other (non-x86) instruction sets for other families of processors, well, each assembler should support the instructions that the target processor can run. Not all the instructions of an assembly language have a direct replacement in others, and in general porting assembly code from one architecture to another is hard, tedious work.
How many assembly languages are there?
Theoretically, at least one dialect for each processor family. Keep in mind that there are also different notations for the same assembly language; for example, the following two instructions are the same x86 stuff written in AT&T and Intel notation:
mov $4, %eax // AT&T notation
mov eax, 4 // Intel notation
How would someone go about writing a routine in assembly, and then compiling it in to object/binary code?
If you want to embed a routine in an application written in another language, you should use the tools that the language provides you, in C/C++ you'd use the asm blocks.
You can instead make stand-alone .s or .asm files using the same syntax a C compiler would output, for example gcc -O3 -S will compile to a .s file that you can assemble with gcc -c. Separate files are a good idea if you want to write whole functions in asm instead of wrapping one or a couple instructions. A few open source projects like x264 and x265 (video encoders) have extensive amounts of NASM source code for different versions of functions for different versions of SSE or AVX available.
If you, instead, wanted to write a whole application in assembly, you'd have to write just in assembly, following the syntactic rules of the assembler you'd like to use.
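For the separate-file route, the C++ side is just an extern "C" declaration plus an ordinary call; everything below other than the extern "C" mechanism is a made-up example (the function name asm_add and the file name add64.s are hypothetical):

#include <cstdio>

// asm_add would be written in its own assembly file for the platform's calling
// convention (System V x86-64: arguments in rdi and rsi, result in rax),
// assembled with something like "gcc -c add64.s", and linked in as usual.
// extern "C" keeps the symbol name unmangled so the linker can match it.
extern "C" long asm_add(long a, long b);

int main() {
    std::printf("%ld\n", asm_add(2, 3));   // expected to print 5
}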
How do we know the code we've written in assembly is the fastest it possibly can be?
In theory, because it is the nearest thing to the bare metal, you can make the machine do exactly what you want, without the compiler having to account for language features that don't matter in your specific case. In practice, since the machine is often much more complicated than what the assembly language exposes, as I said, hand-written assembly will often be slower than compiler-generated machine code, which takes into account many subtleties the average programmer does not know.
Addendum
I was forgetting: knowing how to read assembly, at least a little bit, can be very useful for debugging strange issues that come up when the optimizer is broken, when the bug appears only in the release build, when you have to deal with heisenbugs, when source-level debugging is not available, or other stuff like that; have a look at the comments here.
Intel and the x86 are big on reverse compatibility, which certainly helped them out but at the same time hurts greatly. The internals of the 8088/8086 to 286 to 386, to 486, pentium, pentium pro, etc to the present are somewhat of a redesign each time. Early on adding protection mechanisms for operating systems to protect apps from each other and the kernel, and then into performance by adding execution units, superscalar and all that comes with it, multi core processors, etc. What used to be a real, single AX register in the original processor turns into who knows how many different things in a modern processor. Originally your program was executed in the order written, today it is diced and sliced and executed in parallel in such a way that the intent of the instructions as presented are honored but the execution can be out of order and in parallel. Lots and lots of new tricks buried behind what on the surface appears to be a very old instruction set.
The instruction set changed from the 8/16-bit roots to 32-bit, to 64-bit, so the assembly language had to change as well. Adding EAX to AX, AH, and AL, for example. Occasionally other instructions were added. But the original load, store, add, subtract, and, or, etc. instructions are all there. I have not done x86 in a long time and was shocked to see that the syntax has changed and/or a particular assembler messed up the x86 syntax. There are a zillion tools out there, so if one doesn't match the book or web page you are using, there is one out there that will.
So thinking in terms of assembly language for this family is right and wrong; the assembly language may have changed syntax and is not necessarily reverse compatible, but the instruction set or machine language or other similar terms (the opcodes/bits the assembly represents) would say that much of the original instruction set is still supported on modern x86 processors. 286-specific nuances may not work, perhaps, as with other new features of specific generations, but the core instructions (load, store, add, subtract, push, pop, etc.) all still work and will continue to work. I feel it is better to "drive down the center of the lane": don't get into chip- or tool-specific gee-whiz features; use the basic, boring, been-working-since-the-beginning-of-time syntax of the language.
Because each generation in the family is trying for certain features, usually performance, the way the individual instructions are handed out to the various execution units changes...on each generation...In order to hand tune assembler for performance, trying to out-do a compiler, can be difficult at best. You need detailed knowledge about the specific processor you are tuning for. From the early x86 days to the present, unfortunately, what made the code execute faster on one chip, would often cause the next generation to run extra slow. Perhaps that was a marketing tool in disguise, not sure, "Buy the hot new processor that cost twice as much as the one you have now, advertises twice the clock speed, but runs your same copy of windows 30% slower. In a few years when the next version of windows is compiled (and this chip is obsolete) it will then double in performance". Another side effect of this is that at this point in time you cannot take one C program and create one binary that runs fast on all x86 processors, for performance you need to tune for the specific processor, meaning you need to at least tell the compiler to optimize and what family to optimize for. And like windows or office, or something you are distributing as a binary you likely cannot or do not want to somehow bury several differently tuned copies of the same program in one package or in one binary...drive down the center of the road.
As a result of all the hardware improvements it may be in your best interest not to try to tune the compiler output or hand assembler to any one chip in particular. On average the hardware improvements will compensate for the lack of compiler tuning, and the same program will hopefully just run a little faster each generation. One of the chip vendors used to aim to make today's popular compiled binaries run faster tomorrow; the other vendor improved the internals such that if you recompiled today's source for the new internals you could run faster tomorrow. Those activities between vendors have not necessarily continued: each generation runs today's binaries slower, but tomorrow's recompiled source at the same speed or slower. It will run tomorrow's re-written programs faster, sometimes with the same compiler, sometimes with tomorrow's compiler. Isn't this fun!
So how do we know a particular compiled or hand-assembled program is as fast as it possibly can be? We don't; in fact, for x86 you can guarantee it isn't: run it on one chip in the family and it is slow, run it on another and it may be blazing fast. x86 or not, other than very short programs or very deterministic programs like you would find on a microcontroller, you cannot definitively say this is the fastest possible solution. Caches, for example, are very hard (if even possible) to tune for, and the memory behind them, particularly on a PC, where the user can choose various sizes, speeds, ranks, banks, etc. and adjust BIOS settings to change even more settings; you really cannot tell a compiler to tune for that. So even on the same computer, same processor, same compiled binary, you have the ability to turn some of the knobs and make that program run a lot faster or a lot slower. Change processor families, change chipsets, motherboards, etc. And there is no possible way to tune for so many variables. The nature of the x86 PC business has become too chaotic.
Other chip families are not nearly as problematic. Some perhaps but not all. So these are not general statements, but specific to the x86 chip family. The x86 family is the exception not the rule. Probably the last assembler/instruction set you would want to bother learning.
There are tons of websites and books on the subject; I cannot say one is better than the other. I learned from the original set of 8088/86 books from Intel and then the 386 and 486 books, and didn't look for Intel books after that (or any other books). You will want an instruction set reference, and an assembler like nasm or gas (the GNU assembler, part of binutils, which comes with most gcc-based compiler toolchains). As far as the C to/from assembler interface, you can if nothing else figure that out by experimenting: write a small C program with a few small C functions, disassemble or compile to assembler, and look at what registers and/or how the stack is used to pass parameters between functions. Keep your functions simple and use only a few parameters and your assembler will likely work just fine. If not, look at the assembler of the function calling your code and figure out where your parameters are. It is all well documented somewhere, and these days probably much better than in the old days. In the early 8088/86 days you had tiny, small, medium, large and huge compiler models, and the calling conventions could vary from one to the other, as well as from one compiler to the next: Watcom (formerly Zortech and perhaps other names) passed by register, Borland and Microsoft passed on the stack and were pretty close if not the same. Now, with 32- and 64-bit flat memory space and standards, you can use one model and not have to memorize all the nuances (just one set of nuances). Inline assembly is an option but varies from C compiler to C compiler, and getting it to work properly and effectively is more difficult than just writing assembler in its own file. gcc and perhaps other compilers will allow you to put the assembler file on the C compiler command line as if it were just another C file, and the compiler will figure out what you have given it and pass it to the assembler for you. That is, if you don't want to call the assembler program yourself and put the object on the C compiler command line.
if nothing else disassemble a lot of simple functions, add a few parameters and return them, etc. Change compiler optimization settings and see how that changes the instructions used, often dramatically. Even if you cannot write assembler from scratch being able to read it is very valuable, both from a debugging and performance perspective.
Not all compilers for all processors are good. gcc, for example, is one-size-fits-all: just like a sock or ball cap, that one size doesn't really fit anyone well. It does pretty well for most of its targets but is not really great for any. So it is quite possible to do better than the compiler with hand-tuned assembler, but on average, for lots of code, you are not going to win. That applies to most processors, which are more deterministic, not just the x86 family. It is not about fewer instructions; fewer instructions do not necessarily equate to faster. To outperform even an average compiler in the long run you have to understand the caches, fetch, decode, execution state machines, memory interfaces, the memories themselves, etc. With compiler optimizations turned off it is very easy to produce faster code than the compiler, so you should just use the optimizer, but also understand that that increases the risk of the compiler making a mistake. You need to know the tool very well, which goes back to disassembling often to understand how your C code and the compiler you use today interact with each other. No compiler is completely standards compliant, because the standards themselves are fuzzy, leaving some features of the language up to the discretion of the compiler (drive down the middle of the road and don't use those parts of the language).
Bottom line: from the nature of your questions, I would recommend writing a bunch of small functions or programs with some small functions, compiling to assembler or compiling to an object and disassembling, to see what the compiler does. Be sure to use different optimization settings on each program. Gain a working reading knowledge of the instruction set (granted, the asm output of the compiler or disassembler has a lot of extra fluff that gets in the way of readability; you have to look past that, and you need almost none of it if you want to write assembler). Give yourself 5-20 years of studying and experimenting before you can expect to outperform the compiler on a regular basis, if that is your goal. By then you will learn that, particularly with this chip family, it is a futile effort; you win a few but mostly lose. It would be to your benefit to compile (to assembler) the same code for other chip families like ARM and MIPS, get a general feel for what C code compiles well in general and what C code doesn't compile well, and make your C programming better instead of trying to make the assembler better. Also try other compilers like llvm. gcc has a lot of quirks that many think are the C language standard but are instead nuances or problems with the specific compiler. Being able to read and analyze the assembly output of the compilers and their options will provide this knowledge. So I recommend you work on a reading knowledge of the instruction set, without necessarily having to learn to write it from scratch.
You need to look at it from the hardware's point of view: assembly language is created with regard to what the CPU can do. Every time a new feature is added to a CPU, an appropriate assembly instruction is created so that it can be used.
Assembly is thus very dependent on the CPU. High-level languages like C++ provide abstractions over this to let us not have to think about details like CPU instructions, and the compiler generates optimized assembly code for us.
EDIT:
How many assembly languages are there? How many work well with other languages?
As many as there are different types of CPU. The second question I didn't understand; assembly per se does not interact with any other language. The output, the machine code, does.
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
The principle is similar to writing in any other compiled language: you create a text file with the assembly instructions and use an assembler to compile it to machine code. Then you link it with whatever runtime libraries you need.
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
C++ and C provide inline assembly, so there is no need to link, but if you want to link you need to create the assembly object following the same/similar calling conventions as the host language. For instance, some languages, when calling a function, push the arguments to the function on the stack in a certain order, so you would have to do the same.
How do we know the code we've written in assembly is the fastest it possibly can be?
Because it is closest to the actual hardware. When you are dealing with higher-level languages you don't know what the compiler will do with your for loop. However, more often than not compilers do as good a job of optimizing the code as a human can, or better (of course, in very special circumstances you can probably get a better result).
There are many many different assembly languages out there. Usually there is at least one for every processor instruction set, which means one for every processor type. One thing that you should also keep in mind is that even for a single processor there may be several different assembler programs that may use a different syntax, which from a formal view constitutes a different language. (for x86 there are masm, nasm, yasm, AT&T (what *nix assemblers like the GNU assembler use by default), and probably many more)
For x86 there are lots of different instruction sets because there have been so many changes to the architecture over the years. Some of these changes could be viewed mostly as additional instructions, so they are a super set of the previous assembly. Other changes may actually remove instructions (none are coming to mind for x86, but I've heard of some on other processors). And other changes add modes of operation to processors that make things even more complicated.
There are also other processors with completely different instructions.
To learn assembly you will need to start by picking a target processor and an assembler that you want to use. I'm going to assume that you are going to use x86, so you would need to decide if you want to start with 16 bit segmented, 32 bit, or 64 bit. Many books and online tutorials go the 16 bit route where you write DOS programs. If you are wanting to write parts of C programs in assembly then you will probably want to go the 32 or 64 bit route.
Most of the assembly programming I do is inline in C to either optimize something, to make use of instructions that the compiler doesn't know about, or when I otherwise need to control the instructions used. Writing large amounts of code in assembly is difficult, so I let the C compiler do most of the work.
There are lots of places where assembly is still written by people. This is particularly common in embedded, boot loaders (bios, u-boot, ...), and operating system code, though many developers in these never directly write any assembly. This code may be start up code that has to run before the stack pointer is set to a usable value (or RAM isn't usable yet for some other reason), because they need to fit within small spaces, and/or because they need to talk to hardware in ways that aren't directly supported in C or other higher level languages. Other places where assembly is used in OSes is writing locks (spinlocks, critical sections, mutexes, and semaphores) and context switching (switching from one thread of execution to another).
Other places where assembly is commonly written is in the implementation of some library code. Functions like strcpy are often implemented in assembly for different architectures because there are often several ways that they may be optimized using processor specific operations, while a C implementation might use a more general loop. These functions are also reused so often that optimizing them by hand is often worth the effort in the long run.
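The "more general loop" that a portable C implementation might use could look like the sketch below (my own naive version, not any particular library's code); the hand-written assembly versions win by moving 16 or more bytes per iteration with architecture-specific instructions:

char* my_strcpy(char* dst, const char* src) {
    char* ret = dst;
    while ((*dst++ = *src++) != '\0') {
        // copies one byte per iteration, including the terminating NUL
    }
    return ret;
}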
Another, related, place where lots of assembly is written is within compilers. Compilers have to know how to implement things and many of them produce assembly, so they have assembly templates (or something similar) built into them for use in generating output code.
Even if you never write any assembly knowing the instructions and registers of your target system are often useful. They can aid in debugging, but they can also aid in writing code. Knowing the target processor can help you write better (smaller and/or faster) code for it (even in a higher level language), and being familiar with a few different processors will help you to write code that will be good for many processors because you will know generally how CPUs work.
We do a fair bit of it in our Real-Time work (more than we should really). A wee bit of assembly can also be quite useful when you are talking to hardware, and need specific machine instructions executed (eg: All writes must be 16-bit writes, or you'll hose nearby registers).
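A sketch of what that looks like from C++, with a made-up register address: declaring the register as a volatile 16-bit pointer asks the compiler to emit exactly one 16-bit store per assignment, rather than merging, reordering or splitting the access:

#include <cstdint>

// Hypothetical memory-mapped control register at a made-up address. The volatile
// 16-bit access keeps the compiler from turning it into byte writes that could
// clobber the neighbouring registers.
volatile std::uint16_t* const CTRL_REG =
    reinterpret_cast<volatile std::uint16_t*>(0x40000010u);

void start_device() {
    *CTRL_REG = 0x0001;   // a single 16-bit write
}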
What I tend to see today is assembly insertions in higher-level language code. How exactly this is done depends on your language and sometimes compiler.
I know processors have different instruction sets above the basic x86 instruction set. Do all assembly languages support all instruction sets?
"Assembly language" is a kind of misnomer, at least in the way you are using it. Assemblers are less of a language (CS graduates may object) and more of a converter tool which takes a textual representation and generates a binary image from it, with a close-to-1:1 relationship between text elements (mnemonics, labels and numbers) and binary elements. There is no deeper logic behind the elements of an assembler language because their possibilities for being quoted and redirected end mostly at level 1; you can, for example, use EAX only in one instruction at a time. The next use of EAX in the next instruction bears no relationship to its previous use EXCEPT for the unwritten logical connection which the programmer had in mind; this is the reason why it is so easy to create bugs in assembler.
How would someone go about writing a routine in assembly, and then compiling it into object/binary code?
One would need to pin down the lowest common denominator of instruction sets and code the function once for each architecture the code is intended to run on. IOW, if you are not coding for a certain hardware platform which is defined at the time of writing (e.g. a game console, an embedded board), you no longer do this.
How would someone then reference the functions/routines within that assembly code from a language like C or C++?
You need to declare them in your HLL - see your compiler's handbook.
How do we know the code we've written in assembly is the fastest it possibly can be?
There is no way to know. Be happy about that and code on.

How to do inline assembly in C++ (Visual Studio 2010)

I'm writing a performance-critical, number-crunching C++ project where 70% of the time is used by the 200 line core module.
I'd like to optimize the core using inline assembly, but I'm completely new to this. I do, however, know some x86 assembly languages including the one used by GCC and NASM.
All I know:
I have to put the assembler instructions in _asm{} where I want them to be.
Problem:
I have no clue where to start. What is in which register at the moment my inline assembly comes into play?
You can access variables by their name and copy them to registers.
Here's an example from MSDN:
int power2( int num, int power )
{
__asm
{
mov eax, num ; Get first argument
mov ecx, power ; Get second argument
shl eax, cl ; EAX = EAX * ( 2 to the power of CL )
}
// Return with result in EAX
}
Using C or C++ in ASM blocks might be also interesting for you.
The Microsoft compiler is very poor at optimisation when inline assembly gets involved. It has to back up registers, because if you use eax it won't move its eax value to another free register; it will continue using eax. The GCC inline assembler is far more advanced on this front.
To get around this, Microsoft started offering intrinsics. These are a far better way to do your optimisation, as they allow the compiler to work with you. As Chris mentioned, inline assembly doesn't work under x64 with the MS compiler either, so on that platform you REALLY are better off just using the intrinsics.
They are easy to use and give good performance. I will admit I am often able to squeeze a few more cycles out by using an external assembler, but they're bloody good for the productivity improvement they provide.
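For example, a small sketch using the SSE intrinsics from <xmmintrin.h> (the arrays and values are arbitrary):

#include <cstdio>
#include <xmmintrin.h>   // SSE intrinsics: _mm_loadu_ps, _mm_add_ps, _mm_storeu_ps

int main() {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float r[4];

    // Adds four floats with one ADDPS instruction; unlike an __asm block, the
    // optimizer can still allocate registers and schedule code around it.
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(r, _mm_add_ps(va, vb));

    std::printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
}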
Nothing in particular is in the registers when the _asm block is executed. You need to move stuff into the registers. If there is a variable 'a', then you would need to:
__asm {
mov eax, [a]
}
It is worth pointing out that VS2010 comes with Microsoft's assembler (MASM). Right-click on a project, go to the build rules and turn on the assembler build rules, and the IDE will then process .asm files.
This is a somewhat better solution, as VS2010 supports 32-bit AND 64-bit projects and the __asm keyword does NOT work in 64-bit builds. You MUST use an external assembler for 64-bit code :/
I prefer writing entire functions in assembly rather than using inline assembly. This allows you to swap out the high level language function with the assembly one during the build process. Also, you don't have to worry about compiler optimizations getting in the way.
Before you write a single line of assembly, print out the assembly language listing for your function (with MSVC, the /FA family of switches produces a listing; /FAs interleaves it with the source). This gives you a foundation to build upon or modify. Another helpful tool is the interleaving of assembly with source code, which will tell you how the compiler is coding specific statements.
If you need to insert inline assembly for a large function, make a new function for the code that you need to inline. Again replace with C++ or assembly during build time.
These are my suggestions, Your Mileage May Vary (YMMV).
Go for the low hanging fruit first...
As other have said, the Microsoft compiler is pretty poor at optimisation. You may be able to save yourself a lot of effort just by investing in a decent compiler, such as Intel's ICC, and re-compiling the code "as is". You can get a 30 day free evaluation license from Intel and try it out.
Also, if you have the option to build a 64-bit executable, then running in 64-bit mode can yield a 30% performance improvement, due to the x2 increase in number of available registers.
I really like assembly, so I'm not going to be a nay-sayer here. It appears that you've profiled your code and found the 'hotspot', which is the correct way to start. I also assume that the 200 lines in question don't use a lot of high-level constructs like vector.
I do have to give one bit of warning: if the number-crunching involves floating-point math, you are in for a world of pain, specifically a whole set of specialized instructions, and a college term's worth of algorithmic study.
All that said: if I were you, I'd step through the code in question in the VS debugger, using the Disassembly view. If you feel comfortable reading the code as you go along, that's a good sign. After that, do a Release compile (Debug turns off optimization) and generate an ASM listing for that module. Then if you think you see room for improvement...you have a place to start. Other people's answers have linked to the MSDN documentation, which is really pretty skimpy but still a reasonable start.

How to figure out source line number from Linker Map

For some reason I have only the linker map for an application I am debugging. There is a crash log which says crash occurred at offset "myApp.exe! + 4CA24".
From the linker map I am able to locate the method. Say this is at offset "myApp.exe! + 4BD7C".
Is there anyway to figure out the exact line in source code using just the above info?
I know if we have a .cod file it makes it very easy, but I don't have one (and can't create).
The best you can do if you only have MAP-files is to study the EXE-file in a disassembler and compare to constructs that you recognize from the common ways the compiler generates code. These you have to learn. That means learning at least some assembler is required. This is good knowledge that will help you in the future, especially if you have to debug a lot of code.
A slightly simpler approach is to download the free Intel-books on processor instructions and simply check out their sizes. This way you can count your way to the faulting instruction. For best results the two methods should be used in conjunction with each other.
Typically what you'd be looking for is something that looks a bit like this:
mov DWORD PTR [edi+40], eax
(Instruction, register, offset, size and order can be different but indirection is typically where most code crashes)
Whatever you do you should seriously consider turning on COD-file generation for the future as that makes it super-easy to find the faulting line.
It depends on the actual information in the map file - if it has line number information (which is pretty rare nowadays), it'll be obvious and you'll be able to do it. Otherwise the best you can do is guess.