Do all programming languages have assemblers? [closed] - c++

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I've found that the Go language has its own assembler, and various C compilers such as Turbo C, LCC and OpenWatcom have their own assemblers. But some tools, like NASM and GAS, are only assemblers. Why do compilers for high-level languages come with an assembler? If the compiler has to convert the source code into assembly code, then every programming language must have its own assembler. Could someone please explain this to me?

Do all programming languages have assemblers?
No.
A modern C++ compiler can output machine code (or, in the case of LLVM, an intermediate representation) directly, without needing to generate assembly text. The assembler step is optional and, where it exists, it is provided as a convenience so the programmer can inspect the generated code.
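For illustration, here is a minimal sketch, assuming a GCC- or Clang-style driver (the file name square.cpp is hypothetical), of asking the compiler for assembly text or LLVM intermediate code purely for inspection:

    // square.cpp -- a small function whose generated code is easy to read.
    int square(int x) {
        return x * x;
    }

    // Inspection only; the normal path straight to an object file needs no .s file:
    //   g++     -O2 -S square.cpp -o square.s              # assembly text for reading
    //   clang++ -O2 -S -emit-llvm square.cpp -o square.ll  # LLVM intermediate representation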

Can only one driver drive a particular car? No. Here the assembler is the car: it can be built to serve many different drivers (compilers).
It is in the best interest of the company that invents a processor to document both the instruction set's machine code and an assembly language for it. Assembly language is not a universal thing; it is specific to each assembler, the program that parses it and turns it into machine code. So the processor vendor's documentation has to use some assembly language to be readable, and it is in the vendor's interest, to promote the new processor, to create (or hire someone to create) an assembler and other tools. In fact the vendor always has to create an assembler, if for no other reason than to validate the processor. So a generic assembler will exist.
Compiler vendors may then choose to support that generic assembler, a competitor's, their own, or any combination, and the reasons for those choices can be many. For various reasons, some compilers choose not to generate assembly language as an intermediate step and go straight to machine code.
Your typical generic retargetable compiler will tend to follow the compiler-assembler-linker model. It might do more than that, but it will tend toward that model. Again, which assembler and which linker? There are many reasons why a vendor might write its own or support a generic one, but if it uses that model it has to pick at least one to support. Someone in the business of selling a compiler will likely want to control not just the compiler but the whole toolchain, including a GUI. They may take from open source or build their own, but it is in their best interest to control their own destiny.
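As a concrete illustration of the compiler-assembler-linker model, here is a minimal sketch assuming a GCC-style toolchain; the file name hello.cpp and the exact flags are illustrative and vary by toolchain:

    // hello.cpp -- a trivial translation unit to run through the toolchain stages.
    #include <cstdio>

    int main() {
        std::puts("hello");
        return 0;
    }

    // Typical pipeline, one stage at a time:
    //   g++ -S hello.cpp -o hello.s   # compiler proper: C++ -> assembly text
    //   as hello.s -o hello.o         # assembler: assembly -> machine-code object file
    //   g++ hello.o -o hello          # driver invokes the linker: object file -> executable
    // "g++ -save-temps hello.cpp" keeps all of the intermediate files in one step.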

Related

Difference between developing C/C++ code for different target MCUs [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I see on quite a few job descriptions for embedded developer positions that a requirement is, for example, knowledge of embedded C programming on ARM microcontrollers.
What's not very clear to me is the difference between having knowledge of developing C for a particular ARM processor versus developing C for a PowerPC or another type of processor. When developing in C we are abstracted from the actual instructions that will be generated and from the architecture, so why does it really matter?
I would guess that it is important mainly to have knowledge of embedded development, knowing memory constraints etc., and that the actual target processor wouldn't matter too much. For example, won't a developer who has been writing code for PowerPC MCUs for many years be just as good at developing code for an ARM-based or other type of MCU?
Perhaps I'm missing something obvious here. Could someone please give an example of some major differences between writing C/C++ code for one type of processor and another? Thanks.
You all write about Linux and kernels, but 99% of embedded work is purely bare metal. Therefore, knowledge of the core and the peripherals is essential.
In most cases, until Linux programmers learn how to program bare metal, they are of little use.
And the time that takes costs your employer money.
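To make "knowledge of the core and the peripherals" concrete, here is a hedged bare-metal sketch; the register name and address are invented for illustration and would come from the specific MCU's reference manual:

    #include <cstdint>

    // Hypothetical memory-mapped GPIO output register; the address is made up
    // and must be taken from the target MCU's reference manual.
    constexpr std::uintptr_t GPIO_OUT_ADDR = 0x40020014;

    inline void set_pin_high(unsigned pin) {
        // volatile tells the compiler that every access really touches the hardware.
        auto* reg = reinterpret_cast<volatile std::uint32_t*>(GPIO_OUT_ADDR);
        *reg |= (1u << pin);
    }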
As usual, it depends.
If you are going to develop at a deep C level, it is a good idea to know about the architecture in order to optimize your code for speed and/or memory (data bus size, instructions per clock, etc.).
It is a must if you are going to develop at an even deeper level (and if we are talking about ARM with its Thumb, Thumb-2 and ARM states... it is madness).
Furthermore, each architecture, whether for legacy reasons or to make our lives easier, adds some tricks. For instance, the ARM program counter points to the instruction after the next one to be executed, which is quite different from PowerPC or RH850 or whatever.
Above that (I mean, at or close to the OS level), the compiler tends to be better than us, but it is still nice to know how it works (and it will definitely make debugging easier).
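As one small example of why the data bus and alignment rules matter, here is a sketch (the helper name is mine, not from the answer): on some cores an unaligned load faults or is slow, so the portable idiom is memcpy, which the compiler lowers to an efficient load where the architecture allows it.

    #include <cstdint>
    #include <cstring>

    // Read a 32-bit value from a possibly unaligned buffer position without
    // casting the pointer, which could trap on architectures that forbid
    // unaligned loads.
    std::uint32_t read_u32(const unsigned char* p) {
        std::uint32_t v;
        std::memcpy(&v, p, sizeof v);
        return v;
    }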
The answer is yes and no.
If a job is more application related, then MCU-specific knowledge will be of little help.
If the software development is at the kernel/BSP/driver level or is hardware related, then knowledge of the MCU/CPU hardware is critical.
Please check the job description in detail to know the kind of work expected.
Well, I haven't written any ARM code myself, but I would think the ARM-specific experience would be (in no particular order):
Experience with ARM-specific toolchains (particularly non-standard parts of those toolchains, pieces of proprietary software, etc.)
Knowledge of which relevant libraries are available on ARM rather than on Intel machines, and whether their ARM versions are performant enough, stable enough, etc.
Knowledge of what certain coding patterns look like when compiled to ARM machine code, and consequently good judgment regarding certain low-level implementation choices.
Ability to write inline ARM assembly when and if necessary (see the sketch after this list).
Knowledge of ARM systems' cache layout.
Ability to estimate/forecast the power consumption of code rather than just its performance (as ARM processors are sometimes more concerned with that).
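For the inline-assembly point above, here is a minimal AArch64 sketch, assuming a GCC/Clang-style compiler; the function name is mine and the register read is just one example:

    #include <cstdint>

    // Read the AArch64 virtual counter using GCC/Clang extended inline assembly.
    inline std::uint64_t read_virtual_counter() {
        std::uint64_t value;
        __asm__ volatile("mrs %0, cntvct_el0" : "=r"(value));
        return value;
    }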

Is it true that a C++ compiler is written multiple times for different platforms (Linux, Windows, etc.)? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
As far as I know, C++ is an ISO standard, so the committee provides a specification and a list of features to be implemented for each upcoming release.
Is it the case that every platform owner goes and writes their own implementation of that standard?
Or is there some core compiler code that is implemented once, which every other platform then wraps?
Or do they write their own C++ compiler from scratch?
Yes and no. A compiler basically consists of two parts: a parser (a.k.a. front-end) and a code generator (a.k.a. back-end). The parser is responsible for recognizing C++ grammar. The code generator constructs machine code for the target platform (hardware type and operating system) based on the information it gets from the parser. While the parser is platform independent, the code generator is tied to the target platform. So, in order to support a new platform, one can reuse the existing parser, but has to write a new code generator.
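A hedged illustration of the shared front-end / per-target back-end split, assuming a Clang driver with the relevant targets installed (the file name add.cpp is hypothetical):

    // add.cpp -- the same source fed to different back-ends.
    long add(long a, long b) {
        return a + b;
    }

    // Same parser (front-end), different code generators (back-ends):
    //   clang++ -S --target=x86_64-linux-gnu  add.cpp -o add_x86.s
    //   clang++ -S --target=aarch64-linux-gnu add.cpp -o add_arm64.s
    // The two .s files contain different instruction sets for the same C++ source.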
Basically, what the ISO standard sets out is a set of rules that must be followed by compiler vendors.
But these are standards for an implementation, not the actual implementation.
Every major hardware vendor knows how to use its own hardware best
This includes aspects like
1) ABI support - this includes things like binary formats, system calls and other interfaces
2) Shared Libraries.
3) Architecture Support.
So Microsoft, IBM, Intel, Oracle, and HP all have their own C++ compilers, which create optimal code on their latest hardware.
The standard itself, however, is a document that has to be purchased:
https://isocpp.org/std/the-standard
The following table presents compiler support for new C++ features. These include C++11, C++14, C++17, and later accepted revisions to the standard, as well as various technical specifications.
http://en.cppreference.com/w/cpp/compiler_support
Most likely they were created platform specific from the start; put simply, a Windows EXE or COM file won't run on Linux. However, later versions of a compiler can be compiled with older versions.

Is it possible to make C++ platform independent by making it run inside a VM just like in Java? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Given that Java is highly portable and does not have serious overhead, can't C++ be made platform independent?
Yes, it is perfectly possible. For example, you can compile C++ to JavaScript (see https://softwareengineering.stackexchange.com/questions/197940/how-to-run-c-code-in-browser-using-asm-js) or to CLI byte code (https://en.wikipedia.org/wiki/C%2B%2B/CLI) to run on Windows or Linux, or various other targets.
None of these currently performs as well as native C++, and most lack direct access to operating system resources. So the portability comes at some cost, and usually, if you were willing to pay the cost of targeting web browsers or the CLI, there are languages better suited to those platforms.
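As a rough sketch of the JavaScript/WebAssembly route via Emscripten (the file name is hypothetical and flags vary by Emscripten version):

    // hello.cpp -- compiled for the browser rather than a native CPU.
    #include <iostream>

    int main() {
        std::cout << "Hello from C++ in the browser\n";
        return 0;
    }

    // Typical Emscripten invocation; it produces hello.html, hello.js and hello.wasm:
    //   em++ hello.cpp -o hello.html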
In reality, the method of code execution (whether the code is compiled, interpreted, run in a VM, etc.) is more a property of the implementation than of the language.
When people say C++ is a compiled language and JavaScript is an interpreted language, that does not necessarily mean you can't write a compiler that translates JavaScript to machine code on your hardware of choice; rather, it describes the common way of implementing each language.
In practice, C++ is used because of its efficiency and close-to-the-metal features, which make it a good choice for performance-critical tasks like embedded systems programming, systems programming, graphics, etc., so getting C++ to run in a VM would defeat its purpose.
Kind of like buying a filet mignon and cooking it in the microwave.
Java compiles to an intermediate platform-independent byte code that is then interpreted at runtime by platform-specific JVMs. That is what allows Java to be portable. Each type of JVM is tailored to run the byte code on the platform's particular hardware architecture.
C/C++ compiles to native machine code that runs directly on the CPU (or as directly as the OS will allow). So no, you cannot compile C/C++ in a platform-independent manner. You have to use platform-specific compilers to compile C/C++ code for each hardware architecture that you want to run your code on.

Efficient C++ for ARM [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I will be giving internal training about C++ on ARM, focusing on programming tips and hints, and I have found some web pages like:
Embedded C interview Questions for Embedded Systems Engineers
Efficient C for ARM
All of the above are mainly about C on ARM, so I am wondering whether they apply to C++ as well, e.g. struct padding and so on.
Can you help me with that? Thanks.
I didn't look at the first link. The second link, Efficient C for ARM, is very good; thanks for finding and sharing that. I am going to refer people to it.
The Zen of Assembly Language is still as relevant today as when it came out, not because the modern x86 is related to the 8088/86 and its "cycle eaters", but because the thought process and analysis it teaches don't change over time. The cycle eaters may differ from language to language or target to target, but how you find them doesn't. That book was supposedly already outdated as far as actually tuning an 8088/86 when it came out (or so I read somewhere), but I read it then and have used what I learned every day since.
In the same way, Efficient C for ARM applies well to similar constructs in C++, but more importantly, look at the early slides, before any specific structures or code are shown. You have to analyze by inspection and by using a profiler (no different from what the Zen of Assembly Language says: look at it and time it). The Efficient C for ARM page then goes on to inspect some examples: take your C++ code, compile it, then disassemble it, and see what is really happening. The difficulty with doing that is that you have to realize there are many tuning knobs on a compiler and that compilers are constantly evolving, and different compilers, say GCC, LLVM (Clang) and Visual C/C++, are completely different. The same C++ source code presented to different compilers, different versions of those compilers, and the same compiler with different optimization settings is going to produce different results.
When you want to micro-optimize, you have to learn a lot about how compilers work by getting a considerable amount of experience disassembling and analyzing what the compiler does with your code (FOR EACH TARGET YOU CARE ABOUT). Only then can you begin to do some real optimization without having to resort to writing assembler. Despite what folks may tell you, you can do this: for some situations you can significantly improve execution performance simply by rearranging structures, functions, lines of code, etc. You can also make code that is more portable to other processors, and code that is generally faster on a number of platforms, not just one. The naysayers are right in that you need a good reason for it; sometimes it doesn't hurt readability, but often taking it too far makes your code unreadable, unmaintainable, fragile, etc. Always arrange your structures in a sensible way: larger, aligned variables first, then progressively smaller. Other things you may not want to do as a habit, but only on special occasions.
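To make the "larger, aligned variables first" advice concrete, here is a small sketch; the exact sizes depend on the ABI, so the comments only say "typically":

    #include <cstdint>

    // Members ordered largest-first: no internal padding on common ABIs.
    struct Sensible {
        double        d;  // 8 bytes
        std::uint32_t u;  // 4 bytes
        std::uint16_t s;  // 2 bytes
        std::uint8_t  b;  // 1 byte
    };                    // typically 16 bytes (tail padding only)

    // The same members in an unfortunate order force padding between fields.
    struct Wasteful {
        std::uint8_t  b;
        double        d;
        std::uint16_t s;
        std::uint32_t u;
    };                    // often 24 bytes where double needs 8-byte alignment

    static_assert(sizeof(Sensible) <= sizeof(Wasteful),
                  "largest-first ordering should never be bigger");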

recommended guides/books to read assembly [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
So lately I've been interested in reading the assembly displayed by a disassembler like OllyDbg. The reason I want to read this assembly is to learn how other developers build their applications, or things like the file formats of the binary files a program uses.
It's not like I'm a complete newbie at programming, since I've been using C++ and C# for a while now, and I have a solid understanding of C++, so the whole pointer concept is clear to me.
I know that there are tons of assembly guides out there on the internet, but I have no idea how reliable they are. This tutorial: http://jakash3.wordpress.com/2010/04/24/x86-assembly-a-crash-course-tutorial-i/ was very useful to me, and it is the kind of tutorial I like, with just a short explanation of each instruction. It was very clear, but it doesn't cover all of the assembly instructions.
I hope someone can suggest a good guide/tutorial.
I think the standard guide to assembly is The Art of Assembly. It's online and free.
If you are interested in x86 assembly and the opcodes for all instructions, try the Intel manuals (a free download). If you want to know how to program in assembler, use the recommendation by Seth Carnegie. Another download would be the 32-bit edition.
I learned much of what I know about assembly language from Matt Pietrek's Just Enough Assembly Language to Get By and its sequel. It is especially aimed at C or C++ programmers who want to read what the compiler emits in their debugger.
Pietrek's material is beyond doubt, and he writes clearly and entertainingly. It's tailored to the Windows platform, though.
This is a different approach. I recently released a first draft of learning assembly by example. It is not x86 assembly, though, on purpose. Learning a better and easier-to-understand instruction set first will get you through the basic concepts; from there, other instruction sets, x86 included, are often just a matter of downloading an instruction set reference for the desired processor. https://github.com/dwelch67/lsasim
Normally, googling "xyz instruction set" will get you a number of hits (instead of xyz, choose the one you are interested in: ARM, AVR, 6502, etc.). Ideally you want the vendor's documentation, which is usually free. There have been so many variations on the x86 by different companies that it adds to the mess, but there are a lot of good online references. For other families (MSP430, AVR, ARM, MIPS, PIC, etc.) you can often go to the core processor vendor's site to find a good reference. MSP430, ARM and Thumb are also good first-time instruction sets if you are not interested in the lsasim thing; MIPS or DLX as well, so I am told, though unfortunately I have not learned those yet. Save AVR and x86 until after you have learned something else. PIC has a few flavors; the traditional non-MIPS PIC instruction set is certainly educational in its simplicity and approach, and it might be a stepping stone to x86 (I don't necessarily recommend PIC as a first instruction set either). I recommend learning a couple of non-x86 instruction sets first, and when you do get to x86, I recommend learning the 8088/86 instructions first; I can give you an ISBN for the original Intel manuals, which can probably be found for a few bucks at a used (online) book store. Lots of websites have the x86 instruction set defined as well. I highly recommend a visible simulator before trying real hardware; it will make life easier. qemu, for example, is not very visible nor easy to make visible; gdb's simulators might be, as you can at least step and dump things out.
I really liked Programming From the Ground Up, a free book which aims to teach you the basics of ASM programming in a pretty easy to understand way. You can check it out here:
Programming from the Ground up