Is There a C++ Command Line?

So Python has a sort of command line thing, and so does Linux bash (obviously), and I'm sure other programming languages do, but does C++? If not, why do C++ scripts have to be compiled first and then run?

If not, why do C++ scripts have to be compiled first and then run?
C++ code does not need to be compiled to be run. There are interpreters.
The reason most of us prefer compiled C++ is that the resulting executable is 'faster'.
Interpreted computer languages can do extra things to achieve similar performance (i.e. just-in-time compile), but generally, 'scripts' are not in the same league of fast.
Some developers think not having to edit, compile, link is a good thing ... just type in code and see what it does.
Anyway, the answer is that there is no reason C++ "has to" be compiled; a compiler is just the preferred tool for most C++ developers.
Should you want to try out C++ interpreters, search the net for CINT, Ch, and others.

Indeed there are interpreters for C++ that do what you want. Check out Cling.
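For instance, a quick interactive session with Cling might look like this (a sketch; assumes cling is installed and on your PATH):
$ cling
[cling]$ #include <iostream>
[cling]$ int x = 6 * 7;
[cling]$ std::cout << x << std::endl;
42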
To the commenters saying C++ can't have interpreters because it's a compiled language: yes, typically you use a compiler with C++. But that doesn't mean it's impossible to write an interpreter for it.

There is no command line for running C++ instructions directly. C++ source is compiled first into target machine code (via intermediate object code, which is then linked) before it can run.
The reason is a matter of language design, driven by considerations like performance and error recovery. A compiler takes the program as a whole and generates target machine code directly, which runs faster than interpreted languages; an interpreter takes a few instructions at a time and needs an intermediate program to reach the final machine code, so it may be slow.
In a nutshell, it is language design evolution. When the first computers appeared, programming was done directly in machine language, and those programs ran instruction by instruction. Later, high-level languages appeared, in which machine language is abstracted behind human-friendly instructions, and compilers were designed to generate the equivalent machine code.
Later still, as program design advanced and CPU instruction cycle speeds increased, we could afford intermediate interpreters for writing safer programs.
The choice is wider now: earlier, performance-centric apps demanded compiled code, whereas now even interpreted code is fast enough for common use cases.
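For reference, the compile-link-run cycle described above looks like this with g++ (file names are illustrative):
g++ -c main.cpp -o main.o    # compile: source to object code
g++ main.o -o main           # link: object code to a native executable
./main                       # run directly on the machine, no interpreter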

While there are interpreters for C++-like languages, that is not really the point; C++ is a compiled language that is translated to native machine code. Conversely, scripting languages are (typically) interpreted (albeit there are also compilers for scripting languages that translate them to native code).
C++ is a systems-level capable language. You have to ask yourself - if all languages ran in a shell with a command line and were interpreted, what language is that shell or interpreter, or even the OS they are running on written in?
Ultimately you need a systems level language, and those are most often C, C++ and assembler.
Moreover because it is translated to machine level code at compilation, that code runs directly and stand-alone without the presence of any interpreter, and consequently can be simpler to deploy, and will execute faster.

Related

Which is faster, Clojure or ClojureScript (and why)?

If I had to guess, I'm pretty sure the answer is Clojure, but I'm not sure why. Logically (to me) it seems like ClojureScript should be faster:
Both are "dynamic", but ClojureScript
Compiles to JavaScript, running on V8
V8 engine is arguably the fastest dynamic language engine there is
V8 is written in C
whereas Clojure:
Is also dynamic
Runs on the JVM, which has no built-in dynamic support, so I'm thinking the JVM has to do whatever V8 is doing too, to enable dynamic support
and Java is slower than C
So how could Clojure be faster than ClojureScript? Does "dynamic" mean something different when saying JavaScript is dynamic and Clojure is dynamic? What am I not seeing?
(Of course if ClojureScript is indeed faster, then is the above reasoning correct?)
I guess what Clojure compiles to... is at least part of the question. I know the JVM part can't just be a plain interpreter (otherwise ClojureScript would be faster), but Clojure can't compile to regular bytecode, as there's no "dynamic" in the JVM. So what's the difference between how ClojureScript is compiled/executed, how Clojure is compiled/executed, and how plain Java is compiled/executed, and the performance differences implied in each?
Actually, V8 is written in C++. However, it does basically the same thing as the JVM, and the JVM is written in C. V8 JITs JavaScript code and executes the JIT'd code. Likewise, the JVM JIT-compiles (or hotspot-compiles) bytecode (NOT Java) and executes that generated code.
Bytecode is not static, as Java is. In fact it can be quite dynamic. Java, on the other hand, is mostly static, and it is not correct to conflate Java with bytecode. The Java compiler transforms Java source code into bytecode, and the JVM executes the bytecode. For more information, I recommend you look at John Rose's blog (example). There's a lot of good information there. Also, try to look for talks by Cliff Click (like this one).
Likewise, Clojure code is compiled directly to bytecode, and the JVM then does the same process with that bytecode. Compiling Clojure is usually done at runtime, which is not the speediest process. Likewise, the translation of ClojureScript into JavaScript is not fast either. V8's translation of JavaScript to executable form is obviously quite fast. Clojure can be ahead-of-time compiled to bytecode, though, and that can eliminate a lot of startup overhead.
As you said, it's also not really correct to say that the JVM interprets bytecode. The 1.0 release did that more than 17 years ago!
Traditionally, there were two compilation modes. The first mode is a JIT (Just In Time) compiler, where bytecode is translated directly to machine code. Java's JIT compiling executes fast, but it doesn't generate highly optimized code. It runs OK.
The second mode is called the hotspot compiler. The hotspot compiler is very sophisticated. It starts the program very quickly in interpreted mode, and it analyzes it as the program runs. As it detects hotspots (spots in the code that execute frequently), it will compile those. Whereas the JIT compiler has to be fast because nothing executes unless it's JIT'ed, the hotspot compiler can afford to spend extra time to optimize the snot out of the code that it's compiling.
Additionally, it can go back and revisit that code later on and apply yet more optimizations to it if necessary and possible. This is the point where the hotspot compiler can start to beat compiled C/C++. Because it has runtime knowledge of the code, it can afford to apply optimizations that a static C/C++ compiler cannot do. For example, it can inline virtual functions.
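To put that last point in C++ terms, here is a minimal sketch (Shape and Circle are made-up names): a static compiler that sees only this translation unit generally cannot inline the call through the base pointer, because the dynamic type is unknown until runtime, whereas a JIT that observes only one subclass ever being used can devirtualize and inline it.
#include <cstdio>

struct Shape {
    virtual double area() const = 0;
    virtual ~Shape() {}
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const { return 3.14159265 * r * r; }
};

double total(const Shape *s, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += s->area();  // virtual dispatch: a static compiler must
                           // assume any subclass could appear here
    return sum;
}

int main() {
    Circle c(2.0);
    std::printf("%f\n", total(&c, 1000));
    return 0;
}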
Hotspot has one other feature, which to the best of my knowledge no other environment has: it can also deoptimize code if necessary. For example, suppose the code was continually taking a single branch, and that branch was optimized; then the runtime conditions change, forcing the code down the other (unoptimized) branch, and performance suddenly becomes terrible. Hotspot can deoptimize that function and begin the analysis again to figure out how to make it run better.
A downside of hotspot is that it starts a bit slow. One change in the Java 7 JVM has been to combine the JIT compiler and the hotspot compiler. This mode is new, though, and it's not the default, but once it is, initial startup should be good, and then it can begin the advanced optimizations that the JVM is so good at.
Cheers!
This question is hard to answer precisely, without reference to a specific benchmark task (or even specific versions of Clojure or ClojureScript).
Having said that, in most situations I would expect Clojure to be somewhat faster. Reasons:
Clojure usually compiles down to static code, so it doesn't actually do any dynamic lookups at runtime. This is quite important: high performance code often produces bytecode that is very similar to statically typed Java. The question appears to be making the false assumption that a dynamic language has to do dynamic method lookups at runtime: this is not always the case (and usually isn't in Clojure)
The JVM JIT is very well engineered, and I believe it is currently still a bit better than the JavaScript JITs, despite how good V8 is.
If you need concurrency or need to take advantage of multiple cores then clearly there is no contest since JavaScript is single-threaded.....
The Clojure compiler is more mature than ClojureScript, and has had quite a lot of performance tuning work in recent years (including things like primitive support, protocols etc.)
Of course, it is possible to write fast or slow code in any language. This will make more of a difference than the fundamental difference between the language implementations.
And more fundamentally, your choice between Clojure and ClojureScript shouldn't be about performance in any case. Both offer compelling productivity advantages. The main deciding factor should be:
If you want to run on the web, use ClojureScript
If you want to run on the server in a JVM environment, use Clojure
This is not so much an answer as an historical comment: Both the HotSpot VM and the V8 js engine can have their origins traced to the Self project at Sun Microsystems, which I think prototyped a lot of the technology that allows them to run as fast as they do. Something to consider when comparing them both. I would've posted this as a comment but the reputation system prevented me.

Can a scripting language be translated into other languages?

Can a scripting language be translated into C, C++, or Java so it can be run on an IDE without rewriting the code?
In theory, yes, it is possible to translate any scripting language into C, C++, or Java code. A theoretically valid way of doing this would be to take the source code for the interpreter and then to hardcode in the script that it's going to be executing. The resulting code would then be "run the interpreter written in C/C++/Java on the specified source code."
In practice, there usually isn't a good way of translating from a scripting language to some other target language in a way that preserves the original coding style. Each language has its own constructs, idioms, and idiosyncrasies and in translating from the source scripting language to a target language much of the original structure is lost. That said, there are many projects that do this sort of conversion for performance reasons. For example, Facebook's HipHop compiler translates PHP into C++ for efficiency reasons. The resulting code is not intended to be read by humans, though.
So in short, yes, it can be done, but not in a way that's going to result in pretty code.
Take a look at shedskin for an example of a Python to C++ translator. It isn't perfect. It has some limitations on what code can be translated. But in general it works.
The main reason to do so, in this case, is speed and ease of integration with other existing C++ software.
In theory, yes, it's possible. Depending on the scripting language and its supporting "virtual machine", there are tools to do this (semi-)automatically. The more heavily interpreted the language is, the less likely you will be able to translate the code (for example, translating an HTML webpage into native C is kind of ridiculous, as opposed to translating MATLAB code into C or C++). In general, generic tools for translating code are rarely good enough that you can compile and run the code that is produced; very often they will do most of the syntax translation (basically find & replace operations) and maybe some more advanced stuff, but most of the time you will still have significant work to do. It's like using Google Translate to convert a webpage from one language to another: it is never perfect, and it depends on how close the two languages are.
In my opinion, however, I would say that code translation is a very dangerous business. It is a lot easier to make typos or other mistakes when you are manually rewriting code that you know very little about. An automatic translation tool won't perform much better on that front either. And then, once you have translated the code, what if there are bugs? How are you supposed to find them, and fix them, when you know very little about the actual code? This can very rapidly become a nightmarish experience. I have done it in the past, and I don't do it anymore!
BTW: if you are looking to use code that is written in a script language inside a project written in another language, then you might consider interfacing the languages instead of translating one code to the other language. Most programming languages and scripting languages have facilities to interface with other languages (e.g. DLLs or COM/ActiveX components). It is always better to preserve the code in the language it was originally written if at all possible.
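As a sketch of that interfacing approach, embedding a Lua interpreter in a C++ host looks roughly like this (Lua 5.x C API; assumes the Lua headers and library are installed, built with something like g++ embed_lua.cpp -llua):
// embed_lua.cpp -- run a script string from inside a C++ program
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <iostream>

int main() {
    lua_State *L = luaL_newstate();  // fresh interpreter state
    luaL_openlibs(L);                // load the standard Lua libraries
    // The script stays a script; the host just hands it to the interpreter.
    if (luaL_dostring(L, "print('hello from Lua inside C++')") != 0)
        std::cerr << "Lua error: " << lua_tostring(L, -1) << "\n";
    lua_close(L);
    return 0;
}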
There is a programming language called Haxe that can be translated into C++, Java, Javascript, C# and several other languages. This language appeared relatively recently, and is designed to be translated into as many target languages as possible.
For C, there are some scripting languages which look a little like C. Maybe that's a starting point. Lua comes to my mind.
For Java, there is a scripting language, BeanShell (bsh), which runs simplified Java code as a script. But you would have to rewrite it to make Java code out of it, and I don't know how easy it would be to make that happen automatically.
A different approach would be: write a C interpreter, so you can use C code for scripting, and just compile it when you need to.

Developing embedded software library, C or C++?

I'm in the process of developing a software library to be used for embedded systems like an ARM chip or a TI DSP (mostly embedded systems, but it would be nice if it could also be used in a PC environment). Obviously this is a pretty broad range of target systems, so being able to easily port to different systems is a priority. The library will be used for interfacing with specific hardware and running some algorithms.
I am thinking C++ is the best option, over C, because it is much easier to maintain and read. I think the additional overhead is worth it for being able to work in the object oriented paradigm. If I was writing for a very specific system, I would work in C but this is not the case.
I'm assuming that these days most compilers for popular embedded systems can handle C++. Is this correct?
Are there any other factors I should consider? Is my line of thinking correct?
If portability is very important for you, especially on an embedded system, then C is certainly a better option than C++. While C++ compilers on embedded platforms are catching up, there's simply no match for the widespread use of C, for which any self-respecting platform has a compliant compiler.
Moreover, I don't think C is inferior to C++ where it comes to interfacing hardware. The amount of abstraction is sufficiently low (i.e. no deep class hierarchies) to make C just as good an option.
There is certainly good support of C++ for ARM. ARM have their own compiler and g++ can also generate EABI compliant ARM code. When it comes to the DSPs, you will have to look at their toolchain to decide what you are going to do. Be aware that the library that comes with a DSP may well not implement the full C or C++ standard library.
C++ is suitable for low-level embedded development and is used in the SymbianOS Kernel. Having said that, you should keep things as simple as possible.
Avoid exceptions, which may demand more library support than what is present (therefore use new (std::nothrow) Foo instead of new Foo; a sketch follows this list).
Avoid memory allocations as much as possible and do them as early as possible.
Avoid complex patterns.
Be aware that templates can bloat your code.
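Here is a minimal sketch of the nothrow advice from the first point (Foo is just a placeholder type):
#include <new>      // std::nothrow
#include <cstdio>

struct Foo { int x; };

int main() {
    // Failure is reported with a null pointer instead of a thrown
    // std::bad_alloc, so no exception machinery is required.
    Foo *f = new (std::nothrow) Foo;
    if (f == 0) {
        std::puts("allocation failed");
        return 1;
    }
    f->x = 42;
    delete f;
    return 0;
}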
I have seen many complaints that C++ is "bloated" and inappropriate for embedded systems.
However, in an interview with Stroustrup and Sutter, Bjarne Stroustrup mentioned that he'd seen heavily templated C++ code going into (IIRC) the braking systems of BMWs, as well as in missile guidance systems for fighter aircraft.
What I take away from this is that experts of the language can generate sophisticated, efficient code in C++ that is most certainly suitable for embedded systems. However, a "C With Classes"[1] programmer that does not know the language inside out will generate bloated code that is inappropriate.
The question boils down to, as always: in which language can your team deliver the best product?
[1] I know that sounds somewhat derogatory, but let me say that I know an awful lot of these guys, and they churn out an awful lot of relatively simple code that gets the job done.
C++ compilers for embedded platforms are much closer to '83's C with Classes than to the '98 C++ standard, let alone C++0x. For instance, some platforms we use still compile with a special version of gcc made from gcc-2.95!
This means that your library interface will not be able to provide interfaces with containers/iterators, streams, or such advanced C++ features. You'll have to stick with simple C++ classes, that can very easily be expressed as a C interface with a pointer to a structure as first parameter.
This also means that within your library, you won't be able to use templates to their full power. If you want portability, you will still be restricted to using templates for generic containers, which is, I'm sure you'll admit, only a very tiny part of the power of C++ templates.
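To illustrate, such an interface might look like the following sketch (all names invented for the example): the header exposes only an opaque struct pointer and free functions, while the implementation behind it is ordinary C++.
/* counter.h -- plain C interface over a C++ implementation */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct Counter Counter;   /* opaque handle; layout hidden from C */

Counter *counter_create(void);
void     counter_increment(Counter *c);
int      counter_value(const Counter *c);
void     counter_destroy(Counter *c);

#ifdef __cplusplus
}
#endif

// counter.cpp -- ordinary C++ behind the C interface
#include "counter.h"
#include <new>

struct Counter { int n; Counter() : n(0) {} };

extern "C" Counter *counter_create(void)        { return new (std::nothrow) Counter; }
extern "C" void counter_increment(Counter *c)   { if (c) ++c->n; }
extern "C" int  counter_value(const Counter *c) { return c ? c->n : 0; }
extern "C" void counter_destroy(Counter *c)     { delete c; }
As a bonus, this style of interface also sidesteps the C++ ABI problems described in the last answer below, since the exported symbols follow the platform's C ABI.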
C++ has little or no overhead compared to C if used properly in an embedded environment. C++ has many advantages for information hiding, OO, etc. If your embedded processor is supported by gcc in C then chances are it will also be supported with C++.
On the PC, C++ isn't a problem at all -- high quality compilers are extremely widespread and almost every C compiler is directly associated with a C++ compiler that's quite good, though there are a few exceptions such as lcc and the newly revived pcc.
Larger embedded systems like those based on the ARM are generally quite similar to desktop systems in terms of tool chain availability. In fact, many of the same tools available for desktop machines can also generate code to run on ARM-based machines (e.g., lots of them use ports of gcc/g++). There's less variety for TI DSPs (and a greater emphasis on quality of generated code than source code features), but there are still at least a couple of respectable C++ compilers available.
If you want to work with smaller embedded systems, the situation changes in a hurry. If you want to be able to target something like a PIC or an AVR, C++ isn't really much of an option. In theory, you could get (for example) Comeau to produce a custom port that generated code you could compile on that target's C compiler -- but chances are pretty good that even if you did, it wouldn't work out very well. These systems are really just too limited (especially on memory size) for C++ to fit them well.
Depending on what your intended use is for the library, I think I'd suggest implementing it first as C - but the design should keep in mind how it would be incorporated into a C++ design. Then implement C++ classes on top of and/or along side of the C implementation (there's no reason this step cannot be done concurrently with the first). If your C design is done with a C++ design in mind, it's likely to be as clean, readable and maintainable as the C++ design would be. This is somewhat more work, but I think you'll end up with a library that's useful in more situations.
While you'll find C++ used more and more on various embedded projects, there are still many that restrict themselves to C (and I'd guess this is more often the case than not) - regardless of whether or not the tools support C++. It would be a shame to have a nice library of routines that you could bring to a new project you're working on, but be unable to use them because C++ isn't being used on that particular project.
In general, it's much easier to use a well-designed C library from C++ than the other way around. I've taken this approach with several sets of code including parsing Intel Hex files, a simple command parser, manipulating synchronization objects, FSM frameworks, etc. I'm planning on doing a simple XML parser at some point.
Here's an entirely different C++-vs-C argument: stable ABIs. If your library exports a C ABI, it can be compiled with any compiler that works on the system, because C ABIs are generally platform standards. If your library exports a C++ ABI, it can only be compiled with a matching compiler -- because C++ ABIs are usually not platform standards, and often differ from compiler to compiler and even version to version.
Interestingly, one of the rare exceptions to this is ARM; there's an ARM C++ ABI specification, and all compliant ARM compilers follow it. This is not true on x86; on x86, you're lucky if a C++ library compiled with a 4.1 version of GCC will link correctly with an application compiled with GCC 4.4, and don't even ask about 3.4.6.
Even if you export a C ABI, you can have problems. If your library uses C++ internally, it will then link to libstdc++ for things in the C++ std:: namespace. If your user compiles a C++ application that uses your library, they'll also link to libstdc++ -- and so the overall application gets linked to libstdc++ twice, and their libstdc++ may not be compatible with your libstdc++, which can (or so I understand) lead to odd errors from the intersection of the two. Considerably less likely, but still possible.
All of these arguments only apply because you're writing a library, and they're not showstoppers. But they are things to be aware of.

Can C++ be compiled into platform independent code? Why Not?

Is it possible to compile a C++ program into some intermediate stage (similar to bytecode in Java) where the output is platform-independent, and then later compile/link at runtime to run as native (platform-dependent) code?
If answer is no, why?
It is indeed possible, see for example LLVM.
Of course. Keep in mind that the C++ standard only specifies behavior: What should happen when this program executes. It doesn't specify how it should be implemented.
C++ code can be compiled to an intermediate format and JIT'ed to machine code, or it can be interpreted or anything else you like.
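As a concrete illustration, clang can emit LLVM bitcode from C++, which the LLVM tools can then JIT or lower on the target (standard clang/LLVM tools; note that in practice the bitcode still bakes in target details such as type sizes, so it is less portable than Java bytecode):
clang++ -emit-llvm -c prog.cpp -o prog.bc   # C++ to LLVM bitcode
lli prog.bc                                 # JIT-execute the bitcode
llc prog.bc -o prog.s                       # or lower it to native assembly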
This is trivial, and most compilers already do that. gcc compiles to RTL (register transfer language), which is then translated to the target CPU.
Similarly, managed C++ and C++/CLI are compiled to .NET.
Finally, you can consider the Church-Turing thesis, which is a statement of the equivalence of programming languages, so C++ can be compiled/translated to your favorite platform-independent language (say, Perl, Lisp, C--, etc.).
C++ source code (with some restrictions) is itself a platform-independent bytecode.
Why not?
Indeed, the "bytecode" compilation procedure is then mere copying, and the virtual machine that runs the "bytecode" is a C++ compiler plus a wrapper script. Yeah, it does some stuff that resembles compilation to machine code--but that's an implementation detail.
Here's a Linux implementation of such a "C++ virtual machine":
#!/bin/sh
# compile the given C++ file to a temp binary, then run it with the remaining args
tmp=`mktemp`
g++ "$1" -o "$tmp" && shift && "$tmp" "$@"
Does it answer the question? I think it does, at least to the extent that the question is specific, because it clearly demonstrates the theoretical possibility of compiling C++ into bytecode. Practical implementations also exist, for example LLVM.
Yes it is technically feasible. A bit of a plug for a former employer, but here's an implementation of exactly that: http://antixlabs.com/products/antixgamedevelopmentkit/. The packaging process is, roughly speaking, C/C++ -> (compiler) -> LLVM -> (backend) -> bespoke bytecode -> zip file. This is platform-independent. Once it's on the user's device the "player" converts bespoke bytecode -> (translator for that device) -> native elf file -> (loader/linker) -> fixed up code.
If the real question is, "does there exist any such industry-standard intermediate format which is widely supported on multiple platforms and suitable for all-purpose use, like Java bytecode?" then the answer is "no".
As for why, I'd say it's because there is no one organisation which has enough influence over C++ programmers, and no true necessity for Java-style deployment of C++ applications. Sun invented Java and a GUI library in one go, presented it to programmers, and didn't introduce the big proliferation of profiles until later.
C++ doesn't even have a standard GUI, and C++ environments are far more fragmented than Java. How do you tell a Windows app developer, a mobile phone developer, a smartcard implementer and a stock exchange backend implementer that they need to ditch their existing toolchain in favour of a platform-independent deployment mechanism for C++? They don't. And that's even before you get to the folks writing OSes and device drivers in C or C++ mixed with assembly. It's simply impossible to come up with a standard environment to support all of them.
The Parrot project is planned to have C++ bytecode compilation and execution. Visual Studio can also compile C++ as bytecode (managed C++).

dynamic code compilation

I'm working on a program that renders iterated fractal systems. I wanted to add the functionality where someone could define their own iteration process, and compile that code so that it would run efficiently.
I currently don't know how to do this and would like tips on what to read to learn how to do this.
The main program is written in C++ and I'm familiar with C++. In fact given most of the scenarios I know how to convert it to assembly code that would accomplish the goal, but I don't know how to take the extra step to convert it to machine code. If possible I'd like to dynamically compile the code like how I believe many game system emulators work.
If it is unclear what I'm asking, tell me so I can clarify.
Thanks!
Does the routine to be compiled dynamically need to be in any particular language? If the answer to that question is "Yes, it must be C++", you're probably out of luck. C++ is about the worst possible choice for online recompilation.
Is the dynamic portion of your application (the fractal iterator routine) a major CPU bottleneck? If you can afford using a language that isn't compiled, you can probably save yourself an awful lot of trouble. Lua and JavaScript are both heavily optimized interpreted languages that only run a few times slower than native, compiled code.
If you really need the dynamic functionality to be compiled to machine code, your best bet is probably going to be using clang/llvm. clang is the C/Objective-C front end being developed by Apple (and a few others) to make online, dynamic recompilation perform well. llvm is the backend clang uses to translate from a portable bytecode to native machine code. Be advised that clang does not currently support much of C++, since that's such a difficult language to get right.
Some CPU emulators treat the machine code as if it was byte code and they do a JIT compile, almost as if it was Java. This is very efficient, but it means that the developers need to write a version of the compiler for each CPU their emulator runs on and for each CPU emulated.
That usually means it only works on x86 and is annoying to anyone who would like to use something different.
They could also translate it to LLVM or Java byte code or .Net CIL and then compile it, which would also work.
In your case I am not sure that sort of thing is the best way to go. I think that I would do this by using dynamic libraries. Make a directory that is supposed to contain "plugins" and let the user compile their own. Make your program scan the directory and load each DLL or .so it finds.
Doing it this way means you spend less time writing code compilers and more time actually getting stuff done.
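A rough POSIX sketch of that plugin approach (the plugin path and the iterate entry point are invented for the example; the plugin must declare its entry point extern "C" so the symbol isn't mangled, and the host links with -ldl):
// plugin_host.cpp -- load a user-compiled shared library and call into it
#include <dlfcn.h>
#include <cstdio>

typedef void (*IterateFn)(double *x, double *y);   // assumed plugin signature

int main() {
    // The user compiled their own code beforehand, e.g.:
    //   g++ -shared -fPIC user_iter.cpp -o plugins/user_iter.so
    void *handle = dlopen("./plugins/user_iter.so", RTLD_NOW);
    if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    IterateFn iterate = (IterateFn)dlsym(handle, "iterate");
    if (!iterate) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    double x = 0.1, y = 0.2;
    iterate(&x, &y);                 // one step of the user-defined iteration
    std::printf("(%f, %f)\n", x, y);

    dlclose(handle);
    return 0;
}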
If you can write your dynamic extensions in C (not C++), you might find the Tiny C Compiler to be of use. It's available under the LGPL, it works on Windows and Linux, and it's a small executable (or library) at ~100KB for the preprocessor, compiler, linker and assembler, all of which it does very fast. The downside, of course, is that it can't compare to the optimizations you get with GCC. Another potential downside is that it's x86-only AFAIK.
If you did decide to write assembly, TCC can handle that -- the documentation says it supports a gas-like syntax, and it does support X86 opcodes.
TCC also fully supports ANSI C, and it's nearly fully compliant with C99.
That being said, you could either include TCC as an executable with your application or use libtcc (there's not too much documentation of libtcc online, but it's available in the source package). Either way, you can use tcc to generate dynamic or shared libraries, or executables. If you went the dynamic library route, you would just put a Render (or whatever) function in it, dlopen or LoadLibrary it, and call Render to finally run the user-designed rendering. Alternatively, you could make a standalone executable and popen it, and do all your communication through the standalone's stdin and stdout.
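If you go the libtcc route, in-memory compilation looks roughly like this (a sketch against the libtcc API as of TCC 0.9.26/0.9.27; check libtcc.h for your version):
/* tcc_jit.c -- compile a C string to memory and call into it */
#include <libtcc.h>
#include <stdio.h>

int main(void) {
    TCCState *s = tcc_new();
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);   /* compile into memory */
    tcc_compile_string(s, "int add(int a, int b) { return a + b; }");
    tcc_relocate(s, TCC_RELOCATE_AUTO);          /* lay the code out in memory */
    int (*add)(int, int) = (int (*)(int, int))tcc_get_symbol(s, "add");
    printf("%d\n", add(2, 3));                   /* prints 5 */
    tcc_delete(s);
    return 0;
}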
Since you're generating pixels to be displayed on a screen, have you considered using HLSL with dynamic shader compile? That will give you access to SIMD hardware designed for exactly this sort of thing, as well as the dynamic compiler built right into DirectX.
LLVM should be able to do what you want to do. It allows you to form a description of the program you'd like to compile in an object-oriented manner, and then it can compile that program description into native machine code at runtime.
Nanojit is a pretty good example of what you want. It generates machine code from an intermediate language. It's C++, and it's small and cross-platform. I haven't used it very extensively, but I enjoyed toying around just for demos.
Spit the code to a file and compile it as a dynamically loaded library, then load it and call it.
Is there a reason why you can't use a GPU-based solution? This seems to be screaming for one.