I'm working on a program that renders iterated fractal systems. I wanted to add the functionality where someone could define their own iteration process, and compile that code so that it would run efficiently.
I currently don't know how to do this and would like tips on what to read to learn how to do this.
The main program is written in C++ and I'm familiar with C++. In fact, for most of the scenarios I can imagine, I know how to write assembly code that would accomplish the goal, but I don't know how to take the extra step of converting it to machine code. If possible I'd like to dynamically compile the code, the way I believe many game system emulators work.
If it is unclear what I'm asking, tell me so I can clarify.
Thanks!
Does the routine to be compiled dynamically need to be in any particular language? If the answer to that question is "Yes, it must be C++" you're probably out of luck. C++ is about the worst possible choice for online recompilation.
Is the dynamic portion of your application (the fractal iterator routine) a major CPU bottleneck? If you can afford to use a language that isn't compiled, you can probably save yourself an awful lot of trouble. Lua and JavaScript are both heavily optimized interpreted languages that run only a few times slower than native, compiled code.
If you really need the dynamic functionality to be compiled to machine code, your best bet is probably going to be using clang/llvm. clang is the C/Objective-C front end being developed by Apple (and a few others) to make online, dynamic recompilation perform well. llvm is the backend clang uses to translate from a portable bytecode to native machine code. Be advised that clang does not currently support much of C++, since that's such a difficult language to get right.
Some CPU emulators treat the machine code as if it was byte code and they do a JIT compile, almost as if it was Java. This is very efficient, but it means that the developers need to write a version of the compiler for each CPU their emulator runs on and for each CPU emulated.
That usually means it only works on x86 and is annoying to anyone who would like to use something different.
They could also translate it to LLVM or Java byte code or .Net CIL and then compile it, which would also work.
In your case I am not sure that sort of thing is the best way to go. I think that I would do this by using dynamic libraries. Make a directory that is supposed to contain "plugins" and let the user compile their own. Make your program scan the directory and load each DLL or .so it finds.
Doing it this way means you spend less time writing code compilers and more time actually getting stuff done.
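To make the plugin idea concrete, here is a minimal sketch, assuming a POSIX system and a convention (made up for illustration) that every plugin exports a C function named iterate; on Windows you would use LoadLibrary/GetProcAddress instead of dlopen/dlsym.

    // Scan a "plugins" directory and load every shared object found there.
    // Requires C++17 for <filesystem>; link with -ldl on Linux.
    #include <dlfcn.h>
    #include <filesystem>
    #include <iostream>
    #include <vector>

    using IterateFn = void (*)(double* buffer, int width, int height);

    int main() {
        std::vector<IterateFn> plugins;
        for (const auto& entry : std::filesystem::directory_iterator("plugins")) {
            if (entry.path().extension() != ".so") continue;
            void* handle = dlopen(entry.path().c_str(), RTLD_NOW);
            if (!handle) { std::cerr << dlerror() << '\n'; continue; }
            // Look up the agreed-upon entry point in the plugin.
            if (auto fn = reinterpret_cast<IterateFn>(dlsym(handle, "iterate")))
                plugins.push_back(fn);
        }
        // ... call each plugin's iterate() during rendering ...
    }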
If you can write your dynamic extensions in C (not C++), you might find the Tiny C Compiler to be of use. It's available under the LGPL, it works on Windows and Linux, and it's a small executable (or library) at ~100kb for the preprocessor, compiler, linker and assembler, all of which it runs very fast. The downside, of course, is that it can't compare to the optimizations you get with GCC. Another potential downside is that it's X86 only, AFAIK.
If you did decide to write assembly, TCC can handle that -- the documentation says it supports a gas-like syntax, and it does support X86 opcodes.
TCC also fully supports ANSI C, and it's nearly fully compliant with C99.
That being said, you could either include TCC as an executable with your application or use libtcc (there's not much documentation of libtcc online, but it's available in the source package). Either way, you can use tcc to generate dynamic or shared libraries, or executables. If you went the dynamic library route, you would just put a Render (or whatever) function in it, dlopen or LoadLibrary it, and call Render to run the user-designed rendering. Alternatively, you could make a standalone executable and popen it, and do all your communication through the standalone's stdin and stdout.
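A rough sketch of the in-memory libtcc route, assuming the user supplies a C function double iterate(double); check your version's libtcc.h, since the relocation API has changed between releases.

    #include <libtcc.h>
    #include <stdio.h>

    int main() {
        const char* user_code =
            "double iterate(double z) { return z * z + 0.25; }";

        TCCState* s = tcc_new();
        tcc_set_output_type(s, TCC_OUTPUT_MEMORY);   // JIT into memory, no file
        if (tcc_compile_string(s, user_code) == -1) return 1;
        tcc_relocate(s, TCC_RELOCATE_AUTO);          // lay out the generated code

        typedef double (*IterFn)(double);
        IterFn iterate = (IterFn)tcc_get_symbol(s, "iterate");
        printf("%f\n", iterate(0.5));                // run the freshly compiled code

        tcc_delete(s);
    }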
Since you're generating pixels to be displayed on a screen, have you considered using HLSL with dynamic shader compile? That will give you access to SIMD hardware designed for exactly this sort of thing, as well as the dynamic compiler built right into DirectX.
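The answer likely had the D3D9-era D3DX shader compiler in mind; in a DirectX 11 setup the runtime compile is a single D3DCompile call. A sketch, with a made-up source name and entry point:

    // Compile an HLSL pixel shader at runtime; link against d3dcompiler.lib.
    #include <d3dcompiler.h>
    #include <cstddef>
    #include <cstdio>

    ID3DBlob* CompileIterationShader(const char* hlslSource, std::size_t len) {
        ID3DBlob* bytecode = nullptr;
        ID3DBlob* errors = nullptr;
        HRESULT hr = D3DCompile(hlslSource, len, "iteration.hlsl",
                                nullptr, nullptr,     // no macros, no include handler
                                "main", "ps_5_0",     // entry point, shader model
                                0, 0, &bytecode, &errors);
        if (FAILED(hr)) {
            if (errors) {
                std::fprintf(stderr, "%s", (const char*)errors->GetBufferPointer());
                errors->Release();
            }
            return nullptr;
        }
        return bytecode;   // feed to ID3D11Device::CreatePixelShader, etc.
    }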
LLVM should be able to do what you want to do. It allows you to form a description of the program you'd like to compile in an object-oriented manner, and then it can compile that program description into native machine code at runtime.
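A minimal sketch of that flow using LLVM's C API with MCJIT: build a function description in memory, then ask the JIT for a native function pointer. The exact headers and JIT interface vary between LLVM versions, so treat this as an outline rather than drop-in code.

    #include <llvm-c/Core.h>
    #include <llvm-c/ExecutionEngine.h>
    #include <llvm-c/Target.h>

    int main() {
        LLVMLinkInMCJIT();
        LLVMInitializeNativeTarget();
        LLVMInitializeNativeAsmPrinter();

        // Describe "double square(double x) { return x * x; }" in memory.
        LLVMModuleRef mod = LLVMModuleCreateWithName("fractal");
        LLVMTypeRef dbl = LLVMDoubleType();
        LLVMTypeRef fnTy = LLVMFunctionType(dbl, &dbl, 1, 0);
        LLVMValueRef fn = LLVMAddFunction(mod, "square", fnTy);

        LLVMBuilderRef b = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(b, LLVMAppendBasicBlock(fn, "entry"));
        LLVMValueRef x = LLVMGetParam(fn, 0);
        LLVMBuildRet(b, LLVMBuildFMul(b, x, x, "x2"));

        // JIT-compile the module and fetch a callable native pointer.
        char* err = nullptr;
        LLVMExecutionEngineRef ee;
        if (LLVMCreateExecutionEngineForModule(&ee, mod, &err)) return 1;
        auto square = reinterpret_cast<double (*)(double)>(
            LLVMGetFunctionAddress(ee, "square"));
        double y = square(3.0);   // 9.0, via freshly generated machine code
        (void)y;
    }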
Nanojit is a pretty good example of what you want. It generates machine code from an intermediate language. It's written in C++, and it's small and cross-platform. I haven't used it very extensively, but I enjoyed toying around with it just for demos.
Spit the code to a file and compile it as a dynamically loaded library, then load it and call it.
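In sketch form, on Linux, with a made-up "iterate" entry point:

    #include <cstdlib>
    #include <dlfcn.h>
    #include <fstream>
    #include <string>

    using IterFn = double (*)(double);

    IterFn compile_user_code(const std::string& body) {
        // 1. Spit the user's code out to a file, wrapped in an extern "C" shell.
        std::ofstream("user.cpp") << "extern \"C\" double iterate(double z) {"
                                  << body << "}";
        // 2. Compile it into a shared library with the system compiler.
        if (std::system("g++ -O2 -shared -fPIC user.cpp -o user.so") != 0)
            return nullptr;   // compile error
        // 3. Load the fresh library and hand back the function pointer.
        void* h = dlopen("./user.so", RTLD_NOW);
        return h ? reinterpret_cast<IterFn>(dlsym(h, "iterate")) : nullptr;
    }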
Is there a reason why you can't use a GPU-based solution? This seems to be screaming for one.
This question has been bothering me so much for the past couple of days. I was wondering how the standard library works, in terms of functionality. I couldn't find an answer anywhere, even by checking the source code provided by the LLVM compiler, which is, for a beginner like me, a really complicated piece of code.
What I'm basically trying to understand here is how the C++ standard library works. For example, let's take the fstream header file, which consists of a bunch of functions that help to write to and read from files.
How does it work? Does it use the OS-specific API (since the library is cross-platform), or what? And, if the standard library can do it, aren't I supposed to be able to mess with files as well without calling the standard fstream header (which in my experience I can't do)?
I apologize if my questions are unclear since I'm not a native English speaker: feel free to modify this text so as to make it clearer.
Does it use the OS-specific API (since the library is cross-platform), or what?
At some point, the OS specific API is used. The fstream implementation does not necessarily call an OS function directly. It might use other classes, which call functions inherited from C, etc., but eventually the call chain will lead to an OS call. (Yes, the details are often too complicated for an intermediate programmer to follow. So, as a self-described beginner, your findings are not surprising.)
The library is cross-platform in the sense that on your end (the C++ programmer), the interface is the same regardless of platform. It is not, however, the same library on every platform. Each platform has its own library, exposing the same interface on the C++ side, but making use of different OS calls. (In fact, the same platform might have multiple standard libraries, as the library implementation is provided by your toolchain, not by the standards committee.)
And, if the standard library can do it, aren't I supposed to be able to mess with files as well without calling the standard fstream header (which in my experience I can't do)?
Yes, you are allowed to. Apparently, you have not been able to yet, but with some practice and guidance you should be able to. Everything in the standard library can be recreated in your own code. The point of the standard library (and most libraries, for that matter) is to save you time, not to enable something that was otherwise unavailable. For example, you don't have to implement a file stream for every program you write; it's in the standard library so you can focus on more interesting aspects of your project.
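To make that concrete, here is a sketch of "messing with files" without fstream on a POSIX system, using the same OS calls that a Linux fstream implementation eventually reaches:

    #include <fcntl.h>
    #include <unistd.h>

    int main() {
        // open/write/close are the OS calls hiding beneath the abstraction.
        int fd = open("hello.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;
        const char msg[] = "written without <fstream>\n";
        write(fd, msg, sizeof msg - 1);
        close(fd);
    }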
A compiler is just a program that creates an executable file or library. You can use the compiler's default libraries to save time, or write your own. The default libraries communicate with the OS for file operations or memory allocation, and provide simple standard classes that let the developer write a single piece of code that works on all the target platforms supported by the compiler and the libraries. If you want to write your own, you have to write each function for every OS you target.
The standard library is cross-platform in the sense that its interface does not change between platforms, but its implementation does. In practical terms: if you only use C++ and its standard library, you can write your code the same way for Linux, Windows, macOS, Android, whatever, and if you find a C++ compiler for one of those platforms that supports the language features you used, you will be able to compile your code for that platform without rewriting anything.
So while you can use std::vector or std::fstream or any other feature in the library independently of the platform you're writing for, and expect the function definitions, type names, etc. to look the same, you cannot expect an executable compiled for a PC with Windows 10 to run on a phone with Android. You cannot even expect the same executable to run on the same PC under a different operating system; that is what I mean by "the implementation is different".
There are two main reasons for this difference:
Processors with different architectures (x86-64 and ARM, for example) use different instruction sets, so the same C++ source needs to be compiled to completely different machine code to run properly
Computers with processors of the same architecture but different operating systems have different ways of dynamically allocating memory, creating files, creating streams, writing to the console, creating and scheduling threads, etc. - all system functionality that you use via the standard library
If you really wanted to, you could use HeapAlloc() instead of operator new(), or CreateThread() instead of the standard library's std::thread, but that would force you both to rewrite your program every time you wanted to compile it for something other than Windows, and to recompile it with the target platform's compiler (and by proxy learn its API). The standard library saves you from that trouble by abstracting away those system calls.
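A sketch of that trade-off: the same worker thread written portably, and written directly against the Win32 API (tying the code to Windows):

    #include <thread>

    void work() { /* ... the actual job ... */ }

    void portable() {
        std::thread t(work);   // identical code on Windows, Linux, macOS
        t.join();
    }

    #ifdef _WIN32
    #include <windows.h>
    DWORD WINAPI work_win32(LPVOID) { work(); return 0; }

    void windows_only() {
        HANDLE h = CreateThread(nullptr, 0, work_win32, nullptr, 0, nullptr);
        WaitForSingleObject(h, INFINITE);   // Windows-specific from here on
        CloseHandle(h);
    }
    #endif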
As for the fstream in particular, here is what it uses internally on most PCs nowadays.
Basically, fstream, iostream and printf work on top of a kernel function, write(). When your code calls printf (take printf as an example), it eventually calls write() to let the kernel do the I/O work. After that, write() returns, printf returns, and your code continues.
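On a POSIX system you can see the bottom of that call chain directly; a tiny illustration:

    #include <unistd.h>

    int main() {
        // The kernel call that printf("hi\n") eventually boils down to:
        // write to file descriptor 1 (stdout).
        write(STDOUT_FILENO, "hi\n", 3);
    }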
So if you really want to know how printf works internally, you have to read the kernel's source code.
But you shouldn't do that for now.
For a beginner, do not try to go deeper before you have a basic picture of how a computer works. A computer is a project built in layers, just like a building, so the right way to learn it is level by level. First learn how to use brick and cement to put up the building; that is what you should do for now. What you shouldn't do is get interested in how a brick is produced the first time you handle one, and start focusing on bricks instead; that is the wrong way to learn IT.
If you are learning C/C++, just learn it. Remember: learn it level by level. For now, knowing how to use printf is enough.
I'm not sure I'm organising my library in the most elegant way. I'm mainly concerned about making all the code that I write compile and run on all the systems I'm targeting (keeping it portable), and also keeping it up-to-date for every system.
For example:
I'm not sure whether __declspec(dllexport/dllimport) applies on Mac or Linux. I assume it doesn't, but I don't know what the equivalent for Mac/Linux is. Another example might be calling specific operating system functions, which I try to avoid. However, some things, such as precisely measuring how long something takes to happen*, do require me to call OS-specific functions.
*as in getting the user's time precisely (down to micro/milliseconds).
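For what it's worth, both examples have well-known portable patterns; a sketch with made-up macro names (on Mac/Linux the rough counterpart of dllexport is GCC/Clang's visibility attribute, and C++11's <chrono> covers precise timing without direct OS calls):

    #if defined(_WIN32)
      #define MYLIB_API __declspec(dllexport)
    #else
      #define MYLIB_API __attribute__((visibility("default")))
    #endif

    #include <chrono>

    // Milliseconds since the first call, portable down to the clock's resolution.
    MYLIB_API double elapsed_ms() {
        using clock = std::chrono::steady_clock;
        static const auto start = clock::now();
        return std::chrono::duration<double, std::milli>(clock::now() - start).count();
    }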
The systems that I am currently (perhaps more in the future) aiming for are Mac, Windows and Linux. But testing that the code compiles (and runs correctly) on every system just seems like a waste of time, because the way I'm currently proposing to do it requires me to make a separate project for every system: i.e. I create a Visual Studio project for Windows, an Xcode project for Mac, and use the command line (or some other IDE) for Linux.
Okay, so my main problems with the way I organise things are:
1. Time consumption: creating the projects for all systems and keeping each individual project, for each system, up-to-date.
2. Needing to know every IDE/compiler that I'd have to use.
Please note, I've never really used* Linux before, but I'm thinking about switching. The only things I'm concerned about are finding the right tools to use with it, and finding the right distro that would suit me for what I need. I would really appreciate it if someone experienced with Linux could guide me, or give me some advice on whether to switch or not.
I really like Visual Studio, and it's been my main IDE for quite a while now. I'm just not sure whether I want to ditch Visual Studio for Linux, as I don't know if the tools available for Linux can do what Visual Studio can; what I mean is, whether they are as user-friendly as Visual Studio. I'm hesitant to learn Linux's tools because I'm not sure it's worth doing so. Time isn't a major factor for this library; I have plenty of it, I'm fairly young, and I'm determined to make programming my career in the future.
*I have used it before, but I've never replaced Windows with it.
I am currently hosting all the source code for my project on BitBucket, but at the moment I only have the raw code in my repo; there are no project files or any other tools to compile it with, just the code and a readme file. I was thinking of using Makefiles, since they seem popular, but I've never written a Makefile before. Don't get me wrong, I am willing to learn; I'm just not really sure where to start. I've heard that people use CMake to create portable libraries, such as SFML and Ogre3D. I've built a couple of libraries with CMake, but I have no clue how to actually make my own library with it to generate my project/make files. Should I learn and incorporate CMake with my library, or is there a better option available?
EDIT:
I'm not aiming to write a library for actual software that uses a GUI. I mainly aim to write games.
1 - Boost. Boost will help your portability more than you can imagine. Its only real sticking point is, believe it or not, OS X.
2 - Use CMake. It integrates with Visual Studio project files as the build tool, and you can put most of your different-platform compilation voodoo in there.
3 - If you're seriously writing a portable library, consider writing it in C or giving it a C wrapper, or making it header-only, or providing the source code. Making it a shared or statically linkable library does not mean that it will play nice. Name mangling leads to inconsistencies that will blow your mind.
4 - Always be explicit about the number of bits in each variable (see the sketch after this list).
5 - Use git. It lets you set up a quick-and-dirty local repository server very easily, and transfers stay fast even with the kind of huge changes MSVC annoyingly makes.
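On point 4, here is what "explicit about the number of bits" looks like in practice: the fixed-width types from <cstdint> instead of plain int/long, whose sizes vary across platforms. The struct is made up for illustration.

    #include <cstdint>

    struct PixelBuffer {
        std::uint32_t width;    // exactly 32 bits on every platform
        std::uint32_t height;
        std::uint8_t* pixels;   // 8-bit channels, not "char, probably"
        std::int64_t  frame;    // note: long is 32-bit on Win64 but 64-bit on Linux x86-64
    };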
There are a lot more best practices that can be discussed about cross-platform development. Not all of that advice is applicable in every case; I have a very code-heavy Linux/Windows library that I code almost exclusively in MSVC2k10 and mostly build/test for in Linux, and it is nowhere close to header-only.
EDIT in response to comments:
git was suggested because I find it very easy to use and manage locally. I've used svn before and liked it. I won't really endorse any others, but there are probably plenty of fine ones.
To expound on point 3,
A C wrapper would make it so that anyone anywhere could use your library - FORTRAN developers, Ruby, even Java.
Otherwise you generally have to have similar compiler versions to link properly, and it will only link to other C++ code, outside the case of DLLs, and even there versioning issues remain. It's one of the stupidest problems left over in C++; look up "name mangling" on Wikipedia. There is a reason widely-used libraries are written in C or have C wrappers, such as libz, openssl etc.
There are other advantages to it. Exception propagation across dynamic libraries is non-existent; with static libraries it can be inconsistent or non-existent.
You'll find that the most widely-used C++ libraries are mostly header-only, like Boost. A header-only library solves many problems by putting all the code directly into a project in a relatively intuitive way, and modern compilers can still optimize away much (but certainly not all) of the extra compile time associated with it.
With all this said, it is certainly possible to do without a C wrapper or a header-only design; it is just annoying and very troublesome. DLL hell and its Linux equivalents still exist.
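To show what the C wrapper from point 3 means in code: hide the C++ class behind an opaque pointer and extern "C" functions, so no mangled names cross the library boundary. A sketch with illustrative names:

    /* renderer.h -- the outward-facing C interface */
    #ifdef __cplusplus
    extern "C" {
    #endif
    typedef struct Renderer Renderer;        /* opaque to callers */
    Renderer* renderer_create(int w, int h);
    void      renderer_draw(Renderer* r);
    void      renderer_destroy(Renderer* r);
    #ifdef __cplusplus
    }
    #endif

    // renderer.cpp -- full C++ inside, plain C at the boundary
    #include "renderer.h"
    #include <cstddef>
    #include <vector>

    struct Renderer {
        std::vector<float> pixels;
        Renderer(int w, int h) : pixels(std::size_t(w) * h) {}
        void draw() { /* ... */ }
    };

    extern "C" Renderer* renderer_create(int w, int h) { return new Renderer(w, h); }
    extern "C" void renderer_draw(Renderer* r) { r->draw(); }
    extern "C" void renderer_destroy(Renderer* r) { delete r; }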
You also asked about Boost. That depends. If you're distributing the source code then you certainly must distribute Boost with your code or have people install it. Having people install libraries in order to compile other libraries or use programs is common practice; think of how specific versions of DirectX ship with games, for example.
However, if you are distributing binary versions of your library, statically linking against Boost will eliminate any need to include it, as long as you are careful to keep Boost headers out of the outward-facing parts of your library. This is where you start seeing things like void* pointers inside C++ headers: an unfortunate side effect of some of the shortcomings of C++ compilation and library distribution.
CMake is a good choice. You can learn to use it. Read a tutorial:
http://www.cmake.org/cmake/help/cmake_tutorial.html
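For a sense of scale, a minimal CMakeLists.txt for a small library plus a demo executable looks something like this (all target names and paths are placeholders); the one file generates Visual Studio projects on Windows and Makefiles on Linux:

    cmake_minimum_required(VERSION 2.8)
    project(MyGameLib)

    include_directories(include)

    # The library itself, built from its sources.
    add_library(mygamelib STATIC src/core.cpp src/render.cpp)

    # A small demo/test program linked against it.
    add_executable(demo examples/demo.cpp)
    target_link_libraries(demo mygamelib)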
But if your targets are Linux and Windows only, it is probably OK in your case (a small/average first multi-platform project) to maintain 2 separate build systems.
On Linux, use Make. It is standard and has a very good reference manual:
http://www.gnu.org/software/make/manual/make.html
On Windows, use your IDE's project file, be it Visual Studio, Dev-C++ or another. That is the simplest way to go.
Most important, make it easy to test your library/software on different platforms. Install a virtual machine on your desktop, or at least compile your library under Cygwin.
Once you are there, come back to Stack Overflow and we will help!
Personally I'd leverage a framework like Qt, because it is quite portable, it does abstract a lot of OS functionality (files, timing, threads, networking), and you get a decent, free IDE (Qt Creator) that is also portable and runs on Windows, OS X and any Unix flavor that runs Qt. It'd give you the lowest barrier to entry. Qt Creator can leverage the Visual Studio compiler and the CDB debugger if they are available.
You do not need to use OpenGL to use Qt, in fact you're not bound to any particular graphics subsystem. Qt only "uses" OpenGL in Qt 5 for the Qt Quick 2 graphics backend. It's not needed for Qt 4, nor for Qt Quick 1 (even in Qt 5!).
You can use any 2D or 3D framework you fancy to push images and other content to the screen. What Qt is good at is creating the kind of 2D imagery that is often needed in games: menus, configuration screens, HUDs, etc. There are a lot of controls and drawing primitives that Qt makes easy to leverage for your purposes.
Qt also gives you reasonably powerful model-view and networking frameworks, so you'd be able to generate, say, server or client lists that update in real time reasonably easily.
There'd need to be a small amount of shim code between Qt and DirectX, of course. On the output side, you typically end up with a QImage in Qt, and then use DirectX, SDL, OpenGL, etc. to push it to the screen. On the input side, you need to call qApp->processEvents() within your main game loop, and you will need to post user input events from DirectX etc. to Qt's event queue using qApp->postEvent(...). This would only be needed if, say, the DirectX main loop consumes all Windows messages and won't let standard winapi/win32 code (Qt's Windows event dispatcher) see them. I haven't dealt with DirectX much, so others should feel free to chime in with details, of course.
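A sketch of that shim, with updateGame/renderFrame as placeholders for your own simulation and drawing code:

    #include <QApplication>

    void updateGame()  { /* advance the simulation (placeholder) */ }
    void renderFrame() { /* push a QImage to the screen via DirectX/SDL/OpenGL */ }

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        bool running = true;
        while (running) {
            app.processEvents();   // let Qt dispatch input, timers, repaints
            updateGame();
            renderFrame();
        }
        return 0;
    }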
Does anyone know how to compile C++ code which you write while your program is already running?
And later I would like to run that code.
I want to do this because I am trying to make a game that would teach you programming, so the user would have to write code while the game is running and test it.
Thanks for any help
You'd have an easier time if you chose a language that was designed with embedding in mind, like Lua or Python. For C++, you'd have to go for something extremely clumsy and fragile like invoking an external compiler (which is also a logistics nightmare when shipping your game), or something as complex as integrating a compiler in your game (probably doable with llvm components, but...)
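For contrast, embedding Lua is only a few lines via its standard C API; a sketch, with the user's "program" supplied as a string:

    #include <cstdio>
    #include <lua.hpp>   // C++-friendly header shipped with Lua

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);                        // load the standard Lua libraries
        const char* user_code = "print('hello from user code')";
        if (luaL_dostring(L, user_code) != 0)    // compile + run in one step
            std::printf("error: %s\n", lua_tostring(L, -1));
        lua_close(L);
    }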
Also, for "teaching programming", C++ probably isn't the best language :)
You have to call the compiler to compile and link the user-entered code. The result has to be either an executable that you then run from another process you create, or a library that you dynamically load and call.
How this is done differs between POSIX platforms (like Linux and OS X) and Windows.
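The platform split in miniature, as a sketch; the library name and "run" entry point are made up:

    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <dlfcn.h>
    #endif

    typedef void (*EntryFn)(void);

    EntryFn load_user_entry() {
    #ifdef _WIN32
        HMODULE h = LoadLibraryA("user_code.dll");        // Windows way
        return h ? (EntryFn)GetProcAddress(h, "run") : nullptr;
    #else
        void* h = dlopen("./user_code.so", RTLD_NOW);     // POSIX way
        return h ? (EntryFn)dlsym(h, "run") : nullptr;
    #endif
    }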
I came here to ask this question because this site has been very useful to me in the past and seems to have very knowledgeable users who are willing to discuss a question even if it is metaphysical at times. And also because googling it did not work.
Java has a compiler, and then it has the JDT library that can compile Java on the fly (for example, it is used in JasperReports to turn a report template into Java code).
My question: does anyone know of a library/project that would offer compilation as a set of library classes in C/C++? For example: a suite of classes to perform preprocessing, parsing, code optimization and, of course, binary rendering to executable images such as ELF or the Win format. Basically, something that would allow one to compile C or C++ scriptlets as part of an application.
Yes: llvm. In particular, clang. At least, that's how they advertise the projects. Also, check this question. It might be relevant if you decide to use llvm.
You might be able to adapt something from the LLVM project to your needs.
You can just require that a compiler be installed, then call it. This is a fairly hefty requirement, but about the only way to truly "embed" C or C++. There are interpreters that you may be able to embed, but that currently seems a poor choice, not the least because any libraries used in the script must have development versions (i.e. headers and source/compiled-libraries) installed, and those libraries could be restricted to the feature set supported by the interpreter (e.g. quality of template implementation).
You're better off using a language like Python or Lua to embed.
There is the Ch interpreter, but I have not used it. Generally, for scripting-type applications, a more natural scripting language is used.
Great. It looks like LLVM is what I was after.
Thanks a lot for your feedback.
I am not primarily after C++ as a scripting language. I have noticed that Python is used as an embedded script engine.
My primary reason is twofold:
Get rid of Make, CMake and the hell that is Autoconf, and replace them with something like SCons that binds into and interacts with all phases of compiling
Hook into the compilation process after parsing and auto-generate code, specifically meta-related code. In my case I have been able to implement almost every feature of Java in C++ except one: reflection.
Why impose unneeded bloat like RTTI on your code, when it is often inadequate anyway? Instead, one could selectively generate the added features, though the developer would have to choose when and how to use this extra code.
Is there a multiplatform C++ compiler that could be linked into any software?
Let's say I want to generate C++ code at runtime, compile it and run it.
I'm looking for a compact solution (a bunch of classes), preferably under an LGPL/BSD licence :)
As far as I know it can be done in Java and C#. What about C++?
Well, maybe one of the modules of Clang will be of help? It's not fully mature on the C++ side yet, but it certainly will be soon.
I don't know of any open-source ones for C++, but if you want small and compact scripting and are not hung up on C++, Lua might be an option for you.
I'd drop C++ altogether and use Google V8. If you wanted to use C++ because that's all the people using your app know, they should have no difficulty moving to JavaScript.
And it's damn fast. And JavaScript is a cool language too.
I did this years ago on Linux by generating C++ code into a file, compiling it via a shell execute (with gcc), and then dynamically linking in the generated library. The dynamic linking differs between platforms, of course.
This kind of thing is much, much harder in C++, because the language doesn't use a virtual machine (or "runtime") that abstracts machine specifics away.
You could look into gcc, it's under the GPL IIRC, and ports exist for all major platforms.
When we looked into scripting we chose AngelScript because of its similarity to C++.
V8 is great, but it's limited to certain platforms. AngelScript is a lot easier to compile with, and probably to learn (if you come from C++), and it has a zlib license.
http://www.angelcode.com/angelscript/