SML for more or less large systems: compiler and interpreter interoperability

This is about programming in the large with SML. First, a rundown of what seems to be available for that purpose, then a tiny summary, and finally the simple question.
The use pseudo‑clause
Top-level type, exception, and value identifiers (standardml.org)
Note that the use function is special. Although not defined precisely, its intended purpose is to take the pathname of a file and treat the contents of the file as SML source code typed in by the user. It can be used as a simple build mechanism, especially for interactive sessions. Most implementations will provide a more sophisticated build mechanism for larger collections of source files. Implementations are not required to supply a use function.
Then later
val use : string -> unit (* implementation dependent *)
Its drawbacks: it is not supported by MLton, at least; and while not standardized, it seems to behave the same way in all major SML systems, namely reloading a unit as many times as a use of it is encountered. That is not OK, due to the generative semantics of SML: defining a structure multiple times results in as many different definitions, which is especially wrong with type definitions.
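To illustrate (a minimal sketch, assuming a hypothetical file counter.sml and an interactive session in an implementation that provides use):
(* counter.sml -- hypothetical example file *)
structure Counter = struct
  datatype t = C of int
  val zero = C 0
end
(* interactive session *)
use "counter.sml";
val saved = Counter.zero;
use "counter.sml";        (* elaborates counter.sml a second time; Counter.t is now a fresh type *)
saved = Counter.zero;     (* type error: saved has the old Counter.t, Counter.zero the new one *)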
ML Basis Files
There exist so-called "ML Basis Files": MLBasis (mlton.org) and ML-Kit ML Basis Files (sourceforge.net).
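For reference, an ML Basis file is essentially a list of basis imports and source files in dependency order; a minimal sketch (file names are placeholders) would look something like:
(* project.mlb -- hypothetical example *)
$(SML_LIB)/basis/basis.mlb   (* import the Standard Basis Library *)
util.sig
util.sml
main.sml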
The load pseudo‑clause
Moscow ML has load, which acts like a use that loads only once, i.e. it does not reload a unit that is already loaded, which is what is expected in order to compose a system.
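For illustration, a sketch of a Moscow ML session, assuming Foo.sml has already been compiled to a unit with mosmlc -c Foo.sml:
load "Foo";   (* loads the compiled unit Foo *)
load "Foo";   (* no effect: Foo is already loaded *)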
Summary
load is nice, but it is only recognized by Moscow ML
ML Basis Files may be nice, but they are recognized by neither Poly/ML nor Moscow ML
MLton does not recognize use
Putting everything in a single big bundle file is the only interoperable approach that works with all compilers and interpreters; it works, but it quickly becomes a burden.
The question
Is there a known interoperable way to compose a system made of multiple SML source files?

One system you did not mention is SML/NJ's Compilation Manager (CM), which is quite powerful. And there are a few other, lesser-known systems.
But that notwithstanding, the situation is indeed dire. There simply is no standardised separate compilation mechanism for SML. In practice that means that writing portable Makefiles or the like is rather painful.
For HaMLet I went through that pain, in order to make it compile with 7 different SML implementations. The approach is to use a restricted (dependency-ordered) CM file and the necessary amount of make + sed hackery to generate meta files for other systems from that. It can also generate a file containing the respective 'use' invocations for all the sources, for the other systems that at least support that. All in all it's not pretty, but it works sufficiently well.
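For the use-only systems, the generated file is essentially just a dependency-ordered list of use invocations; a hypothetical sketch (file names are placeholders):
(* load-all.sml -- sketch of such a generated file *)
use "syntax.sml";
use "parser.sml";
use "elaborate.sml";
use "main.sml";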

Related

C++ header and implementation, (why) is it not automatically handled by the IDE/compiler?

In C++, your classes are often divided into two parts, being the header file and the actual implementation. In my (inexperienced) opinion, this is awful. It requires me to do all sorts of unnecessary book-keeping, clutters up my project directory, and goes against everything I've learned about software development (double implementation). Languages where you only deal with the implementation, such as Java or Python, are much nicer to work with.
I've always learned that the reason to use them was to significantly decrease compilation time. However, wouldn't a modern IDE (CLion in my case) or even the compiler be smart enough to either:
Keep some sort of "shadow"-header file, which would automatically be updated whenever a definition is changed in the implementation?
Automatically split it into the header and implementation during compile time, allowing you to only have to deal with one file? (Something that Lazy C++ seems to do)
Or are there any plugins available that offer this kind of behaviour? C++ modules also seem to offer a solution to this problem, but their current status/support is unclear to me and to make matters worse there seem to be two competing standards (Clang's and Microsoft's).
Unfortunately it is not that simple. C++ inherited the header/source file separation from C because of the preprocessor, which both share. Automatic generation of a header file is not possible in general: first of all, the separation is not trivial; second, header files often contain manually written preprocessor code that generates compiled code; third, almost all templated code goes into a header file due to the compilation process and the rules of visibility. Changing all of that would require breaking compatibility with existing code, of which there is a significant amount, and nobody wants to do that. It would be easier to create yet another language (like D), but many people would not want to migrate, for various reasons. We know that the committee is working on modules, and if they manage to make them work without breaking compatibility, that will be helpful for many of us. But again, this is not a trivial task at all; the approach you describe would only work in certain environments (when you limit yourself) but cannot be applied to everybody.

Compiling SML projects from multiple files

I've got a project with many files in it and I want it to work with most popular compilers.
Unfortunately, PolyML and SML/NJ require use statements, while MosML additionally requires explicitly loading basis library structures using load, which is not recognised by either poly or sml.
On top of that, MLton and MLKit require a completely different .mlb file simply listing filenames, and also require an explicit import of the basis library, which is done in a different way than in MosML:
$(SML_LIB)/basis/basis.mlb
Is there some standard universal "include this file" command, and if it doesn't exist, is there some other way to have all compilers read from one entry-point file?
P.S. Wouldn't mind someone going on a small rant about compiler differences. I'm always interested in what people think and there's not too much info available :-)
The use function is the standard universal "include this file" command, included in the Top-level environment:
val use : string -> unit (* implementation dependent *)
I generally maintain the build environment in SML/NJ's CM, then convert to mlb with cm2mlb. It will define a flag MLton when parsing the sources.cm file, so that you can use it to work around differences in module loading behavior:
#if(defined(MLton))
runmain.sml
#endif
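For context, a minimal sources.cm using that flag might look something like the following sketch (member names are placeholders):
(* sources.cm -- hypothetical sketch *)
Group is
  $/basis.cm
  main.sml
#if(defined(MLton))
  runmain.sml
#endif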
There is also a set of sml-buildscripts which converts from mlb to Poly/ML. I am not familiar with them or with Poly/ML, however.
CM is convenient as the authoritative source, since it provides programmatic access from SML via the structure CM. This is what cm2mlb uses, so while I'm not aware of anything that already exists to convert from CM to Poly/ML, it should be possible.

Why are generated binaries so large?

Why are the binaries that are generated when I compile my C++ programs so large (as in easily 10 times the size of the source code files)? What advantages does this offer over interpreted languages for which such compilation is not necessary (and thus the program size is only the size of the code files)?
Modern interpreted languages do typically compile the code to some manner of representation for faster execution... it might not get written out to disk, but there's certainly no guarantee that the program is represented in a more compact form. Some interpreters go the whole hog and generate machine code anyway (e.g. Java JIT). Then there's the interpreter itself sitting in memory which can be large.
A few points:
The more sophisticated the commands in the source code, the more machine code operations might be required to execute them. Thus, higher level language features tend to have a higher ratio of compiled-code to source code. That's not necessarily a bad thing: think of it as "I only have to say a little about what I want done and it infers all those necessary steps". The challenge in programming is to ensure they are necessary - that requires good library and program design.
The compiler often deliberately decides to trade some executable size for faster expected execution speed: inline vs out-of-line code is part of this compromise, though for small functions neither may be consistently more compact.
More sophisticated run-time environments (e.g. adding support for C++ exceptions) can involve a bit of extra code that runs when the program first starts to construct the necessary environment for that language feature.
Library features may not be comparable. As well as the sort of add-on libraries you're very likely to have had to track down yourself and be very aware of using (e.g. XML, PDF parsing, OpenGL), languages often quietly use supporting libraries for what seem like language features and functions. Any of these can be surprisingly large.
For example, many interpreters just expose the C library's printf() statement or something similar, while for output formatting C++ has ostream - a more complex, extensible and type-safe system with (for better or worse) persistent state across function calls, routines to query and set that state, an additional layer of customisable buffering, customisable character types and localisation, and generally a lot of small inline functions that can lead to smaller or larger programs depending on the exact use and compiler settings. What's best depends on your application and memory vs performance goals.
Built-in language statements may be compiled differently. Take a switch on an integer expression with 100 case labels spread randomly between 1 and 1000: one compiler/language might decide to "pack" the 100 cases and do a binary search for a match, another to use a sparsely populated array of 1000 elements and do direct indexing (which wastes space in the executable but typically makes for faster code). So, it's hard to draw conclusions based on executable size.
Typically, memory usage and execution speed become increasingly important as the program gets larger and more complex. You don't see systems like Operating Systems, enterprise web servers or full-featured commercial word processors written in interpreted languages because they don't have the scalability.
Interpreted languages assume an interpreter is available while compiled programs are in most cases standalone.
Take a trivial case: Suppose you have a one line program
print("hello world")
what does that "print" do? Surely it's clear that your asking some other code to do some work? And that code isn't free, the sum total of what needs to run is much more than the lines of code you write. In more realistic programs you exploit many sophisticated libraries managing windows and other UI features, networks, databases and so on. Now whether that code is bundled into your application or loaded from DLLs or is present in the interpreter it's got to be somewhere.
There are plenty of trade-offs between compilation and interpretation, and intermediate solutions such as Java's compilation/byte-code interpretation approach. For example, you might consider
the run-time cost of interpreting the source every time you run versus running the compiled code
the portability advantage of interpreters - with compiled code you need to build separate versions of an app for different platforms.
Usually, programs are written in higher-level languages; for these programs to be executed by the CPU, they have to be converted to machine code. This conversion is done by a compiler or an interpreter.
A compiler makes the conversion just once, while an interpreter typically converts it every time a program is executed.
Interpreted programs run much slower than compiled programs because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation (which is the reason for the presence of large binary files).
Another disadvantage of interpreters is that they must be present in the environment as additional software to run the source code.

Open-source C++ scanning library

Rationale: In my day-to-day C++ code development, I frequently need to answer basic questions such as who calls what in a very large C++ code base that is frequently changing. But I also need to have some automated way to exactly identify what the code is doing around a particular area of code. "grep" tools such as Cscope are useful (and I use them heavily already), but are not C++-language-aware: they don't give any way to identify the types and kinds of lexical environment of a given use of a type or function in such a way that is conducive to automation (even if said automation is limited to "read-only" operations such as code browsing and navigation, but I'm asking for much more than that below).
Question: Does there already exist an open-source C/C++-based library (native, not managed, not Microsoft- or Linux-specific) that can statically scan or analyze a large tree of C++ code, and can produce result sets that answer detailed questions such as:
What functions are called by some supplied function?
What functions make use of this supplied type?
Ditto the above questions if C++ classes or class templates are involved.
The result set should provide some sort of "handle". I should be able to feed that handle back to the library to perform the following types of introspection:
What is the byte offset into the file where the reference was made?
What is the reference into the abstract syntax tree (AST) of that reference, so that I can inspect surrounding code constructs? And each AST entity would also have file path, byte-offset, and type-info data associated with it, so that I could recursively walk up the graph of callers or referrers to do useful operations.
The answer should meet the following requirements:
API: The API exposed must be one of the following: C or C++, and probably "C handle" or C++-class-instance-based (and if so, it must be generic C or C++ code and not Microsoft- or Linux-specific code constructs unless needed to meet the specifics of the given platform); or command-line standard input and standard output based.
C++ aware: Is not limited to C code, but understands C++ language constructs in minute detail, including awareness of inter-class inheritance relationships and C++ templates.
Fast: Should scan large code bases significantly faster than compiling the entire code base from scratch. This probably needs to be relaxed, but only if the "incremental result retrieval" and "resilient to small code changes" requirements below are fully met.
Provide result counts: I should be able to ask "How many results would you provide for some request (and no, don't send me all of the results)?" and get a response in less than about 3 seconds, versus having to retrieve all results for any given question. If it takes too long to get that answer, then it wastes development time. This is coupled with the next requirement.
Incremental result retrieval: I should then be able to ask "Give me just the next N results of this request", getting back a handle to the result set so that I can ask the question repeatedly, thus incrementally pulling out the results in stages. This means I should not have to wait for the entire result set before seeing some subset of all of the results, and that I can cancel the operation safely if I have seen enough results. Reason: I need to answer the question "What is the build or development impact of changing some particular function signature?"
Resilient to small code changes: If I change a header or source file, I should not have to wait for the entire code base to be rescanned, but only that header or source file. Rescanning should be quick. E.g., don't do what cscope requires you to do, which is to rescan the entire code base for small changes. It is understood that if you change a header, then scanning can take longer, since other files that include that header would have to be rescanned.
IDE agnostic: Is text-editor agnostic (don't make me use a specific text editor; I've made my choice already, thank you!).
Platform agnostic: Is platform-agnostic (don't make me only use it on Linux or only on Windows, as I have to use both of those platforms in my daily grind, and I need the tool to be useful on both since I have code sandboxes on both platforms).
Non-binary: Should not cost me anything other than the time to download and compile the library and all of its dependencies. Not trial-ware.
Actively supported: Sending help requests to mailing lists or associated forums is likely to get a response in less than 2 days.
Network agnostic: Databases the library builds should be usable directly on a network from 32-bit and 64-bit systems, both Linux and Windows, interchangeably and at the same time, and should not embed hardcoded filesystem paths that would otherwise "root" the database to a particular network.
Build-environment agnostic: Does not require intimate knowledge of my build environment, with the notable exception of possibly requiring knowledge of compiler-supplied CPP macro definitions (e.g. -Dmacro=value).
I would say that the Clang Index is a close fit. However, I don't think that it stores data in a database.
Anyway, the Clang framework offers what you actually need to build a tool tailored to your needs, if only because of its C, C++ and Objective-C parsing / indexing capabilities. And since it's provided as a set of reusable libraries... it was crafted to be built upon!
I have to admit that I haven't used either, because I work with a lot of Microsoft-specific code that uses Microsoft compiler extensions that I don't expect them to understand, but the two open-source analyzers I'm aware of are Mozilla Pork and the Clang Analyzer.
If you are looking for results of code analysis (metrics, graphs, ...), why not use a tool (instead of an API) to do that? If you can, I suggest you take a look at Understand.
It's not free (there's a trial version), but I found it very useful.
Maybe Doxygen with GraphViz could answer some of your constraints, but not all; for example, Doxygen's analysis is not incremental.

Where do I learn "what I need to know" about C++ compilers?

I'm just starting to explore C++, so forgive the newbiness of this question. I also beg your indulgence on how open ended this question is. I think it could be broken down, but I think that this information belongs in the same place.
(FYI -- I am working predominantly with the QT SDK and mingw32-make right now and I seem to have configured them correctly for my machine.)
I knew that there was a lot in the language which is compiler-driven -- I've heard about pre-compiler directives, but it seems like someone would be able to write books about the different C++ compilers and their respective parameters. In addition, there are commands which apparently precede make (like qmake, for example (is this something only in Qt?)).
I would like to know if there is any place which gives me an overview of what compilers are out there, and what their different options are. I'd also like to know how each of them views Makefiles (it seems that there is a difference in syntax between them?).
If there is no website regarding, "Everything you need to know about C++ compilers but were afraid to ask," what would be the best way to go about learning the answers to these questions?
Concerning the "numerous options of the various compilers"
A piece of good news: you needn't worry about the detail of most of these options. You will, in due time, delve into this, only for the very compiler you use, and maybe only for the options that pertain to a particular set of features. But as a novice, generally trust the default options or the ones supplied with the make files.
The broad categories of these features (and I may be missing a few) are:
pre-processor defines (now, you may need a few of these)
code generation (target CPU, FPU usage...)
optimization (hints for the compiler to favor speed over size and such)
inclusion of debug info (which is extra data left in the object/binary and which enables the debugger to know where each line of code starts, what the variables names are etc.)
directives for the linker
output type (exe, library, memory maps...)
C/C++ language compliance and warnings (compatibility with previous version of the compiler, compliance to current and past C Standards, warning about common possible bug-indicative patterns...)
compile-time verbosity and help
Concerning an inventory of compilers with their options and features
I know of no such list, but I'm sure one probably exists on the web. However, I suggest that, as a novice, you worry little about these "details", use whatever free compiler you can find (gcc is certainly a great choice), and build experience with the language and the build process. C professionals may well argue, with good reason and at length, about the merits of various compilers and associated runtimes etc., but for generic purposes (and then some) the free stuff is all that is needed.
Concerning the build process
The most trivial applications, such as those made of a single compilation unit (read: a single C/C++ source file), can be built with a simple batch file where the various compiler and linker options are hardcoded, and where the name of the file is specified on the command line.
For all other cases, it is very important to codify the build process so that it can be done
a) automatically and
b) reliably, i.e. with repeatability.
The "recipe" associated with this build process is often encapsulated in a make file or as the complexity grows, possibly several make files, possibly "bundled together in a script/bat file.
This (make file syntax) you need to get familiar with, even if you use alternatives to make/nmake, such as Apache Ant; the reason is that many (most?) source code packages include a make file.
In a nutshell, make files are text files and they allow defining targets, and the associated command to build a target. Each target is associated with its dependencies, which allows the make logic to decide which targets are out of date and should be rebuilt and, before rebuilding them, which dependencies should possibly also be rebuilt. That way, when you modify, say, an include file (and if the make file is properly configured), any C file that uses this header will be recompiled, and any binary which links with the corresponding obj file will be rebuilt as well. make also includes options to force all targets to be rebuilt, and this is sometimes handy to be sure that you truly have a current build (for example, in case some dependencies of a given object are not declared in the make file).
On the Pre-processor:
The pre-processor is the first step toward compiling, although it is technically not part of the compilation. The purposes of this step are:
to remove any comments and extraneous whitespace
to substitute any macro reference with the relevant C/C++ syntax. Some macros, for example, are used to define constant values such as, say, some email address used in the program; during pre-processing, any reference to this constant value (by the way, by convention such constants are named with ALL_CAPS_AND_UNDERSCORES) is replaced by the actual C string literal containing the email address.
to exclude all conditional compilation branches that are not relevant (#ifdef and the like)
What's important to know about the pre-processor is that the pre-processor directives are NOT part of the C language proper, and they serve several important functions, such as the conditional compilation mentioned earlier (used, for example, to have multiple versions of the program, say for different operating systems, or indeed for different compilers).
Taking it from there...
After this manifesto of mine... I encourage you to read but a little more, and to dive into programming and building binaries. It is a very good idea to try and get a broad picture of the framework etc., but this can be overdone, a bit akin to the exchange student who stays in his/her room reading the Webster dictionary to be "prepared" for meeting native speakers, rather than just "doing it!".
Ideally you shouldn't need to care which C++ compiler you are using. Compatibility with the standard has gotten much better in recent years (even from Microsoft).
Compiler flags obviously differ, but the same features are generally available; it's just a differently named option to, e.g., set the warning level on GCC and MS cl.
The build system is independent of the compiler; you can use any make with any compiler.
That is a lot of questions in one.
C++ compilers are a lot like hammers: They come in all sizes and shapes, with different abilities and features, intended for different types of users, and at different price points; ultimately they all are for doing the same basic task as the others.
Some are intended for highly specialized applications, like high-performance graphics, and have numerous extensions and libraries to assist the engineer with those types of problems. Others are meant for general purpose use, and aren't necessarily always the greatest for extreme work.
The technique for using each type of hammer varies from model to model—and version to version—but they all have a lot in common. The macro preprocessor is a standard part of C and C++ compilers.
A brief comparison of many C++ compilers is here. Also check out the list of C compilers, since many programs don't use any C++ features and can be compiled by an ordinary C compiler.
C++ compilers don't "view" makefiles. The rules of a makefile may invoke a C++ compiler, but also may "compile" assembly language modules (assembling), process other languages, build libraries, link modules, and/or post-process object modules. Makefiles often contain rules for cleaning up intermediate files, establishing debug environments, obtaining source code, etc., etc. Compilation is one link in a long chain of steps to develop software.
Also, many development environments abstract the makefile into a "project file" which is used by an integrated development environment (IDE) in an attempt to simplify or automate many programming tasks. See a comparison here.
As for learning: choose a specific problem to solve and dive in. The target platform (Linux/Windows/etc.) and problem space will narrow the choices pretty well. Which you choose is often linked to other considerations, such as working for a particular company, or being part of a team. C++ has something like 95% commonality among all its flavors. Learn any one of them well, and learning the next is a piece of cake.