I've just started learning Lisp and I can't figure out how to compile and link lisp code to an executable.
I'm using clisp and clisp -c produces two files:
.fas
.lib
What do I do next to get an executable?
I was actually trying to do this today, and I found typing this into the CLisp REPL worked:
(EXT:SAVEINITMEM "executable.exe"
:QUIET t
:INIT-FUNCTION 'main
:EXECUTABLE t
:NORC t)
where main is the name of the function you want to call when the program launches, :QUIET t suppresses the startup banner, and :EXECUTABLE t makes a native executable.
It can also be useful to call
(EXT:EXIT)
at the end of your main function in order to stop the user from getting an interactive lisp prompt when the program is done.
EDIT: Reading the documentation, you may also want to add :NORC t
(see the documentation link). This suppresses loading of the RC file (for example, ~/.clisprc.lisp).
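Putting it together, a minimal sketch might look like this (the file name hello.lisp and the entry function main are just assumed names for illustration):
;; hello.lisp
(defun main ()
  (format t "Hello from CLISP!~%")
  (ext:exit))   ; avoid dropping the user into an interactive prompt afterwards
Then, in the CLISP REPL:
(load "hello.lisp")   ; or (load "hello.fas") after clisp -c hello.lisp
(ext:saveinitmem "hello.exe"
                 :quiet t
                 :init-function 'main
                 :executable t
                 :norc t)
The resulting hello.exe runs main directly when started.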
This is a Lisp FAQ (slightly adapted):
*** How do I make an executable from my programme?
This depends on your implementation; you will need to consult your
vendor's documentation.
With ECL and GCL, the standard compilation process will
produce a native executable.
With LispWorks, see the Delivery User's Guide section of the
documentation.
With Allegro Common Lisp, see the Delivery section of the
manual.
etc...
However, the classical way of interacting with Common Lisp programs
does not involve standalone executables. Let's consider this during
two phases of the development process: programming and delivery.
Programming phase: Common Lisp development has more of an
incremental feel than is typical in batch-oriented languages, where an
edit-compile-link cycle is the norm. A CL developer will run simple
tests and transient interactions with the environment at the
REPL (or Read-Eval-Print-Loop, also known as the
listener). Source code is saved in files, and the build/load
dependencies between source files are recorded in a system-description
facility such as ASDF (which plays a similar role to make in
edit-compile-link systems). The system-description facility provides
commands for building a system (and only recompiling files whose
dependencies have changed since the last build), and for loading a
system into memory.
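For illustration, a minimal ASDF system description is itself just a small Lisp file (the system name my-app and its component files are hypothetical):
;; my-app.asd
(asdf:defsystem "my-app"
  :components ((:file "package")
               (:file "main" :depends-on ("package"))))
Afterwards, (asdf:load-system "my-app") recompiles whatever is out of date and loads the system into the running image.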
Most Common Lisp implementations also provide a "save-world" mechanism
that makes it possible to save a snapshot of the current lisp image,
in a form which can later be restarted. A Common Lisp environment
generally consists of a relatively small executable runtime, and a
larger image file that contains the state of the lisp world. A common
use of this facility is to dump a customized image containing all the
build tools and libraries that are used on a given project, in order
to reduce startup time. For instance, this facility is available under
the name EXT:SAVE-LISP in CMUCL, SB-EXT:SAVE-LISP-AND-DIE in
SBCL, EXT:SAVEINITMEM in CLISP, and CCL:SAVE-APPLICATION in
OpenMCL. Most of these implementations can prepend the runtime to the
image, thereby making it executable.
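For example, with SBCL the save-world step that produces an executable image might look like this (main is again just an assumed entry point that must already be defined in the running image):
(sb-ext:save-lisp-and-die "my-app"
                          :toplevel #'main
                          :executable t)
This writes the runtime plus the current image to the file my-app, which calls main when started.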
Application delivery: rather than generating a single executable
file for an application, Lisp developers generally save an image
containing their application, and deliver it to clients together with
the runtime and possibly a shell-script wrapper that invokes the
runtime with the application image. On Windows platforms this can be
hidden from the user by using a click-o-matic InstallShield type tool.
Take a look at the official CLISP homepage. There is a FAQ that answers this question.
http://clisp.cons.org/impnotes/faq.html#faq-exec
CLiki has a good answer as well: Creating Executables
For a portable way to do this, I recommend roswell.
For any supported implementation, you can create Lisp scripts that run the program; these can be run portably by ros, which can be used in a hash-bang line much like a Python or Ruby script.
For SBCL and CCL, roswell can also create binary executables with ros dump executable.
I know this is an old question but the Lisp code I'm looking at is 25 years old :-)
I could not get compilation working with clisp on Windows 10.
However, it worked for me with gcl.
If my lisp file is jugs2.lisp,
gcl -compile jugs2.lisp
This produces the file jugs2.o if jugs2.lisp has no errors.
Run gcl with no parameters to launch the lisp interpreter:
gcl
Load the .o file:
(load "jugs2.o")
To create an EXE:
(si:save-system "jugs2")
When the EXE is run it needs the DLL oncrpc.dll; this is in the <gcl install folder>\lib\gcl-2.6.1\unixport folder that gcl.bat creates.
When run, the executable starts a Lisp environment; call the main function to run the program:
(main)
Related
I want to upload my version of the solutions for the book "Programming: Principles and Practice Using C++" to GitHub, in a way that anyone interested can access the code and run it easily.
Assuming that one repository can hold only one project, and since the solution code for every drill or exercise of the book has to be an independent .cpp file with its own main function, how can I group all of the code in the same repository and, at the same time, allow anyone who clones or downloads it to run and debug each solution independently? (...because there cannot be two main functions in one project, right?)
A separate folder for each solution could do the trick.
You can even go with a "one solution = one .cpp file" scheme. In that case the build is assumed to be done without a makefile, by passing the .cpp file to the compiler directly. But it will create a mess if some solutions require multiple .h / .cpp files.
My misc-basile github repository contains several single-source programs (GPLv3+ licensed, for Linux), including manydl.c - which shows that Linux is capable of dealing with many (that is, more than a hundred thousand) plugins by dynamically generating them - and sync-periodically.c - which periodically calls the sync(2) system call.
You can do likewise for your single file C++ programs.
You could write your Makefile to compile each of them to a different executable. Or use some other build automation tool (e.g. ninja).
However, don't forget to document (in your README.md) what each of your C++ translation units is supposed to do, and how to compile and run them.
Read more about C++, about operating systems, about your C++ compiler (e.g. GCC) and linker (e.g. binutils) and other software development tools (e.g. git, gdb, emacs).
Details are of course implementation specific.
I want to run tools for static C/C++ (and possibly Python, Java, etc.) code analysis for a large software project built with the help of make. As is known, make (or any other build tool) invokes the compiler and similar tools for the specified source code files. It is also possible to control compilation by defining environment variables that are later passed to the compiler via its arguments.
The key to accurate static analysis is to provide the defines and include paths exactly as they were passed to the compiler (basically all of its -D and -I arguments). This way, the tool will be able to follow the same code paths the compiler has followed.
The problem is that the high complexity of the project means there is no way to statically determine such an environment, as different files are built with different sets of defines/include paths and other compilation flags.
The idea is that it should somehow be possible to capture individual invocations of the compiler with all the arguments passed to it for each input file. With such information, after straightforward filtering (e.g. there is no need to know -O optimization levels or -W warning settings), it should be possible to invoke the static analyzer for each input file with the identical set of defines/includes used for just that input file.
The question is: are there existing tools/workflows that implement the idea I've described? I am mostly interested in a solution for POSIX systems, but ideas for Windows are also welcome.
A few ideas I've come to on my own.
The most trivial solution would be to collect the make output and process it afterwards. However, certain projects have makefile rules that give very concise output instead of a verbose one, so it might require some tinkering with Makefiles, which is not always desirable. Parallel builds may also have their console output mixed up and be impossible to parse. Adaptation to other build systems (CMake) will not be trivial either, so it is far from being the most convenient way.
Running make under ptrace and recording all invocations of the exec* system calls that correspond to starting new applications, including compiler invocations. One would then need to parse ptrace's output. This approach is build-system and language agnostic (it will catch all invocations of any compiler for any language) and should work for parallel builds. However, it seems more technically complex, and the performance degradation to the build process from ptrace sitting on make's back is unclear. It will also be harder to port to Windows, as the program-tracing API is somewhat different there.
The proprietary static analyzer PVS-Studio, for C++ on Windows (and recently Linux, AFAIK), seems to implement the second approach; however, details on how they do it are welcome. If there are other IDEs/tools that already have something similar to what I need, please share information on them.
There are the following ways to gather information about the parameters of compilation in Linux:
Override the CC/CXX environment variables. This is used by the scan-build utility from Clang Analyzer. The method works reliably only with simple Make-based projects.
procfs - all the information on processes is stored in /proc/PID/... . Reading from disk is slow, so you might not be able to receive information about all the processes of a build.
The strace utility (ptrace library). The output of this utility contains a lot of useful information, but it requires complicated parsing, because the information is written in an interleaved order. If you do not use many threads to build the project, it is a fairly reliable way to gather information about the processes. It's used in PVS-Studio.
JSON Compilation Database in CMake. You can get all the compilation parameters using the definition -DCMAKE_EXPORT_COMPILE_COMMANDS=On. It is a reliable method if the project does not depend on non-standard environment variables. A CMake project can also be written with errors and emit an incorrect JSON database, even though this doesn't affect the project build. It's supported in PVS-Studio.
The Bear utility (function substitution using LD_PRELOAD). You can get a JSON Compilation Database for any project. But without the environment variables it'll be impossible to run the analyzer for some projects. Also, you cannot use it with projects that already use LD_PRELOAD during the build. It's supported in PVS-Studio.
Collecting information about compiling in Windows for PVS-Studio:
Visual Studio API to get the compilation parameters of standard projects;
MSBuild API to get the compilation parameters of standard projects;
Win API to get information on any compilation process, as, for example, the Windows Task Manager does.
VERBOSE=true is a common make option to display all commands with all their parameters. It works with CMake-generated makefiles, for instance.
You might want to look at Coverity. They attach their tool to the compiler to get everything that the compiler receives. You could also override the CC or CXX environment variables to first collect everything and then call the compiler as usual.
My current workflow when developing Apps or programs with Java or C/C++ is as follows:
I don't use any IDE like IntelliJ, Visual Studio, ...
Using Linux or OS X, I use vim as my code editor. When I build with a makefile or (when in Java) gradle, I run :!make and wait for the compiler and linker to create the executable, which is then run automatically.
In case of compilation errors, the output of the compiler can get very long and the lines exceed the columns of the console. So everything gets messy, and sometimes it takes too much time to find out what the first error is (which often causes all the following compile errors).
My question is: what is your workflow as a C++ developer? For example, is there a way to generate a nicely formatted local HTML file that you can view/update in your browser window? Or other ideas?
Yes, I know. I could use Xcode or any other IDE. But I just don't want.
Compiling in vim with :!make instead of :make doesn't make any sense -- :make is even one of the early features of vim. The former expects us to have good eyes. The latter displays compilation errors in the quickfix window, which we can navigate. In other words, there is no need for an auxiliary log file: we can navigate compilation errors even in (a couple of) editors that run in a console.
I did expand on a related topic in https://stackoverflow.com/a/35702919/15934.
Regarding compilation, there are a few plugins that permit compiling in the background. I've added this facility to build-tool-wrapper lately (it requires vim 7.4-1980 -- and it's still in a development branch at this time). This plugin also permits me to easily filter errors in the standard library with the venerable STLfilt, and to manage several build configurations (each in a separate directory).
We have developed an application which has many C++ files. On Linux we were able to execute it.
We have U-Boot for an MPC8548E-based custom board.
Now we have decided to go without an OS. So, I tried two methods to execute the C++ application on U-Boot.
1.) Compiled the C++ application with g++ (the C++ cross-compiler) and tried to link it with U-Boot, which is compiled using gcc (the C compiler). But I am unable to do that:
The error message I am seeing is:
/ToolChain/host/usr/powerpc-buildroot-linux-uclibcspe/bin/ld: failed to merge target specific data of file...
2.) Tried to compile my application along with U-Boot in the same way the standalone examples are done. I created a separate directory in the U-Boot tree and tried to compile it there. C++ applications do not get built, but I am able to build C applications.
My main intention is to execute C++ applications directly on U-Boot.
How can I do that?
Sorry, I believe it would be more work to get C++ on U-Boot than it would be for you to, e.g., go with an OS.
The short answer, from U-Boot tech lead:
> Does u-boot support C++ example programs and if so, how can I build one?
U-Boot does not support C++.
Some of the technical background for this: U-Boot runs on bare metal. A U-Boot standalone application would link to U-Boot's exported functions that the application needs. For example, when your C++ application uses 'new', your C++ library needs to call malloc, which in this case would mean going through the U-Boot exported function interface (refer to doc/README.standalone).
For the general topic of C++ on bare metal, I have not done that, but I found Miro Samek's tutorial, which may shed some light. I think it would be difficult. Porting Linux starts to look good in comparison.
I need to make a portable application that will run on Windows, Linux and MacOS with no installation required. It must be a single executable file with no other library files (.dll, .so, ...). I will use ANSI C and recompile the project for each platform. I want to use Lua scripts, so I must embed a Lua interpreter in my code. I need networking and some other modules, but I know Lua already has modules for those purposes, so I will use them instead of writing my own.
How can I link all of that together - the Lua interpreter and Lua modules (LuaSocket, for example) - into one executable file that will load a .lua script? Lua's "require" system expects to find a .dll, so I am wondering what I should do: is it enough just to call the functions without a "require" statement?
You most certainly can do that (and it is not wrong!), although it is not trivial. The Lua core is made for embedding, to the point that you can just include the Lua sources into your own project and it "just works" :).
The deal is slightly different with modules - not many of them are suited for direct embedding. For example, this has been tried successfully for LuaSocket before and has also been asked about here. The basic idea is to embed the sources of MODULE into your project and insert the luaopen_MODULE function into package.preload['MODULE'], so that require can pick it up later.
One way to go is to look at the sources of projects that already embed Lua and other libraries, like LÖVE, MurgaLua and Scrupp.
If the goal of having a single executable with no external libraries turns out not to be achievable, you can loosen up a bit and go for a portable application - an application that carries all its dependencies with it, in a single directory, independent of the system. This is what LuaDist was designed for - you use it similarly to LuaRocks to install Lua packages. The difference is that these packages can be installed/deployed into a separate directory, where all the necessary dependencies are installed too. This directory (a "dist") is fully independent, meaning you can move it somewhere else and it will still work.
Also, I dislike the idea of an application that requires installation (because it puts files all around my system) - uninstallation should be just removal of a directory :)
I believe you cannot do that (and I think it is wrong to do that). An executable is operating-system and machine specific (on some systems, like Mac OS X, there are fat binary executables, which are a mix of various machine-specific variants for the same operating system).
The only way to have a system- and machine-"independent" program is essentially to target it at some single common "virtual machine" (in the broadest sense). In your case this VM is the Lua VM (it could be the Java VM for others, etc.). But you have to assume that your users have it, or provide one, which is machine & system specific.
And I would personally dislike the idea of an application which is not installable (because it is then not easily uninstallable).