I have been using ClojureScript on Windows since it first came out, and I have noticed that Rich Hickey and others are making occasional updates to it. What is the easiest way to make sure I have the latest changes? Is just copying over the src directory from here enough:
https://github.com/clojure/clojurescript/tree/master/src
?
The ClojureScript setup on Windows is a little more cumbersome than on Unix-based systems (including Mac OS X). As you said, the best bet is to follow the initial setup instructions from the ClojureScript Wiki and then update the contents of the src directory from time to time. Occasionally you might want to check if the .bat files in the bin and script directories as well as the dependencies listed in the script/bootstrap shell script have changed.
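For illustration, the manual Windows update might boil down to something like this (the directory names are made up; adjust them to wherever you unpacked the download and your ClojureScript install):
rem after downloading and unpacking the latest repository snapshot from GitHub
xcopy /E /I /Y C:\Downloads\clojurescript-master\src C:\clojurescript\src
rem also diff bin\*.bat, script\*.bat and script\bootstrap against the new copy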
On Unix-based systems the process is easier. Initially, clone the ClojureScript git repository:
git clone https://github.com/clojure/clojurescript.git
From time to time, update the contents of your local clone and re-download the dependencies:
git pull
./script/bootstrap
If the manual process bothers you enough, you might want to consider installing Cygwin to get a Unix environment on your Windows machine, but of course that's a matter of personal preference.
Alternatively, you can try to develop a Windows version of the bootstrap script. I'm sure the ClojureScript team would be happy to include it in the distribution.
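If you do attempt a Windows bootstrap, a minimal skeleton might look like the following (the URL is a placeholder; the real dependency URLs are the ones listed in script/bootstrap, and it assumes a command-line unzip tool such as 7-Zip is on the PATH):
rem script\bootstrap.bat -- hypothetical Windows port of script/bootstrap
mkdir lib
rem repeat the next two lines for each dependency listed in script/bootstrap
powershell -Command "(New-Object Net.WebClient).DownloadFile('http://example.com/dependency.zip', 'dependency.zip')"
7z x dependency.zip -olib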
I contribute to an open source project and would like to use install4j to build installers. I further would like to use Travis-CI as my build server, using their free open-source offer.
Travis runs every build in a fresh VM, and install4j requires a command-line compiler to be present.
Thus, I need to provide/install install4j itself in my build script.
A crude way to go about it would be to fetch an archive from the ej-technologies website every time I build, unzip it and then compile. This requires an OS-switch and a load of archive-handling, depending on the system in use.
I hope there is a better way. How would you go about it?
[Making #IngoKegel's comments into an answer]
Downloading a fresh copy of Install4J from the ej-technologies website is the only way there currently is. This has to be done separately for each OS, since there are no platform-independent installers or archives.
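For example, a before_script step in .travis.yml could fetch and unpack the Linux archive on every build; the commands below are only a sketch, and the download URL is a placeholder for the actual link on the ej-technologies download page:
wget -O install4j.tar.gz "http://download.ej-technologies.com/install4j/install4j_linux_X_Y.tar.gz"
tar xzf install4j.tar.gz
export INSTALL4J_HOME=$(ls -d "$PWD"/install4j*)
export PATH="$INSTALL4J_HOME/bin:$PATH"   # the command-line compiler lives in bin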
I'm trying to use the Quadprog++ library (http://quadprog.sourceforge.net/). I don't understand the instructions though.
To build the library simply go through the ./configure; make; make install cycle.
In order to use it, you will be required to include in your code file the "Array.hh" header, which contains a handy C++ implementation of Vector and Matrices.
There are some "configure", and "MakeFile" files, but they have no extension and I have no idea what to do with them. There are also some ".am", ".in" and ".ac" extensions in the folder.
Does this look familiar to anyone? What do I do with this?
(Edit: On Windows.)
This package is built using the Autotools. The files you mention (*.am, *.in, ...) exist because of the tools automake and autoconf.
The Autotools are a de facto standard in the GNU/Linux world. Not everybody uses them, but projects that do make life easier for package and distribution maintainers. Packages built this way should also be portable to any POSIX system.
That said, I'm guessing that you are using a non-Unix machine, such as Windows, so the configure script is not directly runnable on your system. If you insist on keeping Windows, which you probably will, your options are:
Use MinGW and MSYS to get a minimal build environment compatible with the autotools.
Use Cygwin and create a POSIX-like environment on your Windows machine.
Create a VS project, add all the sources of the library to it, then compile and fix the errors that may arise, as if the code had been written by you.
Search for someone who already did the work and distributes a binary DLL, or similar.
(My favourite!) Get a Linux machine, install a cross-compiler environment to build Windows binaries, and do configure --host i686-mingw32 ; make (sketched below).
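For illustration, on a Debian/Ubuntu box option 5 might look roughly like this (package names and host triplets vary between distributions, so treat it as a sketch):
sudo apt-get install mingw-w64          # MinGW cross-compiler for Windows targets
./configure --host=i686-w64-mingw32     # build Windows binaries on Linux
make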
Those instructions describe how to build a program delivered as a tarball on Linux. For background, take a look at Why always ./configure; make; make install; as 3 separate steps?
This can be confusing at first, but here you go. Type these in as shown below:
cd <the_directory_with_the_configure_file>
./configure
At this point, a bunch of stuff will scroll past on the screen. This is the configure script (generated by Autoconf) running; for more details, see http://www.edwardrosten.com/code/autoconf/index.html
When it's done, type:
make
This initiates the build process. (To learn more about GNU make, check out Comprehensive gnu make / gcc tutorial). This will cause several build messages to be printed out.
When this is done, type:
sudo make install
You will be asked for the root password. If this is not your own machine (or you do not have superuser access), then contact the person who administers this computer.
If this is your computer, type in the root password and the library should install in /usr/local/lib/ or something similar (watch the screen closely to see where it puts the .so file).
The rest of it (include the .hh file) seems self-explanatory.
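For example, once make install has finished, compiling your own program against the library might look something like this (the exact library name, -lquadprog here, is a guess; check what actually landed in /usr/local/lib):
g++ -I/usr/local/include myprogram.cpp -L/usr/local/lib -lquadprog -o myprogram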
Hope that helps!
I have found a problem with the test environment in a C++ project.
We have a machine which downloads the code from the version control system, builds it and executes the unit tests; nothing new.
The problem arises when we add a new dependency to our project. We are developing a lot of features at the same time, so this happens relatively often. When it does, we have to notify the testers and give them an easy way to reproduce the compilation environment ...
I was wondering if there is an easier way to go through this ... I don't know, some tool like virtualenv or buildout for Python ...
I have been searching on Google, but with no luck.
Any help will be appreciated.
You can always add all of the dependencies to the revision control system and provide automated scripts that will install the required subsystems. Where I work, if you just download the current version from the repository, you can build in one step an ISO image that can be installed by testers in any computer they want. The image contains everything from the OS up to the application.
Depending on your particular situation, you might want to start with smaller steps, like adding the dependencies to the repository and having the testers check there whether any new file appears or changes version.
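As a small illustration, the automated script can be as simple as something like the following, checked in next to the code (repository URLs and dependency names are made up):
#!/bin/sh
# setup_deps.sh -- fetch the exact dependency versions this revision needs
svn export http://svn.example.com/thirdparty/libfoo-1.2 deps/libfoo
svn export http://svn.example.com/thirdparty/libbar-2.0 deps/libbar
# testers run this once after every update to reproduce the build environment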
No ready-made tool, AFAIK, except maybe CMake, which can control things like that for you.
For C++, it's fairly easy to manage "by hand" since you can set LIB, LIBPATH and PATH environment variables to carefully selected directories. No site.py, eggs, .pth files and the like as with Python.
We do this at our shop: we keep the build/development environment tightly defined and have everything in revision control (mostly scripts that download huge zips of prebuilt libs and unpack them to the right places).
Small libs are copied to common dirs; larger ones get their own entry in the env vars.
This works equally well for Python and Java. Haven't tried other languages...yet. :)
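To make the env-var approach concrete, here is a minimal sketch of such a setup script for Windows/MSVC (the directory names are made up; the point is only that everything is pinned to carefully selected directories):
rem setenv.bat -- point the toolchain at the prebuilt libs kept under revision control
set DEPS=C:\work\project\deps
set LIB=%DEPS%\somelib\lib;%LIB%
set LIBPATH=%DEPS%\somelib\lib;%LIBPATH%
set PATH=%DEPS%\tools\bin;%PATH%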
I'm working on a simple command line tool in C++. Half as a fun learning-process thing, and half to distribute to friends/colleagues etc.
I assume the easiest way to make it distributable is just packaging the source code with an installation script. Can anyone point me to a good tutorial for setting that up?
In other words, what must a script include to compile the program, put the files in good places, and make it executable from any directory from the command line?
E.g. I know the compiled binary should go in /usr/local/bin/, but if I'm writing to and accessing a text file (for instance), where should that go? What about a file that stores settings/configuration parameters?
I'm on Mac OS X, so that would be the starting point, but portability to Windows, Linux, etc. would be great.
You can use CMake to make a cross-platform build system, and you can use its CPack feature (Wiki here) in order to generate binary-only packages. First you create a build script that builds and installs on each platform (which CMake makes as easy as can be expected). You then run CPack to generate a package which just includes your binaries.
There is a good tutorial that covers the basic cmake process (including install commands) here.
CMake is generally considered simpler than autoconf (and has better Windows support), but each has its own strengths.
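To give a feel for the workflow, the build-and-package cycle with CMake/CPack boils down to roughly this, assuming your CMakeLists.txt already has install() rules and an include(CPack) line:
mkdir build && cd build
cmake ..
make
cpack -G TGZ    # packages only what your install() rules put in place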
Do not assume that the user installing the program has root access. Prompt, or provide a command-line option, like --install-prefix=/home/user/apps, to specify where to install.
I HATE programs that install shit in /usr/local. If you do that, you'd best wrap it up in an .rpm or .deb or whatever the platform package is so that your app can be cleanly uninstalled.
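Supporting that usually just means honoring a user-chosen prefix, for example:
./configure --prefix=$HOME/apps && make && make install
# or, with CMake:
cmake -DCMAKE_INSTALL_PREFIX=$HOME/apps .. && make install
No root access is needed, and uninstalling is just deleting that directory.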
I would suggest checking out autoconf
What we need in our firm is a sort of release management tool for Linux/C++. Our products consist of multiple libraries and config files. Here I will list the basic features we want such system to have:
Ability to track dependencies and to easily bump the major version of libraries whose dependencies had their major version increased. It should build some sort of dependency graph internally, so it knows which components are affected by an update.
Know how to build the products it handles, either via a specific build file or, even better, by reading and understanding makefiles.
Work with SVN so it can check for new releases there and do the build.
Generate some installers - in rpm or tar.gz format. For that purpose it should be able to understand the rpm spec file format.
Currently we are working on such a tool, and it is already pretty usable. However, I believe that our task is not unique and there should be some tool out there which does the job.
You should look into using a mix of Hudson, Maven (for build management), Ivy (for dependency management) and Archiva (for artifact archival).
Also, if you are looking into cross-compilation, take a look at Make Project Creator (MPC) and Bakefile.
Have fun!!
In the project I'm currently working on we use cmake and other Kitware tools to handle most of these issues for native code (C++). Answering point by point:
The cmake scripts handle the dependencies for our different projects. We have a dependency graph, but I don't know if it is a home-made script or functionality that cmake provides.
cmake generates the makefiles for the platform at hand. It also generates projects for Eclipse CDT and Visual Studio if asked to, which is handy during development.
cmake comes with a couple of tools, ctest and cdash, that we use to do the daily build and see how the tests are doing.
To create the installers, cmake has cpack. From just one script it can generate tar.gz, deb or rpm files on Linux, or an automatically generated NSIS script to build installers on Windows (see the sketch after this list).
For Java code we use Maven and Hudson, which have already been mentioned here.
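As referenced above, generating the different package formats from the same CPack configuration is a matter of picking the generator (availability depends on what is installed on the build machine):
cpack -G TGZ    # tar.gz archive
cpack -G RPM    # .rpm package (needs rpmbuild available)
cpack -G DEB    # .deb package
cpack -G NSIS   # Windows installer, when run on Windows with NSIS installed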
Take a look at this article from DDJ, in which a more robust build system concept (than make) is presented and implemented. Not sure it will fit well to your requirements, but it's the closest I've ever seen. I was looking for the same thing months ago, and then I discovered the article.
http://www.drdobbs.com/architect/218400678
Maven has a native code plugin. I don't think it'll do everything you want, but it's good at tracking version numbers of dependencies, will build artefacts and it'll work with your VCS.
No idea
cmake/scons: I have used cmake, but I don't exactly love it; I have heard really good things about scons, though. scons is Python-based, so you need to have Python installed on the build/dev machines.
I use Hudson, which has a plugin to fetch from svn. It performs intelligently in general, and in particular builds only if some file has changed in an svn update. Hudson is easy to get started with. Hudson is java-based and is pretty popular with the Java community. This means it is quite cross-platform, but you need to have JRE installed on the build machine.
You can probably call some rpm tool from within Hudson.
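For instance, a shell build step in Hudson could simply invoke rpmbuild after the build (the spec file name is hypothetical):
# Hudson "Execute shell" build step -- sketch only
make
rpmbuild -bb packaging/myproject.spec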