How can I use ocamlbrowser with opam packages? - ocaml

My system OCaml installation also includes the /usr/bin/ocamlbrowser executable. Is there a way I can use it to browse packages I installed with opam?
So far the closest I could get was using the -I flag to add extra directories to the search path, but I don't know how to tell it to search all the package folders (the -I flag only adds one at a time), and I don't know how to access the source code for the functions (ocamlbrowser only finds the .mli files, not the .ml files):
ocamlbrowser -I ~/.opam/system/lib/core -I ~/.opam/system/lib/fieldslib

OCamlBrowser is rather legacy, and you need to specify all the include directories manually.
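One way to avoid typing every -I by hand is to let ocamlfind generate them. A rough sketch, assuming ocamlfind is installed in the same switch (this only covers the compiled interfaces in the lib dirs, not the sources):
# build one "-I <dir>" pair per installed findlib package, then launch ocamlbrowser with all of them
pkgdirs=$(ocamlfind list 2>/dev/null | awk '{print $1}' | while read pkg; do dir=$(ocamlfind query "$pkg" 2>/dev/null) && printf -- '-I %s ' "$dir"; done)
ocamlbrowser $pkgdirs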
For code browsing, the ~/.opam/<switch>/lib/* dirs are not sufficient, since they usually lack the source code (the .ml and .mli files).
You should instead use the build directories, ~/.opam/<switch>/build/<packagename>/..., which keep the source code of the installed OPAM packages. For that you need to set the OPAMKEEPBUILDDIR environment variable or run opam install --keep-build-dir.
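For example (a sketch; the package name is just an illustration):
# either set the environment variable once...
export OPAMKEEPBUILDDIR=1
# ...or pass the flag per install; both keep the build tree (with the .ml sources) under ~/.opam/<switch>/build
opam install --keep-build-dir core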
AFAIK, currently (2014/09) there is no alternative that is 100% compatible with OCamlBrowser and works fully with the OPAM/OCamlFind ecosystem, but we do have ocp-index, ocp-browser and http://ocamloscope.herokuapp.com/ . However, things are evolving rapidly around OPAM, and newer tools may be released.

Related

How to install lapack++ on linux

I'm writing a program in C++ that requires solving linear systems. I've looked around and found that LAPACK++ gives me functions to achieve that end. However, I've been having a lot of trouble just getting it installed.
I have the following files:
lapack.lib
blas.lib
libf2c.lib
clapack.h
f2c.h
Those files were given to me some time ago to use with Microsoft Visual Studio 2010. From what I've read I need at least the lapack.lib and blas.lib libraries; however, I have no idea where to put them, or what to install.
I've searched the web, but all the information I've gathered only got me more confused. If someone could point me in the right direction I'd highly appreciate it.
Thanks.
PS1: Take into account that I'm very new to Linux.
PS2: Do I have to install LAPACK++ or will LAPACK do? Because there seems to be more information about the latter than the former.
First, you may install liblapack-dev and libblas-dev (dev means the libraries plus the include files).
Check first whether they are already installed; they likely are if you have files such as /usr/lib/liblapack.a and /usr/lib/libblas.a.
To install liblapack-dev and libblas-dev, you may use the package manager called Synaptic. According to http://ubuntuforums.org/showthread.php?t=1505249,
"Go to: System -> Synaptic -> Administration -> Package Manager ->
search on lapack (and/or blas), and mark for installation:
libblas3gf
libblas-doc
libblas-dev
liblapack3gf
liblapack-doc
liblapack-dev
-> Apply "
(This is the usual way to install software on Debian or Ubuntu if you are root.)
The package manager will ask for the administrator ("root") password.
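If you prefer the command line to Synaptic, the equivalent on Debian/Ubuntu should be something like:
sudo apt-get install liblapack-dev libblas-dev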
Then, you may install lapack++. According to http://lapackpp.sourceforge.net/ , open a terminal in the extracted lapack++ source directory and type (pressing Enter at the end of each line):
./configure --prefix=/your/install/path
make
make install
If you face something like "permission denied" after typing make install, it is probably because you do not have the right to modify the install folder. You may use sudo make install to do it as administrator, but you really need to trust the origin of the software to do so (security...). The best advice may be to change /your/install/path to something like /home/mylogin/softs/lapackpp, and then add -L /home/mylogin/softs/lapackpp/lib -I /home/mylogin/softs/lapackpp/include when building and linking your code. -I means "add to the include search path" and -L means "add to the library search path". You still need to trust the software, but it is less risky for the operating system than sudo.
To build your code, go to the right folder and type something like
gcc main.c -o main -L /home/mylogin/softs/lapackpp/lib -I /home/mylogin/softs/lapackpp/include -llapackpp -llapack -lblas -lm
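One more step that often trips people up: if lapack++ was built as shared libraries under that non-standard prefix, the dynamic loader also has to find them at run time. A sketch, reusing the example path from above:
# let the loader find the locally installed lapack++ at run time
export LD_LIBRARY_PATH=/home/mylogin/softs/lapackpp/lib:$LD_LIBRARY_PATH
./main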
If you are not "root", download BLAS/LAPACK and build them yourself! It is exactly the same procedure as for lapack++. But when you then install lapack++, you may need to pass options to ./configure to tell it where these libraries are.
Tell us what happened !
Bye,
Francis
The .lib files are operating system specific. They are useless on Linux. You need a Linux build.
I assume we are talking about lapack++ hosted on sourceforge, yes?
In that case:
Whoever gave you the binaries (.lib files) is obliged to give you the sources if you ask them.
You can get the latest sources on the above site.

How to prevent accidentally including old headers?

Build systems frequently have separate build and install steps. Sometimes, older headers from a previously installed version are still present on the operating system, and those headers may be picked up instead of the headers in the source tree. This can lead to very subtle and strange behavior that is difficult to diagnose, because the code looks like it does one thing and the binary does another.
In particular, my group uses CMake and C++, but this question is also more broadly relevant.
Are there good techniques to prevent old headers from being picked up in a build?
1. Uninstall
Uninstall the package from CMAKE_INSTALL_PREFIX while hacking on the development version.
Pros: very effective
Cons: not flexible
2. Custom install location
Use a custom location for the installed target, and don't add that custom install prefix to the build's search paths.
Pros: very flexible
Cons: if every package uses this technique, tons of -I options get passed to the compiler and tons of <PACKAGE>_ROOT variables get passed to the CMake configure step.
3. Include priority
Use header search priority. See the include_directories command and its AFTER/BEFORE suboptions.
Pros: flexible enough
Cons: sometimes it's not a trivial task if you have a lot of find_package/add_subdirectory commands; it's error-prone, and the errors are not detected by autotesting.
BTW
Conflicts can occur not only between the build and install directories, but also within the install directory itself. For example, version 1.0 installs A.hpp and B.hpp, while version 2.0 installs only A.hpp. If you sequentially install the 1.0 and 2.0 targets, some #include <B.hpp> errors will not be detected locally. This kind of error is easily detected by autotesting (the clean environment of a CI server doesn't have the old B.hpp file from version 1.0). An uninstall command can also be helpful.
The exact same problem was recently fixed in the shogun package. You basically need to have your source folders (the ones containing your header files) passed with -I to gcc before the system folders. You don't have to pass the system folders with -I to gcc at all.
Have a look at the search path here. You might need to have a proper way of including your header files in your source code.
I believe this is the pull request that fixed the problem.
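A quick way to verify which copy of a header the compiler actually picks up, and in which order the include directories are searched, is something along these lines (gcc/clang options; the file and directory names are just examples):
# -H prints every header the compiler opens, in inclusion order
g++ -H -I include -c src/foo.cpp 2>&1 | grep B.hpp
# with CMake + make, rebuild verbosely to see the exact -I order on the compiler command line
cmake --build build -- VERBOSE=1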

Installing GMP on Windows with cygwin

I am new to C++ and I have to handle large integers, so I have to install GMP through Cygwin.
Any documentation I can find on installing this already assumes that you know what you are talking about, and I really don't.
Anyway, I got the right .tar or whatever, extracted it properly, and now any website I see says to run ./configure --prefix=${gmp_install}...
What in the world is gmp_install? And what directory do I run configure from? Huh? I can run it from my little Cygwin terminal, but it just says no such file.
Next, I am supposed to type make. From where?
Help...
Welcome to StackOverflow (SO).
The source directory of GMP should contain a file called configure. This is the script which you have to execute to "configure" the build system for your environment. During configuration, Autotools (the build system used to build GMP) gathers information about your environment and generates the appropriate makefile. Gathering information includes things like: understanding that you are on Windows, that you are using Cygwin, that your compiler is GCC and its version is x.y.z, etc. All these steps are important for a successful build.
You can specify a lot of different options to this configure script to tweak the configuration process. In your case, you specify the --prefix option, which determines the installation directory, i.e. the directory where you want the built and ready-to-use GMP distribution to reside. For example:
./configure --prefix=/D/Libraries/GMP
will configure the build system to install the GMP binaries to D:\Libraries\GMP directory.
Assuming that the GMP source directory (the one you extracted from *.tar) is located at say D:\Users\Me\Downloads\GMP, in order to build and install GMP you should do the following:
cd /D/Users/Me/Downloads/GMP
./configure --prefix=/D/Libraries/GMP
make
make install
NOTE: The make command will actually execute the makefile (which was generated by configure script) I've mentioned earlier. This file describes the process of building and installing GMP on your system.
NOTE: ${gmp_install} is nothing but an environment variable. For instance, you could do:
export gmp_install=/D/Libraries/GMP
./configure --prefix=${gmp_install}
This can be useful, for example, when you have to use the same path in multiple places and don't want to type it every time. There are other cases when this is useful, but for that you'll have to learn more about environment variables, what they are for, and Bash scripting in general. However, all this goes far beyond the answer to your question.
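Once make install has finished, a quick sanity check from the Cygwin terminal is to compile a tiny test program against the freshly installed copy (the prefix is the example one from above; -lgmpxx is only needed if you use the C++ interface):
# compile and run a small test program against the local GMP install
g++ test.cpp -o test -I /D/Libraries/GMP/include -L /D/Libraries/GMP/lib -lgmpxx -lgmp
./test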
You'll have to spend quite some time to understand all these things and how they fit together, and you'll probably have to ask more questions here on SO, as understanding all this on your own as a beginner might be very challenging.

Install gcc on linux with no root privilege

I have access to computer in a public library and I want to try out some C++ and maybe other code. Problem is that there is no g++ installed and I can't install it using packages because I have no root access. Is there a "smart" way to make a full environment for programming in a home folder?
I have gcc installed (I can compile C code). Also, I have a home folder that is consistent. I don't know where to find precompiled g++, I found only source but I don't know what to do with it. I tried to ask them to install this but it didn't work :)
If you want to install it as a local user, GNU GSRC provides an easy way to do so.
Link: http://www.gnu.org/software/gsrc/
After configuration, simply specify the following commands:
cd gsrc
make -C pkg/gnu/gcc
make -C pkg/gnu/gcc install
The second step could also be changed to speed up the build on an N-core system:
make -C pkg/gnu/gcc MAKE_ARGS_PARALLEL="-jN"
You can run the configure script with the --prefix parameter: ../gcc-4.5.0/configure --prefix=/home/foo/bar. Since it is very likely that the C++ standard library is different from the one on your system, you have to set export LD_LIBRARY_PATH=/home/foo/bar/lib before you can start a program compiled by this compiler.
Once, a long time ago (1992 or so), I went through something similar to this when I bought a SCO system with no development environment. Bootstrapping it up to having a full development environment was a gigantic pain, and not at all easy. Having library header files or gcc on a system would make your job a whole lot easier.
It depends a lot on just how obnoxious the library has been about what kinds of things are installed. If there is no gcc there, your job becomes a bit harder. If there are no header files for glibc there, your job is a LOT harder.
Also, do you get an account on the system so you have a home folder that's consistent from login to login?
If you have no gcc there, you need to find a pre-compiled binary of gcc/g++ and install it somewhere. If you have no header files there, you need to find copies of those and put them on the system.
There is no 'standard' way of installing gcc in your home folder though. All of the solutions are going to have some manner of hand-rolling involved.
Have you asked the librarians if they can change what's installed because you want to learn a bit of programming and only have access to their computers to do it with? That might well be the easiest solution.
From your comment it seems that you do have gcc, and if you can compile C code, you have the library header files. So now it's a matter of actually compiling your own version of g++. You could probably find a way to entice the system's package manager into installing a binary package somewhere other than a system folder. I think that solution is less fun than compiling your own, and there may also be subtle problems, as the installed package may expect to find things in particular places and not find them there.
First thing to do is to make sure you've downloaded the right source for the gcc package. The place to find that is the GNU United States mirror page. You want to find the gcc-4.5.0.tar.bz2 or gcc-4.5.0.tar.gz file on the mirror site you choose. It will likely be in a gcc directory, and a gcc-4.5.0 sub-folder.
After you have that downloaded, you should untar it. In general you shouldn't build gcc in the folder you untar it into, so create a sibling folder labeled gcc-build to actually build it in. Then, from inside gcc-build, the command you want is ../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages='c c++'.
gcc does require some other packages be installed in order to be able to compile itself. You can use the same --prefix line for these packages to install them in the same place. The gcc website has a list of pre-requisite packages.
$HOME/.local is sort of the standard accepted place for per-user installs of things.
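After make install completes, the freshly built compiler still has to be put on your PATH, and programs it builds need to find its libstdc++ at run time. A sketch:
# make the locally installed toolchain visible in this shell session
export PATH=$HOME/.local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/.local/lib64:$HOME/.local/lib:$LD_LIBRARY_PATH
g++ --version   # should now report the locally built gcc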
If you have fakeroot, you can use that to set ~/some-path as root to install the packages from. Alternatively, you can setup a chroot environment to do the same.
Given this, you can then use dpkg -i package.deb to install the gcc package(s) on your system. You will need to download and install each package individually (e.g. from the debian website) -- at least binutils, glibc, linux-headers and gcc.
If you are on another system, you will need to get the right packages for that system and install them using the corresponding package manager.
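A related trick that avoids fakeroot entirely is to just unpack each downloaded .deb into a directory you own with dpkg -x (the file names below are only examples):
# extract the packages into a private prefix; no root needed
dpkg -x gcc-4.5_4.5.0-1_amd64.deb $HOME/pkgroot
dpkg -x g++-4.5_4.5.0-1_amd64.deb $HOME/pkgroot
export PATH=$HOME/pkgroot/usr/bin:$PATH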

Linux programming environment configuration

The other day I set up an Ubuntu installation in a VM and went to gather the tools and libraries I figured I would need for programming mostly in C++.
I had a problem though, where to put things such as 3rd party source libraries, etc. From what I can gather, a lot of source distributions assume that a lot of their dependencies are already installed in a certain location and assume that a lot of tools are also installed in particular locations.
To give an example of what I currently do on Windows: I have a directory where I keep all source code, C:\code. In this directory, I have a directory for all 3rd party libraries, c:\code\thirdparty\libs. This way I can easily set up relative paths for all of the dependencies of any projects I write or come across and wish to compile. The reason I am interested in setting up a Linux programming environment is that it seems that both the tool and library dependency problems have been solved there efficiently, making it easy, for example, to build OpenSSH from source.
So what I am looking for is a decent convention for organizing my projects and libraries on Linux that is easy to maintain and easy to use.
Short answer: don't do a "heaps of code in local dir" thing.
Long answer: don't do a "heaps of code in local dir" thing, because it will be a nightmare to keep up to date, and if you decide to distribute your code, it will be a nightmare to package it for any decent distribution.
Whenever possible, stick to the libraries shipped with the distribution (Ubuntu has 20000+ packages; it ought to have most of what you'll need prepackaged). When there is no package, you can install by hand to /usr/local (but see above about upgrades, and DON'T do that).
Better, use "stow" or "installwatch" (or both) to install to per-library dirs (/usr/local/stow/libA-ver123) and then symlink the files from there into /usr/local or /usr/ (stow does the symlinking part). Or just package the lib for your distribution.
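A minimal sketch of the stow workflow described above (library name, version and prefix are made up):
# install the library into its own directory under /usr/local/stow ...
./configure --prefix=/usr/local/stow/libA-1.2.3
make
sudo make install
# ... then symlink it into /usr/local (stow's default target is the parent of the stow dir)
cd /usr/local/stow && sudo stow libA-1.2.3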
For libraries/includes...
/usr/local/lib
/usr/local/include
Where possible, code against the system/distro-provided libraries. This makes it easiest to ship a product on that distro.
However, if you are building a commercial application, the sheer number of Linux distro flavors can mean you have to maintain a plethora of different application builds, one per distro. That isn't necessarily a bad thing, as it means you can integrate more cleanly with each distro's package management system.
But in the case where you can't do that, it should be fairly easy to download the source of each 3rd party dependency and integrate the building of that dependency into a static lib that is linked into your executable. That way you know exactly what you're linking against, but it has the downside of bloating your executable size. This can also be required if you need a specific library (or version) not provided by the distro.
If you want your code to build on as broad a variety of Unix systems as possible, you're probably wise to look into GNU autoconf and automake. These help you construct a configure script and makefile for your project so that it will build on practically any Unix system.
Also look into pkg-config, which is now used quite a bit on Linux distributions to help you include and link against the right libraries (for libs that support pkg-config).
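For example, compiling and linking against a library that ships a .pc file is then a single command (openssl here is just an illustration):
# pkg-config expands to the right -I, -L and -l flags for the library
g++ main.cpp -o main $(pkg-config --cflags --libs openssl)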
If you're using subversion to manage your source there is a "convention" that most subversion repositories use to manage their own code and "vendor" code.
Most svn repositories have a "vendor" tree (that goes along with the trunk, branches & tags trees). That is the top for all 3rd party vendor code. In that directory you have directories for each library you use. Eg:
branches/
tags/
trunk/
vendor/somelib
vendor/anotherlib
Beneath each of these libs is a directory for each library version and a "current" directory for the most up-to-date version in your repository.
vendor/somelib/1.0
vendor/somelib/1.1
vendor/somelib/current
Then your project's tree should be laid out something like this:
trunk/source # all your code in here
trunk/libs # all vendor code in here
The libs directory should be empty but it will have svn:externals meta data associated with it, via:
svn propedit svn:externals trunk/libs
The contents of this property would be something along the lines of (assumes subversion 1.5):
^/vendor/somelib/current somelib
^/vendor/anotherlib/1.0 anotherlib
This means that when you check out your code, Subversion also checks out your vendor libraries into your trunk/libs directory, so that when checked out it looks like this:
trunk/source
trunk/libs/somelib
trunk/libs/anotherlib
This is described (probably a whole lot better) in the Subversion Book. Particularly the section on handling vendor branches and externals.
Ubuntu = Debian = apt-get goodness
Start with Linux Utilities:
%> sudo apt-get install util-linux