Remove a version of cabal - uninstallation

I accidentally installed cabal twice. I did this by:
cabal install cabal-install
sudo cabal install cabal-install
Now I have two versions of cabal and two copies of each package, and it's causing headaches. I want to completely remove cabal (and all its packages) from my system so I can start over without two versions. Is this possible, and if so, how?
(Alternatively, I wouldn't mind removing the root version and keeping only the local user version so that I can get xmobar to work without needing to sudo.)
I guess I should note that I am on Ubuntu 17.10.

You won't actually end up with two copies of each package this way: different cabal-install binaries share the same user package DB just fine. However, different users have different package DBs. For each user, the user package DB lives by default in ~/.cabal/lib/ and ~/.cabal/store/ (the latter for new-build only).
The directory cabal installs binaries into, including those from cabal install cabal-install, is ~/.cabal/bin (so the root user's copy lives under /root/.cabal/bin). You can remove the program from either of those bindirs, but you don't actually need to remove it from anywhere: just make sure you only ever run it as either root or your local user, not both. If you really want, you can clear the entire ~/.cabal/ directory for the root user only.
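To keep only the per-user copy, something like the following should work. The paths assume cabal's defaults, and the removal step is shown commented out because it is destructive:

```shell
# See every copy of cabal on your PATH (there may be one per user):
which -a cabal || echo "no cabal on PATH yet"

# Prefer the per-user install by putting its bindir first:
export PATH="$HOME/.cabal/bin:$PATH"
hash -r   # drop the shell's cached command lookup

# To drop the root user's copy and its state entirely (run as root):
#   rm -rf /root/.cabal /root/.ghc
```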

Related

How to move the OPAM root?

Is it possible to move the OPAM root? Or, to create a "portable" (in the sense of e.g. "firefox portable") version of an OPAM root?
That is, install a bunch of packages via opam --root=/PATH/TO/A, then move /PATH/TO/A to /ANOTHER/PATH/TO/B, and run everything from there.
A naive try led to a small error in "opam config env", where the old path slipped through. Also, some config files (findlib, global-config) had to be adjusted. After fixing that, some stuff worked, but "utop" fails with
Fatal error: exception Not_found
Is this a fundamental limitation, or is a portable OPAM root just a matter of setting the right environment variables after the move?
The compiler itself is almost portable: it is a matter of setting the environment variables OCAMLLIB and CAML_LD_LIBRARY_PATH. The remaining problem is running bytecode executables, which embed the path to ocamlrun; see https://caml.inria.fr/mantis/view.php?id=5950 for details.
As for the packages, there is no universal guarantee; it all depends on what each package does during its build. The majority of "normal" libraries that don't expect to find data files at run time should work, though. As the comments above suggest, it is often faster to take a snapshot of the opam state (a bundle) and rebuild it under the new path.
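Concretely, the environment fix-up after a move looks something like this. The root path and the switch name ("system") are illustrative; adjust both to your setup:

```shell
# Point opam and the compiler at the moved root (paths illustrative):
export OPAMROOT=/ANOTHER/PATH/TO/B
export OCAMLLIB="$OPAMROOT/system/lib/ocaml"        # adjust to your switch name
export CAML_LD_LIBRARY_PATH="$OCAMLLIB/stublibs"

# Refresh opam's own idea of the environment, if opam is on the PATH:
if command -v opam >/dev/null 2>&1; then eval "$(opam config env)"; fi
```

If rebuilding turns out to be easier, opam switch export / opam switch import (available since opam 1.2) give you the package snapshot to replay under the new root.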

django: pip install 'app' installs to the venv?

I read that the venv should not live in the git repo (requirements.txt is supposed to take care of reproducing it), but I have run into a problem...
If I am working in my venv and pip install some app, it goes into the venv's site-packages. Everything still works if I add it to INSTALLED_APPS, but what if I make changes within that directory? Then git doesn't track them, and I am out of luck when I try to push.
What is the proper way to do this?
EDIT: I must be having a huge miscommunication here so let me explain with a concrete example...
I run...
pip install django-messages
This installs django-messages into my venv. I know I can then run...
local: pip freeze > requirements.txt
remote: pip install -r requirements.txt
My problem is that I want to make changes to django-messages/templates or django-messages/views, thus making my copy of django-messages deviate from the one that requirements.txt would install.
I don't see how those changes can live in my venv without being untracked and effectively uneditable.
This is exactly how it is supposed to work. You track what libraries you install via your requirements.txt, which is committed along with your code. You use that file to generate your venv, and the libraries are installed there. You don't include the venv itself in your repo.
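The cycle looks like this in practice. This is a minimal sketch in a throwaway directory; the install lines need network access, so they are shown as comments:

```shell
mkdir -p /tmp/demo-project && cd /tmp/demo-project
python3 -m venv venv                  # the venv itself stays out of git
. venv/bin/activate
# pip install django-messages         # lands in venv/lib/.../site-packages
pip freeze > requirements.txt         # commit this file, not venv/
deactivate

# On the server / a fresh checkout, recreate the env from the file:
#   python3 -m venv venv && . venv/bin/activate
#   pip install -r requirements.txt
```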
Edit: The reason you are finding this hard is that you are not supposed to do it. Don't change third-party projects; you should almost never need to, as they are designed to be configurable.
If you really really find something you need to fix, do as suggested in the comments and fork the app. But this is definitely not something you need to do all the time, which points to the likelihood that you have not understood how to configure the apps from within your own project.
For example, in the case of customising templates, you can simply define the templates inside your own templates dir, rather than editing the ones provided with the app; Django does the right thing and uses yours first.
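The mechanics are just your project's templates directory shadowing the app's own. A minimal sketch using throwaway files in /tmp to stand in for the installed app (the names django_messages and inbox.html are illustrative):

```shell
cd /tmp && rm -rf gr_tpl_demo && mkdir gr_tpl_demo && cd gr_tpl_demo

# The app ships a default template inside site-packages...
mkdir -p site-packages/django_messages/templates/django_messages
echo "app default" > site-packages/django_messages/templates/django_messages/inbox.html

# ...and you shadow it with a copy in your project's templates dir,
# which Django's template loader consults first:
mkdir -p myproject/templates/django_messages
cp site-packages/django_messages/templates/django_messages/inbox.html \
   myproject/templates/django_messages/inbox.html
echo "my customised version" > myproject/templates/django_messages/inbox.html
```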
From your edits it looks like what you want to do is fork the django-messages library. That means installing it into site-packages is a bad idea in the first place, since site-packages is not meant to be version-controlled or edited; it is reserved for third-party packages managed by pip. You have two options. The first is to grab the source from GitHub and put it somewhere your Django app can find it (you may need to fiddle with your Python path), and add that location to git; you might even make your own fork on GitHub. The second is to have pip install an "editable" version with pip install -e git+https://github.com/<user>/<project>.git#egg=<project>. The advantage of the first way is better control over your changes; the advantage of the second is that pip manages the download and install.
That being said, you seem fairly new to the Python ecosystem. Are you REALLY sure you want to maintain your own fork? Is there some missing functionality you want to add to the messages library? You do know that you can override every single template without touching the library's code?
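For the first option, the layout can be sketched like this (run in /tmp so it is self-contained; "yourname" and the vendor/ path are placeholders):

```shell
cd /tmp && rm -rf gr_fork_demo && mkdir gr_fork_demo && cd gr_fork_demo
mkdir -p vendor
git init -q vendor/django-messages   # real life: git clone your fork here
# e.g. git clone https://github.com/yourname/django-messages.git vendor/django-messages

# For the second option, pip's editable syntax needs the git+URL#egg= form:
#   pip install -e git+https://github.com/yourname/django-messages.git#egg=django-messages
```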

C++ Boost thread library pulls in the whole development environment

I am using boost-thread in my application. When I deploy it on a client machine (running Ubuntu 11.10), I need to make sure libboost_thread.so is available there. However, when I run "apt-get install libboost-thread1.46", it seems to pull in the whole development environment (libgcc, libboost1.46-dev, etc.). The client machine needs only the runtime environment, not the development one. Is there a better way to handle this?
The package "libboost-thread1.46" does not exist on Ubuntu; apt-get treats the name as a regular expression, and the development package matches it too. The two candidate packages are libboost-thread1.46-dev and libboost-thread1.46.1; the latter is the one you want. It depends only on three libraries (libgcc, libc, libstdc++), all of which you need to deploy anyway because your program and libboost_thread link against them.
So, deploy by installing libboost-thread1.46.1 and everything should be fine.
You can build individual Boost libraries yourself by downloading the Boost tarball and using the bjam build tool.
You could link statically against boost.
You can also use bcp and copy the necessary files into your own source tree. I personally have the headers installed on my system and just added the source files to my project (once.cpp, thread.cpp, timeconv.inl, tss_null.cpp on Linux).

Install gcc on linux with no root privilege

I have access to computer in a public library and I want to try out some C++ and maybe other code. Problem is that there is no g++ installed and I can't install it using packages because I have no root access. Is there a "smart" way to make a full environment for programming in a home folder?
I have gcc installed (I can compile C code). Also, I have a home folder that is consistent. I don't know where to find precompiled g++, I found only source but I don't know what to do with it. I tried to ask them to install this but it didn't work :)
If you want to install it as a local user, GNU GSRC provides an easy way to do so.
Link: http://www.gnu.org/software/gsrc/
After configuration, simply specify the following commands:
cd gsrc
make -C pkg/gnu/gcc
make -C pkg/gnu/gcc install
The second step can also be parallelised to speed things up on an N-core system:
make -C pkg/gnu/gcc MAKE_ARGS_PARALLEL="-jN"
You can run the configure script with the --prefix parameter: ../gcc-4.5.0/configure --prefix=/home/foo/bar. Since the C++ standard library built this way is very likely different from the one on your system, you have to set export LD_LIBRARY_PATH=/home/foo/bar/lib before you can start a program compiled by this compiler.
Once, a long time ago (1992 or so), I went through something similar to this when I bought a SCO system with no development environment. Bootstrapping it up to having a full development environment was a gigantic pain, and not at all easy. Having library header files or gcc on a system would make your job a whole lot easier.
It depends a lot on just how obnoxious the library has been about what kinds of things are installed. If there is no gcc there, your job becomes a bit harder. If there are no header files for glibc there, your job is a LOT harder.
Also, do you get an account on the system so you have a home folder that's consistent from login to login?
If you have no gcc there, you need to find a pre-compiled binary of gcc/g++ and install it somewhere. If you have no header files there, you need to find copies of those and put them on the system.
There is no 'standard' way of installing gcc in your home folder though. All of the solutions are going to have some manner of hand-rolling involved.
Have you asked the librarians if they can change what's installed because you want to learn a bit of programming and only have access to their computers to do it with? That might well be the easiest solution.
From your comment it seems you do have gcc, and if you can compile C code, you have the library header files. So now it's a matter of compiling your own version of g++. You could probably find a way to entice the system's package manager into installing a binary package somewhere other than a system folder, but that solution is less fun than compiling your own, and there may be subtle problems, since the installed package may expect to find things in particular places and not find them there.
The first thing to do is make sure you download the right source for the gcc package; the place to find it is the GNU mirror page. You want the gcc-4.5.0.tar.bz2 or gcc-4.5.0.tar.gz file on whichever mirror you choose. It will likely be in a gcc directory, in a gcc-4.5.0 sub-folder.
After downloading, untar it. In general you shouldn't build gcc in the folder you untar it into, so create a sibling folder to build it in, named e.g. gcc-build. Then the command you want is ../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages=c,c++.
gcc requires some other packages (e.g. GMP, MPFR and MPC) to be installed before it can compile itself. You can configure those with the same --prefix to install them in the same place. The gcc website has the list of prerequisites.
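Put together, the whole sequence looks roughly like this (the version number and mirror URL are illustrative, the download needs network access, and the build itself takes a long while):

```shell
wget https://ftp.gnu.org/gnu/gcc/gcc-4.5.0/gcc-4.5.0.tar.bz2
tar xjf gcc-4.5.0.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.5.0/configure --prefix=$HOME/.local --enable-languages=c,c++
make            # add -jN to parallelise on an N-core machine
make install
```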
$HOME/.local is sort of the standard accepted place for per-user installs of things.
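For a toolchain under $HOME/.local to actually get picked up, your shell needs to know about it; a typical addition to ~/.bashrc (the variable names are standard, the values assume the prefix above):

```shell
export PATH="$HOME/.local/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/.local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MANPATH="$HOME/.local/share/man:${MANPATH:-}"
```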
If you have fakeroot, you can use it to treat ~/some-path as the root filesystem and install packages there. Alternatively, you can set up a chroot environment to do the same.
Given this, you can then use dpkg -i package.deb to install the gcc package(s) on your system. You will need to download and install each package individually (e.g. from the debian website) -- at least binutils, glibc, linux-headers and gcc.
If you are on another system, you will need to get the right packages for that system and install them using the corresponding package manager.

Is there a C++ dependency index somewhere?

When trying new software and compiling with the classic ./configure, make, make install process, I frequently see something like:
error: ____.h: No such file or directory
Sometimes I get lucky and apt-get install ____ installs the missing piece and all is well. But that doesn't always happen, and I end up googling for the package that contains what I need. And sometimes the package is the wrong version or flavor, or clashes with another package I already have.
How do people know which packages contain which .h files or whatever resource the compiler needs? Is there a dependency resolver website or something that people use to decode failed builds to missing packages? Is there a more modern method of automatically downloading and installing transitive dependencies for a build (somewhat like Java's Maven)?
You can also use "auto-apt ./configure" (on Ubuntu, and probably also on Debian?) and it will attempt to download dependencies automatically.
If it's a package in Debian, you can use apt-get build-dep to get all deps.
Otherwise, read the README that comes with the program -- hopefully, it lists all the deps for that program.
The required packages will hopefully be listed in the documentation for building the package. If it says you require foo, you'll probably want to look for foo and foo-devel, and perhaps libfoo-devel. If that doesn't help, on Fedora I'd do something like
yum install /usr/include/_____.h
(yum will look up the package containing that file). If none of the above works, search for the file name in Google; that should tell you which package it comes from. But then the going will get rough...
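A distro-agnostic way to ask "which package owns this header" can be sketched as below. The helper name is made up, zlib.h stands in for whatever header the build complained about, and apt-file needs a one-off setup (apt-get install apt-file; apt-file update) before it is useful:

```shell
search_header() {
  # use whichever package-contents search tool this distro has
  if command -v apt-file >/dev/null 2>&1; then
    apt-file search "$1"                  # Debian/Ubuntu
  elif command -v dnf >/dev/null 2>&1; then
    dnf provides "*/$1"                   # Fedora
  elif command -v yum >/dev/null 2>&1; then
    yum provides "*/$1"                   # RHEL/CentOS
  else
    echo "no package-file search tool found" >&2
    return 1
  fi
}

search_header zlib.h || true   # || true: a missing tool shouldn't kill the shell
```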