This question already has answers here:
What's the opposite of 'make install', i.e. how do you uninstall a library in Linux?
I recently installed IT++, a C++ signal processing library, from http://itpp.sourceforge.net/4.3.1/index.html by downloading the zip file and running cmake, make, and make install.
I now want to completely undo the installation and re-install from scratch. This is a basic question, but how do I remove IT++ on Ubuntu? In general, what commands do I use to remove installed C/C++ libraries on Linux?
Thanks.
The libitpp-dev package is available in Ubuntu:
https://launchpad.net/ubuntu/+source/libitpp
Read carefully the post that Ben suggested as a duplicate in his comment, What's the opposite of 'make install', i.e. how do you uninstall a library in Linux?, which suggests a reversal akin to the
# make uninstall
that shengy suggested in his comment, to be run in the directory from which you originally installed (per bikram990).
Be sure to read the comments carefully to avoid common 'gotchas', such as accidentally removing dependencies that other packages rely on.
As stated in the answers to that post, the second option is to figure out what the install steps were and reverse them manually, using the
$ make -n install
dry run, which prints those steps without executing them. If you do have to do some pruning by hand, again, be wary of what you remove in case you accidentally break other packages in the process.
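To see how such a dry run behaves, here is a toy example (the Makefile, paths, and targets are hypothetical stand-ins, not IT++'s real build); `-n` prints each recipe without executing it:

```shell
# Build a toy Makefile in a temp dir (printf keeps the required tab characters intact)
dir=$(mktemp -d)
{
  printf 'PREFIX ?= /tmp/demo-prefix\n'
  printf 'install:\n\tmkdir -p $(PREFIX)/lib\n\tcp libdemo.a $(PREFIX)/lib/\n'
  printf 'uninstall:\n\trm -f $(PREFIX)/lib/libdemo.a\n'
} > "$dir/Makefile"

# -n (dry run) prints the commands a target would run, without running them
make -n -C "$dir" install    # shows the mkdir/cp steps you would reverse by hand
make -n -C "$dir" uninstall  # shows what an uninstall target would remove
```

Nothing is copied or deleted here; the printed commands are what you would audit or reverse manually.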
It is recommended to install the package via your package manager to avoid complications and problems such as this, especially if you're not 100% sure of what you're doing with cmake, or are at all unsure of how to proceed when it comes time to uninstall.
The package can then be installed with:
$ sudo apt-get install libitpp-dev
This very reason is a good one to stick with official repositories/packages, since the reversal is simply:
$ sudo apt-get remove libitpp-dev
Your package manager will handle the mundane details, such as dependency checking and updates, and will generally ensure that you do not break any other packages when installing or uninstalling.
Additionally, your official package may contain some Ubuntu-specific patches.
It's understandable to install packages manually when a particular package is not available through the official channels, but then you're at the mercy of the package authors, who may not have tested thoroughly on your particular system.
Good Luck.
After trying to build the gitg flatpak I noticed my /var/lib/flatpak/repo folder has become very large.
I'm assuming these are build files? Is there a good way to clean these up?
I'm using Flatpak 1.4.0.
For those landing here who aren't building anything: /var/lib/flatpak/repo is also where every Flatpak install ends up, and running upgrades doesn't clean it up. For --user installed packages it would be ~/.local/share/flatpak/.
Discovered that answer on this post.
flatpak uninstall --unused
Before
[root@laptop flatpak]# du -sh .
8.4G .
After
[root@laptop flatpak]# du -sh .
4.3G .
/var/lib/flatpak/ and ~/.local/share/flatpak/ are the system and user install locations for installed runtimes (e.g. org.gnome.Platform) and applications (e.g. org.gnome.gitg). The repo/ directory is where all the checksummed binary files are stored; it's like a git repo. They are not build files, unless you count the downloaded org.gnome.Sdk//master runtime, which would be installed here. But the SDK is shared and not specific to gitg.
If you built with flatpak-builder, the build files would be in a folder called .flatpak-builder plus the build folder (whatever you called it). So if you ran the following in a directory like ~/gitg-build-folder/:
flatpak-builder --force-clean --repo=gitg-repo build org.gnome.gitgDevel.json
Delete ~/gitg-build-folder/build and ~/gitg-build-folder/.flatpak-builder to remove any build files produced while building gitg.
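A sketch of that cleanup, using a throwaway directory as a stand-in for ~/gitg-build-folder/ so you can see the effect without touching a real build:

```shell
# Stand-in for ~/gitg-build-folder/ containing leftover build artifacts
build_root=$(mktemp -d)
mkdir -p "$build_root/build" "$build_root/.flatpak-builder/cache"

# Removing these deletes only build artifacts; installed flatpaks under
# /var/lib/flatpak (or ~/.local/share/flatpak) are not affected
rm -rf "$build_root/build" "$build_root/.flatpak-builder"
```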
If you don't need to build anything in the future you could delete org.gnome.Sdk//master; however, a lot of its files are de-duplicated because org.gnome.Platform is also installed. You might also have the *.Debug SDK extension installed, which takes a lot of space.
Answer from duplicate question on Flatpak GitHub:
https://github.com/flatpak/flatpak/issues/2945#issuecomment-499498706
Fixed my greedy Flatpak problem, for what it's worth: I managed to clean about 20 GB of garbage (/var/lib/flatpak/repo/objects), a bunch of tiny files. I started by uninstalling all the applications I had installed there, but it didn't make much difference.
Without applications and with only runtimes left, usage was still the same. I then used the flatpak uninstall --unused command, which removes runtimes and extensions not used by installed applications (I had no applications left, so everything was removed). Despite this, there was no big difference on the hard drive.
Finally, the command sudo flatpak repair, which is to fix inconsistencies, is what cleared almost 20 GB.
I had previously tried it without success. I guess by deleting the apps, Flatpak just became aware of that garbage.
Although I don't need them anymore, because I installed them directly on the system, I reinstalled the Flatpak applications I had (curious to see what would happen); everything works fine and takes up only about 1 GB.
My Flatpak version: 1.10.7
I maintain the PPA for Bookworm here:
https://launchpad.net/bookworm
Recently I changed the package name from "bookworm" to "com.github.babluboy.bookworm", based on the RDNN requirements of the Elementary OS AppStore.
This requires that installation on Ubuntu is done with "sudo apt-get install com.github.babluboy.bookworm" instead of "sudo apt-get install bookworm".
While I have signposted this on Launchpad and the Bookworm website, there are lots of older posts and blogs on the internet telling users to run "sudo apt-get install bookworm". This installs an old package (still in the PPA) that I no longer update.
Is there a way to set things up in Launchpad so that the older package automatically points to the new one for installation?
A hack I can think of is to update the old package so that a big banner in the app instructs users to switch to the new package. But I thought I'd ask here whether there is a more elegant way to manage package name changes in a PPA.
What you need is a transitional package with the old name. This will be an empty package with no actual contents, which has the new package as a dependency. When people update/install the bookworm package, it will be installed, and will pull the new package as a dependency. A future version of the new package can declare the old one as a conflict, and remove it while updating.
Debian Wiki has the information you need in much more detail. For a number of package transition scenarios, see this:
https://wiki.debian.org/PackageTransition
Case #5 : Rename is what you need from there. The exact page you want is this
https://wiki.debian.org/RenamingPackages
There are other methods explained on that page, like the 'Clean Slate method', but the 'Transition package method' is much cleaner and is the recommended one. (If you search apt for 'transitional package', you'll find a lot of them.)
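For illustration, the two stanzas in debian/control might look like this (the field values are a sketch for the bookworm case, not taken from the actual packaging):

```
Package: com.github.babluboy.bookworm
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: ebook reader (RDNN package name)
 The real application, shipped under its new name.

Package: bookworm
Architecture: all
Section: oldlibs
Priority: optional
Depends: com.github.babluboy.bookworm
Description: transitional dummy package
 Empty package that exists only to pull in
 com.github.babluboy.bookworm; it can be removed
 once the new package is installed.
```

Users who run sudo apt-get install bookworm then get the new package as a dependency, and a later release of the new package can declare Conflicts/Replaces on the old name to remove the empty shell.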
I am tearing my hair out trying to install Spatialite for GeoDjango!
I am already using Homebrew; it's generally easy and convenient, so I initially tried to follow the Homebrew instructions for GeoDjango.
But this stops short of installing any database, i.e. Spatialite. The next step is to try and install Spatialite itself, but there are no Homebrew-specific instructions provided by Django docs.
I found this tutorial which looks perfect - a Homebrew and virtualenv-friendly install of Spatialite for GeoDjango.
But it doesn't work... it appears that my pysqlite is linked against the non-spatial-enabled version of SQLite that ships with OS X, rather than the spatially-enabled one I installed from Homebrew. I get this error when Django tries to connect to the db:
"The pysqlite library does not support C extension loading. Both SQLite and pysqlite must be configured to allow the loading of extensions to use SpatiaLite."
The author of pysqlite hasn't responded to my pleas for help on Github and I haven't found anything via Google.
So I went back to the drawing board and decided to follow the "Mac OS X-specific instructions" in the GeoDjango docs... by installing the various geo libs from the KyngChaos binary packages.
The docs say "Install the packages in the order they are listed above" but I found I couldn't install UnixImageIO without installing PROJ first. The link in the docs to download Spatialite binaries (http://www.gaia-gis.it/spatialite-2.3.1/binaries.html) is broken so I used the "Spatialite Tools v4.1" from KyngChaos instead.
Proceeding to the next step I get this error:
$ spatialite geodjango.db "SELECT InitSpatialMetaData();"
SQLite header and source version mismatch
2013-10-17 12:57:35 c78be6d786c19073b3a6730dfe3fb1be54f5657a
2013-09-03 17:11:13 7dd4968f235d6e1ca9547cda9cf3bd570e1609ef
Not really sure what's wrong at this point.
There is someone else here on SO who went the KyngChaos route and ended up with the same "Both SQLite and pysqlite must be configured to allow the loading of extensions" error I got via the Homebrew route.
I found this ticket #17756 for adding pyspatialite support to Django - pyspatialite is supposed to be an easier way to pip install everything but unfortunately it doesn't work either (see comments towards bottom of ticket).
I'm a bit reluctant to start building everything from source by hand, as it seems likely I'll just run into the same problems again, after spending hours Googling cryptic compiler errors, magic flags, and paths along the way.
I'm about ready to give up and just use Postgres/PostGIS.
I was able to get this working now, using the tip here:
https://github.com/ghaering/pysqlite/issues/60#issuecomment-50345210
I'm not sure whether it was using the real paths that fixed it, or whether the Homebrew kegs or underlying packages have been updated and now install cleanly. Either way, it works now.
I reproduce below the steps I took:
brew update
brew install sqlite # 3.8.5
brew install libspatialite # 4.2.0
brew install spatialite-tools # 4.1.1
git clone https://github.com/ghaering/pysqlite.git
cd pysqlite
(Where brew reported I had existing versions, I unlinked them and installed the latest, as noted above.)
Then I edited setup.cfg to comment out the define=SQLITE_OMIT_LOAD_EXTENSION line and specify the paths:
include_dirs=/usr/local/opt/sqlite/include
library_dirs=/usr/local/opt/sqlite/lib
activated the virtualenv where I want it installed, then
python setup.py build
python setup.py install
(build_static still fails with clang: error: no such file or directory: 'sqlite3.c')
(maybe I should have done pip install . as suggested in the github issue)
now the spatialite geodjango.db "SELECT InitSpatialMetaData();" succeeds, albeit with an ignorable error:
InitSpatiaMetaData() error:"table spatial_ref_sys already exists"
i.e. it's probably not even necessary to run that command
When I was installing this, I followed these instructions: https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/spatialite/#pysqlite2
pysqlite2
If you’ve decided to use a newer version of pysqlite2 instead of the sqlite3 Python stdlib module, then you need to make sure it can load external extensions (i.e. the required enable_load_extension method is available so SpatiaLite can be loaded).
This might involve building it yourself. For this, download pysqlite2 2.6, and untar:
$ wget https://pypi.python.org/packages/source/p/pysqlite/pysqlite-2.6.3.tar.gz
$ tar xzf pysqlite-2.6.3.tar.gz
$ cd pysqlite-2.6.3
Next, use a text editor (e.g., emacs or vi) to edit the setup.cfg file to look like the following:
[build_ext]
#define=
include_dirs=/usr/local/include
library_dirs=/usr/local/lib
libraries=sqlite3
#define=SQLITE_OMIT_LOAD_EXTENSION
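After building, you can sanity-check whether the resulting module supports extension loading. The sketch below uses the stdlib sqlite3 module, which exposes the same DB-API; to test the pysqlite2 build itself, swap the import for `from pysqlite2 import dbapi2 as sqlite3`:

```python
import sqlite3  # or: from pysqlite2 import dbapi2 as sqlite3

con = sqlite3.connect(":memory:")
# A build compiled with SQLITE_OMIT_LOAD_EXTENSION simply lacks this method,
# which is exactly what triggers the SpatiaLite error above
supports_extensions = hasattr(con, "enable_load_extension")
print("extension loading available:", supports_extensions)
con.close()
```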
I had the same error: SQLite header and source version mismatch.
For me it was enough to update libsqlite3-dev.
After that, invoking $ spatialite geo.db "SELECT InitSpatialMetaData();" created the database properly.
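When diagnosing this kind of header/library mismatch, it also helps to see which SQLite version your Python bindings are actually linked against; a quick check using the stdlib module:

```python
import sqlite3

# Version of the SQLite C library the module is linked against at runtime;
# compare this with the version of the headers you compiled against
print(sqlite3.sqlite_version)       # dotted string, e.g. "3.31.1"
print(sqlite3.sqlite_version_info)  # the same version as a tuple of ints
```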
I have OpenSSL 0.9.8g installed on my computer...
It seems that it has a known bug which I ran into.
I wanted to install the current version, 1.0.0d, which seems to have fixed the bug, so I did a basic install:
$ ./config
$ make
$ sudo make install
However, even after recompiling my software I still get the same error, and it is definitely coming from 0.9.8, since the error message says:
error:1408F06B:SSL
routines:SSL3_GET_RECORD:bad
decompression:/SourceCache/OpenSSL098/OpenSSL098-35/src/ssl/s3_pkt.c:438:
Let's assume you installed your downloaded version of OpenSSL to /home/yourname/openssl. You then need to tell your software to use that custom install instead of the pre-packaged 0.9.8 that already resides on your file system. There's no need to uninstall the old one; you can have several installations on your machine. "Telling" your software where to find the custom installation means providing the linker with the correct paths to libssl and libcrypto. Add these to the linking options in your Makefile:
-L/home/yourname/openssl/lib -Wl,-R/home/yourname/openssl/lib
Then it should link against the new version just fine. To verify it did, you can use
ldd <your_executable_or_library>
and verify that the custom OpenSSL path is listed there and not the old one.
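For illustration, the relevant Makefile fragment might look like this (/home/yourname/openssl is the example prefix from above, and myapp is a placeholder target name):

```make
OPENSSL := /home/yourname/openssl

# Headers from the custom install
CFLAGS  += -I$(OPENSSL)/include
# -L: where the linker finds libssl/libcrypto at link time
# -Wl,-R: embed the same directory as a runtime search path (rpath)
LDFLAGS += -L$(OPENSSL)/lib -Wl,-R$(OPENSSL)/lib
LDLIBS  += -lssl -lcrypto

myapp: myapp.o
	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)
```

The -Wl,-R (rpath) part is what makes the runtime loader pick the custom libraries, which is what ldd will then confirm.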
I'm not sure which OS you are using, but my guess is that you first have to remove the erroneous old version before you move on to installing the newer one. Some OSes don't put libraries installed with sudo make install in the same place as the package manager does. Also, the library lookup order might cause the older one to be loaded.
Last Friday, I've built an RPM spec for my Django project. The RPM creates a virtualenv, downloads dependencies via pip and puts everything into the packages. Today, I've found out that BeautifulSoup 3.2 has been released. Luckily, I've had my BeautifulSoup version pinned in the requirements.txt, so I found out because of the build failing.
Now a completely different matter: how do I avoid upgrading stuff in the future? BeautifulSoup has deleted all previous versions from PyPI, so I can't download the version I actually tested against. pip's download cache doesn't help here either, since pip always checks PyPI first.
Can you recommend something to avoid this situation?
First, this is an unusual situation. I've never seen another package remove all old releases the way BeautifulSoup does. I consider that rather user-hostile behavior, except perhaps in cases of a serious security fix.
That said, if you want a reliable build process using pip, you really need to mirror all the packages you rely on locally. It's not hard to do: use pip's --download option (or your existing pip cache) to collect all the package tarballs, dump them in an indexed, web-served directory, and use --find-links in your requirements file to point pip there (plus --no-index to tell it not to use PyPI).
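As a sketch, the requirements file for such a mirrored setup might look like this (the mirror path and pinned version are illustrative):

```
# Resolve everything from the local mirror; never contact PyPI
--no-index
--find-links=file:///srv/pip-mirror

BeautifulSoup==3.0.8
```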
The files in question can still be found; just provide the direct URL instead of the package name:
http://www.crummy.com/software/BeautifulSoup/download/3.x/3.0.8.tar.gz
for example.