CUDA missing libGL.so libGLU.so and libX11.so - opengl

This is the standard problem that people have been running across, but I can't get it to work. I'm on Linux Mint 17.3 and did the install via repo. When I try to compile the 5_Simulations directory (really, fluidsGL), I get the following errors:
>>> WARNING - libGL.so not found, refer to CUDA Getting Started Guide for how to find and install them. <<<
>>> WARNING - libGLU.so not found, refer to CUDA Getting Started Guide for how to find and install them. <<<
>>> WARNING - libX11.so not found, refer to CUDA Getting Started Guide for how to find and install them. <<<
However, these do exist on the system, for example:
[name@host: fluidsGL]$ locate libGL.so
/usr/lib/i386-linux-gnu/mesa/libGL.so.1
/usr/lib/i386-linux-gnu/mesa/libGL.so.1.2.0
/usr/lib/nvidia-352/libGL.so
/usr/lib/nvidia-352/libGL.so.1
/usr/lib/nvidia-352/libGL.so.352.68
/usr/lib/x86_64-linux-gnu/libGL.so
/usr/lib/x86_64-linux-gnu/mesa/libGL.so
/usr/lib/x86_64-linux-gnu/mesa/libGL.so.1
/usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0
/usr/lib32/nvidia-352/libGL.so
/usr/lib32/nvidia-352/libGL.so.1
/usr/lib32/nvidia-352/libGL.so.352.6
Even symlinking to /usr/lib/libGL.so with the nvidia-352 version doesn't work. Has anybody had this particular issue? I'm trying not to screw up the computer, as I've had issues with drivers suddenly not working when I start messing with this kind of stuff.

Linux Mint is not an officially supported distro for CUDA. So it's possible that the CUDA install method (the driver install portion, in this case) you are using is placing the necessary GL libraries in a place that the makefile is not equipped to find.
If you study the findgllib.mk makefile "helper" file in the build directory, I suspect a Debian-based distribution would follow the UBUNTU path in that .mk file. For the non-ppc and non-arm branches, you will find definitions like this:
ifeq ("$(UBUNTU)","0")
ifeq ...
...
else
GLPATH ?= /usr/lib/$(UBUNTU_PKG_NAME)
GLLINK ?= -L/usr/lib/$(UBUNTU_PKG_NAME)
DFLT_PATH ?= /usr/lib
Given that:
- you have stated that the GL libraries seem to be installed,
- you've symlinked those libraries into the /usr/lib directory, and
- the GLPATH definition in the .mk file is a "non-override" definition (i.e. ?=),
we can "override" or replace the GLPATH definition concocted by the makefile with the "known good" one of /usr/lib with:
GLPATH=/usr/lib
prepended to your make command.
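For example (run from the sample's directory; the path below is illustrative):
cd 5_Simulations/fluidsGL    # from wherever your samples tree lives
GLPATH=/usr/lib make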

I had the same problem and tried everything, including reinstalling the driver. When I looked at the makefile (findgllib.mk), I saw that a graphics driver version is specified there and that it checks the OS distribution name (Ubuntu, Fedora, etc.). I was using Zorin, so it wasn't able to find the paths assigned to the variables. After minor changes it ran successfully. I hope this helps.
The changes were:
# whatever version you have
UBUNTU_PKG_NAME = "nvidia-375"
# add the name of your distro to this list
ifeq (,$(filter $(DISTRO),ubuntu zorin fedora red rhel centos suse))
DISTRO =
endif
# add a detection line for your specific distro
ZORIN = $(shell echo $(DISTRO) | grep -i zorin >/dev/null 2>&1; echo $$?)
# copy the block the file already has for Ubuntu and adapt it for your distro if required
ifeq ("$(ZORIN)","0")
ifeq ($(HOST_ARCH)-$(TARGET_ARCH),x86_64-armv7l)
GLPATH := /usr/arm-linux-gnueabihf/lib
GLLINK := -L/usr/arm-linux-gnueabihf/lib
ifneq ($(TARGET_FS),)
GLPATH += $(TARGET_FS)/usr/lib/$(UBUNTU_PKG_NAME)
GLPATH += $(TARGET_FS)/usr/lib/arm-linux-gnueabihf
GLLINK += -L$(TARGET_FS)/usr/lib/$(UBUNTU_PKG_NAME)
GLLINK += -L$(TARGET_FS)/usr/lib/arm-linux-gnueabihf
endif
else ifeq ($(HOST_ARCH)-$(TARGET_ARCH),x86_64-ppc64le)
GLPATH := /usr/powerpc64le-linux-gnu/lib
GLLINK := -L/usr/powerpc64le-linux-gnu/lib
else
GLPATH ?= /usr/lib/$(UBUNTU_PKG_NAME)
GLLINK ?= -L/usr/lib/$(UBUNTU_PKG_NAME)
DFLT_PATH ?= /usr/lib
endif
endif
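For reference, findgllib.mk typically derives DISTRO from lsb_release (lower-cased); to see exactly which string you need to add to the filter list, you can run something like (assuming lsb_release is installed):
lsb_release -i -s 2>/dev/null | tr '[:upper:]' '[:lower:]'
# if this prints e.g. "zorin" and that string is not in the filter list, add it as shown above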

For plain Debian you might want to use the following:
…
SUSE = $(shell echo $(DISTRO) | grep -i 'suse\|sles' >/dev/null 2>&1; echo $$?)
DEBIAN = $(shell echo $(DISTRO) | grep -i debian >/dev/null 2>&1; echo $$?)
ifeq ("$(UBUNTU)","0")
…
…
ifeq ("$(CENTOS)","0")
GLPATH ?= /usr/lib64/nvidia
GLLINK ?= -L/usr/lib64/nvidia
DFLT_PATH ?= /usr/lib64
endif
ifeq ("$(DEBIAN)","0")
GLPATH ?= /usr/lib/x86_64-linux-gnu
GLLINK ?= -L/usr/lib/x86_64-linux-gnu
DFLT_PATH ?= /usr/lib64
endif
# find libGL, libGLU
…
in your cuda-samples/common/findgllib.mk, then enter cuda-samples and execute
for f in $(find ?_* -name findgllib.mk); do cp -bv common/findgllib.mk $f; done
to use that file for every GL sample.

Related

Qt Linux to Windows transition: Error in macro substitution

I have a Qt project (Version 5.14.2), which is building just fine under Linux. Now I would like to provide it on Windows as well. However, I have some trouble getting it built. The following error is thrown:
Error: Cannot find = after : in macro substitution.
followed by a line number in the makefile. When I go to that line, I find this command:
443 {C:\Users\Alex\Documents\GitHub\control-station\src\aircraft}.cpp{obj\}.obj::
444 $(CXX) -c $(CXXFLAGS) $(INCPATH) -Foobj\ @<<
445 $<
446 <<
I have no prior experience with Windows, so this error leaves me clueless. There is another error following:
Kit Desktop Qt 5.14.2 MSVC2017 64bit has configuration problems.
It looks like this follows from the prior one, but I am not sure. Do you have any suggestions on what to check? It seems to be a macro error, but I don't know where to start looking.
Never mind, I found a solution. The problem was buried somewhere else entirely in the code. I was using some git shell commands:
exists ($$PWD/.git) {
GIT_DESCRIBE = $(shell git --git-dir $$PWD/.git --work-tree $$PWD describe --always --tags)
GIT_BRANCH = $(shell git --git-dir $$PWD/.git --work-tree $$PWD rev-parse --abbrev-ref HEAD)
GIT_HASH = $(shell git --git-dir $$PWD/.git --work-tree $$PWD rev-parse --short HEAD)
GIT_TIME = $(shell git --git-dir $$PWD/.git --work-tree $$PWD show --oneline --format=\"%ci\" -s HEAD)
}
But I didn't have the git shell installed properly. This propagated to the makefile in some way and caused the error.

Error loading shared libraries after cross compiling: No such file or directory

I'm having problems loading shared libraries after cross-compiling my C++ code using Docker Buildx, with a Raspberry Pi Zero W as the target.
After I perform the build, I copy the generated binary to a Raspberry Pi Zero already running and with OpenCV4 installed.
When I run the executable, the following error message is shown:
pi@raspberrypi:/mnt/system/$ ./software.run
./software.run: error while loading shared libraries: libopencv_freetype.so.4.2: cannot open shared object file: No such file or directory
Despite OpenCV4 already being installed, this particular lib wasn't in /usr/lib. So I copied it there and ran sudo ldconfig, but even after this procedure my software still cannot find the lib.
I even added /usr/lib to the system library path, but it didn't work.
pi@raspberrypi:/usr/lib $ sudo ldconfig -v | grep libopencv_free
ldconfig: Can't stat /usr/local/lib/arm-linux-gnueabihf: No such file or directory
ldconfig: Path `/lib/arm-linux-gnueabihf' given more than once
ldconfig: Path `/usr/lib/arm-linux-gnueabihf' given more than once
ldconfig: /lib/arm-linux-gnueabihf/ld-2.28.so is the dynamic linker, ignoring
ldconfig: /lib/ld-linux.so.3 is the dynamic linker, ignoring
libopencv_freetype.so.4.2 -> libopencv_freetype.so.4.2.0
pi@raspberrypi:/mnt/system/ $ file software.run
software.run: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux.so.3, for GNU/Linux 3.2.0, BuildID[sha1]=409a24c10761e2b9fd7a310cddfb09c86fb3a207, not stripped
pi@raspberrypi:/mnt/system $ echo $LD_LIBRARY_PATH
/usr/lib
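For completeness, the checks I can run on the Pi to see which libraries the binary asks for and which ones the loader fails to resolve:
readelf -d ./software.run | grep NEEDED     # direct dependencies recorded in the binary
ldd ./software.run | grep 'not found'       # dependencies the loader cannot resolve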
Makefile:
CC = g++
STD = --std=c++14
SOFTWARE_SRC = $(wildcard src/software/*.cpp)
SOFTWARE_BIN = software.run
CV_LIBS = $(shell pkg-config --cflags --libs opencv4)
SOFTWARE_INC = -Iinclude -I/usr/include -I/usr/local/include
SOFTWARE_LDFLAGS = -lraspicam_cv -L/opt/vc/lib -lmmal -lmmal_core -lmmal_util -lwiringPi
all: software-out
software-out:
	$(CC) $(STD) $(SOFTWARE_SRC) -o $(SOFTWARE_BIN) $(CV_LIBS) $(SOFTWARE_INC) $(SOFTWARE_LDFLAGS)
Other software that also uses OpenCV is working properly.
I also built a "Hello World" program just to validate my cross-compiling environment, and it works.
Thank you all in advance
EDIT
After several attempts, I was able to fix the problem by building the libs locally.
I couldn't identify what caused it, but the libs generated by the Buildx environment weren't working properly on the Raspberry Pi Zero.
I'm now building a proper cross-compiling environment to address this issue.
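If someone runs into the same thing, one check that might help narrow it down (just a sketch, not a confirmed diagnosis) is to compare the ABI attributes of a cross-built library against one of the Pi's native ones:
file /usr/lib/arm-linux-gnueabihf/libopencv_core.so.4.2.0            # path is illustrative
readelf -A /usr/lib/arm-linux-gnueabihf/libopencv_core.so.4.2.0 | grep -E 'Tag_CPU_arch|Tag_ABI_VFP_args'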

trouble installing old 2005 BOOST library

Good day everyone
I'm fairly new to C programming on Ubuntu, although I'm rather experienced in C programming on Windows.
I have recently come across some code written in 2005 which I'm interested in learning how it works.
That code needs the Boost library to compile; however, it won't compile with the newest Boost version present on my Ubuntu 12.04.
I set the gcc compiler to be lenient so that it ignores all those error messages. The code did compile and run afterwards.
However, when I used the GDB debugger to watch how the program flows, I noticed that there are likely errors in the way the program runs due to using a different Boost version rather than its original one. Hence I would like to install the Boost version corresponding to the code I downloaded.
To do that, I installed Ubuntu 5.04 and Boost 1.33.0, which seems to have been released in late 2005. I downloaded it, but I didn't find any detailed instructions on how to install it, only a vague description of using Boost Jam; I played around with Boost Jam for quite a while without success.
This old Boost also cannot be installed with a "sudo apt-install boost-dev" style command.
Thus I would like to ask if anyone can give an easy-to-understand, step-by-step set of instructions on how to install the Boost library downloaded from the above link.
like.....
step1: download boost jam from boost webpage
step2: unpack it in home/boost/ then type make configure
...and so on...
Big thanks for any useful info.
New content appended here in response to the comments given.
Hi, I went through the info given by your link and managed to run the Boost library examples given there.
That is, I can compile a single cpp file with the command
g++ -I boost_1_33_0 test.cpp -o test
(I'm keeping the Boost library and the cpp file to be compiled in the same folder.)
However, the program package I'm interested in is built with make (not cmake). I have some experience writing cmake files but not make files. And I do not see any reference to the Boost library in the makefile of the program package. The readme file only has one sentence saying that I need to have Boost installed, without explaining what that means.
I assume it means that either I have to build and "make install" Boost, or I could add some lines in the makefile to point to it. I thought maybe you can quickly point out what's missing in the makefile.
The readme file:
To compile, go into the moses directory and do 'make'. You'll need the
latest boost libraries. If compilation still fails for weird reasons,
you could try g++ with the -fpermissive (newer versions reject lots of
code that was ok with older ones). If you are going to be making
changes and recompiling frequently you'll probably want to disable -O3
in the makefile (I use templates liberally, so -O3 really speeds up
the code, but really slows down compilation).
And the makefile:
CC = g++
PROJ_NAME = moses
LINK_FLAGS = -Wall -Iutils/ -Itrees/ -Irewrite -I./ -Imodeling/ -Ifitness/ \
-Ialignment/ -Isim/ -Ilocal/ -O3
COMP_FLAGS = -Wall -Wno-sign-compare -Iutils/ -Itrees/ -Irewrite -I./ \
-Imodeling/ -Ifitness/ -Ialignment/ -Isim/ -Ilocal/ -O3
src := $(wildcard *.cc) $(wildcard utils/*.cc) $(wildcard trees/*.cc) $(wildcard modeling/*.cc) $(wildcard fitness/*.cc) $(wildcard alignment/*.cc) $(wildcard main/*.cc) $(wildcard rewrite/*.cc) $(wildcard sim/*.cc) $(wildcard local/*.cc)
obj := $(patsubst %.cc,%.o,$(src))
all: $(PROJ_NAME)
%.o: %.cc
	$(CC) $(COMP_FLAGS) $< -c -o $@
$(PROJ_NAME): $(obj)
	$(CC) $(LINK_FLAGS) $^ -o $(PROJ_NAME)
run:
	$(PROJ_NAME)
clean:
	find -regex ".*~\|.*\.o"|xargs rm -f
	rm -f $(PROJ_NAME)
	rm -f $(PROJ_NAME).exe*
depend:
	makedepend -Y -- $(COMP_FLAGS) -- $(src)
utils/exceptions.o: utils/exceptions.h utils/utils.h
utils/io_util.o: utils/io_util.h utils/tree.h utils/basic_types.h
# ......lots more lines like that.........
I have an old set of instructions lying around here for Boost 1.34.1, which reads like this (project-specific stuff cut away):
unpack boost sources
cd into tools/jam/src
run ./build.sh to build bjam
cd into the main source directory
tools/jam/src/bin.linux/bjam threading=multi --layout=system --toolset=gcc --without-python variant=release --prefix=/usr/local install
The --without-python was necessary as the target system didn't have Python installed, which caused the build to fail messily.
Obviously you can / need to fiddle with the individual settings (like threading support, release vs. debug variant) to suit your needs, but it should be a good starting point.
If you need ICU support (for Boost.Regex and Boost.Locale), it gets more complicated...
Note that the build process has changed over the years; you shouldn't use the same procedure for more up-to-date boost versions. It's just what I used back then.
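Collected into a single shell sketch (tarball name and install prefix are placeholders; the bin.linux* glob covers the platform-specific directory that build.sh creates):
tar -xjf boost_1_34_1.tar.bz2            # or whichever version you downloaded
cd boost_1_34_1
( cd tools/jam/src && ./build.sh )       # builds bjam
tools/jam/src/bin.linux*/bjam threading=multi --layout=system --toolset=gcc \
    --without-python variant=release --prefix=/usr/local install    # may need sudo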
Edit:
As for the second part of your question, the Makefile doesn't need to refer to Boost explicitly if boost is installed in the standard system directories.
You do not have to state -I /usr/include for compilation as that is searched automatically; the same goes for -L /usr/lib during linkage.
The fact that the author of the Makefile copied the compiler flags into the linker flags verbatim doesn't really help readability either... ;-)
If you have Boost in a custom directory (either the headers only, or by stating a custom directory in the --prefix option of my build instructions), you need to make the following modifications (look for "boost"):
LINK_FLAGS = -Wall -Iutils/ -Itrees/ -Irewrite -I./ -Imodeling/ -Ifitness/ \
-Ialignment/ -Isim/ -Ilocal/ -L /path/to/boost/libs -O3
COMP_FLAGS = -Wall -Wno-sign-compare -Iutils/ -Itrees/ -Irewrite -I./ \
-Imodeling/ -Ifitness/ -Ialignment/ -Isim/ -Ilocal/ \
-I /path/to/boost/includes -O3
That should do the trick. As the Makefile does not link any of the Boost binaries (e.g. -l boost_program_options or somesuch), it seems that it makes use of the Boost headers only, which would make the -L /path/to/boost/libs part (and, actually, the whole compilation step detailed above) superfluous. You should be able to get away with simply unpacking the sources and giving the header directory as additional include directory using -I /path/to/boost/headers.
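If that's the case, a minimal header-only sketch would look like this (paths are placeholders; a variable given on the make command line overrides the assignment in the Makefile):
tar -xjf boost_1_33_0.tar.bz2 -C "$HOME"
cd moses
make COMP_FLAGS="-Wall -Wno-sign-compare -Iutils/ -Itrees/ -Irewrite -I./ -Imodeling/ -Ifitness/ -Ialignment/ -Isim/ -Ilocal/ -I$HOME/boost_1_33_0 -O3"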

How can I set rpath on gcc binaries during bootstrap?

I am trying to build gcc 4.7.2 using a custom prefix $PREFIX
I have built and installed all the prerequisites into my prefix location, and then successfully configured, built and installed gcc.
The problem that I now have is that $PREFIX is not in the library search path, and therefore the shared libraries cannot be found.
$PREFIX/bin $ ./g++ ~/main.cpp
$PREFIX/libexec/gcc/x86_64-suse-linux/4.7.2/cc1plus: \
error while loading shared libraries: \
libcloog-isl.so.1: \
cannot open shared object file: No such file or directory
What works, but isn't ideal
If I export LD_LIBRARY_PATH=$PREFIX/lib then it works, but I'm looking for something which works without having to set environment variables.
If I use patchelf to set the RPATH on all the gcc binaries then it also works; however, this involves searching out all the ELF binaries and iterating over them calling patchelf, and I would rather have something more permanent.
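Roughly, that loop looks like this (just a sketch of what I'm doing now, assuming file and patchelf are on the PATH):
find "$PREFIX" -type f | while read -r f; do
    if file "$f" | grep -q 'ELF.*dynamically linked'; then
        patchelf --set-rpath "$PREFIX/lib" "$f"
    fi
done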
What I think would be ideal for my purposes
So I'm hoping there is a way to have -Wl,-rpath,$PREFIX/lib passed to make during the build process.
Since I know the paths won't need to be changed this seems like the most robust solution, and can also be used when we build the next gcc version.
Is configuring the build process to hard code the RPATH possible?
What I have tried, but doesn't work
Setting LDFLAGS_FOR_TARGET prior to calling configure:
All of these fail:
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib -R$PREFIX/lib"
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib"
export LDFLAGS_FOR_TARGET="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"
Setting LDFLAGS prior to calling configure:
export LDFLAGS="-L$PREFIX/lib -Wl,-rpath,$PREFIX/lib"
In any event I worry that these will override any of the LDFLAGS gcc would have had, so I'm not sure these are a viable option even if they could be made to work?
My configure line
For completeness here is the line I pass to configure:
./configure \
--prefix=$PREFIX \
--build=x86_64-suse-linux \
--with-pkgversion='SIG build 12/10/2012' \
--disable-multilib \
--enable-cloog-backend=isl \
--with-mpc=$PREFIX \
--with-mpfr=$PREFIX \
--with-gmp=$PREFIX \
--with-cloog=$PREFIX \
--with-ppl=$PREFIX \
--with-gxx-include-dir=$PREFIX/include/c++/4.7.2
I've found that copying the source directories for gmp, mpfr, mpc, isl, cloog, etc. into the top level gcc source directory (or using symbolic links with the same name) works everywhere. This is in fact the preferred way.
You need to copy (or link) those source directories to names without the version numbers for this to work.
The compilers do not need LD_LIBRARY_PATH (although running applications built with the compilers will need an LD_LIBRARY_PATH to the $PREFIX/lib64 or something like that - but that's different)
Start in a source directory where you'll keep all your sources.
In this source directory you have your gcc directory either by unpacking a tarball or svn...
I use subversion.
Also in this top level directory you have, say, the following source tarballs:
gmp-5.1.0.tar.bz2
mpfr-3.1.1.tar.bz2
mpc-1.0.1.tar.gz
isl-0.11.1.tar.bz2
cloog-0.18.0.tar.gz
I just download these and update to the latest tarballs periodically.
In script form:
# Either:
svn checkout svn://gcc.gnu.org/svn/gcc/trunk gcc_work
# Or:
bunzip2 -c gcc-4.8.0.tar.bz2 | tar -xvf -
mv gcc-4.8.0 gcc_work
# Uncompress sources. (This will produce version-numbered directories.)
bunzip2 -c gmp-5.1.0.tar.bz2 | tar -xvf -
bunzip2 -c mpfr-3.1.1.tar.bz2 | tar -xvf -
gunzip -c mpc-1.0.1.tar.gz | tar -xvf -
bunzip2 -c isl-0.11.1.tar.bz2 | tar -xvf -
gunzip -c cloog-0.18.0.tar.gz | tar -xvf -
# Link outside source directories into the top level gcc directory.
cd gcc_work
ln -s ../gmp-5.1.0 gmp
ln -s ../mpfr-3.1.1 mpfr
ln -s ../mpc-1.0.1 mpc
ln -s ../isl-0.11.1 isl
ln -s ../cloog-0.18.0 cloog
# Get out of the gcc working directory and create a build directory. I call mine obj_work.
# I configure the gcc binary and other outputs to be bin_work in the top level directory. Your choice. But I have this:
# home/ed/projects
# home/ed/projects/gcc_work
# home/ed/projects/obj_work
# home/ed/projects/bin_work
# home/ed/projects/gmp-5.1.0
# home/ed/projects/mpfr-3.1.1
# home/ed/projects/mpc-1.0.1
# home/ed/projects/isl-0.11.1
# home/ed/projects/cloog-0.18.0
mkdir obj_work
cd obj_work
../gcc_work/configure --prefix=$HOME/projects/bin_work <other options>   # configure needs an absolute prefix
# Your <other options> shouldn't need to involve anything about gmp, mpfr, mpc, isl, cloog.
# The gcc build system will find the directories you linked,
# then configure and compile the needed libraries with the necessary flags and such.
# Good luck.
I've been using this configure option with gcc-4.8.0, on FreeBSD, after building and installing gmp, isl and cloog:
LD_LIBRARY_PATH=/path/to/isl/lib ./configure (lots of other options) \
--with-stage1-ldflags="-rpath /path/to/isl/lib -rpath /path/to/cloog/lib -rpath /path/to/gmp/lib"
and the resulting gcc binary does not need any LD_LIBRARY_PATH. The LD_LIBRARY_PATH for configure is needed because it compiles a test program to check for the ISL version, which would fail if it didn't find the ISL shared lib.
I tried it on Linux (Ubuntu) where it failed during configuring because the -rpath args were passed to gcc instead of ld. I could fix this by using
--with-stage1-ldflags="-Wl,-rpath,/path/to/isl/lib,-rpath,/path/to/cloog/lib,-rpath,/path/to/gmp/lib"
instead.
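Either way, you can confirm the flags actually ended up in the produced binaries with readelf, e.g. (using the cc1plus path from the question):
readelf -d "$PREFIX"/libexec/gcc/x86_64-suse-linux/4.7.2/cc1plus | grep -Ei 'rpath|runpath'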
Just using configure --with-stage1-ldflags="-Wl,-rpath,/path/to/lib" was not enough for me to build gcc 4.9.2; bootstrap failed in stage 2. What works is to pass the flags directly to make via
make BOOT_LDFLAGS="-Wl,-rpath,/path/to/lib"
I got this from https://gcc.gnu.org/ml/gcc/2008-09/msg00214.html
While it still involves setting environment variables, what I do is that I define LD_RUN_PATH, which sets the rpath. That way the rest of the system can keep using the system provided libraries instead of using the ones that your gcc build generates.
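A minimal sketch of that approach (options elided as in the question):
export LD_RUN_PATH="$PREFIX/lib"    # recorded as an rpath in everything linked during the build
./configure --prefix="$PREFIX" <other options>
make && make install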
I am going to make a suggestion that I believe solves your problem, although it definitely does not answer your question. Let's see how many downvotes I get.
Writing a generic wrapper script to set LD_LIBRARY_PATH and then to run the executable is easy; see https://stackoverflow.com/a/7101577/768469.
The idea is to pass something like --prefix=$PREFIX/install to configure, building an install tree that looks like this:
$PREFIX/
    install/
        lib/
            libcloogXX.so
            libgmpYY.so
            ...
        bin/
            gcc
            emacs
            ...
    bin/
        .wrapper
        gcc -> .wrapper
        emacs -> .wrapper
.wrapper is a simple shell script:
#!/bin/sh
here="${0%/*}"   # or use $(dirname "$0")
base="${0##*/}"  # or use $(basename "$0")
libdir="$here"/../install/lib
if [ "$LD_LIBRARY_PATH"x = x ] ; then
    LD_LIBRARY_PATH="$libdir"
else
    LD_LIBRARY_PATH="$libdir":"$LD_LIBRARY_PATH"
fi
export LD_LIBRARY_PATH
exec "$here"/../install/bin/"$base" "$@"
This will forward all arguments correctly, handle spaces in arguments or directory names, and so forth. For practical purposes, it is indistinguishable from setting the rpath like you want.
Also, you can use this approach not only for gcc, but for your entire my-personal-$PREFIX tree. I do this all the time in environments where I want an up-to-date suite of GNU tools, but I do not have (or want to admit to have) root access.
Try to add your $PREFIX to /etc/ld.so.conf and then run ldconfig:
# echo $PREFIX >> /etc/ld.so.conf
# ldconfig
This will recreate the cache that is used by the runtime linker, and it will pick up your libraries.
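You can verify that the cache now knows about them with, for example:
ldconfig -p | grep cloog    # or whichever library was reported missing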
WARNING: This operation will cause ALL applications to use your newly compiled libraries in $PREFIX instead of the default location.

G++ cannot find library unless it is full path

After installing Boost from EPEL 5 on 64-bit CentOS 5.8 I ran into a strange problem. I cannot link any other way than by providing the full path. I.e., this works:
g++ ... /usr/lib64/libboost_python.so.5
But this cannot find -lboost_python
g++ ... -L/usr/lib64/ -lboost_python
What could be wrong?
P.S. LD_LIBRARY_PATH does not help. It does find some libraries, but even symlinking into /usr/lib does not help. I am building a 64-bit version of the program (checked with file *.o).
Try adding a symlink so the linker can find the library under its unversioned name (-lboost_python looks for libboost_python.so): ln -s /usr/lib64/libboost_python.so.5 /usr/lib64/libboost_python.so, then try again.