"Missing separate debuginfos" in non-root account - gdb

I have the same problem as reported here:
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.9.i686 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.i686
However, I am not the root user, so I can't just run debuginfo-install .... I was wondering if there's a relatively easy way for me to get these libraries and add a path to them in my home directory without using a root account.

There is a way, though I'm not sure I would call it easy. The essential idea is to install the files in your $HOME and then tell gdb how to find them.
The steps are roughly:
Download the RPMs.
Install them somewhere in $HOME. Sometimes you can do this with rpm -i --prefix=..., though I don't know if that will work for debuginfo RPMs. You can always extract the files from an RPM using cpio. Be sure to preserve the directory names.
In gdb, use set debug-file-directory to tell gdb to look at your new directory. You can put multiple directories here by separating them with the system path separator (: on Linux).
Some more fiddling with source directories (see dir) might be needed after this.
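For example, a sketch of those steps for the glibc debuginfo package named in the error (the download URL, version, and paths are illustrative assumptions; use whatever matches your system):
cd "$HOME"
# 1. download the matching debuginfo RPM (URL is an assumption)
wget http://debuginfo.centos.org/6/i386/glibc-debuginfo-2.12-1.47.el6_2.9.i686.rpm
# 2. unpack it under $HOME with cpio, preserving directory names
mkdir -p "$HOME/debuginfo" && cd "$HOME/debuginfo"
rpm2cpio ../glibc-debuginfo-2.12-1.47.el6_2.9.i686.rpm | cpio -idmv
# 3. point gdb at the unpacked usr/lib/debug tree
gdb -ex "set debug-file-directory $HOME/debuginfo/usr/lib/debug" ./myprogram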
It's maybe worth noting that you normally don't actually need system debuginfo.

Related

Why is the main() function of my program named "test" not getting called? [duplicate]

When running scripts in bash, I have to write ./ in the beginning:
$ ./manage.py syncdb
If I don't, I get an error message:
$ manage.py syncdb
-bash: manage.py: command not found
What is the reason for this? I thought . is an alias for current folder, and therefore these two calls should be equivalent.
I also don't understand why I don't need ./ when running applications, such as:
user:/home/user$ cd /usr/bin
user:/usr/bin$ git
(which runs without ./)
Because on Unix, usually, the current directory is not in $PATH.
When you type a command the shell looks up a list of directories, as specified by the PATH variable. The current directory is not in that list.
The reason for not having the current directory on that list is security.
Let's say you're root and go into another user's directory and type sl instead of ls. If the current directory is in PATH, the shell will try to execute the sl program in that directory (since there is no other sl program). That sl program might be malicious.
It works with ./ because POSIX specifies that a command name that contains a / will be used as a filename directly, suppressing the search in $PATH. You could have used the full path for the exact same effect, but ./ is shorter and easier to write.
EDIT
That sl part was just an example. The directories in PATH are searched sequentially, and when a match is found, that program is executed. So, depending on how PATH looks, typing a plain command name may or may not be enough to run the program in the current directory.
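A quick way to see the rule in action (a sketch, assuming a POSIX-like shell and a writable current directory; hello.sh is a made-up name):
printf '#!/bin/sh\necho hello\n' > hello.sh
chmod +x hello.sh
hello.sh      # only $PATH is searched, so this fails with "command not found"
./hello.sh    # the / makes the shell treat it as a filename, so it prints "hello"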
When bash interprets the command line, it looks for commands in locations described in the environment variable $PATH. To see it type:
echo $PATH
You will see some paths separated by colons, and you will notice that the current directory . is usually not among them. So Bash cannot find your command if it only lives in the current directory. You can change this by having:
PATH=$PATH:.
This line adds the current directory to $PATH so you can do:
manage.py syncdb
It is not recommended, as it has security issues; plus you can get weird behaviour, since . changes depending on the directory you are in :)
Avoid:
PATH=.:$PATH
As it can “mask” some standard commands and open the door to a security breach :)
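A sketch of why that is dangerous (the directory and script are made up; don't actually do this on a real system):
cd /tmp/untrusted-dir
printf '#!/bin/sh\necho "this could have been anything"\n' > ls
chmod +x ls
PATH=.:$PATH
ls    # now runs ./ls from the untrusted directory instead of /bin/ls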
Just my two cents.
Your script, when it is in your home directory, will not be found when the shell looks through the directories listed in the $PATH environment variable.
The ./ says 'look in the current directory for my script rather than looking at all the directories specified in $PATH'.
When you include the './' you are essentially giving the "full path" to the executable bash script, so your shell does not need to check your PATH variable. Without the './', your shell looks through your PATH variable (which you can see by running echo $PATH) to check whether the command you typed lives in any of the folders on your PATH. If it doesn't (as is the case with manage.py), it says it can't find the file. It is considered bad practice to include the current directory on your PATH, which is explained reasonably well here: http://www.faqs.org/faqs/unix-faq/faq/part2/section-13.html
On *nix, unlike Windows, the current directory is usually not in your $PATH variable. So the current directory is not searched when executing commands. You don't need ./ for running applications because these applications are in your $PATH; most likely they are in /bin or /usr/bin.
This question already has some awesome answers, but I wanted to add that, if your executable is on the PATH, and you get very different outputs when you run
./executable
to the ones you get if you run
executable
(let's say you run into error messages with the one and not the other), then the problem could be that you have two different versions of the executable on your machine: one on the path, and the other not.
Check this by running
which executable
and
whereis executable
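For example (the paths below are hypothetical, just to show the shape of the output):
$ which executable        # the copy a bare `executable` would run, resolved via $PATH
/usr/local/bin/executable
$ whereis executable      # every copy whereis can find in the standard locations
executable: /usr/local/bin/executable /home/user/src/build/executable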
It fixed my issues... I had three versions of the executable, only one of which was compiled correctly for the environment.
Rationale for the / POSIX PATH rule
The rule was mentioned at: Why do you need ./ (dot-slash) before executable or script name to run it in bash? but I would like to explain why I think that is a good design in more detail.
First, an explicit full version of the rule is:
if the path contains / (e.g. ./someprog, /bin/someprog, ./bin/someprog): CWD is used and PATH isn't
if the path does not contain / (e.g. someprog): PATH is used and CWD isn't
Now, suppose that running:
someprog
would search:
relative to CWD first
relative to PATH after
Then, if you wanted to run /bin/someprog from your distro, and you did:
someprog
it would sometimes work, but other times it would fail, because you might be in a directory that contains another, unrelated someprog program.
Therefore, you would soon learn that this is not reliable, and you would end up always using absolute paths when you want to use PATH, therefore defeating the purpose of PATH.
This is also why having relative paths in your PATH is a really bad idea. I'm looking at you, node_modules/bin.
Conversely, suppose that running:
./someprog
would search:
relative to PATH first
relative to CWD after
Then, if you just downloaded a script someprog from a git repository and wanted to run it from CWD, you would never be sure that this is the actual program that would run, because maybe your distro has a:
/bin/someprog
which is in your PATH from some package you installed after drinking too much after Christmas last year.
Therefore, once again, you would be forced to always run local scripts relative to CWD with full paths to know what you are running:
"$(pwd)/someprog"
which would be extremely annoying as well.
Another rule that you might be tempted to come up with would be:
relative paths use only PATH, absolute paths only CWD
but once again this forces users to always use absolute paths for non-PATH scripts with "$(pwd)/someprog".
The / path search rule offers a simple-to-remember solution to the above problem:
slash: don't use PATH
no slash: only use PATH
which makes it super easy to always know what you are running, by relying on the fact that files in the current directory can be expressed either as ./somefile or somefile, and so it gives special meaning to one of them.
Sometimes it is slightly annoying that you cannot search for some/prog relative to PATH, but I don't see a saner solution to this.
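A quick way to check what a given name would run in bash (someprog is just a placeholder):
type -a someprog      # lists every someprog the shell could run, in lookup order
command -v someprog   # the one a bare `someprog` resolves to (PATH lookup only)
ls -l ./someprog      # the file that ./someprog would always run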
When the script is not on the PATH, it is required to do so. For more info read http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_02_01.html
Everyone has given great answers to this question, and yes, this only applies when running the command from the current directory, unless you include the absolute path. See my samples below.
Also, the ./ (dot-slash) made more sense to me once I moved into the child folder tmp2 (/tmp/tmp2), where the script has to be called with ../ (dot-dot-slash).
SAMPLE:
[fifi#ip-172-31-17-12 tmp]$ ./StackO.sh
Hello Stack Overflow
[fifi#ip-172-31-17-12 tmp]$ /tmp/StackO.sh
Hello Stack Overflow
[fifi#ip-172-31-17-12 tmp]$ mkdir tmp2
[fifi#ip-172-31-17-12 tmp]$ cd tmp2/
[fifi#ip-172-31-17-12 tmp2]$ ../StackO.sh
Hello Stack Overflow

How to rename bazelisk to bazel

I am currently trying, without great success, to build tensorflow from source.
As suggested here: https://www.tensorflow.org/install/source, I tried to do so by installing bazelisk. Unfortunately, that didn't work out, because ./compile cannot find bazel, as bazelisk replaces it.
This link: https://github.com/bazelbuild/bazelisk/issues/122 suggested to alias or rename the environment variable to "bazel" in the PATH.
As described in the issue above, aliasing did not work out for the configure.py.
My next step would be to rename it, but unfortunately I was not able to figure out how that renaming works under Linux.
I did add the following to my .profile under my /home folder:
export PATH=$PATH:$(go env GOPATH)/bin
The way I understand it, this adds the path to the Bazelisk binary to my PATH, but I am not sure how the renaming would work in this situation.
Would it be possible to explain how I could proceed?
Download the bazelisk binary from the releases page and save the file as bazel in a directory somewhere in your $PATH.
For example, if you have export PATH=$PATH:$HOME/bin in your .profile/.bashrc/.bash_profile, store the bazelisk binary in $HOME/bin as $HOME/bin/bazel.
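A sketch of that, assuming a Linux x86_64 machine (the release version in the URL is a placeholder; pick the current one from the releases page):
mkdir -p "$HOME/bin"
curl -L -o "$HOME/bin/bazel" https://github.com/bazelbuild/bazelisk/releases/download/v<version>/bazelisk-linux-amd64
chmod +x "$HOME/bin/bazel"
export PATH=$PATH:$HOME/bin   # or rely on the line already in your .profile
bazel version                 # bazelisk now answers to `bazel`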
You have two more options:
sudo ln -s /usr/local/bin/bazelisk /usr/local/bin/bazel, which makes a bazel symlink to bazelisk (personally I prefer this, because it's more explicit)
alias bazel='bazelisk' in your ~/.zshrc, ~/.bashrc or ~/.profile. This also works well, but there could be some issues if you want to run vim-bazel and such.

How to install 2 Opencv versions on one Ubuntu machine and How to activate one at a time for compilation?

I have installed two versions of OpenCV on my Ubuntu 12.04 machine, one in /usr/local/ (OpenCV 3.0.0) and another in /usr/ (OpenCV 2.4.9).
To activate a particular version I am using these commands in a terminal.
Example: to activate OpenCV 2.4.9,
sudo sh -c 'echo "/usr/" > /etc/ld.so.conf.d/opencv.conf' (shell script)
sudo ldconfig
export PKG_CONFIG_PATH=/usr/lib/pkgconfig
After executing these commands the version changes.
I checked with the command pkg-config --modversion opencv.
Then I compiled my code and checked the linked libraries using the ldd command.
It lists the OpenCV 3.0.0 libraries, not OpenCV 2.4.9.
Please help me with the correct way of switching OpenCV versions.
Thanks in advance
Thank you. I found a solution for this problem, but I am not sure whether the solution I found is the correct way or not. However, it works fine for me.
When we install two versions of OpenCV in different locations, we end up with two opencv.pc files, each at {path}/lib/pkgconfig/opencv.pc.
In the above example, OpenCV 2.4.9's opencv.pc file is at /usr/lib/pkgconfig/opencv.pc,
and OpenCV 3.0.0's opencv.pc file is at /usr/local/lib/pkgconfig/opencv.pc.
When we compile code, pkg-config searches both locations for an opencv.pc configuration file and uses whichever one it finds first, ignoring the other.
So if we want to compile code against a particular version, we need to remove the other version's opencv.pc file from its location.
If you want to use OpenCV 2.4.9, remove (or rename) opencv.pc in OpenCV 3.0.0's lib/pkgconfig/ location. Likewise, if you want to activate OpenCV 3.0.0, restore its opencv.pc in its lib/pkgconfig/ location and remove OpenCV 2.4.9's opencv.pc from /usr/lib/pkgconfig/.
If somebody knows a better way to do this, please comment.
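For example, to switch to OpenCV 2.4.9 with this approach (a sketch; paths follow the layout described above):
sudo mv /usr/local/lib/pkgconfig/opencv.pc /usr/local/lib/pkgconfig/opencv.pc.disabled
pkg-config --modversion opencv    # should now report 2.4.9
# to go back to 3.0.0, restore that file and disable the other one
sudo mv /usr/local/lib/pkgconfig/opencv.pc.disabled /usr/local/lib/pkgconfig/opencv.pc
sudo mv /usr/lib/pkgconfig/opencv.pc /usr/lib/pkgconfig/opencv.pc.disabled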
You can still install both versions and append the paths of the version you want to use to the relevant environment variables.
If you don't know how to change the system path, check this (How to permanently set $PATH on Linux?).
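A sketch of that idea, using the install prefixes from the question (adjust to your layout):
# use the 3.0.0 install under /usr/local for the next build
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
pkg-config --modversion opencv    # confirm which version pkg-config now picks up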

Using %{buildroot} in a SPEC file

I'm creating a simple RPM installer, I just have to copy files to a directory structure I create in the %install process.
The %install process is fine: I create the folder /opt/company/application/ with the command mkdir -p %{buildroot}/opt/company/%{name}, and then I proceed to copy the files and subdirectories from my package. I've tried installing it and it works.
My doubt comes when uninstalling. I want to remove the folder /opt/company/application/, and I thought you're supposed to use %{buildroot} anywhere you reference the install location, because my understanding is that the user might have a different structure and you can't assume that rmdir /opt/company/%{name}/ will work. Using that command in the %postun section successfully deletes the directories, whereas using rmdir %{buildroot}/opt/company/%{name} doesn't delete the folders.
My question is, shouldn't you be using %{buildroot} in %postun in order to get the proper install location? If that's not the case, why?
Don't worry about it. If you claim the directory as your own in the %files section, RPM will handle it for you.
FYI, %{buildroot} probably won't exist on the target machine.
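For example, a minimal sketch of the relevant part of the spec file (the directory layout is taken from the question):
%files
# claiming the directory means RPM installs it and, on erase, removes it and its contents
/opt/company/%{name}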

Building log4cxx with APR

I need to build the log4cxx library on a SuSE linux system where I am not root. The package manager, zypper, apparently does not know about log4cxx.
I download log4cxx and try to build with autotools
./configure
checking for APR... no
configure: error: APR could not be located. Please use the --with-apr option.
I then search for libapr:
find / -name libapr*
/usr/share/doc/packages/libapr-util1
/usr/share/doc/packages/libapr1
/usr/lib64/libaprutil-1.so.0.3.12
/usr/lib64/libapr-1.so.0.4.5
/usr/lib64/libaprutil-1.so.0
/usr/lib64/libapr-1.so.0
So I try
./configure --with-apr=/usr/lib64/libapr-1.so.0
configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.
The same for --with-apr=/usr/lib64/libapr-1.so.0.4.5 and --with-apr=/usr/lib64/.
Which file does ./configure look for? What does --with-apr expect? Is one of the two *.so.* files the needed library?
You'll probably want to install libapr1-devel so that you can compile against it. Then try re-running ./configure.
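A sketch of what that could look like (package names are a guess for SuSE; the devel packages normally provide the apr-1-config script that --with-apr is looking for, and apu-1-config for the companion --with-apr-util option):
sudo zypper install libapr1-devel libapr-util1-devel
./configure --with-apr=/usr/bin/apr-1-config --with-apr-util=/usr/bin/apu-1-config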
I ran into the same issue. I think you're using the source code off of Apache's site, which I believe is outdated. This issue was fixed in the SVN trunk several years ago (lolol, I guess right around the time this question was asked).
Just pull the svn trunk's source and compile it:
svn checkout http://svn.apache.org/repos/asf/incubator/log4cxx/trunk apache-log4cxx
./autogen.sh
./configure
make
make check
sudo make install
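Since the question mentions not being root, a non-root variant of the same build might look like this (a sketch; $HOME/local is an arbitrary prefix):
./autogen.sh
./configure --prefix="$HOME/local"
make
make check
make install    # installs under $HOME/local, no sudo needed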
On software.opensuse.org someone has packages built for recent versions of openSUSE as well as SLE at liblog4cxx10. Maybe that'll work for you instead of building your own.
MichaelGoren is right.
There are multiple ".h" files missing.
So you have to add them before launching make.
sed -i '1i#include <string.h>\n' src/main/cpp/inputstreamreader.cpp
sed -i '1i#include <string.h>\n' src/main/cpp/socketoutputstream.cpp
sed -i '1i#include <string.h>\n' src/examples/cpp/console.cpp
sed -i '1i#include <stdio.h>\n' src/examples/cpp/console.cpp
I bumped into the same problem on 3.3.4-5.fc17.x86_64 and resolved it by adding the appropriate .h includes to the .cpp files reported by the make utility.
In my case I had to run the make utility three times, each time getting a new error and fixing it by adding the appropriate include to the reported .cpp file.
The main idea is as follows:
1) Check, by running the man utility, where the function mentioned in the error is defined.
For example, man memmove says that it is defined in the string.h header file.
2) Add the appropriate include file to the CPP file.
For example, the make utility complains that inputstreamreader.cpp does not find the memmove function. Open the inputstreamreader.cpp file and add string.h to its header files.
3) Run the make utility until log4cxx compiles without errors.
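One round of that loop might look like this (memmove and inputstreamreader.cpp are the examples from the steps above):
make 2>&1 | tail -n 20        # note which function and which .cpp file the error mentions
man memmove                   # the SYNOPSIS shows it needs <string.h>
sed -i '1i#include <string.h>' src/main/cpp/inputstreamreader.cpp
make                          # repeat until the build finishes cleanly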