Building gcc 2.95.3 on Ubuntu 12.04 - c++

I want to compile gcc-2.95.3 on my Ubuntu 12.04 machine, but it won't work.
I found this, and this, but nothing helped.
I first tried to build it with my 4.6.3 version of gcc, but I got an error message.
Then I tried to build gcc-3.4.6 because in the first link, this version had been used to build 2.95.3, but it was not successful either.
The Trevor Pounds blog post is the best page on this issue that I could find, but it just won't work.
I tried other things too, but nothing works.
As far as I know, the newer toolchain may be the problem, but is there some way to fix that without reinstalling the whole OS?
Actually, I don't even care whether I build it myself or not; if there is a place where I can download working binaries, I'm happy.
Okay, here is the detailed information about what I did and what error messages I got:
The whole procedure is from here
1) I have a fresh installation of Ubuntu 12.04
2) I check which glibc is installed with ldd --version; the answer is ldd (Ubuntu EGLIBC 2.15-0ubuntu10.5) 2.15 ... so it is 2.15
3) I download glibc-2.15.tar.gz from http://ftp.gnu.org/gnu/libc/, I save it into my Downloads folder.
4) I unpack glibc by tar xzf glibc-2.15.tar.gz
5) mkdir -p gcc-2.95.3/glibc-workaround/include/bits
6) cp glibc-2.15/bits/stdio-lock.h gcc-2.95.3/glibc-workaround/include/bits
7) cp glibc-2.15/nptl/sysdeps/unix/sysv/linux/x86_64/bits/pthreadtypes.h gcc-2.95.3/glibc-workaround/include/bits
8) sed -i -n '1h;1!H;${;g;s/\(__pthread_slist_t __list;\n[ \t]*}\)/\1 __gcc_295_workaround__/g;p;}' gcc-2.95.3/glibc-workaround/include/bits/pthreadtypes.h
9) Now I download gcc-2.95.3.tar.gz from ftp://ftp.gnu.org/gnu/gcc/, I also save it in my Downloads folder.
10) I unpack gcc by tar xzf gcc-2.95.3.tar.gz
11) cd gcc-2.95.3
12) I download http://www.trevorpounds.com/blog/wp-content/uploads/2010/01/gcc-v2.95.x.debian.x86_64.diff and save it into the Downloads folder.
13) patch -p0 < ../gcc-v2.95.x.debian.x86_64.diff (no errors occur)
14) mkdir ../gcc-2.95.3-objdir
15) cd ../gcc-2.95.3-objdir
16) ../gcc-2.95.3/configure --prefix=/opt/i386/gcc/gcc-2.95.3 --enable-languages=c,c++ --enable-threads=posix --enable-shared --host i386-pc-linux-gnu (this seems to work fine)
17) make > log.txt
This is where I get my error
I get this in the console:
and this is my log.txt file:
18) The next step would be make install but I didn't do that, since I got the error before.
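As an aside, make > log.txt only redirects stdout, which is why the error text shows up on the console rather than in log.txt. To capture both streams in one file, something like this works:
make > log.txt 2>&1
or, to watch the output while logging it at the same time:
make 2>&1 | tee log.txt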

Related

How to Install compiler g++-4.8.5 in ubuntu 20.04

As the title says, I can't install that specific version of g++ on my current Ubuntu (20.04).
I have been trying the usual things, such as sudo apt install g++- (and displaying all possibilities, but there were only versions from 8 to 10). The same happened when looking for gcc possibilities.
I also tried this: gist.github.com/application2000/73fd6f4bf1be6600a2cf9f56315a2d91 (same problem)
After looking for a while I gave up on my research and ended up here. I hope someone with more wisdom than me can give me a hand with this.
These steps should work:
sudo dpkg --add-architecture i386
sudo apt update
sudo apt upgrade
sudo apt-get install gcc-multilib libstdc++6:i386
wget https://ftp.gnu.org/gnu/gcc/gcc-4.8.5/gcc-4.8.5.tar.bz2 --no-check-certificate
tar xf gcc-4.8.5.tar.bz2
# cd gcc-4.8.5
# ./contrib/download_prerequisites
# cd ..
sed -i -e 's/__attribute__/\/\/__attribute__/g' gcc-4.8.5/gcc/cp/cfns.h
sed -i 's/struct ucontext/ucontext_t/g' gcc-4.8.5/libgcc/config/i386/linux-unwind.h
mkdir xgcc-4.8.5
pushd xgcc-4.8.5
$PWD/../gcc-4.8.5/configure --enable-languages=c,c++ --prefix=/usr --enable-shared --enable-plugin --program-suffix=-4.8.5
make MAKEINFO="makeinfo --force" -j
sudo make install -j
Note that you have to uncomment the download_prerequisites lines above on some platforms. For me it worked without them on CentOS 7 and Ubuntu 20 with the mandatory packages installed:
Ubuntu/Debian:
sudo apt install make wget git gcc g++ lhasa libgmp-dev libmpfr-dev libmpc-dev flex bison gettext texinfo ncurses-dev autoconf rsync
Centos:
sudo yum install wget gcc gcc-c++ python git perl-Pod-Simple gperf patch autoconf automake make makedepend bison flex ncurses-devel gmp-devel mpfr-devel libmpc-devel gettext-devel texinfo
A few seconds later (/giggles) gcc-4.8.5 is installed and available.
Notes:
if you don't have the resources to run make -j, omit -j or use -j4 (or another number adequate for your system)
your mileage may vary and you may need to install further i386 packages
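If the build and install go through, a quick sanity check might look like this (assuming the --program-suffix=-4.8.5 from the configure line above, so the installed drivers are named gcc-4.8.5 and g++-4.8.5):
g++-4.8.5 --version
echo 'int main() { return 0; }' > hello.cpp
g++-4.8.5 -std=c++11 hello.cpp -o hello && ./hello && echo OK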
Since I can't comment, I will add to bebbo's solution that on Ubuntu 20.04 I had to apply the following patches on top of his steps:
Add an include of signal.h to libsanitizer/asan/asan_linux.cc
https://patchwork.ozlabs.org/project/gcc/patch/6824253.3U2boEivI2#devpool21/
Change a line in libsanitizer/tsan/tsan_platform_linux.cc as shown. The line number may not be the one stated in the patch, so search for the line that was changed. There is no need to apply the patch to the other files:
https://git.pantherx.org/mirror/guix/commit/0b93d04ac537d6413999349ebe7cdcb1e961700e
Adding to kpeace's answer...
sed -i '/#include <pthread.h>/a #include <signal.h>' path_to_gcc4.8.5src/libsanitizer/asan/asan_linux.cc
sed -i 's/__res_state \*statp = (__res_state\*)state;/struct __res_state *statp = (struct __res_state*)state;/g' path_to_gcc4.8.5src/libsanitizer/tsan/tsan_platform_linux.cc
Just adding a couple of sed lines to patch them inline.
Also, I've been writing some Ruby scripts to install some software (for fun, of course). I recently compiled gcc-4.8.5 successfully under Linux Mint 20.1 (Ubuntu 20.04 based; the compiler is the system gcc 9.3.0, installed with sudo apt install build-essential) with this script. I also installed all the packages that bebbo suggested, including gcc-multilib and libstdc++6:i386, before running this script. Check out the InstGcc4 class at the bottom of the code.
install_gcc.rb
They might end up in an 'un-installable' state a few months later, but at least gcc-4.8.5 works now.
P.S. I started compiling this old gcc because of CUDA... My hardware is a decade-old GeForce 9600/9400 (yeah, a 2008 MBP) and CUDA 6.5 was the best option for that machine.
P.P.S. Anyway, the strange thing is, I had to pass '-std=gnu++11' in CXXFLAGS to avoid errors.
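For reference, that flag can be passed on the make invocation from bebbo's steps, roughly like this (a sketch; adjust the job count to your machine):
make CXXFLAGS="-std=gnu++11" MAKEINFO="makeinfo --force" -j4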

C++ compiling project with shared object (libtensorflow_cc.so) failed

At the moment I'm facing some problems compiling (and running) a (huge) project of my own with TensorFlow support. On my own system (Ubuntu 16.04 LTS) everything works fine. The same procedure on a cluster leads to a compile error, and I have not been able to find a solution yet.
System information
Have I written custom code: YES
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): CentOS 7.4.1708
TensorFlow installed from (source or binary): source (using the git repo)
TensorFlow version (use command below): 1.9
Python version: 2.7.15
Bazel version (if compiling from source): 0.16
GCC/Compiler version (if compiling from source): 7.30
CUDA/cuDNN version: not used
GPU model and memory: Tesla K20m
Exact command to reproduce:
Cloned tensorflow repo from github
Configure Bazel (./configure in tensorflow repo)
Built libtensorflow_cc.so with bazel (worked fine!!!)
Downloaded dependencies with delivered script tensorflow/contrib/makefile/download_dependencies.sh
(tried installing protobuf & eigen manually, too)!
Installed protobuf with ./autogen.sh && ./configure && make && make install
Installed Eigen from downloaded dependencies
Copied libraries, headers and includes into own project:
$ cp bazel-bin/tensorflow/libtensorflow_cc.so ../tf_project/lib/
$ cp bazel-bin/tensorflow/libtensorflow_framework.so ../tf_project/lib/
$ cp /tmp/proto/lib/libprotobuf.a ../tf_project/lib/
$ mkdir -p ../tf_project/include/tensorflow
$ cp -r bazel-genfiles/* ../tf_project/include/
$ cp -r tensorflow/cc ../tf_project/include/tensorflow
$ cp -r tensorflow/core ../tf_project/include/tensorflow
$ cp -r third_party ../tf_project/include
$ cp -r /tmp/proto/include/* ../tf_project/include
$ cp -r /tmp/eigen/include/eigen3/* ../tf_project/include
Remark: On my own system it has worked this way for weeks now. I can use TensorFlow in my own project with an exported model trained with Keras inside a Python project. I make predictions using client_sessions and many other functions of the TensorFlow framework, and it works properly.
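For context, the project is then compiled against these copied artifacts with a command along these lines (the source file name, output name, and exact link order here are hypothetical):
$ g++ -std=c++11 main.cpp -o tf_app -I../tf_project/include -L../tf_project/lib -ltensorflow_cc -ltensorflow_framework -lprotobuf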
Problem: On the cluster I can compile TensorFlow as a dynamic library and install protobuf and eigen.
But when I try to compile my project (a process similar to the one on my own system) without significant changes, it doesn't work and stops with the following error message:
.../tensorflow/include/tensorflow/core/framework/tensor.pb.h:12:2: error: #error This file was generated by a newer version of protoc which is
#error This file was generated by a newer version of protoc which is
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:13:2: error: #error incompatible with your Protocol Buffer headers. Please update
#error incompatible with your Protocol Buffer headers. Please update
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:14:2: error: #error your headers.
#error your headers.
^~~~~
.../include/tensorflow/core/framework/tensor.pb.h:27:10: fatal error: google/protobuf/inlined_string_field.h: No such file or directory
#include <google/protobuf/inlined_string_field.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
So, clearly this should be the issue:
This file was generated by a newer version of protoc which is incompatible with your Protocol Buffer headers. Please update your headers.
and
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
Tried solutions:
I tried different versions of protobuf, but there was an error every single time.
I tried installing protobuf and eigen manually (without the download_dependencies.sh script).
I'm puzzled because my own installation, following exactly the same steps, works properly. Maybe there is an issue with one of the components, although I tried different versions to make sure these aren't "new" issues.
Can someone help me solve this error so that I can compile and run this project on the other machine?
Looking forward to helpful solutions :) Thank you very much for the support!
Best regards from Germany!
First, I checked on GitHub: tensorflow r1.9 requires protobuf >= 3.6.0. With the download_dependencies.sh script you always get protobuf 3.5.0, which lacks inlined_string_field.h and some other headers.
Second, regarding the errors that ask you to update your protobuf version: I tried many versions of protobuf too. Only version 3.6.0 works well, not version 3.6.1 or older ones.
For these errors,
error This file was generated by a newer version of protoc which is
fatal error: google/protobuf/inlined_string_field.h: No such file or directory
It is a version mismatch problem. My solution is to manually download protobuf 3.6.0 from
https://github.com/protocolbuffers/protobuf/releases/download/v3.6.0/protoc-3.6.0-linux-x86_64.zip
install it, and copy /usr/local/include/google to somewhere/tf/include. It works quite well for me. I don't know if you tried version 3.6.0.
The download_dependencies.sh script provides a mismatched version of protobuf. See the issue I posted on GitHub:
https://github.com/tensorflow/tensorflow/issues/22536
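A quick way to check whether the protoc you ran matches the headers your project includes is to compare the two versions directly (the header path assumes protobuf was installed under /usr/local; 3.6.0 is encoded as 3006000):
protoc --version
grep 'define GOOGLE_PROTOBUF_VERSION' /usr/local/include/google/protobuf/stubs/common.h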
Also, I noticed your copied files are different from mine. I did it this way:
sudo mkdir /usr/local/tensorflow/include
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/Eigen /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/downloads/eigen/unsupported /usr/local/tensorflow/include/
sudo cp -r tensorflow/contrib/makefile/gen/protobuf/include/google /usr/local/tensorflow/include/
sudo cp tensorflow/contrib/makefile/downloads/nsync/public/* /usr/local/tensorflow/include/
sudo cp -r bazel-genfiles/tensorflow /usr/local/tensorflow/include/
sudo cp -r tensorflow/cc /usr/local/tensorflow/include/tensorflow
sudo cp -r tensorflow/core /usr/local/tensorflow/include/tensorflow
sudo mkdir /usr/local/tensorflow/include/third_party
sudo cp -r third_party/eigen3 /usr/local/tensorflow/include/third_party/
sudo mkdir /usr/local/tensorflow/lib
sudo cp bazel-bin/tensorflow/libtensorflow_*.so /usr/local/tensorflow/lib
Btw, I ran build_all_linux.sh instead of download_dependencies.sh.
I hope it will be helpful for you.
I met this problem before; the following is my solution:
I successfully built tensorflow-r1.8 with bazel, and I found that the protoc at the following path is 3.5.0:
/home/zsb/.cache/bazel/_bazel_zsb/1372f28eb0671f692e7ac38330377d8c/execroot/org_tensorflow/bazel-out/host/bin/external/protobuf_archive/protoc --version
but actually my system is configured to use protoc version 3.4.0.
I confirmed this by typing "protoc --version" directly.
So finally I updated the system protobuf to version 3.5.0 and this problem was fixed.
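In case it helps, bringing the system protobuf in line with the bundled protoc is the usual autotools build (a sketch; the archive name assumes the protobuf 3.5.0 C++ release tarball):
tar xzf protobuf-cpp-3.5.0.tar.gz
cd protobuf-3.5.0
./configure && make && sudo make install && sudo ldconfig
protoc --version   # should now report libprotoc 3.5.0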

omz_urlencode:42: -regex-match not available for regex

Okay, so I'm trying to install Homebrew so that I can install nodejs and npm. I'm using this command from brew.sh:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Once I install Homebrew, it keeps returning this error:
It appears Homebrew is already installed. If your intent is to reinstall you
should do the following before running this installer again:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall)"
The current contents of /usr/local are bin CODEOFCONDUCT.md etc lib libexec Library LICENSE.txt README.md sbin share .git .github .gitignore
omz_urlencode:42: failed to load module `zsh/regex': dlopen(/usr/local/Cellar/zsh/5.2/lib/zsh/regex.so, 9): image not found
omz_urlencode:42: -regex-match not available for regex'
I honestly have no idea what's happening here. I'm not sure if I have to symlink Homebrew or what, but I have tried everything I know so far (which isn't much). If anyone could be kind enough to give me instructions on what to look for to solve the issue, I'm all ears.
Thank you for listening!
It seems it's a bug that has been corrected in a recent version of OMZ; you should update it.
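Depending on how old the Oh My Zsh checkout is, the updater is invoked with one of these (the first is the older command name, the second the current one):
upgrade_oh_my_zsh
omz update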
You get the output:
zsh: command not found: homebrew
because homebrew is not a command. Try brew -v instead.
To resolve:
Warning: node-5.10.1 already installed, it's just not linked
Use brew link node
Okay, so I think I have made some progress. For some weird reason I didn't have the write permissions to run 'brew link <package>', so I ran the following command:
'sudo chown -R /usr/local/share/systemtap/tapset && brew doctor'
Once I ran 'brew link node' it successfully created 7 symlinks for the node directory, with the following output:
'Linking /usr/local/Cellar/node/5.10.1... 7 symlinks created'
However, when I installed live-server via npm and typed 'live-server -v', it again returned 'zsh: command not found: live-server'. In addition to this, it keeps telling me I have 'unbrewed dylibs':
Warning: Unbrewed dylibs were found in /usr/local/lib.
If you didn't put them there on purpose they could cause problems when
building Homebrew formulae, and may need to be deleted.
Unexpected dylibs:
/usr/local/lib/libociei.dylib
So I'm still unsure what the exact issue is.
P.S. Apologies for not posting this correctly; I'm trying to see how I can separate it into commands like you corrected in the first post. If you could link me to a post that explains how to format it properly, I'll be more than happy to read it.
Thanks,

Error when configuring gmp

I hope this is just a very simple question. OK, here's what I've done: I wanted to install GMP under Ubuntu 11.10. I have both g++ and gcc on my system. So I downloaded the latest release from the official GMP site (GMP 5.0.2), extracted it, and then, since I need the C++ GMP interface, I simply ran:
./configure --enable-cxx
It works for a while and then prints out:
checking for suitable m4... configure: error: No usable m4 in $PATH or /usr/5bin (see config.log for reasons).
Did I do something wrong? Thank you very much!
Matteo
Try sudo apt-get install m4 and rerun ./configure.
I know this was from 7 years ago, but I'm looking at installing gmp 5.1.3 from source on an older system right now. I noted the "funny output" checking for suitable m4... configure: error: No usable m4 in $PATH or /usr/5bin. 5bin, huh? I thought it was a typo, and it probably is. On line 27285 of the configure script there is ac_dummy="$PATH:/usr/5bin";
that is a shell variable listing the directories the script then searches for m4, without finding it. In the default *nix FHS, /usr/5bin doesn't exist.
The problem with ac_dummy="$PATH:/usr/5bin" is that the next few lines are a for loop searching the $PATH variable plus /usr/5bin for m4.
On my system, /usr/sbin is where the m4 files are located, and it is not part of the default $PATH variable.
Fixes:
You could modify your $PATH variable to include /usr/sbin (see the sketch after this list).
You could modify the configure script to say ac_dummy="$PATH:/usr/sbin".
You could wait 7 years for someone to file a bug report.
Depending on the age and support status of your OS, sudo apt-get install m4 could also work.
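For the first fix, a minimal sequence might look like this (assuming m4 really does live in /usr/sbin on your system; if it is missing entirely, installing it with the package manager is the simpler route):
command -v m4 || ls /usr/sbin/m4   # find out whether (and where) m4 is installed
export PATH="$PATH:/usr/sbin"
./configure --enable-cxx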
I had the same error; sudo apt-get install m4 solved the problem.

How do I install and build against OpenSSL 1.0.0 on Ubuntu?

You can consider this a follow-up question to How do I install the OpenSSL C++ library on Ubuntu?
I'm trying to build some code on Ubuntu 10.04 LTS that requires OpenSSL 1.0.0.
Ubuntu 10.04 LTS comes with OpenSSL 0.9.8k:
$ openssl version
OpenSSL 0.9.8k 25 Mar 2009
So after running sudo apt-get install libssl-dev and building, running ldd confirms I've linked in 0.9.8:
$ ldd foo
...
libssl.so.0.9.8 => /lib/i686/cmov/libssl.so.0.9.8 (0x00110000)
...
libcrypto.so.0.9.8 => /lib/i686/cmov/libcrypto.so.0.9.8 (0x002b0000)
...
How do I install OpenSSL 1.0.0 and the 1.0.0 development package?
Update: I'm writing this update after reading SB's answer (but before trying it), because it's clear I need to explain that the obvious solution of downloading and installing OpenSSL 1.0.0 doesn't work:
After successfully doing the following (recommended in the INSTALL file):
$ ./config
$ make
$ make test
$ make install
...I still get:
OpenSSL 0.9.8k 25 Mar 2009
...and:
$ sudo apt-get install libssl-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libssl-dev is already the newest version.
The following packages were automatically installed and are no longer required:
linux-headers-2.6.32-21 linux-headers-2.6.32-21-generic
Use 'apt-get autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
...and (just to make sure) after rebuilding my code, ldd still returns the same thing.
Update #2: I added the "-I/usr/local/ssl/include" and "-L/usr/local/ssl/lib" options (suggested by SB) to my makefile, but I'm now getting a bunch of undefined reference compile errors, for example:
/home/dspitzer/foo/foo.cpp:86: undefined reference to `BIO_f_base64'
/home/dspitzer/foo/foo.cpp:86: undefined reference to `BIO_new'
/usr/local/ssl/include/ contains only an openssl directory (which contains numerous .h files), so I also tried "-I/usr/local/ssl/include/openssl" but got the same errors.
Update #3: I tried changing the OpenSSL includes from (for example):
#include <openssl/bio.h>
...to:
#include "openssl/bio.h"
...in the .cpp source file but still get the same undefined reference errors.
Update #4: I now realize those undefined reference errors are linker errors. If I remove the "-L/usr/local/ssl/lib" from my Makefile, I don't get the errors (but it links to OpenSSL 0.9.8). The contents of /usr/local/ssl/lib/ are:
$ ls /usr/local/ssl/lib/
engines libcrypto.a libssl.a pkgconfig
I added -lcrypto, and the errors went away.
Get the 1.0.0a source from here.
# tar -xf openssl-1.0.0a.tar.gz
# cd openssl-1.0.0a
# ./config
# sudo make install
Note: if you get man-page build errors on modern systems, use make install_sw instead of make install.
This puts it in /usr/local/ssl by default.
When you build, you need to tell gcc to look for the headers in /usr/local/ssl/include and link with libs in /usr/local/ssl/lib. You can specify this by doing something like:
gcc test.c -o test -I/usr/local/ssl/include -L/usr/local/ssl/lib -lssl -lcrypto
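To double-check which OpenSSL a build actually picks up, a small test program using the 1.0.0-era API can be compiled the same way (a sketch):
cat > version_check.c <<'EOF'
#include <stdio.h>
#include <openssl/opensslv.h>   /* OPENSSL_VERSION_TEXT: version of the headers */
#include <openssl/crypto.h>     /* SSLeay_version(): version of the linked library */

int main(void)
{
    printf("compiled against: %s\n", OPENSSL_VERSION_TEXT);
    printf("linked library:   %s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}
EOF
gcc version_check.c -o version_check -I/usr/local/ssl/include -L/usr/local/ssl/lib -lssl -lcrypto
./version_check
On some setups the static libcrypto also needs -ldl at the end of the link line.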
EDIT: DO NOT overwrite any system libraries. It's best to keep new libs in /usr/local. Overwriting Ubuntu defaults can be hazardous to your health and break your system.
Additionally, I was wrong about the paths, as I just tried this in an Ubuntu 10.04 VM. Fixed.
Note: there is no need to change LD_LIBRARY_PATH, since the OpenSSL libs you link against are static libs (at least by default; there might be a way to build them as dynamic libs in the ./config step).
You may need to link against libcrypto because you are using some calls that are built and defined in the libcrypto library. OpenSSL 1.0.0 actually builds two libraries, libcrypto and libssl.
EDIT 2: Added -lcrypto to the gcc line.
Instead of:
$ ./config
$ make
$ make test
$ make install
Do:
$ sudo ./config --prefix=/usr
$ sudo make
$ sudo make test
$ sudo make install
This will help you update to openssl 1.0.1g to patch for CVE-2014-0160 (Heartbleed).
OpenSSL Security Advisory [07 Apr 2014]
TLS heartbeat read overrun (CVE-2014-0160)
A missing bounds check in the handling of the TLS heartbeat extension can be
used to reveal up to 64k of memory to a connected client or server.
Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including
1.0.1f and 1.0.2-beta1.
Thanks for Neel Mehta of Google Security for discovering this bug and to
Adam Langley and Bodo Moeller for
preparing the fix.
Affected users should upgrade to OpenSSL 1.0.1g. Users unable to immediately
upgrade can alternatively recompile OpenSSL with -DOPENSSL_NO_HEARTBEATS.
1.0.2 will be fixed in 1.0.2-beta2.
Source: https://www.openssl.org/news/secadv_20140407.txt
Here's what solved it for me:
Upgrade latest version OpenSSL on Ubuntu
Transcribing the main information:
Download the OpenSSL v1.0.0g source:
$ wget http://www.openssl.org/source/openssl-1.0.0g.tar.gz
Unpack the archive and install:
$ tar xzvf openssl-1.0.0g.tar.gz
$ cd openssl-1.0.0g
$ ./config
$ make
$ make test
$ sudo make install
All files, including binaries and man pages, are installed under the directory /usr/local/ssl. To ensure users use this version of OpenSSL instead of the previous one, you must update the paths for man pages and binaries.
Edit the file /etc/manpath.config adding the following line before the first MANPATH_MAP:
MANPATH_MAP /usr/local/ssl/bin /usr/local/ssl/man
Update the man database (I honestly can't remember whether this command was necessary; maybe try without it, and if, when testing at the end, the man pages are still the old versions, come back and run mandb):
sudo mandb
Edit the file /etc/environment and insert the path for OpenSSL binaries (/usr/local/ssl/bin) before the path for Ubuntu's version of OpenSSL (/usr/bin). My environment file looks like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/local/ssl/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
Logout and login and test:
$ openssl version
OpenSSL 1.0.0g 18 Jan 2012
Also test the man pages by running man openssl; at the very bottom, in the left-hand corner, it should report 1.0.0g.
Note that although users will now automatically use the new version of OpenSSL, existing programs (e.g. Apache) may not, as they are linked against the libraries from the Ubuntu version.
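A quick way to see which libssl/libcrypto a given program is actually wired to (a generic check; substitute the binary you care about):
$ ldd /path/to/program | grep -E 'libssl|libcrypto'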