ld: file not found: /usr/lib/crt1.o - c++

When trying to compile Fortran using PGI on Mac OS X Sierra, I get the error
ld: file not found: /usr/lib/crt1.o
I found a workaround for older Mac OS X versions (http://www.pgroup.com/userforum/viewtopic.php?t=4578)
sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/crt1.o /usr/lib/crt1.o
However, with Sierra, System Integrity Protection prevents writing to /usr/lib. How can I solve this problem?
I tried linking into /usr/local/bin/ (which is permitted), but then how can I make sure the compiler searches for the library in that path?

Installing just the Command Line Tools for Mac OS X solved the problem. Do this in your terminal:
xcode-select --install
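If the tools were already present, you can check what the build will use (xcode-select -p is part of the standard Xcode tooling; the path shown varies by setup):
xcode-select -p                 # prints the active developer directory
ls /usr/lib/crt1.o 2>/dev/null  # the file the linker was missing; present again after the CLT install on these older systems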

Installing Lazarus on MacOS X:
The following guide worked for me:
http://wiki.lazarus.freepascal.org/Installing_Lazarus_on_MacOS_X#Xcode_5.0.2B_compatibility_.28Mac_OS_X_10.8_and_10.9.29

Solution for command line programs:
The correct answer for me was as explained in this link:
https://medium.com/@kviat/free-pascal-3-0-2-linking-on-macos-sierra-c40706e86fda
After some googling I realized that most libraries were removed from
/usr/lib in macOS Sierra. However, this case is handled in FPC, so we
just need to set the internal compiler variable MacOSXVersionMin to 10.8
(or later). There is no standard compiler option for it, but after
some searching in the source code I found the solution: set the
environment variable MACOSX_DEPLOYMENT_TARGET:
Set it to the deployment target of your macOS (note: no space around the =):
export MACOSX_DEPLOYMENT_TARGET=XX.XX   # for instance 10.15
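For example, a minimal end-to-end check, assuming the Free Pascal compiler (fpc) from the linked article and a trivial hello.pas:
export MACOSX_DEPLOYMENT_TARGET=10.8   # or whatever your SDK supports
fpc hello.pas                          # should now link without the crt1.o error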
General solution:
Link the necessary files into /usr/lib/crt*. As already stated, this linking is prohibited by System Integrity Protection, introduced in OS X 10.11 (El Capitan). But there is still a way to accomplish this linking procedure, and it solves the problem.
1) Reboot the Mac and hold down the Command + R keys after you hear the startup chime; this boots Mac OS X into Recovery Mode.
2) When the “macOS Utilities” / “OS X Utilities” screen appears, pull down the ‘Utilities’ menu at the top of the screen and choose “Terminal”.
3) Type the following command into the terminal, then hit Return:
csrutil disable; reboot
4) When the Mac comes back up, run sudo mount -uw / (on Catalina and later the system volume is mounted read-only).
5) Run the linking command you want, e.g.:
sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/crt1.o /usr/lib/crt1.o
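Once the link exists, SIP can be re-enabled the same way (this simply undoes step 3; csrutil enable is the documented counterpart of csrutil disable):
csrutil enable; reboot   # run from the Recovery Mode terminal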
sources: http://osxdaily.com/2015/10/05/disable-rootless-system-integrity-protection-mac-os-x/
https://www.reddit.com/r/MacOS/comments/caiue5/macos_catalina_readonly_file_system_with_sip/

In my case the problem was actually an error on the PGI installation side. PGI seems to be well aware that newer versions of macOS do not have /usr/lib/crt1.o and that you can't create files there anymore. But it is possible to set up the correct environment variables for the PGI compilers, and the linker will then use the correct path to crt1.o.
This configuration should be done automatically during the installation of the PGI compiler suite by running the makelocalrc command, which should generate the file /opt/pgi/osx86-64/$PGIVER/bin/localrc. But in my case this step failed silently.
Reasons for failure seem to be:
1) The license agreement for Xcode has not (yet) been accepted; this error should leave you with a /opt/pgi/osx86-64/$PGIVER/bin/localrc.error containing some details.
2) The Xcode version is not supported, which seems to leave you with nothing. This is what I got when I ran the makelocalrc script manually:
makelocalrc -x /opt/pgi/osx86-64/19.10
Error: Unsupported XCode version 11
In my case (PGI 19.10, macOS 10.15, XCode 11.2.1) I manually patched the /opt/pgi/osx86-64/19.10/bin/makelocalrc to not error out on XCode 11:
if test $xcodever -gt 11 ; then   # <-- was "-gt 10"!
    echo " Error: Unsupported XCode version " $xcodever
    exit -1
fi
and then re-ran the script, after which compilation with the PGI compilers (both pgcc and pgfortran) worked:
sudo /opt/pgi/osx86-64/2019/bin/makelocalrc -x /opt/pgi/osx86-64/19.10
Your case may vary, but you might want to check for a /opt/pgi/osx86-64/$PGIVER/bin/localrc.error or the /opt/pgi/osx86-64/$PGIVER/bin/localrc itself, and try to manually (re-)generate it if it is missing or if you upgraded Xcode/macOS since installing the PGI compilers.
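A quick way to check, using the paths from this answer ($PGIVER stands for your PGI version, e.g. 19.10):
ls /opt/pgi/osx86-64/$PGIVER/bin/localrc*   # is localrc there? is there a localrc.error?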

Related

Is the scylladb build hardcoded to work only with gnu gcc?

The Python script configure.py contains the line
gcc_linker_output = subprocess.check_output(['gcc', '-###', '/dev/null', '-o', 't'], stderr=subprocess.STDOUT).decode('utf-8')
The comments before this line indicate scylladb uses a custom dynamic linker and references details about the ABI layout.
Is there code missing from the configure.py script which would enable building on a strict llvm environment, or is that not possible at this time?
I am building Scylladb on FreeBSD 13 which uses clang++ 13.0.0.
I am on branch master, commit 0efdc45d5981868b1b6, Sep 8, 2022.
I patched SCYLLA-VERSION-GEN to get past the date --utc and USAGE issues, and patched config.py with an entry to read the ID for freebsd for the Boost error message.
I run configure.py with
./configure.py --mode=release --compiler=clang++ --cflags=-I/usr/local/include
In fact, ScyllaDB builds with clang. However, its dependency Seastar is heavily tied to Linux. If you want it to run on FreeBSD, you'll have to port Seastar first (see reactor_backend.{cc,hh}).
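As a side note on the probe itself: the -### driver flag is understood by clang as well as gcc, so the equivalent of the line quoted above can be run by hand to see what configure.py would parse (illustrative command):
clang++ -### /dev/null -o t   # prints the driver's link job instead of executing it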

Breakpad Client not generated when cross-compiling

I'm trying to cross-compile Google Breakpad. I'm executing the following commands:
$ ./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm
$ make
$ make install
It generates and installs some files in the prefix path. The include directory contains:
|-common
|-google_breakpad
|-processor
but it should have:
|-client
|-common
|-google_breakpad
|-processor
|-third_party
It seems to be a problem related to the Breakpad client. What is the right way to cross-compile Breakpad?
My host is Ubuntu 18.04 x86-64; the target is 32-bit ARM.
I have reproduced your problem on my side. In fact, the issue is related to the --host compilation flag.
Breakpad documentation shows that:
when building on Linux it will also build the client libraries.
So, in order to get the client binaries and headers, you should use the correct compiler prefix.
For example if you are using the GNU cross compiler arm-linux-gnueabihf-gcc, the --host flag value should be arm-linux-gnueabihf.
In your case (arm-linux-gcc), try changing your configure command as follows:
./configure --prefix=/opt/breakpad CFLAGS="-Os" CC=PATH_ARM_COMPILER/arm-linux-gcc CXX=PATH_ARM_COMPILER/arm-linux-g++ --host=arm-linux
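After rebuilding with the corrected --host value, a quick sanity check (the exact layout may differ between Breakpad revisions; headers usually land under $prefix/include/breakpad):
ls /opt/breakpad/include/breakpad   # client/ should now show up alongside common/, processor/, ...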

Automatic instrumentation with Score-P / Vampirtrace not working with gcc/g++

I have a simple helloworld.cpp to instrument with Score-P or Vampirtrace.
Installation of the performance/tracing tools works fine. After compiling and running:
# score-p
scorep-g++ helloworld.cpp -o hello
export SCOREP_ENABLE_TRACING=true
export SCOREP_ENABLE_PROFILING=true
# vampirtrace
vtcxx -DVTRACE helloworld.cpp -o hello
# run
./hello
The created OTF files (OTF for VampirTrace, OTF2 for Score-P) are more or less empty (no timeline data). I'm using Vampir to visualize the data.
More details:
I'm testing on Mac OS X (g++-8) and Xubuntu (g++-7; VirtualBox).
On Mac OS X I installed GCC via brew install gcc.
For the instrumented Score-P version I also got a warning
[Score-P] src/measurement/profiling/scorep_profile_callpath.c:206: Warning: Master thread contains no regions.
but I can't find any related issues or help.
I also installed TAU and PDT for VampirTrace, but nothing changed. By the way, manual instrumentation works for VampirTrace:
#include "vt_user.h"
...
VT_TRACER("name");
For VampirTrace I also tested OpenMP instrumentation, and that worked, but only that (no application tracing around it).
For both environments I did not install Open MPI.
It would be great if somebody who has run into similar issues could help.
PS: Later I want to instrument an application that uses Poco::Threads. I have only read about partial support for POSIX threads.
Update
The problem is g++. I tried the same instrumentation with Intel icc and it worked.
The missing instrumentation seen with g++ can also be reproduced with icc if you add the --nocompiler parameter, which disables compiler instrumentation:
scorep --nocompiler icc helloworld.cpp -o hello
Update
I had to install some missing packages; the ./configure log output contains hints about them. One of the following packages solved it:
apt-get install llvm libwrap0-dev libclang-dev gcc-7-plugin-dev
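To verify which instrumentation methods a Score-P installation actually supports, the scorep-info tool that ships with Score-P can be consulted (the exact wording of the summary may differ between versions):
scorep-info config-summary | grep -i instrumentation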

How to compile with Code::Blocks/MinGW on Windows for Ubuntu or CentOS

Is there a way to compile with MinGW in Code::Blocks on Windows so the binaries can be used on Ubuntu or CentOS distros?
I've tried compiling with the GNU GCC option and got the output file with a .o extension under the obj/Release/ folder.
When I run I get this error under my Vagrant Ubuntu machine:
-bash: ./main.o: cannot execute binary file
How can I compile it so it runs on my Linux machines?
The technical term for what you're trying to accomplish is cross-compilation. For that, you need to build a specific cross-compiler from the GCC sources. If you still want to keep MinGW, there is a page explaining the steps needed to create an ARM cross-compiler: http://www.mingw.org/wiki/HostedCrossCompilerHOWTO (you'll have to modify the target).
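Incidentally, the error you saw is expected: main.o is an unlinked object file, and it targets Windows besides. The file utility (standard on Ubuntu) makes both problems visible and also confirms when a cross-build has produced a proper Linux binary (file names here are illustrative):
file main.o   # reports an object file for Windows, not a Linux executable
file ./main   # a successful cross-build should report something like "ELF ... executable"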
List of targets supported by GCC :
armv5te-android-gcc armv5te-linux-rvct armv5te-linux-gcc
armv5te-none-rvct
armv6-darwin-gcc armv6-linux-rvct armv6-linux-gcc
armv6-none-rvct
armv7-android-gcc armv7-darwin-gcc armv7-linux-rvct
armv7-linux-gcc armv7-none-rvct
mips32-linux-gcc
ppc32-darwin8-gcc ppc32-darwin9-gcc ppc32-linux-gcc
ppc64-darwin8-gcc ppc64-darwin9-gcc ppc64-linux-gcc
sparc-solaris-gcc
x86-android-gcc x86-darwin8-gcc x86-darwin8-icc
x86-darwin9-gcc x86-darwin9-icc x86-darwin10-gcc
x86-darwin11-gcc x86-darwin12-gcc x86-linux-gcc
x86-linux-icc x86-os2-gcc x86-solaris-gcc
x86-win32-gcc x86-win32-vs7 x86-win32-vs8
x86-win32-vs9
x86_64-darwin9-gcc x86_64-darwin10-gcc x86_64-darwin11-gcc
x86_64-darwin12-gcc x86_64-linux-gcc x86_64-linux-icc
x86_64-solaris-gcc x86_64-win64-gcc x86_64-win64-vs8
x86_64-win64-vs9
universal-darwin8-gcc universal-darwin9-gcc universal-darwin10-gcc
universal-darwin11-gcc universal-darwin12-gcc
generic-gnu
There is only one big caveat: since Windows is not POSIX compliant, I don't think you can use signals or pthreads.
Finally, brace yourself, because building a cross-compiler is a tedious task (lots of obscure bugs). That's why professional devs pay $$$ for "plug'n'play" solutions.
EDIT: the MXE project can be useful to you.

How to permanently override HOMEBREW_CC and HOMEBREW_CXX settings?

Since I installed gcc-49 on my Mac, Homebrew can't find the C++ compiler anymore. It always fails with error messages like:
configure: error: C++ preprocessor "/lib/cpp" fails sanity check
Running "brew upgrade -v" spits out this:
...
==> ENV
HOMEBREW_CC: llvm-gcc
HOMEBREW_CXX: llvm-g++
...
I have no idea why Homebrew wants to use these compilers. Why can't it use the normal CC/CXX environment variables like everything else?
I already found that by editing the formula directly, as described in Using Homebrew with alternate GCC, I can change HOMEBREW_CXX to use /usr/local/bin/g++ for example, which makes compiling formulas that need C++ work again.
But I don't want to edit every single formula by hand for the rest of my days. How can I change this HOMEBREW_CXX environment variable permanently? I tried setting it in my .bash_profile and running "export HOMEBREW_CXX=..." in the console, and neither works; only editing the formula directly does.
Does anyone have an idea?
A poor man's solution, to be sure, but this works: put an alias in your .bashrc or .bash_profile:
alias brew='HOMEBREW_CC=gcc-4.8 HOMEBREW_CXX=g++-4.8 brew'
Now, whenever you use brew, it will use the compilers you want. Check that it works by doing:
brew --env
HOMEBREW_CC: gcc-4.8
HOMEBREW_CXX: g++-4.8
...
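If you'd rather not shadow the brew command with an alias, the same variables can also be passed for a single invocation (the formula name here is just an example):
HOMEBREW_CC=gcc-4.8 HOMEBREW_CXX=g++-4.8 brew install cmake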
HTH