The Python configure.py script contains the line
gcc_linker_output = subprocess.check_output(['gcc', '-###', '/dev/null', '-o', 't'], stderr=subprocess.STDOUT).decode('utf-8')
The comments before this line indicate that ScyllaDB uses a custom dynamic linker and reference details about the ABI layout.
Is there code missing from the configure.py script that would enable building in a strict LLVM environment, or is that not possible at this time?
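For reference, clang's driver also accepts -###, so the probe itself seems reproducible by hand; on my box, something like the following should show which dynamic linker the driver would pass to ld (the exact output format may differ from gcc's):
clang++ -### /dev/null -o t 2>&1 | grep -i dynamic-linker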
I am building ScyllaDB on FreeBSD 13, which uses clang++ 13.0.0.
I am on branch master, commit 0efdc45d5981868b1b6, Sep 8, 2022.
I patched SCYLLA-VERSION-GEN to get past the date --utc and USAGE issues, and patched config.py with an entry to read ID from FreeBSD for the Boost error message.
I run configure.py with
./configure.py --mode=release --compiler=clang++ --cflags=-I/usr/local/include
In fact, ScyllaDB builds with clang. However, its dependency Seastar depends heavily on Linux. If you want it to run on FreeBSD, you'll have to port Seastar first (see reactor_backend.{cc,hh}).
I have an OCaml program that worked fine on Ubuntu 16, but when recompiled and run on Ubuntu 20 I get the following error:
$ ocamldebug ./linearizer
OCaml Debugger version 4.08.1
(ocd) r
Loading program... done.
Time: 89534
Program end.
Uncaught exception: Sys_error "Illegal seek"
(ocd) b
Time: 89533 - pc: 624888 - module Netaccel_link
No source file for Netaccel_link.
I thought this was due to missing dev libraries, but:
$ sudo apt install libocamlnet-ocaml-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
libocamlnet-ocaml-dev is already the newest version (4.1.6-1build6).
0 upgraded, 0 newly installed, 0 to remove and 20 not upgraded.
What setup step am I missing on Ubuntu 20?
This looks like a regression bug in libocamlnet. You should report an issue there or, since I am a bit pessimistic that you will get any response, you can try to debug the issue yourself.
The problem that you are facing has nothing to do with missing libraries (those would be reported during installation or, if the package were broken, would end up as linker errors). It may result, however, from some misconfiguration of the system. If that is true, then you're lucky, as you can fix it yourself.
I will give you some advice that might help you in debugging this issue. For more, please try discuss.ocaml.org as a more suitable medium (SO doesn't favor this kind of discussion and we might get deleted by the admins).
The Illegal seek exception is raised when a seek operation is applied to a non-regular file, i.e. the ESPIPE Unix error. So check your inputs: it could be that what was previously a regular file on Ubuntu is now a pipe or a socket.
Try using ltrace or strace to pinpoint the culprit, e.g.,
ltrace ./linearizer
or, if it overwhelms you, try strace
strace ./linearizer
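If even that is too noisy, narrowing the trace to the seek system calls should surface the ESPIPE directly (a sketch; strace writes its trace to stderr, and on a 64-bit system the relevant call is lseek):
strace -f -e trace=lseek ./linearizer 2>&1 | grep ESPIPE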
Instead of using ocamldebug you can use plain gdb. You can use gdb's interfaces to provide the path to the source code (though most likely it won't work since ocamlnet is not compiled with debug information). I believe that it will give you a more meaningful backtrace.
Instead of using the system installation try using opam. Install your dependencies with opam and try older versions as well as newer versions of the OCaml compiler. Also, try different versions of ocamlnet. Ideally, try to reproduce the environment that used to work for you.
When nothing else works, you can use objdump -d and look at the disassembly of your binary. OCaml uses a pretty readable and intuitive name-mangling scheme (<module_name>__<function_name>_<uid>), so you can easily find the source code (search for the <module_name>.ml file and look for <function_name> there).
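For example, assuming a native-code build of the program (a plain bytecode executable will not disassemble usefully), something along these lines locates the mangled symbols of the suspect module:
objdump -d ./linearizer | grep 'Netaccel_link__' | head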
Finally, just use docker or any other container to run your application. Consider switching from ocamlnet to something more modern and supported.
When trying to compile Fortran using PGI on Mac OS X Sierra, I get the error
ld: file not found: /usr/lib/crt1.o
I found a workaround for older Mac OS X versions (http://www.pgroup.com/userforum/viewtopic.php?t=4578)
sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/crt1.o /usr/lib/crt1.o
However, with Sierra, System Integrity Protection prevents writing to /usr/lib. How can I solve this problem?
I tried linking into /usr/local/bin/ (which is permitted), but then how can I make sure the compiler searches for the library in that path?
Installing just the Command Line Tools for Mac OS X solved the problem. Do this in your terminal:
xcode-select --install
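Afterwards you can verify which developer directory the tools point at (the path shown is just the typical location for a standalone Command Line Tools install):
xcode-select -p
# e.g. /Library/Developer/CommandLineTools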
For installing Lazarus on Mac OS X, this worked for me:
http://wiki.lazarus.freepascal.org/Installing_Lazarus_on_MacOS_X#Xcode_5.0.2B_compatibility_.28Mac_OS_X_10.8_and_10.9.29
Solution for command line programs:
The correct answer for me was as explained in this link:
https://medium.com/@kviat/free-pascal-3-0-2-linking-on-macos-sierra-c40706e86fda
After some googling I realized that most libraries were removed from /usr/lib in macOS Sierra. However, this case is handled in FPC, so we just need to set the internal compiler variable MacOSXVersionMin to 10.8 (or later). There is no standard compiler option for it, but after some searching in the source code I found the solution: set the environment variable MACOSX_DEPLOYMENT_TARGET.
You should set it to the deployment target of your macOS version:
MACOSX_DEPLOYMENT_TARGET=XX.XX   # for instance 10.15
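A minimal sketch of the whole step, assuming the Free Pascal compiler is on the PATH as fpc and hello.pas is your program (both names are just examples):
export MACOSX_DEPLOYMENT_TARGET=10.15
fpc hello.pas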
A general solution:
Link the necessary files to /usr/lib/crt*. As already stated, this linking is prohibited by macOS System Integrity Protection starting with 10.11, but there is still a way to accomplish it, and it solves the problem.
1) Reboot the Mac and hold down the Command + R keys simultaneously after you hear the startup chime; this will boot Mac OS X into Recovery Mode.
2) When the “macOS Utilities” / “OS X Utilities” screen appears, pull down the ‘Utilities’ menu at the top of the screen and choose “Terminal”.
3) Type the following command into the terminal then hit return:
csrutil disable; reboot
4) When you come back, run the command sudo mount -uw /
5) Just run the linking code you want to:
sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/crt1.o /usr/lib/crt1.o
sources: http://osxdaily.com/2015/10/05/disable-rootless-system-integrity-protection-mac-os-x/
https://www.reddit.com/r/MacOS/comments/caiue5/macos_catalina_readonly_file_system_with_sip/
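Once the link is in place, it is a good idea to turn System Integrity Protection back on: boot into Recovery Mode again, open Terminal, and run
csrutil enable; reboot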
In my case the problem was actually an error on the PGI installation side. PGI seems to be well aware that newer versions of macOS do not have the /usr/lib/crt1.o and that you can't create files there anymore. But it is possible to setup correct environment variables for the PGI compilers and then the linker should use the correct path to the crt1.o.
This configuration should be done automatically during the installation of the PGI compiler suite by running the makelocalrc command and should generate the file /opt/pgi/osx86-64/$PGIVER/bin/localrc. But in my case this step failed silently.
Reasons for failure seem to be:
license agreement for XCode not (yet) accepted, although this error should leave you with a /opt/pgi/osx86-64/$PGIVER/bin/localrc.error, containing some details
XCode version not supported, which seems to leave you with nothing. This is what I got when I ran the makelocalrc script manually:
makelocalrc -x /opt/pgi/osx86-64/19.10
Error: Unsupported XCode version 11
In my case (PGI 19.10, macOS 10.15, XCode 11.2.1) I manually patched the /opt/pgi/osx86-64/19.10/bin/makelocalrc to not error out on XCode 11:
if test $xcodever -gt 11 ; then # <-- was "-gt 10"!
echo " Error: Unsupported XCode version " $xcodever
exit -1
fi
and then re-ran the script after which compilation with PGI compilers (both pgcc and pgfortran) worked:
sudo /opt/pgi/osx86-64/2019/bin/makelocalrc -x /opt/pgi/osx86-64/19.10
Your case may vary, but you might want to check for a /opt/pgi/osx86-64/$PGIVER/bin/localrc.error or the /opt/pgi/osx86-64/$PGIVER/bin/localrc itself and try to manually (re-) generate it if it is not there or if you upgraded XCode/macOS since the installation of the PGI compilers.
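A quick way to check is (the glob is used because the version directory varies between installs):
ls -l /opt/pgi/osx86-64/*/bin/localrc* 2>/dev/null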
Is there a way to compile with MinGW in Code::Blocks on Windows so the binaries can be used on Ubuntu or CentOS distros?
I've tried compiling with the GNU GCC option and got the output files with a .o extension under the obj/Release/ folder.
When I run I get this error under my Vagrant Ubuntu machine:
-bash: ./main.o: cannot execute binary file
How can I compile it so it runs on my Linux machines?
The technical term for what you're trying to accomplish is cross-compilation. For that, you need to build a specific cross-compiler from the GCC sources. If you still want to keep MinGW, there is a page explaining the steps needed to create an ARM cross-compiler: http://www.mingw.org/wiki/HostedCrossCompilerHOWTO (you'll have to modify the target).
List of targets supported by GCC:
armv5te-android-gcc armv5te-linux-rvct armv5te-linux-gcc
armv5te-none-rvct
armv6-darwin-gcc armv6-linux-rvct armv6-linux-gcc
armv6-none-rvct
armv7-android-gcc armv7-darwin-gcc armv7-linux-rvct
armv7-linux-gcc armv7-none-rvct
mips32-linux-gcc
ppc32-darwin8-gcc ppc32-darwin9-gcc ppc32-linux-gcc
ppc64-darwin8-gcc ppc64-darwin9-gcc ppc64-linux-gcc
sparc-solaris-gcc
x86-android-gcc x86-darwin8-gcc x86-darwin8-icc
x86-darwin9-gcc x86-darwin9-icc x86-darwin10-gcc
x86-darwin11-gcc x86-darwin12-gcc x86-linux-gcc
x86-linux-icc x86-os2-gcc x86-solaris-gcc
x86-win32-gcc x86-win32-vs7 x86-win32-vs8
x86-win32-vs9
x86_64-darwin9-gcc x86_64-darwin10-gcc x86_64-darwin11-gcc
x86_64-darwin12-gcc x86_64-linux-gcc x86_64-linux-icc
x86_64-solaris-gcc x86_64-win64-gcc x86_64-win64-vs8
x86_64-win64-vs9
universal-darwin8-gcc universal-darwin9-gcc universal-darwin10-gcc
universal-darwin11-gcc universal-darwin12-gcc
generic-gnu
There is only one big caveat : since Windows is not POSIX compliant, I don't think you can use signals or pthreads.
Finally, brace yourself, because building a cross-compiler is a tedious task (lots of obscure bugs). That's why professional devs pay $$$ for "plug'n'play" solutions.
EDIT: the MXE project can be useful to you.
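Once a Linux-targeting cross toolchain is in place on the Windows side, the workflow is roughly the following (run from an MSYS or MinGW shell; the x86_64-linux-gnu- prefix is only an example and depends on the target triplet you configured):
x86_64-linux-gnu-gcc -o main main.c
file main    # should report an ELF executable rather than a PE/COFF one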
I have just downloaded clang 3.3 (homebrew) from the LLVM web page to my Mac (OS X 10.8.4), but get this compiler error when using -std=c++11 -stdlib=libc++:
In file included from /usr/include/c++/v1/string:434:
In file included from /usr/include/c++/v1/algorithm:594:
In file included from /usr/include/c++/v1/memory:590:
In file included from /usr/include/c++/v1/typeinfo:61:
/usr/include/c++/v1/exception:146:5: error: an attribute list cannot appear here
_LIBCPP_NORETURN friend void rethrow_exception(exception_ptr);
^~~~~~~~~~~~~~~~
/usr/include/c++/v1/__config:190:28: note: expanded from macro '_LIBCPP_NORETURN'
# define _LIBCPP_NORETURN [[noreturn]]
^~~~~~~~~~~~
It seems that I also need another libc++ (even though it was said to be 100% complete on Mac ...), but I cannot find any. Any help appreciated. Just for your info:
> clang++ -v
clang version 3.3 (tags/RELEASE_33/final)
Target: x86_64-apple-darwin12.4.0
Thread model: posix
And, yes, I googled it and found this: http://comments.gmane.org/gmane.comp.compilers.llvm.bugs/24138 claiming it's resolved in libc++ trunk ???
Okay, as suggested by Howard, I've downloaded the tip-of-trunk libc++ into /opt/local/share/libcxx, but have trouble building it. The manual says to cd libcxx/lib, export TRIPLE=-apple-, and run ./buildit. I presume this implies bash (I'm usually a tcsh user, so I moved my .tcshrc aside, got a new shell and started bash). I did that and the compilations worked, but the library build failed. Apparently ./buildit doesn't see $TRIPLE=-apple-, as it picks the wrong LDSHARED_FLAG (not the one on line 81, but the one on line 103, which is used if $TRIPLE is not set), even though echo $TRIPLE yields -apple- as it should. When I add the statement echo TRIPLE = $TRIPLE at the top of buildit, it reports nothing. How come? What is wrong here?
The failure was that, because the wrong LDSHARED_FLAG was picked, the linking didn't work (ld complained about the unknown option -soname, which, I think, makes sense under Linux). I don't know why buildit (a #!/bin/sh file) didn't pick up the TRIPLE environment variable (it did pick up several unwanted ones such as CXX and CC). I simply added TRIPLE=-apple- at the top of that file and it did build the library. However, the linker spat out several warnings, all of the form
ld: warning: direct access in ___cxa_bad_typeid to global weak symbol typeinfo for std::bad_typeid means the weak symbol cannot be overridden at runtime. This was likely caused by different translation units being compiled with different visibility settings.
But most importantly, it works (the compilation at least; I have yet to test the library). I have one final question. The advice was to use -I and -L to tell the compiler about the whereabouts of this version. Is it not possible to put it into the usual place, /usr/include/c++/v1/? Note that Xcode has its version somewhere else anyway, and I had put in a symbolic link (/usr/include/c++/v1/) to that one to get my homebrew clang 3.2 working (after some Xcode update). What about the library? Can I also put it in a standard place?
Here is the home page of libc++:
http://libcxx.llvm.org
You can download the tip-of-trunk libc++ from there. You can tell clang to point to your download with -nostdinc++ -I<path-to-libc++>/include. You can also tell clang to link to your tip-of-trunk libc++ with -L<path-to-libc++>/lib and export DYLD_LIBRARY_PATH=<path-to-libcxx>/lib. The directions are all on the libc++ home page.
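Putting those pieces together, a typical invocation looks like this (the <path-to-libcxx> placeholder stands for wherever you unpacked it, and test.cpp is just an example source file):
export DYLD_LIBRARY_PATH=<path-to-libcxx>/lib
clang++ -std=c++11 -stdlib=libc++ -nostdinc++ -I<path-to-libcxx>/include -L<path-to-libcxx>/lib test.cpp -o test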
Xcode is the easiest way to get clang + libc++. But if you want the very latest, this is the place to go.
Congratulations!
Don't worry about the ld warning. It is a harmless ld bug that will be fixed in a future release. I see it on 10.8.4 too and it doesn't hurt anything.
The libc++ headers no longer live at /usr/include/c++/v1. Xcode has migrated them into itself. Having libc++ headers at /usr/include/c++/v1 from older installs has been a source of confusion and bugs. I regularly use -nostdinc++ -I to point to the libc++ headers I want (I often have several versions going at the same time), and that works well for me.
It is possible for you to replace your /usr/lib/libc++.1.dylib with the one you have built. I do not recommend doing this. I sometimes have to in order to do a proper test, but I always do so very carefully, because sometimes it forces me to reboot onto a backup disk and restore my /usr/lib to its original state. If you do go this route, it is a very good idea to keep a backup of the original /usr/lib/libc++.1.dylib handy.
I recommend instead -L on the command line, and export DYLD_LIBRARY_PATH=<path-to-libcxx>/lib in the shell. More than one person (including myself) has gotten their computer into a really nasty place by not following this advice.
If you run testit (under test/), all you need is DYLD_LIBRARY_PATH in that shell. The testit script is set up to point to the right places without an install.
Also I recommend figuring out why you had to modify buildit. No one else is seeing that behavior. printenv on your command line may help in this endeavor.
libc++ is updated often. We try to keep tip-of-trunk always in a shippable state.
I'm working on RHEL WS 4.5.
I've obtained the glibc source rpm matching this system, opened it to get its contents using rpm2cpio.
Working in that tree, I've created a patch to mtrace.c (I want to add more stack backtrace levels), incorporated it in the spec file, and created a new set of RPMs including the debuginfo RPMs.
I installed all of these on a test vm (created from the same RH base image) and can confirm that my changes are included.
But with more complex executions, I crash in mtrace.c ... but gdb can't find the debug information so I don't get line number info and I can't actually debug the failure.
Based on dates, I think I can confirm that the debug information is installed on the test system in /usr/src/debug/glibc-2.3.6/
I tried
sharedlibrary libc*
in gdb and it tells me the symbols are already loaded.
My test includes a locally built python and full symbols are found for python.
My sense is that perhaps glibc isn't being built under rpmbuild with debug enabled. I've reviewed the glibc.spec file and even built with
_enable_debug_packages
defined as 1 which looked like it might influence the result. My review of the configure scripts invoked during the rpmbuild build step didn't give me any hints.
Hmmmm .. just found /usr/lib/debug/lib/libc-2.3.4.so.debug
and /usr/lib/debug/lib/tls/i486/libc-2.3.4.so.debug
but both of these are reported as stripped by the file command.
It appears that you are installing non-matching RPMs:
/usr/src/debug/glibc-2.3.6
just found /usr/lib/debug/lib/libc-2.3.4.so.debug
These are not for the same version; there is no way they came from the same -debuginfo RPM.
both of these are reported as stripped by the file command.
These should not show as stripped. Either they were not built correctly, or your strip is busted.
Also note that you don't actually have to get all of this working to debug your problem. In the rpmbuild directory, you should be able to find the glibc build directory with a full-debug libc.so.6. Just copy that library into your VM, and you won't have to worry about the debuginfo RPM.
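Roughly, that route looks like this (paths and names are examples, not verified on RHEL 4; the rpmbuild tree there typically lives under /usr/src/redhat, and the unstripped library in the glibc build tree is named libc.so, installed later as libc.so.6):
# on the build host: copy the unstripped library out of the build tree
# (create /tmp/debug-libc on the VM first)
scp /usr/src/redhat/BUILD/glibc-2.3*/build-*/libc.so testvm:/tmp/debug-libc/libc.so.6
# on the VM: run the failing test against that copy; this only works because it
# is the same glibc version as the system one, just with debug info
LD_LIBRARY_PATH=/tmp/debug-libc gdb ./mytest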
Try verifying that debug info for mtrace.c is indeed present. First see if the separate debug info for GLIBC knows about a compilation unit called mtrace.c:
$ eu-readelf -w /usr/lib/debug/lib64/libc-2.15.so.debug > t
$ grep mtrace t
name (strp) "mtrace.c"
name (strp) "mtrace"
1 0 0 0 mtrace.c
[10480] "mtrace.c"
[104bb] "mtrace"
[5052] symbol: mtrace, CUs: 446
Then see if GDB actually finds the source file from the glibc-debuginfo RPM:
(gdb) set pagination off
(gdb) start # pause your test program right after main()
(gdb) set logging on
Copying output to gdb.txt.
(gdb) info sources
Quit GDB then grep for mtrace in gdb.txt and you should find something like /usr/src/debug/glibc-2.15-a316c1f/malloc/mtrace.c
This works with GDB 7.4. I'm not sure the GDB version shipped with RHEL 4.5 supports all the commands used above. Building upstream GDB from source is in fact easier than building Python, though.
When trying to add stack traces to mtrace, make sure you don't call malloc() directly or indirectly from the GLIBC malloc hooks.