GDB gives Segmentation Fault (gdb itself, not my program)

By the title, I mean it's not the program being debugged that segfaults; it's gdb itself.
So I'm just trying to create a breakpoint; when I press, gdb dies.
I'd like to believe this is not a bug but some confusion in the environment, because the Intel compiler (icc) has been updated on this computer.
The machine is accessed over two SSH sessions; it is a dual-socket computer with 2 x Intel Xeon X5650 @ 2.67 GHz (6 cores each, 12 total, 24 with hyper-threading), 48 GB of DRAM (NUMA), and 2 GPUs (a Tesla C2070 and a Tesla C2090, the latter not properly installed). It is running CentOS:
~$ cat /proc/version
Linux version 2.6.32-220.17.1.el6.x86_64 (mockbuild@c6b5.bsys.dev.centos.org)
(gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Wed May 16
00:01:37 BST 2012
Any ideas?
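Since the crash appeared right after the icc update, one hypothesis worth ruling out is that the compiler's environment script put its runtime libraries ahead of the system ones, so gdb loads an incompatible libstdc++ or libc. A rough diagnostic, assuming nothing beyond a standard Linux shell:

```shell
# Show which gdb runs and which shared libraries it would load;
# icc paths in the ldd output would suggest LD_LIBRARY_PATH is shadowing system libs.
command -v gdb && ldd "$(command -v gdb)" || true
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"

# Retry with a minimal environment; if gdb stops segfaulting here,
# the compiler's environment script is the likely culprit.
env -i HOME="$HOME" PATH=/usr/bin:/bin gdb --version || true
```

If the clean-environment run behaves, re-adding the icc setup piece by piece should pinpoint which variable breaks gdb.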

Related

Why does Abaqus not find my c++ compiler?

I need to use UMAT and other user subroutines in Abaqus.
I installed Abaqus 2020, Visual Studio 2019, Intel oneAPI Base Toolkit, and Intel oneAPI HPC Toolkit (in that order). After successfully linking the Fortran compiler (Intel Fortran Compiler 2021.4) with VS2019 (setting all the path variables and editing abaqus2020.bat and abaqus_v6.env), I started Abaqus Command (as admin) and ran abaqus verify -user .. and it PASSED right away.
My main problem is: if I run abaqus info=system, everything is fine except the C++ compiler.
C++ Compiler: Unable to locate or determine the version of a C++ compiler on this
system. If a C++ compiler is installed on this system, please load vcvars64.bat
file before running Abaqus
I tried:
different versions of VS
different OS
a different processor
calling vcvars64.bat in abaqus2020.bat
installing third-party C++ compilers (MinGW)
My current setup:
Processor: AMD Ryzen 5 3600
RAM: 32 GB DDR4 3200
Graphics: MSI Nvidia GeForce GTX 1660 Ti
OS: Windows 11
Linker Version: Microsoft Incremental Linker Version 14.29.30137.0
Fortran Compiler: Intel Fortran Compiler 2021.4 MPI MS-MPI 9.0.12497.11
Error Message
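The error text can be followed literally: make the Abaqus launcher call vcvars64.bat before anything else, so that cl.exe and the MSVC environment variables are visible to Abaqus when it probes for a C++ compiler. A sketch of the line to add near the top of abaqus2020.bat; the Visual Studio path here is an assumption and varies by edition and install location:

```
:: Hypothetical addition near the top of abaqus2020.bat; adjust the path to your VS edition
call "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"
```

If abaqus info=system then still fails, running vcvars64.bat manually in the same console and calling cl.exe confirms whether the MSVC toolset itself is installed.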

Docker Centos, Cannot Execute Binary File

I have a C++ binary which runs smoothly on my local CentOS machine. Recently I started learning Docker and tried to run my C++ application on a CentOS Docker container.
First, I pulled centos:latest from Docker Hub, installed my C++ application in it, and it ran successfully without any issue. Then I installed Docker on a Raspberry Pi, pulled centos again, and tried to run the same application on it, but it gave me an error:
bash: cannot execute binary file
Usually this error appears when you try to run an application on a different architecture than the one it was built for. I checked cat /etc/centos-release on the Raspberry Pi and the result is CentOS Linux release 7.6.1810 (AltArch), whereas the result on the local CentOS is CentOS Linux release 7.6.1810 (Core).
uname -a on both devices is as follows
raspberry-pi, centos docker Linux c475f349e7c2 4.14.79-v7+ #1159 SMP Sun Nov 4 17:50:20 GMT 2018 armv7l armv7l armv7l GNU/Linux
centos, centos docker Linux a57f3fc2c1a6 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
EDIT:
Also, file myapplication gives:
TTCHAIN: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/l, for GNU/Linux 2.6.24, BuildID[sha1]=287b501c8206893f7819f215ee0033586212b143, with debug_info, not stripped
My question is: how can I run the same native CentOS application, pulled from Docker, on a Raspberry Pi Model 3?
Your application has been built for x86-64. x86-64 binaries cannot run on an ARM processor.
You have two paths to pursue:
If you don't have source code for the application, you will need an x86-64 emulator that will run on your Raspberry Pi. Considering the Pi's lesser capabilities and Intel's proclivity to sue anyone who creates an emulator for their processors, I doubt you'll find one that's publicly available.
If you have the source code for the application, you need to rebuild it as a Raspberry Pi executable. You seem to know that it was written in C++. GCC and other toolchains are available for the Raspberry Pi (most likely a "yum install gcc" on your Pi will grab the compiler and tools for you). Building the application should be extremely similar to building it for x86_64.
You could find a cross-compiler that would let you build for the Pi from your x86_64 box, but that can get complicated.
Could be that you are trying to run a 64-bit binary on a 32-bit processor, would need more information to know for sure though.
You can check by using the file command in the shell. You may have to re-compile on the original system with the -m32 flag to gcc.
Please do a "uname -a" on both devices and post the results.
Most likely the processor or library type doesn't match.
I presume (hope) you're not trying to run an x86-compiled app on a Pi. Although Docker is available for both processor types, Docker will not run x86 binaries on Pi or vice versa.
Actually, AltArch currently means one of the following architectures... ppc64, ppc64le, i386, armhfp (arm v7 32-bit), aarch64 (arm v8 64-bit). Core suggests the mainstream x86 and x86_64 builds of CentOS.
Yep, I bet that's what it is: you can't just copy an x86 binary onto Raspbian and expect it to work. The application must be rebuilt for the platform.

Wrong gcc version linked with nvidia

I had gcc-5 and gcc-7 installed, and when I tried to compile a CUDA sample with make I got lots of errors. After some research I saw that I needed to downgrade my gcc; I thought the system was using gcc-7 instead of the other, so I uninstalled it using purge, but then gcc was not even recognized: gcc --version gave an error. So I purged the other gcc too and installed again with sudo apt-get update and sudo apt-get install build-essential. gcc --version now works, but my CUDA drivers aren't working anymore: nvidia-smi results in "command not found" and I can't run any CUDA sample, although now I can compile them. For example, deviceQuery returns:
cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
'nvcc --version' also works, here's the output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
Running 'lshw -numeric -C display' results in:
WARNING: you should run this program as super-user.
*-display
description: 3D controller
product: GM107M [GeForce GTX 950M] [10DE:139A]
vendor: NVIDIA Corporation [10DE]
physical id: 0
bus info: pci@0000:01:00.0
version: a2
width: 64 bits
clock: 33MHz
capabilities: bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:38 memory:f6000000-f6ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:e000(size=128) memory:f7000000-f707ffff
*-display
description: VGA compatible controller
product: 4th Gen Core Processor Integrated Graphics Controller [8086:416]
vendor: Intel Corporation [8086]
physical id: 2
bus info: pci@0000:00:02.0
version: 06
width: 64 bits
clock: 33MHz
capabilities: vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:34 memory:f7400000-f77fffff memory:d0000000-dfffffff ioport:f000(size=64) memory:c0000-dffff
WARNING: output may be incomplete or inaccurate, you should run this program as super-user.
I didn't change anything in my drivers, but reinstalling gcc broke them. How can I solve this?
Thanks
-- EDIT --
When I do locate nvidia-smi I get the following result:
/etc/alternatives/x86_64-linux-gnu_nvidia-smi.1.gz
/usr/bin/nvidia-smi
/usr/share/man/man1/nvidia-smi.1.gz
However, when I go into those directories there is no nvidia-smi executable under /usr/bin, and no nvidia-smi.1.gz under /usr/share/man/man1/.
Doing cat /proc/driver/nvidia/version I get:
NVRM version: NVIDIA UNIX x86_64 Kernel Module 384.111 Tue Dec 19 23:51:45 PST 2017
GCC version: gcc version 7.2.0 (Ubuntu 7.2.0-1ubuntu1~16.04)
It still shows the old gcc; I now have gcc-5, not 7.
I managed to solve this; it was actually very simple. I just had to reinstall my NVIDIA drivers:
sudo apt-get purge nvidia*
sudo apt-get update
sudo apt-get install nvidia-384
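After the reinstall, it's worth confirming that the kernel module and the user-space tools are back in agreement, since error 35 ("driver version is insufficient for CUDA runtime version") is exactly a mismatch between those two sides. A small check; the guards just keep it harmless on machines without an NVIDIA driver:

```shell
# Kernel-side driver version, if the module is loaded:
{ [ -r /proc/driver/nvidia/version ] && cat /proc/driver/nvidia/version; } || true

# User-space tool; after 'apt-get install nvidia-384' it should be back in /usr/bin:
command -v nvidia-smi >/dev/null && nvidia-smi || echo "nvidia-smi not on PATH"
```

Both should report the same driver version (384.111 here), and that version must be new enough for the installed CUDA runtime (9.0).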

qtcreator installation issue invalid encoding

I had previously downloaded these installation packages for Qt:
/home/star/Downloads/sandeep/Untitled Folder/qt-creator-opensource-linux-x86_64-4.2.1(1).run
/home/star/Downloads/sandeep/Untitled Folder/qt-opensource-linux-x64-5.8.0.run
/home/star/Downloads/sandeep/Untitled Folder/qt-unified-linux-x64-2.0.5-1-online.run
I clicked Properties and checked "Allow package to run".
But when I double-click on the .run file, a file with a garbled name ("invalid encoding") gets created and nothing executes.
Also, I guess my Linux is 32-bit, because the output of uname -a gives
Linux star-X555LAB 4.4.0-31-generic #50-Ubuntu SMP Wed Jul 13 00:06:14 UTC 2016 i686 i686 i686 GNU/Linux
and I guess my installables are 64-bit, as the names suggest; that may be the problem.
I searched the internet and found that this happens when you migrate from Windows to Linux, so I formatted my NTFS pen drive to ext and tried again; the same problem repeats.
What should I do now?
Also, do the current Qt installation packages support only one month of service, or is the free open-source license for Qt still valid? If so, what is the path to download the installers?
Yes, you are right: your OS is 32-bit (i686) and your Qt installer is 64-bit (x64). You may:
Run the binary from a terminal and see the output
Install a 64-bit OS
Install the 32-bit Qt toolset
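The mismatch the answer describes can be checked mechanically before downloading anything. A small sketch; the case labels simply map uname -m output to the matching installer flavor in Qt's file names:

```shell
# Map the machine architecture to the right Qt installer flavor.
arch=$(uname -m)
echo "machine: $arch"
case "$arch" in
  i?86)   echo "32-bit OS: use a linux-x86 (32-bit) installer" ;;
  x86_64) echo "64-bit OS: a linux-x64 installer is fine" ;;
  *)      echo "other architecture: $arch" ;;
esac
```

Running the .run file from a terminal (rather than double-clicking) also surfaces the real error message, as the answer's first option suggests.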

crosscompile c++ binary for Amazon EC2

I tried compiling on what appears to be a similar system (both Ubuntu 64-bit), but the binary is not runnable on the Amazon instance of Ubuntu (which is 64-bit too, though I don't know much more than that).
I've seen a thread suggesting spinning up an additional EC2 instance just to compile there, but that isn't a solution, as I can't transfer the sources outside; only compiled binaries and dynamic libs.
I was thinking about making a virtual environment on my computer to spawn a clone of EC2 and compile there, but is that doable?
kernel info:
uname -a
4.4.0-93-generic #116-Ubuntu SMP Fri Aug 11 21:17:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
ip-xxx-xxx-xxx-xxx 4.4.0-1035-aws #44-Ubuntu SMP Tue Sep 12 17:27:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
It seems to use some Amazon tailor-made kernel?
file info:
file ./testBinary
./testBinary: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), too many program (2304)
file -Pelf_phnum=3000 ./testBinary
./testBinary: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), corrupted program header size, corrupted section header size
You can't really 'clone' an EC2 instance that you've created from some AMI. So, since you don't have any details about why exactly your binary wasn't working, I would suggest running Amazon Linux instead of Ubuntu.
You can run Amazon Linux in a Docker container on your machine and build your library there (https://hub.docker.com/_/amazonlinux/). That way the library should run without problems in any EC2 with Amazon Linux.
If you want to stick with Ubuntu, at the very least you should match Ubuntu versions (not just architecture) and probably kernel versions.
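When two "similar" Ubuntu 64-bit boxes still disagree, the usual concrete difference is the C library: a binary linked against a newer glibc will refuse to start on an instance shipping an older one. Comparing both sides takes one command each; run this on the build machine and on the EC2 instance and compare the output (the guards keep it harmless on unusual systems):

```shell
# Architecture must match exactly (x86_64 on both, per the uname output above):
uname -m
# glibc on the target must be at least as new as on the build box:
ldd --version 2>/dev/null | head -n1 || true
# Distro release, to match Ubuntu versions as the answer recommends:
{ [ -r /etc/os-release ] && . /etc/os-release && echo "$PRETTY_NAME"; } || true
```

If the glibc versions differ, building inside a container that matches the target (the amazonlinux image above, or the matching ubuntu tag) sidesteps the problem.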