How can I enable core dumps in macOS - C++

The following code should cause a core dump:
#include <cstdio>
int main() {
    printf("%d", *((int*)1));
}
However, when I run it on my Mac, no core dump is generated in /cores:
calvin@CalvinPC test % ./a
zsh: segmentation fault ./a
calvin@CalvinPC test % ls -a /cores
. ..
However, I have already set ulimit -c unlimited and run sudo sysctl kern.coredump=1.
So, how can I get a core dump on macOS?
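A checklist that usually resolves this (a sketch based on default macOS settings; the sysctl names are standard, but the /cores permissions fix is an assumption about what is wrong on this particular machine):
ulimit -c unlimited              # per-shell limit on core file size
sudo sysctl kern.coredump=1      # enable core dumps system-wide
sysctl kern.corefile             # default pattern is /cores/core.%P
ls -ld /cores                    # directory must exist and be writable by you
sudo chmod 1777 /cores           # assumption: loosen permissions if it is not
./a                              # crash again, then check /cores for core.<pid>
Note that ulimit -c applies only to the shell where you set it, so it must be run in the same terminal session that launches the crashing program.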

Related

LLDB is not launching a Clang++ compiled program

I am trying to launch a debugging session for Clang-compiled code via lldb. I'm using WSL Ubuntu 20.04 LTS. I installed clang and lldb via sudo apt-get install clang and sudo apt-get install lldb, respectively.
The test code (mytest.cpp) is the following:
#include <iostream>
int main()
{
    std::cout << "TEST" << std::endl;
    return 0;
}
Compilation command: clang++ -g -std=c++17 -o mytest mytest.cpp
Then I launch the debugger:
lldb mytest
(lldb) target create "mytest"
Current executable set to '/home/adzol/Projects/mytest' (x86_64).
(lldb) r
Process 51 launched: '/home/adzol/Projects/mytest' (x86_64)
And that's it. Nothing happens. What could be wrong here?
But if I call my executable file directly, I get the expected console output:
./mytest
TEST
I found out that the problem was WSL 1. I updated my WSL to WSL 2 and it all works; see the commands below.
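To check which WSL version a distribution is running, and to upgrade it, the following commands (run from Windows, not from inside the distribution) should work; the distribution name below is an assumption, so check the output of the list command for yours:
wsl -l -v                           # list distributions with their WSL version
wsl --set-version Ubuntu-20.04 2    # convert the named distribution to WSL 2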

Different behavior when running a program compiled with g++ in Docker

The behavior of the executable differs depending on whether it runs inside Docker or on the host, but only when we change the optimization level of g++.
Compiler:
g++ (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
I am trying to execute the following code:
#include <cstdio>
#include <cstring>
int main()
{
    int nOrd = 3395;
    char cOrd[] = "003395";
    char cAux2[256];
    strcpy(cAux2, cOrd);
    int nRest = nOrd % 26;
    printf("BEFORE SPRINTF %s\n\n\n", cAux2);
    sprintf(cAux2, "%s%c", cAux2, (nRest+65));
    printf("AFTER SPRINTF %s\n\n\n", cAux2);
    return 0;
}
If I compile with:
g++ -o FastCompile FastCompile.c -DNDEBUG -Os
and run it on the host, the output is as expected:
BEFORE SPRINTF 003395
AFTER SPRINTF 003395P
If I create an image with this executable and run it inside Docker, I get:
Docker version 18.09.4, build d14af54266
Dockerfile:
FROM debian
RUN apt-get update && apt-get install -y \
libssl-dev
COPY fast/ /usr/local/
ENTRYPOINT ["usr/local/FastCompile"]
$docker build -t fastcompile .
$docker run fastcompile
BEFORE SPRINTF 003395
AFTER SPRINTF P
If I remove the -Os and re-compile with:
g++ -o FastCompile FastCompile.c -DNDEBUG
the behavior is correct inside Docker.
So, is it a Docker problem, or is it expected behavior?
Your code has undefined behavior.
sprintf(cAux2, "%s%c", cAux2, (nRest+65));
reads from and writes to the same object: cAux2 is both the destination buffer and a source argument, which sprintf does not allow. That it happens to work on the host is just one possible outcome of undefined behavior; the optimizer is free to change it. To fix it, use cOrd in the call so you are not reading from the destination buffer:
sprintf(cAux2, "%s%c", cOrd, (nRest+65));
Also note that (nRest+65) gives you an int, not a char. For %c this happens to be fine, because variadic arguments are promoted to int anyway and printf converts the value back to unsigned char, but an explicit cast makes the intent clearer:
sprintf(cAux2, "%s%c", cOrd, char(nRest+65));
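Putting it together, a minimal corrected sketch (same variable names as the question, with snprintf substituted to bound the write; that substitution is my own suggestion, not part of the original answer):
#include <cstdio>
int main()
{
    int nOrd = 3395;
    char cOrd[] = "003395";
    char cAux2[256];
    int nRest = nOrd % 26;
    // Source (cOrd) and destination (cAux2) no longer overlap, and
    // snprintf bounds the write to the size of the destination buffer.
    snprintf(cAux2, sizeof cAux2, "%s%c", cOrd, nRest + 65);
    printf("AFTER SNPRINTF %s\n", cAux2);  // prints: AFTER SNPRINTF 003395P
    return 0;
}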

OpenMP/MPI executable error with segmentation fault

I'm trying to compile a large model using ifort with the -qopenmp flag:
FC = mpif90
FCFLAGS = -g -m64 -qopenmp -O3 -xHost -fp-model precise -convert big_endian -traceback -r8
FCDEFS = BLAS LITTLE LINUX INTEGER_IS_INT
LFLAGS = -qopenmp
CC = mpicc
CFLAGS = -g -O3
CCDEFS = BLAS LITTLE LINUX INTEGER_IS_INT _ABI64
OMP_NUM_THREADS=2
OMP_STACKSIZE=1000M
OMP_SCHEDULE=STATIC
ulimit -s unlimited
mpprun -n 192 master.exe -e "exp1" -f d1 -t 2700
However, when I try to run the model, I get:
mpprun info: Starting impi run on 12 node ( 192 rank X 1 th ) for 22
==================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 19619 RUNNING AT n457
= EXIT CODE: 11
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================
APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault
(signal 11)
mpprun info: Job terminated with error
Now the thing is, if I compile this model without the OpenMP flag and run it under TotalView, there are no errors and the model runs to completion.
I'm trying to find a way to track down what is going wrong. Does anyone have any ideas? Where do I start? How can I run simple tests to see why the OpenMP build exits with a segmentation fault?
Appreciate the help.
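There is no accepted answer here, but a common first suspect for OpenMP segfaults in Fortran codes is thread stack exhaustion: ulimit -s only affects the main thread, and the OMP_* variables must actually be exported to reach the MPI ranks. A sketch of a minimal debugging pass (the flags are standard ifort options; the reduced rank count is an assumption for a small test case):
# export, so the settings reach processes launched by the MPI runner
export OMP_NUM_THREADS=2
export OMP_STACKSIZE=1000M
ulimit -s unlimited

# rebuild with optimization off and runtime checking on
# (substitute the model's own sources and build system as appropriate)
mpif90 -g -O0 -qopenmp -check all -traceback -r8 ... -o master.exe

# run a small case first
mpprun -n 12 master.exe -e "exp1" -f d1 -t 2700
If the traceback then points into a parallel region with large automatic arrays, increasing OMP_STACKSIZE further (or making those arrays allocatable) is the usual fix.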

How to run valgrind with a basic C example?

Installation:
bzip2 -d valgrind-3.10.1.tar.bz2
tar -xf valgrind-3.10.1.tar
then:
./configure
make
make install
or, more simply:
sudo apt-get install valgrind
How do I run valgrind on this simple program, example1.c?
#include <stdlib.h>
int main()
{
    char *x = malloc(100); /* or, in C++, "char *x = new char[100];" */
    return 0;
}
Run:
valgrind --tool=memcheck --leak-check=yes example1
Output from console:
valgrind: example1: command not found
It looks good. You only need to add ./ before your executable. Without it, valgrind searches your PATH, fails to find the program, and reports 'command not found':
valgrind --tool=memcheck --leak-check=yes ./example1
First, compile your C program (-g is extremely important; without debug info in the executable, valgrind can tell you neither the source line numbers where the violations occur nor the lines where the violated memory was allocated):
gcc -g example1.c -o example1
Then run valgrind on the executable:
valgrind --tool=memcheck --leak-check=yes ./example1
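For reference, memcheck should flag the 100 bytes from malloc as a leak because the program never frees them; the leak disappears in this minimal fixed sketch:
#include <stdlib.h>
int main(void)
{
    char *x = malloc(100); /* heap allocation that memcheck tracks */
    free(x);               /* released before exit, so no leak is reported */
    return 0;
}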

"sh: ./<file> not found" error when trying to execute a file

I've come across one of the weirdest problems I've ever met. I'm cross-compiling an app for an ARM CPU running Linux. I'm using buildroot, and all goes well until I try to run the application on the target: I get -sh: ./hw: not found. E.g.:
$ cat /tmp/test.cpp
#include <cstdio>
#include <vector>
int main(int argc, char** argv){
    printf("Hello Kitty!\n");
    return 0;
}
$ ./arm-linux-g++ -march=armv7-a /tmp/test.cpp -o /tftpboot/hw
Then I load the executable onto the target and issue, on the target:
# ./hw
-sh: ./hw: Permission denied
# chmod +x ./hw
# ./hw
-sh: ./hw: not found
# ls -l ./hw
-rwxr-xr-x 1 root root 6103 Jan 1 03:40 ./hw
There's more to it: when building with the distro compiler, e.g. arm-linux-gnueabi-g++ -march=armv7-a /tmp/test.cpp -o /tftpboot/hw, the app runs fine!
I compared the executables with readelf -a -W /tftpboot/hw but didn't notice much difference. I pasted both outputs here. The only thing I noticed are the lines Version5 EABI, soft-float ABI vs. Version5 EABI. I tried removing the difference by passing either of -mfloat-abi=softfp and -mfloat-abi=soft, but the compiler seems to ignore them. I suppose this doesn't really matter, though, since the compiler doesn't even warn.
I also thought that perhaps sh prints this error when an executable is incompatible in some way. But on my host PC I see a different error in that case, e.g.:
$ sh /tftpboot/hw
/tftpboot/hw: 1: /tftpboot/hw: Syntax error: word unexpected (expecting ")")
sh prints this weird error because it is trying to run your program as a shell script!
Your ./hw: not found error is probably caused by the dynamic linker (AKA ELF interpreter) not being found. Try compiling as a static program with -static, or running it with your dynamic loader explicitly: # /lib/ld-linux.so.2 ./hw or something like that.
If the problem is that the dynamic loader is named differently in your tool-chain and in your runtime environment, you can fix it on either side:
In the runtime environment: with a symbolic link.
In the tool-chain: use -Wl,--dynamic-linker=/lib/ld-linux.so.2
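A quick way to see which loader a binary actually requests (readelf -l is standard binutils; the armhf path in the sample output is just an illustration, yours may differ):
# print the ELF interpreter recorded in the program headers
readelf -l ./hw | grep interpreter
#   [Requesting program interpreter: /lib/ld-linux-armhf.so.3]

# then verify that this exact path exists on the target
ls -l /lib/ld-linux-armhf.so.3
If that path is missing on the target, the shell's confusing not found message refers to the interpreter, not to ./hw itself.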