Problem with RHEL 5.5 compilation on Windows host (Docker) - C++

I have a 64-bit Windows host machine on which I installed WSL (Debian) and then Docker. I'm trying to compile a Qt project in a Red Hat Enterprise Linux 5.5 32-bit container (sharing a host directory that contains the code), but when running qmake...
/usr/local/Trolltech/Qt-4.8.3/bin/qmake MYFILE.pro -spec linux-g++ -r CONFIG+=debug
I get:
QFSFileEngine::currentPath: stat(".") failed
And I can't continue my build. (The same qmake command works on a RHEL 5.5 virtual machine, so it's a container problem.)
I launch the container like this:
docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh

I found a solution.
It's a filesystem problem. I moved "E:\codeRepo" to "\\wsl$\Debian\codeRepo" (the WSL filesystem, exposed to Windows as a network share), and it works.
Now I'm sharing an ext4 folder with the container, and there is no problem with qmake.
So, this doesn't work:
docker run -it -v E:\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
But this works:
docker run -it -v \\wsl$\Debian\codeRepo:/root/codeRepo rhl55 sh /root/codeRepo/00-scripts/make/makeScript.sh
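For reference, a minimal sketch of moving the repository onto the WSL ext4 filesystem from inside the Debian distribution (paths are illustrative; Windows drive E: is mounted at /mnt/e by default):
sudo mkdir -p /codeRepo
sudo cp -r /mnt/e/codeRepo/. /codeRepo/
The copied directory then appears in Windows as \\wsl$\Debian\codeRepo and can be bind-mounted into the container exactly as in the working command above.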

Related

gdbserver remote target: Target description specified unknown architecture "arm". Where is the gdb binary for ARMv7?

I have a target machine, a Raspberry Pi (ARMv7), that I want to debug remotely from my Windows machine.
I want to use gdb with gdbserver and 'target remote' to debug the process running in the Docker container.
My executable is compiled from C++ source code.
Here is the Dockerfile that I run on the Raspberry Pi:
FROM arm32v7/ubuntu:latest
# Install necessary packages and cross-compiling toolchain
RUN apt-get update && apt-get install -y gdb gdbserver g++ bash
RUN apt-get install -y crossbuild-essential-armhf gdb-multiarch
ENV CC=arm-linux-gnueabihf-gcc
ENV CXX=arm-linux-gnueabihf-g++
ENV AR=arm-linux-gnueabihf-ar
# Copy the source code
COPY . /app
# Set the working directory
WORKDIR /app
# Compile the source code
RUN ${CXX} -g -o testapp2 testapp2.cpp
# Run the executable
CMD ["./testapp2"]
I believe my Windows machine needs a pre-built binary of gdb for the ARMv7 architecture.
I looked through linaro.org and developer.arm.com but I haven't been able to find a download for this.
Do I need to build it myself on my Windows machine?
I was able to successfully connect with 'target remote' by using
gdb-multiarch -ex "target remote Enter-Ip-Address-Here:Enter-TCP-Port-Here" /path/to/executable/in/the/docker/container/here
in the Windows Subsystem for Linux (WSL).
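To make both ends concrete, here is a minimal sketch of a full session under assumed values (image name myimage, TCP port 2345, Pi reachable at 192.168.1.50; all three are placeholders, not from the question):
# on the Raspberry Pi: start the container with gdbserver listening, overriding the image's CMD
docker run --rm -p 2345:2345 myimage gdbserver :2345 ./testapp2
# in WSL on the Windows machine: connect with a multi-architecture gdb
gdb-multiarch -ex "target remote 192.168.1.50:2345" ./testapp2
gdb-multiarch needs a local copy of the binary (ideally with debug symbols) matching the one running in the container, which is why the executable path is passed on the command line.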

Can't get CloudWatchAgent to start on Ubuntu 18.04 in Docker build for AWS Batch job

I'm trying to build an image for use on EC2 instances in an AWS Batch job. I need to use Ubuntu 18.04 because the goal is to run some Fortran software that I can only get to compile on Ubuntu 18.04. I have the Fortran software and some Python scripts running well on a manually started Ubuntu 18.04 EC2 instance.
Now I'm trying to build an image with Docker (that I'll eventually apply to hundreds or thousands of EC2 instances), but I have to get the CloudWatch agent (CWA) installed and started, and I can't get CWA to start during the Docker build. CWA starts and runs fine in my manual EC2 development instance (Ubuntu 18.04). I initially had problems with CWA in the manual instance because CWA uses systemctl, so I had to install systemd by hand, and that worked after a reboot. But I'm not able to replicate this in my Docker build; I always get the error:
System has not been booted with systemd as init system (PID 1). Can't operate.
unknown init system
The command '/bin/sh -c sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json' returned a non-zero code: 1
I tried starting from an Ubuntu 18.04 image that is supposed to have systemd already installed, and tried rebooting my EC2 instance; same error. Here's the source: https://hub.docker.com/r/jrei/systemd-ubuntu
I looked for other ideas, e.g.: Docker System has not been booted with systemd as init system
... but couldn't figure out how to make it work in a Docker build.
So,
am I using the Ubuntu 18.04 image (the one that has systemd) wrong in my build? How should it be used in a Docker build?
is there another way to start CloudWatchAgent in Ubuntu 18.04 that gets around the systemd problem?
would it work/is there a way to restart the operating system inside the Docker container, during the docker build stage?
am I stuck and will have to try recompile everything on a different Ubuntu or AMI like Amazon Linux?
Or is there something else I'm missing?
Here's my Docker file:
#version with systemd already installed
FROM jrei/systemd-ubuntu@sha256:1b65424e0ec4f6772576b55c49e1470ba506504d1033e9da5795785b1d6a4d88 as ubuntu-base
RUN apt-get update && apt-get install -y \
sudo \
wget \
python3-pip
RUN sudo apt-get -y install libgfortran3
RUN sudo pip3 install boto3
RUN wget https://s3.us-east-2.amazonaws.com/amazoncloudwatch-agent-us-east-2/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
RUN sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
COPY . .
RUN cp amazon-cloudwatch-agent.json /opt/aws/amazon-cloudwatch-agent/etc/
ENV ECS_AVAILABLE_LOGGING_DRIVERS=awslogs
RUN sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:amazon-cloudwatch-agent.json
RUN mkdir -p cpseqlogs
CMD python3 cpsequence.py
Thanks for any suggestions, ideas, or tips (I'm fairly new to Docker, but not totally green on Linux).
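One pattern worth sketching (a sketch under assumptions, not a verified fix): containers normally do not run systemd, so rather than starting CWA with systemctl during the build, the agent can be launched as an ordinary process when the container starts. The official CloudWatch agent container image starts the agent through the start-amazon-cloudwatch-agent binary instead of systemd; assuming the .deb package installs that same binary and that the config wiring is adjusted to match, an entrypoint script along these lines could replace the failing RUN step:
#!/bin/sh
# entrypoint.sh (hypothetical): start the agent as a plain background process,
# then run the batch job as the container's main process.
/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent &
exec python3 cpsequence.py
with the Dockerfile's failing RUN line and the CMD replaced by:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
Separately, note that each RUN executes in a throwaway build container, so even a successfully started service would not survive into the final image; services belong in the CMD/ENTRYPOINT, not in build steps.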

VS2019 - Sudo Remote Debugging on Linux with CMake project

I have a CMake C++ Linux project that I debug remotely from a Windows computer. The program accesses the GPIO pins on a Raspberry Pi, so it needs to run under sudo on the remote machine. Everything builds and works, but it crashes on the first line that needs admin access. I have not been able to figure out how to launch the newly compiled application under sudo. I have tried different settings in launch_schema.json, but no luck so far.
I found this and it worked for me.
Unable to launch debugger (gdb) with root permissions.
Basically, you wrap the existing gdb binary on the Pi with a shell script and then use the script in its place.
The steps on the Pi are:
cd /usr/bin
sudo mv gdb gdborig
Now create a shell script named gdb with the following content:
sudo nano gdb
The content of the script is:
#!/bin/sh
sudo gdborig "$@"
Finally, make the script executable:
sudo chmod 0755 gdb
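As a quick check that the wrapper forwards its arguments (a hypothetical session; it will prompt for the sudo password unless passwordless sudo is configured):
gdb --version
This should print gdb's version banner, now produced by gdborig running under sudo; the VS2019 remote debugger then invokes the same wrapper transparently.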
Thanks go to Buğra Aydoğar.

SonarQube - C++ - Ubuntu - build-wrapper LD_PRELOAD Error

I just upgraded my Linux build slaves, and since then my SonarQube analysis has stopped working.
Run this to reproduce:
docker run -it ubuntu:18.04 bash
apt-get update
apt-get install wget unzip
wget 192.168.1.5:9000/static/cpp/build-wrapper-linux-x86.zip
unzip build-wrapper-linux-x86.zip
cd build-wrapper-linux-x86
./build-wrapper-linux-x86-64 --out-dir test ls
ERROR: ld.so: object '/build-wrapper-linux-x86/libinterceptor-${PLATFORM}.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
The Ubuntu 16.04 and 16.10 Docker images work; 17.10 and 18.04 do not.
The PLATFORM environment variable is empty, but I guess it is supposed to be exported by the build-wrapper script.
Does anyone have any idea?
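For what it's worth, one workaround that has been reported for exactly this error (an assumption that it applies here, on an x86_64 host; ${PLATFORM} is a dynamic-loader substitution token that apparently stops being expanded on the newer images) is to give the interceptor library the literal file name the loader is asking for:
cd build-wrapper-linux-x86
# note the single quotes: the new file name literally contains ${PLATFORM}
cp libinterceptor-x86_64.so 'libinterceptor-${PLATFORM}.so'
./build-wrapper-linux-x86-64 --out-dir test ls
Upgrading to a newer build-wrapper download from the SonarQube server, if one is available, would be the cleaner fix.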

docker info command doesn't show anything on an EC2 instance

I have installed Docker using sudo yum install -y docker and started the Docker service by running the following commands. Initially it worked and I was able to run Docker containers. Now the Docker daemon is running, but when I run Docker commands like docker ps, docker info, etc., nothing shows up on stdout.
I have uninstalled that Docker version using sudo yum remove docker, removed all the files manually, and installed a fresh copy, but it's still the same issue.
Here is the link that I followed to install Docker on the EC2 instance.
https://aws.amazon.com/blogs/devops/set-up-a-build-pipeline-with-jenkins-and-amazon-ecs/
Docker version
1.12.6, build 7392c3b/1.12.6
uname -a
Linux ip adress 4.4.41-36.55.amzn1.x86_64 #1 SMP Wed Jan 18 01:03:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
I was not able to figure out what went wrong. Could you please help me debug this issue?
Thank you in advance.
As I understand from what you said, and from going through the link you mentioned, you have given docker command capabilities to the user jenkins, which you did using:
usermod -a -G docker jenkins
So, in order to run docker-related commands, you should log in as the user jenkins. You can use the following command to do so:
sudo -su jenkins
From there you should be able to run the docker commands as expected.
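As a quick sanity check (a hypothetical session; output will vary):
sudo -su jenkins
id             # 'docker' should appear in the groups list
docker info    # should now print daemon details instead of nothing
Keep in mind that group changes only take effect in new login sessions, so any shell opened for jenkins before the usermod would need to be restarted.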
PS - Follow the steps in the link again to reinstall Docker.
Hope this helps.