I'm having trouble using a Docker container to cross-compile a program with Visual Studio 2019.
Here is my Dockerfile:
FROM ubuntu:16.04
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -sf /bin/true /sbin/initctl
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get -y upgrade
# install 32 bit libraries required for gnuarm tools from
# https://launchpad.net/gcc-arm-embedded & a few minimalistic tools with ssh server
RUN dpkg --add-architecture i386 && \
apt-get update && \
apt-get -y install \
libc6:i386 libncurses5:i386 libstdc++6:i386 libpython2.7:i386 vim \
make git unzip \
sudo curl less tree openssh-server
# clean cache
RUN apt-get clean
RUN mkdir -p /var/run/sshd
COPY *.tgz /tmp/
RUN cd /tmp && \
tar zxvf gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf_5.3_sub1.0.3.tgz -C /opt && \
tar zxvf Compiler_gcc-linaro-5.3_patch_1.2.2.tgz && \
cd Compiler_gcc-linaro-5.3_patch && \
bash ./install-owa4x-comp-PATCH-1.2.2.sh && \
cd / && \
rm -rf /tmp/*
ENV PATH="${PATH}:/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin"
RUN useradd -G sudo --create-home --shell /bin/bash --user-group myuser
RUN echo "myuser:myuser_pwd" | chpasswd
RUN echo "PATH=$PATH:/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linuxgnueabihf/bin" >> /etc/profile
CMD ["source /etc/profile"]
CMD ["/usr/sbin/sshd", "-D"]
EXPOSE 22
ENV WORKSPACE /home/myuser/workspace
VOLUME ${WORKSPACE}
Here is how I create my image:
docker build --tag cc_arm .
myuser@ubuntu:~/Documents/share$ docker run -d -p 5000:22 -v /home/myuser/Documents/share:/home/myuser/workspace cc_arm
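For reference, a shell in the container can then be opened over the mapped SSH port, e.g.:
ssh -p 5000 myuser@localhost   # host port 5000 is mapped to the container's port 22 above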
But when I open a shell and type
which g++
which gcc
I get nothing back. I've also tried to set the compiler paths manually:
export CC="/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc"
export CXX="/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++"
echo 'export CC=/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc' >> ~/.bashrc
echo 'export CXX=/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++' >> ~/.bashrc
But it doesn't work.
I think my environment is okay, because I can cross-compile a simple program using the g++ from the packages (located in /usr/bin/g++):
sudo apt install -y openssh-server build-essential gdb rsync ninja-build zip
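For reference, this is the kind of minimal check I mean (hello.cpp is just a placeholder source file):
g++ -o hello hello.cpp   # native build with the distro compiler
/opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ -o hello_arm hello.cpp   # same build via the cross compiler's full path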
Thanks in advance.
Finally, I've changed my project in Visual Studio 2019 to a CMake project.
I've created a CMake toolchain file in the shared volume of my container:
SET(CMAKE_C_COMPILER /opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc)
SET(CMAKE_CXX_COMPILER /opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++)
SET(CMAKE_STRIP /opt/gcc-linaro-5.3-2016.02-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-strip)
SET(CMAKE_SYSTEM_NAME Linux)
SET(CMAKE_SYSTEM_PROCESSOR arm)
SET(CMAKE_SYSTEM_VERSION 1)
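This file is then passed to CMake at configure time (a sketch; the file name toolchain-arm.cmake is my own):
cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-arm.cmake ..   # toolchain-arm.cmake holds the SET() lines above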
And in Visual Studio 2019, I've changed the launch.vs.json
"configurations": [
{
"type": "cppgdb",
"name": "CMakeProject6 (CMakeProject6\\CMakeProject6)",
"project": "CMakeLists.txt",
"projectTarget": "CMakeProject6 (CMakeProject6\\CMakeProject6)",
"debuggerConfiguration": "gdb",
"args": [],
"env": {},
"remoteMachineName": "10.0.0.1 (username=root, port=22, authentication=Password)",
"deploy": [
{
"sourceMachine": "192.168.72.144 (username=root, port=5000, authentication=Password)", // DOCKER CONTAINER
"targetMachine": "10.0.0.1 (username=root, port=22, authentication=Password)", // REMOTE MACHINE
"deploymentType": "RemoteRemote"
}
]
  }
]
Rather than necro-post on a two-year-old thread, I decided to create a new question.
I want to add brew (Homebrew) to a Docker container, but I get a "brew: not found" error.
The suggested solution in that previous thread doesn't seem to work. This new Dockerfile...
FROM rust:1.63.0-buster
WORKDIR app
RUN apt-get update && \
apt-get install -y -q --allow-unauthenticated \
git \
sudo
RUN useradd -m -s /bin/zsh linuxbrew && \
usermod -aG sudo linuxbrew && \
mkdir -p /home/linuxbrew/.linuxbrew && \
chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
RUN brew install hello
gives this error... What am I missing? Thanks.
=> ERROR [6/6] RUN brew install hello 0.2s
------
> [6/6] RUN brew install hello:
#9 0.181 /bin/sh: 1: brew: not found
------
executor failed running [/bin/sh -c brew install hello]: exit code: 127
This Dockerfile installs brew in /home/linuxbrew/.linuxbrew/bin/brew. Including that directory in the PATH (with the ENV instruction) does the trick.
...
ENV PATH="/home/linuxbrew/.linuxbrew/bin:${PATH}"
RUN brew install hello
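As a quick sanity check (my own addition), the PATH change can be verified in the same Dockerfile:
# sanity check: brew should now resolve via the updated PATH
RUN which brew && brew --version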
I have this Dockerfile:
# We are going to start from the jhipster image
FROM jhipster/jhipster
# install as root
USER root
### Setup docker cli (don't need docker daemon) ###
# Install some packages
RUN apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
# Add Docker's official GPG key:
RUN ["/bin/bash", "-c", "set -o pipefail && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -"]
# Add a stable repository
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Setup aws credentials as environment variables
ENV AWS_ACCESS_KEY_ID "change it!"
ENV AWS_SECRET_ACCESS_KEY "change it!"
# noninteractive install for tzdata
ARG DEBIAN_FRONTEND=noninteractive
# set timezone for tzdata
ENV TZ=America/Sao_Paulo
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Install the latest version of Docker Engine - Community and also aws cli
RUN apt-get update && apt-get install docker-ce docker-ce-cli containerd.io awscli -y
# change back to default user
USER jhipster
# install sdkman and java version 1.8
RUN curl -s "https://get.sdkman.io" | bash
RUN bash $HOME/.sdkman/bin/sdkman-init.sh
RUN bash -c "sdk install java 8.0.222.j9-adpt"
When I run a command to build an image from this Dockerfile, it fails on the last step with the message:
/bin/sh: 1: sdk: not found
When I install it on my local machine, sdkman (sdk) runs under bash. But this script calls it from sh, not bash. How can I make it call sdkman (sdk) from sh? What I actually want to do is install a specific Java version through sdkman (sdk). Is there another way to do it?
For the sdk command to be available, you need to run source sdkman-init.sh first.
Here is a working sample with Java 11 on CentOS:
FROM centos:latest
ARG CANDIDATE=java
ARG CANDIDATE_VERSION=11.0.6-open
ENV SDKMAN_DIR=/root/.sdkman
# update the image
RUN yum -y upgrade
# install requirements, install and configure sdkman
# see https://sdkman.io/usage for configuration options
RUN yum -y install curl ca-certificates zip unzip openssl which findutils && \
update-ca-trust && \
curl -s "https://get.sdkman.io" | bash && \
echo "sdkman_auto_answer=true" > $SDKMAN_DIR/etc/config && \
echo "sdkman_auto_selfupdate=false" >> $SDKMAN_DIR/etc/config
# Source sdkman to make the sdk command available and install candidate
RUN bash -c "source $SDKMAN_DIR/bin/sdkman-init.sh && sdk install $CANDIDATE $CANDIDATE_VERSION"
# Add candidate path to $PATH environment variable
ENV JAVA_HOME="$SDKMAN_DIR/candidates/java/current"
ENV PATH="$JAVA_HOME/bin:$PATH"
ENTRYPOINT ["/bin/bash", "-c", "source $SDKMAN_DIR/bin/sdkman-init.sh && \"$#\"", "-s"]
CMD ["sdk", "help"]
The problem is that every RUN command in a Dockerfile is executed in a new shell, so anything sourced by an earlier command is lost. You need to source the init script and call sdk within the same shell invocation, like this:
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.222.j9-adpt"
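A quick way to confirm the install (my own check, same pattern as above):
# hypothetical extra step: print the active java version via sdkman
RUN bash -c "source $HOME/.sdkman/bin/sdkman-init.sh && sdk current java"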
I'm trying to mount a folder into my Docker container instead of copying it at build time. We use git for development and I don't want to rebuild the image every time I make a change for testing.
My Dockerfile is now as follows:
#set base image
FROM centos:centos7.2.1511
MAINTAINER Alex <alex@app.com>
#install yum dependencies
RUN yum -y update \
&& yum -y install yum-plugin-ovl \
&& yum -y install epel-release \
&& yum -y install net-tools \
&& yum -y install gcc \
&& yum -y install python-devel \
&& yum -y install git \
&& yum -y install python-pip \
&& yum -y install openldap-devel \
&& yum -y install gcc gcc-c++ kernel-devel \
&& yum -y install libxslt-devel libffi-devel openssl-devel \
&& yum -y install libevent-devel \
&& yum -y install openldap-devel \
&& yum -y install net-snmp-devel \
&& yum -y install mysql-devel \
&& yum -y install python-dateutil \
&& yum -y install python-pip \
&& pip install --upgrade pip
# Create the DIR
#RUN mkdir -p /var/www/itapp
# Set the working directory
#WORKDIR /var/www/itapp
# Copy the app directory contents into the container
#ADD . /var/www/itapp
# Install any needed packages specified in requirements.txt
#RUN pip install -r requirements.txt
# Make port available to the world outside this container
EXPOSE 8000
# Define environment variable
ENV NAME itapp
# Run server when the container launches
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
I've commented out the creation and copying of the itapp Django files, as I want to mount them instead. (Do I need to rebuild the image first?)
Then my command for mounting is:
docker run -it -v /Users/alex/itapp:/var/www/itapp itapp bash
I now get an error:
bash: warning: setlocale: LC_CTYPE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_COLLATE: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_MESSAGES: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_NUMERIC: cannot change locale (en_US.UTF-8): No such file or directory
bash: warning: setlocale: LC_TIME: cannot change locale (en_US.UTF-8): No such file or directory
and the dev instance does not run.
How would I also set the working directory to the volume that I'm mounting at runtime?
Try this command. The -w WORKDIR option of docker run sets the working directory inside the container:
docker run -d -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
Also, you'll need to map your container port to a host port to be able to reach your app, for example from a browser.
To do this, use the following command:
docker run -d -p 8000:8000 -v /Users/alex/itapp:/var/www/itapp -w /var/www/itapp itapp
After this, your app should be running at localhost:8000.
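A quick check (my own example, assuming the port mapping above):
curl -I http://localhost:8000   # should return an HTTP response from the dev server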
I am using Docker to store the dependencies of my C++ program for CI testing with GitLab CI. I first build a base image which contains all of the program dependencies (let's call it DOCKER_A):
FROM gcc:5
RUN mkdir -p /usr/src/optimization
WORKDIR /usr/optimization
#COPY . /usr/optimization
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y build-essential && \
apt-get install -y openssh-client && \
apt-get install -y python3 && \
apt-get install -y python3-pip && \
pip3 install --upgrade pip && \
pip3 install virtualenv
RUN wget http://www.cmake.org/files/v3.7/cmake-3.7.2.tar.gz && \
tar xf cmake-3.7.2.tar.gz && \
cd cmake-3.7.2/ && \
./configure && \
make && \
make install && \
export PATH=/usr/local/bin:$PATH && \
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH && \
cd ..
RUN wget -O boost_1_64_0.tar.gz http://sourceforge.net/projects/boost/files/boost/1.64.0/boost_1_64_0.tar.gz/download && \
tar xzvf boost_1_64_0.tar.gz && \
cd boost_1_64_0 && \
./bootstrap.sh --exec-prefix=/usr/local --with-python=python3 && \
./b2 threading=multi && \
./b2 install threading=multi && \
cd .. && \
rm boost_1_64_0.tar.gz && \
rm -r boost_1_64_0 && \
ln -s /usr/lib/x86_64-linux-gnu/libboost_python-py34.so /usr/lib/x86_64-linux-gnu/libboost_python3.so
This image doesn't change. Then every time I push to GitLab, I build another image, starting from DOCKER_A:
FROM DOCKER_A
ARG SSH_PRIVATE_KEY
WORKDIR /usr/optimization
COPY . /usr/optimization
RUN chmod +x ADD_KEY.sh
RUN ./ADD_KEY.sh "$SSH_PRIVATE_KEY"
RUN mkdir build && \
cd build && \
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_VERBOSE_MAKEFILE=on .. && \
make && \
cd ..
This builds the code from the new commit (up to this point everything works as expected).
Next, in my YAML file for GitLab CI, I run my tests, which consist of calling the executable files generated by my build process.
before_script:
- docker info
- docker login -u user -p $CI_JOB_TOKEN docker.registry.url
after_script:
- echo "After script section"
- echo "For example you might do some cleanup here"
buildRelease:
stage: build
script:
- echo "Do your build here"
- docker login -u user -p $CI_JOB_TOKEN docker.registry.url
- docker build --pull -t $CONTAINER_IMAGE_PUSH --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" .
- docker push $CONTAINER_IMAGE_PUSH
testDispatch:
stage: test
script:
- echo "Do a test here"
- echo "For example run a test suite"
- docker run -t $CONTAINER_IMAGE_PULL ./bin/dispatch
testState:
stage: test
script:
- docker run -t $CONTAINER_IMAGE_PULL ./bin/state-test
testAlgorithm:
stage: test
script:
- docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test
testSystem:
stage: test
script:
- docker run -t $CONTAINER_IMAGE_PULL ./bin/system-test
Each of the tests in the test stage fails, all giving the same error. Here is an example of the output:
$ docker run -t $CONTAINER_IMAGE_PULL ./bin/algorithm-test
./bin/algorithm-test: error while loading shared libraries:
libboost_graph.so.1.64.0: cannot open shared object file: No such file or directory
I don't understand why my binary cannot find libboost_graph, as it is installed in the first Docker image, which I am inheriting from.
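For reference, what the binary actually links against can be listed inside the container (a diagnostic sketch, reusing the image variable from above):
docker run -t $CONTAINER_IMAGE_PULL ldd ./bin/algorithm-test   # shows which shared libraries fail to resolve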
Any help that could be provided would be appreciated.
I have a Makefile for some C++ code that uses a PCI device:
all:
	g++ -o executable main.cpp dragon.pb.cc -std=c++11 -O3 -I/usr/include/postgresql -I/usr/include/hiredis -lzmq -lprotobuf -lpthread -lpq -lhiredis

clean:
	rm executable
And it has a dependency on this C library that uses kernel functions. The Makefile for this library is:
# dist and build are folders, not phony targets
.PHONY: all package clean
all: dragon.pb.cc dragon_pb2.py package

dragon.pb.cc: dragon.proto
	protoc --cpp_out=. dragon.proto

dragon_pb2.py: dragon.proto
	protoc --python_out=. dragon.proto

package: build

clean:
	rm -f dragon.pb.*
	rm -f dragon_pb*
	rm -rf build
	rm -rf dist
	rm -f MANIFEST
And here is my Dockerfile
FROM ubuntu:14.04
ENV PG_MAJOR 9.3
RUN apt-get update
RUN apt-get install -y git make protobuf-compiler libhiredis-dev postgresql-server-dev-${PG_MAJOR}
RUN apt-get install -y g++
RUN apt-get install -y libzmq-dev
RUN apt-get install -y libprotobuf-dev
RUN apt-get install -y linux-headers-$(uname -r)
ADD deployment_key /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa
RUN echo "StrictHostKeyChecking no" >> /root/.ssh/config
RUN echo >> /root/.ssh/config
RUN echo "Host bitbucket.org" >> /root/.ssh/config
RUN mkdir -p /usr/src/app/
WORKDIR /usr/src/app/
RUN git clone git@bitbucket.org:opticsdevelopment/dragon-protocols.git
WORKDIR ./dragon-protocols
RUN make dragon.pb.cc
RUN cp ./dragon.pb.* ../
COPY . /usr/src/app
WORKDIR ../
RUN git clone git@bitbucket.org:opticsdevelopment/dragon-module.git
WORKDIR ./dragon-module
RUN make all
WORKDIR ../
RUN make
EXPOSE 5570
CMD ["dragon"]
The problem right now is installing the Linux headers. Somehow apt can't find the package:
E: Unable to locate package linux-headers-3.13.0-19-generic
E: Couldn't find any package by regex 'linux-headers-3.13.0-19-generic'
If your app can compile against any generic Linux headers, change this line in your Dockerfile:
RUN apt-get install -y linux-headers-$(uname -r)
to just
RUN apt-get install -y linux-headers-generic
Or, if you need the exact version from your host system, why not volume-link the headers directory from the host into the container with -v?
On your host system:
sudo apt-get install linux-headers-$(uname -r)
Now you have the kernel headers here: /usr/src/linux-headers-$(uname -r)/include
Now, on your docker run command, link that volume like this:
-v /usr/src/linux-headers-$(uname -r)/include:/usr/src/linux-headers-$(uname -r)/include
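Putting it together (a sketch; the image name my-app is a placeholder):
docker run -v /usr/src/linux-headers-$(uname -r)/include:/usr/src/linux-headers-$(uname -r)/include my-app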