I've created a Dockerfile that takes an official PHP/Apache image, adds a bunch of dependencies, and clones a GitHub repo. It works great when building the container image locally, but fails when I push it to GitHub and trigger an automated build on Docker Hub.
The command that fails is the git clone:
RUN git clone git://github.com/symphonycms/symphony-2.git /var/www/html
and the reason for failure (according to Git) is
Step 5 : RUN git clone git://github.com/symphonycms/symphony-2.git /var/www/html && git checkout --track origin/bundle && git submodule update --init --recursive && git clone git://github.com/symphonycms/workspace.git && chown -R www-data:www-data *
fatal: destination path '/var/www/html' already exists and is not an empty directory.
Can someone explain why there is no problem building locally but a failure at the hub?
So the image you are building from already has the /var/www/html/ directory in it (and it probably isn't empty).
Try this in your Dockerfile to make sure the directory doesn't exist before the clone:
RUN rm -rf /var/www/html
RUN git clone git://github.com/symphonycms/symphony-2.git /var/www/html && git checkout --track origin/bundle && git submodule update --init --recursive && git clone git://github.com/symphonycms/workspace.git && chown -R www-data:www-data *
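For reference, a rough sketch of how those steps could fit together in the Dockerfile. The php:7.4-apache tag and the WORKDIR placement are assumptions on my part (the checkout and submodule commands need to run inside the cloned repo); note also that GitHub has since turned off the unauthenticated git:// protocol, so https:// URLs may be needed today:
FROM php:7.4-apache
# The base image already ships /var/www/html as the Apache DocumentRoot,
# so remove it to give git clone an empty target
RUN rm -rf /var/www/html
RUN git clone https://github.com/symphonycms/symphony-2.git /var/www/html
# Run the branch checkout and submodule update from inside the cloned repo
WORKDIR /var/www/html
RUN git checkout --track origin/bundle \
 && git submodule update --init --recursive \
 && git clone https://github.com/symphonycms/workspace.git \
 && chown -R www-data:www-data *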
Related
I am not able to launch the web application in track 3 of quest 10, creating an object detection application using TensorFlow.
I used the following commands to do so:
apt-get update
apt-get install -y protobuf-compiler python3-pil python3-lxml python3-pip python3-dev git
pip3 install --upgrade pip
pip3 install Flask==2.1.2 WTForms==3.0.1 Flask_WTF==1.0.1 Werkzeug==2.0.3 itsdangerous==2.1.2 jinja2==3.1.2
Install TensorFlow:
pip3 install tensorflow==2.9.0
Install the Object Detection API library:
cd /opt
git clone https://github.com/tensorflow/models
cd models/research
protoc object_detection/protos/*.proto --python_out=.
Download the pre-trained model binaries by running the following commands:
mkdir -p /opt/graph_def
cd /tmp
for model in \
  ssd_mobilenet_v1_coco_11_06_2017 \
  ssd_inception_v2_coco_11_06_2017 \
  rfcn_resnet101_coco_11_06_2017 \
  faster_rcnn_resnet101_coco_11_06_2017 \
  faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017
do
  curl -OL http://download.tensorflow.org/models/object_detection/$model.tar.gz
  tar -xzf $model.tar.gz $model/frozen_inference_graph.pb
  cp -a $model /opt/graph_def/
done
Now choose a model for the web application to use. For this lab, use faster_rcnn_resnet101_coco_11_06_2017 by entering the following command:
ln -sf /opt/graph_def/faster_rcnn_resnet101_coco_11_06_2017/frozen_inference_graph.pb /opt/graph_def/frozen_inference_graph.pb
Install and launch the web application
Change to the root home directory:
cd $HOME
Clone the example application from GitHub:
git clone https://github.com/GoogleCloudPlatform/tensorflow-object-detection-example
Install the application:
cp -a tensorflow-object-detection-example/object_detection_app_p3 /opt/
chmod u+x /opt/object_detection_app_p3/app.py
Create the object detection service:
cp /opt/object_detection_app_p3/object-detection.service /etc/systemd/system/
Reload the systemd manager configuration:
systemctl daemon-reload
The application provides a simple user authentication mechanism. You can change the default username and password by modifying the /opt/object_detection_app_p3/decorator.py file and changing the following lines:
USERNAME = 'username'
PASSWORD = 'passw0rd'
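For example, a quick way to script that change (sed with placeholder values, assuming the default lines shown above):
sed -i "s/USERNAME = 'username'/USERNAME = 'mynewuser'/" /opt/object_detection_app_p3/decorator.py
sed -i "s/PASSWORD = 'passw0rd'/PASSWORD = 'mynewpassword'/" /opt/object_detection_app_p3/decorator.py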
Launch the application.
systemctl enable object-detection
systemctl start object-detection
systemctl status object-detection
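If the status shows that the service failed to start, the systemd journal is usually the quickest way to see why (for example a missing Python package or a wrong model path); something like:
journalctl -u object-detection --no-pager -n 50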
I am trying to set up a CI for my cmake/c++ project hosted on a private repository on GitHub.
The project depends on lots of third-party libraries that need to be git-cloned and built. The latter takes a while, so I created a Docker image with all the dependencies installed and hosted it on Docker Hub. (Ideally, I would like the Docker image to be private as well, but if that is not possible, I can make it public.)
I want to achieve the following:
On pull requests to the master branch, the application is automatically built in the Docker container (because all the dependencies are there), all unit tests (gtest) are run, and, if everything passes, the branch is merged into master.
Ideally, I would like to see the logs and statistics generated by gcovr/lcov.
OS: Ubuntu 18.04
I wonder if this is even achievable, as I have been searching for two days with no luck and a billion possible readings.
My 2 cents (more a comment than an answer) on a controlled build using Docker.
As for the automatic merge, I don't know; I would be against it anyway, since code review can't be replaced by CI alone, IMHO...
Take a look at https://github.com/Mizux/cmake-cpp
Introduction
I use a Makefile for orchestration (docker commands can be way too long ;)) and Docker for isolated builds on various distros.
pro:
You can test locally (you just need a GNU/Linux distro with Docker & Make)
Easy to migrate to various CI runner providers (Travis-CI, GitHub Workflows, gitlab-runner, Bitbucket?)
Contributors can test locally before sending a PR
cons:
Less coupled to GitHub -> more complex to maintain.
More difficult to share a cache between workflow runs
note: the Dockerfiles are stored in the repository under ci/docker, i.e. I rebuild the images in the first steps, but you should be able to replace this step with a simple docker pull if your image is hosted on Docker Hub (not tested; a sketch follows).
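A rough sketch of that variant (untested; youruser/project-env is a placeholder for an image you have pushed to Docker Hub, and the devel stage of the Dockerfile would then need to start FROM that image instead of the local env stage):
docker pull youruser/project-env:latest
docker tag youruser/project-env:latest project:env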
Setup
Dockerfile
I split my Dockerfile in several stages (mostly for debug).
note: you can replace ubuntu:rolling with your own image...
ci/docker/ubuntu/Dockerfile:
# Create a virtual environment with all tools installed
# ref: https://hub.docker.com/_/ubuntu
FROM ubuntu:rolling AS env
# Install system build dependencies
# note: here we use the CMake package provided by Ubuntu
# see: https://repology.org/project/cmake/versions
ENV PATH=/usr/local/bin:$PATH
RUN apt-get update -q && \
    apt-get install -yq git build-essential cmake && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
CMD [ "/bin/sh" ]
# Add the library src to our build env
FROM env AS devel
# Create lib directory
WORKDIR /home/lib
# Bundle lib source
COPY . .
# Build in an other stage
FROM devel AS build
# CMake configure
RUN cmake -H. -Bbuild
# CMake build
RUN cmake --build build --target all
# CMake install
RUN cmake --build build --target install
# Create an install image to check cmake install config
FROM env AS install
# Copy lib from build to install
COPY --from=build /usr/local /usr/local/
# Copy sample
WORKDIR /home/sample
COPY ci/sample .
Runner jobs
GitHub Actions runners have Docker installed.
note: you can have one badge per yml file, e.g. one job (or one file) per distro, or one file for Release and one for Debug... (a sketch using a build matrix follows the workflow below).
.github/workflows/docker.yml:
name: C++ CI
on: [push, pull_request]
jobs:
  build-docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build env image
        run: docker build --target=env --tag project:env -f ci/docker/ubuntu/Dockerfile .
      - name: Build devel image
        run: docker build --target=devel --tag project:devel -f ci/docker/ubuntu/Dockerfile .
      - name: Build build image
        run: docker build --target=build --tag project:build -f ci/docker/ubuntu/Dockerfile .
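A rough sketch of the one-job-per-distro idea using a build matrix (the debian entry is only an example and assumes a matching ci/docker/debian/Dockerfile exists):
jobs:
  build-docker:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro: [ubuntu, debian]
    steps:
      - uses: actions/checkout@v2
      - name: Build ${{ matrix.distro }} build image
        run: docker build --target=build --tag project:${{ matrix.distro }}-build -f ci/docker/${{ matrix.distro }}/Dockerfile .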
For testing, you can add another stage or run the tests using the project:build image (a sketch of a dedicated stage follows the command below):
docker run --rm --init -t --name test project:build cmake --build build --target test
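Alternatively, a rough sketch of a dedicated test stage appended to the same Dockerfile (this assumes the project calls enable_testing()/add_test so that the test target exists):
# Run the test suite in its own stage
FROM build AS test
RUN cmake --build build --target test
A matching workflow step (docker build --target=test --tag project:test -f ci/docker/ubuntu/Dockerfile .) would then run the tests on every push and pull request.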
Annexes
Faster build
You can add a .dockerignore file to exclude unneeded files (e.g. licences, doc, the local build dir if testing locally...) to reduce the Docker context and speed up the COPY . .
.dockerignore:
# Project Files unneeded by docker
ci/cache
ci/docker
ci/Makefile
.git
.gitignore
.github
.dockerignore
.travis.yml
.appveyor.yml
.clang-format
AUTHORS
CONTRIBUTING.md
CONTRIBUTORS
INSTALL
LICENSE
README.md
doc
# Native CMake build
build/
# Editor directories and files
*.user
*.swp
Custom CMake version install
You can use the following instead of apt-get install -y cmake.
This can take a while since you rebuild CMake from source...
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4.tar.gz" \
&& tar xzf cmake-3.16.4.tar.gz \
&& rm cmake-3.16.4.tar.gz \
&& cd cmake-3.16.4 \
&& ./bootstrap --prefix=/usr/local/ \
&& make \
&& make install \
&& cd .. \
&& rm -rf cmake-3.16.4
So you can use the prebuilt version instead:
# Install CMake 3.16.4
RUN wget "https://cmake.org/files/v3.16/cmake-3.16.4-Linux-x86_64.sh" \
&& chmod a+x cmake-3.16.4-Linux-x86_64.sh \
&& ./cmake-3.16.4-Linux-x86_64.sh --prefix=/usr/local/ --skip-license \
&& rm cmake-3.16.4-Linux-x86_64.sh
I would like to clone a GitHub repo through my requirements.txt in Docker.
Actually, the requirements file contains:
-e git://github.com/USERNAME/REPO.git
Django==1.11.8
....
What is the specific command that I should add to the Dockerfile to execute the git clone correctly?
I tried RUN git clone git@github.com:USERNAME/REPO.git without any success.
Any suggestions?
I found a solution:
I simply changed my requirements entry from
-e git://github.com/USERNAME/REPO.git
to
git+https://github.com/USERNAME/REPO.git
and it works great.
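One thing worth noting: pip shells out to the git binary to resolve git+https:// requirements, so git still has to be installed in the image before pip install -r requirements.txt runs. A minimal sketch (the python:3.9-slim base image and the paths are just examples):
FROM python:3.9-slim
# git is needed for pip to fetch git+https:// requirements
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt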
I have a Java application that creates and runs JMeter tests.
Those tests need to be run on a remote EC2 instance.
Is it possible to have some command in Jenkins (which is on a separate AWS machine) to clone a git project to a remote EC2 instance? And run the flow there?
I would appreciate any thoughts and ideas!
So, here is my solution:
In Jenkins, in the Build section, add an 'Execute shell' step and scp the pom.xml and src folder from the Jenkins workspace to the EC2 instance's /tmp folder.
In my case it looks like this:
scp -i ../../../jobs/utilities/keys/.pem pom.xml ec2-user@ec2-00-000-00.compute.amazonaws.com:/tmp
scp -i ../../../jobs/utilities/keys/.pem -r src ec2-user@ec2-00-000-00.compute.amazonaws.com:/tmp
Then add a 'Send files or execute command over SSH' step and put the following in the Exec command section:
sudo rm -rf ../../my_project_folder_name/
sudo mkdir ../../my_project_folder_name
cd ../../tmp
sudo cp pom.xml ../my_project_folder_name/
sudo cp -r src ../my_project_folder_name
cd ../my_project_folder_name
sudo mvn clean test
Then add one more 'Execute shell' step to copy all the files from the target folder so you can use them for different reports:
scp -i ../../../jobs/utilities/keys/.pem -r ec2-user@ec2-00-000-00.compute.amazonaws.com:/my_project_folder_name/target .
That's it :)
I am learning about Dockerfile by following some examples and reading the docs. A Dockerfile has the following starting lines:
FROM ubuntu:14.04
RUN mkdir /home/meteorapp
WORKDIR /home/meteorapp
ADD . ./meteorapp
# Do basic updates
RUN apt-get update -q && apt-get clean
# Get curl in order to download what we need
RUN apt-get install curl -y \
# Install Meteor
&& (curl https://install.meteor.com/ | sh) \
# Build the Meteor app
&& cd /home/meteorapp/meteorapp/app \
&& meteor build ../build --directory \
# and more lines ...
The line && cd /home/meteorapp/meteorapp/app \ fails with this error:
/bin/sh: 1: cd: can't cd to /home/meteorapp/meteorapp/app
The Dockerfile is located in the root directory of my app
What is causing this error and how do I fix it?
It appears that /home/meteorapp/meteorapp/app doesn't exist inside your Docker container.
When you ADD . ./meteorapp, you copy everything from the build context (the folder containing the Dockerfile) into /home/meteorapp/meteorapp inside your container, so if you don't have an app folder there (and it seems that you don't, based on your screenshot), it won't magically appear inside the container.
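For illustration, a rough sketch of the corrected Dockerfile, assuming the Meteor app actually lives at the root of the build context (next to the Dockerfile) rather than in an app/ subfolder; adjust the cd path to wherever the .meteor directory really sits in your project:
FROM ubuntu:14.04
RUN mkdir /home/meteorapp
WORKDIR /home/meteorapp
# The build context (the folder containing the Dockerfile) is copied
# into /home/meteorapp/meteorapp
ADD . ./meteorapp
RUN apt-get update -q && apt-get clean
RUN apt-get install curl -y \
    && (curl https://install.meteor.com/ | sh) \
    && cd /home/meteorapp/meteorapp \
    && meteor build ../build --directory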