I'm having trouble getting a docker container to update the images (like .pngs) on my local system.
The Dockerfile for the container copies the images into a folder inside the container. Then that folder is copied into a new directory that is set up as a shared volume. This process is split between the Dockerfile for this container and a "command" entry in the docker-compose.yaml.
Everything seems to run fine; following the output of the copy command looks right. I don't get any errors, and once the command stops running the container stops.
I've tried destroying the container and image completely and recreating it, but I still see the old images in the application. I'm guessing that it's not overwriting the existing images, but I don't know why.
Dockerfile:
FROM ubuntu:18.04
# Install packages
RUN apt-get update
RUN apt-get install -y apt-utils
RUN apt-get install -y curl unzip
# Install AWS CLI
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Load AWS Access Keys
COPY ./config /root/.aws/
COPY ./credentials /root/.aws/
RUN chmod 600 /root/.aws/config /root/.aws/credentials
# Download media
RUN mkdir -p /var/media
RUN aws s3 cp --recursive s3://images/ /var/media
Snippet from the docker-compose.yaml:
version: '3.1'
services:
  client:
    [[other stuff here]]
  media:
    image: [[our own image stored with AWS]]
    command: 'cp -R -v -f /var/media/images/ /var/media-access/'
    volumes:
      - ./src/media:/var/media-access
    [[other stuff here]]
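One way to narrow this down might be to check the host side of the bind mount directly; a sketch, using the paths and service name from the compose file above:
# stop the stack, then look at the host side of the bind mount;
# if the old .pngs are still in ./src/media, that is what the application keeps seeing
docker-compose down
ls -l ./src/media
# re-run only the media service and watch what the cp actually writes
docker-compose up media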
Any advice is appreciated!
Related
I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
So it's copying all the files correctly. The thing is that when I SSH into the instance, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All the commands worked as expected: the yum installs, Python, Docker, etc. were all successful, but there are no files.
Are the files deleted after the bootstrap script runs?
thanks!
Try copying the files to a specific path, then look for them there, because right now we don't know which path the script is using.
Use the following command with a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Otherwise, you can do one thing:
use the pwd command to get the current directory and print it with echo, so that you know the present working directory.
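For example, a rough sketch of what that could look like in the user-data script (the /home/ec2-user target is an assumption; user data runs as root, so its working directory is not ec2-user's home):
# copy to an explicit, absolute path so it is obvious where the files land
aws s3 cp s3://jv-pocho/dns-updater.sh /home/ec2-user/dns-updater.sh
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
# print the directory the script is actually running in
echo "user-data working directory: $(pwd)"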
I have a Dockerfile as follows:
FROM centos
RUN mkdir work
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip
RUN pip install pandas
RUN pip install boto3
RUN pip install pynt
WORKDIR ./work
CMD ["bash"]
where I am installing some basic dependencies.
Now when I run
docker run imagename
it does nothing, but when I run
docker run -it imageName
I land in the bash shell. But I want to get into the bash shell as soon as I trigger the run command, without any extra parameters.
I am using this docker container in AWS CodeBuild, and there I can't specify any parameters like -it, but I still want to execute my code inside the docker container itself.
Is it possible to modify CMD/ENTRYPOINT in such a way that when running the docker image I land right inside the container?
I checked your container; it will not even build due to the missing pip. So I modified it a bit so that it at least builds:
FROM centos
RUN mkdir glue
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas
RUN pip3 install boto3
RUN pip3 install pynt
WORKDIR ./glue
Build it using, e.g.:
docker build . -t glue
Then you can run commands in it using, for example, the following syntax:
docker run --rm glue bash -c "mkdir a; ls -a; pwd"
I use --rm as I don't want to keep the container.
Hope this helps.
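If the end goal is for a plain docker run (no -it) to execute your code rather than drop into a shell, one option is to bake the command into CMD; a sketch only, where my_job.py is a placeholder for your actual entry point:
FROM centos
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas boto3 pynt
WORKDIR /glue
COPY . /glue
# placeholder entry point: replace my_job.py with your real script
CMD ["python3", "my_job.py"]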
We cannot log in to the docker container directly.
If you want to run specific commands when the container starts in detached mode, you can put them in the CMD or ENTRYPOINT instruction of the Dockerfile.
If you want to get into the shell directly, you can run
docker run -it imageName
or
docker run imageName bash -c "ls -ltr;pwd"
and it will return the output.
If you have triggered the run command without the -it params, then you can get into the running container using:
docker exec -it containerName bash
and you will land in the shell.
Now, if you are using AWS CodeBuild custom images and are wondering how commands can be submitted to the container, you have to put your commands into the build_spec.yaml file, under the pre_build, build, or post_build phases, and those commands will be run in the docker container.
build_spec.yml:
version: 0.2
phases:
  pre_build:
    commands:
      - pip install boto3  # or any pre-build configuration
  build:
    commands:
      - spark-submit job.py
  post_build:
    commands:
      - rm -rf /tmp/*
More about build_spec here
I want to copy a file from AWS S3 to a local directory through a docker container.
This copying command is easy without docker, I can see the file downloaded in the current directory.
But the problem with docker is that I don't even know how to access the file.
Here is my Dockerfile:
FROM ubuntu
WORKDIR "/Users/ezzeldin/s3docker-test"
RUN apt-get update
RUN apt-get install -y awscli
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
CMD [ "aws", "s3", "cp", "s3://ezz-test/s3-test.py", "." ]
The current working folder that I should see the file downloaded to is s3docker-test/. This is what I'm doing after building the Dockerfile, to mount a volume myvol to the local directory:
docker run -d --name devtest3 -v $PWD:/var/lib/docker/volumes/myvol/_data ubuntu
So after running the image I get this:
download: s3://ezz-test/s3-test.py to ./s3-test.py
which shows that the file s3-test.py is already downloaded, but when I run ls in the interactive terminal I can't see it. So how can I access that file?
Looks like you are overriding the container's folder with your empty folder when you run -v $PWD:/var/lib/docker/volumes/myvol/_data.
Try to simply copy the files from container to host fs by running:
docker cp \
<containerId>:/Users/ezzeldin/s3docker-test/s3-test.py \
/host/path/target/s3-test.py
You can run this command even on a stopped container. But first you will have to run the container without the folder override:
docker run -d --name devtest3 ubuntu
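An alternative sketch, assuming the image above is tagged s3docker and keeps the WORKDIR from the Dockerfile: bind-mount the host directory over the container's working directory, so the aws s3 cp in CMD writes the file straight to the host:
# mount the current host directory over the image's WORKDIR, so the file
# downloaded by the CMD ends up directly on the host
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "$PWD":/Users/ezzeldin/s3docker-test \
  s3docker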
I have an AWS CodePipeline which currently deploys code to my EC2 instances successfully.
I have a Docker image that has the necessary setup to run my code; the Dockerfile is provided below. When I run docker run -t it just loads up an interactive shell in my docker container but then hangs on any command (e.g. ls).
Any advice?
FROM continuumio/anaconda2
RUN apt-get install git
ENV PYTHONPATH /app/phdcode/panaxeaA1
# setting up venv
RUN conda create --name panaxea -y
RUN /bin/bash -c "source activate panaxea"
# Installing necessary packages
RUN conda install -c guyer pysparse
RUN conda install -c conda-forge pympler
RUN pip install pysparse
RUN git clone https://github.com/usnistgov/fipy.git
RUN cd fipy && python setup.py install
RUN cd ~
WORKDIR /app
COPY . /app
RUN cd panaxeaA1/models/alpha04c/launchers
RUN echo "launching..."
CMD python launcher_260818_aws.py
docker run -t simply starts a docker container with a pseudo-tty connection to the container's stdin. However, just running this command does not establish an interactive shell to the container. You will need that shell to be able to run commands within your container.
You also need to append the -i command line flag along with the shell you wish to use. For example, docker run -it IMAGE_NAME bash will launch a container from the image you provide, using bash as your interactive shell. You can then run Bash commands as you normally would.
If you are looking for a simple way to run containers on EC2 instances in AWS, I highly recommend AWS EC2 Container Service (ECS) as an option. It is a very simple service for running containers that abstracts and manages much of the server level work involved in running containers.
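For example, assuming the image above is tagged panaxea (the tag is just a placeholder):
# interactive shell inside the container
docker run -it panaxea bash
# non-interactive: run the image's default CMD (the launcher script) and exit
docker run --rm panaxea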
I am using docker containers for Rails and Ember. I am mounting the source from my local machine into the container. All the changes I make locally are reflected in the container.
Now I want to use generators to create files. The files are created, but they are write protected on my machine.
When I try to do docker-compose run frontend bash, I get a root@061e4159d4ef:/frontend# superuser prompt inside the container. I can create files when I am in this mode, but these files are write-protected on my host.
I have also tried docker-compose run --user "$(id -u):$(id -g)" frontend bash; I get an I have no name!@31bea5ae977c:/frontend$ prompt and I am unable to create any files in this mode. Below is the error message that I get.
I have no name!@31bea5ae977c:/frontend$ ember g template about
/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:90
throw err0;
^
Error: EACCES: permission denied, mkdir '/.config'
at Error (native)
at Object.fs.mkdirSync (fs.js:916:18)
at sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:71:13)
at Function.sync (/frontend/node_modules/ember-cli/node_modules/configstore/node_modules/mkdirp/index.js:77:24)
at Object.create.all.get (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:39:13)
at Object.Configstore (/frontend/node_modules/ember-cli/node_modules/configstore/index.js:28:44)
at clientId (/frontend/node_modules/ember-cli/lib/cli/index.js:22:21)
at module.exports (/frontend/node_modules/ember-cli/lib/cli/index.js:65:19)
at /usr/local/lib/node_modules/ember-cli/bin/ember:26:3
at /usr/local/lib/node_modules/ember-cli/node_modules/resolve/lib/async.js:44:21
Here is my Dockerfile:
FROM node:6.2
ENV INSTALL_PATH /frontend
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Copy package.json separately so it's recreated when package.json
# changes.
COPY package.json ./package.json
RUN npm install
COPY . $INSTALL_PATH
RUN npm install -g phantomjs bower ember-cli ;\
bower --allow-root install
EXPOSE 4200
EXPOSE 49152
CMD [ "ember", "server" ]
Here is my docker-compose.yml file; please note it is not in the current directory but in the parent.
frontend:
  build: "frontend/"
  dockerfile: "Dockerfile"
  environment:
    - EMBER_ENV=development
  ports:
    - "4200:4200"
    - "49152:49152"
  volumes:
    - ./frontend:/frontend
I want to know how I can use generators. I am new to learning docker. Any help is appreciated.
You get the I have no name! because of this: $(id -u):$(id -g)
The user id and group in your host are not linked to any user in your container.
Solution:
Execute chown UID:GID -R /frontend inside the container if it's already running and you cannot stop it for some reason. Otherwise you could just run the chown command on the host and then run your container again. Note that UID and GID must belong to a user inside the container.
Example: chown 101:101 -R /frontend, where 101:101 is the UID:GID of www-data.
If there is no user other than root in your container, you will have to create a new one. To do so, you must create a Dockerfile and put something like this in it:
FROM your_image_name
RUN useradd -ms /bin/bash newuser
More information about Dockerfiles can be found here or just by googlin' it.
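After building an image from that Dockerfile, you could run the service as the new user; a sketch, where newuser and the frontend service name are taken from the snippets above:
# rebuild the frontend image so it contains the new user,
# then run the generator as that user instead of root
docker-compose build frontend
docker-compose run --user newuser frontend ember g template about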