Dockerfile does not pull from Nexus

I have configured daemon.json so that I can pull images on my local machine from our Nexus image repository.
daemon.json details:
{
  "registry-mirrors": ["http://jenkins.DomaiName.com"],
  "insecure-registries": [
    "jenkins.DomaiName.com",
    "jenkins.DomaiName.com:8312"
  ],
  "debug": true,
  "experimental": false,
  "features": {
    "buildkit": true
  }
}
The configuration works fine and I can pull the images using:
docker pull jenkins.DomaiName.com/ImageA
In my dockerfile, I can use the image since I have already pulled it.
dockerfile:
FROM jenkins.DomaiName.com/ImageA
VOLUME ./:app/
COPY . /app/
WORKDIR /app
RUN pip install -r requirements.txt
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y tdsodbc unixodbc-dev
RUN apt install unixodbc-bin -y
RUN apt-get clean -y
EXPOSE 9999:9999 1433:1433
ENTRYPOINT python -u flask_api.py 9999
The dockerfile also works fine and it can pull ImageA.
I was assuming that the dockerfile would still work even if I had not pulled ImageA from the Nexus repo using docker pull jenkins.DomaiName.com/ImageA. So I removed ImageA from my local images and ran the dockerfile again. Contrary to my assumption, the dockerfile did not pull the image from the Nexus repo because it could not find the image. Does the dockerfile use a different daemon.json than the one I modified in /etc/docker?

What is the most efficient way of running Playwright on Azure App Service?

I am hosting a Django application on Azure that consists of 4 Docker images: Django, React, Celery beat and Celery worker. I have a Celery task where the user can set up a Python file and run Playwright.
Question
What is the best way to run Playwright? As you can see in my Dockerfile below, I am installing Chromium using playwright install, but I am not sure if it is the best approach for this solution:
FROM python:3.9
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update && apt-get -y install netcat && apt-get -y install gettext
RUN mkdir /code
COPY . /code/
WORKDIR /code
RUN pip install --no-cache-dir git+https://github.com/ByteInternet/pip-install-privates.git#master#egg=pip-install-privates
RUN pip install --upgrade pip
RUN pip_install_privates --token ${GITHUB_TOKEN} /code/requirements.txt
RUN playwright install --with-deps chromium
RUN playwright install-deps
RUN touch /code/logs/celery.log
RUN chmod +x /code/logs/celery.log
EXPOSE 80
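One detail worth noting about the install steps above: the --with-deps flag already pulls in the OS packages Chromium needs, so the extra install-deps layer is usually redundant. A trimmed sketch of just that step (the rest of the image is assumed unchanged):

```dockerfile
# --with-deps installs the Chromium binary plus the OS packages it needs,
# making a separate "RUN playwright install-deps" layer unnecessary
RUN playwright install --with-deps chromium
```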

cdk deploy option to re-build image

I am deploying a new stack using AWS Fargate, using the CDK in Python.
The Docker image is built and pushed to ECR when I do cdk deploy, but when I make a change to my entrypoint.sh, which is copied in my Dockerfile, the CDK does not detect the change.
So the cdk command ends with "no changes".
How can I rebuild and update the Docker image with the CDK?
This is my code to create the service:
back = aws_ecs_patterns.ApplicationLoadBalancedFargateService(
    self,
    "back",
    cpu=256,
    task_image_options=aws_ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
        image=ecs.ContainerImage.from_asset('./back'),
    ),
    desired_count=2,
    memory_limit_mib=512,
    public_load_balancer=True,
)
Here is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apt update && apt install -y python3-dev libpq-dev wait-for-it
COPY requirements.txt /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT ["/app/entrypoint.sh"]
Thanks!
The ./back directory was a symbolic link.
This change did the trick:
image=ecs.ContainerImage.from_asset(
    './back',
    follow_symlinks=cdk.SymlinkFollowMode.ALWAYS,
),

I am not able to execute the next commands after the localstack start --host command

FROM ubuntu:18.04
RUN apt-get update -y && \
    apt-get install -y apt-utils && \
    apt-get install -y python3-pip python3-dev \
    pypy-setuptools
COPY . .
WORKDIR .
RUN pip3 install boto3
RUN pip3 install awscli
RUN apt-get install libsasl2-dev
ENV HOST_TMP_FOLDER=/tmp/localstack
RUN apt-get install -y git
RUN apt-get install -y npm
RUN mkdir -p .localstacktmp
ENV TMPDIR=.localstacktmp
RUN pip3 install localstack[full]
RUN SERVICES=s3,lambda,es DEBUG=1 localstack start --host
WORKDIR ./boto3Tools
ENTRYPOINT [ "python3" ]
CMD [ "script.py" ]
You can't start services in a Dockerfile.
In your case what's happening is that your Dockerfile is running RUN localstack start. That goes ahead and starts up the selected set of services and stays running, waiting for connections. Meanwhile, the Dockerfile is waiting for the command you launched to finish before it moves on.
The usual answer to this is to start servers and clients in separate containers (or start a server in a container and run clients directly from your host). In this case, there is already a localstack/localstack Docker image and a prebuilt Docker Compose setup, so you can just run it:
curl -LO https://github.com/localstack/localstack/raw/master/docker-compose.yml
docker-compose up
The localstack GitHub repo has more information on using it.
If you wanted to use a Boto-based application with this, the easiest way is to add it to the same docker-compose.yml file (or, conversely, add Localstack to the Compose setup you already have). At that point you can use normal Docker inter-container communication to reach the mock AWS, but you have to configure this in your code:
s3 = boto3.client('s3',
                  endpoint_url='http://localstack:4566')
You have to make similar changes anyway to use Localstack, so the only difference is the hostname you're setting.
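A minimal Compose sketch of that combined setup might look like the following (the app service name, build context, and environment-variable name are assumptions for illustration; the localstack service uses the image named above):

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3,lambda,es
      - DEBUG=1
    ports:
      - "4566:4566"
  app:
    build: .                # your Boto-based application image
    depends_on:
      - localstack
    environment:
      # hypothetical variable your code reads when constructing the boto3
      # client; "localstack" resolves via Compose inter-container DNS
      - LOCALSTACK_ENDPOINT=http://localstack:4566
```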

Why does CMD never work in my Dockerfiles?

I have a few Dockerfiles where CMD doesn't seem to run. Here is an example (the relevant CMD is near the bottom).
##########################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# Install Ansible, Python and Related Deps #
RUN apt-get -y update && \
    apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME testwebsite
################## SETUP DIRECTORY STRUCTURE ######################
WORKDIR /home
CMD ["django-admin" "startproject" "$PROJECTNAME"]
EXPOSE 8000
If I build and run the container, I can manually run
django-admin startproject $PROJECTNAME and it will create a new project as expected, but the CMD in my Dockerfile does not seem to be doing anything. This is happening with all my other Dockerfiles too, so there must be something I'm not getting.
ENTRYPOINT and CMD define the default command that Docker runs when it starts your container, not when the image is built. When ENTRYPOINT isn't defined, you simply run the value of CMD. Otherwise, CMD becomes the args to the ENTRYPOINT. When you run your image, you can override the value of CMD by passing args after the container name.
So, in your example above, CMD may be defined as anything, but when you run your container with docker run -it <imagename> /bin/bash, you override any value of CMD and replace it with /bin/bash. To run the defined value of CMD, you would need to run the container with docker run <imagename>.
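A minimal illustration of that interaction, using a hypothetical image:

```dockerfile
FROM ubuntu:16.10
# the ENTRYPOINT always runs; CMD supplies its default arguments
ENTRYPOINT ["echo", "prefix"]
CMD ["default-arg"]
```

Built as <imagename>, docker run <imagename> prints "prefix default-arg", while docker run <imagename> other overrides the CMD and prints "prefix other".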

Amazon S3 + Docker - "403 Forbidden: The difference between the request time and the current time is too large"

I am trying to run my Django application in a Docker container with static files served from Amazon S3. When I run RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput in my Dockerfile, I get a 403 Forbidden error from Amazon S3 with the following response XML:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>RequestTimeTooSkewed</Code>
  <Message>The difference between the request time and the current time is too large.</Message>
  <RequestTime>Sat, 27 Dec 2014 11:47:05 GMT</RequestTime>
  <ServerTime>2014-12-28T08:45:09Z</ServerTime>
  <MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds>
  <RequestId>4189D5DAF2FA6649</RequestId>
  <HostId>lBAhbNfeV4C7lHdjLwcTpVVH2snd/BW18hsZEQFkxqfgrmdD5pgAJJbAP6ULArRo</HostId>
</Error>
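For reference, the skew reported in that response can be checked with a short script; it is far beyond the 900000 ms (15 minute) limit:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

# timestamps copied from the S3 error response above
request_time = parsedate_to_datetime("Sat, 27 Dec 2014 11:47:05 GMT")
server_time = datetime.strptime("2014-12-28T08:45:09Z",
                                "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

skew_ms = abs((server_time - request_time).total_seconds()) * 1000
print(int(skew_ms))       # roughly 75 million ms, i.e. almost 21 hours
print(skew_ms > 900000)   # True: far past MaxAllowedSkewMilliseconds
```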
My docker container is running Ubuntu 14.04... if that makes any difference.
I also am running the application using uWSGI, without nginx or apache or any other kind of reverse-proxy server.
I also get the error at run-time, when the files are being served to the site.
Attempted Solution
Other Stack Overflow questions have reported a similar error with S3 (though none specifically in conjunction with Docker). They say this error occurs when your system clock is out of sync and can be fixed by running:
sudo service ntp stop
sudo ntpd -gq
sudo service ntp start
so I added the following to my Dockerfile, but it didn't fix the problem.
RUN apt-get install -y ntp
RUN ntpd -gq
RUN service ntp start
I also attempted to sync the time on my local machine before building the docker image, using sudo ntpd -gq, but that did not work either.
Dockerfile
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
# Update system clock so S3 does not get 403 Error
# NOT WORKING
#RUN apt-get install -y ntp
#RUN ntpd -gq
#RUN service ntp start
RUN pip install uwsgi
RUN apt-get -y install libxml2-dev libxslt1-dev
RUN apt-get install -y python-software-properties uwsgi-plugin-python3
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
# Run django commands
# python3.4 is at /usr/bin/python3.4, but which works too
RUN $(which python3.4) /home/docker/code/vitru/manage.py collectstatic --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py syncdb --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py makemigrations --noinput
RUN $(which python3.4) /home/docker/code/vitru/manage.py migrate --noinput
EXPOSE 8000
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
Noted in the comments, but for others who come here:
If using boot2docker (i.e. on Windows or Mac), the boot2docker VM has a known time issue when you sleep your machine (see here). Since the host for your Docker container is the boot2docker VM, that's where it syncs its time.
I've had success restarting the boot2docker VM. This may cause problems with losing some state, e.g. if you had data volumes.
Docker containers share the clock with the host machine, so syncing your host machine's clock should solve the problem. To force the container's timezone to match the host's, you can add -v /etc/localtime:/etc/localtime:ro to docker run.
Anyway, you should not start a service in a Dockerfile. This file contains the steps and commands to build the image for your containers, and any process you run inside a Dockerfile will end when the build finishes. To start a service you should add a run script or a process-control daemon (such as supervisord) that runs each time you start a new container.
Restarting Docker for Mac fixes the error on my machine.