cdk deploy option to re-build image - amazon-web-services

I am deploying a new stack using AWS Fargate, using the CDK in Python.
The Docker image is built and pushed to ECR when I run cdk deploy, but when I change my entrypoint.sh, which is copied in by my Dockerfile, the CDK does not detect the change.
So the cdk command ends with "no changes".
How do I re-build and update the Docker image with the CDK?
This is my code to create the service:
back = aws_ecs_patterns.ApplicationLoadBalancedFargateService(
    self,
    "back",
    cpu=256,
    task_image_options=aws_ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
        image=ecs.ContainerImage.from_asset('./back'),
    ),
    desired_count=2,
    memory_limit_mib=512,
    public_load_balancer=True,
)
Here is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apt update && apt install -y python3-dev libpq-dev wait-for-it
COPY requirements.txt /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT ["/app/entrypoint.sh"]
Thanks!

The ./back directory was a symbolic link.
This change did the trick:
image=ecs.ContainerImage.from_asset(
    './back',
    follow_symlinks=cdk.SymlinkFollowMode.ALWAYS,
),

Related

Docker Stuck at building Golang inside AWS EC2

I'm going crazy here... I'm trying to create a Docker container with this file:
#Docker
FROM golang:alpine as builder
RUN apk update && apk add --no-cache git make gcc libc-dev
# download, cache and install deps
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
# copy and compile the app
COPY . .
RUN make ditto
# start a new stage from scratch
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# copy the prebuilt binary from the builder stage
COPY --from=builder /app/_build/ditto .
COPY --from=builder /app/send-email-report.sh /usr/bin/
ENTRYPOINT ["./ditto"]
Running: docker build .
On my PC it works perfectly,
BUT on my AWS EC2 instance the same command:
docker build .
Sending build context to Docker daemon 108kB
Step 1/13 : FROM golang:1.18-alpine as builder
---> 155ead2e66ca
Step 2/13 : RUN apk update && apk add --no-cache git make gcc libc-dev
---> Running in 1d3adab601f3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.16/community/x86_64/APKINDEX.tar.gz
v3.16.0-99-g5b6c75ce95 [https://dl-cdn.alpinelinux.org/alpine/v3.16/main]
v3.16.0-108-ge392af4f2e [https://dl-cdn.alpinelinux.org/alpine/v3.16/community]
OK: 17022 distinct packages available
And it gets stuck there...
It was working fine in the past, and I don't think anybody has changed that Dockerfile or folder...
Can somebody help me, please?

Apache Druid Nano Container Service Error

I want to spin up a low-configuration containerized service, for which I created the Dockerfile below and build it with:
docker build -t apache/druid_nano:0.20.2 -f Dockerfile .
FROM ubuntu:16.04
# Install Java JDK 8
RUN apt-get update \
    && apt-get install -y openjdk-8-jdk
RUN mkdir /app
WORKDIR /app
COPY apache-druid-0.20.2-bin.tar.gz /app
RUN tar xvzf apache-druid-0.20.2-bin.tar.gz
WORKDIR /app/apache-druid-0.20.2
EXPOSE <PORT_NUMBERS>
ENTRYPOINT ["/bin/start/start-nano-quickstart"]
When I start the container using the command "docker run -d -p 8888:8888 apache/druid_nano:0.20.2", I get the error below:
/bin/start-nano-quickstart: no such file or directory
I removed the ENTRYPOINT command and built the image again just to check if the file exists in the bin directory inside the container. There is a file start-nano-quickstart under the bin directory inside the container.
Am I missing anything here? Please help.

Dockerfile does not pull from Nexus

I have configured daemon.json, so I can pull images in my local machine from our Nexus image repository.
daemon.json details:
{
    "registry-mirrors": ["http://jenkins.DomaiName.com"],
    "insecure-registries": [
        "jenkins.DomaiName.com",
        "jenkins.DomaiName.com:8312"
    ],
    "debug": true,
    "experimental": false,
    "features": {
        "buildkit": true
    }
}
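As an aside, the Docker daemon refuses to start when daemon.json is malformed, so the syntax is worth sanity-checking before restarting it. A stdlib-only sketch, with the config above inlined for illustration:

```python
import json

# The daemon.json content from above, inlined as a string for the example.
daemon_json = """
{
    "registry-mirrors": ["http://jenkins.DomaiName.com"],
    "insecure-registries": [
        "jenkins.DomaiName.com",
        "jenkins.DomaiName.com:8312"
    ],
    "debug": true,
    "experimental": false,
    "features": {"buildkit": true}
}
"""

cfg = json.loads(daemon_json)   # raises json.JSONDecodeError on a syntax error
print(cfg["registry-mirrors"])  # ['http://jenkins.DomaiName.com']
```

On a real machine the same check can load /etc/docker/daemon.json directly with json.load.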
The configuration works fine and I can pull the images using:
docker pull jenkins.DomaiName.com/ImageA
In my dockerfile, I can use the image since I have already pulled it.
dockerfile:
FROM jenkins.DomaiName.com/ImageA
VOLUME ./:app/
COPY . /app/
WORKDIR /app
RUN pip install -r requirements.txt
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y tdsodbc unixodbc-dev
RUN apt install unixodbc-bin -y
RUN apt-get clean -y
EXPOSE 9999:9999 1433:1433
ENTRYPOINT python -u flask_api.py 9999
The dockerfile also works fine and can pull ImageA.
I assumed the dockerfile would still work even if I had not pulled ImageA from the Nexus repo with docker pull jenkins.DomaiName.com/ImageA. So I removed ImageA from my local images and ran the dockerfile again. But contrary to my assumption, the dockerfile did not pull the image from the Nexus repo because it could not find it. Does the dockerfile use a different daemon.json than the one I modified in /etc/docker?

I am not able to execute the next commands after the localstack start --host command

FROM ubuntu:18.04
RUN apt-get update -y && \
apt-get install -y apt-utils && \
apt-get install -y python3-pip python3-dev \
    pypy-setuptools
COPY . .
WORKDIR .
RUN pip3 install boto3
RUN pip3 install awscli
RUN apt-get install libsasl2-dev
ENV HOST_TMP_FOLDER=/tmp/localstack
RUN apt-get install -y git
RUN apt-get install -y npm
RUN mkdir -p .localstacktmp
ENV TMPDIR=.localstacktmp
RUN pip3 install localstack[full]
RUN SERVICES=s3,lambda,es DEBUG=1 localstack start --host
WORKDIR ./boto3Tools
ENTRYPOINT [ "python3" ]
CMD [ "script.py" ]
You can't start services in a Dockerfile.
In your case what's happening is that your Dockerfile is running RUN localstack start. That goes ahead and starts up the selected set of services and stays running, waiting for connections. Meanwhile, the Dockerfile is waiting for the command you launched to finish before it moves on.
The usual answer to this is to start servers and clients in separate containers (or start a server in a container and run clients directly from your host). In this case, there is already a localstack/localstack Docker image and a prebuilt Docker Compose setup, so you can just run it:
curl -LO https://github.com/localstack/localstack/raw/master/docker-compose.yml
docker-compose up
The localstack GitHub repo has more information on using it.
If you wanted to use a Boto-based application with this, the easiest way is to add it to the same docker-compose.yml file (or, conversely, add LocalStack to the Compose setup you already have). At this point you can use normal Docker inter-container communication to reach the mock AWS, but you have to configure this in your code:
s3 = boto3.client('s3',
                  endpoint_url='http://localstack:4566')
You have to make similar changes anyway to use LocalStack, so the only difference is the hostname you're setting.
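For instance, LocalStack and the Boto-based application can live in one Compose file along these lines (a sketch; the `app` service name, build context, and SERVICES list are assumptions based on the Dockerfile above):

```yaml
version: "3.8"
services:
  localstack:
    image: localstack/localstack
    environment:
      - SERVICES=s3,lambda,es
      - DEBUG=1
    ports:
      - "4566:4566"
  app:
    build: .            # the image from the Dockerfile above, minus the RUN localstack line
    depends_on:
      - localstack
```

With this layout, the app container reaches the mock AWS at http://localstack:4566 via the Compose network.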

How to run Django Daphne service on Google Kubernetes Engine and Google Container Registry

Dockerfile
FROM ubuntu:18.04
RUN apt-get update
RUN apt-get install build-essential -y
WORKDIR /app
COPY . /app/
# Python
RUN apt-get install python3-pip -y
RUN python3 -m pip install virtualenv
RUN python3 -m virtualenv /env36
ENV VIRTUAL_ENV /env36
ENV PATH /env36/bin:$PATH
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
# Start Daphne [8443]
ENV DJANGO_SETTINGS_MODULE=settings
CMD daphne -e ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem asgi:application
# Open port 8443
EXPOSE 8443
Enable Google IP Alias so that the service can connect to Google Memorystore/Redis.
Build & Push
$ docker build -t [GCR_NAME] -f path/to/Dockerfile .
$ docker tag [GCR_NAME] gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
$ docker push gcr.io/[GOOGLE_PROJECT_ID]/[GCR_NAME]:[TAG]
Deploy to GKE
$ envsubst < k8s.yml > patched_k8s.yml
$ kubectl apply -f patched_k8s.yml
$ kubectl rollout status deployment/[GKE_WORKLOAD_NAME]
This is how I configured Daphne on GKE/GCR. If you have other solutions, please share your advice.
systemd is not included in the ubuntu:18.04 Docker image.
Add an ENTRYPOINT to your Dockerfile with the command from the ExecStart property of project-daphne.service.
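For example, if the unit file's ExecStart runs daphne the same way the CMD in the Dockerfile above does, the equivalent container instruction would look roughly like this (a sketch; the actual binary path and flags must be copied from your own project-daphne.service, they are placeholders here based on the Dockerfile above):

```dockerfile
# Hypothetical: mirror ExecStart from project-daphne.service in exec form,
# so daphne runs as PID 1 instead of relying on systemd.
ENTRYPOINT ["/env36/bin/daphne", \
            "-e", "ssl:8443:privateKey=/ssl-cert/privkey.pem:certKey=/ssl-cert/fullchain.pem", \
            "asgi:application"]
```

The exec form is preferable here so that daphne receives signals (e.g. SIGTERM from Kubernetes) directly.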