Unable to pass Variables to Dockerfile in Cloud Run

I'm trying to pass a runtime environment variable from Cloud Run > Variables:
Environment variables:
Name: api_service_url
Value: https://someapiurl.com
My Dockerfile is below. I also tried passing it as an ARG via the Container tab's container arguments.
In both cases the echo statements in the Dockerfile didn't show those values in the build log.
The container is an NGINX instance serving the React build output. The app itself loads fine, but I am not able to pass the API URL to the proxy_pass directive in nginx.conf.
--set-env-vars=api_service_url=https://apiserverurl is set as well; it shows in the console, but the Dockerfile never gets that value.
Here is my Dockerfile:
# build environment
FROM node:10-alpine as react-build
WORKDIR /app
COPY . ./
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
ENV PORT 3000
ENV HOST 0.0.0.0
ARG api_service_url
RUN echo "========api_service_url========="$api_service_url
ENV API_SERVICE_URL $api_service_url
RUN echo "========API_SERVICE_URL========="$API_SERVICE_URL
RUN sh -c "envsubst '\$PORT \$API_SERVICE_URL' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
#RUN sh -c "envsubst < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
RUN cat /etc/nginx/conf.d/default.conf
COPY --from=react-build /app/webapp /usr/share/nginx/html
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
How can I pass the build arg to the container?

ARG is provided and used at image build time; it has nothing to do with Cloud Run.
You can use environment variables on Cloud Run, and change your ENTRYPOINT to a script that makes the necessary modifications based on the given environment variables at runtime.
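For example, a minimal sketch of such a script (assuming it is saved as docker-entrypoint.sh, a name chosen here for illustration, and that the template stays at /etc/nginx/conf.d/configfile.template as in your Dockerfile) runs envsubst when the container starts, after Cloud Run has injected the variables:
#!/bin/sh
# Render the nginx config at container start, when Cloud Run's
# environment variables (PORT, API_SERVICE_URL) actually exist
envsubst '$PORT $API_SERVICE_URL' \
    < /etc/nginx/conf.d/configfile.template \
    > /etc/nginx/conf.d/default.conf
# Hand control to the CMD (nginx)
exec "$@"
The Dockerfile would then copy the script, mark it executable, and keep nginx as the CMD:
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]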

The Cloud Run variables are runtime variables: they are injected into the container as environment variables and are available to your code.

Changing the Dockerfile to use this command seems to work:
CMD sh -c "envsubst '$PORT $API_SERVICE_URL' < /etc/nginx/conf.d/site.template > /etc/nginx/conf.d/site.conf && exec nginx -g 'daemon off;'"
Earlier it was split into two commands:
RUN sh -c "envsubst '$PORT $API_SERVICE_URL' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
CMD ["nginx", "-g", "daemon off;"]
I think the substitution was happening at build time, before the runtime environment variables existed, so nginx was starting with an unsubstituted config and giving an error.
The problem is now resolved. Thanks for your answers.


ECS Task running, but I can't access in EC2

My task's status is RUNNING and I can see the image on the EC2 instance with docker ps. But when I try to access the public IP and port, the browser says that nothing was found. I've already set a security group rule to allow access to all TCP ports. What else can I do?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 5000
ENV PERSONAPI_ConnectionStrings__Database="[db connection string]"
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY [".", "personApi/"]
RUN dotnet restore "personApi/personApi.csproj"
RUN dotnet build "personApi/personApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "personApi/personApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "personApi.dll"]
Task ports:
Docker on EC2 instance
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8277d3be6a74 public.ecr.aws/z0o4m5x8/padrao:latest "dotnet personApi.dll" 2 minutes ago Up 2 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->5000/tcp ecs-api-contato-2-api-contato-aeeefe81b1c8e7b75700
I finally figured out what was wrong. After some research I discovered that by default Swagger only works in development mode, which is why I could access it when running from Visual Studio but not after publishing.
So I changed that configuration and published again.

Pass Django SECRET_KEY in Environment Variable to Dockerized gunicorn

Some Background
Recently I had a problem where my Django Application was using the base settings file despite DJANGO_SETTINGS_MODULE being set to a different one. It turned out the problem was that gunicorn wasn't inheriting the environment variable and the solution was to add -e DJANGO_SETTINGS_MODULE=sasite.settings.production to my Dockerfile CMD entry where I call gunicorn.
The Problem
I'm having trouble with how I should handle the SECRET_KEY in my application. I am setting it in an environment variable, though I previously had it stored in a JSON file, but that seemed less secure (correct me if I'm wrong, please).
The other part of the problem is that gunicorn doesn't inherit the environment variables that are normally set on the container. As I stated above, I ran into this problem with DJANGO_SETTINGS_MODULE. I imagine that gunicorn would have an issue with SECRET_KEY as well. What would be the way around this?
My Current Approach
I set the SECRET_KEY in an environment variable and load it in the Django settings file. I set the value in a file "app-env" which contains export SECRET_KEY=<secretkey>, and the Dockerfile contains RUN source app-env in order to set the environment variable in the container.
Follow Up Questions
Would it be better to set the environment variable SECRET_KEY with the Dockerfile command ENV instead of sourcing a file? Is it acceptable practice to hard-code a secret key in a Dockerfile like that (it seems to me that it's not)?
Is there a "best practice" for handling secret keys in Dockerized applications?
I could always go back to JSON if it turns out to be just as secure as environment variables. But it would still be nice to figure out how people handle SECRET_KEY and gunicorn's issue with environment variables.
Code
Here's the Dockerfile:
FROM python:3.6
LABEL maintainer x@x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
COPY scripts scripts
RUN source app-env
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
I'll start with why it doesn't work as is, and then discuss the options you have to move forward:
During an image build, each RUN instruction runs in its own standalone container, and only changes to the filesystem of that container's write layer are captured for subsequent layers. This means that your source app-env command runs and exits and, since it likely makes no changes on disk, that RUN line is effectively a no-op.
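A quick way to see this (two hypothetical instructions, not from your Dockerfile):
# Each RUN gets its own shell in its own intermediate container
RUN export FOO=bar
# This next RUN starts a fresh shell, so $FOO is empty here and the export is gone
RUN echo "FOO is '$FOO'"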
Docker allows you to specify environment variables at build time using the ENV instruction, which you've done with the DJANGO_SETTINGS_MODULE variable. I don't necessarily agree that SECRET_KEY should be specified here, although it might be okay to put a value needed for development in the Dockerfile.
Since the SECRET_KEY variable may be different for different environments (staging and production), it may make sense to set that variable at runtime. For example:
docker run -d -e SECRET_KEY=supersecretkey mydjangoproject
The -e option is short for --env. Additionally, there is --env-file, which lets you pass in a file of variables and values. If you aren't using the docker CLI directly, then your docker client should have the ability to specify these as well (for example, docker-compose lets you specify both of these in the YAML).
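For instance, a minimal docker-compose sketch (the service name, image name, and env file name here are placeholders, not from the question):
services:
  web:
    image: mydjangoproject
    # inline variables, equivalent to -e on the command line
    environment:
      - SECRET_KEY=supersecretkey
    # or a file of plain KEY=value lines (no "export" prefix), equivalent to --env-file
    env_file:
      - ./secrets.env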
In this specific case, since you have something inside the container that knows what variables are needed, you can call that at runtime. There are two ways to accomplish this. The first is to change the CMD to this:
CMD source app-env && /usr/local/bin/gunicorn --config config/gunicorn.conf --log-config config/logging.conf -e DJANGO_SETTINGS_MODULE=sasite.settings.production_test -w 4 -b 0.0.0.0:8001 sasite.wsgi:application
This uses the shell form of CMD rather than the exec form, which means the entire argument to CMD will be run inside /bin/sh -c "...".
The shell will handle running source app-env and then your gunicorn command.
If you ever needed to change the command at runtime, you'd need to remember to prefix it with source app-env && where needed, which brings me to the other approach: using an ENTRYPOINT script.
The ENTRYPOINT feature in Docker allows you to handle any necessary startup steps inside the container when it is first started. Consider the following entrypoint script:
#!/bin/bash
cd /app && source app-env && cd - && exec "$@"
This will explicitly cd to the location where app-env is, source it, cd back to whatever the oldpwd was, and then execute the command. Now, it is possible for you to override both the command and working directory at runtime for this image and have any variables specified in the app-env file to be active. To use this script, you need to ADD it somewhere in your image and make sure it is executable, and then specify it in the Dockerfile with the ENTRYPOINT directive:
ADD entrypoint.sh /entrypoint.sh
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
With the entrypoint strategy, you can leave your CMD as-is without changing it.

Docker expose not working elastic beanstalk

I have a docker container of tinyproxy that I have put on EB. The container is running fine, docker ps shows port 8888/tcp, and logging into the container and running netstat I see it listening on 0.0.0.0:8888 and :::8888.
But on my EB host I don't see any connection. Below is my Dockerfile; I don't have a Dockerrun.aws.json for my single container as it doesn't seem to be mandatory. Any ideas?
FROM alpine:3.7
MAINTAINER Daniel Middleton <monokal.io>
RUN apk add --no-cache \
    bash \
    tinyproxy
COPY run.sh /opt/docker-tinyproxy/run.sh
RUN chmod +x /opt/docker-tinyproxy/run.sh
EXPOSE 8888
ENTRYPOINT ["/opt/docker-tinyproxy/run.sh","docker run -d --name='tinyproxy' -p 8888:8888 dannydirect/tinyproxy:latest ANY"]
Edit from comments below:
docker ps -a output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cd352352ddb ffe23df0902 "/opt/docker-tinypro…" 2 hours ago Up 2 hours 8888/tcp zealous_volhard
and when accessing the URL from an allowed IP in the security group, the following output is given:
Not Implemented
Unknown method or unsupported protocol.
Generated by tinyproxy version 1.8.4.
Any ideas?
Judging from your output, you are reaching tinyproxy, so your ports are properly exposed and utilized.
Now, what is strange is your entrypoint... "/opt/docker-tinyproxy/run.sh" looks like a regular tinyproxy start, but... "docker run -d --name='tinyproxy' -p 8888:8888 dannydirect/tinyproxy:latest ANY" seems completely out of place. As far as I can tell from inspecting the tinyproxy container, the following should be set as entrypoint and cmd:
"Cmd": [
"ANY"
],
"Entrypoint": [
"/opt/docker-tinyproxy/run.sh"
]
So you might want to adjust your entrypoint accordingly, since the image basically expects /opt/docker-tinyproxy/run.sh ANY as its starting point and gets something completely different.
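In Dockerfile terms, the tail of the Dockerfile would then look something like this (a sketch based on the inspected values above, keeping the run.sh path from the question):
# run.sh is the entrypoint; "ANY" is passed to it as its single argument
ENTRYPOINT ["/opt/docker-tinyproxy/run.sh"]
CMD ["ANY"]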

docker restart container failed: "already in use", but there's no more docker image

I first got my nginx docker image:
docker pull nginx
Then I started it:
docker run -d -p 80:80 --name webserver nginx
Then I stopped it:
docker stop webserver
Then I tried to restart it:
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name..
See 'docker run --help'.
Well, it's an error. But in fact there's nothing in the container list now:
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Why did restarting the nginx image fail? How do I fix it?
It is because:
you have used the --name switch, and
the container is stopped but not removed.
You can find the stopped container with:
docker ps -a
You can simply start it using the command below:
docker start webserver
EDIT: Alternatives
If you want to start the container each time with the command below,
docker run -d -p 80:80 --name webserver nginx
then use one of the following:
Method 1: use the --rm switch, so the container gets destroyed automatically as soon as it is stopped:
docker run -d -p 80:80 --rm --name webserver nginx
Method 2: remove the container explicitly after stopping it, before running the command you are currently using:
docker stop <container name>
docker rm <container name>
As the error says.
You have to remove (or rename) that container to be able to reuse that name
This leaves you two options.
You may delete the container that is using the name "webserver" using the command
docker rm 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74
and retry.
Or you may use a different name during the run command. This is not recommended, as you no longer need that old container.
It's better to remove the unwanted container and reuse the name.
While the great answers are correct, they didn't actually solve the problem I was facing.
How To:
Safely automate starting a named docker container regardless of its prior state
The solution is to wrap the docker run command with an additional check and either do a run or a stop + run (effectively restart with the new image) based on the result.
This achieves both of my goals:
Avoids the error
Allows me to periodically update the image (say new build) and restart safely
#!/bin/bash
# Adapt the following 3 parameters to your specific case
NAME=myname
IMAGE=myimage
RUN_OPTIONS='-d -p 8080:80'
ContainerID="$(docker ps --filter name="$NAME" -q)"
if [[ ! -z "$ContainerID" ]]; then
    echo "$NAME already running as container $ContainerID: stopping ..."
    docker stop "$ContainerID"
fi
echo "Starting $NAME ..."
exec docker run --rm --name "$NAME" $RUN_OPTIONS "$IMAGE"
Now I can run (or stop + start, if already running) the $NAME docker container in an idempotent way, without worrying about this possible failure.

docker-compose.yml file behaves differently on ECS than local docker-compose

I have the following minimal docker-compose.yml:
worker:
  working_dir: /app
  image: <my-repo>.dkr.ecr.us-east-1.amazonaws.com/ocean-boiler:latest
  cpu_shares: 4096
  mem_limit: 524288000
  command: /bin/bash -c "bin/delayed-job --pool=*:1"
When I run it locally using docker-compose, love and happiness.
When I ask ECS to run it remotely, I get the following:
ecs-cli up
=>
time="2016-05-03T11:40:00-07:00" level=info msg="Stopped container..." container="<cid-redacted>/worker" desiredStatus=STOPPED lastStatus=STOPPED taskDefinition="ecscompose-spud:73"
Then we check the fallout using ps:
ecs-cli ps
=>
<cid-redacted>/worker STOPPED Reason: DockerStateError: [8] System error: exec: "/bin/bash -c \"bin/delayed-job --pool=*:1\"": stat /bin/bash -c "bin/delayed-job --pool=*:1": no such file or directory ecscompose-spud:73
I've been down the rabbit hole of not referring to any files without complete paths. My docker instance functions as intended whether I run it locally or on a remote machine; however, ordering it about with ecs-cli seems to be sad-panda business.
Just running it locally with docker-compose up functions as intended... Any help would be appreciated!
EDIT: Finally fixed. Using command: worked categorically badly for me; my docker containers now contain the command they need to run, and my advice is to avoid using command unless you really need it.
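For reference, if you do need command, one option (a sketch based on the compose file above, not part of the original fix) is the list form, so the whole string isn't treated as a single executable path the way the stat error above suggests:
worker:
  working_dir: /app
  image: <my-repo>.dkr.ecr.us-east-1.amazonaws.com/ocean-boiler:latest
  cpu_shares: 4096
  mem_limit: 524288000
  # list form: each element is passed as a separate argument, so nothing
  # tries to stat the whole quoted string as one file path
  command: ["/bin/bash", "-c", "bin/delayed-job --pool=*:1"]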