ECS Task running, but I can't access it in EC2 - amazon-web-services

My task's status is RUNNING and I can see the container on the EC2 instance with docker ps. But when I try to access the public IP and port, the browser says that nothing was found. I've already set a security group rule to allow access to all TCP ports. What else can I do?
Dockerfile:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 5000
ENV PERSONAPI_ConnectionStrings__Database="[db connection string]"
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY [".", "personApi/"]
RUN dotnet restore "personApi/personApi.csproj"
RUN dotnet build "personApi/personApi.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "personApi/personApi.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "personApi.dll"]
Task ports:
Docker on EC2 instance
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8277d3be6a74 public.ecr.aws/z0o4m5x8/padrao:latest "dotnet personApi.dll" 2 minutes ago Up 2 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->5000/tcp ecs-api-contato-2-api-contato-aeeefe81b1c8e7b75700

I finally figured out what was wrong. After some research I discovered that, by default, Swagger only works in development mode; that's why I could access it when running from Visual Studio but not after publishing.
So I changed this configuration and published again.
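For anyone hitting the same thing: the change usually amounts to moving the Swagger middleware out of the development-only branch in Startup.Configure. A minimal sketch, assuming the default ASP.NET Core 5 template with Swashbuckle (the endpoint title "personApi v1" is just a placeholder):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddSwaggerGen();   // Swashbuckle.AspNetCore
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        // In the default template these two calls sit inside the IsDevelopment() block,
        // which is why Swagger answers under Visual Studio but not in the published container.
        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "personApi v1"));

        app.UseRouting();
        app.UseAuthorization();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}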

Related

Unable to pass Variables to Dockerfile in Cloud Run

I'm trying to pass a runtime environment variable set under Cloud Run > Variables:
Environment variables:
Name: api_service_url
Value: https://someapiurl.com
Below is my Dockerfile. I also tried passing it as an ARG via the Container tab's container arguments.
In both cases the echo in the Dockerfile didn't show those values in the build log.
The container is an NGINX instance serving a React build. The app itself loads fine, but I am not able to pass the API URL to the NGINX proxy_pass directive in the nginx.conf file.
--set-env-vars=api_service_url=https://apiserverurl is set as well; it shows in the console, but the Dockerfile is not getting that value.
Here is my Dockerfile:
# build environment
FROM node:10-alpine as react-build
WORKDIR /app
COPY . ./
# server environment
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/configfile.template
ENV PORT 3000
ENV HOST 0.0.0.0
ARG api_service_url
RUN echo "========api_service_url========="$api_service_url
ENV API_SERVICE_URL $api_service_url
RUN echo "========API_SERVICE_URL========="$API_SERVICE_URL
RUN sh -c "envsubst '\$PORT \$API_SERVICE_URL' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
#RUN sh -c "envsubst < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
RUN cat /etc/nginx/conf.d/default.conf
COPY --from=react-build /app/webapp /usr/share/nginx/html
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
How can I pass the build arg to the container?
ARG is provided and used at the time of container image build. It has nothing to do with Cloud Run.
You can use environment variables on Cloud Run, and change your ENTRYPOINT to a script that does the necessary modifications based on the given environment variable at runtime.
Cloud Run variables are runtime variables; they are injected into the container as environment variables and are available to your code.
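A minimal sketch of that approach, assuming a hypothetical docker-entrypoint.sh placed next to the Dockerfile and the same configfile.template as above (envsubst is already available in the nginx:alpine image used here):

#!/bin/sh
# docker-entrypoint.sh - runs at container start, when Cloud Run has already injected PORT and API_SERVICE_URL
set -e
envsubst '$PORT $API_SERVICE_URL' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'

And in the Dockerfile, instead of running envsubst at build time:

COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]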
Changing the Dockerfile to use this command seems to work:
CMD sh -c "envsubst '$PORT $API_SERVICE_URL' < /etc/nginx/conf.d/site.template > /etc/nginx/conf.d/site.conf && exec nginx -g 'daemon off;'"
Earlier it was split into two commands:
RUN sh -c "envsubst '$PORT $API_SERVICE_URL' < /etc/nginx/conf.d/configfile.template > /etc/nginx/conf.d/default.conf"
CMD ["nginx", "-g", "daemon off;"]
I think the substitutions were not happening before nginx started, and as a result it gave an error.
The problem is now resolved. Thanks for your answers.

Connecting to local docker-compose container Windows 10

Very similar to this question, I cannot connect to my local docker-compose container from my browser (Firefox) on Windows 10 and have been troubleshooting for some time, but I cannot seem to find the issue.
Here is my docker-compose.yml:
version: "3"
services:
frontend:
container_name: frontend
build: ./frontend
ports:
- "3000:3000"
working_dir: /home/node/app/
environment:
DEVELOPMENT: "yes"
stdin_open: true
volumes:
- ./frontend:/home/node/app/
command: bash -c "npm start & npm run build"
my_app_django:
container_name: my_app_django
build: ./backend/
environment:
SECRET_KEY: "... not included ..."
command: ["./rundjango.sh"]
volumes:
- ./backend:/code
- media_volume:/code/media
- static_volume:/code/static
expose:
- "443"
my_app_nginx:
container_name: my_app_nginx
image: nginx:1.17.2-alpine
volumes:
- ./nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf
- static_volume:/home/app/web/staticfiles
- media_volume:/home/app/web/mediafiles
- ./frontend:/home/app/frontend/
ports:
- "80:80"
depends_on:
- my_app_django
volumes:
static_volume:
media_volume:
I can start the containers with docker-compose -f docker-compose.yml up -d and there are no errors when I check the logs with docker logs my_app_django or docker logs my_app_nginx. Additionally, doing docker ps shows all the containers running as they should.
The odd part about this issue is that on Linux everything runs without issue and I can find my app on localhost at port 80. The only thing I do differently on Windows is that I run dos2unix on my .sh files to ensure that they run properly. If I omit this step, then I get many errors, which leads me to believe that I have to do this.
If anyone could give guidance/advice as to what may I be doing incorrectly or missing altogether, I would be truly grateful. I am also happy to provide more details, just let me know. Thank you!
EDIT #1: As timur suggested, I did a docker run -p 80:80 -d nginx and here was the output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
19b56a66955145e4f59eefff57340b4affe5f7e0d82ad013742a60b479687c40
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint naughty_hoover (8c7b2fa4aef964899c366e1897e38727bb7e4c38431875c5cb8456567005f368): Bind for 0.0.0.0:80 failed: port is already allocated.
This might be the cause of the error but I don't really understand what needs to be done at this point.
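(Aside: the port clash itself is expected here - my_app_nginx from the compose file is already publishing 0.0.0.0:80, as the docker ps output in EDIT #3 shows. A sketch of freeing the port before repeating the test:)
docker ps                                    # PORTS column shows my_app_nginx already bound to 0.0.0.0:80
docker-compose -f docker-compose.yml down    # stop the stack, then retry: docker run -p 80:80 -d nginx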
EDIT #2: As requested, here are my Dockerfiles (one for backend, one for frontend)
Backend Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -y imagemagick libxmlsec1-dev pkg-config
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code
Frontend Dockerfile:
FROM node
WORKDIR /home/node/app/
COPY . /home/node/app/
RUN npm install -g react-scripts
RUN npm install
EDIT #3: When I do docker ps, this is what I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0da02ad8d746 nginx:1.17.2-alpine "nginx -g 'daemon of…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp my_app_nginx
070291de8362 my_app_frontend "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:3000->3000/tcp frontend
2fcf551ce3fa my_app_django "./rundjango.sh" 12 days ago Up About an hour 443/tcp my_app_django
As we established, you use Docker Toolbox, which is backed by VirtualBox rather than the default Hyper-V-based Docker for Windows. In this case you might think of it as a VirtualBox VM that actually runs Docker - so all volume mounts and port mappings apply to the docker-machine VM, not your host. The management tools (i.e. the Docker terminal and docker-compose) actually run on your host OS through MinGW.
Because of this, ports don't get bound on localhost by default (you can achieve this by editing the VM's properties in VirtualBox manually if you so desire - I just googled the second link for some picture tutorials). Surprisingly, the official documentation on this particular topic is pretty scarce - you can get a hint by looking at their examples though.
So in your case, the correct URL should be http://192.168.99.100
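A quick way to confirm the VM's IP, and optionally to forward the host's port 80 to the VM so localhost works too - a sketch, assuming the Docker Toolbox machine is named "default" and uses the stock NAT adapter:
docker-machine ip default
# -> typically 192.168.99.100; browse to http://<that IP> instead of http://localhost
VBoxManage controlvm "default" natpf1 "http,tcp,127.0.0.1,80,,80"
# adds a NAT port-forwarding rule so http://localhost:80 reaches the VM's port 80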
Another difference between these two solutions is volume mounts. Again, the documentation sort of hints at what it should be, but I can't point you to a more explicit source. As you have probably noticed, the terminal you use for all your Docker interactions encodes paths a bit differently (I presume because of that MinGW layer), and the converted paths get sent off to docker-machine - because it's Linux and would not handle Windows-style paths anyway.
From here I see a couple of avenues for you to explore:
Run your project from C:\Users\...\MyProject
As the documentation states, you get C:\Users mounted into /c/Users by default. So theoretically, if you run docker-compose from somewhere under your user folder, paths should automagically align - but since you are having this issue, you are probably running it from somewhere else.
Create another share
You can also create your own shared folder in VirtualBox. Run pwd in your terminal and note where the project root is. Then use the VirtualBox UI to create a share that aligns with your directory tree (for example, D:\MyProject\ should become /d/MyProject).
Hopefully this will not require you to change your docker-compose.yml either.
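A sketch of adding such a share from the host side (the machine name "default" and the D:\MyProject path are assumptions; the VM must be stopped first, or pass --transient):
docker-machine stop default
VBoxManage sharedfolder add default --name "d/MyProject" --hostpath "D:\MyProject" --automount
docker-machine start default
# boot2docker normally mounts a share named d/MyProject at /d/MyProject on boot;
# if it does not, mount it manually inside the VM (docker-machine ssh default).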
Alternatively, switch to Hyper-V Docker Desktop - and these particular issues will go away.
Bear in mind that Hyper-V will not coexist with VirtualBox, so this option might not be available to you if you need VirtualBox for something else.

Elastic Beanstalk - .ebextensions not running on deploy

I have a .NET Core application and I have packaged it for Docker.
My aim is to deploy this application to EB, but I need to run some commands after the deploy.
That's why I have created a Dockerfile:
# https://hub.docker.com/_/microsoft-dotnet-core
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
WORKDIR /
# copy csproj and restore as distinct layers
COPY . ./App/
WORKDIR /App/WebApi
RUN dotnet restore
RUN dotnet publish -c release -o /build --no-restore
WORKDIR /build
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080
ENTRYPOINT ["dotnet", "WebApi.dll"]
And I have a Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": "1"
}
And finally this is my .ebextensions/01_nginx.conf
commands:
  test_command:
    command: "touch /tmp/x.f"
Then I create an EB application:
$ eb init
and create an environment:
$ eb create
It is deploying my application successfully.
What is expected?
When I log in to my EC2 instance over SSH, I expect to see the /tmp/x.f file.
What is the problem?
I have tried several ways, and I'm sure that .ebextensions/01_nginx.conf is not running at all, because the /tmp/x.f file does not exist.
Notes :
I'm sure that the deployed zip file contains the .ebextensions/01_nginx.conf file.
I'm sure that it is not about git, because I'm including .ebignore in my root directory.
I can reach the endpoint without any problem; my application deploys successfully.
What is my mistake?
A probable reason is the wrong extension on your files in .ebextensions. It should be .config, not .conf:
Configuration files are YAML- or JSON-formatted documents with a .config file extension that you place in a folder named .ebextensions and deploy in your application source bundle.
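So a sketch of the fix is simply renaming the file and keeping the same contents:
# .ebextensions/01_nginx.config  (note the .config extension)
commands:
  test_command:
    command: "touch /tmp/x.f"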

Run node.js database migrations on Google Cloud SQL during Google Cloud Build

I would like to run database migrations written in node.js during the Cloud Build process.
Currently, the database migration command is being executed but it seems that the Cloud Build process does not have access to connect to Cloud SQL via an IP address with username/password.
In the case with Cloud SQL and Node.js it would look something like this:
steps:
  # Install Node.js dependencies
  - id: yarn-install
    name: gcr.io/cloud-builders/yarn
    waitFor: ["-"]
  # Install Cloud SQL proxy
  - id: proxy-install
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "wget https://storage.googleapis.com/cloudsql-proxy/v1.20.1/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy && chmod +x cloud_sql_proxy"
    waitFor: ["-"]
  # Migrate database schema to the latest version
  # https://knexjs.org/#Migrations-CLI
  - id: migrate
    name: gcr.io/cloud-builders/yarn
    entrypoint: sh
    args:
      - "-c"
      - "(./cloud_sql_proxy -dir=/cloudsql -instances=<CLOUD_SQL_CONNECTION> & sleep 2) && yarn run knex migrate:latest"
    timeout: "1200s"
    waitFor: ["yarn-install", "proxy-install"]
timeout: "1200s"
You would launch yarn install and download the Cloud SQL Proxy in parallel. Once these two steps are complete, you launch the proxy, wait 2 seconds and finally run yarn run knex migrate:latest.
For this to work you would need Cloud SQL Admin API enabled in your GCP project.
Here <CLOUD_SQL_CONNECTION> is your Cloud SQL instance connection name, which can be found here. The same name will be used in your SQL connection settings, e.g. host=/cloudsql/example:us-central1:pg13.
Also, make sure that the Cloud Build service account has "Cloud SQL Client" role in the GCP project, where the db instance is located.
As of tag 1.16 of gcr.io/cloudsql-docker/gce-proxy, the currently accepted answer no longer works. Here is a different approach that keeps the proxy in the same step as the commands that need it:
- id: cmd-with-proxy
  name: [YOUR-CONTAINER-HERE]
  timeout: 100s
  entrypoint: sh
  args:
    - -c
    - '(/workspace/cloud_sql_proxy -dir=/workspace -instances=[INSTANCE_CONNECTION_NAME] & sleep 2) && [YOUR-COMMAND-HERE]'
The proxy will automatically exit once the main process exits. Additionally, it'll mark the step as "ERROR" if either the proxy or the command given fails.
This does require the binary is in the /workspace volume, but this can be provided either manually or via a prereq step like this:
- id: proxy-install
  name: alpine:3.10
  entrypoint: sh
  args:
    - -c
    - 'wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.16/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy'
Additionally, this should work with TCP since the proxy will be in the same container as the command.
Use google-appengine/exec-wrapper. It is an image to do exactly this. Usage (see README in link):
steps:
- name: "gcr.io/google-appengine/exec-wrapper"
  args: ["-i", "gcr.io/my-project/appengine/some-long-name",
         "-e", "ENV_VARIABLE_1=value1", "-e", "ENV_2=value2",
         "-s", "my-project:us-central1:my_cloudsql_instance",
         "--", "bundle", "exec", "rake", "db:migrate"]
The -s sets the proxy target.
Cloud Build runs using a service account and it looks like you need to grant access to Cloud SQL for this account.
You can find additional info about setting service account permissions here.
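A sketch of granting that role from the CLI (PROJECT_ID and PROJECT_NUMBER are placeholders for your own project; the service account shown is the default Cloud Build one):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/cloudsql.client"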
Here's how to combine Cloud Build + Cloud SQL Proxy + Docker.
If you're running your database migrations/operations within a Docker container in Cloud Build, it won't be able to directly access your proxy, because Docker containers are isolated from the host machine.
Here's what I managed to get up and running:
- id: build
  # Build your application
  waitFor: ['-']
- id: install-proxy
  name: gcr.io/cloud-builders/wget
  entrypoint: bash
  args:
    - -c
    - wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.15/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy
  waitFor: ['-']
- id: migrate
  name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args:
    - -c
    - |
      /workspace/cloud_sql_proxy -dir=/workspace -instances=projectid:region:instanceid & sleep 2 && \
      docker run -v /workspace:/root \
        --env DATABASE_HOST=/root/projectid:region:instanceid \
        # Pass other necessary env variables like db username/password, etc.
        $_IMAGE_URL:$COMMIT_SHA
  timeout: '1200s'
  waitFor: [build, install-proxy]
Because our DB operations take place within the Docker container, I found the best way to provide access to Cloud SQL was to specify the Unix socket directory (-dir=/workspace) instead of exposing TCP port 5432.
Note: I recommend using the directory /workspace instead of /cloudsql for Cloud Build.
Then we mount the /workspace directory into the Docker container's /root directory, which is the default directory where your application code resides. When I tried to mount it anywhere other than /root, nothing seemed to happen (perhaps a permission issue with no error output).
Also: I noticed the proxy version 1.15 works well. I had issues with newer versions. Your mileage may vary.

Issues with Docker Swarm running TeamCity using rexray/ebs for drive persistence in AWS EBS

I'm quite new to Docker but have started thinking about production set-ups, hence needing to crack the challenge of data persistence when using Docker Swarm. I decided to start by creating my deployment infrastructure (TeamCity for builds and NuGet plus the "registry" [https://hub.docker.com/_/registry/] for storing images).
I've started with TeamCity. Obviously this needs data persistence in order to work. I am able to run TeamCity in a container with an EBS drive and everything looks like it is working just fine - TeamCity works through the set-up steps and my TeamCity volumes appear in AWS EBS - but then the worker node that TeamCity gets allocated to shuts down and the install process stops.
Here are all the steps I'm following:
Phase 1 - Machine Setup:
Create one AWS instance for master
Create two AWS instances for workers
All are 64-bit Ubuntu t2.micro instances
Create three elastic IPs for convenience and assign them to the above machines.
Install Docker on all nodes using this: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Install Docker Machine on all nodes using this: https://docs.docker.com/machine/install-machine/
Install Docker Compose on all nodes using this: https://docs.docker.com/compose/install/
Phase 2 - Configure Docker Remote on the Master:
$ sudo docker run -p 2375:2375 --rm -d -v /var/run/docker.sock:/var/run/docker.sock jarkt/docker-remote-api
Phase 3 - install the rexray/ebs plugin on all machines:
$ sudo docker plugin install --grant-all-permissions rexray/ebs REXRAY_PREEMPT=true EBS_ACCESSKEY=XXX EBS_SECRETKEY=YYY
[I lifted the correct values from AWS for XXX and YYY]
I test this using:
$ sudo docker volume create --driver=rexray/ebs --name=delete --opt=size=2
$ sudo docker volume rm delete
All three nodes are able to create and delete drives in AWS EBS with no issue.
Phase 4 - Setup the swarm:
Run this on the master:
$ sudo docker swarm init --advertise-addr eth0:2377
This gives the command to run on each of the workers, which looks like this:
$ sudo docker swarm join --token XXX 1.2.3.4:2377
These execute fine on the worker machines.
Phase 5 - Set up visualisation using Remote Powershell on my local machine:
$ $env:DOCKER_HOST="{master IP address}:2375"
$ docker stack deploy --with-registry-auth -c viz.yml viz
viz.yml looks like this:
version: '3.1'
services:
  viz:
    image: dockersamples/visualizer
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints:
          - node.role==manager
This works fine and allows me to visualise my swarm.
Phase 6 - Install TeamCity using Remote Powershell on my local machine:
$ docker stack deploy --with-registry-auth -c docker-compose.yml infra
docker-compose.yml looks like this:
version: '3'
services:
  teamcity:
    image: jetbrains/teamcity-server:2017.1.2
    volumes:
      - teamcity-server-datadir:/data/teamcity_server/datadir
      - teamcity-server-logs:/opt/teamcity/logs
    ports:
      - "80:8111"
volumes:
  teamcity-server-datadir:
    driver: rexray/ebs
  teamcity-server-logs:
    driver: rexray/ebs
[Incorporating NGINX as a proxy is a later step on my to do list.]
I can see both of the required volumes appear in AWS EBS and the container appears in my swarm visualisation.
However, after a while on the TeamCity progress screen, the worker machine hosting the TeamCity instance shuts down and the process abruptly ends.
I'm at a loss as to what to do next. I'm not even sure where to look for logs.
Any help gratefully received!
Cheers,
Steve.
I found a way to get logs for my service. First do this to list the services the stack creates:
$ sudo docker service ls
Then do this to see logs for the service:
$ sudo docker service logs --details {service name}
Now I just need to wade through the logs and see what went wrong...
Update
I found the following error in the logs:
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | [2018-05-14 17:38:56,849] ERROR - r.configs.dsl.DslPluginManager - DSL plugin compilation failed
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | exit code: 1
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | stdout: #
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # There is insufficient memory for the Java Runtime Environment to continue.
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # Native memory allocation (mmap) failed to map 42012672 bytes for committing reserved memory.
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # An error report file with more information is saved as:
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | # /opt/teamcity/bin/hs_err_pid125.log
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 |
infra_teamcity.1.bhiwz74gnuio#ip-172-31-18-103 | stderr: Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000e2dfe000, 42012672, 0) failed; error='Cannot allocate memory' (errno=12)
This makes me think it is a memory problem. I'm going to try this again with a larger AWS instance and see how I get on.
Update 2
Using a larger AWS instance solved the issue. :)
I then discovered that rexray/ebs doesn't like it when a container switches between hosts in my swarm - it duplicates the EBS volumes so that it keeps one per machine. My solution to this was to use an EFS drive in AWS and mount it to each possible host. I then updated the fstab file so that the drive is remounted on every reboot. Job done. Now to look into using a reverse proxy...
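For reference, such an /etc/fstab entry can look like this (a sketch; the filesystem ID, region and mount point are placeholders, and it assumes an NFS 4.1 client is installed on each host):
fs-12345678.efs.eu-west-1.amazonaws.com:/  /mnt/efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev  0  0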