I have deployed a Spring Boot app to AWS Elastic Beanstalk through a GitHub Action, but it is not accessible. I set up Spring Boot to run on port 5000 and exposed that port because, from my understanding, Beanstalk opens port 5000. Watching the AWS logs I can see that Spring Boot correctly starts on port 5000. Below are my configuration files:
Dockerfile.dev
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
# Copy the built jar into the image (COPY is preferred over ADD for local files)
COPY target/demoCI-CD-0.0.1-SNAPSHOT.jar app.jar
# The app is configured to listen on port 5000
EXPOSE 5000
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
This is the link not working: http://dockerreact-env.eba-v2y3spbp.eu-west-3.elasticbeanstalk.com/test
If there is a docker-compose.yml in the project, Beanstalk takes it into consideration, and that is where the port-mapping issue was. Below is the correct port mapping in docker-compose.yml.
version: "3"
services:
web:
build:
context: .
dockerfile: Dockerfile.dev
ports:
- "80:8080"
I am trying to set up Prometheus to monitor my Django application using django-prometheus and Docker Compose. I've been following some guides online but, unlike all the guides I've seen, I want to run Django locally for now, simply python manage.py runserver, and run Prometheus with docker-compose (and later add Grafana). I want to test it locally this way and later deploy it to Kubernetes, but that's for another episode.
My issue is getting the locally running Django server to communicate on the same network as the running Prometheus container, because I get this error on the /targets page of the Prometheus dashboard:
Get "http://127.0.0.1:5000/metrics": dial tcp 127.0.0.1:5000: connect: connection refused
These are my docker-compose file and Prometheus configuration:
docker-compose.yml
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
prometheus.yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090
  - job_name: django-app
    static_configs:
      - targets:
          - localhost:8000
          - 127.0.0.1:8000
If you run the Django app outside of a container (and outside of Docker Compose), it will bind to one of the host's ports.
You need the Docker Compose prometheus service to bind to the host's network too.
You should be able to do this with network_mode: host under the prometheus service.
Then Prometheus will be able to reach the Django app on whatever host port it's using, and Prometheus itself will be accessible at localhost:9090 (without needing the ports section).
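A minimal sketch of that change, assuming the Django dev server is running on the host at port 8000 (python manage.py runserver):
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    # Share the host's network stack; the UI stays reachable at localhost:9090,
    # and no ports mapping is needed (it is ignored under host networking)
    network_mode: host
With this, the django-app target in the Prometheus config can stay localhost:8000, since localhost now refers to the host itself. Note that network_mode: host only works this way on Linux; on Docker Desktop (Mac/Windows) the usual workaround is to scrape host.docker.internal instead.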
I'm trying to deploy a Keycloak service on Cloud Run using a Postgres database on Cloud SQL. The Dockerfile I'm using looks as follows:
# syntax=docker/dockerfile:1
FROM quay.io/keycloak/keycloak:latest
ENV DB_VENDOR postgres
ENV DB_ADDR <IP_ADDRESS>
ENV DB_DATABASE <DB_NAME>
ENV DB_SCHEMA public
ENV DB_USER postgres
ENV DB_PASSWORD postgres
ENV KEYCLOAK_USER admin
ENV KEYCLOAK_PASSWORD admin
ENV PROXY_ADDRESS_FORWARDING true
ENV PORT 8080
EXPOSE ${PORT}
Running this image on my localhost (via docker-compose) works smoothly without issues; however, once I deploy it using the GCP SDK it fails with the following error, which I've been unable to fix. Has anyone come across an issue like this one?
ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
[UPDATE]
After reviewing the logs I realized that I was having an error connecting to my Postgres database. However, even using two CPUs and 4Gi of memory, the deployed service seems very slow to respond compared with the same configuration deployed on App Engine.
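One thing that may explain the database error: the DB_VENDOR/DB_ADDR/KEYCLOAK_USER variables belong to the legacy jboss/keycloak (WildFly) image, while quay.io/keycloak/keycloak:latest has shipped the Quarkus distribution since Keycloak 17, which ignores them and expects KC_*-style settings plus an explicit start command. A minimal sketch under that assumption, keeping the same placeholder values:
# syntax=docker/dockerfile:1
FROM quay.io/keycloak/keycloak:latest
# Quarkus-distribution equivalents of the legacy variables (placeholders as above)
ENV KC_DB=postgres
ENV KC_DB_URL=jdbc:postgresql://<IP_ADDRESS>:5432/<DB_NAME>
ENV KC_DB_USERNAME=postgres
ENV KC_DB_PASSWORD=postgres
ENV KEYCLOAK_ADMIN=admin
ENV KEYCLOAK_ADMIN_PASSWORD=admin
# Keycloak sits behind Cloud Run's TLS-terminating proxy
ENV KC_PROXY=edge
# Cloud Run sends traffic to $PORT (8080 by default), so listen there
EXPOSE 8080
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--hostname-strict=false"]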
I have created a Spring Boot app following this tutorial. Following the tutorial, I managed to dockerize my app with the command:
docker-compose up
My docker-compose.yml file:
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/app
      - ~/.m2:/root/.m2
    working_dir: /app
    ports:
      - 8080:8080
    command: mvn clean spring-boot:run
When I check for Docker images afterwards I see the new image for the app. Now I want to deploy this app to AWS Elastic Beanstalk. When creating the environment with the Docker platform, what do I need to upload for the application code? How do I upload my Docker image to AWS? I can't find a good tutorial on how to deploy a dockerized app/image to AWS like this. I am new to Docker, so any help would be appreciated!
Update Oct 2020
Docker-compose is now officially supported by EB:
AWS Elastic Beanstalk Adds Support for Running Multi-Container Applications on AL2 based Docker Platform
Original answer below
EB does not support docker-compose. To run your container (is it a single- or multiple-container setup?) you have to use either the single- or multi-container EB platform.
In both cases you have to translate your docker-compose.yml into Dockerrun.aws.json. The file takes a different form depending on whether you are using a single- or multi-container setup.
How do I upload my docker image to aws?
If it's a single-container EB setup, you can just provide your Dockerfile to EB and it will take care of everything for you. For multi-container EB, you can store your images in a public repo such as Docker Hub, or a private repo such as ECR.
To translate your docker-compose.yml file into Dockerrun.aws.json, you can try the container-transform tool. It can be helpful, though you will most likely need to make further manual adjustments to the generated file.
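For reference, a minimal single-container Dockerrun.aws.json sketch; the image name here is a hypothetical Docker Hub repo, and the port matches the compose file above:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "your-dockerhub-user/your-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8080
    }
  ]
}
Beanstalk's reverse proxy forwards traffic to the first ContainerPort listed; expect to adjust the generated file by hand after running container-transform.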
As a part of my CI process, I am creating a docker-machine EC2 instance and running two Docker containers inside of it via docker-compose. The server container's test script attempts to connect to an AWS Elasticache Redis instance within the same VPC as the EC2 instance. When the test script runs I get the following error:
1) Storage
check cache connection
should return seeded value in redis:
Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/usr/src/app/test/scripts/test.js)
at listOnTimeout (internal/timers.js:549:17)
at processTimers (internal/timers.js:492:7)
Update: I can connect via redis-cli from the EC2 itself:
redis-cli -c -h ***.cache.amazonaws.com -p 6379 ping
> PONG
It looks like I can't connect to my Redis instance because my Docker container is using an IP that is not within the same VPC as my Elasticache instance. How can I set up my Docker config to use the same IP as the host machine while building my containers from remote images? Any help would be appreciated.
Relevant section of my docker-compose.yml:
version: '3.8'
services:
  server:
    build:
      context: ./
      dockerfile: Dockerfile
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    env_file: ./.env
Server container Dockerfile:
FROM node:12-alpine
# create app dir
WORKDIR /usr/src/app
# install dependencies
COPY package*.json ./
RUN npm install
# bundle app source
COPY . .
EXPOSE 8080 6379
CMD ["npm", "run", "dev"]
Elasticache redis SG inbound rules:
EC2 SG inbound rules:
I solved the problem through extensive trial and error. The major hint that pointed me in the right direction was found in the Docker docs:
By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network...
Elasticache instances are only accessible internally from their respective VPC. Based on my config, the docker container and the ec2 instance were running on 2 different IP addresses but only the EC2 IP was whitelisted to connect to Elasticache.
I had to bind the Docker container to the host EC2 instance's network in my docker-compose.yml by setting the container's network_mode to "host":
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    ports:
      - "8080:8080"
      - "6379:6379"
    network_mode: "host"
    env_file: ./.env
...
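One detail worth flagging: with network_mode: "host", Docker ignores the ports section entirely (Compose prints a warning about it), so the mapping can be dropped; a trimmed sketch:
version: '3.8'
services:
  server:
    image: docker.pkg.github.com/$GITHUB_REPOSITORY/$REPOSITORY_NAME-server:github_ci_$GITHUB_RUN_NUMBER
    container_name: $REPOSITORY_NAME-server
    command: npm run dev
    # Host networking: the container shares the EC2 instance's network stack,
    # so connections to Elasticache come from the whitelisted EC2 IP
    network_mode: "host"
    env_file: ./.env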
In my EC2 instance, I pulled my Docker images from my ECR: API + WEB.
I then start both of them up via Docker Compose
It seems to start fine, but I don't know why I can't seem to go to my API.
I can go to my site
When I go to my API, I see this
I already opened up port 3002 in my EC2 inbound rules.
docker-compose.yml
version: "2"
services:
iproject-api:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-api-script:latest'
ports:
- '3002:3002'
iproject-web:
image: '616934057156.dkr.ecr.us-east-2.amazonaws.com/bheng-web-script:latest'
ports:
- '80:8080'
links:
- iproject-api
Did I forget to restart any service?
The inbound rule looks fine. Check the status of your API container inside the EC2 instance: run docker logs {API_Container_Id} to inspect its startup output, and telnet localhost 3002 to see whether anything is listening on that port.
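A quick sequence for narrowing this down from inside the EC2 instance (standard Docker/shell commands; the container ID comes from docker ps):
# confirm both containers are running and the 3002 mapping is active
docker ps
# look for startup errors in the API container
docker logs {API_Container_Id}
# hit the API locally, bypassing the security group entirely
curl -v http://localhost:3002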