Is it possible to deploy Keycloak service on GCP's CloudRun? - google-cloud-platform

I'm trying to deploy a Keycloak service on Cloud Run using a Postgres database on Cloud SQL. The Dockerfile I'm using looks as follows:
# syntax=docker/dockerfile:1
FROM quay.io/keycloak/keycloak:latest
ENV DB_VENDOR postgres
ENV DB_ADDR <IP_ADDRESS>
ENV DB_DATABASE <DB_NAME>
ENV DB_SCHEMA public
ENV DB_USER postgres
ENV DB_PASSWORD postgres
ENV KEYCLOAK_USER admin
ENV KEYCLOAK_PASSWORD admin
ENV PROXY_ADDRESS_FORWARDING true
ENV PORT 8080
EXPOSE ${PORT}
Running this image on my localhost (via docker-compose) works smoothly, but once I deploy it using the GCP SDK it fails with the following error, which I've been unable to fix. Has anyone come across an issue like this one?
ERROR: (gcloud.run.services.update) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
[UPDATE]
After reviewing the logs I realized that I was having an error connecting to my Postgres database. However, even with two CPUs and 4Gi of memory, the deployed service responds much more slowly than the same configuration deployed on App Engine.
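Worth checking if the container never listens on the expected port: recent quay.io/keycloak/keycloak images are the Quarkus distribution, which ignores the legacy DB_VENDOR/DB_ADDR/KEYCLOAK_USER variables and listens on 8080 by default. A rough sketch of an equivalent Dockerfile under that assumption (placeholder values; variable names can differ between Keycloak versions, so check the docs for your image):

```dockerfile
FROM quay.io/keycloak/keycloak:latest

# Database settings (KC_* replaces the legacy DB_* variables)
ENV KC_DB=postgres
ENV KC_DB_URL_HOST=<IP_ADDRESS>
ENV KC_DB_USERNAME=postgres
ENV KC_DB_PASSWORD=postgres

# Initial admin user (KEYCLOAK_ADMIN replaces KEYCLOAK_USER)
ENV KEYCLOAK_ADMIN=admin
ENV KEYCLOAK_ADMIN_PASSWORD=admin

# Cloud Run sends traffic to $PORT (8080 by default) and terminates TLS,
# so run behind its proxy without strict hostname checks
ENV KC_HTTP_PORT=8080
ENV KC_PROXY=edge
ENV KC_HOSTNAME_STRICT=false
EXPOSE 8080

# The image's entrypoint is kc.sh; "start" runs the server in production mode
CMD ["start"]
```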

Related

Dockerized Spring Boot on AWS Beanstalk not accessible

I have deployed a Spring Boot app to AWS Beanstalk through a GitHub action, but it is not accessible. I set up Spring Boot to run on port 5000 and exposed it because, from my understanding, Beanstalk opens port 5000. Watching the AWS logs I see that Spring Boot correctly starts on port 5000. Below are my configuration files:
Dockerfile.dev
FROM eclipse-temurin:17-jdk-alpine
VOLUME /tmp
ADD /target/demoCI-CD-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
EXPOSE 5000
This is the link not working: http://dockerreact-env.eba-v2y3spbp.eu-west-3.elasticbeanstalk.com/test
Having a docker-compose.yml in the project, Beanstalk takes it into consideration, and that is where the port-mapping issue was. Below is the correct port mapping in docker-compose.yml.
version: "3"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "80:8080"

How to run prometheus on docker-compose and scrape django server running locally?

I am trying to set up Prometheus to monitor my Django application using django-prometheus and Docker Compose. I've been following some guides online but, unlike all the guides I've seen, I want to run Django locally for now (simply python manage.py runserver) and run Prometheus with docker-compose (and later add Grafana). I want to test this locally; later I will deploy it to Kubernetes, but that is for another episode.
My issue is making the locally running Django server reachable from the Prometheus container, because I get this error on the /targets page of the Prometheus dashboard:
Get "http://127.0.0.1:5000/metrics": dial tcp 127.0.0.1:5000: connect: connection refused
These are my docker-compose file and prometheus configuration:
docker-compose.yml
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - 9090:9090
prometheus.yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090
  - job_name: django-app
    static_configs:
      - targets:
          - localhost:8000
          - 127.0.0.1:8000
If you run the Django app outside of a container (and outside of Docker Compose), it will bind to one of the host's ports.
You need to get the Docker Compose prometheus service to bind to the host's network too.
You should be able to do this using network_mode: host under the prometheus service.
Then, prometheus will be able to access the Django app on the host port that it's using and prometheus will be accessible as localhost:9090 (without needing the ports section).
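A minimal sketch of that change (assuming a Linux host, where network_mode: host is supported, and the Django dev server on port 8000):

```yaml
version: '3.6'
services:
  prometheus:
    image: prom/prometheus
    network_mode: host   # share the host's network namespace
    volumes:
      - ./prometheus/:/etc/prometheus/
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    # no ports: section needed; the container binds directly to host ports
```

With this in place, the localhost:8000 target in prometheus.yaml resolves to the host itself, i.e. the locally running Django server.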

Docker image access on aws ec2

I created a Docker image of a Flask application and ran the following commands on an EC2 server:
docker build -t app .
docker run -p 80:80 app
The result seems to work as the server returns:
Serving Flask app "app" (lazy loading)
Environment: production
Debug mode: off
Running on http://127.0.0.1:5000/
How can I access http://127.0.0.1:5000/ on the EC2 server, or change the address so that I can see it?
Also, the Docker image is supposed to be running on port 80, but I don't see what role this port plays in the process.
I am following "Simple way to deploy machine learning models to cloud".
Update your docker run command, or add another port mapping, i.e.
docker run -p 5000:5000 app
OR
docker run -p 80:80 -p 5000:5000 app
First of all, the Python server has to run on 0.0.0.0; otherwise the Flask server will not accept any connections from outside.
If you deploy it on an EC2 instance, you'll probably need an Elastic Load Balancer or a public IP to expose it. With an ELB you can forward port 80 to port 5000.
And always remember to set -p 5000:5000; if not, you never expose that port.
Warning: if you use a public IP, configure your security groups correctly, with the right ports and CIDR ranges. Otherwise, your machine is at risk of being compromised.
I figured out I had to add the host and port to my Flask app. Substitute this:
if __name__ == '__main__':
    app.run()
with this:
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

Squid proxy in kubernetes

I have a squid proxy installed on an AWS EC2 instance and a pod running in a Kubernetes cluster.
I have added the environment variables in the deployment.yaml file to point at the squid proxy LB, as below:
env:
  - name: http_proxy
    value: "http://XXXX:3128"
  - name: https_proxy
    value: "http://XXXX:3128"
I can see the access denied response if I do a curl request from the Kubernetes pod console:
curl -k google.com
However, the request is not routed to the squid proxy when I access it from the application running in the Kubernetes pod.
Can anyone suggest where I am going wrong? How do I route all requests from the application running in the pod to the squid proxy?
You can try the following:
1) Fix this at the docker.service.d level by creating
/etc/systemd/system/docker.service.d/http-proxy.conf with the following content:
[Service]
Environment="HTTP_PROXY=http://XXXX:3128"
Environment="HTTPS_PROXY=http://XXXX:3128"
Don't forget to run afterwards:
systemctl daemon-reload
systemctl restart docker
2) If you use your own image, you can build it with the lines below in the Dockerfile. With this approach only the current container will use your squid proxy:
ENV http_proxy XXXX:3128
ENV https_proxy XXXX:3128
3) Another way is to look into /etc/default/docker (for Ubuntu):
cat /etc/default/docker
...
# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"
This way you will set up the proxy for ALL containers, not only a chosen one.
I have also found a kubernetes-squid solution on GitHub. Please take a look, though I feel this may not be exactly what you need.
Hope it helps.

Kafka on AWS ECS, how to handle advertised.host without known instance?

I'm trying to get Kafka running in an AWS ECS container. I already have this set up and working fine in my local Docker environment, using the spotify/kafka image.
To get this working locally, I needed to ensure the ADVERTISED_HOST environment variable was set to the container's external IP; otherwise, when I tried to connect, I just got connection refused.
My local docker-compose.yaml has this for the kafka container:
kafka:
  image: spotify/kafka
  hostname: kafka
  environment:
    - ADVERTISED_HOST=192.168.0.70
    - ADVERTISED_PORT=9092
  ports:
    - "9092:9092"
    - "2181:2181"
  restart: always
Now the problem is, I don't know what the IP is going to be, as I don't know which instance this will run on. So how do I set that environment variable?
Your entrypoint script will need to call the EC2 Metadata Service on startup (in this case http://169.254.169.254/latest/meta-data/local-hostname) to get the external-to-docker hostname and set that variable.
Sample:
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/local-hostname
ip-10-251-50-12.ec2.internal
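The wrapper could be sketched like this (illustrative only: the metadata endpoint is the one shown above, the helper's URL parameter exists purely so it can be tested against a local file, and the final start command depends on the image's actual entrypoint):

```shell
#!/bin/sh
# Sketch of an entrypoint wrapper for the spotify/kafka image. The helper
# resolves the instance's internal hostname from the EC2 metadata service;
# the URL can be overridden, e.g. with a file:// URL for local testing.

metadata_host() {
  curl -s "${1:-http://169.254.169.254/latest/meta-data/local-hostname}"
}

# In the real entrypoint you would then run (paths illustrative):
#   export ADVERTISED_HOST="$(metadata_host)"
#   exec /usr/bin/start-kafka.sh
```

Setting the variable in the entrypoint, rather than in the task definition, means each container picks up the hostname of whichever instance it happens to land on.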