I have some services running on the cluster, and the ALB is working fine. I want to configure SSL communication from the ALB/ingress to Keycloak 17.0.1 by creating a self-signed certificate and routing the traffic through port 8443 instead of HTTP (80). Keycloak is built from the Docker image below, and the Docker Compose file exposes port 8443. I also need to make sure the keystore is defined as a Kubernetes PVC within the deployment instead of a Docker volume.
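Roughly what I have in mind (a sketch only; the annotations assume the AWS Load Balancer Controller, and keycloak-ingress, keycloak-service and the /opt/keycloak/certs mount path are placeholders rather than my real names):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/backend-protocol: HTTPS   # re-encrypt from the ALB to the pod on 8443
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak-service
                port:
                  number: 8443
# and on the Keycloak container side (also a sketch), serve HTTPS from the PVC-backed volume:
#   env:
#     - name: KC_HTTPS_CERTIFICATE_FILE
#       value: /opt/keycloak/certs/tls.crt
#     - name: KC_HTTPS_CERTIFICATE_KEY_FILE
#       value: /opt/keycloak/certs/tls.key
#   volumeMounts:
#     - name: keycloak-pv-volume
#       mountPath: /opt/keycloak/certs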
Below is the deployment file:
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "keycloak"
  namespace: "test"
spec:
  selector:
    matchLabels:
      app: "keycloak"
  replicas: 3
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "keycloak"
    spec:
      # the PVC-backed volume belongs in the pod template spec, not at the top level of the Deployment spec
      volumes:
        - name: keycloak-pv-volume
          persistentVolumeClaim:
            claimName: keycloak-pv-claim
      containers:
        - name: "keycloak"
          image: "quay.io/keycloak/keycloak:17.0.1"
          imagePullPolicy: "Always"
          livenessProbe:
            httpGet:
              path: /realms/master
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 300
            periodSeconds: 30
          env:
            - name: "KEYCLOAK_USER"
              value: "admin"
            - name: "KEYCLOAK_PASSWORD"
              value: "admin"
            - name: "PROXY_ADDRESS_FORWARDING"
              value: "true"
            - name: HTTPS_PROXY
              value: "https://engineering-exodos*********:3128"
            - name: KC_HIDE_REALM_IN_ISSUER
              value: "************"
          ports:
            - name: "http"
              containerPort: 8080
            - name: "https"
              containerPort: 8443
The self-signed certificate is created as below (.groovy):
def secretPatch = 'kc-alb-secret-patch.yaml'
sh loadMixins() + """
    openssl req -newkey rsa:4096 \\
        -x509 \\
        -sha256 \\
        -days 395 \\
        -nodes \\
        -out keycloak_alb.crt \\
        -keyout keycloak_alb.key \\
        -subj "/C=US/ST=MN/L=MN/O=Security/OU=IT Department/CN=www.gateway.com"
    # Kubernetes object names may not contain underscores, hence keycloak-alb-secret;
    # shell substitutions are escaped with \\\$ so Groovy does not try to interpolate them
    EXISTS=\$(kubectl -n istio-system get secret --ignore-not-found keycloak-alb-secret)
    if [ -z "\$EXISTS" ]; then
        kubectl -n istio-system create secret tls keycloak-alb-secret --key="keycloak_alb.key" --cert="keycloak_alb.crt"
    else
        # base64 needs the '-w0' flag to avoid wrapping long lines
        echo -e "data:\\n tls.key: \$(base64 -w0 keycloak_alb.key)\\n tls.crt: \$(base64 -w0 keycloak_alb.crt)" > ${secretPatch}
        kubectl -n istio-system patch secret keycloak-alb-secret -p "\$(cat ${secretPatch})"
    fi
"""
}
Dockerfile:
FROM quay.io/keycloak/keycloak:17.0.1 as builder
ENV KC_METRICS_ENABLED=true
ENV KC_CACHE=ispn
ENV KC_DB=postgres
USER keycloak
RUN chmod -R 755 /opt/keycloak \
&& chown -R keycloak:keycloak /opt/keycloak
COPY ./keycloak-benchmark-dataset.jar /opt/keycloak/providers/keycloak-benchmark-dataset.jar
COPY ./ness-event-listener.jar /opt/keycloak/providers/ness-event-listener.jar
# RUN curl -o /opt/keycloak/providers/ness-event-listener-17.0.0.jar https://repo1.uhc.com/artifactory/repo/com/optum/dis/keycloak/ness_event_listener/17.0.0/ness-event-listener-17.0.0.jar
# Changes for hiding realm in issuer claim in access token
COPY ./keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
RUN /opt/keycloak/bin/kc.sh build
FROM quay.io/keycloak/keycloak:17.0.1
COPY --from=builder /opt/keycloak/lib/quarkus/ /opt/keycloak/lib/quarkus/
COPY --from=builder /opt/keycloak/providers /opt/keycloak/providers
COPY --from=builder /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar /opt/keycloak/lib/lib/main/org.keycloak.keycloak-services-17.0.1.jar
COPY --chown=keycloak:keycloak cache-ispn-remote.xml /opt/keycloak/conf/cache-ispn-remote.xml
COPY conf /opt/keycloak/conf/
# Elastic APM integration changes
USER root
RUN mkdir -p /opt/elastic/apm
RUN chmod -R 755 /opt/elastic/apm
RUN curl -L https://repo1.uhc.com/artifactory/Thirdparty-Snapshots/com/elastic/apm/agents/java/current/elastic-apm-agent.jar -o /opt/elastic/apm/elastic-apm-agent.jar
ENV ES_AGENT=" -javaagent:/opt/elastic/apm/elastic-apm-agent.jar"
ENV ELASTIC_APM_SERVICE_NAME="AIDE_007"
ENV ELASTIC_APM_SERVER_URL="https://nonprod.uhc.com:443"
ENV ELASTIC_APM_VERIFY_SERVER_CERT="false"
ENV ELASTIC_APM_ENABLED="true"
ENV ELASTIC_APM_LOG_LEVEL="WARN"
ENV ELASTIC_APM_ENVIRONMENT="Test-DIS"
CMD export JAVA_OPTS="$JAVA_OPTS $ES_AGENT"
USER keycloak
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]
Docker Compose:
services:
  keycloak:
    image: quay.io/keycloak/keycloak:17.0.1
    command: start-dev
    environment:
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
      # the paths must match where the certificate and key are mounted below
      KC_HTTPS_CERTIFICATE_FILE: /opt/keycloak/conf/tls.crt
      KC_HTTPS_CERTIFICATE_KEY_FILE: /opt/keycloak/conf/tls.key
    ports:
      - 8080:8080
      - 8443:8443
    volumes:
      - ./localhost.crt:/opt/keycloak/conf/tls.crt
      - ./localhost.key:/opt/keycloak/conf/tls.key
What is the best-practice way to route the traffic over SSL from the ALB to Keycloak?
A small question regarding Redis deployed in AWS (not AWS ElastiCache) and an issue connecting to it.
Here is the setup of the Redis deployed in AWS (pasting only the Kubernetes StatefulSet and Service):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
        - name: config
          image: redis:7.0.5-alpine
          command: [ "sh", "-c" ]
          args:
            - |
              cp /tmp/redis/redis.conf /etc/redis/redis.conf
              echo "finding master..."
              MASTER_FDQN=`hostname -f | sed -e 's/redis-[0-9]\./redis-0./'`
              if [ "$(redis-cli -h sentinel -p 5000 ping)" != "PONG" ]; then
                echo "master not found, defaulting to redis-0"
                if [ "$(hostname)" = "redis-0" ]; then
                  echo "this is redis-0, not updating config..."
                else
                  echo "updating redis.conf..."
                  echo "slaveof $MASTER_FDQN 6379" >> /etc/redis/redis.conf
                fi
              else
                echo "sentinel found, finding master"
                MASTER="$(redis-cli -h sentinel -p 5000 sentinel get-master-addr-by-name mymaster | grep -E '(^redis-\d{1,})|([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})')"
                echo "master found : $MASTER, updating redis.conf"
                echo "slaveof $MASTER 6379" >> /etc/redis/redis.conf
              fi
          volumeMounts:
            - name: redis-config
              mountPath: /etc/redis/
            - name: config
              mountPath: /tmp/redis/
      containers:
        - name: redis
          image: redis:7.0.5-alpine
          command: ["redis-server"]
          args: ["/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - name: data
              mountPath: /data
            - name: redis-config
              mountPath: /etc/redis/
      volumes:
        - name: redis-config
          emptyDir: {}
        - name: config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: nfs-1
        resources:
          requests:
            storage: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
      name: redis
  selector:
    app: redis
  type: LoadBalancer
The pods are healthy; I can exec into them and perform operations fine. Here is the output of get all:
NAME READY STATUS RESTARTS AGE
pod/redis-0 1/1 Running 0 22h
pod/redis-1 1/1 Running 0 22h
pod/redis-2 1/1 Running 0 22h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis LoadBalancer 192.168.45.55 10.51.5.2 6379:30315/TCP 26h
NAME READY AGE
statefulset.apps/redis 3/3 22h
Here is the describe of the service:
Name: redis
Namespace: Namespace
Labels: <none>
Annotations: <none>
Selector: app=redis
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 192.168.22.33
IPs: 192.168.22.33
LoadBalancer Ingress: 10.51.5.2
Port: redis 6379/TCP
TargetPort: 6379/TCP
NodePort: redis 30315/TCP
Endpoints: 192.xxx:6379,192.xxx:6379,192.xxx:6379
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 68s metallb-controller Assigned IP ["10.51.5.2"]
Normal nodeAssigned 58s (x5 over 66s) metallb-speaker announcing from node "someaddress.com" with protocol "bgp"
Normal nodeAssigned 58s (x5 over 66s) metallb-speaker announcing from node "someaddress.com" with protocol "bgp"
I then try to connect to it, i.e. to insert some data, with a very straightforward Spring Boot application. The application has no business logic; it just tries to insert data.
Here are the relevant parts:
@Configuration
public class RedisConfiguration {

    @Bean
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory() {
        return new LettuceConnectionFactory("10.51.5.2", 30315);
    }
}

@Repository
public class RedisRepository {

    private final ReactiveRedisOperations<String, String> reactiveRedisOperations;

    public RedisRepository(ReactiveRedisOperations<String, String> reactiveRedisOperations) {
        this.reactiveRedisOperations = reactiveRedisOperations;
    }

    public Mono<RedisPojo> save(RedisPojo redisPojo) {
        return reactiveRedisOperations.opsForValue().set(redisPojo.getInput(), redisPojo.getOutput()).map(__ -> redisPojo);
    }
}
Each time I try to write data, I get this exception:
2022-12-02T20:20:08.015+08:00 ERROR 1184 --- [ctor-http-nio-3] a.w.r.e.AbstractErrorWebExceptionHandler : [8f16a752-1] 500 Server Error for HTTP POST "/save"
org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1602) ~[spring-data-redis-3.0.0.jar:3.0.0]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Handler com.redis.controller.RedisController#test(RedisRequest) [DispatcherHandler]
*__checkpoint ⇢ HTTP POST "/save" [ExceptionHandlingWebHandler]
Original Stack Trace:
at org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory$ExceptionTranslatingConnectionProvider.translateException(LettuceConnectionFactory.java:1602) ~[spring-data-redis-3.0.0.jar:3.0.0]
Caused by: io.lettuce.core.RedisConnectionException: Unable to connect to 10.51.5.2/<unresolved>:30315
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:78) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.RedisConnectionException.create(RedisConnectionException.java:56) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.AbstractRedisClient.getConnection(AbstractRedisClient.java:350) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
at io.lettuce.core.RedisClient.connect(RedisClient.java:216) ~[lettuce-core-6.2.1.RELEASE.jar:6.2.1.RELEASE]
Caused by: io.netty.channel.ConnectTimeoutException: connection timed out: /10.51.5.2:30315
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:261) ~[netty-transport-4.1.85.Final.jar:4.1.85.Final]
at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98) ~[netty-common-4.1.85.Final.jar:4.1.85.Final]
This is particularly puzzling, because I am quite sure the code of the Spring Boot app is working. When I point new LettuceConnectionFactory("10.51.5.2", 30315); to:
a regular Redis on my laptop ("localhost", 6379),
a dockerized Redis on my laptop,
a dockerized Redis on prem,
all of them work fine.
Therefore, I am quite puzzled about what I did wrong with the setup of this Redis in AWS. What should I do in order to connect to it properly?
May I get some help please?
Thank you
By default, Redis binds itself to the IP addresses 127.0.0.1 and ::1 and does not accept connections on non-local interfaces. Chances are high that this is your main issue, and you may want to review your redis.conf file to bind Redis to the interface you need, or to the generic * -::*, as explained in the comments of the config file itself.
That being said, Redis also does not accept connections on non-local interfaces if the default user has no password, a security layer named protected mode. Thus you should either give your default user a password or disable protected mode in your redis.conf file.
Not sure if this applies to your case but, as a side note, I would suggest always avoiding exposing Redis to the Internet.
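For reference, a minimal redis.conf fragment along those lines (the password is just a placeholder):
# listen on all interfaces instead of only 127.0.0.1 / ::1
bind * -::*
# either give the default user a password...
requirepass change-me-please
# ...or, alternatively, disable protected mode instead (less safe):
# protected-mode no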
You are mixing two things.
To make this service reachable from pods in other namespaces, you do not need an external load balancer; you can just use the redis.namespace-name:6379 DNS name and it will just work. Such a DNS name exists for every Service you create (but it only works inside the Kubernetes cluster).
Kubernetes will make sure that your traffic is routed to the proper pods (assuming there is more than one).
If you want to expose Redis from outside of Kubernetes, then you need to make sure there is connectivity from the outside, and then you need a network load balancer that forwards traffic to your Kubernetes service (in your case the NodePort, so you need an NLB with the EKS worker nodes:30315 as targets).
If your worker nodes have public IPs and their SecurityGroups allow connecting to them directly, you could try connecting to a worker node's IP directly just to test things out (without the LB).
And regardless of your setup, you can always create a proxy via kubectl:
kubectl port-forward -n redisNS svc/redis 6379:6379
and connect from the Spring Boot app to localhost:6379.
How do you want to connect from the app to Redis in the final setup?
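If the app ends up running in the same cluster, a rough sketch of the only change needed on the app side (assuming the Service is called redis and lives in a namespace named redis-ns; adjust to your names):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class RedisConfiguration {

    @Bean
    public ReactiveRedisConnectionFactory reactiveRedisConnectionFactory() {
        // in-cluster DNS name of the Service (<service>.<namespace>) and the Service port, not the NodePort
        return new LettuceConnectionFactory("redis.redis-ns", 6379);
    }
}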
This is a bit complicated, but I will try to explain as much as possible for clarity; any help is much appreciated.
I use Azure DevOps to do deployments into EKS using Helm, and everything is working fine. I now have a requirement to add a certificate to the pod.
For this I have a .der file with me, which I should copy to the pods (replicas: 3), run keytool to import the certificate, and put it in an appropriate location before my application starts.
My setup is: I have a Dockerfile, I call a shell script inside the Dockerfile, and I do helm install using a deployment.yml file.
I have now tried using a configmap to mount the .der file used to import the cert, and then I execute some Unix commands to import the certificate, but the Unix commands are not working. Can someone help here?
Dockerfile:
FROM amazonlinux:2.0.20181114
RUN yum install -y java-1.8.0-openjdk-headless
ARG JAR_FILE='**/*.jar'
ADD ${JAR_FILE} car_service.jar
ADD entrypoint.sh .
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"] # split ENTRYPOINT wrapper from
CMD ["java", "-jar", "/car_service.jar"] # main CMD
entrypoint.sh
#!/bin/sh
# entrypoint.sh
# Check: $env_name must be set
if [ -z "$env_name" ]; then
echo '$env_name is not set; stopping' >&2
exit 1
fi
# Install aws client
yum -y install curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
yum -y install unzip
unzip awscliv2.zip
./aws/install
# Retrieve secrets from Secrets Manager
export KEYCLOAKURL=`aws secretsmanager get-secret-value --secret-id myathlon/$env_name/KEYCLOAKURL --query SecretString --output text`
cd /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin
keytool -noprompt -importcert -alias msfot-$(date +%Y%m%d-%H%M) -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
mkdir /data/keycloak/
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin/msfot.jks /data/keycloak/
cd /
# Run the main container CMD
exec "$#"
My configmap:
kubectl create configmap msfot1 --from-file=msfot.der
My deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      volumes:
        - name: msfot1
          configMap:
            name: msfot1
            items:
              - key: msfot.der
                path: msfot.der
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: msfot1
              mountPath: /tmp/msfot.der
              subPath: msfot.der
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: env_name
              value: {{ .Values.environmentName }}
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.profile }}"
My values.yml file is:
replicaCount: 3
# pass repository and targetPort values during runtime
image:
  repository:
  tag: "latest"
  pullPolicy: Always
service:
  type: ClusterIP
  port: 80
  targetPort:
profile: "aws"
environmentName: dev
I have 2 questions here:
In my entrypoint.sh, the keytool, mkdir, cp and cd commands are not getting executed (so the certificate is not getting added to the keystore).
As you know, this setup works for all environments because I use the same deployment.yml file, though I have a different values.yml file for each environment. I want this certificate import to happen only in acc and prod, not in dev and test (see the sketch below for what I have in mind).
Is there any other, easier method of doing this rather than the configmap/deployment.yml approach?
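For the environment-specific part, what I was thinking of is gating the certificate volume behind a flag in values.yml, for example a hypothetical importCertificate value set to true only in the acc and prod values files, roughly like this in deployment.yml (sketch only):
      {{- if .Values.importCertificate }}
      volumes:
        - name: msfot1
          configMap:
            name: msfot1
            items:
              - key: msfot.der
                path: msfot.der
      {{- end }}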
Please advise.
Thanks
I have 2 Docker images with the gcloud SDK, and my entrypoint script performs some checks using gcloud, like the following:
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
echo "Subscription not found, exited with non-zero status $result"
exit $result
fi
I am running these in GKE.
I have a different GCP service account for each Docker image, each connected to a GKE service account using Workload Identity.
My problem is that both deployments don't succeed at the same time. The one which runs first succeeds and the other fails with the following error; it seems to be something to do with the GKE/GCP credentials.
I get the following error:
gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) You do not currently have an active account selected.
Please run:
$ gcloud auth login
to obtain new credentials.
If you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
Even if I make the following changes, I still don't get through:
gcloud config set account sa@project.iam.gserviceaccount.com
gcloud pubsub subscriptions describe $GCP_SUB_NAME --quiet
result="$?"
if [ "$result" -ne 0 ]; then
echo "Subscription not found, exited with non-zero status $result"
exit $result
fi
The error I get now:
gcloud config set account sa@project.iam.gserviceaccount.com
Updated property [core/account].
+ gcloud pubsub subscriptions describe local-test-v1 --quiet
ERROR: (gcloud.pubsub.subscriptions.describe) Your current active account [sa@project.iam.gserviceaccount.com] does not have any valid credentials
Please run:
$ gcloud auth login
to obtain new credentials.
For service account, please activate it first:
$ gcloud auth activate-service-account ACCOUNT
I don't want to use the GCP client libraries, as I want to keep this lightweight, so either gcloud or curl is the best option.
Can I use gcloud in GKE without the key file?
Can I call googleapis via curl without passing bearer token or how shall I get that in the docker container?
Any ideas... Thanks...
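For context, my understanding is that with Workload Identity the pod can fetch credentials from the GKE metadata server instead of a key file, roughly along these lines (a sketch, not yet verified on my side; it reuses the GCP_PROJECT_ID and GCP_SUB variables from the job template below):
# fetch an access token for the bound GCP service account from the metadata server
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
  | jq -r '.access_token')

# call the Pub/Sub REST API directly with curl instead of gcloud
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://pubsub.googleapis.com/v1/projects/${GCP_PROJECT_ID}/subscriptions/${GCP_SUB}"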
Note#1: workload identity
resource "google_service_account_iam_member" "workload_identity_iam" {
member = "serviceAccount:${var.gcp_project}.svc.id.goog[${var.kubernetes_namespace}/${var.kubernetes_service_account_name}]"
role = "roles/iam.workloadIdentityUser"
service_account_id = google_service_account.sa.name
depends_on = [google_project_iam_member.pubsub_subscriber_iam, google_project_iam_member.bucket_object_admin_iam] }
Note#2: GKE SAs
Name: sa1
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa1@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa1-token-shj9w
Tokens: sa1-token-shj9w
Events: <none>
Name: sa2
Namespace: some-namespace
Labels: <none>
Annotations: iam.gke.io/gcp-service-account: sa2@project.iam.gserviceaccount.com
Image pull secrets: <none>
Mountable secrets: sa2-token-dkhdl
Tokens: sa2-token-dkhdl
Events: <none>
Note#3: job template for container
apiVersion: batch/v1
kind: Job
metadata:
  namespace: some-namespace
  name: check
  labels:
    helm.sh/chart: check-0.1.0
    app.kubernetes.io/name: check
    app.kubernetes.io/instance: check
    app: check
    app.kubernetes.io/version: "0.1.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-weight: "-4"
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    metadata:
      name: check
      labels:
        app.kubernetes.io/name: check
        app.kubernetes.io/instance: check
        app: check
    spec:
      restartPolicy: Never
      terminationGracePeriodSeconds: 0
      serviceAccountName: sa1
      securityContext:
        {}
      containers:
        - name: check
          securityContext:
            {}
          image: "eu.gcr.io/some-project/check:500c4166"
          imagePullPolicy: Always
          env:
            # Define the environment variables
            - name: GCP_PROJECT_ID
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpProjectID
            - name: GCP_SUB
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpSubscriptionName
            - name: GCP_BUCKET
              valueFrom:
                configMapKeyRef:
                  name: check
                  key: gcpBucket
          resources:
            limits:
              cpu: 1000m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
Docker image:
FROM ubuntu:18.04
COPY /checks/pre/ /checks/pre/
ENV HOME /checks/pre/
# Install needed packages
RUN apt-get update && \
apt-get -y install --no-install-recommends curl \
iputils-ping \
tar \
jq \
python \
ca-certificates \
&& mkdir -p /usr/local/gcloud && cd /usr/local/gcloud \
&& curl -o google-cloud-sdk.tar.gz -L -O https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz \
&& tar -xzf google-cloud-sdk.tar.gz \
&& rm -f google-cloud-sdk.tar.gz \
&& ./google-cloud-sdk/install.sh --quiet \
&& mkdir -p /.config/gcloud && chmod 775 -R /checks/pre /.config/gcloud \
&& apt-get autoclean \
&& apt-get autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
WORKDIR /checks/pre
USER 1001
ENTRYPOINT [ "/checks/pre/entrypoint.sh" ]
I have a minikube cluster running with a deployment of a Django app. Until today, we used the development server that Django spins up. Now I have added another Nginx container so that we can deploy the Django app properly, because I read that Django's built-in server is not really meant for production. After reading some documentation and blogs, I configured the deployment.yaml file, and it is running fine now.
The problem is that no static content is being served. This is because the static content is in the Django container and not in the Nginx container. (I don't know whether they can share a volume or not; please clarify this doubt or misconception.) What is the best way to serve my static content?
This is my deployment file's spec:
spec:
  containers:
    - name: user-django-app
      image: my-django-app:latest
      ports:
        - containerPort: 8000
      env:
        - name: POSTGRES_HOST
          value: mysql-service
        - name: POSTGRES_USER
          value: admin
        - name: POSTGRES_PASSWORD
          value: admin
        - name: POSTGRES_PORT
          value: "8001"
        - name: POSTGRES_DB
          value: userdb
    - name: user-nginx
      image: nginx
      volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: nginx-config
      configMap:
        name: nginx-config
I believe that
server {
    location /static {
        alias /var/www/djangoapp/static;
    }
}
needs to be changed, but I don't know what I should write. Also, how can I run python manage.py migrate and python manage.py collectstatic as soon as the deployment is made?
Kindly provide resources/docs/blogs which will assist me in doing this. Thank you!
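What I had in mind for the migrate/collectstatic part is something along these lines for the Django container (a sketch only; it assumes manage.py is in the image's working directory, STATIC_ROOT points at the shared volume, and runserver is still being used):
      containers:
        - name: user-django-app
          image: my-django-app:latest
          command: ["/bin/sh", "-c"]
          # run migrations and collect static files into the shared volume before starting the server
          args:
            - >-
              python manage.py migrate &&
              python manage.py collectstatic --noinput &&
              python manage.py runserver 0.0.0.0:8000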
After @willrof's answer, this is my current YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-deployment
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      micro-service: user
  template:
    metadata:
      name: user
      labels:
        app: web
        micro-service: user
    spec:
      containers:
        - name: user-django-app
          image: docker.io/dev1911/drone_plus_plus_user:latest
          ports:
            - containerPort: 8000
          env:
            - name: POSTGRES_HOST
              value: mysql-service
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: admin
            - name: POSTGRES_PORT
              value: "8001"
            - name: POSTGRES_DB
              value: userdb
          volumeMounts:
            - name: shared
              mountPath: /shared
          command: ["/bin/sh", "-c"]
          args: ["apt-get install nano"]
        - name: user-nginx
          image: nginx
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: shared
              mountPath: /var/www/user/static
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
        - name: shared
          emptyDir: {}
And the nginx-config file is:
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 4096; ## Default: 1024
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format ltsv 'domain:$host\t'
                    'host:$remote_addr\t'
                    'user:$remote_user\t'
                    'time:$time_local\t'
                    'method:$request_method\t'
                    'path:$request_uri\t'
                    'protocol:$server_protocol\t'
                    'status:$status\t'
                    'size:$body_bytes_sent\t'
                    'referer:$http_referer\t'
                    'agent:$http_user_agent\t'
                    'response_time:$request_time\t'
                    'cookie:$http_cookie\t'
                    'set_cookie:$sent_http_set_cookie\t'
                    'upstream_addr:$upstream_addr\t'
                    'upstream_cache_status:$upstream_cache_status\t'
                    'upstream_response_time:$upstream_response_time';

    access_log /var/log/nginx/access.log ltsv;

    sendfile on;
    tcp_nopush on;
    server_names_hash_bucket_size 128; # this seems to be required for some vhosts
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /static {
            alias /var/www/user/static;
        }
    }

    # include /etc/nginx/conf.d/*.conf;
}
I did not write this config myself but found it and edited it for my use.
After our chat in the comments, you told me you are having difficulties with using command and args.
Here is an example called two-containers.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
    - name: python
      image: python
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args: ["-c", "apt-get update && apt-get install -y curl && mkdir /curl-folder && cp /usr/bin/curl /curl-folder && cp -r /curl-folder /pod-data/"]
    - name: user-nginx
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /tmp/pod-data
  volumes:
    - name: shared-data
      emptyDir: {}
The python container will start up, run apt-get update, then apt-get install -y curl, then mkdir /curl-folder, then copy /usr/bin/curl to /curl-folder, and finally copy the folder /curl-folder to the /pod-data shared mounted volume.
A few observations:
The container image has to contain the binary mentioned in command (like /bin/sh in python).
Try using && to chain commands consecutively in the args field; it's easier to test and deploy.
Reproduction:
$ kubectl apply -f two-container-volume.yaml
pod/two-containers created
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
two-containers 2/2 Running 0 7s
two-containers 1/2 NotReady 0 30s
$ kubectl describe pod two-containers
...
Containers:
python:
Container ID: docker://911462e67d7afab9bca6cdaea154f9229c80632efbfc631ddc76c3d431333193
Image: python
Command:
/bin/sh
Args:
-c
apt-get update && apt-get install -y curl && mkdir /curl-folder && cp /usr/bin/curl /curl-folder && cp -r /curl-folder /pod-data/
State: Terminated
Reason: Completed
Exit Code: 0
user-nginx:
State: Running
The python container executed and completed correctly; now let's check the files from inside the nginx container.
$ kubectl exec -it two-containers -c user-nginx -- /bin/bash
root@two-containers:/# cd /tmp/pod-data/curl-folder/
root@two-containers:/tmp/pod-data/curl-folder# ls
curl
If you need further help, post the YAML with the command and args you are trying to run and we can help you troubleshoot the syntax.
If it's a django app, consider using whitenoise http://whitenoise.evans.io/en/stable/ to serve your static content from minikube or kubernetes.
This is straightforward advice, but I had to search quite a bit before someone mentioned it.
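A minimal sketch of the usual WhiteNoise wiring in settings.py (setting names can differ between Django/WhiteNoise versions, so check the WhiteNoise docs for yours; BASE_DIR is assumed to be a pathlib Path):
# settings.py (sketch)
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise should sit right after SecurityMiddleware
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # where collectstatic puts files
# optional: compressed, cache-busting storage
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"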
I'm trying to use docker-compose and Kubernetes as two different solutions to set up a Django API served by Gunicorn (as the web server) and Nginx (as the reverse proxy). Here are the key files:
default.tmpl (nginx) - this is converted to default.conf when the environment variable is filled in:
upstream api {
    server ${UPSTREAM_SERVER};
}

server {
    listen 80;

    location / {
        proxy_pass http://api;
    }

    location /staticfiles {
        alias /app/static/;
    }
}
docker-compose.yaml:
version: '3'
services:
  api-gunicorn:
    build: ./api
    command: gunicorn --bind=0.0.0.0:8000 api.wsgi:application
    volumes:
      - ./api:/app
  api-proxy:
    build: ./api-proxy
    command: /bin/bash -c "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
    environment:
      - UPSTREAM_SERVER=api-gunicorn:8000
    ports:
      - 80:80
    volumes:
      - ./api/static:/app/static
    depends_on:
      - api-gunicorn
api-deployment.yaml (kubernetes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-myapp-api-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp-api-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp-api-proxy
    spec:
      containers:
        - name: myapp-api-gunicorn
          image: "helm-django_api-gunicorn:latest"
          imagePullPolicy: Never
          command:
            - "/bin/bash"
          args:
            - "-c"
            - "gunicorn --bind=0.0.0.0:8000 api.wsgi:application"
        - name: myapp-api-proxy
          image: "helm-django_api-proxy:latest"
          imagePullPolicy: Never
          command:
            - "/bin/bash"
          args:
            - "-c"
            - "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
          env:
            - name: UPSTREAM_SERVER
              value: 127.0.0.1:8000
          volumeMounts:
            - mountPath: /app/static
              name: api-static-assets-on-host-mount
      volumes:
        - name: api-static-assets-on-host-mount
          hostPath:
            path: /Users/jonathan.metz/repos/personal/code-demos/kubernetes-demo/helm-django/api/static
My question involves the UPSTREAM_SERVER environment variable.
For docker-compose.yaml, the following values have worked for me:
Setting it to the name of the gunicorn service and the port it's running on (in this case api-gunicorn:8000). This is the best way to do it (and how I've done it in the docker-compose file above) because I don't need to expose the 8000 port to the host machine.
Setting it to MY_IP_ADDRESS:8000 as described in this SO post. This method requires me to expose the 8000 port, which is not ideal.
For api-deployment.yaml, only the following value has worked for me:
Setting it to localhost:8000. Inside of a pod, all containers can communicate using localhost.
Are there any other values for UPSTREAM_SERVER that work here, especially in the kubernetes file? I feel like I should be able to point to the container's name and that should work.
You could create a Service targeting the myapp-api-gunicorn container, but this will also expose it outside of the pod:
apiVersion: v1
kind: Service
metadata:
  name: api-gunicorn-service
spec:
  selector:
    app.kubernetes.io/name: myapp-api-proxy
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
You might also use hostname and subdomain fields inside a pod to take advantage of FQDN.
Currently when a pod is created, its hostname is the Pod’s metadata.name value.
The Pod spec has an optional hostname field, which can be used to specify the Pod’s hostname. When specified, it takes precedence over the Pod’s name to be the hostname of the pod. For example, given a Pod with hostname set to “my-host”, the Pod will have its hostname set to “my-host”.
The Pod spec also has an optional subdomain field which can be used to specify its subdomain. For example, a Pod with hostname set to “foo”, and subdomain set to “bar”, in namespace “my-namespace”, will have the fully qualified domain name (FQDN) “foo.bar.my-namespace.svc.cluster-domain.example”.
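A small sketch of that (the foo/bar names follow the docs quoted above, and the pod name is illustrative; note that for the FQDN to resolve you also need a headless Service with the same name as the subdomain):
apiVersion: v1
kind: Service
metadata:
  name: bar          # must match the pod's subdomain
spec:
  clusterIP: None    # headless
  selector:
    app.kubernetes.io/name: myapp-api-proxy
  ports:
    - port: 8000
---
apiVersion: v1
kind: Pod
metadata:
  name: api-pod
  labels:
    app.kubernetes.io/name: myapp-api-proxy
spec:
  hostname: foo
  subdomain: bar     # pod becomes foo.bar.<namespace>.svc.cluster.local
  containers:
    - name: myapp-api-gunicorn
      image: "helm-django_api-gunicorn:latest"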
Also, here is a nice article from Mirantis which talks about exposing multiple containers in a pod.