I have a small application built in Django. It serves as a frontend and is installed in one of our K8s clusters.
I'm using Helm to deploy the charts, and I'm failing to serve Django's static files correctly.
I've searched in multiple places, but I couldn't find a solution that fixes my problem.
This is my ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-toolbelt
  namespace: {{ .Values.global.namespace }}
  annotations:
    # ingress.kubernetes.io/secure-backends: "false"
    # nginx.ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
spec:
  rules:
  - http:
      paths:
      - path: /orion-toolbelt
        backend:
          serviceName: orion-toolbelt
          servicePort: {{ .Values.service.port }}
The static file location in Django is kept at the default, e.g.
STATIC_URL = "/static"
I still can't access the static files that way.
What should I do next?
Attached is the error:
HTML-static_files-error
-- EDIT: 5/8/19 --
The pod's deployment.yaml looks like the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
  labels:
    app: orion-toolbelt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orion-toolbelt
  template:
    metadata:
      labels:
        app: orion-toolbelt
    spec:
      containers:
      - name: orion-toolbelt
        image: {{ .Values.global.repository.imagerepo }}/orion-toolbelt:10.4-SNAPSHOT-15
        ports:
        - containerPort: {{ .Values.service.port }}
        env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Values.global.secretname }}
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Values.global.secretname }}
        - name: "MASTER_IP"
          valueFrom:
            secretKeyRef:
              key: master_ip
              name: {{ .Values.global.secretname }}
        imagePullPolicy: {{ .Values.global.pullPolicy }}
      imagePullSecrets:
      - name: {{ .Values.global.secretname }}
EDIT 2: 20/8/19 - adding service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
spec:
  selector:
    app: orion-toolbelt
  ports:
  - protocol: TCP
    port: {{ .Values.service.port }}
    targetPort: {{ .Values.service.port }}
You should simply include the /static directory inside the container and adjust the path to it in the application.
Otherwise, if the path must stay /static, or you don't want to bake the static files into the container, or you want other containers to access the volume, think about mounting a Kubernetes volume in your Deployment/StatefulSet.
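As a minimal sketch of that idea (the volume name static-volume and the emptyDir type are assumptions, not taken from your chart), the StatefulSet's pod template could mount a shared volume at the path the static files are served from:

spec:
  template:
    spec:
      containers:
      - name: orion-toolbelt
        volumeMounts:
        - name: static-volume      # assumed name, adjust to your chart
          mountPath: /static       # path the app/web server reads the static files from
      volumes:
      - name: static-volume
        emptyDir: {}               # could also be a PVC or hostPath, depending on your needs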
# Edit
You can test whether this path exists in your Kubernetes pod this way:
kubectl get po   <- this command gives you the name of your pod
kubectl exec -it <name of pod> -- sh   <- this command opens a shell inside the container
There you can check whether your path exists. If it does, the fault lies with your application; if it does not, you added it incorrectly in the Docker image.
You can also add a path to your Kubernetes pod without specifying it in the Docker container. Check this link for details.
As described by community member Marcin Ginszt:
Based on the information given in the post, it's difficult to guess where the problem with your Django app config/settings lies.
Please refer to Managing static files (e.g. images, JavaScript, CSS)
NOTE:
Serving the files - STATIC_URL = '/static/'
In addition to these configuration steps, you’ll also need to actually serve the static files.
During development, if you use django.contrib.staticfiles, this will be done automatically by runserver when DEBUG is set to True (see django.contrib.staticfiles.views.serve()).
This method is grossly inefficient and probably insecure, so it is unsuitable for production.
See Deploying static files for proper strategies to serve static files in production environments.
Django doesn’t serve files itself; it leaves that job to whichever Web server you choose.
We recommend using a separate Web server – i.e., one that’s not also running Django – for serving media. Here are some good choices:
Nginx
A stripped-down version of Apache
Here you can find an example of how to serve static files using the collectstatic command.
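As a rough illustration (the STATIC_ROOT path here is an assumption, not taken from your chart), the usual pattern is to collect everything into STATIC_ROOT when the image is built and let the web server or ingress serve that directory:

# settings.py - a minimal sketch
STATIC_URL = "/static/"
STATIC_ROOT = "/app/static"  # assumed directory the files are collected into inside the container

# then, at image build time or in an init step:
#   python manage.py collectstatic --noinput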
Please let me know if it helped.
Related
I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {name: artifactory-credentials}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.
AFAIK there is unfortunately no such filter available in the official documentation. As a workaround we can replace the whole list with a merge patch, e.g. kubectl patch serviceaccount default --type=merge -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}', but then we have to list all the imagePullSecrets every time.
As @Geoff Alexander mentioned, the other way is to get the details of the resource and check whether the required property is already present, as mentioned in the above comment, e.g. kubectl get serviceaccount -o json or kubectl get serviceaccount -o yaml.
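For example, a small shell guard along these lines (the jsonpath query is only an illustration, reusing the secret name from the question) adds the entry only when it is missing:

# add the secret to the default service account only if it is not already listed
if ! kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets[*].name}' | grep -qw artifactory-credentials; then
  kubectl patch serviceaccount default --type=json \
    -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {"name": "artifactory-credentials"}}]'
fi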
I would like to build an image from a Dockerfile using an OpenShift BuildConfig that references an existing ImageStream in the FROM line. That is, if I have:
$ oc get imagestream openshift-build-example -o yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openshift-build-example
  namespace: sandbox
spec:
  lookupPolicy:
    local: true
I would like to be able to submit a build that uses a Dockerfile like
this:
FROM openshift-build-example:parent
But this doesn't work. If I use a fully qualified image specification,
like this...
FROM image-registry.openshift-image-registry.svc:5000/sandbox/openshift-build-example:parent
...it works, but this is problematic, because it requires referencing
the namespace in the image specification. This means the builds can't
be conveniently deployed into another namespace.
Is there any way to make this work?
For reference purposes, the build is configured in the following BuildConfig resource:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: buildconfig-child
spec:
  failedBuildsHistoryLimit: 5
  successfulBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: openshift-build-example:child
  runPolicy: Serial
  source:
    git:
      ref: main
      uri: https://github.com/larsks/openshift-build-example
    type: Git
    contextDir: image/child
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
    type: Docker
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "buildconfig-child-webhook"
  - type: "Generic"
    generic:
      secret: "buildconfig-child-webhook"
And the referenced Dockerfile is:
# FIXME
FROM openshift-build-example:parent
COPY index.html /var/www/localhost/htdocs/index.html
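One thing that may be worth trying (it is not taken from the question, so treat it as an assumption to verify against your OpenShift version) is to override the Dockerfile's FROM in the build strategy itself, which accepts an ImageStreamTag resolved in the namespace the build runs in:

strategy:
  dockerStrategy:
    dockerfilePath: Dockerfile
    from:
      kind: ImageStreamTag
      name: openshift-build-example:parent   # resolved in the build's own namespace
  type: Docker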
I have created an app deployment on an AWS EKS cluster, deployed using Helm. For proper operation of my app, I need to set env variables, which are secrets stored in AWS Secrets Manager. Following a tutorial, I set up my values in the values.yaml file roughly like this:
secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild
Now I have created a secrets provider class as AWS recommends: secret-provider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
I mount this secret object in my deployment.yaml; the relevant section of the file looks like this:
volumeMounts:
  - name: secrets-store-volume
    mountPath: "/mnt/secrets"
    readOnly: true
env:
  - name: SP1_DB_HOST
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_HOST
  - name: SP1_DB_PORT
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_PORT
Further down in the same deployment file, I define secrets-store-volume as:
volumes:
  - name: secrets-store-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secret-provider-class
All drivers are installed in the cluster and permissions are set accordingly.
With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see the pod is stuck at Pending with the warning Error: 'aws-secrets' not found, and it eventually times out. In the AWS CloudTrail log, I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or how can I debug it further? Thank you for your time and effort.
Error: 'aws-secrets' not found looks like the CSI driver isn't creating the Kubernetes Secret that you're using to reference the values.
Since the YAML files look correct, I would say it's probably the CSI driver's "Sync as Kubernetes secret" configuration - syncSecret.enabled (which is false by default).
So make sure that secrets-store-csi-driver runs with this flag set to true, for example:
helm upgrade --install csi-secrets-store \
--namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
--set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
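Note that the driver only creates the synced Secret once a pod that mounts the CSI volume is running. After reinstalling the driver with syncing enabled and restarting the pod, you could verify that the Secret now exists (names taken from the question; the namespace is a placeholder):

kubectl get secret aws-secrets -n <your-namespace> -o yaml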
I'm trying to use docker-compose and Kubernetes as two different solutions to set up a Django API served by Gunicorn (as the web server) and Nginx (as the reverse proxy). Here are the key files:
default.tmpl (nginx) - this is converted to default.conf when the environment variable is filled in:
upstream api {
    server ${UPSTREAM_SERVER};
}
server {
    listen 80;
    location / {
        proxy_pass http://api;
    }
    location /staticfiles {
        alias /app/static/;
    }
}
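For example, once envsubst fills in UPSTREAM_SERVER=api-gunicorn:8000 (the docker-compose value below), the rendered upstream block in default.conf reads:

upstream api {
    server api-gunicorn:8000;
}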
docker-compose.yaml:
version: '3'
services:
  api-gunicorn:
    build: ./api
    command: gunicorn --bind=0.0.0.0:8000 api.wsgi:application
    volumes:
      - ./api:/app
  api-proxy:
    build: ./api-proxy
    command: /bin/bash -c "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
    environment:
      - UPSTREAM_SERVER=api-gunicorn:8000
    ports:
      - 80:80
    volumes:
      - ./api/static:/app/static
    depends_on:
      - api-gunicorn
api-deployment.yaml (kubernetes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-myapp-api-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: myapp-api-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: myapp-api-proxy
    spec:
      containers:
      - name: myapp-api-gunicorn
        image: "helm-django_api-gunicorn:latest"
        imagePullPolicy: Never
        command:
        - "/bin/bash"
        args:
        - "-c"
        - "gunicorn --bind=0.0.0.0:8000 api.wsgi:application"
      - name: myapp-api-proxy
        image: "helm-django_api-proxy:latest"
        imagePullPolicy: Never
        command:
        - "/bin/bash"
        args:
        - "-c"
        - "envsubst < /etc/nginx/conf.d/default.tmpl > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
        env:
        - name: UPSTREAM_SERVER
          value: 127.0.0.1:8000
        volumeMounts:
        - mountPath: /app/static
          name: api-static-assets-on-host-mount
      volumes:
      - name: api-static-assets-on-host-mount
        hostPath:
          path: /Users/jonathan.metz/repos/personal/code-demos/kubernetes-demo/helm-django/api/static
My question involves the UPSTREAM_SERVER environment variable.
For docker-compose.yaml, the following values have worked for me:
Setting it to the name of the gunicorn service and the port it's running on (in this case api-gunicorn:8000). This is the best way to do it (and how I've done it in the docker-compose file above) because I don't need to expose the 8000 port to the host machine.
Setting it to MY_IP_ADDRESS:8000 as described in this SO post. This method requires me to expose the 8000 port, which is not ideal.
For api-deployment.yaml, only the following value has worked for me:
Setting it to localhost:8000. Inside of a pod, all containers can communicate using localhost.
Are there any other values for UPSTREAM_SERVER that work here, especially in the kubernetes file? I feel like I should be able to point to the container's name and that should work.
You could create a Service targeting the myapp-api-gunicorn container, but this will expose it outside of the pod:
apiVersion: v1
kind: Service
metadata:
  name: api-gunicorn-service
spec:
  selector:
    app.kubernetes.io/name: myapp-api-proxy
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
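With such a Service in place, one option (a sketch, assuming the Service lives in the same namespace as the pod) is to point UPSTREAM_SERVER at the Service's DNS name instead of 127.0.0.1:

env:
  - name: UPSTREAM_SERVER
    value: api-gunicorn-service:8000   # resolved via cluster DNS to the Service above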
You might also use hostname and subdomain fields inside a pod to take advantage of FQDN.
Currently when a pod is created, its hostname is the Pod’s metadata.name value.
The Pod spec has an optional hostname field, which can be used to specify the Pod’s hostname. When specified, it takes precedence over the Pod’s name to be the hostname of the pod. For example, given a Pod with hostname set to “my-host”, the Pod will have its hostname set to “my-host”.
The Pod spec also has an optional subdomain field which can be used to specify its subdomain. For example, a Pod with hostname set to “foo”, and subdomain set to “bar”, in namespace “my-namespace”, will have the fully qualified domain name (FQDN) “foo.bar.my-namespace.svc.cluster-domain.example”.
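As a minimal sketch of that approach (all names here are placeholders; note that a headless Service with the same name as the subdomain is needed for the FQDN to resolve):

# in the pod template spec:
#   hostname: foo
#   subdomain: bar
apiVersion: v1
kind: Service
metadata:
  name: bar                  # must match the pod's subdomain
spec:
  clusterIP: None            # headless Service
  selector:
    app.kubernetes.io/name: myapp-api-proxy
  ports:
  - port: 8000
# the container is then reachable at foo.bar.<namespace>.svc.cluster-domain.example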
Also here is a nice article from Mirantis which talks about exposing multiple containers in a pod
On Kubernetes 1.6.1 (OpenShift 3.6 CP) I'm trying to get the subdomain of my cluster using $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN), but it isn't dereferenced at runtime. I'm not sure what I'm doing wrong; the docs show this is how environment parameters should be acquired.
https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6/#container-v1-core
- apiVersion: v1
  kind: DeploymentConfig
  spec:
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}
        name: ${APP_NAME}
      spec:
        containers:
        - name: myapp
          env:
          - name: CLOUD_CLUSTER_SUBDOMAIN
            value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You'll need to set that value as an environment variable; this is the usage:
oc set env <object-selection> KEY_1=VAL_1
For example, if your pod is named foo and your subdomain is foo.bar, you would use this command:
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
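To double-check what ends up on the DeploymentConfig, something like the following should list its environment variables (dc/foo is the placeholder name from the example above):

oc set env dc/foo --list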