helm single template for multiple similar deployments - templates

Apologies if this is a really simple question.
I have 2 applications that can potentially share the same template:
applications:
  #
  app1:
    containerName: app1
    replicaCount: 10
    logLevel: warn
    queue: queue1
  #
  app2:
    containerName: app2
    replicaCount: 20
    logLevel: info
    queue: queue2
...
If I create a single template for both apps, is there a wildcard or variable I can use that will iterate over both of the apps (i.e. app1 or app2)? e.g. the bit where I've put <SOMETHING_HERE> below ...
spec:
  env:
    - name: LOG_LEVEL
      value: "{{ .Values.applications.<SOMETHING_HERE>.logLevel }}"
Currently (which I'm sure is not very efficient) I have two separate templates that each define their own app, e.g.
app1_template.yaml
{{ .Values.applications.app1.logLevel }}
app2_template.yaml
{{ .Values.applications.app2.logLevel }}
Which I'm pretty sure is not the way I'm supposed to do it.
Any help on this would be greatly appreciated.

One of the solutions would be to have one template and multiple value files, one per deployment/environment
spec:
  env:
    - name: LOG_LEVEL
      value: "{{ .Values.logLevel }}"
values-app1.yaml:
containerName: app1
replicaCount: 10
logLevel: warn
queue: queue1
values-app2.yaml:
containerName: app2
replicaCount: 20
logLevel: info
queue: queue2
Then specify which values file should be used by adding it to the helm command:
APP=app1 # or app2
helm upgrade --install ${APP} . --values ./values-${APP}.yaml
you can also have shared values, let say in regular values.yaml and provide multiple files:
APP=app1
helm upgrade --install ${APP} . --values ./values.yaml --values ./values-${APP}.yaml
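For example, the shared file can hold the defaults and the per-app file only the overrides; when several --values files are passed, later files take precedence. The contents below are illustrative:
values.yaml (shared defaults):
logLevel: warn
replicaCount: 1
values-app2.yaml (per-app overrides):
containerName: app2
replicaCount: 20
logLevel: info
queue: queue2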

You can just use a single values file as you've done, and then set the app name when you run helm:
helm upgrade --install app1 ./charts --set app=app1
and
helm upgrade --install app2 ./charts --set app=app2
Then in your templates use:
spec:
  env:
    - name: LOG_LEVEL
      value: "{{ (index .Values.applications .Values.app).logLevel }}"

k8s not executing commands after deployment in pod using configmap

This is a bit complicated, but I will try to explain as much as possible to give clarity; any help is much appreciated.
I use Azure DevOps to deploy to EKS using Helm. Everything is working fine, and I now have a requirement to add a certificate to the pod.
For this I have a DER file, which I need to copy to the pods (replicas: 3), import into a keystore with keytool, and place in an appropriate location before my application starts.
My setup: I have a Dockerfile that calls a shell script, and I do a helm install using a deployment.yml file.
I tried using a ConfigMap to mount the DER file that is used to import the certificate, and then executing some Unix commands to do the import, but the Unix commands are not working. Can someone help here?
Dockerfile
FROM amazonlinux:2.0.20181114
RUN yum install -y java-1.8.0-openjdk-headless
ARG JAR_FILE='**/*.jar'
ADD ${JAR_FILE} car_service.jar
ADD entrypoint.sh .
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"] # split ENTRYPOINT wrapper from
CMD ["java", "-jar", "/car_service.jar"] # main CMD
entrypoint.sh
#!/bin/sh
# entrypoint.sh
# Check: $env_name must be set
if [ -z "$env_name" ]; then
echo '$env_name is not set; stopping' >&2
exit 1
fi
# Install aws client
yum -y install curl
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
yum -y install unzip
unzip awscliv2.zip
./aws/install
# Retrieve secrets from Secrets Manager
export KEYCLOAKURL=`aws secretsmanager get-secret-value --secret-id myathlon/$env_name/KEYCLOAKURL --query SecretString --output text`
cd /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin
keytool -noprompt -importcert -alias msfot-$(date +%Y%m%d-%H%M) -file /tmp/msfot.der -keystore msfot.jks -storepass msfotooling
mkdir /data/keycloak/
cp /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.312.b07-1.amzn2.0.2.x86_64/jre/bin/msfot.jks /data/keycloak/
cd /
# Run the main container CMD
exec "$#"
my ConfigMap:
kubectl create configmap msfot1 --from-file=msfot.der
my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm-chart.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "helm-chart.name" . }}
    helm.sh/chart: {{ include "helm-chart.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "helm-chart.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "helm-chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      volumes:
        - name: msfot1
          configMap:
            name: msfot1
            items:
              - key: msfot.der
                path: msfot.der
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - name: msfot1
              mountPath: /tmp/msfot.der
              subPath: msfot.der
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: env_name
              value: {{ .Values.environmentName }}
            - name: SPRING_PROFILES_ACTIVE
              value: "{{ .Values.profile }}"
my values.yml file is
replicaCount: 3
# pass repository and targetPort values during runtime
image:
  repository:
  tag: "latest"
  pullPolicy: Always
service:
  type: ClusterIP
  port: 80
  targetPort:
profile: "aws"
environmentName: dev
I have 2 questions here:
In my entrypoint.sh, the keytool, mkdir, cp and cd commands are not getting executed (so the certificate is not getting added to the keystore).
As you know, this setup works for all environments because I use the same deployment.yml file, though I have a different values.yml file per environment. I want this certificate generation to happen only in acc and prod, not in dev and test.
Is there any other, easier method of doing this rather than the configmap/deployment.yml approach?
Please advise.
Thanks

Mounting AWS Secrets Manager on Kubernetes/Helm chart

I have created an app cluster deployment on AWS EKS that is deployed using Helm. For proper operation of my app, I need to set env variables, which are secrets stored in AWS Secrets Manager. Following a tutorial, I set up my values in the values.yaml file something like this:
secretsData:
  secretName: aws-secrets
  providerName: aws
  objectName: CodeBuild
Now I have created a secrets provider class as AWS recommends: secret-provider.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-provider-class
spec:
  provider: {{ .Values.secretsData.providerName }}
  parameters:
    objects: |
      - objectName: "{{ .Values.secretsData.objectName }}"
        objectType: "secretsmanager"
        jmesPath:
          - path: SP1_DB_HOST
            objectAlias: SP1_DB_HOST
          - path: SP1_DB_USER
            objectAlias: SP1_DB_USER
          - path: SP1_DB_PASSWORD
            objectAlias: SP1_DB_PASSWORD
          - path: SP1_DB_PATH
            objectAlias: SP1_DB_PATH
  secretObjects:
    - secretName: {{ .Values.secretsData.secretName }}
      type: Opaque
      data:
        - objectName: SP1_DB_HOST
          key: SP1_DB_HOST
        - objectName: SP1_DB_USER
          key: SP1_DB_USER
        - objectName: SP1_DB_PASSWORD
          key: SP1_DB_PASSWORD
        - objectName: SP1_DB_PATH
          key: SP1_DB_PATH
I mount this secret object in my deployment.yaml, the relevant section of the file looks like this:
volumeMounts:
  - name: secrets-store-volume
    mountPath: "/mnt/secrets"
    readOnly: true
env:
  - name: SP1_DB_HOST
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_HOST
  - name: SP1_DB_PORT
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretsData.secretName }}
        key: SP1_DB_PORT
Further down in the same deployment file, I define secrets-store-volume as:
volumes:
  - name: secrets-store-volume
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aws-secret-provider-class
All drivers are installed into the cluster and permissions are set accordingly.
With helm install mydeployment helm-folder/ --dry-run I can see that all the files and values are populated as expected. Then with helm install mydeployment helm-folder/ I install the deployment into my cluster, but with kubectl get all I can see the pod is stuck at Pending with the warning Error: 'aws-secrets' not found and eventually times out. In the AWS CloudTrail log, I can see that the cluster made a request to access the secret and there was no error fetching it. How can I solve this, or how can I debug it further? Thank you for your time and efforts.
Error: 'aws-secrets' not found - it looks like the CSI driver isn't creating the Kubernetes Secret that you're using to reference the values.
Since the yaml files look correct, I would say it's probably the CSI driver configuration Sync as Kubernetes secret - syncSecret.enabled (which is false by default).
So make sure that secrets-store-csi-driver runs with this flag set to true, for example:
helm upgrade --install csi-secrets-store \
--namespace kube-system secrets-store-csi-driver/secrets-store-csi-driver \
--set grpcSupportedProviders="aws" --set syncSecret.enabled="true"
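Note that the synced Kubernetes Secret is only created once a pod that actually mounts the CSI volume is scheduled and running, so a quick way to verify after redeploying (resource names taken from the question, the pod name is yours) is:
kubectl get secretproviderclass aws-secret-provider-class
kubectl describe pod <your-pod>   # CSI mount errors show up under Events
kubectl get secret aws-secrets    # should exist once the volume has been mounted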

How to serve static files in Django application running inside Kubernetes

I have a small application built in Django. It serves as a frontend and it's being installed in one of our K8S clusters.
I'm using Helm to deploy the charts, and I fail to serve the static files of Django correctly.
I've searched in multiple locations, but I ended up unable to find one that fixes my problem.
That's my ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-toolbelt
  namespace: {{ .Values.global.namespace }}
  annotations:
    # ingress.kubernetes.io/secure-backends: "false"
    # nginx.ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
spec:
  rules:
    - http:
        paths:
          - path: /orion-toolbelt
            backend:
              serviceName: orion-toolbelt
              servicePort: {{ .Values.service.port }}
The static file location in Django is kept at the default, e.g.
STATIC_URL = "/static"
The user ends up unable to access the static files that way.
What should I do next?
Attached is the error:
HTML-static_files-error
-- EDIT: 5/8/19 --
The pod's deployment.yaml looks like the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
  labels:
    app: orion-toolbelt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orion-toolbelt
  template:
    metadata:
      labels:
        app: orion-toolbelt
    spec:
      containers:
        - name: orion-toolbelt
          image: {{ .Values.global.repository.imagerepo }}/orion-toolbelt:10.4-SNAPSHOT-15
          ports:
            - containerPort: {{ .Values.service.port }}
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: {{ .Values.global.secretname }}
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: {{ .Values.global.secretname }}
            - name: "MASTER_IP"
              valueFrom:
                secretKeyRef:
                  key: master_ip
                  name: {{ .Values.global.secretname }}
          imagePullPolicy: {{ .Values.global.pullPolicy }}
      imagePullSecrets:
        - name: {{ .Values.global.secretname }}
EDIT2: 20/8/19 - adding service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
spec:
  selector:
    app: orion-toolbelt
  ports:
    - protocol: TCP
      port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
You should simply contain the /static directory within the container, and adjust the path to it in the application.
Otherwise, if it must be /static, or you don't want to keep the static files in the container, or you want other containers to access the volume, you should think about mounting a Kubernetes volume to your Deployment/StatefulSet.
Edit:
You can test whether this path exists in your Kubernetes pod this way:
kubectl get po <- this command will give you the name of your pod
kubectl exec -it <name of pod> -- sh <- this command will let you execute commands in the container shell
There you can test whether your path exists. If it does, it is the fault of your application; if it does not, you added it wrong in the Docker image.
You can also add a path to your Kubernetes pod without specifying it in the Docker container. Check this link for details.
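A minimal sketch of the volume idea mentioned above, i.e. an emptyDir shared between the Django container and a static-file sidecar (the image name and paths are illustrative, not taken from the question):
spec:
  containers:
    - name: orion-toolbelt
      # the app writes its collected static files into the shared volume
      volumeMounts:
        - name: static-files
          mountPath: /app/staticfiles
    - name: static-server
      image: nginx:stable
      volumeMounts:
        - name: static-files
          mountPath: /usr/share/nginx/html/static
          readOnly: true
  volumes:
    - name: static-files
      emptyDir: {}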
As described by community member Marcin Ginszt:
According to the information provided in the post, it's difficult to guess where the problem is with your django/app config/settings.
Please refer to Managing static files (e.g. images, JavaScript, CSS)
NOTE:
Serving the files - STATIC_URL = '/static/'
In addition to these configuration steps, you’ll also need to actually serve the static files.
During development, if you use django.contrib.staticfiles, this will be done automatically by runserver when DEBUG is set to True (see django.contrib.staticfiles.views.serve()).
This method is grossly inefficient and probably insecure, so it is unsuitable for production.
See Deploying static files for proper strategies to serve static files in production environments.
Django doesn’t serve files itself; it leaves that job to whichever Web server you choose.
We recommend using a separate Web server – i.e., one that’s not also running Django – for serving media. Here are some good choices:
Nginx
A stripped-down version of Apache
Here you can find an example of how you can serve static files using the collectstatic command.
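For reference, that flow usually boils down to something like this (the STATIC_ROOT path is illustrative):
# settings.py
STATIC_URL = "/static/"
STATIC_ROOT = "/app/staticfiles"  # collectstatic gathers everything here
then, at build or deploy time:
python manage.py collectstatic --noinput
after which the contents of STATIC_ROOT are served by the web server of your choice (e.g. Nginx), not by Django itself.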
Please let me know if it helped.

How to set dynamic values with Kubernetes yaml file

For example, a deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: {{Here want to read value from config file outside}}
There is a ConfigMap feature with Kubernetes, but that also writes the key/value to the yaml file. Is there a way to set the key from environment variables?
You can also use envsubst when deploying.
e.g.
cat app/deployment.yaml | envsubst | kubectl apply ...
It will replace all variables in the file with their values.
We are successfully using this approach on our CI when deploying to multiple environments, also to inject the CI_TAG etc into the deployments.
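For example (CI_TAG and the image name are illustrative), the manifest references the variable:
image: myrepo/guestbook:${CI_TAG}
and the pipeline exports it before substituting:
export CI_TAG=v1.2.3
envsubst < app/deployment.yaml | kubectl apply -f -
envsubst replaces every $VAR/${VAR} it finds, so the variables have to be present in the environment of the CI job.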
You can't do it automatically; you need to use an external script to "compile" your template, or use helm as suggested by @Jakub.
You may want to use a custom bash script, maybe integrated with your CI pipeline.
Given a template yml file called deploy.yml.template very similar to the one you provided, you can use something like this:
#!/bin/bash
# sample value for your variables
MYVARVALUE="nginx:latest"
# read the yml template from a file and substitute the string
# {{MYVARNAME}} with the value of the MYVARVALUE variable
template=`cat "deploy.yml.template" | sed "s/{{MYVARNAME}}/$MYVARVALUE/g"`
# apply the yml with the substituted value
echo "$template" | kubectl apply -f -
I don't think it is possible to set the image through a variable or ConfigMap in Kubernetes. But you can use, for example, Helm to make your deployments much more flexible and configurable.
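A minimal sketch of what that looks like with Helm (chart layout and value names are illustrative):
# values.yaml
image: nginx:latest
# templates/deployment.yaml (fragment)
      containers:
        - name: guestbook
          image: {{ .Values.image }}
The image can then be overridden per deployment with helm install guestbook ./chart --set image=nginx:1.25 or with a separate values file.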
One line:
cat app-deployment.yaml | sed "s/{{BITBUCKET_COMMIT}}/$BITBUCKET_COMMIT/g" | kubectl apply -f -
In yaml:
...
containers:
  - name: ulisses
    image: niceuser/niceimage:{{BITBUCKET_COMMIT}}
...
This kind of thing is painfully easy with ytt:
deployment.yml
#@ load("@ytt:data", "data")
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: #@ data.values.image
values.yml
#@data/values
---
image: nginx@sha256:fe2fa7bb1ceb86c6d9c935bc25c3dd8cbd64f2e95ed5b894f93ae7ffbd1e92bb
Then...
$ ytt -f deployment.yml -f values.yml | kubectl apply -f -
or even better, use ytt's cousin, kapp for a high-control deployment experience:
$ ytt -f deployment.yml -f values.yml | kapp deploy -a guestbook -f -
I create a script called kubectl_create and use it to run the create command. It will substitute any value in the template that is referenced in an environment variable.
#!/bin/bash
set -e
eval "cat <<EOF
$(<$1)
EOF
" | kubectl create -f -
For example, if the template file has:
apiVersion: v1
kind: Service
metadata:
  name: nginx-external
  labels:
    app: nginx
spec:
  loadBalancerIP: ${PUBLIC_IP}
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  selector:
    app: nginx
Run kubectl_create nginx-service.yaml and then the environment variable PUBLIC_IP will be substituted before running the actual kubectl create command.
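For example, with the variable set in the calling environment (the address is illustrative):
export PUBLIC_IP=203.0.113.10
kubectl_create nginx-service.yaml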
After trying sed and envsubst I found Kustomize the most elegant and Kubernetes-native way. As an alternative, yq also comes in handy sometimes.
Use Kustomize to change image name
Install the kustomize CLI (e.g. on a Mac this is brew install kustomize) and create a new file called kustomization.yaml in the same directory as your deployment.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
Now use the kustomize edit set image command to change the image name
# optionally define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
kustomize edit set image $IMAGE_NAME
Finally apply your kustomized deployment.yml to your cluster using kubectl apply -k directory/where/your/kustomization/file/is like this:
kubectl apply -k .
For debugging you can see the resulting deployment.yml if you run kustomize build . :
$ kustomize build .
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
          name: guestbook
Alternative: Use yq to change image name
Install the YAML processor yq (e.g. via homebrew brew install yq), define your variables and let yq do the replacement:
# define image name
IMAGE_NAME=ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
# replace image tag
yq e ".spec.template.spec.containers[0].image = \"$IMAGE_NAME\"" -i deployment.yaml
Now your deployment.yaml gets the new image version and then looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      containers:
        - image: ghcr.io/yourrepo/guestbook:c25a74c8f919a72e3f00928917dc4ab2944ab061
          name: guestbook
FYI: Your deployment.yaml isn't really valid Kubernetes configuration - the template.spec.container should not reside under the metadata tag - and also it is spelled containers.
YAML does not read values from another YAML file. As an alternative approach, you could try this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  annotations:
    namespace: &namespaceId dev
    imageId: &imageId nginx
    podName: &podName nginx-pod
    containerName: &containerName nginx-container
  name: *podName
  namespace: *namespaceId
spec:
  containers:
    - image: *imageId
      name: *containerName
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My approach:
tools/jinja2-cli.py:
#!/usr/bin/env python3
import os
import sys
from jinja2 import Environment, FileSystemLoader
sys.stdout.write(Environment(loader=FileSystemLoader('templates/')).from_string(sys.stdin.read()).render(env=os.environ) + "\n")
Make rule:
_GENFILES = $(basename $(TEMPLATES))
GENFILES = $(_GENFILES:templates/%=%)
$(GENFILES): %: templates/%.j2 $(MKFILES) tools/jinja2-cli.py .env
	env $$(cat .env | xargs) tools/jinja2-cli.py < $< > $@ || (rm -f $@; false)
Inside the .j2 template file you can use any jinja syntax construct, e.g. {{env.GUEST}} will be replaced by the value of GUEST defined in .env
So your templates/deploy.yaml.j2 would look like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: {{env.GUEST}}
Another approach (using just bash builtins and xargs) might be
env $(cat .env | xargs) cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: guestbook
    spec:
      container:
        - name: guestbook
          image: ${GUEST}
EOF
I have been using kubetpl
It has three different template flavors and supports ConfigMap/Secret freezing.
I think the standard, Helm, should be used instead of custom scripts to solve this problem nowadays. You don't need to deploy to a cluster just to generate the Kubernetes yamls on your machine.
An example:
Install helm on your machine so helm command exists
https://artifacthub.io/packages/helm/pauls-helm-charts/helloworld - Install button
helm repo add pauls-helm-charts http://tech.paulcz.net/charts
helm pull pauls-helm-charts/helloworld --version 2.0.0
tar -zxvf helloworld-2.0.0.tgz && cd helloworld
helm template -f values.yaml --output-dir helloworld . --namespace my-namespace --name-template=my-name
So it created these files from values.yaml:
wrote helloworld/helloworld/templates/serviceaccount.yaml
wrote helloworld/helloworld/templates/service.yaml
wrote helloworld/helloworld/templates/deployment.yaml
Inside values.yaml, you can change the predefined repository (or really any value that is referenced in the Kubernetes yamls, as you wish):
image:
  repository: paulczar/spring-helloworld
Now if you want to deploy, make sure kubectl works and just apply these generated files using kubectl apply -f serviceaccount.yaml, etc.
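If you would rather have Helm install straight into the cluster instead of applying the rendered files yourself, the rough equivalent (the release name is illustrative) would be:
helm upgrade --install my-name pauls-helm-charts/helloworld --version 2.0.0 -f values.yaml --namespace my-namespace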
I created a script called kubectl_apply. It loads variables from .env, replaces ${CUSTOMVAR} in the yml and passes the result to the kubectl command.
#!/bin/bash
set -a
source .env
set +a
eval "cat <<EOF
$(<$1)
EOF
" | kubectl apply -f -
I've published a command-line tool ysed that helps exactly with that, in case you plan to script it.
If you just want to change the image or a tag while your deployment is running, you could set the image of a specific container in your deployment:
kubectl apply -f k8s
kubectl set image deployments/worker-deployment worker=IMAGE:TAG
create a file called kubectl_advance as below and enjoy calling it just like kubectl commands.
e.g.
export MY_VAL="my-v1"
kubectl_advance -c -f sample.yaml # -c option is to call create command
kubectl_advance -r -f sample2.yaml # -r option is to call replace command
Assuming the yaml file has the value like ${MY_VAL} to be replaced by the environment variable.
#!/usr/bin/env bash
helpFunction()
{
echo "Supported option is [-f] for file"
exit 1
}
while getopts "f:cr" opt
do
case "$opt" in
f ) yamlFile="$OPTARG" ;;
c ) COMMAND_IS_CREATE="true" ;;
r ) COMMAND_IS_REPLACE="true" ;;
? ) helpFunction ;; # Print helpFunction in case parameter is non-existent
esac
done
echo 'yaml file is : '$yamlFile
YAML_CONTENT=`eval "cat <<EOF
$(<$yamlFile)
EOF
"`
echo 'Final File Content is :=>'
echo '------------------'
echo "$YAML_CONTENT"
if [[ "$COMMAND_IS_CREATE" == "true" ]]; then
COMMAND="create"
fi
if [[ "$COMMAND_IS_REPLACE" == "true" ]]; then
COMMAND="replace"
fi
echo "$YAML_CONTENT" | kubectl $COMMAND -f -
Helm is exactly meant for such things, and a lot more. It handles complex sets of resource deployments as a group, etc.
But if we are still looking for a simple alternative, then how about using Ant?
If you want to modify the file as part of a build or test process, then you can go with an Ant task as well.
Using Ant you can load all environment values as properties, or you can simply load a properties file like:
<property environment="env" />
<property file="build.properties" />
Then you can have a target which converts template files into your desired yaml file.
<target name="generate_from_template">
    <!-- Copy task to replace values and create the new file -->
    <copy todir="${dest.dir}" verbose="true" overwrite="true" failonerror="true">
        <!-- List of files to be processed -->
        <fileset file="${source.dir}/xyz.template.yml" />
        <!-- Mapper to transform the filename. Removes '.template' from the file
             name when copying the file to the output directory -->
        <mapper type="regexp" from="(.*).template(.*)" to="\1\2" />
        <!-- Filter chain that replaces the template values with actual values
             fetched from the properties file -->
        <filterchain>
            <expandproperties />
        </filterchain>
    </copy>
</target>
Of course you can use a fileset instead of file in case you want to change values dynamically for multiple files (nested or whatever)
Your template file xyz.template.yml should look like:
apiVersion: v1
kind: Service
metadata:
  name: ${XYZ_RES_NAME}-ser
  labels:
    app: ${XYZ_RES_NAME}
    version: v1
spec:
  type: NodePort
  ports:
    - port: ${env.XYZ_RES_PORT}
      protocol: TCP
  selector:
    app: ${XYZ_RES_NAME}
    version: v1
The env. properties are loaded from environment variables and the others from the properties file.
Hope it helped :)
In the jitsi project the tpl == frep command is used to substitute values, an extension to envsubst
https://github.com/jitsi/docker-jitsi-meet/issues/65
I keep on using the old shell tools like sed and friends, but such code quickly becomes unreadable when there is more than a handful of values to deal with.
For my deployments, I typically use Helm charts. It requires me to update values.yaml files periodically.
For dynamically updating YAML files, I used 'envsubst' since it is simple and does not require sophisticated configuration.
In addition, most of the tools only work with valid Kubernetes manifests, not simple YAML files.
I created a simple script to handle the YAML modification to simplify the usage
https://github.com/alexusarov/vars_replacer
Example:
./vars_replacer.sh -i [input_file] -o [output_file] -p "[key=value] [key=value]"

how to run celery with django on openshift 3

What is the easiest way to launch celery beat and worker processes in my Django pod?
I'm migrating my OpenShift v2 Django app to OpenShift v3. I'm using the Pro subscription. I'm really a noob on OpenShift v3, Docker, containers, and Kubernetes. I have used this tutorial https://blog.openshift.com/migrating-django-applications-openshift-3/ to migrate my app (which works pretty well).
I'm now struggling with how to start celery. On OpenShift 2 I just used an action hook post_start:
source $OPENSHIFT_HOMEDIR/python/virtenv/bin/activate
python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery worker\
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/%n.pid"\
--logfile="$OPENSHIFT_DATA_DIR/celery/log/%n.log"\
-c 1\
--autoreload &
python $OPENSHIFT_REPO_DIR/wsgi/podpub/manage.py celery beat\
--pidfile="$OPENSHIFT_DATA_DIR/celery/run/celeryd.pid"\
--logfile="$OPENSHIFT_DATA_DIR/celery/log/celeryd.log" &
It is quite a simple setup. It just uses the Django database as a message broker, no RabbitMQ or anything.
Would an OpenShift "job" be appropriate for that? Or is it better to use the powershift image (https://pypi.python.org/pypi/powershift-image) action commands? But I did not understand how to execute them.
Here is the current deployment configuration for my only app, "django":
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
  creationTimestamp: 2017-12-27T22:58:31Z
  generation: 67
  labels:
    app: django
  name: django
  namespace: myproject
  resourceVersion: "68466321"
  selfLink: /oapi/v1/namespaces/myproject/deploymentconfigs/django
  uid: 64600436-ab49-11e7-ab43-0601fd434256
spec:
  replicas: 1
  selector:
    app: django
    deploymentconfig: django
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Recreate
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftNewApp
      creationTimestamp: null
      labels:
        app: django
        deploymentconfig: django
    spec:
      containers:
        - image: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
          imagePullPolicy: Always
          name: django
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/app-root/src/data
              name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: django-data
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - django
        from:
          kind: ImageStreamTag
          name: django:latest
          namespace: myproject
        lastTriggeredImage: docker-registry.default.svc:5000/myproject/django@sha256:6a0caac773acc65daad2e6ac87695f9f01ae3c99faba14536e0ec2b65088c808
      type: ImageChange
I'm using mod_wsgi-express and this is my app.sh
ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"
exec mod_wsgi-express start-server $ARGS wsgi/application
Help is very appreciated. Thank you
I have managed to get it working, though I'm not quite happy with it. I will move to a PostgreSQL database very soon. Here is what I did:
mod_wsgi-express has an option called --service-script which starts an additional process besides the actual app. So I updated my app.sh:
#!/bin/bash
ARGS=""
ARGS="$ARGS --log-to-terminal"
ARGS="$ARGS --port 8080"
ARGS="$ARGS --url-alias /static wsgi/static"
ARGS="$ARGS --service-script celery_starter scripts/startCelery.py"
exec mod_wsgi-express start-server $ARGS wsgi/application
mind the last ARGS=... line.
I created a python script that starts up my celery worker and beat.
startCelery.py:
import subprocess

OPENSHIFT_REPO_DIR = "/opt/app-root/src"
OPENSHIFT_DATA_DIR = "/opt/app-root/src/data"

pathToManagePy = OPENSHIFT_REPO_DIR + "/wsgi/podpub"

worker_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "worker",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/%n.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/%n.log",
    "-c 1",
    "--autoreload"
]
print(worker_cmd)
subprocess.Popen(worker_cmd, close_fds=True)

beat_cmd = [
    "python",
    pathToManagePy + "/manage.py",
    "celery",
    "beat",
    "--pidfile=" + OPENSHIFT_REPO_DIR + "/celeryd.pid",
    "--logfile=" + OPENSHIFT_DATA_DIR + "/celery/log/celeryd.log",
]
print(beat_cmd)
subprocess.Popen(beat_cmd)
This was actually working, but I kept receiving a message when I tried to launch the celery worker, saying:
"Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT environment variable (but please think about this before you do)."
Even though I added these configurations to my settings.py in order to remove the pickle serializer, it kept giving me that same error message:
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACEEPT_CONTENT = ['json']
I don't know why.
In the end I added C_FORCE_ROOT to my .s2i/environment:
C_FORCE_ROOT=true
Now it's working, at least I think so. My next job will only run in a few hours. I'm still open to any further suggestions and tips.