On Kubernetes 1.6.1 (OpenShift 3.6 CP) I'm trying to get the subdomain of my cluster using $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN), but it isn't dereferenced at runtime. I'm not sure what I'm doing wrong; the docs show this is how environment variables should be referenced.
https://v1-6.docs.kubernetes.io/docs/api-reference/v1.6/#container-v1-core
- apiVersion: v1
  kind: DeploymentConfig
  spec:
    template:
      metadata:
        labels:
          deploymentconfig: ${APP_NAME}
        name: ${APP_NAME}
      spec:
        containers:
        - name: myapp
          env:
          - name: CLOUD_CLUSTER_SUBDOMAIN
            value: $(OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN)
You'll need to set that value as an environment variable. This is the usage:
oc set env <object-selection> KEY_1=VAL_1
For example, if your DeploymentConfig is named foo and your subdomain is foo.bar, you would use this command:
oc set env dc/foo OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN=foo.bar
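Note that $(VAR) substitution in Kubernetes only resolves variables that are defined earlier in the same container's env list (or come from service environment variables); otherwise the literal string is passed through, so OPENSHIFT_MASTER_DEFAULT_SUBDOMAIN may need to appear before CLOUD_CLUSTER_SUBDOMAIN. To double-check what ends up set (assuming the DeploymentConfig is named foo), you can list the environment afterwards:
oc set env dc/foo --list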
I have the following Minikube default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10953591"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
I can add a new imagePullSecrets entry using the following kubectl patch command:
kubectl patch serviceaccount default --type=json -p '[{"op": "add", "path": "/imagePullSecrets/-", "value": {name: artifactory-credentials}}]'
Here's the updated default service account:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10956724"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
However, when I run the kubectl patch command a second time, a duplicate imagePullSecrets entry is added:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
imagePullSecrets:
- name: gcr-secret
- name: awsecr-cred
- name: dpr-secret
- name: acr-secret
- name: artifactory-credentials
- name: artifactory-credentials
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-18T20:21:13Z"
  name: default
  namespace: default
  resourceVersion: "10957065"
  uid: edcc687f-dbb5-472d-8847-b4dc29096b48
How can I use kubectl patch to add an imagePullSecrets entry only when the entry doesn't already exist? I don't want duplicate imagePullSecrets entries.
I'm using Minikube v1.28.0 and kubectl client version v1.26.1 / server version v1.25.3 on Ubuntu 20.04.5 LTS.
AFAIK, unfortunately there is no such filter available in the official documentation. But we can work around this by replacing the whole list with a merge patch, like kubectl patch serviceaccount default --type=merge -p '{"imagePullSecrets":[{"name": "gcr-secret"},{"name": "artifactory-credentials"},{"name": "acr-secret"}]}'. But then we have to list all of the imagePullSecrets every time.
As @Geoff Alexander mentioned, the other way is to get the details of the resource and validate whether the required entry is already present, e.g. kubectl get serviceaccount default -o json or kubectl get serviceaccount default -o yaml.
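As a minimal sketch of that approach (assuming jq is installed and the secret is named artifactory-credentials), you can guard the patch so it only runs when the entry is missing:
SECRET_NAME="artifactory-credentials"
# Look for an existing imagePullSecrets entry with that name; patch only if it is absent.
if ! kubectl get serviceaccount default -o json \
    | jq -e --arg n "$SECRET_NAME" '.imagePullSecrets[]? | select(.name == $n)' > /dev/null; then
  kubectl patch serviceaccount default --type=json \
    -p "[{\"op\": \"add\", \"path\": \"/imagePullSecrets/-\", \"value\": {\"name\": \"$SECRET_NAME\"}}]"
fi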
I am writing a Terraform file in GCP to run a stateless application on GKE. These are the steps I'm trying to get into Terraform:
1. Create a service account
2. Grant roles to the service account
3. Create the cluster
4. Configure the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mllp-adapter-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mllp-adapter
  template:
    metadata:
      labels:
        app: mllp-adapter
    spec:
      containers:
      - name: mllp-adapter
        imagePullPolicy: Always
        image: gcr.io/cloud-healthcare-containers/mllp-adapter
        ports:
        - containerPort: 2575
          protocol: TCP
          name: "port"
        command:
        - "/usr/mllp_adapter/mllp_adapter"
        - "--port=2575"
        - "--hl7_v2_project_id=PROJECT_ID"
        - "--hl7_v2_location_id=LOCATION"
        - "--hl7_v2_dataset_id=DATASET_ID"
        - "--hl7_v2_store_id=HL7V2_STORE_ID"
        - "--api_addr_prefix=https://healthcare.googleapis.com:443/v1"
        - "--logtostderr"
        - "--receiver_ip=0.0.0.0"
5. Add an internal load balancer to make it accessible outside of the cluster
apiVersion: v1
kind: Service
metadata:
  name: mllp-adapter-service
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
  - name: port
    port: 2575
    targetPort: 2575
    protocol: TCP
  selector:
    app: mllp-adapter
I've found this example for creating an Autopilot public cluster; however, I don't know where to specify the YAML file from my step 4.
I've also found this other blueprint that deploys a service to the created cluster using the kubernetes provider, which I hope solves my step 5.
I'm new to Terraform and GCP architecture in general. I got all of this working by following the documentation, but I'm now trying to find a way to deploy this to a dev environment for testing purposes; that environment is outside of my sandbox and is supposed to be deployed using Terraform. I think I'm getting close to it.
Can someone enlighten me on what the next step is, or how to add those YAML configurations to the .tf examples I've found?
Am I doing this right? :(
You can use this script and extend it further to deploy the YAML files with it: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public
The above TF script creates the GKE Autopilot cluster; for the YAML deployment you can use the Kubernetes provider and apply the files with it (a sketch follows at the end of this answer).
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment
Full example: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/tree/master/examples/simple_autopilot_public
main.tf
locals {
  cluster_type           = "simple-autopilot-public"
  network_name           = "simple-autopilot-public-network"
  subnet_name            = "simple-autopilot-public-subnet"
  master_auth_subnetwork = "simple-autopilot-public-master-subnet"
  pods_range_name        = "ip-range-pods-simple-autopilot-public"
  svc_range_name         = "ip-range-svc-simple-autopilot-public"
  subnet_names           = [for subnet_self_link in module.gcp-network.subnets_self_links : split("/", subnet_self_link)[length(split("/", subnet_self_link)) - 1]]
}

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}

module "gke" {
  source                          = "../../modules/beta-autopilot-public-cluster/"
  project_id                      = var.project_id
  name                            = "${local.cluster_type}-cluster"
  regional                        = true
  region                          = var.region
  network                         = module.gcp-network.network_name
  subnetwork                      = local.subnet_names[index(module.gcp-network.subnets_names, local.subnet_name)]
  ip_range_pods                   = local.pods_range_name
  ip_range_services               = local.svc_range_name
  release_channel                 = "REGULAR"
  enable_vertical_pod_autoscaling = true
}
Another good example that uses the YAML files as templates and applies them with Terraform: https://github.com/epiphone/gke-terraform-example/tree/master/terraform/dev
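As a minimal sketch of steps 4 and 5 under the setup above (assuming the Deployment and Service manifests are saved next to main.tf as mllp-adapter-deployment.yaml and mllp-adapter-service.yaml; the file names are placeholders), the kubernetes provider that main.tf already configures can apply them directly:

# Reads the plain YAML files and hands them to the cluster as-is.
resource "kubernetes_manifest" "mllp_adapter_deployment" {
  manifest = yamldecode(file("${path.module}/mllp-adapter-deployment.yaml"))
}

resource "kubernetes_manifest" "mllp_adapter_service" {
  manifest = yamldecode(file("${path.module}/mllp-adapter-service.yaml"))
}

Note that kubernetes_manifest needs to reach the cluster API at plan time, so if the cluster is created in the same configuration you may prefer the typed resources (e.g. kubernetes_deployment) or a separate apply step.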
I would like to build an image from a Dockerfile using an OpenShift BuildConfig that references an existing ImageStream in the FROM line. That is, if I have:
$ oc get imagestream openshift-build-example -o yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openshift-build-example
  namespace: sandbox
spec:
  lookupPolicy:
    local: true
I would like to be able to submit a build that uses a Dockerfile like this:
FROM openshift-build-example:parent
But this doesn't work. If I use a fully qualified image specification, like this...
FROM image-registry.openshift-image-registry.svc:5000/sandbox/openshift-build-example:parent
...it works, but this is problematic, because it requires referencing the namespace in the image specification. This means the builds can't be conveniently deployed into another namespace.
Is there any way to make this work?
For reference purposes, the build is configured in the following BuildConfig resource:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: buildconfig-child
spec:
  failedBuildsHistoryLimit: 5
  successfulBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: openshift-build-example:child
  runPolicy: Serial
  source:
    git:
      ref: main
      uri: https://github.com/larsks/openshift-build-example
    type: Git
    contextDir: image/child
  strategy:
    dockerStrategy:
      dockerfilePath: Dockerfile
    type: Docker
  triggers:
  - type: "GitHub"
    github:
      secretReference:
        name: "buildconfig-child-webhook"
  - type: "Generic"
    generic:
      secret: "buildconfig-child-webhook"
And the referenced Dockerfile is:
# FIXME
FROM openshift-build-example:parent
COPY index.html /var/www/localhost/htdocs/index.html
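As an aside that is not part of the original post: OpenShift's Docker build strategy can also override the Dockerfile's FROM via spec.strategy.dockerStrategy.from, and an ImageStreamTag reference there is resolved in the build's namespace, so no registry or namespace needs to be hard-coded. A sketch of just the strategy section:

  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
      # Replaces the FROM image in the Dockerfile with this ImageStreamTag.
      from:
        kind: ImageStreamTag
        name: openshift-build-example:parent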
I have a sidecar like this:
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: test
  namespace: testns
spec:
  workloadSelector:
    labels:
      app: test
...
and a kustomization like:
resources:
- ../../base
nameSuffix: -dev
But kustomize doesn't adapt the workloadSelector label app to test-dev as I would expect it to do. The name suffix is only appended to the name of the sidecar. Any ideas why?
By default kustomize namePrefix and nameSuffix only apply to metadata/name for all resources.
There are a set of configured nameReferences that will also be transformed with the appropriate name, but they are limited to resource names.
See here for more info: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md#prefixsuffix-transformer
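If you want to see exactly which fields the transformers changed, you can render the overlay locally and inspect the output (overlays/dev below is a placeholder for your overlay directory):
kustomize build overlays/dev
# or, using the version bundled with kubectl:
kubectl kustomize overlays/dev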
I have a small application built in Django. It serves as a frontend and is being installed in one of our K8s clusters.
I'm using Helm to deploy the charts, and I'm failing to serve Django's static files correctly.
I've searched in multiple places, but I couldn't find anything that fixes my problem.
This is my ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orion-toolbelt
  namespace: {{ .Values.global.namespace }}
  annotations:
    # ingress.kubernetes.io/secure-backends: "false"
    # nginx.ingress.kubernetes.io/secure-backends: "false"
    ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/ingress.allow-http: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 500m
spec:
  rules:
  - http:
      paths:
      - path: /orion-toolbelt
        backend:
          serviceName: orion-toolbelt
          servicePort: {{ .Values.service.port }}
The static file location in Django is kept at the default, e.g.
STATIC_URL = "/static"
I ended up unable to access the static files that way.
What should I do next?
Attached is the error (image: HTML-static_files-error).
-- EDIT: 5/8/19 --
The pod's deployment.yaml looks like the following:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
  labels:
    app: orion-toolbelt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orion-toolbelt
  template:
    metadata:
      labels:
        app: orion-toolbelt
    spec:
      containers:
      - name: orion-toolbelt
        image: {{ .Values.global.repository.imagerepo }}/orion-toolbelt:10.4-SNAPSHOT-15
        ports:
        - containerPort: {{ .Values.service.port }}
        env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Values.global.secretname }}
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Values.global.secretname }}
        - name: "MASTER_IP"
          valueFrom:
            secretKeyRef:
              key: master_ip
              name: {{ .Values.global.secretname }}
        imagePullPolicy: {{ .Values.global.pullPolicy }}
      imagePullSecrets:
      - name: {{ .Values.global.secretname }}
EDIT2: 20/8/19 - adding service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: {{ .Values.global.namespace }}
  name: orion-toolbelt
spec:
  selector:
    app: orion-toolbelt
  ports:
  - protocol: TCP
    port: {{ .Values.service.port }}
    targetPort: {{ .Values.service.port }}
You should simply include the /static directory within the container and adjust the path to it in the application.
Otherwise, if it must be /static, or you don't want to bundle the static files in the container, or you want other containers to access the volume, you should think about mounting a Kubernetes volume to your Deployment/StatefulSet (see the sketch at the end of this answer).
#Edit
You can test whether this path exists in your Kubernetes pod this way:
kubectl get po <- this command will give you the name of your pod
kubectl exec -it <name of pod> sh <- this command will let you execute commands in the container shell
There you can test whether your path exists. If it does, it is the fault of your application; if it does not, you added it incorrectly in the Docker image.
You can also add a path to your Kubernetes pod without specifying it in the Docker container. Check this link for details.
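A minimal sketch of the volume approach, assuming the application (or an init step running collectstatic) populates /static at startup; the volume name static-files is a placeholder, and the snippet would slot into the StatefulSet's pod spec shown above:

    spec:
      volumes:
      - name: static-files
        emptyDir: {}
      containers:
      - name: orion-toolbelt
        volumeMounts:
        # Shared, initially empty volume; other containers (e.g. an nginx sidecar) can mount it too.
        - name: static-files
          mountPath: /static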
As described by community member Marcin Ginszt:
According to the information provided in the post, it's difficult to guess where the problem is with your Django app config/settings.
Please refer to Managing static files (e.g. images, JavaScript, CSS)
NOTE:
Serving the files - STATIC_URL = '/static/'
In addition to these configuration steps, you’ll also need to actually serve the static files.
During development, if you use django.contrib.staticfiles, this will be done automatically by runserver when DEBUG is set to True (see django.contrib.staticfiles.views.serve()).
This method is grossly inefficient and probably insecure, so it is unsuitable for production.
See Deploying static files for proper strategies to serve static files in production environments.
Django doesn’t serve files itself; it leaves that job to whichever Web server you choose.
We recommend using a separate Web server – i.e., one that’s not also running Django – for serving media. Here are some good choices:
Nginx
A stripped-down version of Apache
Here you can find an example of how to serve static files using the collectstatic command.
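For illustration (assuming a standard manage.py project layout and STATIC_ROOT set in settings.py), collecting all static files into a single directory for the web server looks like this:
python manage.py collectstatic --noinput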
Please let me know if it helped.