I have created a GKE cluster using a Terraform script. I have a scenario where the /etc/hosts file has to be updated. Is it possible to update the hosts file on the worker nodes during K8s cluster creation using Terraform?
With Terraform it's not possible to access the node's filesystem directly. You can instead use a DaemonSet with a privileged security context, see below:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: ssd-startup-script
  labels:
    app: ssd-startup-script
spec:
  template:
    metadata:
      labels:
        app: ssd-startup-script
    spec:
      hostPID: true
      containers:
        - name: ssd-startup-script
          image: gcr.io/google-containers/startup-script:v1
          imagePullPolicy: Always
          securityContext:
            privileged: true
          env:
            - name: STARTUP_SCRIPT
              value: |
                #!/bin/bash
                <YOUR COMMAND LINE>
                <YOUR COMMAND LINE>
                <YOUR COMMAND LINE>
                echo Done
Then you need to run kubectl apply -f <daemonset yaml file>.
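For the /etc/hosts use case, the STARTUP_SCRIPT could look something like the sketch below. This is only an illustration under the assumption that you want to append a static entry; the IP and hostname (10.0.0.10, my-internal-host) are made-up placeholders, not values from the question:

#!/bin/bash
# hypothetical example: append a static hosts entry on the node, only once
grep -q "my-internal-host" /etc/hosts || echo "10.0.0.10 my-internal-host" >> /etc/hosts
echo Done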
I was trying to get started with AWS EKS Kubernetes cluster provisioning using Terraform. I was following the tutorial and got stuck at the step with the kubeconfig file.
After running terraform output kubeconfig > ~/.kube/config, I should be able to communicate with the cluster. Unfortunately, when I run any kubectl command, for example cluster-info, I get the error error loading config file, yaml: line 4: mapping values are not allowed in this context.
This is the code of the outputs.tf file:
# Outputs
#
locals {
  config_map_aws_auth = <<CONFIGMAPAWSAUTH
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${aws_iam_role.demo-node.arn}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
CONFIGMAPAWSAUTH

  kubeconfig = <<KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.demo.endpoint}
    certificate-authority-data: ${aws_eks_cluster.demo.certificate_authority[0].data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${var.cluster-name}"
KUBECONFIG
}

output "config_map_aws_auth" {
  value = local.config_map_aws_auth
}

output "kubeconfig" {
  value = local.kubeconfig
}
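Not part of the original question, but one way to narrow this down is to look at what actually ended up in the file, since the parser points at line 4 of the written config (a hedged debugging sketch using standard commands):

terraform output kubeconfig | head -n 6
head -n 6 ~/.kube/config
# compare the two: any extra characters, quotes, or heredoc markers wrapped
# around the YAML will break kubectl's parser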
We have a pod which needs a certificate file.
We need to provide a path to a certificate file (we have this certificate). How should we put this certificate file into k8s so that the pod has access to it,
e.g. so that we can provide it to the pod like the following: "/path/to/certificate_authority.crt"?
Should we use a secret/configmap, and if yes, how?
Create a TLS secret then mount it to the desired folder.
apiVersion: v1
kind: Secret
metadata:
  name: secret-tls
type: kubernetes.io/tls
data:
  # the data is abbreviated in this example
  tls.crt: |
    MIIC2DCCAcCgAwIBAgIBATANBgkqh ...
  tls.key: |
    MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ...
Documentation
To mount the secret in a volume from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/path/to/"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: secret-tls
Documentation
Create a secret, then mount it in the desired folder.
Official docs here: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod
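Since the question only mentions a certificate (no private key), a plain generic secret is also an option. A minimal sketch, assuming the local file is ./certificate_authority.crt and using a hypothetical secret name ca-cert; mounting it the same way as in the pod example above makes it visible at /path/to/certificate_authority.crt inside the container:

# create a generic secret holding just the CA certificate
kubectl create secret generic ca-cert \
  --from-file=certificate_authority.crt=./certificate_authority.crt
# then reference it from the pod volume with secretName: ca-cert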
I have a deployment on Kubernetes (AWS EKS), with several environment variables defined in the deployment .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myApp
  name: myAppName
spec:
  replicas: 2
  (...)
    spec:
      containers:
      - env:
        - name: MY_ENV_VAR
          value: "my_value"
        image: myDockerImage:prodV1
        (...)
If I want to upgrade the pods to another version of the docker image, say prodV2, I can perform a rolling update which replaces the pods from prodV1 to prodV2 with zero downtime.
However, if I add another env variable, say MY_ENV_VAR_2: "my_value_2", and perform the same rolling update, I don't see the new env var in the container. The only solution I found in order to have both env vars was to manually execute
kubectl delete deployment myAppName
kubectl create -f myDeploymentFile.yaml
As you can see, this is not zero downtime, as deleting the deployment will terminate my pods and introduce a downtime until the new deployment is created and the new pods start.
Is there a way to better do this? Thank you!
Here is an example you might want to test yourself.
Notice I used spec.strategy.type: RollingUpdate.
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_value"
Apply:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl exec -it nginx-<hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_value
Notice the env var is set as in the YAML.
Now we edit the env in deployment.yaml:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        env:
        - name: MY_ENV_VAR
          value: "my_new_value"
Apply and wait for it to update:
➜ ~ kubectl apply -f deployment.yaml
➜ ~ kubectl get po --watch
# after it updated use Ctrl+C to stop the watch and run:
➜ ~ kubectl exec -it nginx-<new_hash> env | grep MY_ENV_VAR
MY_ENV_VAR=my_new_value
As you should see, the env changed. That is pretty much it.
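If you prefer not to watch the pods by hand, an alternative (standard kubectl commands, not from the original answer) is to wait on the rollout itself and inspect its history:

➜ ~ kubectl rollout status deployment/nginx
➜ ~ kubectl rollout history deployment/nginx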
I am trying to follow this guide:
https://aws.amazon.com/blogs/containers/using-alb-ingress-controller-with-amazon-eks-on-fargate/
Steps below:
Cluster provisioning
AWS_REGION=us-east-1
CLUSTER_NAME=eks-fargate-alb-demo
eksctl create cluster --name $CLUSTER_NAME --region $AWS_REGION --fargate
kubectl get svc
You should get the following response:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   16h
Set up OIDC provider with the cluster and create the IAM policy used by the ALB Ingress Controller
wget -O alb-ingress-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/master/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-ingress-iam-policy.json
STACK_NAME=eksctl-$CLUSTER_NAME-cluster
VPC_ID=$(aws cloudformation describe-stacks --stack-name "$STACK_NAME" | jq -r '[.Stacks[0].Outputs[] | {key: .OutputKey, value: .OutputValue}] | from_entries' | jq -r '.VPC')
AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
cat > rbac-role.yaml <<-EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
EOF
kubectl apply -f rbac-role.yaml
These commands will create two resources for us and the output should be similar to this:
clusterrole.rbac.authorization.k8s.io/alb-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/alb-ingress-controller created
And finally the Kubernetes Service Account:
eksctl create iamserviceaccount \
    --name alb-ingress-controller \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/ALBIngressControllerIAMPolicy \
    --approve
This eksctl command will deploy a new CloudFormation stack with an IAM role. Wait for it to finish before moving on to the next steps.
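Not in the original guide, but a quick way to confirm the service account exists and is annotated with the IAM role before deploying the controller:

eksctl get iamserviceaccount --cluster $CLUSTER_NAME
kubectl get serviceaccount alb-ingress-controller -n kube-system -o yaml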
Deploy the ALB Ingress Controller
Let’s now deploy the ALB Ingress Controller to our cluster:
cat > alb-ingress-controller.yaml <<-EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --cluster-name=$CLUSTER_NAME
            - --aws-vpc-id=$VPC_ID
            - --aws-region=$AWS_REGION
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
      serviceAccountName: alb-ingress-controller
EOF
kubectl apply -f alb-ingress-controller.yaml
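A hedged sanity check (not part of the original guide) before deploying the application: make sure the controller pod is actually running and its logs are clean:

kubectl get deployment -n kube-system alb-ingress-controller
kubectl logs -n kube-system deployment/alb-ingress-controller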
Deploy sample application to the cluster
Now that we have our ingress controller running, we can deploy the application to the cluster and create an ingress resource to expose it.
Let’s start with a deployment:
cat > nginx-deployment.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "nginx-deployment"
  namespace: "default"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: "nginx"
    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: Always
        name: "nginx"
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yaml
The output should be similar to:
deployment.extensions/nginx-deployment created
Then, let’s create a service so we can expose the NGINX pods:
cat > nginx-service.yaml <<-EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    alb.ingress.kubernetes.io/target-type: ip
  name: "nginx-service"
  namespace: "default"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "nginx"
EOF
kubectl apply -f nginx-service.yaml
The output will be similar to:
service/nginx-service created
Finally, let’s create our ingress resource:
cat > nginx-ingress.yaml <<-EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "nginx-ingress"
  namespace: "default"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: nginx-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "nginx-service"
              servicePort: 80
EOF
kubectl apply -f nginx-ingress.yaml
The output will be:
ingress.extensions/nginx-ingress created
Once everything is done, you will be able to get the ALB URL by running the following command:
kubectl get ingress nginx-ingress
The output of this command will be similar to this one:
NAME            HOSTS   ADDRESS                                                                PORTS   AGE
nginx-ingress   *       5e07dbe1-default-ngnxingr-2e9-113757324.us-east-2.elb.amazonaws.com   80      9s
But I am unable to get the ALB URL in this step:
kubectl get ingress nginx-ingress
Any help? Thanks in advance.
I had the same issue and I fixed it by updating alb-ingress-controller.yaml. I replaced $CLUSTER_NAME, $VPC_ID and $AWS_REGION with their values.
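To see whether the placeholders were expanded and why no ALB is being created, a debugging sketch with standard kubectl commands (not from the original answer):

# check the args the controller is actually running with (they must not contain literal $VARS)
kubectl get deployment alb-ingress-controller -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
# the controller logs and the ingress events usually explain why the ALB was not provisioned
kubectl logs -n kube-system deployment/alb-ingress-controller
kubectl describe ingress nginx-ingress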
I am following this guide to create a cluster autoscaler on AWS:
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:v0.6.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=2:4:k8s-worker-asg-1
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
I have changed k8s-worker-asg-1 to my current ASG name, which was created by kops.
But when I run kubectl apply -f deployment.yaml and check the pods with kubectl get pods -n=kube-system, it returns:
NAME                                  READY   STATUS             RESTARTS   AGE
cluster-autoscaler-75ccf5b9c9-lhts8   0/1     CrashLoopBackOff   6          8m
I tried to view its logs with kubectl logs cluster-autoscaler-75ccf5b9c9-lhts8 -n=kube-system, which returns:
failed to open log file "/var/log/pods/8edc3073-dc0b-11e7-a6e5-06361ac15b44/cluster-autoscaler_4.log": open /var/log/pods/8edc3073-dc0b-11e7-a6e5-06361ac15b44/cluster-autoscaler_4.log: no such file or directory
I also tried to describe the pod with kubectl describe cluster-autoscaler-75ccf5b9c9-lhts8 -n=kube-system, which returns:
the server doesn't have a resource type "cluster-autoscaler-75ccf5b9c9-lhts8"
So how do I debug this issue? What could be the reason? Does it need storage on AWS? I haven't created any storage on AWS yet.
By the way, I have another question. If I use kops to create a k8s cluster on AWS and then change maxSize and minSize for the nodes instance group:
$ kops edit ig nodes
> maxSize: 2
> minSize: 2
$ kops update cluster ${CLUSTER_FULL_NAME} --yes
By now the Auto Scaling Group on AWS has already become Min: 2, Max: 4.
Is it necessary to run this deployment again?
https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws
Can't kops change both the ASG and the k8s cluster? Why is another step needed to deploy cluster-autoscaler into the kube-system namespace?
I have tried this official solution from the K8s repositories. You also need to add additional IAM policies for access to the AWS Auto Scaling resources.
Then, modify the script in https://github.com/kubernetes/kops/tree/master/addons/cluster-autoscaler to install Cluster Autoscaler on your K8s cluster. Note that you likely want to change AWS_REGION and GROUP_NAME, and probably MIN_NODES and MAX_NODES. It worked for me.
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeAutoScalingInstances",
            "autoscaling:SetDesiredCapacity",
            "autoscaling:TerminateInstanceInAutoScalingGroup"
          ],
          "Resource": ["*"]
        }
      ]
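For reference, a sketch of how this spec change is typically applied with kops (standard kops commands; ${CLUSTER_FULL_NAME} is the same placeholder used earlier in the question):

kops edit cluster ${CLUSTER_FULL_NAME}          # add the additionalPolicies block under spec
kops update cluster ${CLUSTER_FULL_NAME} --yes
# kops will tell you whether a rolling update of the nodes is also required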