Background
We're using Jenkins to deploy a new version of a Kubernetes (k8s) replication controller to our test or prod cluster. The test and prod k8s clusters live in different Google Cloud Platform projects. We have configured two profiles for the gcloud SDK on Jenkins, one for test (test-profile) and one for prod (prod-profile), and we have defined a managed script in Jenkins that performs the rolling update for our replication controller. The problem is that I cannot find a way to control which project the kubectl rolling-update command targets (you can specify which cluster, but not which project, AFAICT). So right now the script that does the rolling update to our test server looks something like this:
gcloud config configurations activate test-profile && kubectl rolling-update ...
While this works, it could be extremely dangerous if two jobs run concurrently for different environments. Say job 1 targets the test environment and job 2 targets prod. If job 2 switches the active profile to "prod-profile" before job 1 has executed its rolling-update command, job 1 will target the wrong project and, in the worst case, update the wrong replication controller (if the clusters have the same name).
Question
Is there a way to specify which project a kubectl command targets (for example during a rolling update) that is safe to run concurrently?
You can pass the --cluster= or --context= flags to kubectl to set the target for a single invocation. For example, if I have two clusters, "foo" and "bar", in my ~/.kube/config:
$ kubectl --cluster=foo get pods
NAME READY STATUS RESTARTS AGE
foo-ht1qh 1/1 Running 0 3h
foo-wf8f4 1/1 Running 0 3h
foo-yvgpd 1/1 Running 0 3h
vs
$ kubectl --cluster=bar get pods
NAME READY STATUS RESTARTS AGE
bar-de4h7 1/1 Running 0 9h
bar-c4g03 1/1 Running 0 9h
bar-2sprd 1/1 Running 0 9h
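Applied to the rolling-update script in the question, it could look something like this (a sketch: the context names, replication controller name and images are placeholders for whatever your setup actually uses):
# test job: pinned to the test cluster's context for this single invocation
kubectl --context=test-context rolling-update my-rc --image=gcr.io/test-project/my-image:v2
# prod job: pinned to the prod cluster's context
kubectl --context=prod-context rolling-update my-rc --image=gcr.io/prod-project/my-image:v2
Because the context is passed per invocation instead of stored as shared global state, two Jenkins jobs can run these concurrently without stepping on each other.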
You may use gcloud config set project yourProject to set the project property. See https://cloud.google.com/sdk/gcloud/reference/config/set
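For example (the project ID is a placeholder):
gcloud config set project my-test-project
Note that, just like activating a configuration, this modifies shared state of the gcloud installation, so it carries the same concurrency caveat as the profile switch described in the question.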
Related
I deployed two Cloud Run services (staging and production) using GCP Cloud Build with this command:
entrypoint: gcloud
args: ['run', 'deploy', 'app', '--project', '$PROJECT_ID', '--image', 'image:$COMMIT_SHA', '--region', 'us-central1', '--allow-unauthenticated', '--memory', '256Mi', '--update-env-vars', 'ENV=production']
I noticed that the same command has different behavior on staging and production. On one of my services, the traffic is not routed automatically to the newest revision.
Already have image (with digest):
Deploying container to Cloud Run service
Deploying...
Setting IAM Policy..............done
Creating Revision....................................done
Done.
Service [] revision [] has been deployed and is serving 0 percent of traffic.
I am missing this step:
Routing traffic......done
I checked the Cloud Run service.yaml and the traffic argument is set:
traffic:
- latestRevision: true
  percent: 100
If I run the same command on GCP console, everything works as expected.
Question:
Why does gcloud run deploy not route the traffic when I run it from the Cloud Build pipeline? (I do not have the --no-traffic flag set.)
It seems to be related to this issue: https://issuetracker.google.com/issues/172165141
There are two modes available to you: route traffic to the latest revision, or distribute it manually.
If you switch to manual routing, the service stays that way until you revert it with gcloud run services update-traffic testservice --platform="managed" --to-latest. This is meant to keep things simple and to avoid ambiguity and unexpected traffic switches.
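If a service has ended up in manual mode and you want the pipeline itself to move traffic to the newest revision, one option is an explicit traffic step after the deploy step. A sketch of such a Cloud Build step, reusing the service name, project and region from the question (the builder image here is just the public cloud-sdk builder; use whatever builder your deploy step already uses):
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'services', 'update-traffic', 'app', '--project', '$PROJECT_ID', '--region', 'us-central1', '--platform', 'managed', '--to-latest']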
I am trying to deploy the aws-load-balancer-controller on my Kubernetes cluster on AWS by following the steps given in https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
After the YAML file is applied, checking the status of the deployment gives:
$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 0/1 1 0 6m39s
I tried to debug it and got this:
$ kubectl logs -n kube-system deployment.apps/aws-load-balancer-controller
{"level":"info","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"error","logger":"setup","msg":"unable to create controller","controller":"Ingress","error":"the server could not find the requested resource"}
The yaml file is pulled directly from https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.3.0/v2_3_0_full.yaml and apart from changing the Kubernetes cluster name, no other modifications are done.
Please let me know if I am missing some step in the configuration.
Any help would be highly appreciated.
I am not sure if this helps, but for me the issue was that the version of the aws-load-balancer-controller was not compatible with the version of Kubernetes.
aws-load-balancer-controller = v2.3.1
Kubernetes/EKS = 1.22
Github issue for more information:
https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/2495
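To confirm which versions are actually in play, something like the following should work (a sketch; the jsonpath assumes the controller is the first container in the deployment):
# image tag of the running controller
kubectl -n kube-system get deployment aws-load-balancer-controller -o jsonpath='{.spec.template.spec.containers[0].image}'
# client and server (cluster) versions
kubectl version --short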
I am trying my hand at Kubernetes and I tried to deploy an image as a k8s service.
root@KubernetesMiniKube:/usr/local/bin# kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
pod/hello-minikube created
root@KubernetesMiniKube:/usr/local/bin# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube 1/1 Running 0 16s
root@KubernetesMiniKube:/usr/local/bin# kubectl get deployments
No resources found in default namespace.
Why am I seeing "No resources found" when there is actually a resource running in the default namespace?
When you use $ kubectl run it will create a pod.
In your example that's exactly what happened: it created a pod named hello-minikube.
pod/hello-minikube created
If you want to create a deployment
Deployments represent a set of multiple, identical Pods with no unique identities. A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive.
you can do it using the command:
$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
deployment.apps/hello-minikube created
user@cloudshell:$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 8s
You can also create a deployment using YAML.
Save YAML from this documentation example and use kubectl apply.
$ vi nginx.yaml
<paste a proper YAML definition here; you can also use the nano editor or download a ready-made yaml>
user@cloudshell:$ kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
hello-minikube 1/1 1 1 3m48s
nginx-deployment 3/3 3 3 64s
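For reference, a minimal Deployment manifest along the lines of the documentation example (3 replicas of nginx, which matches the nginx-deployment output above; the nginx:1.14.2 tag is simply the one used in the docs) looks roughly like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80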
Please let me know if you have further questions regarding this answer.
I am not exactly sure what's going on, which is why I am asking this question. When I run this command:
kubectl config get-clusters
I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
then I run:
kubectl config current-context
and I get:
arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and if I run kubectl get pods, I get the expected output.
But how do I switch to the other cluster/context? What's the difference between a cluster and a context? I can't figure out how these commands differ:
When I run them, I still get the pods from the wrong cluster:
root@4c2ab870baaf:/# kubectl config set-context arn:aws:eks:us-west-2:913617820371:cluster/eks1
Context "arn:aws:eks:us-west-2:913617820371:cluster/eks1" modified.
root@4c2ab870baaf:/#
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
root@4c2ab870baaf:/# kubectl config set-cluster arn:aws:eks:us-west-2:91xxxxxx371:cluster/eks1
Cluster "arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1" set.
root@4c2ab870baaf:/# kubectl get pods
NAME READY STATUS RESTARTS AGE
apache-spike-579598949b-5bjjs 1/1 Running 0 14d
apache-spike-579598949b-957gv 1/1 Running 0 14d
apache-spike-579598949b-k49hf 1/1 Running 0 14d
So I really don't know how to properly switch between clusters or contexts, or how to switch the auth routine when doing so.
For example:
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
    user: arn:aws:eks:us-west-2:91xxxx371:cluster/ignitecluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/ignitecluster
- context:
    cluster: arn:aws:eks:us-west-2:91xxxx371:cluster/teros-eks-cluster
    user: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
  name: arn:aws:eks:us-west-2:91xxxxx371:cluster/teros-eks-cluster
To clarify the difference between set-context and use-context:
A context is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. So when you do set-context, you are just adding or modifying context details in your configuration file ~/.kube/config, but it doesn't switch you to that context, while use-context actually does.
Thus, as Vasily mentioned, in order to switch between clusters run
kubectl config use-context <CONTEXT-NAME>
Also, if you run kubectl config get-contexts you will see a list of contexts with an indication of the current one.
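If the full ARNs are unwieldy to type, you can also register a shorter alias as a context and switch to it. A sketch (the alias eks1 is arbitrary; the --cluster and --user values must match the entries that already exist in your kubeconfig, which in the excerpt above are the same ARN):
# create (or update) a context named "eks1" pointing at the existing cluster/user entries
kubectl config set-context eks1 \
  --cluster=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1 \
  --user=arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
# actually switch to it
kubectl config use-context eks1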
Use
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks-cluster-1
and
kubectl config use-context arn:aws:eks:us-west-2:91xxxxx371:cluster/eks1
Consider using kubectx for managing your contexts.
Usage
View all contexts (the current context is bolded):
$ kubectx
arn:aws:eks:us-east-1:12234567:cluster/eks_app
->gke_my_second_cluster
my-rnd
my-prod
Switch to another context:
$ kubectx my-rnd
Switched to context "my-rnd".
Bonus:
In the same link, check out the kubens tool as well.
This is the best command to switch between different EKS clusters.
I use it every day.
aws eks update-kubeconfig --name example
Documentation:
https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html
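For the two clusters from the question, that would look something like this (each call writes or updates the kubeconfig entry for that cluster and makes it the current context):
aws eks update-kubeconfig --name eks-cluster-1 --region us-west-2
aws eks update-kubeconfig --name eks1 --region us-west-2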
I want to build a script to deploy a docker container to ECS.
This is the command I am using.
ecs-cli compose --file src/main/docker/docker-compose-export.yml -p export service up
It works about 60% of the time. The other 40% of the time the command stalls.
This is the compose file
version: '2'
services:
  export:
    image: 1234567890lalala.dkr.ecr.eu-central-1.amazonaws.com/export:${VERSION}
    cpu_shares: 200
    mem_limit: 100000000
I have uploaded the image to the ECR registry beforehand.
This is the log I am getting:
WARN[0000] Skipping unsupported YAML option... option name=networks
WARN[0000] Skipping unsupported YAML option for service... option name=networks service name=export
INFO[0000] Using ECS task definition TaskDefinition="ecscompose-export:3"
INFO[0000] Updated the ECS service with a new task definition. Old containers will be stopped automatically, and replaced with new ones desiredCount=1 serviceName=ecscompose-service-export taskDefinition="ecscompose-export:3"
INFO[0000] Describe ECS Service status desiredCount=1 runningCount=1 serviceName=ecscompose-service-export
INFO[0030] Describe ECS Service status desiredCount=1 runningCount=1 serviceName=ecscompose-service-export
INFO[0061] Describe ECS Service status desiredCount=1 runningCount=1 serviceName=ecscompose-service-export
INFO[0091] Describe ECS Service status desiredCount=1 runningCount=1 serviceName=ecscompose-service-export
The running count goes to 2 and then back to 1 (which is expected). But instead of stopping, as it does when everything works, it keeps checking the status for a while and finally just stalls.
The service on the cluster is in a good state. The new docker image is running and everything is fine. It's just that the command doesn't stop.
Does anyone have an idea how to fix this? Are there other commands I could use to achieve the same thing more reliably?