How to inject a sidecar via a deployment that points to the new istiod rather than the old istiod - istio

We have installed one Istio control plane and labeled the namespace (default) with istio.io/rev=1-8-1, so all services in this namespace point to the 1-8-1 istiod. Now I have installed a new Istio whose revision is istio.io/rev=1-14-1. I want to test only one deployment's pods against the new istiod rather than the old one.
Activity performed:
I updated the pod labels to istio.io/rev=1-14-1 and recreated the pod, but the pod still points to the old istiod, I think because the namespace label takes precedence over the object's labels.
How can I inject the new Istio's sidecar into specific pods without removing the namespace labels?
I want to test a few deployments by applying labels or annotations so that they point to the new istiod; if that works fine, I will roll out all services.

It looks like you are trying to perform a canary upgrade on your control plane.
Try this:
Create a new namespace to test the new control plane, e.g. kubectl create ns test
Label the namespace so that sidecar injection uses the new control plane, e.g. kubectl label namespace test istio-injection- istio.io/rev=[replace with the control plane revision you want to use]
Create the new deployment in the test namespace and verify the sidecar proxy version, e.g. kubectl -n test apply -f deploy.yaml (deploy.yaml is the YAML file used for your deployment)
See here for detailed example and description: https://istio.io/latest/docs/setup/upgrade/canary/
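Putting those steps together, a minimal sketch (assuming the new revision is 1-14-1 as in your question and that your manifest is called deploy.yaml) could look like this:
# Create a throwaway namespace for testing the new control plane.
kubectl create ns test
# Point the namespace at the new revision's injection webhook.
# If the namespace already carries the old istio-injection label,
# remove it first with: kubectl label namespace test istio-injection-
kubectl label namespace test istio.io/rev=1-14-1
# Deploy into the test namespace.
kubectl -n test apply -f deploy.yaml
# Check which istiod each sidecar is connected to (ISTIOD/VERSION columns).
istioctl proxy-status
Once the pods in the test namespace show the new proxy version, you can relabel your real namespaces and restart their workloads one at a time.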

Related

Using a sidecar to download artefacts in pod

I am rebuilding a system like gitlab where users can configure pipelines with individual jobs running on AWS ECS (Fargate).
One important piece of functionality is downloading and uploading artefacts (files) generated by these jobs.
I want to solve this by running a sidecar with the logic responsible for the artefacts next to the actual code running the logic of the job.
One important requirement: it must be assumed that the "main" container runs custom code which I cannot control.
It seems, however, that there is no clean solution in Kubernetes for starting a pod with this order of containers:
Start a sidecar and trigger the download of artefacts.
Once the artefact download completes, start the logic in the main container alongside the sidecar and run it to completion.
Once the main container finishes, upload the new artefacts and end the sidecar.
Any suggestion is welcome
Edit:
I found the container dependency attribute on AWS ECS and will try it out now: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html#example_task_definition-containerdependency
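For anyone else looking, a rough sketch of that container-dependency pattern (all names, images, and sizes below are placeholders; the key pieces are dependsOn and which container is marked essential, and container dependencies need Fargate platform version 1.3.0 or later):
# Sketch: the artefact handling is split into two short-lived containers that
# share a task-level volume, instead of one long-running sidecar.
# - "main" only starts after the downloader exits with SUCCESS.
# - the uploader only starts after "main" reaches COMPLETE.
# - only the uploader is essential: if "main" were essential, ECS would stop
#   the whole task when it exits and the upload would never run.
cat > taskdef.json <<'EOF'
{
  "family": "job-with-artefacts",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "volumes": [{ "name": "artefacts" }],
  "containerDefinitions": [
    {
      "name": "artefact-downloader",
      "image": "my-registry/artefact-sidecar:latest",
      "essential": false,
      "mountPoints": [{ "sourceVolume": "artefacts", "containerPath": "/artefacts" }]
    },
    {
      "name": "main",
      "image": "customer-supplied-image:latest",
      "essential": false,
      "mountPoints": [{ "sourceVolume": "artefacts", "containerPath": "/artefacts" }],
      "dependsOn": [{ "containerName": "artefact-downloader", "condition": "SUCCESS" }]
    },
    {
      "name": "artefact-uploader",
      "image": "my-registry/artefact-sidecar:latest",
      "essential": true,
      "mountPoints": [{ "sourceVolume": "artefacts", "containerPath": "/artefacts" }],
      "dependsOn": [{ "containerName": "main", "condition": "COMPLETE" }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json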

When using ArgoCD, modifications to initContainers do not update the pod

I have a question regarding ArgoCD. When using argocd app sync to deploy our helm charts, we have a weird issue where changes to initContainers have to be applied manually. The scenario is like this:
We make a change to a Deployment to remove or edit an initContainer.
If we use helm upgrade --install, it works as intended; however, when using argocd app sync, the modifications are not applied to the pods that were updated.
The manual process is for me to go into the Deployment and make the change that was supposed to occur by hand.
I am sure this is a simple thing on my side but I would appreciate any feedback.

How to manage automatic deployment to ECS using Terraform Cloud and CircleCI?

I have an ECS task which has 2 containers using 2 different images, both hosted in ECR. There are 2 GitHub repos for the two images (app and api), and a third repo for my IaC code (infra). I am managing my AWS infrastructure using Terraform Cloud. The ECS task definition is defined there using Cloudposse's ecs-alb-service-task, with the containers defined using ecs-container-definition. Presently I'm using latest as the image tag in the task definition defined in Terraform.
I am using CircleCI to build the Docker containers when I push changes to GitHub. I am tagging each image with latest and the variable ${CIRCLE_SHA1}. Both repos also update the task definition using the aws-ecs orb's deploy-service-update job, setting the tag used by each container image to the SHA1 (not latest). Example:
container-image-name-updates: "container=api,tag=${CIRCLE_SHA1}"
When I push code to the repo for e.g. api, a new version of the task definition is created, the service's version is updated, and the existing task is restarted using the new version. So far so good.
The problem is that when I update the infrastructure with Terraform, the service isn't behaving as I would expect. The ecs-alb-service-task has a boolean called ignore_changes_task_definition, which is true by default.
When I leave it as true, Terraform Cloud successfully creates a new version whenever I Apply changes to the task definition. (A recent example was to update environment variables.) BUT it doesn't update the version used by the service, so the service carries on using the old version. Even if I stop a task, it will respawn using the old version. I have to manually go in and use the Update flow, or push changes to one of the code repos; then CircleCI will create yet another version of the task definition and update the service.
If I instead set this to false, Terraform Cloud will undo the changes to the service performed by CircleCI. It will reset the task definition version to the last version it created itself!
So I have three questions:
How can I get Terraform to play nice with the task definitions created by CircleCI, while also updating the service itself if I ever change it via Terraform?
Is it a problem to be making changes to the task definition from THREE different places?
Is it a problem that the image tag is latest in Terraform (because I don't know what the SHA1 is)?
I'd really appreciate some guidance on how to properly set up this CI flow. I have found next to nothing online about how to use Terraform Cloud with CI products.
I have learned a bit more about this problem. It seems like the right solution is to use a CircleCI workflow to manage Terraform Cloud, instead of having the two services effectively competing with each other. By default Terraform Cloud will expect you to link a repo with it and it will auto-plan every time you push. But you can turn that off and use the terraform orb instead to run plan/apply via CircleCI.
You would still leave ignore_changes_task_definition set to true. Instead, you'd add another step to the workflow after the terraform/apply step has made the change. This would be aws-ecs/run-task, which should relaunch the service using the most recent task definition, which was (possibly) just created by the previous step. (See the task-definition parameter.)
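If you would rather not use the orb step, the plain-CLI equivalent of that post-apply step is roughly the sketch below (cluster, service, and family names are invented; passing just the family name to --task-definition makes ECS pick the newest ACTIVE revision of that family, i.e. the one Terraform may have just registered):
# After terraform apply has (possibly) registered a new task definition
# revision, point the service at the latest revision and roll the tasks.
aws ecs update-service \
  --cluster my-cluster \
  --service api \
  --task-definition api-task \
  --force-new-deployment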
I have decided that this isn't worth the effort for me, at least not at this time. The conflict between Terraform Cloud and CircleCI is annoying, but isn't that acute.

Application information missing in Spinnaker after re-adding GKE accounts - using spinnaker-for-gke

I am using a Spinnaker implementation set up on GCP using the spinnaker-for-gcp tools. My initial setup worked fine. However, we recently had to re-configure our GKE clusters (independently of Spinnaker). Consequently I deleted and re-added our gke-accounts. After doing that the Spinnaker UI appears to show the existing GKE-based applications but if I click on any of them there are no clusters or load balancers listed anymore! Here are the spinnaker-for-gcp commands that I executed:
$ hal config provider kubernetes account delete company-prod-acct
$ hal config provider kubernetes account delete company-dev-acct
$ ./add_gke_account.sh # for gke_company_us-central1_company-prod
$ ./add_gke_account.sh # for gke_company_us-west1-a_company-dev
$ ./push_and_apply.sh
When the above didn't work I did an experiment where I deleted the two accounts and added an account with a different name (but the same GKE cluster) and ran push_and_apply. As before, the output messages seemed to indicate that everything worked, but the Spinnaker UI continued to show all the old account names, despite the fact that I had deleted them and added new ones (which did not show up). And, as before, no details could be seen for any of the applications. Also note that hal config provider kubernetes account list did show the new account name and did not show the old ones.
Any ideas for what I can do, other than completely recreating our Spinnaker installation? Is there anything in particular that I should look for in the Spinnaker logs in GCP to provide more information?
Thanks in advance.
-Mark
The problem turned out to be that the data that was in my .kube/config file in Cloud Shell was obsolete. Removing that file, recreating it (via the appropriate kubectl commands) and then running the commands mentioned in my original description fixed the problem.
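In shell terms the recovery looked roughly like this; the cluster names and locations are inferred from the account names above, and gcloud container clusters get-credentials is assumed here as the way to repopulate the kubeconfig:
# Remove the stale kubeconfig in Cloud Shell and regenerate credentials
# for both clusters before re-adding the accounts.
rm ~/.kube/config
gcloud container clusters get-credentials company-prod --region us-central1
gcloud container clusters get-credentials company-dev --zone us-west1-a
# Re-run the spinnaker-for-gcp scripts as before.
./add_gke_account.sh   # for gke_company_us-central1_company-prod
./add_gke_account.sh   # for gke_company_us-west1-a_company-dev
./push_and_apply.sh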
Note, though, that it took a lot of shell-script and GCP-log reading by our team to figure out the problem. Ultimately, it would have been nice if the add_gke_account.sh or push_and_apply.sh scripts could have detected the issue, presumably by verifying that the expected changes did, in fact, occur correctly in the running Spinnaker.

Cloud Run deployment using image from last revision

We need to apply labels to multiple Cloud Run services using the API method below:
https://cloud.google.com/run/docs/reference/rest/v1/namespaces.services/replaceService
We are looking for options to apply labels using the API without deploying a new image from Container Registry. We understand that there will be a deployment and a revision change while applying labels, but during that deployment it should not pull a new image from Container Registry; it should use the image from the last revision instead. Is there any configuration parameter in Cloud Run to prevent new images from being pulled while applying labels using the API or gcloud run services update SERVICE --update-labels KEY=VALUE?
The principle of Cloud Run (and Knative, because the behavior is the same) is that the revision is immutable. Thus, if you change something in it, a new revision is created. You can't fake it!
So, the solution is to not use the latest tag of your image, but the SHA of it.
# the latest
gcr.io/PROJECT_ID/myImage
gcr.io/PROJECT_ID/myImage:latest
# A specific version, pinned by digest
gcr.io/PROJECT_ID/myImage@sha256:123465465dfsqfsdf
Of course, you have to update your YAML for this.
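As a gcloud sketch (service name, region, and label are illustrative, and the exact flags should be checked against your gcloud version): resolve the digest that latest currently points to, then update the labels while pinning the image to that digest.
# Resolve the digest behind the "latest" tag. Note this is whatever "latest"
# points to right now, which may be newer than your last deployed revision.
IMAGE=$(gcloud container images describe gcr.io/PROJECT_ID/myimage:latest \
  --format='value(image_summary.fully_qualified_digest)')
# Apply the labels while explicitly pinning the image by digest, so the new
# revision cannot silently pick up a different "latest" image.
gcloud run services update my-service \
  --region us-central1 \
  --image "$IMAGE" \
  --update-labels team=payments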