How to configure SNI passthrough for Istio Egress - istio

I'm trying to follow the instructions described at https://istio.io/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#setup-egress-gateway-with-sni-proxy but the actual command
cat <<EOF | istioctl manifest generate --set values.global.istioNamespace=istio-system -f - > ./istio-egressgateway-with-sni-proxy.yaml
.
.
.
fails. I opened an istio issue, but there is no concrete workaround so far: https://github.com/istio/istio/issues/21379

As I mentioned in the comments:
Based on this istio github issue, I would say it's currently only possible to do through helm, and it should be possible via istioctl in version 1.5. So the workaround for now is to use helm instead of istioctl, or to wait for the 1.5 release, which might actually fix this.
and what @Vinay B added:
The workaround right now, as @jt97 suggested, is to use helm 2.x to generate the yaml.
Refer to https://archive.istio.io/v1.2/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#setup-egress-gateway-with-sni-proxy as an example.
is actually the only workaround for now.
If you're looking for information on when this is going to be available via istioctl, follow this github issue; it is currently open and added to the 1.5 milestone, so there is a chance it will be available when 1.5 comes out, which is March 5th.

The workaround right now, as @jt97 suggested, is to use helm 2.x to generate the yaml.
Refer to https://archive.istio.io/v1.2/docs/tasks/traffic-management/egress/wildcard-egress-hosts/#setup-egress-gateway-with-sni-proxy as an example
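To make that more concrete, a rough sketch of the helm 2.x approach from the archived 1.2 docs is shown below. Treat it as an outline only: the exact list of -x templates and the full SNI-proxy values block must be copied from that archive page, and the chart path assumes you run it from an extracted Istio release directory.
# Sketch only, adapted from the archived 1.2 docs; copy the complete values block from that page
cat <<EOF | helm template install/kubernetes/helm/istio \
  --name istio-egressgateway-with-sni-proxy \
  --namespace istio-system \
  -x charts/gateways/templates/deployment.yaml \
  -x charts/gateways/templates/service.yaml \
  -x charts/gateways/templates/serviceaccount.yaml \
  --set global.istioNamespace=istio-system \
  -f - > ./istio-egressgateway-with-sni-proxy.yaml
gateways:
  enabled: true
  istio-ingressgateway:
    enabled: false
  istio-egressgateway:
    enabled: false
  istio-egressgateway-with-sni-proxy:
    enabled: true
    # ... the SNI proxy configuration from the archived docs goes here ...
EOF
kubectl apply -f ./istio-egressgateway-with-sni-proxy.yaml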

Related

Migrate to updated APIs

I'm getting an error from GKE telling me to migrate to updated APIs, even though I'm not using the API in question, /apis/extensions/v1beta1/ingresses.
I ran the command kubectl get deployment [mydeployment] -o yaml and did not find the API in question.
It seems it is an IngressList that calls the old API. To check, you can use the following command, which will give you the entire ingress info.
kubectl get --raw /apis/extensions/v1beta1/ingresses | jq
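If you want to see exactly which objects are behind the warning, the same raw call can be filtered down to namespace/name pairs (this variation assumes jq is installed):
# List just the namespace/name of Ingress objects still served via the deprecated group
kubectl get --raw /apis/extensions/v1beta1/ingresses | jq -r '.items[] | .metadata.namespace + "/" + .metadata.name'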
I have the same issue, but I have upgraded the node version from 1.21 to 1.22.

How to upgrade CDK bootstrapping?

I have an environment which is already bootstrapped, and bootstrapping again (with CDK 1.106.1) doesn't seem to do anything:
$ cdk bootstrap aws://unknown-account/ap-southeast-2
'@aws-cdk/core:newStyleStackSynthesis' context set, using new-style bootstrapping
[…]
⏳ Bootstrapping environment aws://unknown-account/ap-southeast-2...
Trusted accounts: (none)
Using default execution policy of 'arn:aws:iam::aws:policy/AdministratorAccess'. Pass '--cloudformation-execution-policies' to customize.
However, the very next command warns about the bootstrap stack being too old:
$ cdk diff
[…]
Other Changes
[+] Unknown Rules: {"CheckBootstrapVersion":{"Assertions":[{"Assert":{"Fn::Not":[{"Fn::Contains":[["1","2","3"],{"Ref":"BootstrapVersion"}]}]},"AssertDescription":"CDK bootstrap stack version 4 required. Please run 'cdk bootstrap' with a recent version of the CDK CLI."}]}}
What gives? I'm already running bootstrap with the latest CDK version. How do I upgrade the bootstrap version?
I've now deleted the "CDKToolkit" stack and re-bootstrapped successfully, but I'm still getting the same warning. What gives? I'm clearly running cdk bootstrap with a recent version of CDK.
I've now filed a CDK issue for this.
Related project issue; build.
Answered by @rix0rrr:
Nothing is actually wrong. The cdk diff is telling you about a Rule that got added to the template, but it doesn't actually know what the Rule means and so is printing it in a confusing way.
The diff will disappear after your next deployment.
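If you want to verify which bootstrap version is actually deployed, you can inspect the CDKToolkit stack outputs or the SSM parameter that the modern bootstrap writes. The commands below are a sketch that assumes the default stack name CDKToolkit and the default qualifier hnb659fds:
# Inspect the deployed bootstrap stack version (default stack name and qualifier assumed)
aws cloudformation describe-stacks --stack-name CDKToolkit --query "Stacks[0].Outputs" --output table
aws ssm get-parameter --name /cdk-bootstrap/hnb659fds/version --query Parameter.Value --output text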
I came to this page because I faced an issue related to bootstrap being considered old.
"--cloudformation-execution-policies can only be passed for the modern
bootstrap experience."
The command below, from the article https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html, was giving me an error. It turned out that export (Linux/macOS) and set (Windows) were being mixed in my case.
export CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap aws://315997497220/us-east-1 --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess aws://315997497220/us-east-1
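For completeness, on Windows the first line uses set instead of export; a sketch of the cmd.exe equivalent (same account/region placeholders as above) would be:
rem Windows cmd equivalent: set the variable instead of exporting it
set CDK_NEW_BOOTSTRAP=1
npx cdk bootstrap aws://315997497220/us-east-1 --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess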
Bootstrapping using AWS profiles also works:
export CDK_NEW_BOOTSTRAP=1
cdk --profile=fortune-dev bootstrap

Elastic Kubernetes Service AWS Deployment process to avoid down time

It's been a month since I started working on AWS EKS, and up till now I have successfully deployed my code.
The steps which I follow for deployment are given below:
Create image from docker terminal.
Tag and push to ECR AWS.
Create the deployment "project.json" and service file "project-svc.json".
Save the above files in the "kubectl/bin" path and deploy them with the following commands:
"kubectl apply -f projectname.json" and "kubectl apply -f projectname-svc.json".
So if I want to deploy the same project again with a change, I push the new image to ECR, delete the existing deployment using "kubectl delete -f projectname.json" (without deleting the existing service), and then deploy it again with "kubectl apply -f projectname.json".
Now I'm confused: after I delete the existing deployment there is downtime until I apply or create the deployment again. How do I avoid this? I don't want the downtime; that is actually the reason why I started to use EKS.
One more thing: the deployment process is a bit long, too. I know I'm missing something; can anybody guide me properly, please?
The project is on .NET Core, and if there is a simpler way to do the deployment using Visual Studio, please guide me on that as well.
Thank You in advance!
There is actually no need to delete your deployment. You just need to update the desired state (the deployment configuration) and let K8s do its magic and apply the needed changes, such as deploying a new version of your container.
If you have a single instance of your container, you will experience a short downtime while the changes are applied. If your application runs multiple replicas (HA), you can enjoy the rolling update feature.
Start by reading the official Kubernetes documentation on Performing a Rolling Update.
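As a minimal sketch of what that means in practice (the names and image reference below are placeholders, not taken from the question), a Deployment with several replicas and an explicit rolling-update strategy could look like this:
# Sketch: a Deployment that replaces pods gradually instead of requiring delete/apply
apiVersion: apps/v1
kind: Deployment
metadata:
  name: projectname
spec:
  replicas: 3                # keep several pods so old ones serve traffic during the rollout
  selector:
    matchLabels:
      app: projectname
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never remove a pod before its replacement is ready
      maxSurge: 1            # start one extra pod at a time
  template:
    metadata:
      labels:
        app: projectname
    spec:
      containers:
      - name: projectname
        image: <account>.dkr.ecr.<region>.amazonaws.com/projectname:new-tag
With a spec like this, pushing a new tag to ECR and running kubectl apply -f projectname.json (or kubectl set image, as described in the next answer) rolls pods over one by one instead of deleting the whole deployment.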
You only need to use delete/apply if you are changing the ConfigMap attached to the Deployment (and only if you have one).
If the only change you are making is the "image" of the deployment, you must use the "set image" command.
kubectl lets you change the actual deployment image and does the rolling update all by itself, and with 3+ pods you have the minimum chance of downtime.
Even better, if you use the --record flag you can roll back to your previous image with no effort, because it keeps track of the changes.
You also have the possibility to specify the context, with no need to jump between contexts.
You can go like this:
kubectl set image deployment DEPLOYMENT_NAME DEPLOYMENT_NAME=IMAGE_NAME --record -n NAMESPACE
Or, specifying the cluster:
kubectl set image deployment DEPLOYMENT_NAME DEPLOYMENT_NAME=IMAGE_NAME_ECR -n NAMESPACE --cluster EKS_CLUSTER_NPROD --user EKS_CLUSTER --record
For example:
kubectl set image deployment nginx-dep nginx-dep=ecr12345/nginx:latest -n nginx --cluster eu-central-123-prod --user eu-central-123-prod --record
The --record flag is what lets you track all the changes; if you want to roll back, just do:
kubectl rollout undo deployment.v1.apps/nginx-dep
More documentation about it here:
Updating a deployment
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment
Roll Back Deployment
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-back-a-deployment
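To watch a rollout finish and to see the revisions recorded by --record before undoing one, the standard rollout subcommands can be used (same hypothetical nginx-dep example as above):
# Wait for the rolling update to complete, then list the recorded revisions
kubectl rollout status deployment/nginx-dep -n nginx
kubectl rollout history deployment/nginx-dep -n nginx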

Changed API-server manifest file, but changes aren't implemented

I am adding the flag --cloud-provider=aws to /etc/kubernetes/manifests/kube-apiserver.yaml and kube-controller-manager.yaml. When I describe the pods I can see that they pick up the change and are recreated; however, the flags have not changed.
Running on CentOS 7 machines in AWS. I have tried restarting the kubelet service and tried using kubectl apply.
There are a couple of ways to achieve this, but it seems like you have chosen the DynamicKubeletConfig way without actually configuring DynamicKubeletConfig! To make live changes to your cluster you need to enable DynamicKubeletConfig first, then follow the steps here.
Another Way [Ref]
TL;DR (do this at your own risk!)
Step 1: kubeadm config view > kubeadm-config.yaml
Step 2: edit kubeadm-config.yaml to add your changes [Reference for flags] (for the --cloud-provider=aws case, see the sketch after these steps)
Step 3: kubeadm upgrade apply --config kubeadm-config.yaml
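For the --cloud-provider=aws flag from the question, the edit in step 2 would roughly amount to adding extraArgs under the ClusterConfiguration, something like the sketch below (the apiVersion depends on your kubeadm release, so treat this as an outline):
# Sketch of the relevant part of kubeadm-config.yaml (apiVersion varies by kubeadm version)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws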

Istio default ingress-gateway got deleted

I am doing chaos testing on all the Istio core components (pilot, mixer, citadel) and on the default objects/resources. I am manually deleting the components and documenting the behavior, which will help when something actually breaks in production.
I have deleted the ingress-gateway service. That also deleted the egress pods, which I didn't expect.
Since I am going to delete all the default objects one by one, is there a better or cleaner way to recreate the core objects? For example, how would I recreate the ingress and egress services?
In my opinion the best way to re-create lost/deleted components of Istio is to do it with helm (the package manager for Kubernetes).
helm upgrade <your-release-name> <repo-name>/<chart-name> --reuse-values --force
You can also keep track of the changes to your Istio installation (aka the Istio release), and simply restore it to its last working version using the following commands:
helm history <release_name>
helm rollback --force [RELEASE] [REVISION]
Alternatively, you can always go back to the Istio installation directory and re-apply the piece of the manifest corresponding to the deleted object; for example, for Istio v1.1.1 the istio-ingressgateway Service object is declared inside 'istio-1.1.1/install/kubernetes/istio-demo.yaml'. Additionally, these manifest files can be generated by the helm template command directly from the source code repository.
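As a rough example of that last option, rendering just the gateway templates from the chart bundled in the release archive and re-applying them could look like the sketch below (chart layout as in Istio 1.1.x; the list of -x templates is an assumption and may need adjusting for your version):
# Sketch: regenerate the default gateway manifests from the bundled helm chart and re-apply them
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  -x charts/gateways/templates/deployment.yaml \
  -x charts/gateways/templates/service.yaml \
  | kubectl apply -f -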