Istio with AWS ECS

My company is using AWS ECS as its container orchestration service. From Istio's documentation I understand that it works primarily with Kubernetes. Does Istio also work with ECS?

Istio is not integrated with ECS as of now. What I would suggest is to use Service Discovery for ECS instead, so your microservices can connect to each other without hard-coding any task IPs. With service discovery you get a Route 53 private hosted zone that can be used to connect to the other microservices.
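To make that concrete, here is a rough sketch of wiring up ECS Service Discovery with the AWS CLI. All names, IDs, and ARNs below are placeholders; each call returns the real ID/ARN you need for the next one.

```shell
# 1. Create a private DNS namespace (backed by a Route 53 private hosted zone).
aws servicediscovery create-private-dns-namespace \
  --name internal.example.local \
  --vpc vpc-0123456789abcdef0

# 2. Register a discovery service in that namespace
#    (use the namespace ID returned by the previous call).
aws servicediscovery create-service \
  --name orders \
  --dns-config "NamespaceId=ns-EXAMPLE,DnsRecords=[{Type=A,TTL=60}]"

# 3. Point the ECS service at that registry
#    (use the service registry ARN returned by the previous call).
aws ecs create-service \
  --cluster my-cluster \
  --service-name orders \
  --task-definition orders:1 \
  --desired-count 2 \
  --service-registries "registryArn=arn:aws:servicediscovery:REGION:ACCOUNT:service/srv-EXAMPLE"

# Other tasks in the VPC can now reach the service at
# orders.internal.example.local instead of a hard-coded task IP.
```
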

Related

AWS EKS Consul - Ingress ALB and downstream

We are designing a new cluster for our application. We are required to use AWS EKS and Consul. We have the following questions:
1) Is it possible to set an AWS ALB ingress (Application load balancing on Amazon EKS - Amazon EKS) as a downstream from Consul, so I can manage it in the rules?
In our local tests we used an nginx ingress and it worked perfectly, but on EKS the nginx ingress provisions Classic Load Balancers, and these will be deprecated on August 15, 2022 (Elastic Load Balancing migrate-classic-load-balancer.html).
Obviously we can’t create a new project with something that is going to be deprecated so soon.
2) Is ingress-gateway a replacement? Is it possible to create an ingress-gateway using the ALB ingress controller from EKS? In that case too, the ingress-gateway uses a Classic Load Balancer on AWS, so we have the same deprecation problem.
3) Following this guide: Deploy Consul on Amazon Elastic Kubernetes Service (EKS) | Consul - HashiCorp Learn, I see that no type of ingress controller is taken into account. So does it make sense to control external access to services from Consul, or would ingress control suffice?
Thank you very much!
Any advice or documentation will be appreciated.
Cheers!

How do I achieve multicluster pod to pod communication in kubernetes?

I have two kubernetes clusters, one in Google Cloud Platform over GKE and one in Amazon Web Services built using Kops.
How do I implement multi-cluster communication between a pod in the AWS Kops cluster and a pod in GKE?
My Kops cluster uses flannel-vxlan mode of networking, hence there are no routes in the route table on the AWS side. So creating a VPC level VPN tunnel is not helpful. I need to achieve a VPN tunnel between the Kubernetes clusters and manage the routes.
Please advise me as to how I can achieve this.

AWS & Azure Hybrid Cloud Setup - is this configuration at all possible (Azure Load Balancer -> AWS VM)?

We have all of our cloud assets currently inside Azure, including a Service Fabric cluster containing many applications and services which communicate with Azure VMs through Azure Load Balancers. The VMs have both public and private IPs, and the Load Balancers' frontend IP configurations point to the private IPs of the VMs.
What I need to do is move my VMs to AWS; Service Fabric has to stay put on Azure, though. I don't know if this is possible or not. The Service Fabric services communicate with the Azure VMs through the Load Balancers using the VMs' private IP addresses. So the only ways I could see to achieve this are:
Keep the load balancers in Azure and direct the traffic from them to AWS VM's.
Point Azure Service Fabric to AWS load balancers.
I don't know if either of the above are technologically possible.
For #1, if I used Azure's load balancing, I believe the load balancer frontend IP config would have to use the public IP of the AWS VM, right? Is that not less secure? If I set it up to go through a VPN (if even possible), is that as secure as using internal private IPs as in the current load balancer config?
For #2, again, not sure if this is technologically achievable - can we even have Service Fabric services "talk" to AWS load balancers? If so, what is the most secure way to achieve this?
I'm not new to the cloud engineering game, but very new to the idea of using two cloud services as a hybrid solution. Any thoughts would be appreciated.
As far as I know, creating a multi-region / multi-datacenter cluster in Service Fabric is possible.
Here is a brief list of requirements to get an initial idea of how this would work, and here is a sample (not approved by Microsoft) with a cross-region Service Fabric cluster configuration. I know these are different regions in Azure, not different cloud providers, but the sample can be of use to see how some of the things are configured.
Hope this helps.
Based on the details provided in the comments of your own question:
SF is cloud agnostic; you could deploy your entire cluster without any dependencies on Azure at all.
The cluster you see in your Azure portal is just an Azure resource screen used to describe the details of your cluster.
You are better off creating the entire cluster in AWS than taking the requested approach, because in the end the only thing left in Azure would be this Azure resource screen.
Extending Oleg's answer, "creating a multi-region / multi-datacenter cluster in Service Fabric is possible," I would add that it is also possible to create an Azure-agnostic cluster that you can host on AWS, Google Cloud, or on premises.
The one detail that is not made clear is that any option not hosted in Azure requires an extra level of management, because you have to manage the resources (VMs, load balancers, auto scaling, OS updates, and so on) yourself to keep the cluster updated and running.
Also, multi-region and multi-zone clusters were left aside in the SF roadmap for a long time because they are very complex to implement; this is why they avoid recommending them, but it is possible.
If you want to go the AWS route, I would point you to this tutorial: Create AWS infrastructure to host a Service Fabric cluster
It is the first of a 4-part tutorial with guidance on how to set up an SF cluster on AWS infrastructure.
Regarding the other resources hosted on Azure, you can still access them from AWS without any problems.

Are Kubernetes Ingress objects deployed in cluster

When a Kubernetes service is exposed via an Ingress object, is the load balancer "physically" deployed in the cluster, i.e. as some controller pod running on the cluster nodes, or is it just another managed service provisioned by the given cloud provider?
Are there cloud-provider-specific differences? Does the above hold for both Google Kubernetes Engine and Amazon Web Services?
By default, a Kubernetes cluster has no ingress controller at all. This means that you need to deploy one yourself if you are on premises.
Some cloud providers do provide a default ingress controller in their Kubernetes offering, though, and this is the case for GKE. In their case the ingress controller is provided "as a service", but I am unsure about where exactly it is deployed.
Talking about AWS: if you deploy a cluster using kops you are on your own (you need to deploy an ingress controller yourself), but other deployment options on AWS may include an ingress controller.
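A quick way to see where you stand, sketched below. The label selector assumes the community NGINX ingress controller's standard labels, and the manifest URL/version is an example; check the ingress-nginx project for the current one.

```shell
# Check whether an ingress controller is already running in the cluster.
# (On GKE the managed GCE controller runs on the master, so it will NOT
# show up here even though Ingress objects work.)
kubectl get pods --all-namespaces \
  -l app.kubernetes.io/name=ingress-nginx

# On a self-managed cluster (e.g. kops on AWS), deploy one yourself, for
# example the community NGINX ingress controller (example URL/version):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.4/deploy/static/provider/aws/deploy.yaml
```
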
I would like to clarify something concerning the Google ingress controller, starting from its definition:
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
First of all, if you want to understand its behaviour better, I suggest reading the official Kubernetes GitHub description of this resource.
In particular, notice that:
It is a Daemon
It is deployed in a pod
It is in kube-system namespace
It is hidden to the customer
However, you will not be able to "see" this resource, for example by running:
kubectl get all --all-namespaces
because it is running on the master and is not shown to the customer, since it is a managed resource considered essential for the operation of the platform itself. As stated in the official documentation:
GCE/Google Kubernetes Engine deploys an ingress controller on the master
Note that the master of any Google Cloud Kubernetes cluster is not accessible to the user and is completely managed.
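Even though the controller itself is hidden, you drive it like any other ingress controller: create an Ingress object and the master-side controller provisions the load balancer for you. A minimal sketch, where all names and ports are placeholders and `demo-service` is assumed to be an existing Service of type NodePort:

```shell
# Apply a minimal Ingress; on GKE the hidden controller on the master
# picks it up and provisions an HTTP(S) load balancer.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"   # the GKE-managed controller
spec:
  defaultBackend:
    service:
      name: demo-service   # existing NodePort service (placeholder)
      port:
        number: 80
EOF

# The external IP appears once the load balancer is provisioned:
kubectl get ingress demo-ingress
```
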
I will answer with respect to Google Cloud.
Yes: every time you deploy a new Ingress resource, a load balancer is created, which you can view from the section:
GCP Console --> Network services --> Load balancing
Clicking on the respective load balancer ID gives you all the details, for example the external IP, the backend service, etc.
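Besides the console, the provisioned resources can be listed from the CLI as well; the project name below is a placeholder:

```shell
# List the forwarding rules and backend services the ingress created.
gcloud compute forwarding-rules list --project my-project
gcloud compute backend-services list --project my-project
```
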

Google Container Engine Clusters in different regions with cloud load balancer

Is it possible to run one Google Container Engine cluster in the EU and one in the US, and load balance between the apps running on these clusters?
Google Cloud HTTP(S) Load Balancing, TCP Proxy and SSL Proxy support cross-region load balancing. You can point it at multiple different GKE clusters by creating a backend service that forwards traffic to the instance groups for your node pools, and sends traffic on a NodePort for your service.
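The "sends traffic on a NodePort" part means each cluster exposes the app on the same fixed node port, so a single cross-region backend service can target the node instance groups of both clusters. A sketch of such a Service, with example names and ports:

```shell
# Expose the app on a fixed NodePort; apply the same manifest in every
# cluster so the cross-region backend service can use one port everywhere.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web            # example pod label
  ports:
  - port: 80
    targetPort: 8080    # example container port
    nodePort: 30080     # same fixed port in every cluster
EOF
```
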
However it would be preferable to create the LB automatically, like Kubernetes does for an Ingress. One way to do this is with Cluster Federation, which has support for Federated Ingress.
Try kubemci for help in getting this set up. Note that GKE does not currently support or recommend Kubernetes cluster federation.
From their docs:
kubemci allows users to manage multicluster ingresses without having to enroll all the clusters in a federation first. This relieves them of the overhead of managing a federation control plane in exchange for having to run the kubemci command explicitly each time they want to add or remove a cluster.
Also since kubemci creates GCE resources (backend services, health checks, forwarding rules, etc) itself, it does not have the same problem of ingress controllers in each cluster competing with each other to program similar resources.
See https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress
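For reference, a typical kubemci invocation looks roughly like the following; the resource name, project, and file paths are placeholders, and the flags are from memory, so check the repo's README for the authoritative usage:

```shell
# Create a multicluster ingress across all clusters listed in the
# given kubeconfig (one context per cluster).
kubemci create my-mci \
  --ingress=ingress.yaml \
  --gcp-project=my-project \
  --kubeconfig=clusters.kubeconfig

# Inspect the status of the multicluster ingress:
kubemci get-status my-mci --gcp-project=my-project

# Tear it down (and the GCE resources it created) when done:
kubemci delete my-mci \
  --ingress=ingress.yaml \
  --gcp-project=my-project \
  --kubeconfig=clusters.kubeconfig
```
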