Can Istio service entry have destination rule to route to backup if primary is not available - istio

Can an Istio service entry for an Oracle cluster have a destination rule that routes to a backup cluster if the primary cluster is not available?
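For context, a rough sketch of the kind of ServiceEntry plus DestinationRule pairing being asked about here. All hostnames, ports, and thresholds are placeholders, not a verified failover recipe; Istio's outlier detection is the mechanism that would eject an unresponsive endpoint, and whether that gives true primary/backup behavior depends on your setup.

# Hypothetical sketch: a ServiceEntry listing both Oracle hosts, plus a
# DestinationRule with outlier detection to stop sending traffic to an
# endpoint that has become unreachable. Names and values are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: oracle-external
spec:
  hosts:
    - oracle.example.internal            # logical host used by the mesh
  ports:
    - number: 1521
      name: tcp-oracle
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS
  endpoints:
    - address: oracle-primary.example.internal
    - address: oracle-backup.example.internal
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: oracle-external
spec:
  host: oracle.example.internal
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5            # for TCP, connect failures count as errors
      interval: 30s
      baseEjectionTime: 5m
      maxEjectionPercent: 100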

Related

Kubernetes loadbalancer service vs cloud loadbalancer

In a Kubernetes configuration, for an externally exposed service component we use:
type: LoadBalancer
If we have a k8s cluster running inside a cloud provider like AWS, which provides its own load balancer, how does all this work then? Do we need to configure things so that one of these load balancers is not active?
AWS has now taken over the open source project: https://kubernetes-sigs.github.io/aws-load-balancer-controller
It works with EKS clusters (easiest) as well as non-EKS clusters (you need to install the AWS VPC CNI etc. to make IP target mode work, which is required if you have a peered VPC environment).
This is the official/native solution for managing AWS LB (aka ELBv2) resources (Application LB, Network LB) from Kubernetes; by contrast, the Kubernetes in-tree controller always reconciles Service objects with type: LoadBalancer.
Once configured correctly, the AWS LB controller will manage the following two types of LBs:
Application LB, via a Kubernetes Ingress object. It operates at L7 and provides HTTP-related features.
Network LB, via a Kubernetes Service object with the correct annotations (see the example manifest after this answer). It operates at L4 and provides fewer features, but claims MUCH higher throughput.
To my knowledge, this works best when used together with external-dns -- it automatically updates your Route53 records with your LB's A records, thus keeping the whole service discovery solution k8s-native.
Also, in general, you should avoid using the classic ELB, as it is marked as deprecated by AWS.
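As an illustration of the Service-with-annotations approach, a minimal sketch; the service name, selector, and ports are placeholders, and the exact annotation set depends on your controller version, so check the controller docs for your release.

# Hypothetical Service manifest for an NLB managed by the AWS Load Balancer
# Controller. Annotation names follow controller v2.2+; older versions use
# service.beta.kubernetes.io/aws-load-balancer-type: "nlb-ip" instead.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080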

AWS Glue Timeout: Creating External Schema In Redshift

I am trying to create an external schema, and my command is as follows. Of course, I have changed the names of the components/items to non-meaningful values just to hide my production names:
create external schema sb_external
from data catalog
database 'dev'
iam_role 'arn:aws:iam::490412345678:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift'
create external database if not exists;
The query is run against the Redshift database using the "psql" CLI from within an EC2 instance. The instance is in a private subnet, and the EC2 instance and the Redshift database are in two different VPCs joined by VPC peering. In the VPC where we have the EC2 instance, we have a Glue endpoint.
Even when I run the above query from the same VPC as the Redshift database, I still get the following error, despite having created an interface endpoint for Glue in that same VPC.
Failed to perform AWS request, curlError=Failed to connect to glue.eu-west-1.amazonaws.com port 443: Connection timed out
With or without the VPC endpoint, we get the same error.
Any help in this regard would be highly appreciated.
I have also faced the same issue and somehow I managed to resolve it.
This error occurs when you enable Enhanced VPC routing on your cluster.
By default, the Glue endpoint uses the default security group.
Since the error references "glue.eu-west-1.amazonaws.com", you need to enable DNS hostnames and DNS resolution for your VPC.
Also add an inbound rule for port 443 (HTTPS) to the default security group, with the Redshift cluster's security group as the source.
Listing a few links that helped me:
https://docs.aws.amazon.com/glue/latest/dg/vpc-interface-endpoints.html
https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#vpce-interface-limitations
https://docs.aws.amazon.com/redshift/latest/mgmt/spectrum-enhanced-vpc.html#spectrum-enhanced-vpc-considerations
"Access to AWS Glue or Amazon Athena
Redshift Spectrum accesses your data catalog in AWS Glue or Athena. Another option is to use a dedicated Hive metastore for your data catalog.
To enable access to AWS Glue or Athena, configure your VPC with an internet gateway or NAT gateway. Configure your VPC security groups to allow outbound traffic to the public endpoints for AWS Glue and Athena. Alternatively, you can configure an interface VPC endpoint for AWS Glue to access your AWS Glue Data Catalog. When you use a VPC interface endpoint, communication between your VPC and AWS Glue is conducted within the AWS network."
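As a sketch of the interface-endpoint option mentioned above, in CloudFormation; all IDs and the resource name are placeholders, and the region in ServiceName should match yours.

# Hypothetical CloudFormation sketch of an interface VPC endpoint for AWS Glue.
# IDs are placeholders; the security group must allow 443 from Redshift's
# security group, per the answer above.
Resources:
  GlueInterfaceEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: com.amazonaws.eu-west-1.glue
      VpcId: vpc-0123456789abcdef0
      SubnetIds:
        - subnet-0123456789abcdef0
      SecurityGroupIds:
        - sg-0123456789abcdef0
      PrivateDnsEnabled: true        # so glue.eu-west-1.amazonaws.com resolves to the endpoint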
A reason for this error message is having Enhanced VPC routing enabled on your Redshift cluster.
As per the documentation https://aws.amazon.com/premiumsupport/knowledge-center/redshift-enhanced-vpc-routing/, enabling Enhanced VPC routing might impact UNLOAD/COPY commands. Here you are trying to create an external schema, and one of the potential reasons for this error is having this configuration enabled.
If you are using Enhanced VPC:
Create VPC interface endpoints for S3, Glue and, if you use them, Lake Formation and Athena
Create a VPC gateway endpoint for S3
Ensure all endpoints allow ingress on 443 from the Redshift security group
Ensure Redshift has egress to the endpoints
Check that the route table has a route to the S3 gateway prefix list (not just an IP route to the S3 interface endpoint)
Check DNS in the VPC
The security group associated with the Redshift cluster needs egress configured to allow outbound traffic.
Example egress configuration:
from port: 0
to port: 0
protocol: -1 (all protocols)
CIDR IP: "0.0.0.0/0"
References
AWS::EC2::SecurityGroupEgress
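A minimal CloudFormation sketch of that egress rule, using the AWS::EC2::SecurityGroupEgress resource referenced above; the security group logical ID is a placeholder.

# Hypothetical sketch of the allow-all egress rule above.
Resources:
  RedshiftAllowAllEgress:
    Type: AWS::EC2::SecurityGroupEgress
    Properties:
      GroupId: !Ref RedshiftSecurityGroup   # assumed security group resource
      IpProtocol: "-1"                      # -1 = all protocols, all ports
      CidrIp: "0.0.0.0/0"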

Not able to create valid Route53 entries with Kubernetes Ingress in case of failover routing?

I created Kubernetes deployment scripts which create two different deployments with the associated services and Ingress objects.
The creation of all the Kubernetes objects (deployments, services and ingresses) is successful.
Regarding the Ingress, I am facing a problem creating Route53 failover routing entries in AWS.
Configurations used:
Primary Failover attributes:
external-dns.alpha.kubernetes.io/set-identifier: Failover
external-dns.alpha.kubernetes.io/aws-health-check-id: ""
external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
Secondary Failover attributes:
external-dns.alpha.kubernetes.io/set-identifier: Failover
external-dns.alpha.kubernetes.io/aws-failover: SECONDARY
The entries are created successfully in Route53 for the secondary failover, but no entries are created for the primary failover. I have tried all the possibilities I am aware of.
There is no error in the Ingress describe output either.
Please help me with any ideas or workable configurations to create failover routing in Route53 using a Kubernetes Ingress.
Any help/suggestion is much appreciated.
Reference Used - https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md
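For reference, roughly how the PRIMARY Ingress in this setup would be annotated. The names, hostname, and backend are assumed placeholders; the external-dns failover annotations mirror the values quoted above.

# Hypothetical PRIMARY Ingress for the setup described in the question.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-primary
  annotations:
    external-dns.alpha.kubernetes.io/set-identifier: Failover
    external-dns.alpha.kubernetes.io/aws-health-check-id: ""
    external-dns.alpha.kubernetes.io/aws-failover: PRIMARY
spec:
  rules:
    - host: app.example.com              # record name external-dns creates
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80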

Create nlb-ip loadbalancers in kubernetes created in AWS through Kops

I have a Kubernetes cluster created through the kOps tool, and I have a requirement to expose my service using a network load balancer whose target groups are IP based. I found the answer using the annotation mentioned on the site https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/service/nlb_ip_mode/.
This seems to work only when the cluster is created through EKS. Since I'm using the kOps tool, could you please help me install the AWS Load Balancer Controller, which is one of the requirements for creating nlb-ip load balancers?
If you want to use IP targets, not instance targets, you need to use a CNI that provisions VPC IPs per pod. Those are:
Cilium with ipam
Lyft VPC
AWS VPC
Then you need to install the AWS LB controller, which supports this mode for both NLB and ALB. I would wait until kOps 1.20, which will support installing this controller out of the box, including the various permissions that need setting.
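As a sketch of what that looks like once kOps 1.20 is available: the cluster spec enables a CNI that gives pods VPC IPs and turns on the controller addon. Field names are taken from the kOps cluster spec and may differ between releases, so verify them for your version.

# Hypothetical kOps cluster-spec fragment (kOps 1.20+). Uses the AWS VPC CNI so
# pods get VPC IPs (required for IP targets) and enables the built-in
# AWS Load Balancer Controller addon.
spec:
  networking:
    amazonvpc: {}
  certManager:
    enabled: true              # the LB controller addon relies on cert-manager in kOps
  awsLoadBalancerController:
    enabled: true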

How does automatic target registering happen in EC2 Target Groups for ECS Tasks?

I have an ECS Cluster with an ECS Service (Fargate) that specifies a Service Discovery Endpoint. I also have a Cloud Map Service set up with a domain and service name that matches the Service Discovery details entered for the ECS Service. Finally, there is an Application Load Balancer with a Target Group set up for IP targets, which initially has no Registered Targets (see full details below).
When I start a Task for the above ECS Service, the Task is automatically registered in the 'Registered Targets' for Target Group described above.
My question is: how does AWS know that I want the Tasks from the ECS Service to be automatically added to my ALB's Target Group? I don't see anything in the Target Group that connects it to the ECS Service or to the Cloud Map Service. Is there some other configuration that's achieving this?
What I am trying to do is create a new ALB with a new Target Group and route traffic from this ALB to the same ECS Service; however, the ECS Tasks are not automatically added as Registered Targets for this new Target Group. Is this possible to achieve?
ECS Cluster: MyCluster
ECS Service (Fargate):
Name: MyService
Service Discovery endpoint name: namespace.service-discovery-name
Application Load Balancer:
Name: my-alb
Listener: port 443 (SSL)
Rules: (1) if host = test.domain.com then forward to 'my-target-group'
(2)...
Target Group:
Name: my-target-group
Type: IP
Targets: (initially no registered targets specified. Eventually when a task is started for the above ECS Service a target is automatically registered here.)
Cloud Map:
Domain Name: namespace
Service Name: service-discovery-name
DNS Routing Policy: Multivalue answer routing
Record Type: A
Route 53:
Domain: namespace (Cloud Map Records)
Domain:
Name: mydomain.com
Record: task.mydomain.com -> ALB configured above
This is actually defined and managed within the ECS service when you create it: the service ensures that the tasks it launches are registered with the target group specified in the service definition.
Looking at the documentation, there does not appear to be any way to replace the target group; in fact, looking at the CloudFormation documentation for load balancers, it appears that any change to it would replace the service.
Therefore, to attach the service to the new load balancer you would need to create a new service. You can of course use the same task definition, which significantly reduces the amount of work to do. This new service would use your new target group instead.
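To illustrate where that link lives, a minimal CloudFormation sketch of an ECS service wired to a target group; the resource names, container name, port, and network IDs are placeholders.

# Hypothetical sketch: the LoadBalancers block of AWS::ECS::Service is what ties
# tasks to a target group, so a new service would point at the new target group here.
Resources:
  MyNewService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: MyCluster
      LaunchType: FARGATE
      TaskDefinition: !Ref MyTaskDefinition        # reuse the existing task definition
      DesiredCount: 2
      LoadBalancers:
        - TargetGroupArn: !Ref MyNewTargetGroup    # tasks auto-register here
          ContainerName: my-container
          ContainerPort: 8080
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-0123456789abcdef0
          SecurityGroups:
            - sg-0123456789abcdef0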