There is an article https://istio.io/latest/blog/2018/aws-nlb/ from 2018 which says that using an AWS NLB with Istio is an alpha feature and is not recommended for production.
How do I find out the current status? On the official Istio feature stages page I do not see NLBs mentioned directly:
https://istio.io/latest/docs/releases/feature-stages/
I'm facing an issue with the Service annotation that enables an ALPN policy on an AWS load balancer.
I'm testing an application in production, managed by EKS. I need to enable a Network Load Balancer (NLB) on AWS to manage some ingress rules (TLS cert and so on...).
Among the available annotations is:
service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
I think I need this to enable ALPN in the TLS handshake.
The issue is that it does not apply to my load balancer (other annotations work), which I can confirm by accessing the AWS dashboard or by executing curl -s -vv https://my.example.com. To enable this ALPN policy I have to apply the change manually, e.g. through the dashboard.
What am I missing? I wonder whether that annotation is only available with the AWS Load Balancer Controller and not with the base Service for NLBs.
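For reference, here is a minimal sketch of the Service I am applying (the name, selector, ports, and certificate ARN are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-service                     # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...   # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # the annotation that does not take effect:
    service.beta.kubernetes.io/aws-load-balancer-alpn-policy: HTTP2Preferred
spec:
  type: LoadBalancer
  selector:
    app: my-app                        # placeholder
  ports:
    - name: https
      port: 443
      targetPort: 8443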
EDIT: I found some GitHub issues requesting this feature in legacy mode, without using a third-party controller; here is a comment that summarizes them all. Since this seems to be an unavailable feature (for now), how can I achieve the same configuration using, for example, Terraform? Do I need to create the NLB first and then attach it to my Service?
I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this something I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file and applied as a custom resource. Regarding the host name defined in the virtual service's rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
On the question of the Tyk gateway not being able to communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running and the other pod tries to connect to the database, no connection can be established. Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.
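If upgrading is an option, one common mitigation (available since roughly Istio 1.7; the Deployment name, labels, and image below are placeholders, so treat this as a sketch rather than a verified fix for Tyk's charts) is to hold the application container until the proxy is ready:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tyk-gateway                   # placeholder; use your actual Deployment name
spec:
  selector:
    matchLabels:
      app: tyk-gateway
  template:
    metadata:
      labels:
        app: tyk-gateway
      annotations:
        # start the Envoy sidecar first and delay the app container until it is ready
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
    spec:
      containers:
        - name: gateway
          image: tykio/tyk-gateway:v3.0   # placeholder image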
On the question about virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
On the question about the hostname, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
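Putting that together for the Tyk example above: the hosts field should match the Kubernetes Service's name (its cluster DNS name), not the app label. A minimal sketch, assuming the dashboard Service is named dashboard-svc-tyk-pro in a tyk namespace and listens on port 3000 (both assumptions; adjust to your setup):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk                      # assumed namespace
spec:
  hosts:
    # the user-addressable destination: the Service's cluster DNS name
    - dashboard-svc-tyk-pro.tyk.svc.cluster.local
  http:
    - route:
        - destination:
            host: dashboard-svc-tyk-pro.tyk.svc.cluster.local
            port:
              number: 3000            # assumed dashboard port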
For adding Istio on GKE to an existing cluster, please refer to this documentation.
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio as mentioned in the Istio documentation.
You can uninstall the add-on by following this document, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
I am also adding this document on installing Istio on GKE, which covers installing it on an existing cluster to quickly evaluate Istio.
I am running Kubernetes on AWS, and exposing services using a Service with type: LoadBalancer, which provisions an ELB. Is there any way to control the ELB cipher configuration with annotations on this service? I need to disable TLS 1.0 and 1.1.
I am aware that I can do this by hand, but I would like for Kubernetes to do this for me, otherwise I'll have to remember to do it again the next time a new ELB is provisioned (Kubernetes upgrade, config change, etc).
If I understood you right, you would like to adjust security policies directly from the Service YAML file.
From what I see, here you can find a list of all the annotations that are supported at the moment.
There is one called "aws-load-balancer-ssl-negotiation-policy". It looks like exactly the one you are looking for.
// ServiceAnnotationLoadBalancerSSLNegotiationPolicy is the annotation used on
// the service to specify a SSL negotiation settings for the HTTPS/SSL listeners
// of your load balancer. Defaults to AWS's default
const ServiceAnnotationLoadBalancerSSLNegotiationPolicy = "service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy"
The link to that file is listed in the official Kubernetes documentation.
Additionally, there is a predefined policy, ELBSecurityPolicy-TLS-1-2-2017-01, that allows only TLS 1.2 (with TLS 1.0 and 1.1 disabled).
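Putting the two together, a minimal sketch of such a Service (the certificate ARN, names, and ports are placeholders) could look like:

apiVersion: v1
kind: Service
metadata:
  name: my-service                   # placeholder
  annotations:
    # terminate TLS on the ELB; placeholder certificate ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # restrict the HTTPS listener to TLS 1.2 only
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-TLS-1-2-2017-01
spec:
  type: LoadBalancer
  selector:
    app: my-app                      # placeholder
  ports:
    - name: https
      port: 443
      targetPort: 8080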
Hope that helps.
You can use, for example, an annotation like:
alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
ALB Ingress Controller SSL Policy Annotations
This edits the security policy on the HTTPS listener of the load balancer.
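For completeness, a minimal sketch of an Ingress carrying that annotation (host, service name, and port are placeholders; note this applies to ALBs provisioned by the ALB Ingress Controller, not to a Service of type LoadBalancer):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    # security policy that disables TLS 1.0 and 1.1
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
spec:
  rules:
    - host: my.example.com           # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service     # placeholder backend
                port:
                  number: 80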
Scenario
Istio version 1.5.0 on top of EKS 1.14.
Enabled components:
Base
Pilot
NOTE: Istio 1.5.0 deprecates Mixer and moves to Telemetry v2, which happens inside the Envoy proxy sidecar.
I want to use Istio to support some metrics out of the box.
Here's the flow:
my computer -> Gateway -> Virtual Service A -> Virtual Service B
I made sure that:
K8s Service objects have the label app
K8s Deployment objects and their pod templates have the label app
I can run the flow just fine, which means the configurations are correct.
The problem is with telemetry.
istio_requests_total{connection_security_policy="unknown",destination_app="unknown",destination_canonical_revision="latest",destination_canonical_service="unknown",destination_principal="spiffe://cluster.local/ns/default/sa/default",destination_service="svcb.default.svc.cluster.local",destination_service_name="svcb.default.svc.cluster.local",destination_service_namespace="unknown",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",grpc_response_status="0",instance="10.2.55.80:15090",job="envoy-stats",namespace="default",pod_name="svca-77969dc86b-964p5",reporter="source",request_protocol="grpc",response_code="200",response_flags="-",source_app="svca",source_canonical_revision="latest",source_canonical_service="svca",source_principal="spiffe://cluster.local/ns/default/sa/default",source_version="unknown",source_workload="svca",source_workload_namespace="default"}
Question
Why are most destination-* labels unknown?
The official Istio mesh dashboard typically filters metrics by reporter=destination. Why do all of my istio_requests_total series have reporter=source?
Oh right, after much digging, here's the answer.
Istio supports proxying all TCP traffic by default, but in order to provide additional capabilities, such as routing and rich metrics, the protocol must be determined. This can be done automatically or explicitly specified.
I hadn't specified the port name in my Service resource. Once I did that, the problem was resolved.
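For reference, the fix amounted to giving the Service port a protocol-prefixed name so Istio can detect the protocol (the port number here is an assumption; use your own):

apiVersion: v1
kind: Service
metadata:
  name: svcb
  namespace: default
spec:
  selector:
    app: svcb
  ports:
    - name: grpc          # the protocol prefix is what Istio's detection looks at
      port: 8080          # assumed port; adjust to your service
      targetPort: 8080

With the port named grpc, the destination-* labels on istio_requests_total were populated and reporter=destination series appeared.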
I have two microservices communicating using gRPC. Both are Docker applications deployed on ECS. How do I configure them to use an AWS ALB? The documentation says the ALB supports HTTP/2; however, I can only see HTTP/1 settings.
My application has one gRPC port and one health-check API on port 8080. How do I configure that on the ALB?
I don't believe you can.
ALBs "support" HTTP2 but only in so far as they can accept HTTP2 and de-multiplex it before forwarding on HTTP1.
You can use AWS's newer "NLB" though that has other wrinkles.
More details of doing this https://blog.prefab.cloud/blog/grpc-aws-some-gotchas
As of 30th October 2020, it is now possible to do this, as end-to-end support for HTTP/2 has finally been added to ALBs.
Announcement about this: https://aws.amazon.com/about-aws/whats-new/2020/10/application-load-balancers-enable-grpc-workloads-end-to-end-http-2-support/
Check these blog posts to understand how to set it up on ECS:
Using Fargate Launch type: https://aws.amazon.com/blogs/opensource/containerize-and-deploy-a-grpc-application-on-aws-fargate/
Using EC2 Launch type: https://dev.to/chaitan94/deploying-a-grpc-service-in-ecs-with-the-ec2-launch-type-2aa