Once Istio injection is enabled, the pod is not able to connect to external endpoints. Based on the documentation, the suggestion is to use a ServiceEntry (similar to this: Pod cannot curl external website after adding istio egress gateway). But in my case I have N endpoints, and I don't have an IP list for those endpoints.
Is there a better way to address this issue without a ServiceEntry?
The options for controlling traffic to external endpoints are described here: https://istio.io/docs/tasks/traffic-management/egress/. It is either a ServiceEntry, or direct access.
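If you do want to keep outbound traffic restricted, note that a ServiceEntry can be declared by DNS name, so you don't need an IP list for your endpoints. A minimal sketch for a single HTTPS endpoint (api.example.com is a placeholder host):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com        # placeholder external hostname
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS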
If you don't want to control outbound traffic, you can configure Istio not to block outgoing traffic by issuing the following command:
helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set gateways.istio-ingressgateway.loadBalancerIP="x.x.x.x" --set global.proxy.includeIPRanges="0.0.0.0"
This lets all outbound traffic bypass the egress gateway.
I have some basic questions about Istio. I installed Istio for my Tyk API gateway. Then I found that simply installing Istio will cause all traffic between the Tyk pods to be blocked. Is this the default behaviour for Istio? The Tyk gateway cannot communicate with the Tyk dashboard.
When I rebuild my deployment without Istio, everything works fine.
I have also read that Istio can be configured with virtual services to perform traffic routing. Is this what I need to do for every default installation of Istio? Meaning, if I don't create any virtual services, will Istio block all traffic by default?
Secondly, I understand a virtual service is created as a YAML file applied as a CRD. Regarding the host name defined in the virtual service rules: in a default Kubernetes cluster implementation on Google Cloud, how do I find out the host name of my application?
Lastly, if I install Tyk first and later install Istio, and I have created the necessary label in Tyk's namespace for the proxy to be injected, can I just perform a rolling upgrade of my Tyk pods to have Istio start the injection?
For example, I have these labels in my Tyk dashboard service. Do I use the value called "app" in my virtual service YAML?
labels:
  app: dashboard-svc-tyk-pro
  app.kubernetes.io/managed-by: Helm
  chart: tyk-pro-0.8.1
  heritage: Helm
  release: tyk-pro
Sorry for all the basic questions!
For the question on why the Tyk gateway cannot communicate with the Tyk dashboard:
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's route table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet and the other pod tries to connect to the database, no connection can be established. (Example case: Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster.)
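Depending on the Istio version in use, one possible mitigation is to have Istio hold the application container until the sidecar is ready. A sketch of the per-pod annotation form, assuming an Istio release that supports holdApplicationUntilProxyStarts (placed in the Deployment's pod template):
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'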
For the question on virtual services:
Each virtual service consists of a set of routing rules that are evaluated in order, letting Istio match each given request to the virtual service to a specific real destination within the mesh.
By default, Istio configures the Envoy proxies to passthrough requests to unknown services. However, you can’t use Istio features to control the traffic to destinations that aren’t registered in the mesh.
For the question on the host name, refer to this documentation:
The hosts field lists the virtual service’s hosts - in other words, the user-addressable destination or destinations that these routing rules apply to. This is the address or addresses the client uses when sending requests to the service.
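Tying this back to your labels question: for an in-mesh service, the hosts value is normally the Kubernetes Service name (or its full DNS name), not the pod's app label. A minimal sketch, assuming the dashboard Service is literally named dashboard-svc-tyk-pro and lives in a namespace called tyk:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tyk-dashboard
  namespace: tyk
spec:
  hosts:
  - dashboard-svc-tyk-pro.tyk.svc.cluster.local   # assumed Service DNS name
  http:
  - route:
    - destination:
        host: dashboard-svc-tyk-pro.tyk.svc.cluster.local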
For adding Istio on GKE to an existing cluster, please refer to this documentation:
If you want to update a cluster with the add-on, you may need to first resize your cluster to ensure that you have enough resources for Istio. As when creating a new cluster, we suggest at least a 4-node cluster with the 2 vCPU machine type. If you have an existing application on the cluster, you can find out how to migrate it so it's managed by Istio, as mentioned in the Istio documentation.
You can uninstall the add-on by following the documentation, which includes shifting traffic away from the Istio ingress gateway. Please take a look at this doc for more details on installing and uninstalling Istio on GKE.
Also adding this document for installing Istio on GKE, which also covers installing it on an existing cluster to quickly evaluate Istio.
I am running a minimal stateful database service on GKE on a single-node cluster. I've set up the database as a StatefulSet on a single pod as of now. The database exposes a management console on a particular port along with the mandatory database port. I am attempting to do two things:
Expose the management port over a global HTTP(S) load balancer.
Expose the database port outside of GKE to be consumed by the likes of Cloud Functions or App Engine applications.
My StatefulSet is running fine and I can see from the container logs that the database has properly booted up and is listening on the required ports.
I am attempting to set up a standalone NEG (ref: https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg) using a simple ClusterIP service.
The ClusterIP service comes up fine and I can see it using
kubectl get service service-name
but I don't see the NEG set up as such; the following command returns nothing:
$ gcloud compute network-endpoint-groups list
Listed 0 items.
My pod exposes port 8080, my service maps 51000 to 8080, and I have provided the NEG annotation:
cloud.google.com/neg: '{"exposed_ports": {"51000":{}}}'
I don't see any errors as such, but neither do I see a NEG created/listed.
Any suggestions on how I would go about debugging this?
As a follow-up question...
When exposing the NEG over a global load balancer, how do I enforce authn? I'm OK with either service account roles or OAuth/OpenID.
Would I be able to expose multiple ports using a single NEG? For example, if I wanted to expose one port to my global load balancer and another to local services, is this possible with a single NEG, or should I expose each port using a dedicated ClusterIP service?
Where can I find the documentation/specification for Google Kubernetes annotations? I tried to expose two ports on the NEG using the following annotation syntax. Is that even supported/meaningful?
cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}}'
Thanks in advance!
In order to create the service that is backed by a network endpoint group, you need to be working on a GKE Cluster that is VPC Native:
https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#before_you_begin
When you create a new cluster, this option is disabled by default and you must enable it upon creation. You can confirm whether your cluster is VPC-native by going to your cluster details in GKE. It should appear like this:
VPC-native (alias IP) Enabled
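You can also check this from the command line; a sketch with a placeholder cluster name and zone:
gcloud container clusters describe my-cluster --zone us-central1-a --format="value(ipAllocationPolicy.useIpAliases)"
If this prints True, the cluster is VPC-native.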
If the cluster is not VPC-native, you won't be able to use this feature, as described in the restrictions:
https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#restrictions
If you have VPC-native enabled, also make sure that the pods carry the labels the service selects on ("purpose:" and "topic:" in the documentation's example), so that they are members of the service:
kubectl get pods --show-labels
You can also create multi-port services, as described in the Kubernetes documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
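Putting the NEG annotation and a multi-port ClusterIP service together, a rough sketch (the service name, selector and second target port are assumptions; the 51000-to-8080 mapping is taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: db-svc                       # assumed name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"51000":{},"51010":{}}}'
spec:
  type: ClusterIP
  selector:
    app: my-database                 # assumed pod label
  ports:
  - name: console
    port: 51000
    targetPort: 8080                 # management console port from the question
  - name: db
    port: 51010
    targetPort: 5432                 # assumed database port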
I'm building a data ingestion layer for my company where I have a lot of different integration points (REST APIs).
Some of the APIs require you to connect from a whitelisted IP.
I'd really like to use Google Cloud Functions / Pub/Sub to build the ingestion logic because of its scalability and reduced cost.
But the problem is that Google Cloud Functions always connects from random IPs and there is nothing we can do about that, as is answered in this question: Possible to get static IP address for Google Cloud Functions?
So my question is: is there a way to proxy/NAT Cloud Functions so that they connect from a set of static IPs?
This is now possible by configuring network settings for Cloud Functions, in particular the egress settings.
Taken from the Official Docs:
Via Console:
Open the Functions Overview page in the Cloud Console
Click Create function. Alternatively, click an existing function to go to its details page, and click Edit
Expand the advanced settings by clicking Environment variables, networking, timeouts and more.
In the Networking section, under Egress settings, select a Serverless VPC Access connector.
Select the appropriate egress setting based on how you want to route outbound traffic through the connector.
Via gcloud:
gcloud functions deploy FUNCTION_NAME \
--vpc-connector CONNECTOR_NAME \
--egress-settings EGRESS_SETTINGS \
FLAGS...
where:
FUNCTION_NAME is the name of your function.
CONNECTOR_NAME is the name of the Serverless VPC Access connector to use. See the gcloud documentation for more information.
Note: You can omit the --vpc-connector flag if you are updating egress settings on an existing function that already has a connector.
EGRESS_SETTINGS is one of the supported values for egress settings: see gcloud documentation.
FLAGS... refers to other flags you pass to the deploy command.
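As a sketch with placeholder names, assuming the function already exists and keeps its current source and trigger:
gcloud functions deploy my-ingest-fn \
  --vpc-connector my-connector \
  --egress-settings all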
After this, you only need to
Set up Cloud NAT and
Specify a static IP address for NAT.
Create a Cloud NAT:
gcloud compute routers nats create nat-config \
--router=nat-router \
--auto-allocate-nat-external-ips \
--nat-all-subnet-ip-ranges \
--enable-logging
Specify IP addresses:
gcloud compute routers nats create nat-config \
--router=nat-router \
--nat-external-ip-pool=ip-address1,ip-address2
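These commands assume the Cloud Router and the reserved addresses already exist; a rough sketch of creating them first (names, network and region are placeholders):
gcloud compute addresses create nat-ip-1 nat-ip-2 --region=us-central1
gcloud compute routers create nat-router \
  --network=default \
  --region=us-central1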
As mentioned by @Murtaza Kanchwala, it's not possible to proxy/NAT Cloud Functions so that they would come from a set of static IPs. However, as this would be a good feature, I opened a feature request for it to be implemented. For all further updates, refer to the request itself, since all updates will be posted there.
I'm trying to set up a NAT gateway for Kubernetes nodes on GKE/GCE.
I followed the instructions in the tutorial (https://cloud.google.com/vpc/docs/special-configurations, chapter "Configure an instance as a NAT gateway") and also tried the Terraform tutorial (https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway).
But with both tutorials (even on newly created Google projects) I get the same two issues:
The NAT isn't working at all; traffic still goes out directly from the nodes.
I can't SSH into my GKE nodes (timeout). I already tried setting up a firewall rule with priority 100 that allows all tcp:22 traffic.
As soon as I tag the GKE node instances so that the configured route applies to them, the SSH connection is no longer possible.
You've already found the solution to the first problem: tag the nodes with the correct tag, or manually create a route targeting the instance group that is managing your GKE nodes.
Regarding the SSH issue:
This is answered under "Caveats" in the README for the NAT Gateway for GKE example in the terraform tutorial repo you linked (reproduced here to comply with StackOverflow rules).
The web console mentioned below uses the same SSH mechanism as kubectl exec internally. The short version is that, as of the time of posting, it's not possible to both route all egress traffic through a NAT gateway and use kubectl exec to interact with pods running on a cluster.
Update # 2018-09-25:
There is a workaround available if you only need to route specific traffic through the NAT gateway, for example, if you have a third party whose service requires whitelisting your IP address in their firewall.
Note that this workaround requires strong alerting and monitoring on your part as things will break if your vendor's public IP changes.
If you specify a strict destination IP range when creating your Route in GCP then only traffic bound for those addresses will be routed through the NAT Gateway. In our case we have several routes defined in our VPC network routing table, one for each of our vendor's public IP addresses.
In this case the various kubectl commands including exec and logs will continue to work as expected.
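A rough sketch of such a route (the destination range, instance name, zone and node tag are placeholders for your vendor's public IP and your own NAT setup):
gcloud compute routes create vendor-api-via-nat \
  --network=default \
  --destination-range=203.0.113.10/32 \
  --next-hop-instance=nat-gateway-instance \
  --next-hop-instance-zone=us-central1-a \
  --tags=my-gke-node-tag \
  --priority=800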
A potential workaround is to use the command in the snippet below to connect to a node and use docker exec on the node to enter a container. This of course means you will need to first locate the node your pod is running on before jumping through the gateway onto the node and running docker exec.
Caveats
The web console SSH will no longer work; you have to jump through the NAT gateway machine to SSH into a GKE node:
eval ssh-agent $SHELL
ssh-add ~/.ssh/google_compute_engine
CLUSTER_NAME=dev
REGION=us-central1
gcloud compute ssh $(gcloud compute instances list --filter=name~nat-gateway-${REGION} --uri) --ssh-flag="-A" -- ssh $(gcloud compute instances list --filter=name~gke-${CLUSTER_NAME}- --limit=1 --format='value(name)') -o StrictHostKeyChecking=no
Source: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples/gke-nat-gateway
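A rough sketch of the docker exec approach mentioned above (the pod name is a placeholder, and it assumes the nodes use Docker as the container runtime):
# find which node the pod is running on
kubectl get pod my-pod -o jsonpath='{.spec.nodeName}'
# after SSH-ing to that node through the NAT gateway:
docker ps | grep my-pod
docker exec -it <container-id> sh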
You can use kubeip in order to assign static external IP addresses to your GKE nodes:
https://blog.doit-intl.com/kubeip-automatically-assign-external-static-ips-to-your-gke-nodes-for-easier-whitelisting-without-2068eb9c14cd
I am supposed to install the Google Cloud SDK on a secured Windows server where even the ports for HTTP (80) and HTTPS (443) are not enabled.
Which ports need to be opened to work with the gcloud, gsutil and bq commands?
I tested the behaviour on my machine; I expected to need merely port 443, because the Google Cloud SDK is based on HTTPS REST API calls.
For example, you can check what is going on behind the scenes with the --log-http flag:
gcloud compute instances list --log-http
Therefore you need an egress rule allowing TCP:443 traffic.
With respect to the ingress traffic:
If your firewall is stateful and recognises that you opened the connection, it will let the return traffic pass (the most common scenario), and therefore you do not need any rule for incoming traffic.
Otherwise you will also need to allow TCP:443 incoming traffic.
Update
Therefore you will need to be able to open connections toward:
accounts.google.com:443
*.googleapis.com:443
*:9000 for the serial port feature, in case you need it
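To verify from the server that these endpoints are reachable on 443, a quick PowerShell check could look like this (Test-NetConnection is a standard Windows cmdlet; storage.googleapis.com stands in for *.googleapis.com):
Test-NetConnection accounts.google.com -Port 443
Test-NetConnection storage.googleapis.com -Port 443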
The error below shows it is port 443:
app> gcloud storage cp C:\Test-file6.txt gs://dl-bugcket-dev/
ERROR: (gcloud.storage.cp) There was a problem refreshing your current auth tokens: HTTPSConnectionPool(host='sts.googleapis.com', port=443): Max retries exceeded with url: /v1/token (Caused by NewConnectionError...
If you run netstat -anb at the same time as you run any gcloud command that needs a remote connection, you will also see an entry like the one below for the app you are using, in my case PowerShell:
[PowerShell.exe]
TCP 142.174.184.157:63546 40.126.29.14:443 SYN_SENT
Do not use any proxy when checking for the above entry, or gcloud will connect to the proxy and you won't see the actual port. You can do this by creating a new configuration:
gcloud config configurations create no-proxy-config
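A sketch of using it (the account and project values are placeholders; a fresh configuration starts with no proxy properties set):
gcloud config configurations activate no-proxy-config
gcloud config set account your-account@example.com
gcloud config set project your-project-id
Then re-run the gcloud command while watching netstat -anb.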