ServiceEntry to connect to external mysql instances - istio

I want to connect my services to one of the 3 MySQL instances that sit outside the mesh, since it is possible that one of them is unavailable at some point.
I have the following configuration, but every time I try to make a request from inside the mesh, the ServiceEntry host cannot be reached:
telnet non-existing-domain.io 3306
telnet: could not resolve non-existing-domain.io/3306: Name or service not known
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-se-mysql
  namespace: mysql-router
spec:
  hosts:
  - non-existing-domain.io
  location: MESH_EXTERNAL
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  resolution: DNS
  endpoints:
  - address: mysql01foo.io
    ports:
      tcp: 3306
  - address: mysql02foo.io
    ports:
      tcp: 3306
  - address: mysql03foo.io
    ports:
      tcp: 3306
What am I doing wrong?
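One thing worth checking (not stated in the post): the telnet error is plain DNS resolution failing inside the pod. A host that exists only in a ServiceEntry, like non-existing-domain.io, is not resolvable by the application unless Istio's DNS proxying is enabled (available since Istio 1.8). A minimal sketch of turning it on mesh-wide via IstioOperator, assuming an istioctl-based installation:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Let the sidecar answer DNS queries for ServiceEntry hosts
        ISTIO_META_DNS_CAPTURE: "true"
Sidecars pick this up after the workloads are restarted; afterwards the telnet test should at least get past name resolution.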

Related

How to setup Firewall for GKE

I can't use the external IP of the GKE service I deployed. It was deployed successfully by Jenkins. When I run kubectl get service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello LoadBalancer 10.92.14.31 34.170.30.56 8080:31110/TCP 2d21h
I checked my deployment.yaml and I don't think there is a problem with it; the file is below:
spec:
  containers:
  - name: hello
    image: azmassage/hello:latest
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
      name: hello
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 31110
  selector:
    app: hello
    tier: hello
  type: LoadBalancer
I think this is a firewall problem. After I created the firewall rule, I still can't connect to it. Below is my test:
admin_#cloudshell:~$ curl http://34.170.30.56:8080
curl: (7) Failed to connect to 34.170.30.56 port 8080: Connection refused
I'm not sure, but it may make sense to open port 31110 on the firewall and check it with curl.
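For reference, opening the NodePort could look like the sketch below; the rule name and network are assumptions, not taken from the question:
# Hypothetical GCP firewall rule allowing the NodePort from anywhere
gcloud compute firewall-rules create allow-hello-nodeport \
  --network=default \
  --allow=tcp:31110 \
  --source-ranges=0.0.0.0/0
# Test against a node's external IP rather than the LoadBalancer IP
curl http://<node-external-ip>:31110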

How can I connect to kubernetes-dashboard via "https"?

I have a running private Kubernetes cluster (v1.20) that uses Fargate instances for all pods in the cluster. Access is restricted to the NodePort range and 443.
I use externalDns to create Route53 entries to route from my internal network to the Kubernetes cluster. Everything works fine with e.g. UIs listening on port 443.
So now I want to reach the kubernetes-dashboard not via a proxy to my localhost but via DNS resolution with Route53. For this, I made two changes in kubernetes-dashboard.yml:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
is now:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  annotations:
    external-dns.alpha.kubernetes.io/hostname: kubernetes-dashboard.hostname.local
spec:
  ports:
  - port: 443
    targetPort: 8443
  externalTrafficPolicy: Local
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
and the container specs now contain the self-signed certificates:
spec:
  containers:
  - name: kubernetes-dashboard
    image: kubernetesui/dashboard:v2.0.0
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
      protocol: TCP
    args:
    - --auto-generate-certificates
    - --namespace=kubernetes-dashboard
    - --tls-cert-file=/tls.crt
    - --tls-key-file=/tls.key
The certificates (the same ones used for the UIs) are mounted via a Kubernetes secret.
The subnets for the Fargate instances are the same as for my other applications, yet I receive a "Connection refused" when I try to call the dashboard.
Checking the dashboard with the local proxy, the configuration seems fine, and the DNS entry resolves to the correct IP address of the Fargate instance.
My problem is that I already use this setup elsewhere and it works, but I see no difference to the dashboard here. Am I missing something?
Can anybody help me out here?
Greetings,
Eric
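Not part of the original post, but two quick checks that would narrow this down: a NodePort service listens on the assigned 30000-32767 port on the node, not on 443 itself, and the service only forwards traffic if it has endpoints. Assuming kubectl access to the cluster:
# Shows the assigned nodePort next to 443 (e.g. 443:3XXXX/TCP)
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
# Should list the dashboard pod's IP; empty means the selector does not match
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard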

Connect to redis instance from Istio enabled pod in a EKS cluster

I am running an EKS cluster with Istio enabled. I have launched an EC2 instance where a Redis server is running. The EKS cluster and Redis are both in the same VPC, and all inbound and outbound rules are allowed for both of them. But when I try to access the Redis instance from inside a pod, it throws "Connection reset by peer", while it works fine from a non-Istio pod. What could be the reason?
Istio version:
image: docker.io/istio/pilot:1.4.3
imagePullPolicy: IfNotPresent
image: docker.io/istio/proxyv2:1.4.3
imagePullPolicy: IfNotPresent
I have also created a ServiceEntry in that namespace.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-redis
  namespace: mynamespace
spec:
  hosts:
  - "redis-X.xxx.xxxx"
  location: MESH_EXTERNAL
  ports:
  - number: 6379
    name: http
    protocol: REDIS
  resolution: NONE
As you are using a domain name as the host, you need to set the resolution to DNS. When you set the resolution to NONE, it tries to connect to an IP address instead of resolving the domain name.
Here is my service entry for external Redis access.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: redis-svc
spec:
  hosts:
  - redis01.example.com
  ports:
  - number: 6379
    name: redis
    protocol: TCP
  resolution: DNS
  location: MESH_EXTERNAL
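A quick way to verify the entry from inside the mesh is a redis-cli ping from an application pod; the workload reference below is a placeholder and assumes redis-cli is available in the image:
# "deploy/my-app" is hypothetical; a PONG reply means the sidecar passes the Redis traffic through
kubectl exec deploy/my-app -- redis-cli -h redis01.example.com -p 6379 ping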

AWS EKS: Service(LoadBalancer) running but not responding to requests

I am using AWS EKS.
I have launched my Django app with gunicorn in the Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
      - name: vogofleet
        image: xx.xx.com/api:image2
        imagePullPolicy: Always
        env:
        - name: DATABASE_HOST
          value: "test-db-2.xx.xx.xx.xx.com"
        - name: DATABASE_PASSWORD
          value: "xxxyyyxxx"
        - name: DATABASE_USER
          value: "admin"
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_NAME
          value: "test"
        ports:
        - containerPort: 9000
I have applied these changes and I can see my pod running in kubectl get pods.
Now I am trying to expose it via a Service object. Here is my Service object:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer. I can see that it has launched a new load balancer in the AWS console, but I am not able to access it from the browser; it says the address didn't return any data. The ELB shows the health check on the instances as OutOfService.
There are other pods also running in the cluster. When I run printenv in those pods, here is the result,
root#consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
I also tried to check the connection to my api pod:
root#consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But, when I do port-forward to my localhost, I can access it on my localhost,
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers and from the public internet? The security groups are correct.
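One pattern that matches these symptoms exactly (port-forward works, in-cluster connections and the ELB health check are refused) is the application binding only to 127.0.0.1 inside the container. This is not confirmed by the post, but checking the gunicorn bind address is a cheap test; a sketch with a placeholder WSGI module:
# Bind to all interfaces so other pods and the ELB health check can reach port 9000
# "config.wsgi" is a placeholder for the actual WSGI module
gunicorn config.wsgi:application --bind 0.0.0.0:9000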
I have the same problem. Here's the output of the kubectl describe service command:
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 117s service-controller Ensured load balancer
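Worth noting about this particular output (my observation, not part of the comment): the service.beta.kubernetes.io/aws-load-balancer-internal: true annotation makes AWS provision an internal ELB (hence the internal-... hostname), which is not reachable from the public internet. An internet-facing variant simply omits that annotation, for example:
apiVersion: v1
kind: Service
metadata:
  name: nginx-elb
  # no aws-load-balancer-internal annotation -> internet-facing ELB
spec:
  type: LoadBalancer
  selector:
    deploy: slido
  ports:
  - name: http
    port: 80
    targetPort: 3000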

Expose internal IP so it can be accessed from internet

I just deployed nginx on a K8s node in a cluster; the master and worker communicate using internal IP addresses.
I can curl http://worker_ip:8080 (nginx) from the internal network, but how can I make it accessible from the external/internet network?
Or should I use a public IP as my node host?
Update the service type to NodePort and grab the nodePort that is assigned to the service.
You should be able to access nginx using host:nodeport.
See below for reference:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
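To use it after applying the manifest, read the assigned nodePort from the service and curl a node directly; the node's firewall or security group has to allow that port, and the placeholders below are not from the original answer:
# The PORT(S) column shows something like 8080:3XXXX/TCP; 3XXXX is the nodePort
kubectl get svc my-nginx
curl http://<node-ip>:<nodeport>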