I would like to use a Google-managed certificate on GKE.
I have a GKE cluster (1.22) with the external-dns Helm chart configured against a Cloud DNS zone. I then tried:
$ gcloud compute ssl-certificates create managed-cert \
--description "managed-cert" \
--domains "<hostname>" \
--global
$ kubectl create ns test
$ cat <<EOF | kubectl apply -f -
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mc-deployment
  namespace: test
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 2
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: my-mc-service
  namespace: test
spec:
  type: NodePort
  selector:
    app: products
    department: sales
  ports:
  - name: my-first-port
    protocol: TCP
    port: 60001
    targetPort: 50001
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: test
spec:
  domains:
  - <hostname>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-psc-ingress
  namespace: test
  annotations:
    networking.gke.io/managed-certificates: "managed-cert"
    ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "<hostname>"
    http:
      paths:
      - path: "/"
        pathType: "ImplementationSpecific"
        backend:
          service:
            name: "my-mc-service"
            port:
              number: 60001
EOF
The DNS zone is updated correctly and I am able to browse http://<hostname>.
However, if I try HTTPS:
$ curl -v https://<hostname>
* Trying 34.120.218.42:443...
* Connected to <hostname> (34.120.218.42) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* CApath: none
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.0 (IN), TLS header, Unknown (21):
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* error:0A000410:SSL routines::sslv3 alert handshake failure
* Closing connection 0
curl: (35) error:0A000410:SSL routines::sslv3 alert handshake failure
$ gcloud compute ssl-certificates list
NAME TYPE CREATION_TIMESTAMP EXPIRE_TIME MANAGED_STATUS
managed-cert MANAGED 2022-06-30T00:27:25.708-07:00 PROVISIONING
<hostname>: PROVISIONING
mcrt-fe44e023-3234-42cc-b009-67f57dcdc5ef MANAGED 2022-06-30T00:27:52.707-07:00 PROVISIONING
<hostname>: PROVISIONING
I do not understand why it is creating a new managed certificate (mcrt-fe44e023-3234-42cc-b009-67f57dcdc5ef) even though I am specifying mine explicitly.
Any ideas?
Thanks
After a bit of experimentation I understand what is going on.
The configuration above works; it just takes around 20 minutes for the certificate to be provisioned and propagated.
Regarding the duplicate certificates: it is not necessary to create an ssl-certificates object with gcloud, because the ManagedCertificate custom resource creates one for you (the mcrt-* entry).
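To watch the provisioning progress from both sides, something like the following can be used (a quick sketch using the names above; the kubectl resource is the ManagedCertificate created in the test namespace):
# status of the Kubernetes resource (should eventually report Active)
$ kubectl describe managedcertificate managed-cert -n test
# status of the underlying Google-managed certificate (PROVISIONING -> ACTIVE)
$ gcloud compute ssl-certificates list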
To recap, a working example:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-mc-deployment
  namespace: test
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 2
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"
---
apiVersion: v1
kind: Service
metadata:
  name: my-mc-service
  namespace: test
spec:
  type: NodePort
  selector:
    app: products
    department: sales
  ports:
  - name: my-first-port
    protocol: TCP
    port: 60001
    targetPort: 50001
---
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: test
spec:
  domains:
  - <hostname>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-psc-ingress
  namespace: test
  annotations:
    networking.gke.io/managed-certificates: "managed-cert"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: "<hostname>"
    http:
      paths:
      - path: "/"
        pathType: "ImplementationSpecific"
        backend:
          service:
            name: "my-mc-service"
            port:
              number: 60001
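Once the ManagedCertificate reports Active (after roughly the 20 minutes mentioned above), the HTTPS request from the question should succeed; a minimal verification sketch:
$ kubectl get managedcertificate managed-cert -n test
$ curl -v https://<hostname>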
Related
I am trying to set up an Istio gateway (with certificates from cert-manager) for public access to a deployed application. Here are the configurations:
cert-manager installed in the cluster via Helm:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
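Before creating the issuer, it is worth confirming that cert-manager actually came up; a minimal check (not part of the original steps) could be:
# the cert-manager, cainjector and webhook pods should all be Running
kubectl get pods -n cert-manager
# the CRDs installed by --set installCRDs=true
kubectl get crds | grep cert-manager.io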
Certificate issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: kube-system
spec:
  acme:
    email: xxx#gmail.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-staging
    # Add a single challenge solver, HTTP01 using istio
    solvers:
    - http01:
        ingress:
          class: istio
Certificate file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: url-certs
  namespace: istio-system
  annotations:
    cert-manager.io/issue-temporary-certificate: "true"
spec:
  secretName: url-certs
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: bot.demo.live
  dnsNames:
  - bot.demo.live
  - "*.demo.live"
Gateway file:
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https-url-1
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: "url-certs" # This should match the Certificate secretName
Application Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: microbot
  name: microbot
  namespace: bot-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microbot
  template:
    metadata:
      labels:
        app: microbot
    spec:
      containers:
      - name: microbot
        image: dontrebootme/microbot:v1
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
Virtual service and application service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microbot-virtual-svc
  namespace: bot-demo
spec:
  hosts:
  - bot.demo.live
  gateways:
  - istio-system/public-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: microbot-service
        port:
          number: 9100
---
apiVersion: v1
kind: Service
metadata:
  name: microbot-service
  namespace: bot-demo
spec:
  selector:
    app: microbot
  ports:
  - port: 9100
    targetPort: 80
Whenever I try to curl https://bot.demo.live, I get a certificate error. The certificate issuer is working. I just can't figure out how to expose the deployed application via the Istio gateway for external access. bot.demo.live is already in my /etc/hosts file and I can ping it just fine.
What am I doing wrong?
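Two quick checks that help narrow this kind of problem down are whether the TLS secret exists in the gateway's namespace and whether the gateway's Envoy actually loaded it; a sketch, assuming the default istio=ingressgateway label:
kubectl get secret url-certs -n istio-system
istioctl proxy-config secret -n istio-system $(kubectl get pod -n istio-system -l istio=ingressgateway -o jsonpath='{.items[0].metadata.name}')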
I wanted to set up and use the Istio egress gateway.
I followed this link https://preliminary.istio.io/latest/blog/2018/egress-tcp/ and wrote this manifest:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-oracle
spec:
  hosts:
  - my.oracle.instance.com
  addresses:
  - 192.168.100.50/32
  ports:
  - name: tcp
    number: 1521
    protocol: tcp
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - hosts:
    - my.oracle.instance.com
    port:
      name: tcp
      number: 1521
      protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-destination-rule-for-oracle
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: external-oracle
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-external-oracle-through-egress-gateway
spec:
  gateways:
  - mesh
  - istio-egressgateway
  hosts:
  - my.oracle.instance.com
  tcp:
  - match:
    - destinationSubnets:
      - 192.168.100.50/32
      gateways:
      - mesh
      port: 1521
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 1521
        subset: external-oracle
  - match:
    - gateways:
      - istio-egressgateway
      port: 1521
    route:
    - destination:
        host: my.oracle.instance.com
        port:
          number: 1521
      weight: 100
My application is then not able to start because of a JDBC error.
I started to watch the egress gateway pod's logs but I do not see any sign of traffic.
So I googled and found this link: https://istio.io/latest/blog/2018/egress-monitoring-access-control/ to increase the egress gateway pod's logging, but it looks a bit deprecated to me.
cat <<EOF | kubectl apply -f -
# Log entry for egress access
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: egress-access
  namespace: istio-system
spec:
  severity: '"info"'
  timestamp: request.time
  variables:
    destination: request.host | "unknown"
    path: request.path | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    reporterUID: context.reporter.uid | "unknown"
    sourcePrincipal: source.principal | "unknown"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Handler for error egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-error-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 2 # output log level as error
  outputAsJson: true
---
# Rule to handle access to *.cnn.com/politics
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-politics
  namespace: istio-system
spec:
  match: request.host.endsWith("cnn.com") && request.path.startsWith("/politics") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-error-logger.stdio
    instances:
    - egress-access.logentry
---
# Handler for info egress access entries
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: egress-access-logger
  namespace: istio-system
spec:
  severity_levels:
    info: 0 # output log level as info
  outputAsJson: true
---
# Rule to handle access to *.com
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: handle-cnn-access
  namespace: istio-system
spec:
  match: request.host.endsWith(".com") && context.reporter.uid.startsWith("kubernetes://istio-egressgateway")
  actions:
  - handler: egress-access-logger.stdio
    instances:
    - egress-access.logentry
EOF
But when I try to apply this, I get these errors:
no matches for kind "logentry" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
no matches for kind "stdio" in version "config.istio.io/v1alpha2"
no matches for kind "rule" in version "config.istio.io/v1alpha2"
Is there a new API version for these kinds?
istioctl version
client version: 1.12.0
control plane version: 1.12.0
data plane version: 1.12.0 (28 proxies)
Is there a way to get a working Istio egress gateway with logging (the same way the ingress gateway logging works)?
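For reference, the config.istio.io/v1alpha2 kinds (logentry, stdio, rule) belong to Mixer, which was removed from Istio well before 1.12, so there is no newer apiVersion for them. On recent releases, access logging is typically enabled through the Telemetry API instead; a minimal sketch for the egress gateway, assuming the alpha telemetry.istio.io/v1alpha1 API and the built-in envoy log provider:
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: egressgateway-access-logging
  namespace: istio-system
spec:
  # apply only to the egress gateway workload
  selector:
    matchLabels:
      istio: egressgateway
  accessLogging:
  - providers:
    - name: envoy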
We are using Istio as a service mesh to secure our cluster. We have several web applications exposed through the ingress gateway as follows: ingress-gateway-id:80/app1/, ingress-gateway-id:80/app2/ and ingress-gateway-id:80/app3/.
We have a gateway that routes traffic of the ingress gateway on port 80.
For each application, we create a virtual service that routes the traffic from (for example) ingress-gateway-id:80/app1/app1-api-uri/ to app1-service/app1-api-uri/
The main issue we are currently facing is that some applications only work under / (for example app2-service/), which forces us to route / through the virtual service and therefore restricts the ingress gateway to a single application (we do not want to rely on host headers, since all our applications are web apps accessed through a browser in our use case).
My question is: how can I expose multiple applications on / through my ingress gateway (on the same port 80, for example) without having to set host headers from the client (in our case the browser)?
If you don't want to use your domains as virtual service hosts, I would say the only options here are to:
- use rewrite in your virtual service, or
- use custom headers.
There is an example of rewrite in the Istio documentation.
HTTPRewrite
HTTPRewrite can be used to rewrite specific parts of a HTTP request before forwarding the request to the destination. Rewrite primitive can be used only with HTTPRouteDestination. The following example demonstrates how to rewrite the URL prefix for api call (/ratings) to ratings service before making the actual API call.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-route
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /ratings
    rewrite:
      uri: /v1/bookRatings
    route:
    - destination:
        host: ratings.prod.svc.cluster.local
        subset: v1
There is an example for 2 nginx deployments, both serving on /.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: comp-ingress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - comp-ingress-gateway
  hosts:
  - '*'
  http:
  - name: match
    match:
    - uri:
        prefix: /a
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
        port:
          number: 80
  - name: default
    match:
    - uri:
        prefix: /b
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
And a test with curl.
curl -v xx.xxx.xxx.x/a
HTTP/1.1 200 OK
Hello nginx1
curl -v xx.xxx.xxx.x/b
HTTP/1.1 200 OK
Hello nginx2
There is also an example of custom headers in the Istio documentation.
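A minimal sketch of the header-based variant (the x-app header name is made up for illustration; gateway, host and subsets are the same as above):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt-headers
spec:
  gateways:
  - comp-ingress-gateway
  hosts:
  - '*'
  http:
  - match:
    - headers:
        x-app:
          exact: nginx1
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v1
        port:
          number: 80
  - route:
    # default: requests without the header go to v2
    - destination:
        host: nginx.default.svc.cluster.local
        subset: v2
        port:
          number: 80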
I'm trying to configure an Istio VirtualService / DestinationRule so that a grpc call to the service from a pod labeled datacenter=chi5 is routed to a grpc server on a pod labeled datacenter=chi5.
I have Istio 1.4 installed on a cluster running Kubernetes 1.15.
A route is not getting created in istio-sidecar envoy config for the chi5 subset and traffic is being routed round robin between each service endpoint regardless of pod label.
Kiali is reporting an error in the DestinationRule config: "this subset's labels are not found in any matching host".
Do I misunderstand the functionality of these Istio traffic management objects or is there an error in my configuration?
I believe my pods are correctly labeled:
$ (dev) kubectl get pods -n istio-demo --show-labels
NAME READY STATUS RESTARTS AGE LABELS
ticketclient-586c69f77d-wkj5d 2/2 Running 0 158m app=ticketclient,datacenter=chi6,pod-template-hash=586c69f77d,run=client-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-bqnqb 2/2 Running 0 158m app=ticketserver,datacenter=chi5,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
ticketserver-7654cb5f88-pms25 2/2 Running 0 158m app=ticketserver,datacenter=chi6,pod-template-hash=7654cb5f88,run=ticket-service,security.istio.io/tlsMode=istio
The port-name on my k8s Service object is correctly prefixed with the grpc protocol:
$ (dev) kubectl describe service -n istio-demo ticket-service
Name: ticket-service
Namespace: istio-demo
Labels: app=ticketserver
Annotations: <none>
Selector: run=ticket-service
Type: ClusterIP
IP: 10.234.14.53
Port: grpc-ticket 10000/TCP
TargetPort: 6001/TCP
Endpoints: 10.37.128.37:6001,10.44.0.0:6001
Session Affinity: None
Events: <none>
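One way to confirm whether a chi5 route exists in the client sidecar is to dump its Envoy route configuration with istioctl (a sketch; the pod name is taken from the listing above):
$ istioctl proxy-config routes ticketclient-586c69f77d-wkj5d -n istio-demo -o json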
I've deployed the following Istio objects to Kubernetes:
Kind:         VirtualService
Name:         ticket-destinationrule
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Spec:
  Host:  ticket-service.istio-demo.svc.cluster.local
  Subsets:
    Labels:
      Datacenter:  chi5
    Name:          chi5
    Labels:
      Datacenter:  chi6
    Name:          chi6
Events:  <none>
---
Name:         ticket-virtualservice
Namespace:    istio-demo
Labels:       app=ticketserver
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Spec:
  Hosts:
    ticket-service.istio-demo.svc.cluster.local
  Http:
    Match:
      Name:  ticket-chi5
      Port:  10000
      Source Labels:
        Datacenter:  chi5
    Route:
      Destination:
        Host:    ticket-service.istio-demo.svc.cluster.local
        Subset:  chi5
Events:  <none>
I have reproduced your issue with 2 nginx pods.
What you want can be achieved with sourceLabels; check the example below, I think it explains everything.
To start, I made 2 ubuntu pods, one with the label app: ubuntu and one without any labels.
apiVersion: v1
kind: Pod
metadata:
  name: ubu2
  labels:
    app: ubuntu
spec:
  containers:
  - name: ubu2
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]

apiVersion: v1
kind: Pod
metadata:
  name: ubu1
spec:
  containers:
  - name: ubu1
    image: ubuntu
    command: ["/bin/sh"]
    args: ["-c", "apt-get update && apt-get install curl -y && sleep 3000"]
Then 2 deployments with a service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  selector:
    matchLabels:
      run: nginx2
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx2
        app: frontend
    spec:
      containers:
      - name: nginx2
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx2 > /usr/share/nginx/html/index.html"]

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
Next is the virtual service with the mesh gateway, so it works only inside the mesh. It has 2 matches: one with sourceLabels, which routes traffic from pods with the app: ubuntu label to the nginx pod behind the v1 subset, and a default route, which sends everything else to the nginx pod behind the v2 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh
  hosts:
  - nginx.default.svc.cluster.local
  http:
  - name: match-myuid
    match:
    - sourceLabels:
        app: ubuntu
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v1
  - name: default
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
        subset: v2
And the last thing is the DestinationRule, which takes the subsets from the virtual service and sends traffic to the proper nginx pod, labeled either nginx1 or nginx2.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginxdest
spec:
  host: nginx.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      run: nginx1
  - name: v2
    labels:
      run: nginx2
kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx1-5c5b84567c-tvtzm 2/2 Running 0 23m app=frontend,run=nginx1,security.istio.io/tlsMode=istio
nginx2-5d95c8b96-6m9zb 2/2 Running 0 23m app=frontend,run=nginx2,security.istio.io/tlsMode=istio
ubu1 2/2 Running 4 3h19m security.istio.io/tlsMode=istio
ubu2 2/2 Running 2 10m app=ubuntu,security.istio.io/tlsMode=istio
Results from ubuntu pods
Ubuntu with label
curl nginx/
Hello nginx1
Ubuntu without label
curl nginx/
Hello nginx2
Let me know if that helps.
I have the following application, which I'm able to run in K8S successfully using a service of type LoadBalancer. It is a very simple app with two routes:
/ - you should see 'hello application'
/api/books - should provide a list of books in JSON format
This is the service
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: go-ms
This is the deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
After applying both YAMLs and calling the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I was able to see the data in the browser as expected, and the same for the root route using just the external IP.
Now I want to use Istio, so I followed the guide and installed it successfully via Helm
using https://istio.io/docs/setup/kubernetes/install/helm/, and verified that all 53 CRDs are there and that the istio-system
components (such as istio-ingressgateway, istio-pilot, etc., all 8 deployments) are up and running.
I've changed the service above from LoadBalancer to NodePort
and created the following Istio config according to the Istio docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 8080
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    - uri:
        exact: "/api/books"
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
In addition, I've labeled the namespace where the application is deployed:
kubectl label namespace books istio-injection=enabled
Now, to get the external IP, I've used the command
kubectl get svc -n istio-system -l istio=ingressgateway
and got this as the external IP:
b0751-1302075110.eu-central-1.elb.amazonaws.com
When trying to access the URL
http://b0751-1302075110.eu-central-1.elb.amazonaws.com/api/books
I got the error:
This site can’t be reached
ERR_CONNECTION_TIMED_OUT
If I run the Docker image rayndockder/http:0.0.2 via
docker run -it -p 8080:8080 httpv2
the paths work correctly!
Any idea/hint what the issue could be?
Is there a way to trace the Istio configs to see whether something is missing, or whether we have a collision with a port or a network policy maybe?
By the way, the deployment and service can run on any cluster for testing, if someone could help...
If I change everything to port 80 (in all the YAML files, the application and the Docker image) I am able to get the data for the root path, but not for /api/books.
I tried your config with the gateway port changed from 8080 to 80 in my local Minikube setup of Kubernetes and Istio. This is the command I used:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: go-ms
  labels:
    app: go-ms
    tier: service
spec:
  ports:
  - port: 8080
  selector:
    app: go-ms
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-ms
  labels:
    app: go-ms
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: go-ms
        tier: service
    spec:
      containers:
      - name: go-ms
        image: rayndockder/http:0.0.2
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
        resources:
          requests:
            memory: "64Mi"
            cpu: "125m"
          limits:
            memory: "128Mi"
            cpu: "250m"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: go-ms-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        exact: /api/books
    route:
    - destination:
        port:
          number: 8080
        host: go-ms
EOF
The reason I changed the gateway port to 80 is that the Istio ingress gateway by default only opens up a few ports such as 80, 443 and a few others. As Minikube doesn't have an external load balancer, I used the node port, which is 31380 in my case.
I was able to access the app with url of http://$(minikube ip):31380.
There is no point in changing the ports of the services and deployments, since these are application specific.
Maybe this question specifies the ports opened by the Istio ingress gateway.
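For completeness, the node port used above does not have to be hard-coded; it can be looked up from the ingress gateway service (a sketch following the Istio docs; the port name may differ between releases):
# nodePort of the HTTP port of istio-ingressgateway
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
curl http://$(minikube ip):$INGRESS_PORT/api/books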