Sonar cannot be accessed via Istio virtual service but can be accessed locally after port forwarding - google-cloud-platform

I am trying to implement SonarQube in a Kubernetes cluster. The deployment is running properly and is also exposed via a VirtualService. I am able to open the UI via localhost:port/sonar, but I am not able to access it through my external IP. I understand that Sonar binds to localhost and does not allow access from outside the remote server. I am running this on GKE with a MySQL database. Here is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonar
  labels:
    service: sonarqube
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      name: sonarqube
      labels:
        name: sonarqube
    spec:
      terminationGracePeriodSeconds: 15
      initContainers:
        - name: volume-permission
          image: busybox
          command:
            - sh
            - -c
            - sysctl -w vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: sonarqube
          image: sonarqube:6.7
          resources:
            limits:
              memory: 4Gi
              cpu: 2
            requests:
              memory: 2Gi
              cpu: 1
          args:
            - -Dsonar.web.context=/sonar
            - -Dsonar.web.host=0.0.0.0
          env:
            - name: SONARQUBE_JDBC_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:mysql://***.***.**.*:3306/sonar?useUnicode=true&characterEncoding=utf8
          ports:
            - containerPort: 9000
              name: sonarqube-port
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: sonarqube
    version: v1
  name: sonarqube
  namespace: sonar
spec:
  selector:
    name: sonarqube
  ports:
    - name: http
      port: 80
      targetPort: sonarqube-port
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-internal
  namespace: sonar
spec:
  hosts:
    - sonarqube.staging.jeet11.internal
    - sonarqube
  gateways:
    - default/ilb-gateway
    - mesh
  http:
    - route:
        - destination:
            host: sonarqube
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-external
  namespace: sonar
spec:
  hosts:
    - sonarqube.staging.jeet11.com
  gateways:
    - default/elb-gateway
  http:
    - route:
        - destination:
            host: sonarqube
---
The deployment completes successfully. My exposed service gets a public IP that has been mapped to the host URL, but I am unable to access the service at that URL.
I think I need to change the mapping so that Sonar binds to the server IP, but I cannot figure out how to do that. I cannot bind it to my cluster IP, nor to my internal or external service IP.
What should I do? Please help!

I had the same issue recently and managed to resolve it today.
I hope the following solution will work for anyone facing the same issue!
Environment
Cloud provider: Azure (AKS)
This should work regardless of which provider you use.
Istio version: 1.7.3
Kubernetes version: 1.16.10
Tools - Debugging
kubectl logs -n istio-system -l app=istiod
Shows the logs from istiod and the events happening in the control plane.
istioctl analyze -n <namespace>
Gives you any warnings and errors for a given namespace.
Lets you know if things are misconfigured.
Kiali - istioctl dashboard kiali
See whether you are getting inbound traffic.
Also shows you any misconfigurations.
Prometheus - istioctl dashboard prometheus
Query the metric istio_requests_total. This shows you the traffic going into the service.
If there is any misconfiguration you will see destination_app as unknown.
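For example (an illustrative query, not from the original answer, assuming the standard Istio metric labels), you could look at the traffic reaching the sonarqube workload with:
sum(rate(istio_requests_total{destination_service=~"sonarqube.*"}[5m])) by (destination_app, response_code)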
Issue
Unable to access the SonarQube UI via the external IP, but it is accessible via localhost (port-forward).
Unable to route traffic via the Istio ingress gateway.
Solution
Sonarqube Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    name: sonarqube
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9000
      targetPort: 9000
  selector:
    app: sonarqube
status:
  loadBalancer: {}
Your targetPort is the container port. To avoid any confusion, just make the Service port number the same as the Service targetPort.
The port name is very important here. "Istio requires the service ports to follow the naming form of 'protocol-suffix', where the '-suffix' part is optional" - KIA0601 - Port name must follow <protocol>[-suffix] form.
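For instance (illustrative names, not from the original answer), port names such as http, http-web or tcp-db follow that convention, while a name like sonarqube-port from the question's manifest does not:
ports:
  - name: http-sonar   # <protocol>[-suffix]: valid
    port: 9000
    targetPort: 9000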
Istio Gateway and VirtualService manifest for sonarqube
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 9000
        name: http
        protocol: HTTP
      hosts:
        - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
    - "XXXX.XXXX.com.au"
  gateways:
    - sonarqube-gateway
  http:
    - route:
        - destination:
            host: sonarqube
            port:
              number: 9000
Gateway protocol must be set to HTTP.
The Gateway server port and the VirtualService destination port are the same here. If your app's Service port is different, the VirtualService destination port number should match the app's Service port, and the Gateway server port should match the app's Service targetPort.
Now comes the fun bit: the hosts. If you want to access the service from outside the cluster, you need a DNS A record for your hostname (whatever hostname you want to map to the SonarQube server) pointing to the external public IP address of the istio-ingressgateway.
To get the EXTERNAL-IP address of the ingress gateway, run kubectl -n istio-system get service istio-ingressgateway.
If you do a simple nslookup (run nslookup <hostname>), the IP address you get back must match the IP address assigned to the istio-ingressgateway service.
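The output looks roughly like this (all values are illustrative):
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.0.123.45   20.53.xx.xx    15021:31821/TCP,80:30420/TCP,443:31460/TCP   45d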
Expose a new port in the ingressgateway
Note that your sonarqube gateway port (9000) is a new port that you are introducing to Kubernetes, and you are telling the cluster to listen on it. But your load balancer doesn't know about this port, so you need to open the specified gateway port on your Kubernetes external load balancer.
You don't need to manually change your load balancer service; you just need to update the ingress gateway to include the new port, which will update the load balancer automatically.
You can tell whether the port is causing issues by running istioctl analyze -n sonarqube. You should get the following warning:
Warn [IST0104] (Gateway sonarqube-gateway.sonarqube) The gateway refers to a port that is not exposed on the workload (pod selector istio=ingressgateway; port 9000). Error: Analyzers found issues when analyzing namespace: sonarqube. See https://istio.io/docs/reference/config/analysis for more information about causes and resolutions.
You should see the corresponding error in the control plane; run kubectl logs -n istio-system -l app=istiod.
At this point you need to update the istio-ingressgateway service to expose the new port. Run kubectl edit svc istio-ingressgateway -n istio-system and add an entry for the new port under ports (a sketch is shown below).
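The exact snippet from the original answer is not reproduced here, but a minimal sketch of such an entry under spec.ports could look like this (the port name is illustrative):
- name: http-sonarqube
  port: 9000
  protocol: TCP
  targetPort: 9000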
Bypass creating a new port
The previous section showed how to expose a new port. That step is optional, depending on your use case; in this section you will see how to use a port that is already exposed.
If you look at the istio-ingressgateway service, you can see that there are default ports already exposed. Here we are going to use port 80.
Your setup will then look like the sketch below. To avoid having to specify the port with your hostname, just add a match uri prefix, as shown in the VirtualService manifest.
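The original manifests are not shown here; under the same assumptions as above, a sketch of that setup could look like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80        # default port already exposed on the ingressgateway
        name: http
        protocol: HTTP
      hosts:
        - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
    - "XXXX.XXXX.com.au"
  gateways:
    - sonarqube-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: sonarqube
            port:
              number: 9000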
Time for testing
If everything up to this point works as expected, you are good to go.
During testing I made one mistake by not specifying the port. If you get a 404 status, that is still a good sign, because it lets you verify which server is handling the request. If you set things up correctly, it should be the istio-envoy server, not nginx.
Without specifying the port, this only works if you add the match uri prefix.
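For example (using the placeholder hostname from above), you can check which server answers with:
curl -I http://XXXX.XXXX.com.au/
A response routed through Istio normally carries a server: istio-envoy header.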

Do not pass the args; just try running without them once. That is working for me.
This is my deployment file; hope it is helpful.
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-service
spec:
  selector:
    app: sonarqube
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - name: sonarqube
          image: sonarqube:7.1
          resources:
            requests:
              memory: "1200Mi"
              cpu: .10
            limits:
              memory: "2500Mi"
              cpu: .50
          volumeMounts:
            - mountPath: "/opt/sonarqube/data/"
              name: sonar-data
            - mountPath: "/opt/sonarqube/extensions/"
              name: sonar-extensions
          env:
            - name: "SONARQUBE_JDBC_USERNAME"
              value: "root" # Put your db username
            - name: "SONARQUBE_JDBC_URL"
              value: "jdbc:mysql://192.168.112.4:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true" # DB URL
            - name: "SONARQUBE_JDBC_PASSWORD"
              value: password
          ports:
            - containerPort: 9000
              protocol: TCP
      volumes:
        - name: sonar-data
          persistentVolumeClaim:
            claimName: sonar-data
        - name: sonar-extensions
          persistentVolumeClaim:
            claimName: sonar-extensions

Related

kind Kubernetes: NodePort (front-end) service is not able to access ClusterIP (back-end) service from the browser

I have used kind Kubernetes to create a cluster.
I have created 3 services for 3 pods (EmberJS, Flask, Postgres). The pods are created using Deployments.
I have exposed my front-end service on port 84 (NodePort service).
My app is now accessible at localhost:84 in my machine's browser.
But the app is not able to connect to the Flask API, which is exposed as flask-dataapp-service:6003.
net:: ERR_NAME_NOT_RESOLVED
My Flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the Ember pod container, it is resolved without any issues.
But from the browser, flask-dataapp-service is not being resolved.
Find the config files below.
kind-custom.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30000
        hostPort: 84
        listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
        protocol: tcp
Emberapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: ember-dataapp-service
spec:
  selector:
    app: ember-dataapp
  ports:
    - protocol: "TCP"
      port: 4200
      nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ember-dataapp
spec:
  selector:
    matchLabels:
      app: ember-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: ember-dataapp
    spec:
      containers:
        - name: emberdataapp
          image: emberdataapp
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4200
flaskapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
    - protocol: "TCP"
      port: 6003
      targetPort: 1234
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dataapp
spec:
  selector:
    matchLabels:
      app: flask-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flask-dataapp
    spec:
      containers:
        - name: dataapp
          image: dataapp
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1234
My flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the ember pod container, it is resolved without any issues.
Kubernetes has an in-cluster DNS which allows names such as this to be resolved directly within the cluster (i.e. DNS requests do not leave the cluster). This is also why this name does not resolve outside the cluster (and hence why you cannot see it in your browser).
(Unrelated side note: this is actually a gotcha in the Kubernetes CKA certification.)
Since you have used a NodePort service, you should in theory be able to use the NodePort you described (6003) and access the app using "http://localhost:6003".
Alternatively, you can port-forward:
kubectl port-forward svc/flask-dataapp-service 6003:6003
then use the same link
While the port-forward option is not really of much use when running a local Kubernetes cluster (in fact, kubectl might fail with "port in use"), it is a good idea to get used to that method, since it is the easiest way to access a service in a remote Kubernetes cluster that is using ClusterIP or NodePort without having direct access to the nodes.
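As an illustrative sketch (not part of the original answer): if you also wanted the Flask API reachable from the host browser in this kind setup, you could make its service a NodePort and map that node port in kind-custom.yaml, e.g.:
# flask-dataapp-service switched to NodePort (the nodePort value is illustrative)
spec:
  type: NodePort
  ports:
    - protocol: "TCP"
      port: 6003
      targetPort: 1234
      nodePort: 30001
# and an extra mapping in kind-custom.yaml
extraPortMappings:
  - containerPort: 30001
    hostPort: 6003
    protocol: tcp
The app would then be reachable at http://localhost:6003 from the host browser.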

Enabling SSL on GKE endpoints not working correctly

I created an API on GKE using Cloud Endpoints. It is working fine without HTTPS; you can try it here: API without Https
I followed the instructions mentioned here: Enabling SSL for cloud endpoint. After setting up everything mentioned on that page I am able to access my endpoints with HTTPS, but with a warning:
Your connection is not private - Back to Safety (Chrome)
Check it here: API with Https
Can you please let me know what I'm missing?
Update
I'm using Google-managed SSL certificates for Cloud Endpoints on GKE.
I followed the steps mentioned in this doc but was not able to successfully add the SSL certificate.
When I go to my cloud console I see:
Some backend services are in UNKNOWN state
Here are my deployment YAMLs:
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: quran-grpc
spec:
  ports:
    - port: 81
      targetPort: 9000
      protocol: TCP
      name: rpc
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
    - port: 443
      protocol: TCP
      name: https
  selector:
    app: quran-grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quran-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quran-grpc
  template:
    metadata:
      labels:
        app: quran-grpc
    spec:
      volumes:
        - name: nginx-ssl
          secret:
            secretName: nginx-ssl
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8080",
            "--ssl_port=443",
            "--http2_port=9000",
            "--backend=grpc://127.0.0.1:50051",
            "--service=quran.endpoints.utopian-button-227405.cloud.goog",
            "--rollout_strategy=managed",
          ]
          ports:
            - containerPort: 9000
            - containerPort: 8080
            - containerPort: 443
          volumeMounts:
            - mountPath: /etc/nginx/ssl
              name: nginx-ssl
              readOnly: true
        - name: python-grpc-quran
          image: gcr.io/utopian-button-227405/python-grpc-quran:5.0
          ports:
            - containerPort: 50051
ssl-cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: quran-ssl
spec:
  domains:
    - quran.endpoints.utopian-button-227405.cloud.goog
---
apiVersion: v1
kind: Service
metadata:
  name: quran-ingress-svc
spec:
  selector:
    name: quran-ingress-svc
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quran-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 34.71.56.199
    networking.gke.io/managed-certificates: quran-ssl
spec:
  backend:
    serviceName: quran-ingress-svc
    servicePort: 80
Can you please let me know what I'm doing wrong?
Your SSL configuration is working fine, and the reason you are receiving this error is that you are using a self-signed certificate.
A self-signed certificate is a certificate that is not signed by a certificate authority (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide HTTPS services, people who visit that website will see a warning in their browser.
To solve this issue you should buy a valid certificate from a trusted CA, or use Let's Encrypt, which will give you a certificate valid for 90 days; after this period you can renew the certificate.
If you decide to buy an SSL certificate, you can follow the document you described to create a Kubernetes secret and use it in your ingress; simple as that.
But if you don't want to buy a certificate, you can install cert-manager in your cluster; it will help you generate valid certificates using Let's Encrypt.
Here is an example of how to use the cert-manager + Let's Encrypt solution to generate valid SSL certificates:
Using cert-manager with Let's Encrypt
cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.
I'm assuming you already have the NGINX ingress controller installed and working.
Pre-requisites:
- NGINX Ingress installed and working
- HELM 3.0 installed and working
cert-manager install
Note: When running on GKE (Google Kubernetes Engine), you may encounter a ‘permission denied’ error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, and as such you should ‘elevate’ your own privileges to that of a ‘cluster-admin’ before running the above command. If you have already run the above command, you should run them again after elevating your permissions:
Follow the official docs to install, or just use Helm 3.0 with the following commands:
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml
Create a ClusterIssuer for Let's Encrypt: save the content below in a new file called letsencrypt-production.yaml:
Note: Replace <EMAIL-ADDRESS> with your valid email.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  labels:
    name: letsencrypt-prod
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL-ADDRESS>
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: 'https://acme-v02.api.letsencrypt.org/directory'
Apply the configuration with:
kubectl apply -f letsencrypt-production.yaml
Install cert-manager with Let's Encrypt as a default CA:
helm install cert-manager \
--namespace cert-manager --version v0.8.1 jetstack/cert-manager \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer
Verify the installation:
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-5c6866597-zw7kh 1/1 Running 0 2m
cert-manager-cainjector-577f6d9fd7-tr77l 1/1 Running 0 2m
cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
Using cert-manager
Apply this annotation in your ingress spec:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
After applying it, cert-manager will generate the TLS certificate for the domain configured in the host field, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  rules:
    - host: myapp.domain.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: my-app
              servicePort: 80
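One detail that is easy to miss (not shown in the snippet above): for cert-manager's ingress-shim to know where to store the generated certificate, the Ingress usually also needs a tls section referencing a secret name, something like (the secret name is illustrative):
spec:
  tls:
    - hosts:
        - myapp.domain.com
      secretName: my-app-tls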

How to deploy a Kubernetes service using NodePort on Amazon AWS?

I have created a cluster on AWS EC2 using kops, consisting of a master node and two worker nodes, all with public IPv4 addresses assigned.
Now I want to create a deployment with a service using NodePort to expose the application to the public.
After creating the service, I retrieve the following information, showing that it correctly identified my three pods:
nlykkei:~/projects/k8s-examples$ kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: app=hello
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hello"},"name":"hello-svc","namespace":"default"},"spec"...
Selector: app=hello-world
Type: NodePort
IP: 100.69.62.27
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30001/TCP
Endpoints: 100.96.1.5:8080,100.96.2.3:8080,100.96.2.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
However, when I try to visit any of my public IPv4 addresses on port 30001, I get no response from the server. I have already created a security group allowing all ingress traffic to port 30001 for all of the instances.
Everything works with Docker Desktop for Mac, and there I notice the following service field, which is not present in the output above:
LoadBalancer Ingress: localhost
I've already studied https://kubernetes.io/docs/concepts/services-networking/service/ and think that NodePort should serve my needs.
Any help is appreciated!
So you want a service that can be accessed from the public internet. To achieve this I would recommend creating a ClusterIP service and then an Ingress for that service. So, saying that you have the deployment hello-world serving at 8081, you will then have the following two objects:
Service:
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  ports:
    - name: service
      port: 8081        # or whatever you want
      protocol: TCP
      targetPort: 8080  # here goes the opened port in your pods
  selector:
    app: hello-world
  type: ClusterIP
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  labels:
    app: hello-world
  name: hello-world
spec:
  rules:
    - host: hello-world.mycutedomainname.com
      http:
        paths:
          - backend:
              serviceName: hello-world
              servicePort: 8081  # or whatever you have set for the service port
            path: /
Note: the name tag in the service's port is optional.
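Assuming an ingress controller is already running in the cluster (the address below is a placeholder), you can test the rule even before setting up DNS by sending the Host header yourself:
curl -H "Host: hello-world.mycutedomainname.com" http://<ingress-controller-external-ip>/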

AWS EKS: Service(LoadBalancer) running but not responding to requests

I am using AWS EKS.
I have launched my Django app with gunicorn in a Kubernetes cluster.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
    type: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        type: web
    spec:
      containers:
        - name: vogofleet
          image: xx.xx.com/api:image2
          imagePullPolicy: Always
          env:
            - name: DATABASE_HOST
              value: "test-db-2.xx.xx.xx.xx.com"
            - name: DATABASE_PASSWORD
              value: "xxxyyyxxx"
            - name: DATABASE_USER
              value: "admin"
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_NAME
              value: "test"
          ports:
            - containerPort: 9000
I have applied these changes and I can see my pod running in kubectl get pods.
Now I am trying to expose it via a Service object. Here is my service object:
# service
---
apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  ports:
    - port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    app: api
    type: web
  type: LoadBalancer
The service is also up and running. It has given me an external IP to access the service, which is the address of the load balancer. I can see that it has launched a new load balancer in the AWS console, but I am not able to access it from the browser. It says that the address didn't return any data. The ELB shows the health check on the instances as OutOfService.
There are other pods running in the cluster. When I run printenv in those pods, here is the result:
root@consumer-9444cf7cd-4dr5z:/consumer# printenv | grep API
API_PORT_9000_TCP_ADDR=172.20.140.213
API_SERVICE_HOST=172.20.140.213
API_PORT_9000_TCP_PORT=9000
API_PORT=tcp://172.20.140.213:9000
API_PORT_9000_TCP=tcp://172.20.140.213:9000
API_PORT_9000_TCP_PROTO=tcp
API_SERVICE_PORT=9000
And I tried to check the connection to my api pod:
root@consumer-9444cf7cd-4dr5z:/consumer# telnet $API_PORT_9000_TCP_ADDR $API_PORT_9000_TCP_PORT
Trying 172.20.140.213...
telnet: Unable to connect to remote host: Connection refused
But when I do a port-forward to my localhost, I can access it on my localhost:
$ kubectl port-forward api-6d94dcb65d-br6px 9000
and check the connection,
$ nc -vz localhost 9000
found 0 associations
found 1 connections:
1: flags=82<CONNECTED,PREFERRED>
outif lo0
src ::1 port 53299
dst ::1 port 9000
rank info not available
TCP aux info available
Connection to localhost port 9000 [tcp/cslistener] succeeded!
Why am I not able to access it from other containers and from the public internet? The security groups are correct.
I have the same problem. Here's the output of the kubectl describe service command:
kubectl describe services nginx-elb
Name: nginx-elb
Namespace: default
Labels: deploy=slido
Annotations: service.beta.kubernetes.io/aws-load-balancer-internal: true
Selector: deploy=slido
Type: LoadBalancer
IP: 10.100.29.66
LoadBalancer Ingress: internal-a2d259057e6f94965bfc1f08cf86d4ce-884461987.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 3000/TCP
NodePort: http 32582/TCP
Endpoints: 192.168.60.119:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 119s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 117s service-controller Ensured load balancer

Exposing the same service with same URL but two different ports with traefik?

Recently I have been trying to set up a CI/CD flow with Kubernetes v1.7.3 and Jenkins v2.73.2 on AWS in China (the GFW blocks Docker Hub).
Right now I can expose services with traefik, but it seems I cannot expose the same service at the same URL on two different ports.
Ideally I would want to expose http://jenkins.mydomain.com as the jenkins-ui on port 80, as well as the jenkins-slave (jenkins-discovery) on port 50000.
For example, I'd want this to work:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: default
spec:
  rules:
    - host: jenkins.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 80
    - host: jenkins.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins-svc
              servicePort: 50000
and my jenkins-svc is defined as
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  ports:
    - port: 80
      targetPort: 8080
      name: http
    - port: 50000
      targetPort: 50000
      name: slave
In reality, the latter rule overwrites the former rule.
Furthermore, there are two plugins I have tried: kubernetes-cloud and kubernetes.
With the former I cannot configure the jenkins-tunnel URL, so the slave fails to connect to the master; with the latter I cannot pull from a private Docker registry such as AWS ECR (there is no place to provide credentials), and therefore I am not able to create the slave (imagePullError).
Lastly, I am really just trying to get Jenkins to work (create slaves with my custom image, build with slaves, and delete slaves after jobs finish); any other solution is welcome.
If you want your Jenkins to be reachable from outside of your cluster, then you need to change your service configuration.
The default service type is ClusterIP:
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
You want its type to be NodePort:
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So your service should look like:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
  labels:
    run: jenkins
spec:
  selector:
    run: jenkins
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      name: http
    - port: 50000
      targetPort: 50000
      name: slave
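Once applied, Kubernetes assigns a node port for each entry; you can look them up with kubectl (the output values below are illustrative) and then reach the UI at http://<node-ip>:<http-node-port>:
$ kubectl get svc jenkins-svc
NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                        AGE
jenkins-svc   NodePort   10.0.171.239   <none>        80:30080/TCP,50000:31250/TCP   5m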