Isn't it possible to use k8s secret objects to store certificates?
In the doc (https://istio.io/docs/reference/config/istio.networking.v1alpha3/):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-tls-ingress
spec:
  selector:
    app: my-tls-ingress-gateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      serverCertificate: /etc/certs/server.pem
      privateKey: /etc/certs/privatekey.pem
The serverCertificate doc says:
REQUIRED if mode is SIMPLE or MUTUAL. The path to the file holding the
server-side TLS certificate to use.
So it seems it is not possible to use k8s secrets to store the certificates, but a fixed path (on the worker node?) is needed. Is this right?
Thank you
No, it is not right.
That path is a path inside the Istio proxies.
For example, the Istio chart installation creates a default ingress proxy: if you list the deployments there is one called istio-ingressgateway.
If you edit/describe it you get:
template:
  metadata:
    ....
    labels:
      istio: ingressgateway
    ....
      volumeMounts:
      - mountPath: /etc/certs
        name: istio-certs
        readOnly: true
      - mountPath: /etc/istio/ingressgateway-certs
        name: ingressgateway-certs
        readOnly: true
    ....
    volumes:
    - name: istio-certs
      secret:
        defaultMode: 420
        optional: true
        secretName: istio.default
    - name: ingressgateway-certs
      secret:
        defaultMode: 420
        optional: true
        secretName: istio-ingressgateway-certs
Now, in the Istio Gateway objects, you bind your Gateway to a proxy:
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
The certificates live inside that proxy, and they come from Secret objects: as you can see in the example I pasted, the volumes section uses Kubernetes Secret objects.
You may want to read https://istio.io/docs/tasks/traffic-management/ingress/#add-a-secure-port-https-to-our-gateway
Create the secret istio-ingressgateway-certs in namespace istio-system using kubectl. The Istio gateway will automatically load the secret
The secret MUST be called istio-ingressgateway-certs in the istio-system namespace, or it will not be mounted and available to the Istio gateway.
The location of the certificate and the private key MUST be /etc/istio/ingressgateway-certs, or the gateway will fail to load them.
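For example, following that task, you could create the secret and point the Gateway at the mounted files like this (the key/cert file names below are placeholders; a kubectl tls secret is mounted as tls.crt and tls.key):
kubectl create -n istio-system secret tls istio-ingressgateway-certs \
  --key privatekey.pem --cert server.pem
And in the Gateway:
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key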
EDIT: https://istio.io/docs/tasks/traffic-management/secure-ingress/#configure-a-tls-ingress-gateway contains the phrases quoted above. The documentation has likely been updated, making the above link less relevant.
I'm trying to understand Istio configuration model but the more I read the more I get confused, especially around the hosts and host fields. In their examples, they all use the same short name and I'm not sure whether they mean the virtual service name, the Kubernetes service hostname or the dns service address.
Assuming I have the following configuration:
My Kubernetes project namespace is called poc-my-ns
Inside poc-my-ns I have my pods (both version 1 and 2), a Kubernetes route and a Kubernetes service.
The service hostname is: poc-my-ns.svc.cluster.local and the route is https://poc-my-ns.orgdevcloudapps911.myorg.org.
Everything is up and running, and the service selector gets all pods from all versions as it should. (The Istio virtual service is supposed to do the final selection by version.)
The intended Istio configuration looks like that:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local # ???
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local # ???
  http:
  - route:
    - destination:
        host: poc-my-dr # ???
        subset: v1
      weight: 70
    - destination:
        host: poc-my-dr # ???
        subset: v2
      weight: 30
My questions are:
Does the destination rule spec/host refer to the Kubernetes service hostname?
Does the virtual service spec/hosts refer to the Kubernetes service hostname, to the route https://poc-my-ns.orgdevcloudapps911.myorg.org, or to something else?
Does the virtual service spec/http/route/destination/host refer to the destination rule name, is it supposed to point to the Kubernetes service hostname, or should it be the virtual service metadata/name?
I will really appreciate clarifications.
The VirtualService and DestinationRule basically configure the envoy-proxy of the istio mesh. The VirtualService defines where to route the traffic to and the DestinationRule defines what to additionally do with the traffic.
For the VS the spec.hosts list can contain kubernetes internal and external hosts.
Say you want to define how traffic for api.example.com coming from outside the kubernetes cluster is routed through the istio-ingressgateway my-gateway into the mesh. It should be routed to the rating app in the store namespace, so the VS would look like this:
spec:
  hosts:
  - api.example.com # external host
  gateways:
  - my-gateway # the ingress-gateway
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # kubernetes service
If you want to define how cluster/mesh internal traffic is routed, you set rating.store.svc.cluster.local in the spec.hosts list and define the mesh gateway (or leave it out like you did, because mesh is the default) and route it to the rating.store.svc.cluster.local service. You also add a DR where you define subsets and route all mesh internal traffic to subset v1.
# VS
[...]
spec:
  hosts:
  - rating.store.svc.cluster.local # cluster internal host
  gateways:
  - mesh # mesh internal gateway (default when omitted)
  http:
  - [...]
    route:
    - destination:
        host: rating.store.svc.cluster.local # cluster internal host
        subset: v1 # defined in destinationrule below
---
# DR
[...]
spec:
  host: rating.store.svc.cluster.local # cluster internal host
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
But it could also be that you want to route traffic to a cluster external destination. In that case destination.host would be an external fqdn, like in this example from docs:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wikipedia
spec:
  hosts:
  - wikipedia.org
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: example-http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-wiki-rule
spec:
  hosts:
  - wikipedia.org
  http:
  - timeout: 5s
    route:
    - destination:
        host: wikipedia.org
Think about it as "I want to route traffic from HOST_FROM to HOST_TO", where
- HOST_FROM is spec.hosts (VirtualService) / spec.host (DestinationRule)
- HOST_TO is destination.host
and both can be inside the kubernetes cluster or outside.
So to answer all your questions: it depends. For cluster internal traffic you use the Kubernetes service fqdn everywhere: in the DestinationRule spec.host, in the VirtualService spec.hosts and in destination.host (destination.host is never the DestinationRule name or the VirtualService name). For cluster external traffic you use the external target fqdn.
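Applied to the config in the question, a sketch would look like this (keeping the service hostname exactly as stated in the question):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: poc-my-dr
spec:
  host: poc-my-ns.svc.cluster.local        # the kubernetes service fqdn
  subsets:
  - name: v1
    labels:
      version: 1.0
  - name: v2
    labels:
      version: 2.0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: poc-my-vs
spec:
  hosts:
  - poc-my-ns.svc.cluster.local            # the kubernetes service fqdn, not the route
  http:
  - route:
    - destination:
        host: poc-my-ns.svc.cluster.local  # again the service fqdn, not the DestinationRule name
        subset: v1
      weight: 70
    - destination:
        host: poc-my-ns.svc.cluster.local
        subset: v2
      weight: 30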
I highly recommend reading through the docs of VirtualService and DestinationRule where you can see several examples with explanations.
I created an API on GKE using Cloud Endpoints. It is working fine without HTTPS; you can try it here: API without Https.
I followed the instructions mentioned here: Enabling SSL for cloud endpoint. After setting up everything mentioned on that page I'm able to access my endpoints with HTTPS, but with a warning:
Your connection is not private - Back to Safety (Chrome)
Check it here: API with Https.
Can you please let me know what I'm missing?
Update
I'm using Google-managed SSL certificates for cloud endpoints in GKE.
I followed the steps mentioned in this doc but was not able to successfully add the SSL certificate.
When I go in my cloud console I see
Some backend services are in UNKNOWN state
Here are my deployment YAMLs:
deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: quran-grpc
spec:
  ports:
  - port: 81
    targetPort: 9000
    protocol: TCP
    name: rpc
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    app: quran-grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quran-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: quran-grpc
  template:
    metadata:
      labels:
        app: quran-grpc
    spec:
      volumes:
      - name: nginx-ssl
        secret:
          secretName: nginx-ssl
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http_port=8080",
          "--ssl_port=443",
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=quran.endpoints.utopian-button-227405.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
        - containerPort: 8080
        - containerPort: 443
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: nginx-ssl
          readOnly: true
      - name: python-grpc-quran
        image: gcr.io/utopian-button-227405/python-grpc-quran:5.0
        ports:
        - containerPort: 50051
ssl-cert.yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: quran-ssl
spec:
  domains:
  - quran.endpoints.utopian-button-227405.cloud.goog
---
apiVersion: v1
kind: Service
metadata:
  name: quran-ingress-svc
spec:
  selector:
    name: quran-ingress-svc
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: quran-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: 34.71.56.199
    networking.gke.io/managed-certificates: quran-ssl
spec:
  backend:
    serviceName: quran-ingress-svc
    servicePort: 80
Can you please let me know what I'm doing wrong?
Your SSL configuration is working fine, and the reason you are receiving this error is that you are using a self-signed certificate.
A self-signed certificate is a certificate that is not signed by a certificate authority (CA). These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a CA aim to provide. For instance, when a website owner uses a self-signed certificate to provide HTTPS services, people who visit that website will see a warning in their browser.
To solve this issue you should buy a valid certificate from a trusted CA, or use Let's Encrypt, which will give you a certificate valid for 90 days; after this period you can renew it.
If you decide to buy an SSL certificate, you can follow the document you described to create a Kubernetes secret and use it in your ingress, simple as that.
But if you don't want to buy a certificate, you could install cert-manager in your cluster; it will help you generate valid certificates using Let's Encrypt.
Here is an example of how to use cert-manager + Let's Encrypt solution to generate valid SSL certificates:
Using cert-manager with Let's Encrypt
cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This makes it possible to provide 'certificates as a service' to developers working within your Kubernetes cluster.
Let's Encrypt is a non-profit certificate authority run by Internet Security Research Group that provides X.509 certificates for Transport Layer Security encryption at no charge. The certificate is valid for 90 days, during which renewal can take place at any time.
I'm supposing you already have NGINX ingress installed and working.
Pre-requisites:
- NGINX Ingress installed and working
- HELM 3.0 installed and working
cert-manager install
Note: When running on GKE (Google Kubernetes Engine), you may encounter a ‘permission denied’ error when creating some of these resources. This is a nuance of the way GKE handles RBAC and IAM permissions, so you should ‘elevate’ your own privileges to those of a ‘cluster-admin’ before running the commands below. If you have already run them, run them again after elevating your permissions.
Follow the official docs to install, or just use HELM 3.0 with the following commands:
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager-legacy.crds.yaml
Create a ClusterIssuer for Let's Encrypt: save the content below in a new file called letsencrypt-production.yaml:
Note: Replace <EMAIL-ADDRESS> with your valid email.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  labels:
    name: letsencrypt-prod
  name: letsencrypt-prod
spec:
  acme:
    email: <EMAIL-ADDRESS>
    http01: {}
    privateKeySecretRef:
      name: letsencrypt-prod
    server: 'https://acme-v02.api.letsencrypt.org/directory'
Apply the configuration with:
kubectl apply -f letsencrypt-production.yaml
Install cert-manager with Let's Encrypt as a default CA:
helm install cert-manager \
--namespace cert-manager --version v0.8.1 jetstack/cert-manager \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer
Verify the installation:
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-5c6866597-zw7kh 1/1 Running 0 2m
cert-manager-cainjector-577f6d9fd7-tr77l 1/1 Running 0 2m
cert-manager-webhook-787858fcdb-nlzsq 1/1 Running 0 2m
Using cert-manager
Apply this annotation in your ingress spec:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
After applying it, cert-manager will generate the TLS certificate for the domain configured in host:, like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:                        # ingress-shim needs this block; the secret name here is illustrative
  - hosts:
    - myapp.domain.com
    secretName: myapp-tls
  rules:
  - host: myapp.domain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: my-app
          servicePort: 80
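You can then check whether the certificate was actually issued with something like the commands below (with ingress-shim the Certificate resource is usually named after the secretName from the tls block, myapp-tls in the sketch above):
kubectl get certificate -n default
kubectl describe certificate myapp-tls -n default
Once the certificate reports as ready (the exact column and condition names vary a bit between cert-manager versions), the ingress will start serving the Let's Encrypt certificate for that host.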
I am trying to implement SonarQube in a Kubernetes cluster. The deployment is running properly and is also exposed via a VirtualService. I am able to open the UI via localhost:port/sonar, but I am not able to access it through my external IP. I understand that sonar binds to localhost and does not allow access from outside the remote server. I am running this on GKE with a MySQL database. Here is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sonarqube
  namespace: sonar
  labels:
    service: sonarqube
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      name: sonarqube
      labels:
        name: sonarqube
    spec:
      terminationGracePeriodSeconds: 15
      initContainers:
      - name: volume-permission
        image: busybox
        command:
        - sh
        - -c
        - sysctl -w vm.max_map_count=262144
        securityContext:
          privileged: true
      containers:
      - name: sonarqube
        image: sonarqube:6.7
        resources:
          limits:
            memory: 4Gi
            cpu: 2
          requests:
            memory: 2Gi
            cpu: 1
        args:
        - -Dsonar.web.context=/sonar
        - -Dsonar.web.host=0.0.0.0
        env:
        - name: SONARQUBE_JDBC_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: SONARQUBE_JDBC_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: SONARQUBE_JDBC_URL
          value: jdbc:mysql://***.***.**.*:3306/sonar?useUnicode=true&characterEncoding=utf8
        ports:
        - containerPort: 9000
          name: sonarqube-port
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: sonarqube
    version: v1
  name: sonarqube
  namespace: sonar
spec:
  selector:
    name: sonarqube
  ports:
  - name: http
    port: 80
    targetPort: sonarqube-port
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-internal
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.internal
  - sonarqube
  gateways:
  - default/ilb-gateway
  - mesh
  http:
  - route:
    - destination:
        host: sonarqube
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube-external
  namespace: sonar
spec:
  hosts:
  - sonarqube.staging.jeet11.com
  gateways:
  - default/elb-gateway
  http:
  - route:
    - destination:
        host: sonarqube
---
The deployment completes successfully. My exposed service gives a public IP that has been mapped to the host URL, but I am unable to access the service at that URL.
I need to change the mapping such that sonar binds to the server IP, but I am unable to understand how to do that. I cannot bind it to my cluster IP, nor to my internal or external service IP.
What should I do? Please help!
I had the same issue recently and I managed to get it resolved today.
I hope the following solution will work for anyone facing the same issue!
Environment
Cloud Provider: Azure - AKS
This should work regardless of which provider you use.
Istio Version: 1.7.3
K8 Version: 1.16.10
Tools - Debugging
kubectl logs -n istio-system -l app=istiod
logs from Istiod and events happening in the control plane.
istioctl analyze -n <namespace>
This generally gives you any warnings and errors for a given namespace.
Lets you know if things are misconfigured.
Kiali - istioctl dashboard kiali
See if you are getting inbound traffic.
Also, shows you any misconfigurations.
Prometheus - istioctl dashboard prometheus
query metric - istio_requests_total. This shows you the traffic going into the service.
If there's any misconfiguration you will see the destination_app as unknown.
Issue
Unable to access sonarqube UI via external IP, but accessible via localhost (port-forward).
Unable to route traffic via Istio Ingressgateway.
Solution
Sonarqube Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    name: sonarqube
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 9000
    targetPort: 9000
  selector:
    app: sonarqube
status:
  loadBalancer: {}
Your targetPort is the container port. To avoid any confusion, just set the service port number to the same value as the service targetPort.
The port name is very important here. “Istio requires the service ports to follow the naming form of ‘protocol-suffix’ where the ‘-suffix’ part is optional” - KIA0601 - Port name must follow <protocol>[-suffix] form
Istio Gateway and VirtualService manifest for sonarqube
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9000
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - route:
    - destination:
        host: sonarqube
        port:
          number: 9000
Gateway protocol must be set to HTTP.
The Gateway server port and the VirtualService destination port are the same here. If your app Service port is different, then your VirtualService destination port number should match the app Service port, and the Gateway server port should match the app Service targetPort.
Now comes the fun bit: the hosts. If you want to access the service from outside the cluster, then you need your host name (whatever host name you want to map to the sonarqube server) as a DNS A record pointing to the external public IP address of the istio-ingressgateway.
To get the EXTERNAL-IP address of the ingressgateway, run kubectl -n istio-system get service istio-ingressgateway.
If you do a simple nslookup (run nslookup <hostname>), the IP address you get must match the IP address assigned to the istio-ingressgateway service.
Expose a new port in the ingressgateway
Note that your sonarqube gateway port is a new port that you are introducing to Kubernetes, and you’re telling the cluster to listen on that port. But your load balancer doesn’t know about this port, therefore you need to open the specified gateway port on your Kubernetes external load balancer.
You don’t need to manually change your load balancer service. You just need to update the ingress gateway to include the new port, which will update the load balancer automatically.
You can identify if the port is causing issues by running istioctl analyze -n sonarqube. You should get the following warning;
Warn [IST0104] (Gateway sonarqube-gateway.sonarqube) The gateway refers to a port that is not exposed on the workload (pod selector istio=ingressgateway; port 9000). Error: Analyzers found issues when analyzing namespace: sonarqube. See https://istio.io/docs/reference/config/analysis for more information about causes and resolutions.
You should get the corresponding error in the control plane. Run kubectl logs -n istio-system -l app=istiod.
At this point you need to update the Istio ingressgateway service to expose the new port. Run kubectl edit svc istio-ingressgateway -n istio-system and add the following section to the ports.
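A sketch of the kind of entry to add under spec.ports of the istio-ingressgateway service (the port name is illustrative; the number must match your Gateway server port, 9000 here):
  - name: http-sonarqube
    port: 9000
    protocol: TCP
    targetPort: 9000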
Bypass creating a new port
In the previous section you saw how to expose a new port. This is optional and depends on your use case.
In this section you will see how to use a port that is already exposed.
If you look at the service of the istio-ingressgateway, you can see that there are default ports exposed. Here we are going to use port 80.
Your setup will then look like the sketch below.
To avoid specifying the port with your host name, just add a match uri prefix, as shown in the virtualservice part of the sketch.
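A sketch of the same Gateway and VirtualService reusing the ingressgateway's default port 80 (host name and namespace as in the manifests above):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sonarqube-gateway
  namespace: sonarqube
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80            # already exposed on the default istio-ingressgateway
      name: http
      protocol: HTTP
    hosts:
    - "XXXX.XXXX.com.au"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: sonarqube
  namespace: sonarqube
spec:
  hosts:
  - "XXXX.XXXX.com.au"
  gateways:
  - sonarqube-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: sonarqube
        port:
          number: 9000      # the sonarqube Service port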
Time for testing
If everything works up to this point as expected, then you are good to go.
During testing I made one mistake by not specifying the port. If you get a 404 status, that is still a good thing: this way you can verify which server it is using. If you set things up correctly, it should be the istio-envoy server, not nginx.
Without specifying the port, this will only work if you add the match uri prefix.
Do not pass the argument; just try running without it once, that worked for me.
This is my deployment file, hope it is helpful:
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-service
spec:
  selector:
    app: sonarqube
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: sonarqube
  name: sonarqube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
      - name: sonarqube
        image: sonarqube:7.1
        resources:
          requests:
            memory: "1200Mi"
            cpu: .10
          limits:
            memory: "2500Mi"
            cpu: .50
        volumeMounts:
        - mountPath: "/opt/sonarqube/data/"
          name: sonar-data
        - mountPath: "/opt/sonarqube/extensions/"
          name: sonar-extensions
        env:
        - name: "SONARQUBE_JDBC_USERNAME"
          value: "root" #Put your db username
        - name: "SONARQUBE_JDBC_URL"
          value: "jdbc:mysql://192.168.112.4:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true" #DB URL
        - name: "SONARQUBE_JDBC_PASSWORD"
          value: password
        ports:
        - containerPort: 9000
          protocol: TCP
      volumes:
      - name: sonar-data
        persistentVolumeClaim:
          claimName: sonar-data
      - name: sonar-extensions
        persistentVolumeClaim:
          claimName: sonar-extensions
We have a cluster of several nodes, so I can't do a NodePort and just go to my node IP (which is what I've done for testing prometheus).
I did a helm install of stable/prometheus and stable/grafana in the "monitoring" namespace.
Everything looks okay so far.
Then I'm trying to create an LB service to access Grafana, which gets created; I can see the CNAME pointing to the A record for the ELB at AWS, but when accessing the URL of Grafana nothing happens: no HTTP error, no problem page, nothing.
Here's the service-elb.yaml:
apiVersion: v1
kind: Service
metadata:
  name: grafana-lb
  namespace: monitoring
  labels:
    app: grafana
  annotations:
    dns.alpha.kubernetes.io/external: grafana-testing.country.ourdomain
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  selector:
    app: grafana
    tier: frontend
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 80
  - name: http
    port: 80
    targetPort: 3000
  loadBalancerSourceRanges:
  - somerange
  - someotherrange
  - etc etc
BTW, I got a permissions error regarding the serviceaccount if I don't create the chart with --set rbac.create=false.
I recently used an nginx proxy pass for Kibana and also used an LB service similar to this with no issue. But I'm missing something here and can't find out what it is yet.
Any help will be much appreciated. I'll update if I make it work.
Solved, had to remove the "tier" selector and just use a spec like this:
spec:
  selector:
    app: grafana
  type: LoadBalancer
  ports:
  - name: http
    port: 3000
I am new to Kops and a bit new to Kubernetes as well. I managed to create a cluster with Kops and run a deployment and a service on it. Everything went well, an ELB was created for me, and I could access the application via this ELB endpoint.
My question is: how can I map my subdomain (e.g. my-sub.example.com) to the generated ELB endpoint? I believe this should somehow be done automatically by Kubernetes and I should not hardcode the ELB endpoint inside my code. I tried something that has to do with the annotation -> DomainName, but it did not work (see the kubernetes yml file below).
apiVersion: v1
kind: Service
metadata:
  name: django-app-service
  labels:
    role: web
    dns: route53
  annotations:
    domainName: "my.personal-site.de"
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: django-app-deployment
spec:
  replicas: 2
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: django-app
    spec:
      containers:
      - image: fettah/djano_kubernetes_medium:latest
        name: django-app
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
When you have ELBs in place you can use the external-dns plugin (https://github.com/kubernetes-incubator/external-dns), which can attach DNS records to those ELBs using its AWS Route53 integration. You need to give Kubernetes the proper rights so it can create DNS records in Route53: add an additional policy (according to the guide in the external-dns plugin) in the additionalPolicies section of your kops cluster configuration, roughly as sketched below.
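A sketch of that additionalPolicies entry, assuming external-dns runs on the worker nodes (the actions follow the external-dns AWS tutorial; scope the hosted zone resource to your own zone if you prefer):
additionalPolicies:
  node: |
    [
      {
        "Effect": "Allow",
        "Action": ["route53:ChangeResourceRecordSets"],
        "Resource": ["arn:aws:route53:::hostedzone/*"]
      },
      {
        "Effect": "Allow",
        "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
        "Resource": ["*"]
      }
    ]
Then use an annotation like: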
external-dns.alpha.kubernetes.io/hostname: myservice.mydomain.com.
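For example, applied to the Service from the question, the annotation goes under metadata.annotations (the hostname here is just the subdomain used as an example in the question):
apiVersion: v1
kind: Service
metadata:
  name: django-app-service
  labels:
    role: web
    dns: route53
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-sub.example.com.
spec:
  type: LoadBalancer
  selector:
    app: django-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000
Once external-dns is running with those rights, it will create the Route53 record pointing at the ELB for you.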