I have already set up a Google Cloud Endpoints project and can invoke HTTP/HTTPS requests. Endpoints gives me the MY_API.endpoints.MY_PROJECT.cloud.goog domain name that I can use. I'm using gRPC Cloud Endpoints with the HTTP/JSON-to-gRPC transcoding feature.
It is deployed on Google Kubernetes Engine (the deployment YAML is attached at the end).
When I try to create a push subscription with that URL, I get the following error:
"The supplied HTTP URL is not registered in the subscription's parent
project (url="https://MY_API.endpoints.MY_PROJECT.cloud.goog/v1/path", project_id="PROJECT_ID").
My gcloud call:
gcloud pubsub subscriptions create SUB_NAME --topic=projects/MY_PROJECT/topics/MY_TOPIC --push-endpoint="https://MY_API.endpoints.MY_PROJECT.cloud.goog/v1/path"
I tried creating a Cloud DNS public zone with that DNS name and setting the corresponding records, but I still can't verify ownership in Google Search Console.
The question is: how can I set a DNS TXT record for the MY_API.endpoints.MY_PROJECT.cloud.goog domain to verify ownership? Or how can I use a Pub/Sub push subscription with Cloud Endpoints gRPC in some other way?
I could verify domain ownership if I were able to change the meta tags or headers of the gRPC responses converted to HTTP, but I doubt there is a way.
The Kubernetes manifest I used for deployment (in case it is helpful):
apiVersion: v1
kind: Service
metadata:
  name: GKE_SERVICE_NAME
spec:
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http2
  selector:
    app: GKE_SERVICE_NAME
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: GKE_SERVICE_NAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: GKE_SERVICE_NAME
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--service=MY_API.endpoints.MY_PROJECT.cloud.goog",
          "--rollout_strategy=managed",
          "--backend=grpc://127.0.0.1:50051"
        ]
        ports:
        - containerPort: 9000
      - name: MY_CONTAINER_NAME
        image: gcr.io/MY_PROJECT/IMAGE_NAME:v1
        ports:
        - containerPort: 50051
Ultimately, your goal is to get Cloud Pub/Sub pushing to your container on GKE. There are a couple of ways to do this:
Domain ownership validation, as you've discovered:
You can try to do it with DNS, and there's a guide for configuring DNS for a cloud.goog domain.
You can try to do it with one of the non-DNS alternatives, which include methods such as hosting certain kinds of HTML or JavaScript snippets on the domain. This can be tricky, though, as I don't know of a way to make Cloud Endpoints serve static HTML or JavaScript content; it serves the JSON responses described by your OpenAPI definition.
Have you tried putting the Cloud Pub/Sub subscription and the cloud.goog domain in the same project? It might already be considered a verified domain in that case.
Since you are already using Google Kubernetes Engine, use either Cloud Run or Cloud Run on GKE. There is a difference between the two, but both will run your containers. Push endpoints on Cloud Run don't require domain ownership validation (I'm not sure whether this also covers Cloud Run on GKE). You may get other interesting benefits as well, since Cloud Run is essentially designed for the very use case of serving a push endpoint from a container; for example, it will do autoscaling and monitoring for you.
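For illustration, a minimal sketch of the kind of Knative Service that Cloud Run on GKE runs, reusing the placeholder image name from your deployment (the service name and port here are assumptions, not taken from the question):
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: my-push-endpoint          # hypothetical name
spec:
  template:
    spec:
      containers:
      - image: gcr.io/MY_PROJECT/IMAGE_NAME:v1   # placeholder image from the question
        ports:
        - containerPort: 8080     # assumes the container serves HTTP/JSON on 8080
The resulting service URL could then presumably be used directly as the --push-endpoint of the subscription.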
Related
I have multiple deployments of an RDP application running, and they are all exposed with a ClusterIP Service. I have the nginx-ingress controller in my k8s cluster; to allow TCP I added the --tcp-services-configmap flag to the nginx-ingress controller deployment and also created a ConfigMap for it, shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3389: "demo/rdp-service1:3389"
This exposes the "rdp-service1" Service. I have 10 more such Services that need to be exposed on the same port number, but if I add another Service to the same ConfigMap, like this:
...
data:
  3389: "demo/rdp-service1:3389"
  3389: "demo/rdp-service2:3389"
then it removes the previous Service's data. And since I have also deployed external-dns in k8s, all the records created by the Ingress using host: ... start pointing to the deployment attached to the newly added Service in the ConfigMap.
Now my final requirement is: as soon as I append the rule for a newly created deployment (RDP application) to the Ingress, it should start allowing TCP connections for it. Is there any way to achieve this? Or is there any other ingress controller available that can handle this type of use case and can also easily be integrated with external-dns?
Note: I am using an AWS EKS cluster and Route53 with external-dns.
Posting this answer as a community wiki to explain some of the topics in the question as well as hopefully point to the solution.
Feel free to expand/edit it.
NGINX Ingress's main responsibility is to forward HTTP/HTTPS traffic. With the addition of tcp-services/udp-services it can also forward TCP/UDP traffic to the respective endpoints:
Kubernetes.github.io: Ingress nginx: User guide: Exposing tcp udp services
The main issue is that Host-based routing for an Ingress resource in Kubernetes specifically targets HTTP/HTTPS traffic, not TCP (RDP).
You could achieve the following scenario:
Ingress controller:
3389 - RDP Deployment #1
3390 - RDP Deployment #2
3391 - RDP Deployment #3
Here there would be no Host-based routing; it would be more like port-forwarding (see the ConfigMap sketch after the side note below).
A side note!
This setup would also depend on the ability of the LoadBalancer to allocate ports (which could be limited by the cloud provider's specifications).
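A minimal sketch of a tcp-services ConfigMap for that scenario, assuming the Services live in the demo namespace as in the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port: "namespace/service:port" - one unique external port per Service
  3389: "demo/rdp-service1:3389"
  3390: "demo/rdp-service2:3389"
  3391: "demo/rdp-service3:3389"
You would also have to expose ports 3390 and 3391 on the ingress controller's Service for the traffic to reach it.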
As for possible solutions, which may not be so straightforward, I would take a look at the following resources:
Stackoverflow.com: Questions: Nginx TCP forwarding based on hostname
Doc.traefik.io: Traefik: Routing: Routers: Configuring TCP routers
Github.com: Bolkedebruin: Rdpgw
I'd also check following links:
Aws.amazon.com: Quickstart: Architecture: Rd gateway - AWS specific
Docs.konghq.com: Kubernetes ingress controller: 1.2.X: Guides: Using tcpingress
Haproxy:
Haproxy.com: Documentation: Aloha: 12-0: Deployment guides: Remote desktop: RDP gateway
Haproxy.com: Documentation: Aloha: 10-5: Deployment guides: Remote desktop
Haproxy.com: Blog: Microsoft remote desktop services rds load balancing and protection
Actually, I don't really know why you are using that ConfigMap.
To my knowledge, the nginx-ingress controller routes traffic coming in on the same port based on the host. So if you want to expose your applications on the same port, try using this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Chart.Name }}-ingress
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: your-hostname
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: {{ .Chart.Name }}-service
          servicePort: {{ .Values.service.nodeport.port }}
Looking at your requirement, though, I feel that you need a LoadBalancer rather than an Ingress.
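For example, a per-deployment LoadBalancer Service for RDP could be sketched as follows (the name and label are placeholders based on the question, and each such Service would get its own external address from the cloud provider):
apiVersion: v1
kind: Service
metadata:
  name: rdp-service1-lb          # hypothetical name
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: rdp-app1                # assumed label on the RDP deployment's pods
  ports:
  - name: rdp
    protocol: TCP
    port: 3389
    targetPort: 3389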
Initially, I deployed my frontend web application and all the backend APIs in AWS ECS. Each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables). The application should first look for the backend API inside the local cluster through service discovery; if the backend API doesn't exist in the cluster, it should connect to the external service, i.e. the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How do I implement this in Kubernetes?
http://backendapi.learning.com --> backend API deployed in the pod --> if not present --> backend API deployed in ECS
.env
BACKEND_API_URL = http://backendapi.learning.com
One example from the code where the frontend calls the backend API:
export const ping = async _ => {
  const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
  const json = await res.json();
  return json;
}
Assuming that your setup:
is based on a microservices architecture,
has its applications (frontend and backend) Dockerized and deployed in a Kubernetes cluster,
has applications capable of running on top of Kubernetes,
etc.,
you can configure your Kubernetes cluster (minikube instance) to relay your requests to different locations by using Services.
Service
In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.
Some of the types of Services are following:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
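As an aside, an ExternalName Service is the most direct way to point an in-cluster name at an external host; a hedged sketch, using the ECS hostname from the question (the answer below takes a different approach):
apiVersion: v1
kind: Service
metadata:
  name: backendapi               # assumed in-cluster name
spec:
  type: ExternalName
  externalName: backendapi.learning.com   # returns a CNAME to the ECS endpoint
Note that this always points outside the cluster; it does not give you the local-first fallback behavior you asked about.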
You can use a Headless Service with selectors and dnsConfig (in the Deployment manifest) to achieve the setup referenced in your question.
Let me explain more:
Example
Let's assume that you have a backend:
nginx-one - located both inside and outside the cluster
Your frontend manifest, in its most basic form, should look like the following:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Taking a specific look at:
dnsConfig: # <--- IMPORTANT
  searches:
  - DOMAIN.NAME
Dissecting the above part:
dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
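To make the mechanism concrete: with that dnsConfig, the pod's /etc/resolv.conf would end up looking roughly like the following (a sketch; the nameserver IP and base search list depend on cluster defaults):
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local DOMAIN.NAME
options ndots:5
An unqualified lookup of nginx-one therefore tries the cluster-internal domains first and falls back to nginx-one.DOMAIN.NAME last.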
As for the Services for your backends.
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-one
spec:
  clusterIP: None # <-- IMPORTANT
  selector:
    app: nginx-one
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
The above Service tells your frontend that one of your backends (nginx) is available through a Headless Service (why it's Headless will come in handy later!). By default, you can communicate with it by:
service-name (nginx-one)
service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally
Connecting to your backend
Assuming that you are sending the request using curl (for simplicity) from the frontend to the backend, you will encounter a specific order of DNS resolution:
check the DNS record inside the cluster
check the DNS record specified in dnsConfig
The specifics of connecting to your backend are as follows:
If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not a ClusterIP).
If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then opt to use the DOMAIN.NAME from the dnsConfig (outside of minikube).
If there is no Service associated with the specific backend (nginx-one), the DNS resolution will use the DOMAIN.NAME from the dnsConfig, searching for it outside of the cluster.
A side note!
The Headless Service with a selector comes into play here, as its intention is to point directly to the Pod's IP and not a ClusterIP (which exists as long as the Service exists). If you used a "normal" Service, you would always try to communicate with the ClusterIP, even if there were no Pods matching the selector. By using a headless one, if there is no Pod, the DNS resolution will look further down the line (to external sources).
Additional resources:
Minikube.sigs.k8s.io: Docs: Start
Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints
EDIT:
You could also take a look at alternative options:
Alternative option 1:
Use the rewrite plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local.
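A hedged sketch of the coredns ConfigMap (kube-system namespace) with such a rewrite rule added; the rest of the Corefile varies per cluster, so only a typical layout is shown:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # rewrite the external name to the in-cluster Service name
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }
Note that this rewrite is unconditional, so you would lose the local-first/external-fallback behavior.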
Alternative option 2:
Add hostAliases to the Frontend Pod
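A minimal hostAliases sketch (the IP is a placeholder for the external endpoint's address; note that hostAliases requires a static IP, not a hostname):
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  hostAliases:
  - ip: "203.0.113.10"             # placeholder external IP
    hostnames:
    - "backendapi.learning.com"    # resolves to the IP above inside this Pod
  containers:
  - name: frontend
    image: frontend-image          # placeholder image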
You can also use ConfigMaps to reuse .env files.
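For example (hedged; the ConfigMap name and image are assumptions), the .env values could live in a ConfigMap and be injected with envFrom:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-env               # assumed name
data:
  BACKEND_API_URL: "http://backendapi.learning.com"
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: frontend
    image: frontend-image          # placeholder image
    envFrom:
    - configMapRef:
        name: frontend-env         # exposes BACKEND_API_URL as an env var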
I successfully deployed a simple Voila dashboard using Google Cloud Run for Anthos. However, since I created the deployment using a GitLab CI pipeline, the service was by default assigned a long and obscure domain name (e.g. http://sudoku.dashboards-19751688-sudoku.k8s.proteinsolver.org/).
I followed the instructions in mapping custom domains to map a shorter custom domain to the service described above (e.g. http://sudoku.k8s.proteinsolver.org). However, while the static assets load fine from this new custom domain, the interactive dashboard does not load, and the JavaScript console is populated with errors:
default.js:64 WebSocket connection to 'wss://sudoku.k8s.proteinsolver.org/api/kernels/5bcab8b9-11d5-4de0-8a64-399e35258aa1/channels?session_id=7a0eed38-77bb-40e8-ad77-d05632b5fa1b' failed: Error during WebSocket handshake: Unexpected response code: 503
_createSocket # scheduler.production.min.js:10
[...]
Is there a way to get web sockets to work with custom domains? Am I doing something wrong?
TL;DR: the following YAML needs to be applied to make websockets work:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: allowconnect-cluster-local-gateway
  namespace: gke-system
spec:
  workloadSelector:
    labels:
      app: cluster-local-gateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        portNumber: 80
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager"
          http2_protocol_options:
            allow_connect: true
Here is the explanation.
For the custom domain feature, the request path is:
client ---> istio-ingress envoy pods ---> cluster-local-gateway envoy pods ---> user's application
Specifically, for websocket requests, the cluster-local-gateway envoy pods need to support the extended CONNECT feature.
The EnvoyFilter YAML above enables the extended CONNECT feature by setting allow_connect to true within the cluster-local-gateway pods.
I tried it myself, and it works for me.
I don't know anything about your GitLab CI pipeline. By default, Knative (Cloud Run for Anthos) assigns external domain names like {name}.{namespace}.example.com, where example.com can be customized based on your domain.
You can find this domain in the Cloud Console or via kubectl get ksvc.
First, check whether this domain works correctly with websockets. If it does, it is indeed a "custom domain" issue. (If you are not sure, please edit your title/question so it doesn't mention "custom domains".)
Also, you need to explicitly mark your container port as h2c on Knative for websockets to work. See the ports section below, specifically name: h2c:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: h2c
          containerPort: 8080
I also see that the response code to your requests is HTTP 503, likely indicating a server error. Please check your application's logs.
We've designed our API to use Istio JWT authentication, which is mandatory, and at the same time we use CORS. The problem is that our JS code makes an AJAX call, and the HTTP OPTIONS pre-flight request is sent without the JWT Authorization header. Unfortunately, the pre-flight request is blocked by Istio. How do we solve this?
Not sure if I understood your question correctly, but I think Service Entry will solve this.
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes).
A Service Entry for your example might look like the following:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.foobar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
  resolution: DNS
I have a Kubernetes cluster provisioned at AWS with kops, and I use the route53 mapper to configure the ELB based on Service annotations, with namespaces for the different environments (dev, test, prod) and configuration defined in ConfigMap and Secret objects.
The environments have different hostnames and TLS certificates:
kind: Service
apiVersion: v1
metadata:
  name: http-proxy-service
  labels:
    dns: route53
  annotations:
    domainName: <env>.myapp.example.io
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: |-
      arn:aws:acm:eu-central-1:44315247xxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  selector:
    app: http-proxy
  ports:
  - name: https
    port: 443
Is there a Kubernetes way to reference ConfigMap/Secret objects in the metadata section of the object descriptor, so that I can have only one file for all environments?
I am looking for a pure Kubernetes solution, not one that uses templating before sending the file to the API via kubectl.
There is not.
FWIW, it seems nuts that the mapper was designed to pull cert data from annotations on a Service; Service objects are not otherwise secret.
The mapper should be able to consume cert data from a Secret with well-defined fields indicating which domain should be wired with which cert data in front of which service.
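For illustration, a sketch of the kind of Secret-based contract that would make sense; every field name here is invented, and no existing mapper consumes this format:
apiVersion: v1
kind: Secret
metadata:
  name: http-proxy-cert            # hypothetical: the name the mapper would look up
type: Opaque
stringData:
  # All keys below are invented for illustration only.
  domainName: dev.myapp.example.io
  certificateArn: arn:aws:acm:eu-central-1:PLACEHOLDER
  serviceName: http-proxy-service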