My application runs all of its services over HTTPS using docker-compose.
The application runs without any issues, and we are trying to set up an HTTPS Load Balancer for all the services.
We created a Load Balancer using this Documentation.
We added three backend services and have set Host and path rules for all backend services.
But when trying to view the HTTPS URLs below:
https://Loadbalancer-ip:/strapi
https://Loadbalancer-ip:/auth
https://Loadbalancer-ip:/images/1.
I am getting a 404 page. It only works for the All unmatched (default) rule.
I want to help you fix your current limitation.
A URL redirect redirects your domain's visitors from one URL to another.
Before deploying a URL map, make sure you validate the URL map configuration to ensure that the map is routing requests to the appropriate backends as intended. You can do this by adding tests to the URL map configuration.
Use the gcloud compute url-maps validate command to validate URL map configuration.
gcloud compute url-maps validate --source PATH_TO_URL_MAP_CONFIG_FILE
PATH_TO_URL_MAP_CONFIG_FILE: Replace with a path to the file that contains the URL map configuration for validation.
Validating changes to an existing load balancer's URL map
If you have an existing load balancer that needs changes to the URL map, you can test those configuration changes before making them live.
Export the load balancer's existing URL map to a YAML file.
gcloud compute url-maps export URL_MAP_NAME \
--destination PATH_TO_URL_MAP_CONFIG_FILE \
--global
Edit the YAML file with new configuration. For example, if you want to edit an external HTTP(S) load balancer and send all requests with the path /video to a new backend service called video-backend-service, you can add tests to the URL map configuration as follows:
Existing URL map configuration with a single default web-backend-service:
kind: compute#urlMap
name: URL_MAP_NAME
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/web-backend-service
Edited URL map configuration with added path matcher and tests for both the default web-backend-service and the new video-backend-service backend service:
kind: compute#urlMap
name: URL_MAP_NAME
defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/web-backend-service
hostRules:
- hosts:
  - '*'
  pathMatcher: pathmap
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/web-backend-service
  name: pathmap
  pathRules:
  - paths:
    - /video
    - /video/*
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/video-backend-service
tests:
- description: Test routing to existing web service
  host: foobar
  path: /
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/web-backend-service
- description: Test routing to new video service
  host: foobar
  path: /video
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/video-backend-service
Validate the new configuration.
gcloud compute url-maps validate --source PATH_TO_URL_MAP_CONFIG_FILE
If all tests pass successfully, you should see a success message such as:
Successfully validated [https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/urlMaps/URL_MAP_CONFIG_FILE_NAME]
If the tests fail, an error message appears. Make the required fixes to the URL map config file and try validating again.
Error: Invalid value for field 'urlMap.tests': ''.
Test failure: Expect URL 'HOST/PATH' to map to service 'EXPECTED_BACKEND_SERVICE', but actually mapped to 'ACTUAL_BACKEND_SERVICE'.
Once you know that the new configuration works and does not impact your existing setup, you can import it into the URL map. Note that this step also deploys the URL map with the new configuration.
gcloud compute url-maps import URL_MAP_NAME \
--source PATH_TO_URL_MAP_CONFIG_FILE \
--global
Important: If you originally set up your load balancer in the Cloud Console, the URL map name is the same as your load balancer's name.
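As a concrete illustration for the paths from the question, the pathMatcher and tests sections could look roughly like the sketch below. The backend service names (default-backend-service, strapi-backend-service, auth-backend-service, images-backend-service) and the test host are placeholders; substitute the backend services you actually created.
hostRules:
- hosts:
  - '*'
  pathMatcher: pathmap
pathMatchers:
- defaultService: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/default-backend-service
  name: pathmap
  pathRules:
  - paths:
    - /strapi
    - /strapi/*
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/strapi-backend-service
  - paths:
    - /auth
    - /auth/*
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/auth-backend-service
  - paths:
    - /images
    - /images/*
    service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/images-backend-service
tests:
- description: Test routing of /strapi to the strapi backend
  host: example.com
  path: /strapi
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/strapi-backend-service
- description: Test routing of /auth to the auth backend
  host: example.com
  path: /auth
  service: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/backendServices/auth-backend-service
Also keep in mind that a 404 returned by the backend itself usually means the load balancer routed the request correctly but the service does not serve content under that path; the tests above only verify which backend the URL map picks.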
Have fun!
Do you mean you get a page with status code 404, or can you not access your page at all?
Please make sure you have specified the right IP and port of the backend services.
Do you want to map Loadbalancer-ip:/strapi to service-ip:/strapi or to service-ip:?
Related
I am running an ingress in GKE. I am routing most of my traffic to one backend but I wish some calls to be routed to another backend. The ingress looks something like this:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: zone-search
            port:
              name: external
        path: /api/v2/zones/location-search
        pathType: Prefix
  - http:
      paths:
      - backend:
          service:
            name: api-service
            port:
              name: external
        path: /*
        pathType: ImplementationSpecific
If I do a request like GET /api/v2/zones/location-search, it works fine.
However, if I do GET /api/v2/zones/location-search?foo=bar my request ends up in the api-service backend and not the zone-search as I expected.
I have tried using pathType: ImplementationSpecific with both path: /api/v2/zones/location-search and path: /api/v2/zones/location-search/*, but still no progress. Google requires the wildcard to follow a slash, but location-search is the endpoint itself and has no slash after it.
I also tried using a default backend with the same result. The problem still seems to be that the URL including ?foo=bar doesn't match the path I specified.
I can't do path: /api/v2/zones/* since there are other endpoints in the api that would go to the zone-search backend that isn't supposed to.
Update
I tried using double quotes, plus removing the second
- http:
paths:
and started getting failed_to_pick_backend errors. It ended up being solved by changing the health check for the backend service.
I don't know if the health check problem meant that the api-service was selected as a backup when the zone-search service was unhealthy or if one of my two changes solved my initial problem.
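For reference, a consolidated rule along those lines (single http block, quoted paths) could look roughly like this; it is a sketch rather than the exact manifest, and the metadata name is made up:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zone-search-ingress
spec:
  rules:
  - http:
      paths:
      - path: "/api/v2/zones/location-search"
        pathType: Prefix
        backend:
          service:
            name: zone-search
            port:
              name: external
      - path: "/*"
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-service
            port:
              name: external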
Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address. You can use Ingress to reuse the load balancer for multiple domain names, subdomains, and to expose multiple services on a single IP address and load balancer. Check out the simple fanout and name-based virtual hosting examples to learn how to configure Ingress for these tasks.
Note: Always modify the properties of the Load Balancer via the Ingress object. Changes made directly on the load balancing resources might get lost or be overridden by the GKE Ingress controller.
On the other hand:
Each external HTTP(S) load balancer or internal HTTP(S) load balancer uses a single URL map, which references one or more backend services. One backend service corresponds to each Service referenced by the Ingress.
Additionally, you can create an Ingress that specifies rules for routing requests depending on the URL path in the request. When you create the Ingress, the GKE Ingress controller creates and configures an external HTTP(S) load balancer; see the official documentation.
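For example, a minimal name-based virtual hosting Ingress could look like the sketch below; the host names are hypothetical, and the service names are taken from the question above:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: zones.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: zone-search
            port:
              name: external
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              name: external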
Initially, I deployed my frontend web application and all the backend APIs in AWS ECS. Each of the backend APIs has a Route53 record, and the frontend is connected to these APIs via the .env file. Now I would like to migrate from ECS to EKS, and I am trying to deploy all these applications in a local Minikube cluster. I would like to keep the .env in my frontend application unchanged (using the same URLs for all the environment variables). The application should first look for the backend API inside the local cluster through service discovery; if the backend API doesn't exist in the cluster, it should connect to the external service, which is the API deployed in ECS. In short: first local (Minikube cluster), then external (AWS). How do I implement this in Kubernetes?
http://backendapi.learning.com --> backend API deployed in a pod --> if not present --> backend API deployed in ECS
.env
BACKEND_API_URL = http://backendapi.learning.com
One example from the code in which the frontend calls the backend API:
export const ping = async _ => {
  const res = await fetch(`${process.env.BACKEND_API_URL}/ping`);
  const json = await res.json();
  return json;
}
Assuming that your setup is:
Based on a microservices architecture.
Applications deployed in the Kubernetes cluster (frontend and backend) are Dockerized.
Applications are capable of running on top of Kubernetes.
etc.
You can configure your Kubernetes cluster (minikube instance) to relay your request to different locations by using Services.
Service
In Kubernetes terminology "Service" is an abstract way to expose an application running on a set of Pods as a network service.
Some of the types of Services are following:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
You can use Headless Service with selectors and dnsConfig (in Deployment manifest) to achieve the setup referenced in your question.
Let me explain more:
Example
Let's assume that you have a backend:
nginx-one - located inside and outside
Your frontend manifest in its most basic form should look like the following:
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
      dnsConfig: # <--- IMPORTANT
        searches:
        - DOMAIN.NAME
Taking a specific look at:
dnsConfig: # <--- IMPORTANT
  searches:
  - DOMAIN.NAME
Dissecting the above part:
dnsConfig - the dnsConfig field is optional and it can work with any dnsPolicy settings. However, when a Pod's dnsPolicy is set to "None", the dnsConfig field has to be specified.
searches: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list will be merged into the base search domain names generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows for at most 6 search domains.
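With the dnsConfig shown above, the resulting /etc/resolv.conf inside the Pod would look roughly like this (the nameserver IP is whatever your cluster DNS Service uses; 10.96.0.10 is just an assumption):
search default.svc.cluster.local svc.cluster.local cluster.local DOMAIN.NAME
nameserver 10.96.0.10
options ndots:5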
As for the Services for your backends.
service.yaml:
apiVersion: v1
kind: Service
metadata:
name: nginx-one
spec:
clusterIP: None # <-- IMPORTANT
selector:
app: nginx-one
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
The above Service will tell your frontend that one of your backends (nginx) is available through a Headless Service (why it is Headless will be explained later!). By default you could communicate with it by:
service-name (nginx-one)
service-name.namespace.svc.cluster.local (nginx-one.default.svc.cluster.local) - only locally
Connecting to your backend
Assuming that you are sending the request using curl (for simplicity) from frontend to backend you will have a specific order when it comes to the DNS resolution:
check the DNS record inside the cluster
check the DNS record specified in dnsConfig
The specifics of connecting to your backend will be following:
If the Pod with your backend is available in the cluster, the DNS resolution will point to the Pod's IP (not ClusterIP)
If the backend Pod is not available in the cluster for whatever reason, the DNS resolution will first check the internal records and then opt to use DOMAIN.NAME from the dnsConfig (outside of minikube).
If there is no Service associated with the specific backend (nginx-one), the DNS resolution will use the DOMAIN.NAME in the dnsConfig, searching for it outside of the cluster.
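A quick way to observe this order is to resolve the name from inside the frontend Pod; a sketch (the plain ubuntu image needs the DNS tools installed first):
kubectl exec -it deploy/frontend -- bash
# inside the container:
apt-get update && apt-get install -y dnsutils
nslookup nginx-one   # returns the backend Pod IP while the in-cluster backend exists;
                     # without it, the search list falls through to nginx-one.DOMAIN.NAME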
A side note!
The Headless Service with a selector comes into play here, as its intention is to point directly to the Pod's IP and not the ClusterIP (which exists as long as the Service exists). If you used a "normal" Service, you would always try to communicate with the ClusterIP even if there are no Pods available matching the selector. By using a Headless one, if there is no Pod, the DNS resolution will look further down the line (external sources).
Additional resources:
Minikube.sigs.k8s.io: Docs: Start
Aws.amazon.com: Blogs: Compute: Enabling dns resolution for amazon eks cluster endpoints
EDIT:
You could also take a look at alternative options:
Alternative option 1:
Use the rewrite rule plugin in CoreDNS to rewrite DNS queries for backendapi.learning.com to backendapi.default.svc.cluster.local (see the sketch after this list).
Alternative option 2:
Add hostAliases to the frontend Pod.
You can also use ConfigMaps to reuse .env files.
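A sketch of alternative option 1, assuming the in-cluster backend is exposed by a Service named backendapi in the default namespace: the rewrite line is added to the coredns ConfigMap in the kube-system namespace (the rest is a typical minikube Corefile):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        rewrite name backendapi.learning.com backendapi.default.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
And a sketch of alternative option 2, added to the frontend Pod spec (the IP is a placeholder for wherever backendapi.learning.com should resolve to):
hostAliases:
- ip: "10.0.0.10"
  hostnames:
  - "backendapi.learning.com"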
I am trying to make Elastic Beanstalk auto-create an HTTPS redirect rule in the Application Load Balancer when the EB environment is created from a config file. I can see that Amazon has a YAML example, but it doesn't reflect the format of my configuration file: https://github.com/awsdocs/elastic-beanstalk-samples/blob/master/configuration-files/aws-provided/resource-configuration/alb-http-to-https-redirection-full.config
I would like to configure the redirect in the load balancer, not in my reverse proxy (Nginx).
This is what I have in the config right now. There is nothing for HTTP:80, which EB creates by default, I guess.
OptionSettings:
  aws:elbv2:listener:443:
    ListenerEnabled: true
    SSLPolicy: ELBSecurityPolicy-2016-08
    SSLCertificateArns: <my cert arn>
    DefaultProcess: default
    Protocol: HTTPS
    Rules: ''
The "yaml" files provided by the AWS, alb-http-to-https-redirection-full.config and alb-http-to-https-redirection.config are to be placed (after your modifications if needed; HTTPs requires SSL certificate) in your .ebextensions folder.
They are actual EB config files, but look like yaml CloudFormation files. So in your zip package would have files .ebextensions/alb-http-to-https-redirection-full.config and/or alb-http-to-https-redirection.config along side your application.
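For orientation, the core of the redirection file is a Resources section that overrides the default port-80 listener EB creates, roughly like the sketch below; check the linked AWS sample for the authoritative, complete version. Your existing aws:elbv2:listener:443 OptionSettings can stay as they are.
Resources:
  AWSEBV2LoadBalancerListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn:
        Ref: AWSEBV2LoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
      - Type: redirect
        RedirectConfig:
          Host: "#{host}"
          Path: "/#{path}"
          Port: "443"
          Protocol: "HTTPS"
          Query: "#{query}"
          StatusCode: "HTTP_301"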
I'm working with Cloud Run for Anthos on GCP, hosted on a GKE cluster.
I am following this qwiklab to study Cloud Run for Anthos:
https://www.qwiklabs.com/focuses/5147?catalog_rank=%7B%22rank%22%3A6%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&parent=catalog&search_id=7054914
In the hands-on lab example, they used the command below to check whether the service is working:
curl -H "Host: <URL>" <IP_CLUSTER>
And I wonder about real-world usage; nobody adds a Host header to every request to make it work.
My question is: is there any way to solve this? I just want to invoke the service from a browser or any application, but I'm not sure whether that is possible.
I found the documentation about the Istio ingress, which the qwiklab example also uses.
It is about VirtualService, and it looks like I need an Istio ingress in front to build this proxy.
Is that the correct way to troubleshoot this?
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite
You can change the config-domain ConfigMap in the knative-serving namespace. You can see the config like this:
kubectl describe configmap config-domain --namespace knative-serving
Then you can update it like this:
Create the configuration in a file, for example config-domain.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  gblaquiere.dev: ""
Apply the configuration
kubectl apply -f config-domain.yaml
more detail here
With the new domain name, configure your DNS registrar to map your domain name to the load balancer's external IP, and your website will present the correct host on each request.
The curl -H "Host: ..." is a cheat to lie to the Istio controller and tell it "Yes, I come from there". If you really come from there (your own domain name), there is no need to cheat!
I am trying to develop Spring Cloud microservices. I developed a sample demo of a Spring Cloud project using the Zuul proxy, Eureka server and Hystrix. I added my service as a client of the Eureka server and applied the routing. All are working well. Now I need to deploy to my AWS EC2 machine. Locally, I added the default zone URL in the application.properties file like the following:
eureka.client.serviceUrl.defaultZone=http://localhost:8071/eureka/
When I move to my EC2 machine, or use AWS ECS, how can I change this IP address to the one that belongs to the cloud for a proper configuration? I am also using localhost:8090, 8091 and similar ports for the Zuul and Turbine dashboard projects, etc. So how do I need to change these URLs when deploying to the cloud?
We use domains. So you would point an A-record of api.yourdomain.com at the IP address or load balancer alias that is supporting your services.
Why? When we decided to change infrastructure we are able to change a DNS entry rather than modify all of our microservices' configurations. We recently moved from Eureka/Zuul to AWS's ALB. Using domains allowed us to run both environments in parallel and cutover with no down time. In the event there was a failure in the new environment, the old one was still running and we could cut back with a simple A-record change.
In your application.yml file you can configure different profiles so that you can test locally and then in ECS you can define the profile to use when creating the task definition.
First here is an example of how you can configure your application.yml file to be able to run on different profiles:
############# for running locally ################
server:
  port: 1234
logging:
  file: logs/example.log
  level:
    com.example: INFO
endpoints:
  health:
    sensitive: true
spring:
  datasource:
    url: jdbc:mysql://example.us-east-1.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
security:
  oauth2:
    client:
      clientId: example
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
    resource:
      userInfoUri: http://localhost:9999/uaa/user
---
########## For deployment in Docker containers/ECS ########
spring:
  profiles: prod
  datasource:
    url: jdbc:mysql://example.rds.amazonaws.com/example_db?noAccessToProcedureBodies=true
    username: example
    password: example
    driver-class-name: com.mysql.jdbc.Driver
prodnetwork:
  ipAddress: api.yourdomain.com
security:
  oauth2:
    client:
      clientId: exampleid
      clientSecret: examplesecret
      scope: webapp
      accessTokenUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/token
      userAuthorizationUri: https://${prodnetwork.ipAddress}/v1/uaa/oauth/authorize
    resource:
      userInfoUri: https://${prodnetwork.ipAddress}/v1/uaa/user
Second: Setting up ECS to use your Prod profile:
When you build your docker container, tag it with your new profile's name, in this case "prod"
Third: Create a task definition and define your Docker tag in the repo URL and your new profile in your container run command:
Now when you work on your application on your local machine, you can run it with "localhost", and when you deploy it to ECS you can define your new domain/IP to be used in the run command in your container definition.
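For example, the profile can be activated through the standard SPRING_PROFILES_ACTIVE environment variable when the container starts; a sketch (the image name, tag and jar name below are placeholders):
# run locally with the default (local) profile
java -jar app.jar
# run the container with the prod profile, e.g. as a quick local test of the ECS setup
docker run -e SPRING_PROFILES_ACTIVE=prod your-repo/your-service:prod
In an ECS task definition, the same variable goes into the container definition's environment section.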