I am trying to deploy a microservices architecture on a Kubernetes cluster. Does anyone know how to create an Ingress for AWS?
I recommend the ALB Ingress Controller (https://github.com/kubernetes-sigs/aws-alb-ingress-controller); it is the controller AWS itself recommends, and it creates an Application Load Balancer for each Ingress.
Alternatively, know that you can use any kind of ingress controller on AWS, such as Nginx. You create the Nginx Service with type LoadBalancer so that all requests to that address reach Nginx, and Nginx itself then forwards the requests to the correct Service inside Kubernetes.
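For illustration, here is a minimal sketch of such a Service. The name and selector are placeholders (they must match the labels on your Nginx ingress controller pods), and the Helm chart install shown further below creates an equivalent Service for you automatically:
kind: Service
apiVersion: v1
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer          # AWS provisions an ELB pointing at this Service
  selector:
    app: nginx-ingress        # placeholder: must match the Nginx ingress controller pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443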
To create an Ingress resource we first need to deploy an ingress controller, which can be done easily with Helm. Follow the steps below to install Helm and the ingress controller:
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account=tiller
$ helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
Once the ingress controller is installed, check it by running kubectl get pods; you should see two pods running: the ingress controller and the default backend.
If you now go to your AWS Management Console, you should see an Elastic Load Balancer running. It routes traffic to the ingress controller, which in turn routes traffic to the appropriate Services based on the Ingress rules.
To test the Ingress, follow steps 1 to 4 of this guide: Setting up HTTP Load Balancing with Ingress.
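For reference, a minimal Ingress resource to test with could look like the sketch below; the host, Service name, and port are placeholders for your own application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx    # serve this Ingress through the Nginx ingress controller installed above
spec:
  rules:
  - host: app.example.com                 # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service         # placeholder: your application's Service
          servicePort: 80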
Hope this helps!
I am trying to deploy the jpetstore app to EKS and have come across this variable that needs to be filled in. (https://github.com/IBM-Cloud/jpetstore-kubernetes/blob/master/jpetstore/jpetstore.yaml)
The documentation states that I can set it using the following:
Edit jpetstore/jpetstore.yaml and jpetstore/jpetstore-mmssearch.yaml and replace all instances of:
<CLUSTER DOMAIN> with your Ingress Subdomain (ibmcloud ks cluster get --cluster CLUSTER_NAME)
(https://github.com/IBM-Cloud/jpetstore-kubernetes)
But this is for IBM Cloud, and I need it for EKS.
Where can I get this <CLUSTER DOMAIN> value from?
You can define your own name in the Ingress spec, for example host: jpetstore.example.com. You need to install the AWS Load Balancer Controller, which will handle your Ingress and create an ALB based on your Ingress spec. You can then access it like: curl -H 'Host: jpetstore.example.com' http://<your new alb endpoint>
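A minimal sketch of such an Ingress, assuming the AWS Load Balancer Controller is installed and the jpetstore Service is called jpetstore and listens on port 80 (adjust the names and port to the actual manifests):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jpetstore
  annotations:
    kubernetes.io/ingress.class: alb                    # handled by the AWS Load Balancer Controller
    alb.ingress.kubernetes.io/scheme: internet-facing   # create a public ALB
spec:
  rules:
  - host: jpetstore.example.com                         # your own name in place of <CLUSTER DOMAIN>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jpetstore                             # placeholder: the jpetstore Service name
            port:
              number: 80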
I'm working with Cloud Run for Anthos on GCP, hosted on a GKE cluster.
I'm following this Qwiklabs lab to study Cloud Run for Anthos:
https://www.qwiklabs.com/focuses/5147?catalog_rank=%7B%22rank%22%3A6%2C%22num_filters%22%3A0%2C%22has_search%22%3Atrue%7D&parent=catalog&search_id=7054914
In the hands-on lab, they use the command below to check whether the service is working:
curl -H "Host: <URL>" <IP_CLUSTER>
I wonder how this is used in reality; nobody adds a Host header to every request just to make it work.
My question is: is there any way to solve this? I just want to invoke the service from a browser or any other application, but I'm not sure whether that's possible.
I found the documentation about Istio ingress, which the Qwiklabs example also uses.
It is about VirtualService, and it looks like I need an Istio ingress in front to build this proxy.
Is that the correct way to troubleshoot this?
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite
You can change the config-domain ConfigMap in the knative-serving namespace. You can see the current config like this:
kubectl describe configmap config-domain --namespace knative-serving
Then you can update it like this:
Create a config file, for example config-domain.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  gblaquiere.dev: ""
Apply the configuration
kubectl apply -f config-domain.yaml
More details here.
With the new domain name, configure your DNS registrar to point your domain name to the load balancer's external IP, and your website will present the correct host on each request.
The curl -H Host... trick is a cheat to lie to the Istio controller and tell it "yes, I come from there". If you really do come from there (your own domain name), there's no need to cheat!
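For example, once the ConfigMap and the DNS record are in place, a service named hello in the default namespace would answer a plain request without any Host header trick (the service name here is just a placeholder; Knative builds the URL as <service>.<namespace>.<domain>):
$ curl http://hello.default.gblaquiere.dev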
Trying to teach myself on how to use Kubernetes, and having some issues.
I was able to set up a cluster, deploy the nginx image and then access nginx using a service of type NodePort (once I added the port to the security group inbound rules of the node).
My next step was to try to use a service of type LoadBalancer to try to access nginx.
I set up a new cluster and deployed the nginx image.
kubectl \
create deployment my-nginx-deployment \
--image=nginx
I then set up the service for the LoadBalancer
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=8080 --name=nginxpubic
Once it was done setting up, I tried to access nginx using the LoadBalancer Ingress (which I found from describing the LoadBalancer service). I received a "This page isn't working" error.
Not really sure where I went wrong.
results of kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 7h
nginxpubic LoadBalancer 100.71.37.139 a5396ba70d45d11e88f290658e70719d-1485253166.us-west-2.elb.amazonaws.com 80:31402/TCP 7h
From the nginx Docker Hub page, I see that the container is using port 80.
https://hub.docker.com/_/nginx/
It should be like this:
kubectl expose deployment my-nginx-deployment --type=LoadBalancer --port=80 --target-port=80 --name=nginxpubic
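For comparison, here is a sketch of the Service that the corrected command generates (the selector assumes the app label that kubectl create deployment sets by default):
apiVersion: v1
kind: Service
metadata:
  name: nginxpubic
spec:
  type: LoadBalancer
  selector:
    app: my-nginx-deployment   # label set by kubectl create deployment
  ports:
  - port: 80                   # port exposed on the ELB
    targetPort: 80             # port the nginx container actually listens on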
Also, make sure the Service type LoadBalancer is available in your environment.
Known Issues for minikube installation
Features that require a Cloud Provider will not work in Minikube. These include:
LoadBalancers
Features that require multiple nodes. These include:
Advanced scheduling policies
I'm trying to figure out how to get this setup to work:
I am using Kube 1.7 (no RBAC) spun up from kops in AWS
I have a single nginx ingress controller for my entire cluster that is using a LoadBalancer service in the kube-system namespace, installed via Helm
I have cert-manager setup in kube-system, installed via Helm and
using ClusterIssuers
I have external-dns setup in kube-system installed via Helm
I have multiple applications, one per namespace, with associated Ingress objects in each namespace.
I am annotating the Ingresses with the appropriate annotations for both cert-manager (certmanager.k8s.io/cluster-issuer: letsencrypt-prod) and external-dns (dns.alpha.kubernetes.io/external: app.contoso.com)
In this scenario, cert-manager is reacting appropriately to the Ingress object (modifying it to complete the ACME challenge), but external-dns is not doing anything (logs are saying all hostnames are up to date). If I manually add a Route53 record for the ELB associated with the LB service, everything works as expected. Inspecting the Ingress object, I see that the status block looks like so:
status:
  loadBalancer:
    ingress:
    - {}
which I suppose is why external-dns isn't reacting? How do I get this to work? Per the documentation
More troubleshooting information (pod definitions, ingress definitions, controller logs, etc.) can be found here: https://gist.github.com/DWSR/f6d596850346223393bec23b289c9731
I solved this myself. The nginx ingress controller has a --publish-service command-line argument that causes it to update the status field on the Ingress objects, which in turn causes external-dns to create the appropriate DNS records. When installing via Helm, simply set .Values.controller.publishService.enabled to true and this will take effect.
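For example, with the Helm 2 era stable chart from the sources below, that would be something like (the release name is a placeholder):
$ helm install stable/nginx-ingress --name my-ingress --set controller.publishService.enabled=true
The chart then passes the corresponding --publish-service flag to the controller for you.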
Sources:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/cli-arguments.md
https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress#configuration
We have set up OpenShift Origin on AWS using this handy guide. Our eventual
hope is to have some pods running REST or similar services that we can access
for development purposes. Thus, we don't need DNS or anything like that at this
point, just a public IP with open ports that points to one of our running pods.
Our first proof of concept is trying to get a jenkins (or even just httpd!) pod
that's running inside OpenShift to be exposed via an allocated Elastic IP.
I'm not a network engineer by any stretch, but I was able to successfully get
an Elastic IP connected to one of my OpenShift "worker" instances, which I
tested by sshing to the public IP allocated to the Elastic IP. At this point
we're struggling to figure out how to make a pod visible via that allocated Elastic IP,
however. We've tried a Kubernetes LoadBalancer service, a Kubernetes Ingress,
and configuring an AWS Network Load Balancer, all without being able to
successfully connect to 18.2XX.YYY.ZZZ:8080 (my public IP).
The most promising attempt was using oc port-forward, which seemed to get at least part of the
way there, but frustratingly hangs without returning:
$ oc port-forward --loglevel=7 jenkins-2-c1hq2 8080 -n my-project
I0222 19:20:47.708145 73184 loader.go:354] Config loaded from file /home/username/.kube/config
I0222 19:20:47.708979 73184 round_trippers.go:383] GET https://ec2-18-2AA-BBB-CCC.us-east-2.compute.amazonaws.com:8443/api/v1/namespaces/my-project/pods/jenkins-2-c1hq2
....
I0222 19:20:47.758306 73184 round_trippers.go:390] Request Headers:
I0222 19:20:47.758311 73184 round_trippers.go:393] X-Stream-Protocol-Version: portforward.k8s.io
I0222 19:20:47.758316 73184 round_trippers.go:393] User-Agent: oc/v1.6.1+5115d708d7 (linux/amd64) kubernetes/fff65cf
I0222 19:20:47.758321 73184 round_trippers.go:393] Authorization: Bearer Pqg7xP_sawaeqB2ub17MyuWyFnwdFZC5Ny1f122iKh8
I0222 19:20:47.800941 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
I0222 19:20:47.800963 73184 round_trippers.go:408] Response Status: 101 Switching Protocols in 42 milliseconds
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
( oc port-forward hangs at this point and never returns)
We've found a lot of information about how to get this working under GKE, but
nothing that's really helpful for getting this working for OpenShift Origin on
AWS. Any ideas?
Update:
We realized that sysdig.com's blog post on deploying OpenShift Origin on AWS was missing some key AWS setup information, so based on OpenShift Origin's Configuring AWS page, we set the following environment variables and re-ran the Ansible playbook:
$ export AWS_ACCESS_KEY_ID='AKIASTUFF'
$ export AWS_SECRET_ACCESS_KEY='STUFF'
$ export ec2_vpc_subnet='my_vpc_subnet'
$ ansible-playbook -c paramiko -i hosts openshift-ansible/playbooks/byo/config.yml --key-file ~/.ssh/my-aws-stack
I think this gets us closer, but creating a load-balancer service now gives us an always-pending IP:
$ oc get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins-lb 172.30.XX.YYY <pending> 8080:31338/TCP 12h
The section on AWS Applying Configuration Changes seems to imply I need to use AWS Instance IDs rather than hostnames to identify my nodes, but I tried this and OpenShift Origin fails to start if I use that method. Still at a loss.
It may not satisfy the "Elastic IP" part, but how about using the AWS cloud provider's ELB support to expose the pod's IP/port via a Service with the LoadBalancer option?
Make sure to configure the AWS cloud provider for the cluster (see References below).
Create a Service for the pod(s) with type LoadBalancer.
For instance, to expose the Kubernetes Dashboard via an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: LoadBalancer   # <-----
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
Then the Service will be exposed as an ELB, and the pod can be accessed via the ELB's public DNS name a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com.
$ kubectl (oc) get svc kubernetes-dashboard -n kube-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes-dashboard LoadBalancer 10.100.96.203 a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com 443:31636/TCP 16m k8s-app=kubernetes-dashboard
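For example, you can then hit the Dashboard through the ELB DNS name (-k is used here because the Dashboard usually serves a self-signed certificate):
$ curl -k https://a53e5811bf08011e7bae306bb783bb15-953748093.us-west-1.elb.amazonaws.com/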
References
K8S AWS Cloud Provider Notes
Reference Architecture OpenShift Container Platform on Amazon Web Services
DEPLOYING OPENSHIFT CONTAINER PLATFORM 3.5 ON AMAZON WEB SERVICES
Configuring for AWS
Check this guide out: https://github.com/dwmkerr/terraform-aws-openshift
It's got some significant advantages over the one you're referring to in your post. Additionally, it has a clear Terraform spec that you can modify and reset to use an Elastic IP (I haven't tried it myself, but it should work).
Another way to "lock" access to your installation is to re-code the assignment of the public URL to the master instance in the Terraform script, e.g. to a domain that you own (the default script sets it to an external IP-based value with "xip.io" appended, which works great for testing). Then set up a basic ALB that forwards HTTPS 443 and 8443 to the master instance that the install creates (you can do it manually after the install is completed; you also need a second dummy subnet, and you'll have to dummy-up the health check as well), and link the ALB to your domain via Route53. You can even use free Route53 wildcard certs with this approach.
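If it helps, a rough AWS CLI sketch of that manual ALB setup; every ID, ARN, and name below is a placeholder, the subnets, security group, and ACM certificate are specific to your VPC, and /healthz as the health check path is an assumption:
$ aws elbv2 create-target-group --name openshift-master --protocol HTTPS --port 8443 --vpc-id <vpc-id> --health-check-protocol HTTPS --health-check-path /healthz
$ aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=<master-instance-id>
$ aws elbv2 create-load-balancer --name openshift-alb --subnets <subnet-1> <subnet-2> --security-groups <sg-id>
$ aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 443 --certificates CertificateArn=<acm-cert-arn> --default-actions Type=forward,TargetGroupArn=<target-group-arn>
$ aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTPS --port 8443 --certificates CertificateArn=<acm-cert-arn> --default-actions Type=forward,TargetGroupArn=<target-group-arn>
After that, point your domain at the ALB with a Route53 alias record.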