I'm new to Kubernetes and trying some exercises for the first time.
I'm trying to expose a simple web app (nginx) to the outside network. I'm working on an EC2 instance with an Elastic IP (for a static IP address).
My deployment.yml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
        - image: "nginx:latest"
          name: nginx
          ports:
            - containerPort: 80
After running the commands:
kubectl apply -f deployment.yml
kubectl expose deployment nginx-deployment --name my-service --port 8080 --target-port=80 --type=NodePort
I would expect to be able to reach this simple app at elastic-ip:port (in my case, port 8080), but I can't connect.
I've tried to see details about my service with the command:
kubectl get services my-service
and got this:
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
my-service   NodePort   10.99.98.56   <none>        8080:32725/TCP   26m
I've also tried opening ALL OF THE PORTS on my instance, to check whether there was any connection at all. What I did manage to do is retrieve the node's internal IP address with:
kubectl get nodes -o wide
and then, by adding the node port (the 32725) to that address in a curl command, I managed to get the nginx default HTML page.
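In other words, something along these lines worked (the internal IP here is a placeholder for whatever kubectl get nodes -o wide reported):
# node internal IP from "kubectl get nodes -o wide", node port from the service
curl http://<node-internal-ip>:32725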
My question is this: why couldn't I get the nginx default page via the Elastic IP?
And how can I access my simple app?
I have a Django web application that can display forecast graphs using the machine learning library Sktime and the plotly library for graphs. It runs fine on my local machine. However, when I run it on Kubernetes it doesn't load; the web page just stays loading forever. I have tried changing my YAML resource values by increasing CPU and memory to 2000m and 1000Mi respectively, but unfortunately that does not fix the problem. Right now the way I run my application is with the minikube command: minikube service --url mywebsite. I don't know whether it's the proper way to start my application. Does anyone know?
Service + Deployment YAML:
apiVersion: v1
kind: Service
metadata:
  name: mywebsite
spec:
  type: LoadBalancer
  selector:
    app: mywebsite
  ports:
    - protocol: TCP
      name: http
      port: 8743
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebsite
spec:
  selector:
    matchLabels:
      app: mywebsite
  template:
    metadata:
      labels:
        app: mywebsite
    spec:
      containers:
        - name: mywebsite
          image: mywebsite
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
          resources:
            requests:
              cpu: 200m
              memory: 100Mi
            limits:
              memory: "1Gi"
              cpu: "200m"
Posting an answer with a general solution, as no further details or logs were provided.
According to the official minikube documentation for accessing apps, minikube supports both NodePort and LoadBalancer services:
There are two major categories of services in Kubernetes: NodePort and LoadBalancer
For accessing a NodePort service, you should use the minikube service --url <service-name> command - check this.
For accessing a LoadBalancer service, you should use the minikube tunnel command - check this.
As the LoadBalancer type also exposes a NodePort, it should work with the minikube service command as you tried. I installed minikube with the Docker driver, created a sample deployment, and then created a sample LoadBalancer service for this deployment. After that I ran minikube service --url <my-service> and got an address like this in the output:
http://192.168.49.2:30711
30711 is the node port, and it works fine when I try to access this address.
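For reference, the steps described above look roughly like this (the deployment name and image are just placeholders; the exact commands may differ):
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --name my-service --port 80 --target-port 80 --type LoadBalancer
minikube service --url my-service
curl $(minikube service --url my-service)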
Why doesn't it work for you? Some possible reasons (a few quick checks are sketched below):
You are not using Linux - on other OSes there are some limitations for minikube, e.g. check this answer for Mac. It also depends on which minikube driver you are using.
Your pods are not running - you can check this with the kubectl get pods command.
You specified the wrong ports in the definitions.
Something is wrong with your image.
Also check the "Troubleshooting" section on the minikube website.
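A few quick checks along those lines (resource names are placeholders, adjust them to your setup):
kubectl get pods                     # are the pods Running and Ready?
kubectl get svc mywebsite            # do the ports match your definitions?
kubectl describe svc mywebsite       # does the Service have endpoints?
kubectl logs deploy/mywebsite        # is the application itself starting correctly?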
I am having trouble upgrading our CLB to an NLB. I did a manual upgrade via the wizard in the console, but the connectivity wouldn't work. This upgrade is needed so we can use static IPs on the load balancer. I think it needs to be upgraded through Kubernetes, but my attempts failed.
What I (think I) understand about this setup is that this load balancer was set up using Helm, and that the ingress (controller) is responsible for redirecting HTTP requests to HTTPS, while this load balancer works on layer 4.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.0
    component: controller
    heritage: Tiller
    release: nginx-ingress-external
  name: nginx-ingress-external-controller
  namespace: kube-system
  selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-external-controller
spec:
  clusterIP: 172.20.41.16
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: 30854
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: 30621
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - hostname: xxx.region.elb.amazonaws.com
How would I be able to perform the upgrade by modifying this configuration file?
As #Jonas pointed out in the comments section, creating a new LoadBalancer Service with the same selector as the existing one is probably the fastest and easiest method. As a result we will have two LoadBalancer Services using the same ingress-controller.
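As an illustration only, such a second Service could look roughly like this (selector, ports and namespace copied from the Service in the question; the NLB annotation is the key addition, adjust names as needed):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external-controller-nlb
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external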
You can see in the following snippet that I have two Services (ingress-nginx-1-controller and ingress-nginx-2-controller) with exactly the same endpoint:
$ kubectl get pod -o wide ingress-nginx-1-controller-5856bddb98-hb865
NAME READY STATUS RESTARTS AGE IP
ingress-nginx-1-controller-5856bddb98-hb865 1/1 Running 0 55m 10.36.2.8
$ kubectl get svc ingress-nginx-1-controller ingress-nginx-2-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP
ingress-nginx-1-controller LoadBalancer 10.40.15.230 <PUBLIC_IP>
ingress-nginx-2-controller LoadBalancer 10.40.11.221 <PUBLIC_IP>
$ kubectl get endpoints ingress-nginx-1-controller ingress-nginx-2-controller
NAME ENDPOINTS AGE
ingress-nginx-1-controller 10.36.2.8:443,10.36.2.8:80 39m
ingress-nginx-2-controller 10.36.2.8:443,10.36.2.8:80 11m
Additionally, to avoid downtime, we can first change the DNS records to point at the new LoadBalancer, and after the propagation time we can safely delete the old LoadBalancer Service.
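For example, once DNS has propagated (the Service name and namespace here are taken from the question):
kubectl delete service nginx-ingress-external-controller -n kube-system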
I am new to Kubernetes and I am currently deploying a cluster on AWS using Kubeadm. The containers are deployed just fine, but I can't seem to access them with my browser. When I used to do this via Docker Swarm I could simply use the IP address of the AWS node to access and log in to my application from the browser, but this does not seem to work with my current Kubernetes setup.
Therefore my question is: how can I access my running application under these new settings?
You should read about how to use Services in Kubernetes:
A Kubernetes Service is an abstraction which defines a logical set of
Pods and a policy by which to access them - sometimes called a
micro-service.
Basically, a Service allows a Deployment (or Pod) to be reached from inside or outside the cluster.
In your case, if you want to expose a single service in AWS, it is as simple as:
apiVersion: v1
kind: Service
metadata:
  name: myApp
  labels:
    app: myApp
spec:
  ports:
    - port: 80          # port that the service exposes
      targetPort: 8080  # port of a container in "myApp"
  selector:
    app: myApp          # your deployment must have the label "app: myApp"
  type: LoadBalancer
You can check whether the Service was created successfully in the AWS EC2 console under "Elastic Load Balancers", or by using kubectl describe service myApp.
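For example (assuming the Service above was applied as-is):
kubectl get service myApp          # EXTERNAL-IP should show the ELB hostname
kubectl describe service myApp     # look for the "LoadBalancer Ingress" field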
Both answers were helpful in my pursuit of a solution to my problem, but I ended up getting lost in the details. Here is an example that may help others in a similar situation:
1) Consider the following application yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-web-app
  labels:
    app: my-web-app
spec:
  serviceName: my-web-app
  replicas: 1
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: myregistry:443/mydomain/my-web-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
              name: cp
2) I decided to adopt NodePort (thank you #Leandro for pointing it out) to expose my service, hence I added the following to my application yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: my-web-app
  labels:
    name: my-web-app
spec:
  type: NodePort
  ports:
    - name: http1
      port: 80
      nodePort: 30036
      targetPort: 8080
      protocol: TCP
  selector:
    app: my-web-app  # must match the pod label in the StatefulSet template
One thing that I was missing is that the label names in both sets must match in order to link my-web-app:StatefulSet (1) to my-web-app:Service (2). Then, my-web-app:StatefulSet:containerPort must be the same as my-web-app:Service:targetPort (8080). Finally, my-web-app:Service:nodePort is the port that we expose publicly, and it must be a value between 30000 and 32767.
3) The last step is to ensure that the security group in AWS allows inbound traffic on the chosen my-web-app:Service:nodePort, in this case 30036; if not, add the rule.
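If the rule is missing, it can be added in the console or, for example, with the AWS CLI (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 30036 --cidr 0.0.0.0/0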
After following these steps I was able to access my application via aws-node-ip:30036/my-web-app.
Basically, Kubernetes is constructed differently. First of all, your containers are kept hidden from the world unless you create a Service to expose them, either a LoadBalancer or a NodePort. If you create a Service of type ClusterIP, it will be available only from inside the cluster. For simplicity, use port forwarding to test your containers; if everything is working, then create a Service to expose them (NodePort or LoadBalancer). The best and more difficult approach is to create an Ingress to handle inbound traffic and routing to the Services.
Port forwarding example:
kubectl port-forward redis-master-765d459796-258hz 6379:6379
Replace the redis pod name with your own pod name and use the appropriate port of your container.
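For completeness, a minimal sketch of the Ingress approach mentioned above (assumes an ingress controller is already installed; the host, Ingress name and backend Service are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # an existing Service fronting your pods
                port:
                  number: 80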
I want to set up an ingress controller on AWS EKS for several microservices that are accessed from an external system.
The microservices are accessed via virtual host-names like svc1.acme.com, svc2.acme.com, ...
I set up the nginx ingress controller with a helm chart: https://github.com/helm/charts/tree/master/stable/nginx-ingress
My idea was to reserve an Elastic IP address and bind the nginx controller to that IP by setting the externalIPs value.
This way I should be able to access the services with a stable wildcard DNS entry *.acme.com --> 54.72.43.19
I can see that the ingress controller service gets the external IP, but the IP is not accessible.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller LoadBalancer 10.100.45.119 54.72.43.19 80:32104/TCP,443:31771/TCP 1m
Any idea why?
Update:
I installed the ingress controller with this command:
helm install --name ingress -f values.yaml stable/nginx-ingress
Here is the gist for the values; the only thing changed from the default is:
externalIPs: ["54.72.43.19"]
https://gist.github.com/christianwoehrle/3b136023b1e0085b028a67ca6a0959b7
Maybe you can achieve that by using a Network Load Balancer (https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html), which supports fixed IPs, as the backing for your nginx ingress, e.g. (https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
First of all, I downloaded Kubernetes and kubectl and created a cluster on AWS (export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash).
I added some lines to my project's circle.yml to use CircleCI services to build my image.
To support Docker I added:
machine:
  services:
    - docker
and to create my image and push it to my artifact repository I added:
deployment:
  commands:
    - docker login -e admin@comp.com -u ${ARTUSER} -p ${ARTKEY} docker-docker-local.someartifactory.com
    - sbt -DBUILD_NUMBER="${CIRCLE_BUILD_NUM}" docker:publish
After that I created 2 folders:
My project (MyApp) folder with two files:
controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: MyApp
  labels:
    name: MyApp
spec:
  replicas: 1
  selector:
    name: MyApp
  template:
    metadata:
      labels:
        name: MyApp
        version: 0.1.4
    spec:
      containers:
        - name: MyApp
          # this is the image in artifactory
          image: docker-docker-release.someartifactory.com/MyApp:0.1.4
          ports:
            - containerPort: 9000
      imagePullSecrets:
        - name: myCompany-artifactory
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: MyApp
  labels:
    name: MyApp
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  type: LoadBalancer
  ports:
    # the port that this service should serve on
    - port: 9000
  selector:
    name: MyApp
And I have another folder for my artifactory secret (kind: Secret).
Now I created my pods with:
kubectl create -f controller.yaml
And now I have my pod running when I check with kubectl get pods.
Now, how do I access my pod from the browser? My project is a Play project, so I want to reach it from the browser... how do I expose it in the simplest way?
Thanks
The Replication Controller's sole responsibility is ensuring that the specified number of pods with the given configuration is running on your cluster.
The Service is what exposes your pods publicly (or internally) to other parts of the system (or the internet).
You should create your Service with your YAML file (kubectl create -f service.yaml), which will create the Service, selecting pods by the label selector name: MyApp to handle the load on the port given in your file (9000).
Afterwards, look at the registered Service with kubectl get service to see which endpoint (IP or hostname) is allocated for it.
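Putting it together, a rough sketch of those two steps (the ELB hostname is whatever kubectl get service reports once AWS has provisioned the load balancer):
kubectl create -f service.yaml
kubectl get service MyApp        # wait for EXTERNAL-IP to show the ELB hostname
curl http://<elb-hostname>:9000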