I'm a newbie in Kubernetes. I created a Kubernetes cluster on Amazon EKS.
I'm trying to set up multiple Kubernetes Services to run multiple ASP.NET applications in one cluster, but I'm facing a weird problem.
Everything runs fine when there is only 1 Service. But whenever I create a 2nd Service for the 2nd application, it creates a conflict: sometimes the service 1 URL loads the service 2 application, sometimes it loads the service 1 application, and the same happens with the service 2 URL on a simple page reload.
I've tried both an Amazon Classic ELB (with the LoadBalancer Service type) and the NGINX ingress controller (with the ClusterIP Service type). The error is the same with both approaches.
Both Services and Deployments are running on port 80. I even tried different ports for the Services and Deployments to avoid a port conflict, but the problem remains.
I've checked the Deployment and Service status and the pod logs; everything looks fine, with no errors or warnings at all.
Please guide me on how I can fix this error.
Here are the YAML files of both Services for the NGINX ingress:
# Service 1 for deployment 1 (container port: 1120)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T14:54:21Z
  labels:
    run: load-balancer-example
  name: app1-svc
  namespace: default
  resourceVersion: "463919"
  selfLink: /api/v1/namespaces/default/services/app1-svc
  uid: a*****-****-****-****-**********c
spec:
  clusterIP: 10.100.102.224
  ports:
  - port: 1120
    protocol: TCP
    targetPort: 1120
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
2nd Service
# Service 2 for deployment 2 (container port: 80)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T10:13:33Z
  labels:
    run: load-balancer-example
  name: app2-svc
  namespace: default
  resourceVersion: "437188"
  selfLink: /api/v1/namespaces/default/services/app2-svc
  uid: 6******-****-****-****-************0
spec:
  clusterIP: 10.100.65.46
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Thanks
The problem is with the selector in the Services. They both have the same selector, and that's why you are facing this problem: both Services point to the same set of Pods.
The set of Pods targeted by a Service is (usually) determined by a Label Selector
Since deployment 1 and deployment 2 are different (I think), you should use a different selector in each of them, then expose the deployments. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nightfury1204/hello_server
        args:
        - serve
        ports:
        - containerPort: 8080
The two Deployments above, nginx-deployment and hello-deployment, have different selectors, so exposing them as Services will not make them collide with each other.
When you use kubectl expose deployment app1-deployment --type=ClusterIP --name=app1-svc to expose deployment, the service will have the same selector as the deployment.
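For example, a quick sketch of exposing the two Deployments above with non-colliding Services (the ports simply mirror the containerPorts shown; adjust as needed):

kubectl expose deployment nginx-deployment --type=ClusterIP --name=nginx-svc --port=80 --target-port=80
kubectl expose deployment hello-deployment --type=ClusterIP --name=hello-svc --port=8080 --target-port=8080

Each resulting Service inherits only its own Deployment's labels as its selector, so traffic no longer mixes between the two apps.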
Kubernetes & AWS EKS newbie here.
I have deployed a simple Node.js web application onto a cluster on Amazon EKS. When I send a GET request to the root (/) route, my app responds with the message: Hello from Node.
My Deployment and Service configuration files are as follows:
eks-sample-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eks-sample-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: eks-sample-app
  template:
    metadata:
      labels:
        app: eks-sample-app
    spec:
      containers:
      - name: eks-sample-app-container
        image: sundaray/node-server:v1
        ports:
        - containerPort: 8000
eks-sample-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: eks-sample-app-service
spec:
  type: NodePort
  ports:
  - port: 3050
    targetPort: 8000
    nodePort: 31515
  selector:
    app: eks-sample-app
After I deployed my app, I checked the container log and I get the expected message: Server listening on port 8000.
Now, I want to access my application from my browser. How do I get the URL address where I can access my app?
What you are looking for is an Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
https://kubernetes.io/docs/concepts/services-networking/ingress/
https://aws.amazon.com/premiumsupport/knowledge-center/eks-access-kubernetes-services/
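As a rough sketch (not from the original question), an Ingress that routes external HTTP traffic to your existing Service could look like this; it assumes an ingress controller (e.g. the AWS Load Balancer Controller or ingress-nginx) is already installed in the cluster, and the ingressClassName depends on which controller you choose:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eks-sample-app-ingress
spec:
  # ingressClassName: alb   # assumption: set according to your installed controller
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: eks-sample-app-service
            port:
              number: 3050

Once the controller provisions a load balancer, kubectl get ingress eks-sample-app-ingress shows its public address in the ADDRESS column, and that is the URL where your app is reachable.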
I have used kind (Kubernetes in Docker) to create the cluster.
I have created 3 Services for 3 Pods (EmberJS, Flask, Postgres). The Pods are created using Deployments.
I have exposed my front-end Service on port 84 (NodePort Service).
My app is now accessible at localhost:84 in my machine's browser.
But the app is not able to connect to the Flask API, which is exposed as flask-dataapp-service:6003:
net:: ERR_NAME_NOT_RESOLVED
My Flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the Ember pod container, it is resolved without any issues.
But from the browser, flask-dataapp-service is not being resolved.
Find the config files below.
kind-custom.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 84
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
Emberapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: ember-dataapp-service
spec:
  selector:
    app: ember-dataapp
  ports:
  - protocol: "TCP"
    port: 4200
    nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ember-dataapp
spec:
  selector:
    matchLabels:
      app: ember-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: ember-dataapp
    spec:
      containers:
      - name: emberdataapp
        image: emberdataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4200
flaskapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dataapp
spec:
  selector:
    matchLabels:
      app: flask-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flask-dataapp
    spec:
      containers:
      - name: dataapp
        image: dataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1234
My Flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the Ember pod container, it is resolved without any issues.
Kubernetes has an in-cluster DNS which allows names such as this to be resolved directly within the cluster (i.e. DNS requests do not leave the cluster). This is also why this name does not resolve outside the cluster, and hence why you cannot see it in your browser: the browser runs outside the cluster.
(Unrelated side note: this is actually a gotcha in the Kubernetes CKA certification.)
Your flask-dataapp-service is currently of type ClusterIP, so it is only reachable from inside the cluster. If you switch it to a NodePort Service and add a matching extraPortMapping to your kind config, you should in theory be able to reach it from your machine at, for example, http://localhost:6003 (with hostPort: 6003 in the kind config).
Alternatively, you can port-forward:
kubectl port-forward svc/flask-dataapp-service 6003:6003
then use the same link
While the port-forward option is not really of much use when running a local Kubernetes cluster (in fact, kubectl might fail with "port in use"), it's a good idea to get used to that method, since it's the easiest way to access a Service of type ClusterIP or NodePort in a remote Kubernetes cluster without having direct access to the nodes.
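A minimal sketch of the NodePort variant (the nodePort 30001 and hostPort 6003 values are arbitrary choices for illustration):

apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  type: NodePort
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
    nodePort: 30001   # arbitrary value in the 30000-32767 range

and the matching entry under extraPortMappings in kind-custom.yaml (kind only applies this at cluster creation, so the cluster would have to be recreated):

  - containerPort: 30001
    hostPort: 6003
    protocol: tcp

Note that the browser-side Ember code would then have to call the API via the host-mapped address (e.g. http://localhost:6003) instead of the in-cluster name flask-dataapp-service.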
I have a Kubernetes YAML file which works completely in AKS.
Now I need to prepare it for AWS.
Could you please advise what has to be changed?
I specifically expect that the file-share part has to be modified, since the "azureFile" section is specific to Azure (and the related volumes and volumeMounts probably have to be changed accordingly).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-win-deployment
  labels:
    app: frontarena-ads-win-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-win-test
      labels:
        app: frontarena-ads-win-test
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      restartPolicy: Always
      containers:
      - name: frontarena-ads-win-test
        image: local.docker.dev/frontarena/ads:wintest2
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: ads-win-filesharevolume
          mountPath: /Host
      volumes:
      - name: ads-win-filesharevolume
        azureFile:
          secretName: fa-secret
          shareName: fawinshare
          readOnly: false
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-win-test
---
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-win-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9001
    targetPort: 9000
  selector:
    app: frontarena-ads-win-test
azureFile is one of the storage provisioners / volume types, which you could replace with, for instance, an AWSElasticBlockStore volume (AWS EBS).
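As a hedged sketch only (the claim name, storage class, and size below are placeholders, and an EBS volume is single-node block storage rather than a shared file share), the azureFile volume could be replaced by a PersistentVolumeClaim backed by the EBS-based StorageClass that EKS provides:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ads-win-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gp2    # assumption: default EBS class on EKS; Windows nodes may need an NTFS-capable class
  resources:
    requests:
      storage: 10Gi        # placeholder size

In the Deployment, the volumes entry would then become:

      volumes:
      - name: ads-win-filesharevolume
        persistentVolumeClaim:
          claimName: ads-win-volume-claim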
But you might also benefit from AWS SMS (AWS Server Migration Service) in order to analyze your Azure configuration and generate one for AWS, as explained in "Migrating Azure VM to AWS using AWS SMS Connector for Azure" by Emma White.
You will need to install the Server Migration Connector on Azure.
The tool has limitations though.
See also AWS Application Migration Service for the applications part.
I am deploying a .NET Core 3.1 sample Web API app to AWS EKS. Through the kubectl get svc command I am able to get the external URL, but the URL is not working.
The same deployment YAML is working for web applications but not for the Web API. Do we need to do any additional configuration for Web API projects?
Below is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiddapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apiddapp
  template:
    metadata:
      labels:
        app: apiddapp
    spec:
      containers:
      - image: xxxx.amazonaws.com/myapptestapi:v3
        name: apiddapp
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: apiddapp
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: apiddapp
  type: LoadBalancer
Many thanks in advance.
Try creating separate files for the Deployment and the Service. Create the Service first and see. I had the same issue but it works now. I followed https://www.youtube.com/watch?v=ZOROT9yMp44
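A minimal sketch of that workflow, assuming the manifest above is split into apiddapp-service.yaml and apiddapp-deployment.yaml (the file names are placeholders):

kubectl apply -f apiddapp-service.yaml      # create the LoadBalancer Service first
kubectl apply -f apiddapp-deployment.yaml   # then create the Deployment
kubectl get svc apiddapp                    # the ELB hostname appears under EXTERNAL-IP
kubectl get endpoints apiddapp              # check that the Service actually has ready pod endpoints

Also note that the AWS load balancer's DNS name can take a few minutes to start resolving after the Service is created.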
I'm new to Istio.
My question is: how can I detect failures in services that are already running in Istio?
And if there is a failure, how can I send a particular percentage of traffic to a new version of a service?
Thanks.
I recommend using Kiali. Kiali helps you understand the structure and health of your service mesh by monitoring traffic flow and reporting on it.
Kiali is a management console for an Istio-based service mesh. It provides dashboards, observability, and lets you operate your mesh with robust configuration and validation capabilities. It shows the structure of your service mesh by inferring traffic topology and displays the health of your mesh. Kiali provides detailed metrics, powerful validation, Grafana access, and strong integration for distributed tracing with Jaeger.
Detailed documentation for installing Kiali can be found in the Installation Guide.
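As a quick, hedged sketch: with a downloaded Istio release, the bundled addon manifests can be applied to install Kiali (plus Prometheus, which Kiali needs for metrics); the paths are relative to the Istio release directory:

kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/kiali.yaml
istioctl dashboard kiali    # opens the Kiali UI through a local port-forward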
I have created a simple example to demonstrate how useful Kiali is.
First, I created a db-app application with two available versions (v1 and v2) and exposed it using a single Service:
# cat db-app.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db-app
  name: db-app
  namespace: default
spec:
  ipFamilies:
  - IPv4
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: db-app
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db-app
    version: v1
  name: db-app-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-app
      version: v1
  template:
    metadata:
      labels:
        app: db-app
        version: v1
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db-app
    version: v2
  name: db-app-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-app
      version: v2
  template:
    metadata:
      labels:
        app: db-app
        version: v2
    spec:
      containers:
      - image: nginx
        name: nginx
# kubectl apply -f db-app.yml
service/db-app created
deployment.apps/db-app-v1 created
deployment.apps/db-app-v2 created
# kubectl get pod,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/db-app-v1-59c8fb999c-bs47s   2/2     Running   0          39s
pod/db-app-v2-56dbf4c8d6-q24vm   2/2     Running   0          39s

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/db-app   ClusterIP   10.102.36.142   <none>        80/TCP    39s
Additionally, to illustrate how we can split the traffic, I generated some traffic to the db-app application:
# kubectl run test-traffic --image=nginx
pod/test-traffic created
# kubectl exec -it test-traffic -- bash
root@test-traffic:/# for i in $(seq 1 100000); do curl 10.102.36.142; done
...
Now, in the Kiali UI, in the Graph section, we can see the traffic flow:
In the Services section, we can easily split traffic between the v1 and v2 versions using the Traffic Shifting Wizard:
NOTE: A detailed tutorial can be found in the Kiali Traffic Shifting tutorial.
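Under the hood, the wizard essentially generates Istio VirtualService and DestinationRule objects; a minimal hand-written equivalent for, say, a 90/10 split between v1 and v2 might look like this (the weights are just an example):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: db-app
spec:
  host: db-app
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: db-app
spec:
  hosts:
  - db-app
  http:
  - route:
    - destination:
        host: db-app
        subset: v1
      weight: 90
    - destination:
        host: db-app
        subset: v2
      weight: 10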
We can also monitor the status of our application. As an example, I broke the v1 version:
# kubectl set image deployment/db-app-v1 nginx=nnnginx
deployment.apps/db-app-v1 image updated
In Kiali UI we see errors in the v1 version:
I suggest you read the Kiali Official Tutorial to learn the full capabilities of Kiali.