Kubernetes YAML file - convert Azure to AWS ("file share" volume segment)

I have an Azure Kubernetes YAML file which works completely in AKS.
Now I need to prepare it for AWS.
Could you please advise what has to be changed?
I expect that the file share segment in particular must be modified, since the "azureFile" segment is specific to Azure (and the related volumes and volumeMounts probably need to change accordingly).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-win-deployment
  labels:
    app: frontarena-ads-win-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-win-test
      labels:
        app: frontarena-ads-win-test
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      restartPolicy: Always
      containers:
      - name: frontarena-ads-win-test
        image: local.docker.dev/frontarena/ads:wintest2
        imagePullPolicy: Always
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: ads-win-filesharevolume
          mountPath: /Host
      volumes:
      - name: ads-win-filesharevolume
        azureFile:
          secretName: fa-secret
          shareName: fawinshare
          readOnly: false
      imagePullSecrets:
      - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-win-test
---
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-win-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9001
    targetPort: 9000
  selector:
    app: frontarena-ads-win-test

azurefile is one of the Storage Class provisioners, which you could replace with, for instance, an AWSElasticBlockStore (AWS EBS).
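As a minimal sketch (assuming an EBS-backed storage class such as gp2 is available in the EKS cluster; the names and size here are illustrative, not from your config), the azureFile volume could be replaced by a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ads-win-filesharevolume
spec:
  accessModes:
  - ReadWriteOnce        # an EBS volume attaches to a single node
  storageClassName: gp2  # illustrative; use whatever class your cluster provides
  resources:
    requests:
      storage: 10Gi
---
# The Deployment's volumes section would then reference the claim instead of azureFile:
# volumes:
# - name: ads-win-filesharevolume
#   persistentVolumeClaim:
#     claimName: ads-win-filesharevolume

Note that EBS is block storage rather than a shared file share; if the Windows workload needs shared SMB-style access like azureFile provides, a file service such as Amazon FSx for Windows File Server would be the closer analog.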
But you might also benefit from AWS SMS (AWS Server Migration Service) to analyze your Azure configuration and generate one for AWS, as explained in "Migrating Azure VM to AWS using AWS SMS Connector for Azure" by Emma White.
You will need to install the Server Migration Connector on Azure.
The tool has limitations, though.
See also AWS Application Migration Service for the application part.


kind Kubernetes: NodePort (front-end) service is not able to access ClusterIP (back-end) service from browser

I have used kind to create a Kubernetes cluster.
I have created 3 services for 3 pods (EmberJS, Flask, Postgres). The pods are created using Deployments.
I have exposed my front-end service on port 84 (NodePort service).
My app is now accessible on localhost:84 in my machine's browser.
But the app is not able to connect to the Flask API, which is exposed as flask-dataapp-service:6003:
net::ERR_NAME_NOT_RESOLVED
My Flask service is available as flask-dataapp-service:6003. When I do a
curl flask-dataapp-service:6003
inside the bash of the Ember pod container, it is resolved without any issues.
But from the browser, flask-dataapp-service is not being resolved.
Find the config files below.
kind-custom.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 84
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp
Emberapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: ember-dataapp-service
spec:
  selector:
    app: ember-dataapp
  ports:
  - protocol: "TCP"
    port: 4200
    nodePort: 30000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ember-dataapp
spec:
  selector:
    matchLabels:
      app: ember-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: ember-dataapp
    spec:
      containers:
      - name: emberdataapp
        image: emberdataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4200
flaskapp.yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dataapp
spec:
  selector:
    matchLabels:
      app: flask-dataapp
  replicas: 1
  template:
    metadata:
      labels:
        app: flask-dataapp
    spec:
      containers:
      - name: dataapp
        image: dataapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1234
> My flask service is available as flask-dataapp-service:6003. When I do a
> curl flask-dataapp-service:6003
> inside the bash of the ember pod container, it is being resolved without any issues.
Kubernetes has an in-cluster DNS which allows names such as this to be resolved directly within the cluster (i.e. DNS requests do not leave the cluster). This is also why the name does not resolve outside the cluster, and hence why you cannot use it in your browser.
(Unrelated side note: this is actually a gotcha in the Kubernetes CKA certification)
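For instance (assuming the services live in the default namespace), the short name the Ember pod resolves is shorthand for the fully qualified in-cluster name:

curl flask-dataapp-service:6003                            # works inside a pod
curl flask-dataapp-service.default.svc.cluster.local:6003  # equivalent fully qualified form

Neither name means anything to a resolver outside the cluster, such as your browser's.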
Since you have already used a NodePort service for the front end, you could in theory expose the Flask service the same way and access it as e.g. "http://localhost:6003" with a matching kind port mapping.
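A sketch of that variant (the nodePort value and the extra kind port mapping are illustrative choices, not from your original config):

apiVersion: v1
kind: Service
metadata:
  name: flask-dataapp-service
spec:
  selector:
    app: flask-dataapp
  ports:
  - protocol: "TCP"
    port: 6003
    targetPort: 1234
    nodePort: 30001   # any free port in the 30000-32767 range
  type: NodePort

and in kind-custom.yaml, an additional mapping so the browser can reach it:

  - containerPort: 30001
    hostPort: 6003
    protocol: tcp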
Alternatively, you can port-forward:
kubectl port-forward svc/flask-dataapp-service 6003:6003
then use the same link
While the port-forward option is not of much use when running a local Kubernetes cluster (in fact, kubectl might fail with "port in use"), it's a good idea to get used to that method, since it's the easiest way to access a ClusterIP or NodePort service in a remote Kubernetes cluster without having direct access to the nodes.

Deploy Dotnet core Web api to AWS EKS

I am deploying a .NET Core 3.1 sample web API app to AWS EKS. Through the kubectl get svc command I can get the external URL, but the URL is not working.
The same deployment YAML works for web applications but not for the web API. Do we need any additional configuration for web API projects?
Below is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiddapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apiddapp
  template:
    metadata:
      labels:
        app: apiddapp
    spec:
      containers:
      - image: xxxx.amazonaws.com/myapptestapi:v3
        name: apiddapp
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: apiddapp
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: apiddapp
  type: LoadBalancer
Many thanks in advance.
Try creating separate files for the deployment and the service. Create the service first and see. I had the same issue, but it works now. I followed https://www.youtube.com/watch?v=ZOROT9yMp44
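For example (hypothetical file names; the split simply mirrors the two documents above):

kubectl apply -f apiddapp-service.yaml      # create the Service first
kubectl apply -f apiddapp-deployment.yaml   # then the Deployment
kubectl get svc apiddapp                    # wait until EXTERNAL-IP shows the ELB hostname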

Use Prometheus operator with DB volume for k8s

We are trying to monitor Kubernetes with Grafana and the Prometheus Operator. Most of the metrics are working as expected and I was able to see the dashboard with the right values; our system contains 10 nodes with 500 pods overall. But when I restarted Prometheus, all the data was deleted. I want it to be stored for two weeks.
My question is: how can I define a volume for Prometheus so that it keeps the data for two weeks or a 100GB database?
I found the following (we use Prometheus operator):
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/storage.md
This is the config of the Prometheus Operator
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: prometheus-operator
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --logtostderr=true
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
        - --prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.29.0
        image: quay.io/coreos/prometheus-operator:v0.29.0
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
This is the config of the Prometheus
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    prometheus: prometheus
spec:
  replicas: 2
  serviceAccountName: prometheus
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      role: observeable
  tolerations:
  - key: "WorkGroup"
    operator: "Equal"
    value: "operator"
    effect: "NoSchedule"
  - key: "WorkGroup"
    operator: "Equal"
    value: "operator"
    effect: "NoExecute"
  resources:
    limits:
      cpu: 8000m
      memory: 24000Mi
    requests:
      cpu: 6000m
      memory: 6000Mi
  storage:
    volumeClaimTemplate:
      spec:
        selector:
          matchLabels:
            app: prometheus
        resources:
          requests:
            storage: 100Gi
We have an NFS file system, and the above storage config doesn't work. My questions are:
What am I missing here? How do I configure the volume, server, and path under the nfs section? Where should I find this /path/to/prom/db? How can I refer to it? Should I create it somehow, or just provide the path?
We have NFS configured in our system.
How do I combine it with Prometheus?
As I don't have deep knowledge of PVC and PV, I've created the following (I'm not sure about those values; what is my server and what path should I provide?):
server: myServer
path: "/path/to/prom/db"
What should I put there, and how do I make my Prometheus (i.e. the config provided in the question) use it?
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
    prometheus: prometheus
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce # required
  nfs:
    server: myServer
    path: "/path/to/prom/db"
Is there any persistent volume option other than NFS that I can use for my use case? Please advise how.
I started working with the operator chart recently, and managed to add persistence without defining a PV and PVC.
With the new chart configuration, adding persistence is much easier than you describe. Just edit the file /helm/vector-chart/prometheus-operator-chart/values.yaml under prometheus.prometheusSpec:
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: prometheus
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
      selector: {}
And add this file as /helm/vector-chart/prometheus-operator-chart/templates/prometheus/storageClass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Retain
parameters:
  type: gp2
  zones: "ap-southeast-2a, ap-southeast-2b, ap-southeast-2c"
  encrypted: "true"
This will automatically create both a PV and a PVC, which will create an EBS volume in AWS that stores all your data.
You have to use a PersistentVolume and PersistentVolumeClaim (PV & PVC) to persist data. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/ and look carefully at provisioning, reclaim policy, access mode, and storage type.
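As a minimal sketch for the NFS case (assuming the PV from the question; the claim name is illustrative), in a plain setup outside the operator's volumeClaimTemplate, a PVC can bind to that PV via its labels:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-pvc
  namespace: monitoring
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  selector:
    matchLabels:
      app: prometheus    # binds to the NFS PV labelled app: prometheus

With the operator, the same selector inside spec.storage.volumeClaimTemplate (as in the question's Prometheus config) serves the same purpose: the generated claims target the labelled PV.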
To determine when to remove old data, use this switch --storage.tsdb.retention
e.g. --storage.tsdb.retention='7d' (by default, Prometheus keeps data for 15 days).
To completely remove the data, use this API call (note that the TSDB admin API must be enabled with --web.enable-admin-api):
$ curl -X POST -g 'http://<your_host>:9090/api/v1/admin/tsdb/<your_index>'
EDIT
Kubernetes snippet sample
...
spec:
  containers:
  - name: prometheus
    image: docker.io/prom/prometheus:v2.0.0
    args:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--storage.tsdb.retention=7d'
    ports:
    - name: web
      containerPort: 9090
...
Refer to the code below. Define storage-retention as 7d (or the required number of retention days) in a ConfigMap and load it as an environment variable in the container, as shown:
containers:
- name: prometheus
  image: prom/prometheus:latest
  args:
  - '--storage.tsdb.path=/prometheus'
  - '--storage.tsdb.retention=$(STORAGE_RETENTION)'
  - '--web.enable-lifecycle'
  - '--storage.tsdb.no-lockfile'
  - '--config.file=/etc/prometheus/prometheus.yml'
  ports:
  - name: web
    containerPort: 9090
  env:
  - name: STORAGE_RETENTION
    valueFrom:
      configMapKeyRef:
        name: prometheus.cfg
        key: storage-retention
You might need to adjust these settings in the Prometheus Operator files.
Providing some insight from what I gathered, since we just started setting up the kube-prometheus operator and ran into storage issues with the default settings.
Create a custom values.yaml with the helm show values command, as below, containing the default values:
helm show values prometheus-com/kube-prometheus-stack -n monitoring > custom-values.yaml
Then start updating the prometheus, alertmanager, and grafana sections to either override the default settings or add custom names, etc.
Coming to the storage options, the documentation describes how to define a custom storage class or PV/PVC (if there is no default SC, or for other reasons).
There is also a good example of using a storage class for all 3 pods.
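A minimal sketch of the relevant part of custom-values.yaml (assuming the kube-prometheus-stack chart; the storage class name is illustrative), combining the two-week retention and the 100Gi volume asked about above:

prometheus:
  prometheusSpec:
    retention: 14d          # keep data for two weeks
    retentionSize: 100GB    # optional cap on the TSDB size
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2   # illustrative; use your cluster's storage class
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi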

Kubernetes Multiple Service Conflict

I'm a newbie in Kubernetes. I created a Kubernetes cluster on Amazon EKS.
I'm trying to set up multiple Kubernetes services to run multiple ASP.NET applications in one cluster, but I'm facing a weird problem.
Everything runs fine when there is only 1 service. But whenever I create a 2nd service for the 2nd application, it creates a conflict: sometimes the service 1 URL loads the service 2 application and sometimes it loads the service 1 application, and the same happens with the service 2 URL on a simple page reload.
I've tried both an Amazon Classic ELB (with the LoadBalancer service type) and the Nginx ingress controller (with the ClusterIP service type). This error is common to both approaches.
Both services and deployments are running on port 80. I even tried different ports for both services and deployments to avoid a port conflict, but the problem remains.
I've checked the deployment & service status and the pod logs; everything looks fine, with no errors or warnings at all.
Please advise how I can fix this error.
Here are the YAML files of both services for the nginx ingress:
# Service 1 for deployment 1 (container port: 1120)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T14:54:21Z
  labels:
    run: load-balancer-example
  name: app1-svc
  namespace: default
  resourceVersion: "463919"
  selfLink: /api/v1/namespaces/default/services/app1-svc
  uid: a*****-****-****-****-**********c
spec:
  clusterIP: 10.100.102.224
  ports:
  - port: 1120
    protocol: TCP
    targetPort: 1120
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
2nd Service
# Service 2 for deployment 2 (container port: 80)
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-12-05T10:13:33Z
  labels:
    run: load-balancer-example
  name: app2-svc
  namespace: default
  resourceVersion: "437188"
  selfLink: /api/v1/namespaces/default/services/app2-svc
  uid: 6******-****-****-****-************0
spec:
  clusterIP: 10.100.65.46
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: load-balancer-example
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Thanks
The problem is with the selectors in the services. They both have the same selector, and that's why you are facing this problem: they both point to the same set of pods.

> The set of Pods targeted by a Service is (usually) determined by a Label Selector

Since deployment 1 and deployment 2 are different (I think), you should use different selectors in them, then expose the deployments. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nightfury1204/hello_server
        args:
        - serve
        ports:
        - containerPort: 8080
The two deployments above, nginx-deployment and hello-deployment, have different selectors, so exposing them as services will not collide.
When you use kubectl expose deployment app1-deployment --type=ClusterIP --name=app1-svc to expose a deployment, the service gets the same selector as the deployment.
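A sketch of how the two services from the question might look with distinct selectors (assuming the pod labels of the two deployments are changed to app1/app2 accordingly; these label values are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: app1-svc
spec:
  selector:
    app: app1          # matches only deployment 1's pods
  ports:
  - port: 1120
    protocol: TCP
    targetPort: 1120
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: app2-svc
spec:
  selector:
    app: app2          # matches only deployment 2's pods
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP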

kube-controller-manager outputs an error "cannot change NodeName"

I use Kubernetes on AWS with CoreOS and a flannel VLAN network
(following this guide: https://coreos.com/kubernetes/docs/latest/getting-started.html).
The k8s version is 1.4.6.
And I have the following node-exporter DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
spec:
  template:
    metadata:
      labels:
        app: node-exporter
        tier: monitor
        category: platform
      name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter:0.12.0
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true
When I run this, kube-controller-manager outputs an error repeatedly as below:
E1117 18:31:23.197206 1 endpoints_controller.go:513]
Endpoints "node-exporter" is invalid:
[subsets[0].addresses[0].nodeName: Forbidden: Cannot change NodeName for 172.17.64.5 to ip-172-17-64-5.ec2.internal,
subsets[0].addresses[1].nodeName: Forbidden: Cannot change NodeName for 172.17.64.6 to ip-172-17-64-6.ec2.internal,
subsets[0].addresses[2].nodeName: Forbidden: Cannot change NodeName for 172.17.80.5 to ip-172-17-80-5.ec2.internal,
subsets[0].addresses[3].nodeName: Forbidden: Cannot change NodeName for 172.17.80.6 to ip-172-17-80-6.ec2.internal,
subsets[0].addresses[4].nodeName: Forbidden: Cannot change NodeName for 172.17.96.6 to ip-172-17-96-6.ec2.internal]
Just for information: despite this error message, node_exporter is accessible on, e.g., 172.17.96.6:9100. My nodes are in a private network, including the k8s master.
But these logs are output so often that it is hard to spot other logs in our log console. How can I resolve this error?
Because I built my k8s cluster from scratch, the cloud-provider=aws flag was not activated at first; I recently turned it on, but I'm not sure whether that is related to this issue.
It looks like this is caused by another of my manifest files:
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  labels:
    app: node-exporter
    tier: monitor
    category: platform
  annotations:
    prometheus.io/scrape: 'true'
spec:
  clusterIP: None
  ports:
  - name: scrape
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter
  type: ClusterIP
I thought this was necessary to expose the node-exporter DaemonSet above, but it can apparently introduce some sort of conflict when hostNetwork: true is set in a DaemonSet (actually, a pod) manifest. I'm not 100% certain, though: after I deleted this service the error disappeared, while I can still access 172.17.96.6:9100 from outside the k8s cluster.
I just followed this post when setting up Prometheus and node-exporter:
https://coreos.com/blog/prometheus-and-kubernetes-up-and-running.html
In case others face the same problem, I'm leaving my comment here.