I have this issue: two or more nodes in the cluster and 5 deployment replicas, and I need them all to use one volume. For example, I add a file from the first pod and can read it from another, and if the first pod is deleted, I can still read that data from the second pod.
I tried Kubernetes volume types like hostPath, but it didn't work.
I tried NFS, but it didn't work either, because there are many instructions out there and each of them is incomplete or incorrect. Can you please write a full set of instructions, step by step, as if for a junior, or, fine, as if for an idiot? I have never used NFS or Gluster, and the Kubernetes docs say too little about how to install them and connect them to Kubernetes.
Now I am trying AWS EFS with Kubernetes and it is the same story: lots of general information and isolated instructions, but nothing consistent. Why is it so hard to explain how this works? I am really frustrated: the Kubernetes documentation on basic elements like Deployments and Services is fine, but on integrations and non-basic volumes it is awful.
Maybe someone can help me with this?
AWS part: https://aws.amazon.com/getting-started/tutorials/create-network-file-system/
KUBERNETES part: https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/manifest.yaml
Thanks for the help.
This must sound like a real noob question. I have a cluster autoscaler and a cluster overprovisioner set up in my k8s cluster (via Helm). I want to see the autoscaler and overprovisioner actually kick in, but I am not able to find any leads on how to accomplish this.
Does anyone have any ideas?
You can create a Deployment that runs a container with a CPU-intensive task. Set it initially to a small number of replicas (perhaps fewer than 10) and then start increasing the replica count with:
kubectl scale deployment your-deployment --replicas=11
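For example, here is a minimal sketch of such a Deployment (the name, image and CPU request are placeholders; the request just needs to be large enough that additional replicas no longer fit on the existing nodes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cpu-burner
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cpu-burner
  template:
    metadata:
      labels:
        app: cpu-burner
    spec:
      containers:
      - name: burn
        image: busybox
        # simple busy loop to keep the CPU occupied
        command: ["sh", "-c", "while true; do :; done"]
        resources:
          requests:
            cpu: 500m   # sized so that extra replicas eventually become unschedulable
As soon as some replicas cannot be scheduled on the existing nodes, the Cluster Autoscaler should react by adding a node.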
Edit:
How can you tell the Cluster Autoscaler has kicked in?
There are three ways you can determine what the CA is doing: by watching the CA pod's logs, by checking the content of the kube-system/cluster-autoscaler-status ConfigMap, or via Events.
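For example (the Deployment name in the first command is an assumption and depends on how the CA was installed; adjust it to your setup):
# 1. Follow the Cluster Autoscaler logs
kubectl -n kube-system logs -f deployment/cluster-autoscaler
# 2. Inspect the status ConfigMap it keeps up to date
kubectl -n kube-system get configmap cluster-autoscaler-status -o yaml
# 3. Look for scale-up / scale-down Events
kubectl get events --all-namespaces | grep -i cluster-autoscaler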
I have the following:
2 pod replicas, load balanced.
Each replica has 2 containers sharing the network.
What I am looking for is a shared volume...
I am looking for a solution where the 2 pods, and each of the containers in the pods, can share a directory with read and write access. So if a container from pod 1 writes to it, containers from pod 2 will be able to access the new data.
Is this achievable with persistent volumes and PVCs? If so, what do I need, and are there pointers to more details on which filesystem would work best, static vs. dynamic provisioning, and storage classes?
Can the volume be an S3 bucket?
Thank you!
There are several options, depending on price and the effort needed:
The simplest but somewhat more expensive solution is to use EFS + NFS Persistent Volumes. However, EFS has serious throughput limitations; read here for details.
You can create a pod that runs an NFS server and again mount NFS Persistent Volumes into your pods (see the example here, and the sketch after this list). This requires more manual work and is not fully highly available: if the NFS-server pod fails, you will see some (hopefully short) downtime before it gets recreated.
For an HA configuration you can provision GlusterFS on Kubernetes. This requires the most effort but allows for great flexibility and speed.
Although mounting S3 into pods is somehow possible using awkward workarounds, that solution has numerous drawbacks and is not production grade overall. You can do it for testing purposes.
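For the NFS-server option, a minimal sketch of the PersistentVolume and claim might look like this (the server IP, export path and sizes are placeholders for whatever your NFS service actually exposes):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany               # lets several pods mount it read-write at the same time
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10             # placeholder: ClusterIP of the NFS-server Service
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # bind to the manually created PV, not a dynamic provisioner
  resources:
    requests:
      storage: 5Gi
Every pod that mounts shared-nfs-pvc then sees the same files, regardless of which node it runs on.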
Refer to https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for all available volume backends (you need ReadWriteMany compatibility).
As you can see there, AWSElasticBlockStore doesn't support it. You will need a third-party volume provider that supports ReadWriteMany.
Update: another answer, https://stackoverflow.com/a/51216537/923620, suggests that AWS EFS works too.
I would like to set up a Ray cluster to use Ray Tune over 4 GPUs on AWS, but each GPU belongs to a different member of our team. I have scoured the available resources for an answer and found nothing. Help?
In order to start a Ray cluster using instances that span multiple AWS accounts, you'll need to make sure that the AWS instances can communicate with each other over the relevant ports. To enable that, you will need to modify the AWS security groups for the instances (though be sure not to open up the ports to the whole world).
You can choose which ports are needed via the arguments --redis-port, --redis-shard-ports, --object-manager-port, and --node-manager-port to ray start on the head node, and just --object-manager-port and --node-manager-port on the non-head nodes. See the relevant documentation.
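A rough sketch of what that could look like on a Ray release that still uses these Redis flags (every port number and the head node address below are placeholders):
# on the head node: pin the ports so the security groups can allow exactly these
ray start --head \
  --redis-port=6379 \
  --redis-shard-ports=6380,6381 \
  --object-manager-port=12345 \
  --node-manager-port=12346
# on each non-head node, pointing at the head node's reachable IP
ray start --redis-address=<head-node-ip>:6379 \
  --object-manager-port=12345 \
  --node-manager-port=12346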
However, what you're trying to do sounds somewhat complex. It'd be much easier to use a single account if possible, in which case you could use the Ray autoscaler.
I got stuck while designing the deployment of my application on a Kubernetes cluster that runs on AWS.
Let's say we have a k8s cluster with one master and 3 worker nodes, and a replication controller runs 3 pods, one on each of the three nodes. How am I supposed to manage the storage for this? How will all three pods stay in sync? I tried a PVC with EBS, but it only mounts on the pod on a single node. Is there any other way to manage storage in Kubernetes using EBS? I also saw some blogs saying that we can use EFS. If anyone has any idea, please help me out.
Thanks
You can use EFS, but it might be too slow for you. Basically it is an NFS server that you can create a PV and a PVC for (see the sketch below); then you can mount it in all your pods.
If EFS is too slow, use an NFS server outside the cluster; don't install it in the cluster. You need an Amazon Linux AMI and not a Debian OS.
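If you do mount EFS as plain NFS, a minimal PersistentVolume sketch could look like the following (the file-system ID, region and size are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 10Gi              # required field; EFS itself grows elastically
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder EFS mount target DNS name
    path: /
A matching PVC with accessModes ReadWriteMany then lets pods on every node mount the same file system.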
I am guessing that by "How will all three pods stay in sync?" you mean sharing the same persistent volume between pods?
If so, please read about access modes:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
AWS EBS supports only 'ReadWriteOnce', meaning it can't be shared between pods running on different nodes.
I haven't tried EFS, but it looks like it does support 'ReadWriteMany': https://github.com/kubernetes-incubator/external-storage/blob/f4d25e43f96d3a43359546dd0c1011ed3d203ca4/aws/efs/deploy/claim.yaml#L9
I figured it out by using EFS. I followed this blog: https://ngineered.co.uk/blog/using-amazon-efs-to-persist-and-share-between-conatiners-data-in-kubernetes
My question is two-fold:
UPDATE:
I fixed number 1.
I had to specify the region in the config. I guess this is because my keys are associated with the east region by default.
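Presumably that amounts to adding something like the following to elasticsearch.yml (the exact setting name depends on the cloud-aws plugin version, and the region value is a placeholder):
cloud:
  aws:
    region: us-west-1        # placeholder: use the region the western nodes actually run in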
If anyone has an answer to 2 that would be great.
1) I am ultimately trying to set up a 4-node cluster (2 in each region). In the main region (us-east-1) the nodes see each other perfectly fine, but in the west they don't seem to see each other. I'd like to make sure they can see each other before I try multi-region (which I'm not entirely sure how to do yet). I've installed the plugin.
Basically, why do the nodes in a different region not see each other when the config is the same? I can telnet to/from each server on 9200/9300.
Here is my config:
cloud:
  aws:
    access_key:
    secret_key:
discovery:
  type: ec2
  ec2:
    groups: ELASTIC-SEARCH
2) Is there a way to designate a specific node to "hold all the data" and then distribute it among them all?
While it's not the answer you want: Don't do that.
It'll be much easier to have two clusters in two regions, and keep them in sync on your application layer. Also, Elasticsearch has introduced the concept of a Tribe-node in 1.0 to make this a bit easier.
Elasticsearch, like any distributed database, is very sensitive to network issues. In this case you're relying on the Internet working reliably. It tends not to.
The setup you suggest will be quite prone to split brains or outages. If you configure minimum master nodes to be a quorum, which you always should, the cluster will go down whenever there's a connection problem between the regions.
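For example, assuming all 4 nodes are master-eligible, a quorum is 3, so elasticsearch.yml would contain something like:
discovery.zen.minimum_master_nodes: 3
With that setting, whichever side of a cross-region split sees fewer than 3 master-eligible nodes refuses to elect a master, which prevents a split brain but also makes that side unavailable for the duration of the partition.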
We've written two articles on this topic that go into much more depth, which you may want to look into:
Elasticsearch in Production has a section on networking related issues.
Elasticsearch Internals: Networking Introduction describes the network topology of Elasticsearch. Specifically, you'll see just how many connections Elasticsearch needs to have working reliably.