I was trying to set up a Cassandra cluster with the Kubernetes 1.3.4 alpha feature PetSet, following the YAML file posted here:
http://blog.kubernetes.io/2016/07/thousand-instances-of-cassandra-using-kubernetes-pet-set.html
My Kubernetes cluster runs 1.3.4 in a bare-metal environment with 10 powerful physical machines. However, after I created the PetSet, kubectl get pv returns nothing.
Running kubectl get pvc, I get the following:
NAME                      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
cass-volume-cassandra-0   Pending                                      4h
cass-volume-cassandra-1   Pending                                      4h
cass-volume-cassandra-2   Pending                                      4h
Reading the README here: https://github.com/kubernetes/kubernetes/blob/b829d4d4ef68e64b9b7ae42b46877ee75bb2bfd9/examples/experimental/persistent-volume-provisioning/README.md
it says the persistent volume will be created automatically if Kubernetes is running on AWS, GCE or Cinder. Is there any way I can create such a persistent volume and PVC in a bare-metal environment?
Another question: if I run the Kubernetes cluster on a few EC2 machines in AWS, will the persistent volumes from AWS EBS be created automatically by these clauses in the YAML file, or do I have to allocate the EBS volumes first?
volumeClaimTemplates:
- metadata:
    name: cassandra-data
    annotations:
      volume.alpha.kubernetes.io/storage-class: anything
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 380Gi
A PetSet uses dynamic volume provisioning: the volumeClaimTemplates in the PetSet definition request storage from Kubernetes, and if storage is available the PVCs are bound and the PetSet's pods run. For now, however, Kubernetes only supports dynamic volume provisioning on cloud providers such as GCE or AWS.
If you use Kubernetes on a bare-metal cluster, another option is network storage such as Ceph or Gluster, which requires setting up that storage in your cluster.
If you want to use the bare-metal machines' local disks, the existing solution is the hostPath type of persistent volume, for example:
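A minimal sketch of such a hostPath PersistentVolume (the name, path and size are placeholders); keep in mind that hostPath ties the data to one particular node, so it is really only suitable for testing:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-local-pv-0        # placeholder name
spec:
  capacity:
    storage: 380Gi                  # must cover what the claim requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/cassandra-0         # placeholder directory on the node
You would need one such PV per pet, each pointing at a directory that already exists on the node.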
By default, the hostPath provisioner is disabled in cluster/local-up-cluster.sh. You can enable it by running ENABLE_HOSTPATH_PROVISIONER=true cluster/local-up-cluster.sh. This enables the provisioner and the PVs get created.
I've been trying to run Prometheus on Kubernetes without it needing a persistent volume. Why? Because I'm remote-writing the data gathered by this Prometheus to an AWS Managed Prometheus and would prefer not to have an EBS volume created with the cluster. I've tried playing with the Helm chart values, for instance doing this:
server:
  persistentVolume:
    ## If true, alertmanager will create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: false
and the same with alertmanager and pushgateway, but it still creates a PVC and PV.
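To be explicit, the combined override I'm applying looks roughly like this (key names follow the chart's default values file and may differ between chart versions, so treat it as a sketch):
server:
  persistentVolume:
    enabled: false
alertmanager:
  persistentVolume:
    enabled: false
pushgateway:
  persistentVolume:
    enabled: false
applied with something like helm upgrade --install prometheus prometheus-community/prometheus -f my-values.yaml (the release and file names are just placeholders).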
Could someone point me to docs or info for my use case, as I can't find any?
Or maybe explain what I'm missing or not understanding, as I'm fairly new to k8s, Helm and such.
Thanks!
I am creating an EKS-Anywhere local cluster by following these steps: Create local cluster | EKS Anywhere
I am getting the following error after executing this command:
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Performing setup and validations
Warning: The docker infrastructure provider is meant for local development and testing only
✅ Docker Provider setup is valid
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing storage class on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Error: failed to create cluster: error moving CAPI management from source to target: failed moving management cluster: Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Error: action failed after 10 attempts: failed to connect to the management cluster: action failed after 9 attempts: Get https://127.0.0.1:43343/api?timeout=30s: EOF
Upgrade your cert-manager
There is a known issue: clusterctl init fails when existing cert-manager runs 1.0+ · Issue #3836 · kubernetes-sigs/cluster-api
And there is a solution: ⚠️ Upgrade cert-manager to v1.1.0 by fabriziopandini · Pull Request #4013 · kubernetes-sigs/cluster-api
And it works:
Cluster API is using cert-manager v1.1.0 now, so this should not be a problem anymore
So, I'd suggest upgrading.
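Before upgrading, you can check which cert-manager version the bootstrap cluster is actually running with something like the following (assuming cert-manager is installed in the usual cert-manager namespace):
kubectl get deployment -n cert-manager cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'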
It could be a resource constraint on your Docker deployment. How much RAM and disk is Docker configured with? I have something like 16 GB RAM and a 60 GB disk, which is more than required, but it does work.
Is there a way to specify the EC2 instance type and storage from the CLI?
I've got this command with which I'm creating an instance:
docker-machine create -d amazonec2 --amazonec2-access-key abc --amazonec2-secret-key xyz --amazonec2-region eu-west-2 app-prod
This creates an instance with the default micro type and 16 GB of SSD, both of which I need to change.
I can change the instance type from the GUI, but when I change the storage that way it won't have the operating system and app installed.
Hence I'm asking how both can be specified from the CLI along with the other attributes.
Use --amazonec2-instance-type
See: Using Docker Machine with AWS - Scott's Weblog
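For example, a sketch of the same create command with both the instance type and the root disk size overridden (the amazonec2 driver's --amazonec2-root-size flag takes the size in GB; the values here are placeholders):
docker-machine create -d amazonec2 \
  --amazonec2-access-key abc \
  --amazonec2-secret-key xyz \
  --amazonec2-region eu-west-2 \
  --amazonec2-instance-type t2.medium \
  --amazonec2-root-size 64 \
  app-prod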
Trying to set up Vora 2 on an AWS kops k8s cluster.
The pod vsystem-vrep cannot start.
In the logfile on the node I see:
sudo cat vsystem-vrep_30.log
{"log":"2018-03-27 12:54:04.164349|+0000|INFO |Starting Kernel NFS Server||vrep|1|Start|server.go(41)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164897827Z"}
{"log":"2018-03-27 12:54:04.164405|+0000|INFO |Creating directory /exports||dir-handler|1|makeDir|dir_handler.go(40)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164919387Z"}
{"log":"2018-03-27 12:54:04.164423|+0000|INFO |Listening for private API on port 8738||vrep|18|func1|server.go(45)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.164923893Z"}
{"log":"2018-03-27 12:54:04.166992|+0000|INFO |Configuring Kernel NFS Server||vrep|1|configure|server.go(126)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.167109138Z"}
{"log":"2018-03-27 12:54:04.219089|+0000|INFO |Configuring Kernel NFS Server||vrep|1|configure|server.go(126)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.219235263Z"}
{"log":"2018-03-27 12:54:04.230256|+0000|FATAL|Error starting NFS server: RPC service for NFS server has not been correctly registered||vrep|1|main|server.go(51)\u001e\n","stream":"stderr","time":"2018-03-27T12:54:04.230526346Z"}
How can I solve this?
When installing Vora 2.1 on AWS with kops, you first need to set up an RWX storage class, which vsystem needs (the default AWS EBS storage class only supports ReadWriteOnce). During installation you need to point to that storage class with the parameter --vsystem-storage-class. Additionally, the parameter --vsystem-load-nfs-modules needs to be set. I suspect the error happened because that last parameter was missing.
An example of how a call of install.sh could look:
./install.sh --accept-license --deployment-type=cloud --namespace=xxx \
  --docker-registry=123456789.dkr.ecr.us-west-1.amazonaws.com \
  --vora-admin-username=xxx --vora-admin-password=xxx \
  --cert-domain=my.host.domain.com --interactive-security-configuration=no \
  --vsystem-storage-class=aws-efs --vsystem-load-nfs-modules
An RWX storage class can be created, for example, as follows:
1. Create an EFS file system in the same region as the kops cluster - see https://us-west-2.console.aws.amazon.com/efs/home?region=us-west-2#/filesystems
   - Create file system
   - Select the VPC of the kops cluster
   - Add the kops master and worker security groups to the mount target
   - Optionally give it a name (e.g. the same as your kops cluster, so you know what it is used for)
   - Use the default options for the rest
   Once created, note the DNS name (similar to fs-1234e567.efs.us-west-2.amazonaws.com).
2. Create a persistent volume and storage class for Vora, e.g. using YAML files similar to the ones below that point to the newly created EFS file system.
$ cat create_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vsystem-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs
  nfs:
    path: /
    server: fs-1234e567.efs.us-west-2.amazonaws.com
$ cat create_sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: xyz.com/aws-efs
kubectl create -f create_pv.yaml
kubectl create -f create_sc.yaml
# check that the newly created PV and storage class exist
kubectl get pv
kubectl get storageclasses
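It can also be worth checking, before installing, that the worker nodes can actually reach the EFS mount target (the security groups are the usual culprit). A hedged sketch of such a check, run on one of the nodes with the EFS DNS name and mount point as placeholders:
sudo mkdir -p /mnt/efs-test
sudo mount -t nfs4 -o nfsvers=4.1 fs-1234e567.efs.us-west-2.amazonaws.com:/ /mnt/efs-test
df -h /mnt/efs-test    # should show the EFS share
sudo umount /mnt/efs-test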
AWS Beanstalk can run applications from Docker containers.
As mentioned in the docs (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html) it's possible to write directory mappings to the EC2 volume in the Dockerrun.aws.json:
"Volumes": [
{
"HostDirectory": "/var/app/mydb",
"ContainerDirectory": "/etc/mysql"
}
But is it possible to mount a specific EBS volume?
For example, I need to run a DB in the Docker container and deploy it with Beanstalk. It's clear that I need persistence of the data, backup/restore for the DB, etc.
You can mount EBS volumes on any Beanstalk environment. These volumes will be available on the EC2 instances.
You can do this using ebextensions option settings. Create a file in your app source .ebextensions/01-ebs.config with the following contents:
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdj=:100,/dev/sdh=snap-51eef269,/dev/sdb=ephemeral0
The format of the mapping is device name=volume, where the device mappings are specified as a single string with individual mappings separated by commas. This example attaches, to all instances in the Auto Scaling group, an empty 100-GB Amazon EBS volume, an Amazon EBS volume created from the snapshot snap-51eef269, and an instance store volume.
Read more details about this option setting here.
Read more about ebextensions here.
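Attaching via BlockDeviceMappings gives the instance a raw block device; it still has to be formatted and mounted before a directory on it can be mapped into the container. A hedged sketch of .ebextensions commands for that (device name and mount point are placeholders, and on newer instance types the device may appear as /dev/xvdj rather than /dev/sdj):
commands:
  01_mkfs:
    # create a filesystem only if the device does not have one yet
    command: "blkid /dev/xvdj || mkfs -t ext4 /dev/xvdj"
  02_mkdir:
    command: "mkdir -p /var/app/mydb"
  03_mount:
    # mount unless something is already mounted there
    command: "mountpoint -q /var/app/mydb || mount /dev/xvdj /var/app/mydb"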
Once you have mounted the EBS volume on your Beanstalk environment's instances, you can use a volume mapping like the one above to map directories as needed.
I guess the leg100/docker-ebs-attach Docker container does what you want, i.e. it makes a particular existing EBS volume available. You can either copy the .py file and the relevant Dockerfile statements, or create a multi-container EB setup and mount the volume from this container.
BTW, I have tried to mount a new EBS volume as proposed by Rohit (plus commands to format and mount it) and it works, but Docker does not see the mount until the Docker daemon is restarted.