I'm in the process of migrating Elasticsearch to a Kubernetes PetSet, but there's a problem when provisioning the PersistentVolumeClaim.
I have two volumes in AWS, "vol-84094e30" and "vol-87e59f33". Both are 5 GiB gp2 volumes with 100/3000 IOPS in the us-west-2a Availability Zone.
I have a StorageClass definition for the volume type:
{
"kind": "StorageClass",
"apiVersion": "apps/v1alpha1",
"metadata": {
"name": "fast"
},
"provisioner": "kubernetes.io/aws-ebs",
"parameters": {
"type": "gp2",
"zone": "us-west-2a"
}
}
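A sketch of the same class under the storage.k8s.io/v1beta1 API group, which I believe is where StorageClass is served from on 1.4-era clusters (the apps/v1alpha1 group above may itself be part of the problem):
{
    "kind": "StorageClass",
    "apiVersion": "storage.k8s.io/v1beta1",
    "metadata": {
        "name": "fast"
    },
    "provisioner": "kubernetes.io/aws-ebs",
    "parameters": {
        "type": "gp2",
        "zone": "us-west-2a"
    }
}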
… two PersistentVolume definitions:
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "es-persistent-vol",
"labels": {
"type": "amazonEBS"
}
},
"spec": {
"capacity": {
"storage": "5Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"awsElasticBlockStore": {
"volumeID": "vol-87e59f33",
"fsType": "ext4"
}
}
}
{
"kind": "PersistentVolume",
"apiVersion": "v1",
"metadata": {
"name": "es-persistent-vol2",
"labels": {
"type": "amazonEBS"
}
},
"spec": {
"capacity": {
"storage": "5Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"awsElasticBlockStore": {
"volumeID": "vol-84094e30",
"fsType": "ext4"
}
}
}
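As I understand it, pre-created PersistentVolumes like these are bound to claims by matching capacity and access modes, not through the "fast" provisioner; a claim intended to bind one of them statically would omit the storage-class annotation, roughly like this (a sketch; the name es-claim-static is just for illustration):
{
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
        "name": "es-claim-static"
    },
    "spec": {
        "accessModes": [
            "ReadWriteOnce"
        ],
        "resources": {
            "requests": {
                "storage": "5Gi"
            }
        }
    }
}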
… an Elasticsearch service:
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "elasticsearch-logging",
"namespace": "kube-system",
"labels": {
"k8s-app": "elasticsearch-logging",
"kubernetes.io/name": "Elasticsearch"
}
},
"spec": {
"ports": [
{
"port": 9200,
"protocol": "TCP",
"targetPort": "db"
}
],
"selector": {
"k8s-app": "elasticsearch-logging"
}
}
}
… and an Elasticsearch PetSet:
{
"apiVersion": "apps/v1alpha1",
"kind": "PetSet",
"metadata": {
"name": "elasticsearch-logging-v1",
"namespace": "kube-system",
"labels": {
"k8s-app": "elasticsearch-logging",
"version": "v1",
"kubernetes.io/cluster-service": "true"
}
},
"spec": {
"serviceName": "elasticsearch-logging-v1",
"replicas": 2,
"template": {
"metadata": {
"annotations": {
"pod.beta.kubernetes.io/initialized": "true"
},
"labels": {
"app": "elasticsearch-data"
}
},
"spec": {
"containers": [
{
"image": "gcr.io/google_containers/elasticsearch:v2.4.1",
"name": "elasticsearch-logging",
"resources": {
"limits": {
"cpu": "1000m"
},
"requests": {
"cpu": "100m"
}
},
"ports": [
{
"containerPort": 9200,
"name": "db",
"protocol": "TCP"
},
{
"containerPort": 9300,
"name": "transport",
"protocol": "TCP"
}
],
"volumeMounts": [
{
"name": "es-persistent-storage",
"mountPath": "/data"
}
]
}
]
}
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "es-persistent-storage",
"annotations": {
"volume.beta.kubernetes.io/storage-class": "fast"
},
"labels": {
"type": "amazonEBS"
}
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "5Gi"
}
}
}
}
]
}
}
When I create all of these, the es-persistent-storage PersistentVolumeClaims (defined in the PetSet's volumeClaimTemplates) fail to provision their volumes (ProvisioningFailed - no volume plugin matched). I think it may be AWS-specific; I've looked on various forums for (what I thought was) the same issue, but the answers were not applicable to AWS.
Any help is much appreciated.
Here are the kubectl describe outputs:
$ kubectl describe pv
Name: es-persistent-vol
Labels: type=amazonEBS
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-87e59f33
FSType: ext4
Partition: 0
ReadOnly: false
No events.
Name: es-persistent-vol2
Labels: type=amazonEBS
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 5Gi
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-84094e30
FSType: ext4
Partition: 0
ReadOnly: false
No events.
$ kubectl describe pvc
Name: es-persistent-storage-elasticsearch-logging-v1-0
Namespace: kube-system
Status: Pending
Volume:
Labels: app=elasticsearch-data
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
18s 1s 3 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
Name: es-persistent-storage-elasticsearch-logging-v1-1
Namespace: kube-system
Status: Pending
Volume:
Labels: app=elasticsearch-data
Capacity:
Access Modes:
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
19s 2s 3 {persistentvolume-controller } Warning ProvisioningFailed no volume plugin matched
$ kubectl describe petset
Name: elasticsearch-logging-v1
Namespace: kube-system
Image(s): gcr.io/google_containers/elasticsearch:v2.4.1
Selector: app=elasticsearch-data
Labels: k8s-app=elasticsearch-logging,kubernetes.io/cluster-service=true,version=v1
Replicas: 2 current / 2 desired
Annotations: <none>
CreationTimestamp: Wed, 02 Nov 2016 13:06:23 +0000
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {petset } Normal SuccessfulCreate pvc: es-persistent-storage-elasticsearch-logging-v1-0
1m 1m 1 {petset } Normal SuccessfulCreate pet: elasticsearch-logging-v1-0
1m 1m 1 {petset } Normal SuccessfulCreate pvc: es-persistent-storage-elasticsearch-logging-v1-1
$ kubectl describe service el
Name: elasticsearch-logging
Namespace: kube-system
Labels: k8s-app=elasticsearch-logging
kubernetes.io/name=Elasticsearch
Selector: k8s-app=elasticsearch-logging
Type: ClusterIP
IP: 192.168.157.15
Port: <unset> 9200/TCP
Endpoints: <none>
Session Affinity: None
No events.
I am creating a Managed Nodegroup for EKS using CloudFormation.
I have an EC2 Launch Template with a CapacityReservationSpecification defined.
The Launch Template is linked to the Managed Nodegroup using CloudFormation. When the Managed Node Group is initialised, the Launch Template is copied, with an eks-*** prefix in the name, but the CapacityReservationSpecification is not carried over to the newly generated Launch Template. CloudFormation script example:
LaunchTemplate:
Resources:
LaunchTemplateAux:
Type: 'AWS::EC2::LaunchTemplate'
Properties:
LaunchTemplateData:
InstanceType: t3.medium
CapacityReservationSpecification:
CapacityReservationTarget:
CapacityReservationResourceGroupArn: {{reservation_group_arn}}
MetadataOptions:
HttpPutResponseHopLimit: 2
HttpTokens: optional
SecurityGroupIds:
- xxxxx
LaunchTemplateName: !Sub '${AWS::StackName}Aux'
NodeGroup:
ManagedNodeGroupAux:
Type: 'AWS::EKS::Nodegroup'
Properties:
AmiType: AL2_x86_64
ClusterName: test-cluster
Labels:
alpha.eksctl.io/cluster-name: test-cluster
alpha.eksctl.io/nodegroup-name: test-ng-aux
LaunchTemplate:
Id: !Ref LaunchTemplateAux
NodeRole: node-instance-role::NodeInstanceRole
NodegroupName: test-nodegroup
ScalingConfig:
DesiredSize: 1
MaxSize: 2
MinSize: 1
Subnets:
- xxx
The resulting launch templates are as follows, obtained using the command aws ec2 describe-launch-template-versions --launch-template-id <template-id>.
My Launch template Output:
{
"LaunchTemplateVersions": [
{
"LaunchTemplateId": "lt-xx",
"LaunchTemplateName": "test-cluster-ngAux",
"VersionNumber": 1,
"CreateTime": "2022-03-24T12:35:05+00:00",
"CreatedBy": "xxx:user/xxx",
"DefaultVersion": true,
"LaunchTemplateData": {
"InstanceType": "t3.medium",
"SecurityGroupIds": [
"sg-xxx"
],
"CapacityReservationSpecification": {
"CapacityReservationTarget": {
"CapacityReservationResourceGroupArn": "arn:aws:resource-groups:xxxxx:group/my-group"
}
},
"MetadataOptions": {
"HttpTokens": "optional",
"HttpPutResponseHopLimit": 2
}
}
}
]
}
Launch template copied by EKS API:
{
"LaunchTemplateVersions": [
{
"LaunchTemplateId": "lt-xxx",
"LaunchTemplateName": "eks-xxx",
"VersionNumber": 1,
"CreateTime": "2022-03-24T12:35:46+00:00",
"CreatedBy": "xxx:assumed-role/AWSServiceRoleForAmazonEKSNodegroup/EKS",
"DefaultVersion": true,
"LaunchTemplateData": {
"IamInstanceProfile": {
"Name": "xxx"
},
"ImageId": "ami-0c37e3f6cdf6a9007",
"InstanceType": "t3.medium",
"UserData": "xxx",
"TagSpecifications": [
{
"ResourceType": "volume",
"Tags": [
{
"Key": "eks:cluster-name",
"Value": "test-cluster"
},
{
"Key": "eks:nodegroup-name",
"Value": "test-cluster-ng-aux"
}
]
},
          {
            "ResourceType": "instance"
          }
        ],
        "SecurityGroupIds": [
          "xxx"
        ],
        "MetadataOptions": {
          "HttpTokens": "optional",
          "HttpPutResponseHopLimit": 2
        }
      }
    }
  ]
}
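To confirm the field was dropped from the copy, querying just that key should work (assuming the CLI's JMESPath --query option):
aws ec2 describe-launch-template-versions --launch-template-id <template-id> --query 'LaunchTemplateVersions[0].LaunchTemplateData.CapacityReservationSpecification'
This prints the reservation block for my template and null for the eks-*** copy.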
This seems to be a bug in AWS. They have informed me that they will fix it.
https://repost.aws/questions/QUaid5sRdmRu2OFi7SQyxytg#ANF8uJ5RulQtmrVMaabAKZZg
Hi, I'm very new to Istio/K8s, and I'm trying to make a service that I have, test-service, use a new VirtualService that I've created.
Here are the steps that I followed.
kubectl config set-context --current --namespace my-namespace
I create my VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: test-service
namespace: my-namespace
spec:
hosts:
- test-service
http:
- fault:
delay:
fixedDelay: 60s
percentage:
value: 100
route:
- destination:
host: test-service
port:
number: 9100
Then I apply it to K8s:
kubectl apply -f test-service.yaml
But now when I invoke test-service using gRPC, I can reach the service, but the fault with the delay is not happening.
I don't know in which log I can see whether this test-service is using the VirtualService that I created or not.
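(I have read that istioctl can show whether the sidecar actually received the route configuration, e.g. istioctl proxy-config routes <pod-name>.my-namespace, and that istioctl experimental describe pod <pod-name> reports which VirtualService applies to a pod, but I am not sure how to interpret the output.)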
Here is my gRPC Service config:
{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "test-service",
"namespace": "my-namespace",
"selfLink": "/api/v1/namespaces/my-namespace/services/test-service",
"uid": "8a9bc730-4125-4b52-b373-7958796b5df7",
"resourceVersion": "317889736",
"creationTimestamp": "2021-07-07T10:39:54Z",
"labels": {
"app": "test-service",
"app.kubernetes.io/managed-by": "Helm",
"version": "v1"
},
"annotations": {
"meta.helm.sh/release-name": "test-service",
"meta.helm.sh/release-namespace": "my-namespace"
},
"managedFields": [
{
"manager": "Go-http-client",
"operation": "Update",
"apiVersion": "v1",
"time": "2021-07-07T10:39:54Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:metadata": {
"f:annotations": {
".": {},
"f:meta.helm.sh/release-name": {},
"f:meta.helm.sh/release-namespace": {}
},
"f:labels": {
".": {},
"f:app": {},
"f:app.kubernetes.io/managed-by": {},
"f:version": {}
}
},
"f:spec": {
"f:ports": {
".": {},
"k:{\"port\":9100,\"protocol\":\"TCP\"}": {
".": {},
"f:port": {},
"f:protocol": {},
"f:targetPort": {}
}
},
"f:selector": {
".": {},
"f:app": {}
},
"f:sessionAffinity": {},
"f:type": {}
}
}
},
{
"manager": "dashboard",
"operation": "Update",
"apiVersion": "v1",
"time": "2022-01-14T15:51:28Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:spec": {
"f:ports": {
"k:{\"port\":9100,\"protocol\":\"TCP\"}": {
"f:name": {}
}
}
}
}
}
]
},
"spec": {
"ports": [
{
"name": "test-service",
"protocol": "TCP",
"port": 9100,
"targetPort": 9100
}
],
"selector": {
"app": "test-service"
},
"clusterIP": "****************",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}
According to the Istio documentation, fault injection only works for HTTP traffic, not for gRPC:
https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPFaultInjection
We are using EKS version v1.17.17-eks-087e67
with the installed aws-ebs-csi-driver component versions:
aws-ebs-csi-driver:v1.1.3
csi-provisioner:v2.1.1
csi-attacher:v3.1.0
csi-snapshotter:v3.0.3
csi-resizer:v1.0.0
When we create a PVC, the driver cannot mount the volume. As far as I can see, the AWS volume is continuously being created and deleted (from CloudTrail):
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AROAV5QH66QYOM4FMMPFI:1631165222580844502",
"arn": "arn:aws:sts::XXXXXXXXXX:assumed-role/EKSEBSCSIServiceRole-cluster01-eks-external-sandbox/XXXXXXXXXXXXXXXXXXXXXXXX",
"accountId": "XXXXXXXXXX",
"accessKeyId": "ASIAV5QH66QYFCKRZG43",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "AROAV5QH66QYOM4FMMPFI",
"arn": "arn:aws:iam::XXXXXXXXXX:role/eks/EKSEBSCSIServiceRole-cluster01-eks-external-sandbox",
"accountId": "XXXXXXXXXX",
"userName": "EKSEBSCSIServiceRole-cluster01-eks-external-sandbox"
},
"webIdFederationData": {
"federatedProvider": "arn:aws:iam::XXXXXXXXXX:oidc-provider/oidc.eks.eu-central-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXX",
"attributes": {}
},
"attributes": {
"creationDate": "2021-09-09T05:27:03Z",
"mfaAuthenticated": "false"
}
}
},
"eventTime": "2021-09-09T06:11:12Z",
"eventSource": "ec2.amazonaws.com",
"eventName": "CreateVolume",
"awsRegion": "eu-central-1",
"sourceIPAddress": "18.157.68.62",
"userAgent": "aws-sdk-go/1.35.37 (go1.15.6; linux; amd64) exec-env/aws-ebs-csi-driver-v1.1.3",
"requestParameters": {
"size": "8",
"zone": "eu-central-1a",
"volumeType": "gp2",
"encrypted": true,
"tagSpecificationSet": {
"items": [
{
"resourceType": "volume",
"tags": [
{
"key": "ebs.csi.aws.com/cluster",
"value": "true"
},
{
"key": "CSIVolumeName",
"value": "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2"
},
{
"key": "kubernetes.io/created-for/pv/name",
"value": "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2"
},
{
"key": "kubernetes.io/created-for/pvc/name",
"value": "data-postgres-postgresql-0"
},
{
"key": "kubernetes.io/created-for/pvc/namespace",
"value": "default"
}
]
}
]
}
},
"responseElements": {
"requestId": "5404a63c-a8d6-4bfa-b18f-ce1fba1060ee",
"volumeId": "vol-032b5c6671123cc35",
"size": "8",
"zone": "eu-central-1a",
"status": "creating",
"createTime": 1631167872000,
"volumeType": "gp2",
"iops": 100,
"encrypted": true,
"masterEncryptionKeyId": "arn:aws:kms:eu-central-1:XXXXXXXXXX:key/ef3b2237-00c3-4fd0-b556-91cda7f7db95",
"tagSet": {
"items": [
{
"key": "ebs.csi.aws.com/cluster",
"value": "true"
},
{
"key": "CSIVolumeName",
"value": "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2"
},
{
"key": "kubernetes.io/created-for/pv/name",
"value": "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2"
},
{
"key": "kubernetes.io/created-for/pvc/name",
"value": "data-postgres-postgresql-0"
},
{
"key": "kubernetes.io/created-for/pvc/namespace",
"value": "default"
}
]
},
"multiAttachEnabled": false
},
"requestID": "5404a63c-a8d6-4bfa-b18f-ce1fba1060ee",
"eventID": "0941702c-119c-45fb-8c9e-6ef8918db6da",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "XXXXXXXXXX",
"eventCategory": "Management"
}
"eventTime": "2021-09-09T06:11:15Z",
"eventSource": "ec2.amazonaws.com",
"eventName": "DeleteVolume",
"awsRegion": "eu-central-1",
"sourceIPAddress": "x.x.x.x",
"userAgent": "aws-sdk-go/1.35.37 (go1.15.6; linux; amd64) exec-env/aws-ebs-csi-driver-v1.1.3",
"errorCode": "Client.InvalidVolume.NotFound",
"errorMessage": "The volume 'vol-032b5c6671123cc35' does not exist.",
"requestParameters": {
"volumeId": "vol-032b5c6671123cc35"
},
"responseElements": null,
"requestID": "3cf2ce00-5845-436b-8470-3e1918dd24af",
"eventID": "e5fbd13c-fc72-4cc1-9468-2a928d52a186",
"readOnly": false,
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "XXXXXXXXXX",
"eventCategory": "Management"
}
But eventually the provisioner could not find this volume:
I0909 06:11:12.088851 1 controller.go:1332] provision "default/data-postgres-postgresql-0" class "ebs-default": started
I0909 06:11:12.089028 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-postgres-postgresql-0", UID:"27fa1e04-c99d-48d2-9efa-0633ee3669d2", APIVersion:"v1", ResourceVersion:"145344106", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/data-postgres-postgresql-0"
I0909 06:11:15.565942 1 controller.go:1099] Final error received, removing PVC 27fa1e04-c99d-48d2-9efa-0633ee3669d2 from claims in progress
W0909 06:11:15.565962 1 controller.go:958] Retrying syncing claim "27fa1e04-c99d-48d2-9efa-0633ee3669d2", failure 18
E0909 06:11:15.565981 1 controller.go:981] error syncing claim "27fa1e04-c99d-48d2-9efa-0633ee3669d2": failed to provision volume with StorageClass "ebs-default": rpc error: code = Internal desc = Could not create volume "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-032b5c6671123cc35' does not exist.
status code: 400, request id: a396c26c-71c6-4c88-8f2f-ebb3aa492447
I0909 06:11:15.566164 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"data-postgres-postgresql-0", UID:"27fa1e04-c99d-48d2-9efa-0633ee3669d2", APIVersion:"v1", ResourceVersion:"145344106", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "ebs-default": rpc error: code = Internal desc = Could not create volume "pvc-27fa1e04-c99d-48d2-9efa-0633ee3669d2": failed to get an available volume in EC2: InvalidVolume.NotFound: The volume 'vol-032b5c6671123cc35' does not exist.
status code: 400, request id: a396c26c-71c6-4c88-8f2f-ebb3aa492447
Here is the policy from the AWS role for the annotated service account:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:AttachVolume",
"ec2:CreateSnapshot",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:DeleteSnapshot",
"ec2:DeleteTags",
"ec2:DeleteVolume",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInstances",
"ec2:DescribeSnapshots",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DescribeVolumesModifications",
"ec2:DetachVolume",
"ec2:ModifyVolume"
],
"Resource": "*"
}
]
}
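One thing I notice is that this policy contains no kms:* actions, while the StorageClass below requests encrypted volumes with a customer-managed KMS key (see masterEncryptionKeyId in the CloudTrail event). If the role cannot use that key, EC2 apparently creates the volume and then removes it almost immediately, which would match the CreateVolume/DeleteVolume pattern above. The aws-ebs-csi-driver example policy grants KMS actions along these lines (a sketch; the exact action list should be checked against the driver's documentation):
{
    "Effect": "Allow",
    "Action": [
        "kms:CreateGrant",
        "kms:Decrypt",
        "kms:DescribeKey",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:ReEncrypt*"
    ],
    "Resource": "arn:aws:kms:eu-central-1:XXXXXXXXXX:key/ef3b2237-00c3-4fd0-b556-91cda7f7db95"
}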
Here is the StorageClass:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
name: ebs-default
parameters:
csi.storage.k8s.io/fstype: ext4
encrypted: "true"
type: gp2
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
We are running workers in the eu-central-1 region across 3 AZs.
I wrote a document; please let me know if it helps.
Follow it from Step 2 of this GitHub page: https://github.com/parjun8840/ekscsidriver/blob/main/README.md
When I deploy the below AWS CloudFormation script, I am getting the following error: "Encountered unsupported property InstanceGroups"
I have used InstanceGroups in the past without any issues. Here is an example of how others are using it: https://noise.getoto.net/tag/amazon-emr/
I am using EMR 5.17.0, which I have used in previous setups.
{
"Description": "Spark ETL EMR CloudFormation",
"Resources": {
"EMRCluster": {
"Type": "AWS::EMR::Cluster",
"Properties": {
"Applications": [
{
"Name": "Hadoop"
},
{
"Name": "Spark"
},
{
"Name": "Ganglia"
},
{
"Name": "Zeppelin"
}
],
"AutoScalingRole": "EMR_AutoScaling_DefaultRole",
"BootstrapActions": [
{
"Path": "s3://somepath/scripts/install_pip36_dependencies.sh",
"Args": [
"relay==0.0.1"
],
"Name": "install_pip36_dependencies"
}
],
"Configurations": [
{
"Classification": "yarn-site",
"Properties": {
"yarn.scheduler.fair.preemption": "False",
"yarn.resourcemanager.am.max-attempts": "1"
},
"Configurations": []
},
{
"Classification": "core-site",
"Properties": {
"fs.s3.canned.acl": "BucketOwnerFullControl"
},
"Configurations": []
}
],
"EbsRootVolumeSize": 10,
"InstanceGroups": [
{
"Name": "Master",
"Market": "ON_DEMAND",
"InstanceRole": "MASTER",
"InstanceType": "m5.2xlarge",
"InstanceCount": 1,
"EbsConfiguration": {
"EbsBlockDeviceConfigs": [
{
"VolumeSpecification": {
"SizeInGB": 100,
"VolumeType": "64"
},
"VolumesPerInstance": 1
}
],
"EbsOptimized": "True"
}
},
{
"Name": "Core",
"Market": "ON_DEMAND",
"InstanceGroupType": "CORE",
"InstanceType": "m5.2xlarge",
"InstanceCount": 5,
"EbsConfiguration": {
"EbsBlockDeviceConfigs": [
{
"VolumeSpecification": {
"SizeInGB": 100,
"VolumeType": "gp2"
},
"VolumesPerInstance": 1
}
],
"EbsOptimized": "True"
}
},
{
"Name": "Task - 3",
"Market": "ON_DEMAND",
"InstanceGroupType": "TASK",
"InstanceType": "m5.2xlarge",
"InstanceCount": 2,
"EbsConfiguration": {
"EbsBlockDeviceConfigs": [
{
"VolumeSpecification": {
"SizeInGB": 32,
"VolumeType": "gp2"
},
"VolumesPerInstance": 1
}
],
"EbsOptimized": "True"
}
}
],
"LogUri": "s3://somepath/emr-logs/",
"Name": "EMR CF",
"ReleaseLabel": "emr-5.17.0",
"ServiceRole": "EMR_DefaultRole",
"VisibleToAllUsers": "True"
}
}
}
}
When the CF script is loaded, it should create an AWS EMR cluster.
AWS recommends that you set MasterInstanceGroup and CoreInstanceGroup under Instances.
Here is an example of the Instances property of an EMR cluster with Hadoop, HBase, Spark, Ganglia and ZooKeeper:
Instances:
Ec2KeyName: !Ref KeyName
Ec2SubnetId: !ImportValue MySubnetPrivateA
EmrManagedMasterSecurityGroup: !ImportValue EmrMasterSgId
AdditionalMasterSecurityGroups:
- !ImportValue EmrMasterAdditionalSgId
EmrManagedSlaveSecurityGroup: !ImportValue EmrSlaveSgId
AdditionalSlaveSecurityGroups:
- !ImportValue EmrSlaveAdditionalSgId
ServiceAccessSecurityGroup: !ImportValue EmrServiceSgId
MasterInstanceGroup:
InstanceCount: 1
InstanceType: !Ref MasterInstanceType
Market: ON_DEMAND
Name: Master
CoreInstanceGroup:
InstanceCount: !Ref NumberOfCoreInstances
InstanceType: !Ref CoreInstanceType
Market: ON_DEMAND
Name: Core
TerminationProtected: false
VisibleToAllUsers: true
JobFlowRole: !Ref EMRClusterinstanceProfile
ReleaseLabel: !Ref ReleaseLabel
LogUri: !Ref LogUri
Name: !Ref EMRClusterName
AutoScalingRole: EMR_AutoScaling_DefaultRole
ServiceRole: !Ref EMRClusterServiceRole
Tags:
-
Key: "cluster_name"
Value: "master.emr.my.com"
You can see the complete AWS template here.
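Note that Instances only accepts MasterInstanceGroup and CoreInstanceGroup; a task group is declared as a separate AWS::EMR::InstanceGroupConfig resource attached to the cluster. A sketch, assuming the cluster resource is named EMRCluster as in the question:
TaskInstanceGroup:
  Type: 'AWS::EMR::InstanceGroupConfig'
  Properties:
    InstanceCount: 2
    InstanceRole: TASK
    InstanceType: m5.2xlarge
    Market: ON_DEMAND
    Name: Task
    JobFlowId: !Ref EMRCluster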
I have a Kubernetes cluster in AWS with EC2 worker nodes in the following AZs, along with corresponding PersistentVolumes in each AZ:
us-west-2a
us-west-2b
us-west-2c
us-west-2d
My problem is that I want to create a Deployment with a volume mount that references a PersistentVolumeClaim and guarantee they land in the same AZ, because right now it is luck whether the Deployment and the PersistentVolumeClaim end up in the same AZ. If they don't land in the same AZ, the Deployment fails to find the volume mount.
I create 4 PersistentVolumes by manually creating EBS volumes in each AZ and copying the IDs into the specs:
{
"apiVersion": "v1",
"kind": "PersistentVolume",
"metadata": {
"name": "pv-2"
},
"spec": {
"capacity": {
"storage": "1Gi"
},
"accessModes": [
"ReadWriteOnce"
],
"persistentVolumeReclaimPolicy": "Retain",
"awsElasticBlockStore": {
"volumeID": "vol-053f78f0c16e5f20e",
"fsType": "ext4"
}
}
}
{
"kind": "PersistentVolumeClaim",
"apiVersion": "v1",
"metadata": {
"name": "mydata",
"namespace": "staging"
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "10Mi"
}
}
}
}
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "myapp",
"namespace": "default",
"labels": {
"app": "myapp"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app": "myapp"
}
},
"template": {
"metadata": {
"labels": {
"app": "myapp"
}
},
"spec": {
"containers": [
{
"name": "hello",
"image": "centos:7",
"volumeMounts": [ {
"name":"mydata",
"mountPath":"/etc/data/"
} ]
}
],
"volumes": [ {
"name":"mydata",
"persistentVolumeClaim":{
"claimName":"mydata"
}
}]
}
}
}
}
You could try setting the region and AvailabilityZone annotations on the PersistentVolume, as mentioned here and here.
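For example, labeling each pre-created PV with the zone and region labels that the scheduler's volume zone predicate understands should keep the pod in the matching AZ. A sketch for the us-west-2a volume, assuming the failure-domain.beta.kubernetes.io labels used by that generation of Kubernetes (I believe these are labels rather than annotations on the PV):
{
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {
        "name": "pv-2",
        "labels": {
            "failure-domain.beta.kubernetes.io/region": "us-west-2",
            "failure-domain.beta.kubernetes.io/zone": "us-west-2a"
        }
    },
    "spec": {
        "capacity": {
            "storage": "1Gi"
        },
        "accessModes": [
            "ReadWriteOnce"
        ],
        "persistentVolumeReclaimPolicy": "Retain",
        "awsElasticBlockStore": {
            "volumeID": "vol-053f78f0c16e5f20e",
            "fsType": "ext4"
        }
    }
}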