Migrating StorageClass from gp2 to gp3 - AWS EKS - kubectl

We are using EKS and we have a StatefulSet that references a storage class in its volumeClaimTemplates.
Our storage classes are of type gp2 and use the EBS CSI provisioner (ebs.csi.aws.com).
We need to convert the storage classes from gp2 to gp3 without any downtime. Is that possible? We are using kubectl and Kustomize. When we tried to change the type using a Kustomize layer, it gave us an error:
Forbidden : Updates to parameters are forbidden.
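For reference, this is roughly the kind of change we attempted (names are placeholders); the API server rejects it because a StorageClass's parameters are immutable once created:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc              # existing gp2 StorageClass we tried to edit in place
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                 # changed from gp2, rejected with the Forbidden error
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer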
I also referred to this documentation, which includes steps to change the provisioner:
https://aws.amazon.com/blogs/containers/migrating-amazon-eks-clusters-from-gp2-to-gp3-ebs-volumes/
I was looking to understand whether we can change the PVC storage class type from gp2 to gp3 as easily as we can for a regular volume in the AWS console.
Thank you

Related

Managing volume rollbacks in K8s using persistent volumes

I have a Kubernetes deployment managed by a Helm chart that I am planning an upgrade of. The app has 2 persistent volumes attached, which are EBS volumes in AWS. If the deployment goes wrong and needs rolling back, I might also need to roll back the EBS volumes. How would one manage that in K8s? I can easily create the volume manually in AWS from the snapshot I've taken pre-deployment, but for the deployment to use it, would I need to edit the PV YAML file to point to my new volume ID? Or would I need to create a new PV using the volume ID and a new PVC, and then edit my deployment to use that claim name?
First you need to define a storage class with reclaimPolicy: Delete
https://kubernetes.io/docs/concepts/storage/storage-classes/
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
Then, in your Helm chart, use that storage class. When you delete the Helm release, the persistent volume claim will be deleted, and because the storage class's reclaimPolicy is Delete, the corresponding persistent volume will also be deleted.
Be careful though: once the PV is deleted, you will not be able to recover that volume's data. There is no "recycle bin".
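If you do need to roll back to a volume restored from a pre-deployment snapshot, one option is to pre-bind a new PV/PVC pair to that volume and point the deployment at the new claim. A minimal sketch, assuming the in-tree kubernetes.io/aws-ebs provisioner; the volume ID, names, and size are placeholders:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: restored-pv                      # placeholder name
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain  # keep the EBS volume even if the claim is deleted
  storageClassName: standard
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0      # placeholder: the volume created from the snapshot
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc                     # reference this claim name from the deployment
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: restored-pv                # pre-bind to the PV above
  resources:
    requests:
      storage: 20Gi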

How do I resolve this circular reference in AWS CloudFormation?

I’m creating a generic stack template using CloudFormation, and I’ve hit a rather annoying circular reference.
Overall Requirements:
I want to be able to provision (a lot of other things, but mainly) an ECS Cluster Service that auto-scales using capacity providers, the capacity providers are using auto-scaling groups, and the auto scaling groups are using a launch template.
I don’t want static resource names. This causes issues if a resource has to be re-created due to an update and that particular resource has to have a unique name.
Problem:
Without the launch template “knowing the cluster name” (via UserData) the service tasks get stuck in a PROVISIONING state.
So we have the first dependency chain:
Launch Template <- Cluster (Name)
But the Cluster has a dependency chain of:
Cluster <- Capacity Provider <- AutoScalingGroup <- Launch Template
Thus, we have a circular reference: Cluster <-> Launch Template
——
One way I can think of to resolve this is to use another resource's name plus a suffix (a resource that lives outside of this dependency chain, e.g., the target group) as the Cluster's name; that way the name is not static, and it also removes the circular reference.
My question is: is there a better way?
It feels like there should be a resource that the cluster can subscribe to and the ec2 instance can publish to, which would remove the circular dependency as well as the need to assign resource names.
There is no such resource to break the dependency, and the cluster name must be pre-defined. This has already been recognized as a problem and it's part of an open GitHub issue:
[ECS] Full support for Capacity Providers in CloudFormation.
One of the issues noted is:
Break circular dependency so that unnamed clusters can be created
At the moment, one workaround noted there is to partially predefine the name, e.g.:
ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Sub ${AWS::StackName}-ECSCluster
LaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        echo ECS_CLUSTER=${AWS::StackName}-ECSCluster >> /etc/ecs/ecs.config
Alternatively, one could try to solve this by developing a custom resource in the form of a Lambda function. You could probably create your unnamed cluster with a launch template (LT) that uses a dummy cluster name. Then, once the cluster is running, you would use the custom resource to create a new version of the LT with the updated cluster name and refresh your auto scaling group to use the new LT version. I'm not sure if this would work, but it's something that can be considered at least.
Sharing an update from the GitHub issue. The circular dependency has been broken by introducing a new resource: Cluster Capacity Provider Associations.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-clustercapacityproviderassociations.html
To use it in my example, you:
Create Cluster (without specifying name)
Create Launch Template (using Ref to get cluster name)
Create Auto Scaling Group(s)
Create Capacity Provider(s)
Create Cluster Capacity Provider Associations <- This is new!
The one gotcha is that you have to wait for the new association to be created before you can create a service on the cluster. So be sure that your service "DependsOn" these associations!
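A trimmed sketch of how those pieces can fit together (the AMI, subnet, instance sizes, and task definition are placeholders, and most properties are omitted for brevity):
Resources:
  Cluster:
    Type: AWS::ECS::Cluster              # no ClusterName, so CloudFormation generates one

  LaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # placeholder ECS-optimized AMI
        InstanceType: t3.medium
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash
            echo ECS_CLUSTER=${Cluster} >> /etc/ecs/ecs.config

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '0'
      MaxSize: '3'
      VPCZoneIdentifier:
        - subnet-0123456789abcdef0       # placeholder subnet
      LaunchTemplate:
        LaunchTemplateId: !Ref LaunchTemplate
        Version: !GetAtt LaunchTemplate.LatestVersionNumber

  CapacityProvider:
    Type: AWS::ECS::CapacityProvider
    Properties:
      AutoScalingGroupProvider:
        AutoScalingGroupArn: !Ref AutoScalingGroup
        ManagedScaling:
          Status: ENABLED

  ClusterCPAssociations:
    Type: AWS::ECS::ClusterCapacityProviderAssociations
    Properties:
      Cluster: !Ref Cluster
      CapacityProviders:
        - !Ref CapacityProvider
      DefaultCapacityProviderStrategy:
        - CapacityProvider: !Ref CapacityProvider
          Weight: 1

  Service:
    Type: AWS::ECS::Service
    DependsOn: ClusterCPAssociations     # wait for the association before creating the service
    Properties:
      Cluster: !Ref Cluster
      DesiredCount: 1
      TaskDefinition: !Ref TaskDefinition  # assumed to be defined elsewhere in the template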

How to set EBS root volume to persist for an EC2 instance within Elastic Beanstalk using Terraform

I have written Terraform to manage my AWS Elastic Beanstalk environment and application, using the default docker solution stack for my region.
The EC2 instance created by auto scaling has the standard/default EBS root volume, which has "DeleteOnTermination" set to "true", meaning that when the instance is replaced or destroyed, the volume (and hence all the data) is also destroyed.
I would like to change this to false and persist the volume.
For some reason, I cannot find valid Terraform documentation on how to change this setting so that the root volume persists. The closest thing I can find is that a "root_block_device" mapping can be supplied to the autoscaling launch configuration to update it. Unfortunately, it is unclear from the documentation how exactly to use this. If I create a launch configuration resource, how do I use it within my Beanstalk definition? I think I'm on the right track here but need some guidance.
Do I create the autoscaling resource and then reference it within my beanstalk definition? Or do I add a particular setting to my beanstalk definition with this mapping inside? Thanks for any help or example you can provide.
This can be done at the EB level through Resources.
Specifically, you have to modify the settings of the AWSEBAutoScalingLaunchConfiguration resource that EB uses to launch your instances.
Here is an example of such a config file:
.ebextensions/40_ebs_delete_on_termination.config
Resources:
  AWSEBAutoScalingLaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            DeleteOnTermination: false
Then to verify the setting, you can use AWS CLI:
aws ec2 describe-volumes --volume-ids <id-of-your-eb-instance-volume>
or simply terminate the environment and check the Volumes in EC2 console.
You can use the ebs_block_device block within the aws_instance resource. By default this will delete the EBS volume when the instance is terminated.
https://www.terraform.io/docs/providers/aws/r/instance.html#block-devices
You have to use the above instead of the aws_volume_attachment resource.
delete_on_termination - (Optional) Whether the volume should be destroyed on instance termination (Default: true).

Add KeyName to EMR cluster in Cloud Formation template

I am creating an AWS EMR cluster running Spark using a Cloud Formation template. I am using Cloud Formation because that's how we create reproducible environments for our applications.
When I create the cluster from the web dashboard, one of the options is to add a key pair. This is necessary in order to access the nodes of the cluster via SSH. http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/EMR_CreateJobFlow.html
I can't see how to do the same when using Cloud Formation templates.
The template structure (see below) doesn't have the same attribute.
Type: "AWS::EMR::Cluster"
Properties:
AdditionalInfo: JSON object
Applications:
- Applications
BootstrapActions:
- Bootstrap Actions
Configurations:
- Configurations
Instances:
JobFlowInstancesConfig
JobFlowRole: String
LogUri: String
Name: String
ReleaseLabel: String
ServiceRole: String
Tags:
- Resource Tag
VisibleToAllUsers: Boolean
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-emr-cluster.html#d0e76479
I had a look at the attribute JobFlowRole, which is a reference to an instance profile (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-instanceprofile.html). Again, no sign of the KeyName.
Has anyone solved this problem before?
Thanks,
Marco
I solved this problem. I was just confused by the lack of naming consistency in Cloud Formation templates.
What is generally referred to as KeyName becomes Ec2KeyName under the JobFlowInstancesConfig.
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-emr-cluster-jobflowinstancesconfig.html#cfn-emr-cluster-jobflowinstancesconfig-ec2keyname
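For example, a sketch of where the key pair name goes (the key pair name, release label, and instance values are placeholders; the roles are the EMR defaults):
Type: AWS::EMR::Cluster
Properties:
  Name: my-emr-cluster
  ReleaseLabel: emr-5.33.0              # placeholder release
  Applications:
    - Name: Spark
  JobFlowRole: EMR_EC2_DefaultRole
  ServiceRole: EMR_DefaultRole
  Instances:
    Ec2KeyName: my-key-pair             # the EC2 key pair used for SSH access to the nodes
    MasterInstanceGroup:
      InstanceCount: 1
      InstanceType: m4.large
    CoreInstanceGroup:
      InstanceCount: 2
      InstanceType: m4.large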

change ElastiCache node DNS record in cloud formation template

I need to create a CNAME record for an ElastiCache cluster. However, I built a Redis cluster and there is only one node. As far as I can tell, there is no ConfigurationEndpoint.Address for a Redis cluster. Is there any way to change the DNS name for the node in the cluster, and how would I do it?
Currently the template looks like:
"ElastiCahceDNSRecord" : {
"Type" : "AWS::Route53::RecordSetGroup",
"Properties" : {
"HostedZoneName" : "example.com.",
"Comment" : "Targered to ElastiCache",
"RecordSets" : [{
"Name" : "elche01.example.com.",
"Type" : "CNAME",
"TTL" : "300",
"ResourceRecords" : [
{
"Fn::GetAtt": [ "myelasticache", "ConfigurationEndpoint.Address" ]
}
]
}]
}
}
For folks coming to this page for a solution: there is now a way to get the Redis endpoint directly from within the CFN template.
You can now get RedisEndpoint.Address from AWS::ElastiCache::CacheCluster or PrimaryEndPoint.Address from AWS::ElastiCache::ReplicationGroup.
Per the documentation (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticache-cache-cluster.html):
RedisEndpoint.Address - The DNS address of the configuration endpoint for the Redis cache cluster.
RedisEndpoint.Port - The port number of the configuration endpoint for the Redis cache cluster.
or
Per the documentation (http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html):
PrimaryEndPoint.Address - The DNS address of the primary read-write cache node.
PrimaryEndPoint.Port - The number of the port that the primary read-write cache engine is listening on.
An example CFN (other bits not included):
Resources:
  DnsRedis:
    Type: 'AWS::Route53::RecordSetGroup'
    Properties:
      HostedZoneName: 'a.hosted.zone.name.'
      RecordSets:
        - Name: 'a.record.set.name'
          Type: CNAME
          TTL: '300'
          ResourceRecords:
            - !GetAtt
              - RedisCacheCluster
              - RedisEndpoint.Address
    DependsOn: RedisCacheCluster
  RedisCacheCluster:
    Type: 'AWS::ElastiCache::CacheCluster'
    Properties:
      ClusterName: cluster-name-redis
      AutoMinorVersionUpgrade: 'true'
      AZMode: single-az
      CacheNodeType: cache.t2.small
      Engine: redis
      EngineVersion: 3.2.4
      NumCacheNodes: 1
      CacheSubnetGroupName: !Ref ElastiCacheSubnetGroupId
      VpcSecurityGroupIds:
        - !GetAtt
          - elasticacheSecGrp
          - GroupId
Looks like the ConfigurationEndpoint.Address is only supported for Memcached clusters, not for Redis. Please see this relevant discussion in the AWS forums.
Also, the AWS Auto Discovery docs (still) state:
Note
Auto Discovery is only available for cache clusters running the Memcached engine. Redis cache clusters are single node clusters, thus there is no need to identify and track all the nodes in a Redis cluster.
Looks like your 'best' solution is to query the individual endpoint(s) in use, in order to determine the addresses to connect to, using AWS::CloudFormation::Init as is suggested in the AWS forums thread.
UPDATE
As #slimdrive pointed out below, this IS now possible, through the AWS::ElastiCache::CacheCluster. Please read further below for more details.
You should be able to use PrimaryEndPoint.Address instead of ConfigurationEndpoint.Address in the template provided to get the DNS address of the primary read-write cache node as documented on the AWS::ElastiCache::ReplicationGroup page.
This can be extremely confusing: depending on what you're trying to do, you use either ConfigurationEndpoint or PrimaryEndpoint. I'm adding my findings here since this was one of the first posts I found when searching. I'll also detail some other issues I've had with the ElastiCache Redis engine setup in CloudFormation. I was trying to set up a CloudFormation resource of type AWS::ElastiCache::ReplicationGroup.
Let me preface this with the fact that I had previously set up a clustered Redis ElastiCache instance using a t2.micro node type with no problems. In fact, I received an error from the node-redis npm package saying that clusters weren't supported, so I also implemented the redis-clustr wrapper around it. Anyway, all of that was working fine.
We then moved forward with trying to create a CloudFormation template for this, and I ran into all sorts of limitations that the AWS console UI must be hiding from people. In chronological order of how I ran into the problems, here were my struggles:
t2.micro instances are not supported with auto-failover. So I set AutomaticFailoverEnabled to false.
Fix: t2.micro instances actually can use auto-failover. Use the parameter group that has clustered mode enabled. The default one for me was default.redis3.2.cluster.on (I used version 3.2.6, as this is the most recent that supports encryption at rest and in transit). The parameter group cannot be changed after the instance is created, so don't forget this part.
We received an error from the redis-clustr/node-redis package: this instance has cluster support disabled.
(This is how I found the parameter group needed the value on)
We received an error in the CF template that cluster mode cannot be used if auto-failover is off.
This is what made me try using a t2.micro instance again, since I knew I had auto-failover turned on in my other instance and was using a t2.micro instance. Sure enough, this combination does work together.
I had stack outputs and Parameter Store parameters for the connection URL and port. This failed with an error that attribute/property x does not exist on the ReplicationGroup.
Fix: It turns out that if cluster mode is disabled (using parameter group default.redis3.2, for example), you must use the PrimaryEndPoint.Address and PrimaryEndPoint.Port values. If cluster mode is enabled, use ConfigurationEndPoint.Address and ConfigurationEndPoint.Port. I had tried using RedisEndpoint.Address and RedisEndpoint.Port with no luck, though that may work with a single Redis node with no replica (I also could have had the casing wrong; see the note below).
NOTE
Also, a major issue that affected me is the casing: the P in EndPoint must be capitalized in the PrimaryEndPoint and ConfigurationEndPoint variations if you are creating an AWS::ElastiCache::ReplicationGroup, but the p is lower case if you are creating an AWS::ElastiCache::CacheCluster: RedisEndpoint, ConfigurationEndpoint. I'm not sure why there's a discrepancy there, but it may be the cause of some problems.
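A short sketch of stack outputs illustrating the casing (the resource names are placeholders, and only one of the ReplicationGroup attributes exists, depending on whether cluster mode is enabled):
Outputs:
  RedisEndpoint:
    # AWS::ElastiCache::ReplicationGroup with cluster mode enabled: capital "P" in EndPoint
    Value: !GetAtt RedisReplicationGroup.ConfigurationEndPoint.Address
    # Cluster mode disabled: use the primary node instead
    # Value: !GetAtt RedisReplicationGroup.PrimaryEndPoint.Address
    # AWS::ElastiCache::CacheCluster: lower-case "p", e.g.
    # Value: !GetAtt RedisCacheCluster.RedisEndpoint.Address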
Link to AWS docs for GetAtt, which lists available attributes for different CloudFormation resources