cfncluster custom AMI creation

I am trying to use a custom (private) AMI with the cfncluster framework.
I have created the AMI, but every time I set the AMI ID in the config file (.cfncluster/config), stack creation fails with "Status: cfncluster-Belle - ROLLBACK_IN_PROGRESS".
How can I use my own private AMI for the worker nodes in my cfncluster cluster?
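For reference, a minimal sketch of the relevant part of .cfncluster/config, assuming the custom_ami cluster parameter (section and key names here are illustrative, so check them against your cfncluster version). A common cause of this rollback is an AMI that was not built from the cfncluster base AMI for your region and version.

# ~/.cfncluster/config (sketch; values are placeholders)
[global]
cluster_template = default

[cluster default]
key_name = my-key                # hypothetical EC2 key pair
# ... other required settings (VPC, subnets, etc.) elided ...
custom_ami = ami-xxxxxxxx        # your private AMI ID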

Related

Packer to bake an AMI from a shared AMI and share it with another AWS account

I am trying to create an AMI from an AMI shared from another account. Since I do not have access to the snapshot, I cannot copy or rename the AMI, so I opted to use Packer to bake a new AMI with the custom name I need.
Since the shared AMI is encrypted, the newly created AMI is encrypted with the default AWS-managed key, and because of this I cannot share the new AMI with other accounts.
(Error message: ==> amazon-ebs.instance: Error modify AMI attributes: InvalidParameter: Snapshots encrypted with the AWS Managed CMK can't be shared. Specify another snapshot)
I need some advice on how to address this issue.
P.S. I need to create a new AMI with a custom name from the shared AMI so I can share the same AMI across AWS accounts.
I am open to hearing alternate approaches as well.
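Snapshots encrypted with the AWS-managed key can never be shared, so the usual fix is to have Packer re-encrypt the boot volume with a customer-managed KMS key whose key policy grants the target account access. A sketch of the amazon-ebs source in Packer HCL, with placeholder region, AMI ID, key alias, and account ID:

source "amazon-ebs" "instance" {
  region        = "eu-west-1"              # placeholder
  source_ami    = "ami-xxxxxxxx"           # the shared, encrypted AMI (placeholder)
  instance_type = "t3.medium"
  ssh_username  = "ec2-user"
  ami_name      = "my-custom-name-{{timestamp}}"

  # Re-encrypt the root volume with a customer-managed KMS key instead of
  # the default aws/ebs key, so the resulting snapshots can be shared.
  encrypt_boot = true
  kms_key_id   = "alias/my-cmk"            # hypothetical CMK; its key policy
                                           # must grant the target account access

  ami_users = ["123456789012"]             # placeholder target account ID
}

build {
  sources = ["source.amazon-ebs.instance"]
}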

Unexpected behaviour of AWS ASG, AWS Launch Template within AWS EKS

I've created an AWS EKS cluster with managed node group(s), and I've also created an AWS ASG (Auto Scaling Group) and an AWS Launch Template. But when I attach the Launch Template to a managed EKS node group, it creates a duplicate of the existing Launch Template.
--
Those Launch Templates:
DEV/MANAGED/EKS-WORKERS-SM/LATEST/TEMPLATE/EU-CENTRAL-1X
and
eks-XXXXXXXX-XXXXXXXX-XXXXXXXX
are identical, and I don't understand why EKS is creating duplicates
--
The same thing is happening with the AWS ASG (Auto Scaling Group). Is there any way to fix this problem?
Technologies used:
Terraform
Launch Template resource
Auto Scaling Group resource
EKS resource
EKS node group resource
It seems you're creating both resources. Managed node groups always create their own EKS-owned copy of any launch template you attach (that is the eks-XXXXXXXX-... template), so the duplicate is expected. If you want to manage the nodes yourself, you don't need the EKS node group resource; instead, use the Launch Template resource together with the Auto Scaling Group resource, with proper tagging, as in the sketch below.
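A minimal sketch of self-managed workers in Terraform; the variables and sizing values are placeholders:

resource "aws_launch_template" "workers" {
  name_prefix   = "eks-workers-"
  image_id      = var.eks_worker_ami_id   # EKS-optimized AMI (placeholder)
  instance_type = "t3.medium"
}

resource "aws_autoscaling_group" "workers" {
  name_prefix         = "eks-workers-"
  min_size            = 1
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids   # placeholder

  launch_template {
    id      = aws_launch_template.workers.id
    version = "$Latest"
  }

  # This tag is how the cluster (and the cluster autoscaler) discovers the nodes.
  tag {
    key                 = "kubernetes.io/cluster/${var.cluster_name}"
    value               = "owned"
    propagate_at_launch = true
  }
}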

user data for managed node group

How can I edit the user data of the managed node group that is part of my EKS cluster?
I tried to create a new version of the Launch Template that the EKS cluster created, but then I get an error under the cluster's "Health issues": Ec2LaunchTemplateVersionMismatch
I want the nodes in the managed node group to mount an EFS file system automatically, without doing it manually on each instance, because of the autoscaling.
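One approach, sketched below in Terraform with placeholder names and IDs: don't edit the EKS-owned template; create your own launch template with MIME multi-part user data and attach it to the node group, keeping the node group pinned to the template version you want (version drift between the node group and the template is what surfaces as Ec2LaunchTemplateVersionMismatch).

resource "aws_launch_template" "mng" {
  name_prefix = "eks-mng-"

  # Managed node groups require MIME multi-part user data when no custom
  # AMI is specified, so EKS can append its own bootstrap section.
  user_data = base64encode(<<-EOT
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="==BOUNDARY=="

    --==BOUNDARY==
    Content-Type: text/x-shellscript; charset="us-ascii"

    #!/bin/bash
    yum install -y amazon-efs-utils
    mkdir -p /mnt/efs
    mount -t efs fs-12345678:/ /mnt/efs   # hypothetical EFS file system ID

    --==BOUNDARY==--
  EOT
  )
}

resource "aws_eks_node_group" "workers" {
  cluster_name    = var.cluster_name     # placeholder
  node_group_name = "workers"
  node_role_arn   = var.node_role_arn    # placeholder
  subnet_ids      = var.subnet_ids       # placeholder

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  # Pin the node group to the template version so EKS doesn't report
  # Ec2LaunchTemplateVersionMismatch after you publish a new version.
  launch_template {
    id      = aws_launch_template.mng.id
    version = aws_launch_template.mng.latest_version
  }
}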

ECS, how to add user data after creating an ECS instance

I can't find a way to specify user data after creating an ECS instance definition.
The documentation says: You can pass this user data into the Amazon EC2 launch wizard in Step 6.g of Launching an Amazon ECS Container Instance.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bootstrap_container_instance.html#multi-part_user_data
But the ECS instances are launched automatically, so how do you specify the user data?
I want to send /var/log/syslog to CloudWatch, and for that I need to add user data (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_cloudwatch_logs.html)
I had to add the user data as an Auto Scaling group property.
The steps are:
copy the existing launch configuration
edit the user data of the new launch configuration
edit the Auto Scaling group to use the new launch configuration
terminate the ECS instances so that the modified Auto Scaling group launches new EC2 instances with the new launch configuration
Via Terraform, we can pass it as a template file within the launch configuration:
data "template_file" "user_data" {
template = "${file("${path.module}/templates/user_data.sh")}"
vars = {
ecs_config = "${var.ecs_config}"
ecs_logging = "${var.ecs_logging}"
cluster_name = "${var.cluster}"
env_name = "${var.environment}"
custom_userdata = "${var.custom_userdata}"
cloudwatch_prefix = "${var.cloudwatch_prefix}"
}
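The rendered template is then wired into the launch configuration; a sketch with illustrative names and variables:

resource "aws_launch_configuration" "ecs" {
  name_prefix          = "ecs-"
  image_id             = "${var.ecs_ami_id}"          # ECS-optimized AMI (placeholder)
  instance_type        = "t3.medium"                  # placeholder
  iam_instance_profile = "${var.instance_profile}"    # placeholder

  # Inject the rendered user data from the template above.
  user_data = "${data.template_file.user_data.rendered}"

  lifecycle {
    create_before_destroy = true
  }
}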
By default, user data scripts and cloud-init directives run only during the first boot cycle when an EC2 instance is launched.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
The article also explains further possible workarounds.
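One workaround described in that knowledge-center article (reproduced here from memory, so verify the details against the article) is MIME multi-part user data that tells cloud-init to run the user scripts on every boot rather than only the first one:

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "runs on every boot" >> /tmp/every-boot.log
--//--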

AWS EMR provisioning fails when I use custom AMI

Problem:
I have an EMR cluster (along with a number of other resources) defined in a CloudFormation template. I use the AWS REST API to provision my stack. It works; I can provision the stack successfully.
Then I made one change: I specified a custom AMI for my EMR cluster.
Now stack creation fails because the EMR provisioning fails. The only information I can find is an error on the console: null: Error provisioning instances. Digging into each instance, I see that the master node failed with the error Status: Terminated. Last state change reason: Time out occurred during bootstrap
I have s3 logging configured for my EMR cluster, but there are no logs in the s3 bucket.
Details:
I updated my CloudFormation template like so:
my_stack.cfn.yaml:
rMyEmrCluster:
  Type: AWS::EMR::Cluster
  ...
  Properties:
    ...
    CustomAmiId: "ami-xxxxxx" # <-- I added this
Custom AMI details:
I am adding a custom AMI because I need to encrypt the root EBS volume on all of my nodes. (This is required per the documentation.)
The steps I took to create my custom AMI:
I launched the base AMI that AWS uses for EMR nodes: emr 5.7.0-ami-roller-27 hvm ebs (ID: ami-8a5cb8f3)
I created an image from my running instance
I created a copy of this image with EBS root volume encryption enabled, using the default encryption key. (I had to create my own base image from a running instance, because you are not allowed to create an encrypted copy of an AMI you don't own.) The same copy step, expressed in Terraform, is sketched after these steps.
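A sketch of that encrypted-copy step in Terraform (the stack itself stays in CloudFormation; the AMI ID and region are placeholders):

resource "aws_ami_copy" "encrypted" {
  name              = "emr-base-encrypted"
  source_ami_id     = "ami-yyyyyyyy"   # the image created from the instance (placeholder)
  source_ami_region = "us-east-1"      # placeholder region
  encrypted         = true             # re-encrypts the root EBS volume
  # kms_key_id omitted -> the default EBS encryption key is used
}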
I wonder if this might be a permissions issue, or perhaps my AMI is misconfigured in some way. But it would be prudent for me to find some logs first, to figure out exactly what is going wrong with node provisioning.
I feel stupid. I accidentally used a completely unrelated AMI (a Red Hat 7 image) as the base image, instead of the AMI that EMR uses for its nodes by default: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3)
I'll leave this question and answer up in case someone else makes the same mistake.
Make sure you create your custom AMI from the correct base AMI: emr 5.7.0-ami-roller-27 hvm ebs (ami-8a5cb8f3)
You mention that you created your custom AMI based on an EMR AMI. However, according to the documentation you linked, you should actually base your AMI on "the most recent EBS-backed Amazon Linux AMI". Your custom AMI does not need to be based on an EMR AMI, and indeed I suppose that doing so could cause some problems (though I have not tried it myself).