Hardening AWS EC2 Instances - amazon-web-services

I launched an AWS ECS cluster with 4 EC2 instances using the ECS-optimized AMI 2 years ago. The system has been working fine, but due to systems hardening compliance I need to update my ECS cluster's EC2 instances to the latest ECS-optimized AMI.
I can take the latest AMI and update the instances manually, but how can I automate this process continuously, let's say every 3 months? My Auto Scaling group should update the instances with the latest ECS-optimized AMI released by Amazon.
My EC2 instances are in an Auto Scaling group; what automation ideas can I implement here?
Any AWS doc or GitHub repo link to achieve this would also be very helpful.
Thanks in advance.

Step 1: You can read the latest AMI IDs from AWS Systems Manager Parameter Store and set up an EventBridge notification for when the parameter changes.
Step 2: Write a Lambda function that updates your launch configuration (or launch template), which holds the AMI ID.
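As a minimal sketch of Step 2 (assuming the AWS SDK for Go v1, a launch template rather than a legacy launch configuration, and a placeholder launch template ID; shown as a standalone program rather than a Lambda handler), the core logic could look like this:

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess := session.Must(session.NewSession())

	// Read the AMI ID that Amazon currently recommends for ECS (Amazon Linux 2).
	param, err := ssm.New(sess).GetParameter(&ssm.GetParameterInput{
		Name: aws.String("/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"),
	})
	if err != nil {
		log.Fatal(err)
	}
	amiID := aws.StringValue(param.Parameter.Value)
	fmt.Println("Latest ECS-optimized AMI:", amiID)

	// Point the Auto Scaling group's launch template at the new AMI by
	// creating a new version. "lt-0123456789abcdef0" is a placeholder ID.
	_, err = ec2.New(sess).CreateLaunchTemplateVersion(&ec2.CreateLaunchTemplateVersionInput{
		LaunchTemplateId: aws.String("lt-0123456789abcdef0"),
		SourceVersion:    aws.String("$Latest"),
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId: aws.String(amiID),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}

Once the new launch template version exists, the same function can start an Auto Scaling instance refresh (the StartInstanceRefresh API) so the group replaces the old instances, and you can drive the whole flow either from the Parameter Store change event or from a scheduled EventBridge rule (for example, every 3 months).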

Related

How can I Patch my Amazon EMR cluster with security updates?

I have an Amazon EMR cluster with 3 nodes (1 master and 2 core) running on the Amazon EMR release 5.31.0 AMI. I want to patch these nodes with security patches (critical and important) just as we would patch normal EC2 instances. Can we do this?
Since EMR runs on EC2 instances in the background and the base OS of EMR releases is Amazon Linux, I feel we can patch the nodes/instances either by SSHing into them and running yum commands, or by using Patch Manager. Is it OK to do it this way? Is it recommended?
But when I searched for this, I found this article:
https://aws.amazon.com/blogs/big-data/create-custom-amis-and-push-updates-to-a-running-amazon-emr-cluster-using-amazon-ec2-systems-manager/
which suggests using custom AMIs. This feels like a comparatively long/tough process just to patch an EMR cluster. Is this the only correct way, or are there other ways?
Some people suggest cloning the cluster and using EMR release 6.x for the new cluster. Is that a better option?
Can someone please help me with this?

AWS EMR cluster stuck starting with custom AMI

I'm trying to run an EMR cluster with a custom AMI because the bootstrap takes 13 minutes. I created an AMI from an m5.large instance with all of the software installed; this instance is based on the Amazon Linux 2 AMI. I created the image from the console (Actions > Image > Create image). When I run the cluster it starts the instances on EC2, but the cluster stays stuck in the starting state.
How can I solve the problem?

Do we need to create two AMIs for master and core nodes in EMR?

I need to create an AWS EMR cluster for a Spark job with one master and 4 core nodes with auto scaling. I need different instance types for master and core, with Ubuntu 16.0 installed on them. So do I need to create two AMIs for the master and core nodes?
Amazon EMR has its own library of AMIs. You can select the AMI version when launching the cluster.
You can create a custom AMI, but it must be based on Amazon Linux.
See: Using a Custom AMI - Amazon EMR
If you wish to launch a Hadoop cluster with your own Ubuntu AMI, you cannot use the Amazon EMR service. You will need to launch and configure it yourself on Amazon EC2 instances.
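If it helps, here is a minimal sketch (AWS SDK for Go v1, with placeholder names, a placeholder AMI ID, and the default EMR roles) of launching a cluster that uses one Amazon Linux based custom AMI while still giving the master and core groups different instance types; a separate AMI per group is not required:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/emr"
)

func main() {
	sess := session.Must(session.NewSession())
	client := emr.New(sess)

	// One CustomAmiId applies to the whole cluster; the master and core
	// instance groups can still use different instance types.
	_, err := client.RunJobFlow(&emr.RunJobFlowInput{
		Name:         aws.String("spark-cluster"),        // placeholder name
		ReleaseLabel: aws.String("emr-5.31.0"),
		CustomAmiId:  aws.String("ami-0123456789abcdef0"), // placeholder Amazon Linux based AMI
		Applications: []*emr.Application{{Name: aws.String("Spark")}},
		ServiceRole:  aws.String("EMR_DefaultRole"),
		JobFlowRole:  aws.String("EMR_EC2_DefaultRole"),
		Instances: &emr.JobFlowInstancesConfig{
			InstanceGroups: []*emr.InstanceGroupConfig{
				{InstanceRole: aws.String("MASTER"), InstanceType: aws.String("m5.xlarge"), InstanceCount: aws.Int64(1)},
				{InstanceRole: aws.String("CORE"), InstanceType: aws.String("r5.2xlarge"), InstanceCount: aws.Int64(4)},
			},
			KeepJobFlowAliveWhenNoSteps: aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}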

AWS ECS SDK: Register a new container instance (EC2) for an ECS cluster using the SDK

I've run into a problem while using the AWS SDK. Currently I am using the SDK for Go, but solutions in other languages are welcome too!
I have an ECS cluster created via the SDK.
Now I need to add EC2 container instances to this cluster. My problem is that I can't use the Amazon ECS agent to specify the cluster name via its config:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
or something like that; I can only use the SDK.
I found a method called RegisterContainerInstance.
But it has a note:
This action is only used by the Amazon ECS agent, and it is not intended for use outside of the agent.
It doesn't look like a working solution.
I need to understand how (if it's possible) to create a working ECS cluster using the SDK only.
UPDATE:
My main goal is to start a specified number of servers from my Docker image.
While investigating this task I've found that I need to:
create an ECS cluster
assign the needed number of EC2 instances to it
create a task with my Docker image
run it on the cluster manually or as a service.
So I:
Created a new cluster via the CreateCluster method, with the name "test-cluster".
Created a new task via RegisterTaskDefinition.
Created a new EC2 instance with the ecsInstanceRole role and the ECS-optimized AMI that is correct for my region.
And this is where the problems started.
Actual result: all new EC2 instances were attached to the "default" cluster (AWS created it and attached the instances to it).
If I were using the ECS agent I could specify the cluster name with the ECS_CLUSTER config variable, but I am developing a tool that uses only the SDK (without any ability to use the ECS agent).
With RegisterTaskDefinition there is no way to specify a cluster, so my question is: how can I assign a new EC2 instance to a specific cluster?
When I tried to just start my task via the RunTask method (hoping that AWS would somehow create the instances for me), I received this error:
InvalidParameterException: No Container Instances were found in your cluster.
I actually can't sort out which question you are asking. Do you need to add containers to the cluster, or add instances to the cluster? Those are very different.
Add instances to the cluster
This is not done with the ECS API; it is done with the EC2 API by creating EC2 instances with the correct ecsInstanceRole. See the Launching an Amazon ECS Container Instance documentation for more information.
Add containers to the cluster
This is done by defining a task definition, then running those tasks manually or as services. See the Amazon ECS Task Definitions documentation for more information.
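For the SDK-only scenario in the question, one workable pattern is to launch the container instance yourself with the EC2 RunInstances API: use an ECS-optimized AMI, attach the ecsInstanceRole instance profile, and pass user data that writes ECS_CLUSTER into /etc/ecs/ecs.config. The ECS agent that ships on that AMI then registers the instance with your cluster, so RunTask against "test-cluster" will find container instances. A minimal sketch with the AWS SDK for Go v1 and a placeholder AMI ID:

package main

import (
	"encoding/base64"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	sess := session.Must(session.NewSession())
	client := ec2.New(sess)

	// User data: tell the ECS agent (already on the ECS-optimized AMI)
	// which cluster to register with.
	userData := "#!/bin/bash\necho ECS_CLUSTER=test-cluster >> /etc/ecs/ecs.config\n"

	_, err := client.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder: ECS-optimized AMI for your region
		InstanceType: aws.String("t3.medium"),
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		IamInstanceProfile: &ec2.IamInstanceProfileSpecification{
			Name: aws.String("ecsInstanceRole"),
		},
		UserData: aws.String(base64.StdEncoding.EncodeToString([]byte(userData))),
	})
	if err != nil {
		log.Fatal(err)
	}
}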

Multiple AWS CodeDeploy applications on a newly added instance

I think I've done something wrong while designing my AWS infrastructure.
Currently I have one Auto Scaling group with one EC2 instance.
On this instance there are 6 Laravel projects that are associated with 6 applications in AWS CodeDeploy, so when I want to update a version I simply deploy it using CodeDeploy.
Issues come when the Auto Scaling group adds instances to the group: all my CodeDeploy applications are deployed to the newly created instance, and the deployment fails with this message:
One or more lifecycle events did not run and the deployment was unsuccessful. Possible causes include:
(1) Multiple deployments are attempting to run at the same time on an instance;
So... what's the best way to get this to work?
AWS recommends associating a single deployment group with an ASG and consolidating deployments into a single deployment for proper scale-out. Each deployment group associates a lifecycle hook with the ASG, through which the ASG notifies the deployment group when scale-out events occur. Parallel deployments (in your case 6) are prone to CodeDeploy timeouts (5-60 min), and the CodeDeploy agent running on EC2 can only take one command at a time.
If each of your apps deploys in under 60 minutes, you may want to consolidate them into a single application and deploy via CodeDeploy hooks. Otherwise I would suggest using a different ASG per app.
Refer: https://aws.amazon.com/blogs/devops/under-the-hood-aws-codedeploy-and-auto-scaling-integration/
list lifecycle hooks:
aws autoscaling describe-lifecycle-hooks --auto-scaling-group-name <asg_name> --region <region>
If the launch of a new EC2 instance gets into an infinite loop of terminate and launch, you can remove the lifecycle hook:
aws autoscaling delete-lifecycle-hook --lifecycle-hook-name <lifecycleName> --auto-scaling-group-name <asg_name> --region <region>