EC2 instance not created for ECS cluster - amazon-web-services

I am new to ECS and need help with a few things.
FIRST
I am going to use Docker images tagged according to Git tags, push them to ECR, and then update the task definition. Is there any way to know which task definition revision contains which image tag? The revisions are numbered sequentially rather than by the Docker image tag, and I am going to use Bitbucket Pipelines to push the image and then update the service.
I want this so that I can revert to a desired tag at any time.
Would a Python script with boto3 be helpful? Can anybody help with that?
SECOND
When I create a new cluster with any instance type other than t2.micro, no EC2 instance is launched, which then leads to a "No Container Instances were found in your cluster" error when creating any service on that cluster.
I checked that the 'AmazonEC2ContainerServiceforEC2Role' policy is attached to the ecsInstanceRole, and I explicitly added it as a policy to my IAM user, but the issue remains. Any help?
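For FIRST: one possible direction, roughly along the lines you suggest, is a small boto3 script that maps task definition revisions to the image tags they reference, since each revision stores the full image URI. A minimal sketch, assuming hypothetical family, cluster, and service names (not taken from your setup), and a single container per task definition:
import boto3

ecs = boto3.client("ecs")

def revisions_by_tag(family):
    # Walk every revision of the family and record which image tag it uses.
    mapping = {}
    paginator = ecs.get_paginator("list_task_definitions")
    for page in paginator.paginate(familyPrefix=family, sort="DESC"):
        for arn in page["taskDefinitionArns"]:
            td = ecs.describe_task_definition(taskDefinition=arn)["taskDefinition"]
            image = td["containerDefinitions"][0]["image"]  # e.g. ...amazonaws.com/my-repo:v1.2.3
            mapping[td["revision"]] = image.split(":")[-1]
    return mapping

def revert_to_tag(cluster, service, family, tag):
    # Point the service back at the newest revision that used the given tag.
    for revision, image_tag in sorted(revisions_by_tag(family).items(), reverse=True):
        if image_tag == tag:
            ecs.update_service(cluster=cluster, service=service,
                               taskDefinition=f"{family}:{revision}")
            return revision
    raise ValueError(f"no revision found for tag {tag}")
Something like this could run from a Bitbucket Pipelines step or locally whenever you need to roll back.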

For SECOND:
As per the AWS docs, the instances that pull the Docker images should reside in a public subnet so they can reach the repositories.
I had this issue earlier, and it was fixed when I changed the subnets in the Auto Scaling group from private to public.
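If the instances are launched by an Auto Scaling group, the subnets can also be switched programmatically. A small boto3 sketch, assuming placeholder group and subnet IDs:
import boto3

autoscaling = boto3.client("autoscaling")

# Replace the group's subnets with public ones (comma-separated subnet IDs).
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-ecs-asg",
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
)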

Related

How to migrate AWS ECS from one account to another (in a different Region/AZ)?

Docker containers are hosted with AWS ECS inside a VPC in a particular region. How do I migrate them to a different VPC in a different region?
Unfortunately, there isn't a straightforward method to migrate a service from one region to another. To accomplish this, you'll need to ensure that you have a VPC and ECS cluster set up in the target region. Then, you can create the service within that cluster and VPC.
If you're using Cloudformation or Terraform for configuration as code, simply update the region and relevant definitions, then redeploy. Otherwise, you can use the AWS CLI to extract the full definition of your cluster and service, and then recreate it in the target region. For more information, see the AWS CLI ECS reference: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
Also, make sure that any Docker images stored in a private registry are accessible in the target region. Best of luck with your migration!
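If you go the CLI/SDK route, the task definition itself copies across regions fairly cleanly. A rough boto3 sketch, assuming placeholder region names and a hypothetical my-task family; note that ECR image URIs are region-specific, so the image field may also need updating:
import boto3

SRC, DST = "us-east-1", "eu-west-1"  # example regions only
src_ecs = boto3.client("ecs", region_name=SRC)
dst_ecs = boto3.client("ecs", region_name=DST)

# Pull the latest active revision of the task definition from the source region.
td = src_ecs.describe_task_definition(taskDefinition="my-task")["taskDefinition"]

# Drop the read-only fields that describe returns but register won't accept.
for key in ("taskDefinitionArn", "revision", "status", "requiresAttributes",
            "compatibilities", "registeredAt", "registeredBy", "deregisteredAt"):
    td.pop(key, None)

# Re-register it in the target region; the cluster and service are created separately.
dst_ecs.register_task_definition(**td)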

ECS - User Data For EC2 instances

I am trying to create a Docker image based on httpd with custom information about the Docker image. For that I am trying to set ECS_ENABLE_CONTAINER_METADATA=true in /etc/ecs/ecs.config.
I am trying to do it in the user data of the ECS instance. The first thing I noticed is that there is no provision to specify user data while creating the cluster.
Then I tried copying the launch configuration and edited the user data per the Stack Overflow question below:
ECS, how to add user-data after creating ecs instance
But when I try to run tasks, I find that no ECS instance is linked to the cluster.
Any suggestions if you have run into a similar issue?
It seems that the ECS instance is not registered with the cluster. You need to ensure that the AMI you use to create the ECS instance has the ECS agent installed and running. The full list of ECS-optimized AMIs is available in the ECS developer docs under container instances.
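As a quick check, you can verify from outside the instance whether anything has registered with the cluster. A small boto3 sketch, assuming a placeholder cluster name:
import boto3

ecs = boto3.client("ecs")

# List whatever has registered with the cluster; an empty list explains the
# "no instance linked with the cluster" symptom.
arns = ecs.list_container_instances(cluster="my-cluster")["containerInstanceArns"]
if arns:
    details = ecs.describe_container_instances(cluster="my-cluster", containerInstances=arns)
    for ci in details["containerInstances"]:
        print(ci["ec2InstanceId"], ci["status"], "agentConnected:", ci["agentConnected"])
else:
    print("No container instances registered - check the AMI, IAM role, and user data.")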

AWS - How to send S3 artifacts from codebuild to EC2 instance

I recently was able to successfully send my artifacts to an S3 bucket using CodeBuild, but now I want to send those exact artifacts to a specific place on my EC2 instance.
I've been reading the AWS docs non-stop, but I haven't been able to configure CodeDeploy in a way that works. Can anyone guide me to a proper source that teaches how to use appspec files and how CodeDeploy works?
Thanks.
CodeDeploy simply fetches your code from S3/GitHub to your EC2 instances and deploys it using appspec.yml.
Place your appspec.yml file in the root of your code; it maps files in the bundle to destinations on the instance and defines lifecycle hooks.
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file.html
Create a deployment group which will contain either your EC2 instances (use tags to find them) or an Auto Scaling group.
Configure it to use the deployment strategy that matches your requirement (AllAtOnce, HalfAtATime, or OneAtATime), and it's done.
(Make sure your EC2 instances are running the CodeDeploy agent.)
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install.html
Create a deployment, which will fetch your code from the S3 bucket and deploy it to the EC2 instances; a sketch of this last step follows below.
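The last step can also be triggered programmatically, for example from a pipeline. A minimal boto3 sketch, where the application, deployment group, bucket, and key names are placeholders:
import boto3

codedeploy = boto3.client("codedeploy")

# Kick off a deployment of the bundle that CodeBuild uploaded to S3.
codedeploy.create_deployment(
    applicationName="my-app",
    deploymentGroupName="my-deployment-group",
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifact-bucket",
            "key": "builds/app.zip",
            "bundleType": "zip",
        },
    },
)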

AWS ECS SDK. Register new container instance (EC2) for ECS cluster using SDK

I've run into a problem while using the AWS SDK. Currently I am using the SDK for Go, but solutions from other languages are welcome too!
I have an ECS cluster created via the SDK.
Now I need to add EC2 container instances to this cluster. My problem is that I can't use the Amazon ECS agent to specify the cluster name via its config:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
or something like that. I can only use the SDK.
I found a method called RegisterContainerInstance.
But it has a note:
This action is only used by the Amazon ECS agent, and it is not
intended for use outside of the agent.
It doesn't look like a working solution.
I need to understand how (if it's possible) to create a working ECS cluster using the SDK only.
UPDATE:
My main target is to start a specified number of servers from my Docker image.
While investigating this task I've found that I need to:
create an ECS cluster
assign the needed number of EC2 instances to it
create a task with my Docker image
run it on the cluster manually or as a service
So I:
Created a new cluster via the CreateCluster method with the name "test-cluster".
Created a new task via RegisterTaskDefinition.
Created a new EC2 instance with the ecsInstanceRole role and the ECS-optimized AMI that is correct for my region.
And this is where the problems started.
Actual result: all new EC2 instances were attached to the "default" cluster (AWS created it and attached the instances to it).
If I were using the ECS agent I could specify the cluster name using the ECS_CLUSTER config variable. But I am developing a tool that uses only the SDK (without any ability to use the ECS agent).
With RegisterTaskDefinition I don't have any way to specify a cluster, so my question is: how can I assign a new EC2 instance to exactly the specified cluster?
When I tried to just start my task via the RunTask method (hoping that AWS would somehow create instances for me or something like that) I received an error:
InvalidParameterException: No Container Instances were found in your cluster.
I actually can't sort out which question you are asking. Do you need to add containers to the cluster, or add instances to the cluster? Those are very different.
Add instances to the cluster
This is not done with the ECS API; it is done with the EC2 API, by creating EC2 instances with the correct ecsInstanceRole. See the Launching an Amazon ECS Container Instance documentation for more information.
Add containers to the cluster
This is done by defining a task definition, then running those tasks manually or as services. See Amazon ECS Task Definitions for more information.
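On the "default" cluster problem in the update: the user data that sets ECS_CLUSTER does not require logging in to the instance; it can be passed through the SDK when the instance is launched, and the ECS agent baked into the ECS-optimized AMI reads it on boot. A rough boto3 sketch of the idea (the Go SDK's RunInstances works the same way); the AMI ID is a placeholder and the cluster name matches the "test-cluster" from the question:
import boto3

ec2 = boto3.client("ec2")

# User data consumed by the ECS agent on first boot; this is what joins the
# instance to "test-cluster" instead of the "default" cluster.
user_data = """#!/bin/bash
echo ECS_CLUSTER=test-cluster >> /etc/ecs/ecs.config
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                 # an ECS-optimized AMI for your region
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},  # instance profile, not just the role
    UserData=user_data,                              # boto3 base64-encodes this for you
)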

Do I need to duplicate code on every EC2 instance running behind an ELB?

Hi, this is a very noob question, but I am trying to deploy my Node.js API server on AWS.
Everything is working fine with one m1.large instance that my front end running on S3 connects to.
Now I want to scale and put my EC2 instance, and possibly many more, behind an ELB and an Auto Scaling group.
Do I need to duplicate my server code on every EC2 instance?
If so, I assume I'll have to create a separate DB server which all of the EC2 instances will connect to.
Am I right? Can anyone experienced with AWS answer this? I tried googling, but most of the links point to detailed tutorials which don't actually answer my question.
Any help would be much appreciated. Thanks.
Yep, that's basically correct. The code needs to be on all instances fronted by the load balancer. For the database you may want to look into RDS.
Of course not... but you can if you want.
That's why there are EFS volumes, which are volumes shared by more than one EC2 instance, but you have to choose a region that supports them, since they are only available in certain regions. As a candidate AWS certified architect, I can offer you more than two options.
You can follow your first approach: create an EC2 instance, put your code on it, then create an AMI and use this AMI to launch your upcoming EC2s through the Auto Scaling group. In my opinion a bad decision, since on any code change you have to apply the new code, then create a new AMI and a new Auto Scaling configuration. Lots of stuff to do, but it will work.
Second approach: follow the first approach but do not create an AMI. Instead, upload your code to a private (I suppose) repo like GitHub or Bitbucket, install SSM and the appropriate roles for managing EC2, and on every code change push to the repo and pull it onto your EC2s using SSM. Of course you may write a webhook from Bitbucket to call an API and run the git pull command on each EC2. Probably that last sentence could be a third approach, but it needs more coding!
Last but not least: use an EFS volume, put your code there, mount this volume on your EC2s, add an auto-mount command on every boot, point your Apache httpd document root at this EFS folder, and create an AMI with this configuration. Voila! Every new EC2 will use the same code, located on this shared network volume. Whenever you need to change something, log in to a separate instance outside of your Auto Scaling group, upload your changes, and then turn it off; all of your EC2s will pick up the new code immediately. Of course, you may pull the changes from a repo as described above.
Maybe there are more approaches; I'm using the third one, with private repos of course, and so far I haven't faced any problem (fingers crossed)!
One other option is to use Elastic Beanstalk to deploy Node.js applications; there is a guide specific to Node.js. This will take care of most of the things you would otherwise need to do yourself if you use only EC2, for example ELB, Auto Scaling, CloudWatch, etc.
For the database, you may want to use master/slave with read replicas. Another option is to evaluate NoSQL databases like DynamoDB if they fit your use case. The scalability of DynamoDB tables is managed by AWS, so you don't need to worry about it.