I have a custom ECS AMI, running Debian 10. I launch the ECS-Agent as a container, as suggested in the docs here. Everything works fine.
Recently, I was asked to integrate EFS into the cluster, so that containers running within specific tasks would have access to shared, persistent storage.
I added the efs-utils package to the AMI build, as documented in the git repo. The instances themselves now automatically mount to EFS on boot, and users on the instances can read/write to the EFS mount.
However, tasks configured to use the efsVolumeConfiguration parameter in the task volume definition fail to get placed, with the good old Container instance missing required attribute error.
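For reference, the volume definition in the task looks roughly like this (the file system ID here is a placeholder):

"volumes": [
  {
    "name": "shared-efs",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-12345678",
      "rootDirectory": "/",
      "transitEncryption": "ENABLED"
    }
  }
]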
Because the instances themselves have no problem mounting EFS on boot, I've implemented a workaround using regular Docker volumes, so the containers running in the task reach EFS through a normal Docker volume on the host. Still, I'd prefer to have the ECS -> EFS integration working properly.
When I run the ECS-CLI check-attributes command against any of the instances in my cluster I get:
ecs-cli check-attributes --task-def my-task --container-instances my-container-instance-id --cluster my-ecs-cluster
Container Instance Missing Attributes
my-container-instance-id ecs.capability.efsAuth
And indeed, in the console, when I go to cluster -> instances -> specific-instance -> actions -> view/edit attributes, all of the ecs.capability.xxx attributes contain empty values.
When do these values get populated? How should I augment the AMI build so that these values get populated with the proper values?
Please let me know if you need any additional information.
Thanks in advance!
I am not sure if this functionality of using EFS with ECS is supported on Debian-based systems, since the documentation does not provide commands for Debian.
Still, try these steps:
Install efs-utils and enable the amazon-ecs-volume-plugin service.
Add the attribute manually:
Name=ecs.capability.efsAuth
Value=<empty>
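A rough sketch of the first step on a Debian-based AMI (the plugin unit name assumes the Amazon ECS volume plugin that ships with ecs-init/the agent is present on the host):

# efs-utils has no prebuilt Debian package, so build it from source
git clone https://github.com/aws/efs-utils
cd efs-utils
./build-deb.sh
sudo apt-get -y install ./build/amazon-efs-utils*deb
# enable the Docker volume plugin the ECS agent uses for EFS
sudo systemctl enable --now amazon-ecs-volume-plugin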
Apologies, I thought I marked this as the answer a long time ago.
Answer from #bravinator932421
I think I solved this. From github.com/aws/amazon-ecs-agent/blob/… I saw where to set efsAuth, so placing it in my config file at /etc/ecs/ecs.config: ECS_VOLUME_PLUGIN_CAPABILITIES=["efsAuth"] worked.
This also worked for me.
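For reference, a minimal /etc/ecs/ecs.config along those lines might look like this (cluster name is a placeholder); the agent container then has to be restarted so it re-registers the instance with the efsAuth attribute:

ECS_CLUSTER=my-ecs-cluster
ECS_VOLUME_PLUGIN_CAPABILITIES=["efsAuth"]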
I had the same problem but I got it when trying out Bottlerocket, which apparently does not support encrypted EFS mounts. Removing the transit encryption requirement fixed it.
Related
I need to install a gRPC PHP extension on my Elastic Beanstalk-created EC2 instances. I have auto scaling enabled, and when a new EC2 instance spins up, I lose all my installed extensions.
From the documentation, I found two ways to fix this:
Create an instance, download and install everything required, and take an image of that instance. Then add the image ID (AMI ID) to the Elastic Beanstalk environment (under Configuration -> Instances), so that every new instance created by auto scaling is launched from the image I provide. This approach never worked for me. Am I missing something here?
Write a config file in .ebextensions to automatically install all the required extensions whenever a new instance spins up. For this, I need to create a YAML/JSON file per the documentation at cloud.google.com/php/grpc.
Can someone advise which approach should be taken, and help me create the YAML/JSON file to automate the process for all instances in the Auto Scaling group?
As per the AWS documentation here, to customize your Elastic Beanstalk environment you should use .ebextensions configuration files.
Creating .ebextensions configuration files gives you the ability to completely customize the instances and environment your application runs on, and makes upgrades, changes, and additions to your instances and environment straightforward and efficient.
As a side note, SSH'ing into Elastic Beanstalk instances and making on-instance changes should be avoided. The auto scaling issue you are facing is one reason; the other major reason is that changes made on the instance itself put the instance's state out of sync with the state EB is expecting. If the state is out of sync, subsequent deployments can fail because the application version EB is expecting has drifted. Managing your application and environment through code and .ebextensions eliminates this issue.
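As a rough, untested sketch of such a file (e.g. .ebextensions/grpc.config; the php.d path and available build tools depend on the platform version, so treat these as placeholders):

commands:
  01_install_grpc:
    # skip if the extension is already present, so redeploys stay fast
    test: '! php -m | grep -q grpc'
    # assumes pecl and the PHP build tools are available on the platform
    command: |
      pecl install grpc
      echo "extension=grpc.so" > /etc/php.d/50-grpc.ini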
I would like to create a Managed Compute Environment for AWS Batch, but use EC2 User Data to configure the instances as they are brought into the ECS fleet that Batch is scheduling jobs onto.
It shouldn't matter, but the purpose of the User Data script is to pull down large data files onto an InstanceStore that the Docker containers will reference.
This is possible in ECS, but I have found no way to pass User Data to a Managed Batch Compute Environment.
At most, I can specify the AMI. But since we're going with Managed, we must use the Amazon ECS-optimized AMI.
I'd prefer to use EC2 user data as the solution, as it gives an entry point for any other bootstrapping we wish to perform. But I'm open to other hacks or solutions, so long as they are applicable to a Managed Compute Environment.
You can create an AMI based on the AWS-provided AMI and customize it. It will still be managed, since the Batch/ECS daemon is running on it.
As a side note, I'm trying to do the same thing but have had no luck so far. I may end up creating a custom AMI and including the configure script in the AMI itself, in /etc/rc.local. Not ideal, but I don't think Batch can pass a user data script other than what it needs. I am still looking into this.
You can create a launch template containing your user data, then assign this launch template to your compute environment. Keep in mind that you might have to clean the cloud-init directory in your AMI, since it was probably already spun up once (at AMI creation).
Launch template user guide
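A hedged sketch of that approach with the AWS CLI (names, IDs, and the user-data file are placeholders; note that Batch expects launch-template user data in MIME multi-part format):

# userdata.mime contains the MIME multi-part wrapped bootstrap script
aws ec2 create-launch-template \
  --launch-template-name batch-bootstrap \
  --launch-template-data "{\"UserData\":\"$(base64 -w0 userdata.mime)\"}"

# depending on the account setup, --service-role may also be required
aws batch create-compute-environment \
  --compute-environment-name my-managed-env \
  --type MANAGED \
  --compute-resources '{
    "type": "EC2",
    "minvCpus": 0,
    "maxvCpus": 16,
    "instanceTypes": ["optimal"],
    "subnets": ["subnet-12345678"],
    "securityGroupIds": ["sg-12345678"],
    "instanceRole": "ecsInstanceRole",
    "launchTemplate": {"launchTemplateName": "batch-bootstrap", "version": "$Latest"}
  }'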
I am playing around with the idea of having an Auto Scaling Group for my website that receives a lot of traffic. I need each server to be running an identical webservice, so I have come up with several ideas to make this happen.
Idea 1: Use Code Commit + User Data
I will keep my webserver code in a git repo in CodeCommit. Then, when my EC2 instances spin up, they will install apache2 and then pull from the git repo (a sketch of this bootstrap follows the ideas below).
Idea 2: Use Elastic File System
After a server spins up, it will mount one central EFS file system that has my webserver code on it. The EC2 instance will install apache2, then use EFS to get the proper PHP files, etc.
Idea 3: Use AWS S3
Like above with apache2, but then download the webserver code from S3.
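To make Ideas 1 and 3 concrete, the bootstrap user data would be roughly along these lines (region, repo, and bucket names are placeholders; Idea 1 also assumes the instance role has CodeCommit read access):

#!/bin/bash
# shared bootstrap: install the web server, then fetch the site code
apt-get update && apt-get install -y apache2 git awscli
rm -rf /var/www/html
# Idea 1: pull from CodeCommit using the instance role via the credential helper
git config --system credential.helper '!aws codecommit credential-helper $@'
git config --system credential.UseHttpPath true
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-site /var/www/html
# Idea 3 would instead be something like:
#   aws s3 sync s3://my-site-bucket /var/www/html
systemctl restart apache2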
Which option is advised? Why?
I suggest you keep a reference machine that is used for creating images. Keep it updated with the latest version of your code, and when you are happy with it, create an image from it, update your launch configuration, and change the ASG configuration so that it uses the new launch configuration. You can then stop the reference machine and leave the job to the ASG instances.
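A hedged sketch of that flow with the AWS CLI (IDs and names are placeholders):

# bake an image from the reference machine
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "web-2024-06-01"
# create a launch configuration that points at the new AMI
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc-v2 \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.small \
  --key-name my-key \
  --security-groups sg-0123456789abcdef0
# point the ASG at it; instances launched from now on use the new image
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc-v2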
I have a basic django/postgres app running locally, based on the Docker Django docs. It uses docker compose to run the containers locally.
I'd like to run this app on Amazon Web Services (AWS), and to deploy it using the command line, not the AWS console.
My Attempt
When I tried this, I ended up with:
this yml config for ecs-cli
these notes on how I deployed from the command line.
Note: I was trying to fire up the Python dev server in a janky way, hoping that would work before I added nginx. The cluster (RDS+server) would come up, but then the instances would die right away.
Issues I Then Failed to Solve
I realized over the course of this:
the setup needs another container for a web server (nginx) to run on AWS (like this blog post, but the tutorial uses the AWS Console, which I wanted to avoid)
ecs-cli uses a different syntax for its yml/json config than docker-compose does, so you need separate (if similar) config alongside your local docker.yml (and I'm not sure if my file above was correct)
Question
So, what ecs-cli commands and config do I use to deploy the app, or am I going about things all wrong?
Feel free to say I'm doing it all wrong. I could also use Elastic Beanstalk - the tutorials on this don't seem to use docker/docker-compose, but it seems easier overall (or at least better documented).
I'd like to understand why any given approach is a good way to do this.
One alternative you may wish to consider in lieu of ECS, if you just want to get it up in the Amazon cloud, is to make use of docker-machine with the amazonec2 driver.
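For example, something along these lines (region, VPC ID, and machine name are placeholders):

docker-machine create --driver amazonec2 \
  --amazonec2-region us-east-1 \
  --amazonec2-vpc-id vpc-0123456789abcdef0 \
  --amazonec2-instance-type t2.medium \
  aws-sandbox
# point the local docker/docker-compose client at the remote machine
eval $(docker-machine env aws-sandbox)
docker-compose up -d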
When executing docker-compose, just ensure the remote Amazon host machine is ACTIVE, which can be verified with docker-machine ls.
One item you will have to revisit in the Amazon Mgmt Console is opening the applicable ports, such as port 80 and any other ports exposed in the compose file. Once the security group is in place for the VPC, you should be able to simply refer to the VPC ID on subsequent runs, bypassing any need to use the Mgmt Console to add the ports. You may also wish to bump up the instance size from the default t2.micro to match the t2.medium specified in your notes.
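If you'd rather skip the console for that as well, the security group that docker-machine creates can be opened up from the CLI, e.g. (the group ID is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 \
  --cidr 0.0.0.0/0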
If ECS orchestration is needed, then a task definition will need to be created containing the container definitions you require, as defined in your docker-compose file. My recommendation would be to take advantage of the Mgmt Console to construct the definition, then grab the accompanying JSON definition it makes available and store it in your source code repository for future command-line use, where it can be referenced when registering task definitions and running tasks and services within a given cluster.
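Once you have that JSON checked in, the command-line side is roughly (cluster, family, and file names are placeholders):

# register the task definition captured from the console
aws ecs register-task-definition --cli-input-json file://task-definition.json
# run it as a long-lived service on the cluster
aws ecs create-service --cluster my-cluster --service-name web \
  --task-definition my-task --desired-count 1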
I am a little confused with all the different offerings by docker.
So far, I have been using Docker Cloud Web API (cloud.docker.com) to create node-clusters on EC2 instances by linking to my AWS account.
Now recently, I wanted to set up a data container and mount it as a volume that is shared by other containers running on the same node. This requires the --volumes-from flag in Docker, which means I need to use docker-machine, connect to my AWS VM, and then launch my containers with this flag.
Do all of these containers show up on cloud.docker.com? Even the ones I launched from the terminal using docker-machine? Maybe I am confused here..
I found out that cloud.docker.com is still in beta and so doesn't offer --volumes-from. Also, these containers don't show up on cloud.docker.com yet. Maybe that will come in the future...