Elastic Beanstalk multi-container Docker deployment fails - amazon-web-services

I want to deploy a multi-container application on Elastic Beanstalk, but I get the following error:
Error 1: The EC2 instances failed to communicate with AWS Elastic
Beanstalk, either because of configuration problems with the VPC or a
failed EC2 instance. Check your VPC configuration and try launching
the environment again.
I have set up the VPC with just a public subnet and a security group that allows all traffic, both inbound and outbound. I know this is not encouraged for production-level deployments, but I have reduced the complexity to find the cause of the error.
So, the load balancer and the EC2 instance are inside the same public subnet, which is attached to the internet gateway. They both share the same security group, allowing all traffic.
Before the above error, I also get another error stating:
Error 2: No ecs task definition (or empty definition file) found in environment
That said, I have bundled my Dockerrun.aws.json file along with the .ebextensions folder inside the source bundle that Beanstalk uses for deployment.
All these errors drill down to two questions:
I cannot understand why the "No ecs task definition" error appears when I have packaged my Dockerrun.aws.json file containing containerDefinitions.
Since there is no ECS task running, nothing is running on the instance. Is this why Beanstalk and the ELB cannot communicate with the instance? (Assuming my public subnet and allow-all security group are not the problem.)
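For reference, a minimal multi-container Dockerrun.aws.json (version 2) looks roughly like this, with image name, memory, and ports as placeholders. Note that the file must sit at the root of the source bundle; a common cause of the "No ecs task definition" error is zipping the project folder itself, which nests the file one level too deep:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ]
    }
  ]
}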

The problem was the VPC. Even though I had a simple VPC with just a public subnet, Beanstalk could not talk to the instance, and so it could not deploy the ECS task definition and Docker containers to the instance.
Creating two subnets, one public and one private, with a NAT instance in the public subnet acting as the router for instances in the private subnet, made the setup work: I could successfully deploy the ECS task definition to the EC2 instance in the private subnet.
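If you configure the VPC through the source bundle rather than the console, the equivalent .ebextensions option settings look roughly like this (a sketch; the VPC and subnet IDs are placeholders, with Subnets being the private subnet for the instances and ELBSubnets the public one for the load balancer):

option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxxxxxxx
    Subnets: subnet-aaaaaaaa
    ELBSubnets: subnet-bbbbbbbb
    AssociatePublicIpAddress: false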

I found this question because I got the same error. Here are the steps that worked for me to actually deploy a multi-container app on Beanstalk:
To get past this particular error, I used the eb CLI tools. For some reason, using eb deploy instead of zipping and uploading the bundle myself fixed this. It didn't actually work, but it gave me a new error.
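For anyone unfamiliar, that eb workflow is roughly the following (application and environment names are placeholders; run eb platform list to check the exact platform name your CLI version expects):

eb init -p multi-container-docker my-app
eb create my-env
eb deploy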
That new error was about my Dockerrun.aws.json (a file format that needs WAY more documentation), so I changed the file until I stopped getting errors about it.
Then, I got an even better error!
ERROR: [Instance: i-0*********0bb37cf] Command failed on instance.
Return code: 1 Output: (TRUNCATED)..._api_call
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when
calling the GetObject operation: Access Denied
Failed to download authentication credentials [config file name] from [bucket name].
Hook /opt/elasticbeanstalk/hooks/appdeploy/enact/02update-credentials.sh failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI.
Per this part of the docs, the way to solve this is to:
Open the Roles page in the IAM console.
Choose aws-elasticbeanstalk-ec2-role.
On the Permissions tab, under Managed Policies, choose Attach Policy.
Select the managed policy for the additional services that your application uses. For example, AmazonS3FullAccess or AmazonDynamoDBFullAccess. (For our problem, the S3 one)
Choose Attach Policies.
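The console steps above have a CLI equivalent; a sketch, assuming the default instance profile name and the S3 policy from the example:

aws iam attach-role-policy \
    --role-name aws-elasticbeanstalk-ec2-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess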
This part got really exciting, because I got yet another error: Authentication credentials are not in JSON format as expected. Please generate the credentials using 'docker login'. (Keep in mind, I tried to follow the instructions on how to do this to the letter, but, oh well.)
Turns out this one was on me: I had malformed JSON in my DockerHub auth file stored on S3. I renamed the file to dockercfg.json to get syntax checking, and it seems Beanstalk/ECS is okay with having .json as part of the name, because this time... there was a different error: CannotPullContainerError: Error: image [DockerHub organization]/[repo name]:latest not found. Hmm, maybe there was a typo? Let's check:
$ docker run -it [DockerHub organization]/[repo name]:latest
Unable to find image '[DockerHub organization]/[repo name]:latest' locally
latest: Pulling from [DockerHub organization]/[repo name]
Ok, the repo is there. So... my auth is bad? Yup, turns out I followed an example in the DockerHub auth docs that showed what you shouldn't do. Your dockercfg.json should look like:
{
  "https://index.docker.io/v1/": {
    "auth": "ZWpMQ=Vyd5zOmFsluMTkycN0ZGYmbn=WV2FtaGF2",
    "email": "your@email.com"
  }
}
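For context, that auth file lives in S3 and is referenced from Dockerrun.aws.json via an authentication block, roughly like this (bucket and key are placeholders; the instance role also needs s3:GetObject on that object, which is what the earlier AccessDenied error was about):

"authentication": {
  "bucket": "my-credentials-bucket",
  "key": "dockercfg.json"
}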
There were a few more errors (volume sourcePath has to be an absolute path! That's what the message invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed actually means), but it eventually deployed; the fragment below shows the shape that avoids both messages. Sorry for the novel; hoping it helps someone.
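For those volume errors, the relevant Dockerrun.aws.json fragment has this shape (name and path are placeholders): the volume name must match the allowed character set, and sourcePath must be absolute.

"volumes": [
  {
    "name": "app-data",
    "host": {
      "sourcePath": "/var/app/data"
    }
  }
]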

Related

CodeDeploy with S3 always fails after 5 minutes

I've spent the better half of the day trying to set up CodeDeploy, CodePipeline, S3, and EC2.
CodePipeline will successfully:
Pick up detected changes in GitHub
Push the ZIP file up to S3
Trigger CodeDeploy to begin deployment
Also:
EC2 has list and read access to S3
S3 allows all actions from EC2
I've mostly followed this outdated guide: https://cloudacademy.com/blog/how-to-deploy-application-code-from-s3-using-aws-codedeploy/
appspec.yml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www
hooks:
  AfterInstall:
    - location: hooks/after-install.sh
      runas: root
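The hook script itself isn't shown in the question; a hypothetical minimal hooks/after-install.sh might be:

#!/bin/bash
# hooks/after-install.sh (hypothetical example, not from the question)
set -e
# e.g. hand ownership of the deployed files to the web server user
chown -R www-data:www-data /var/www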
I'm rather new to AWS and can't for the life of me find where the logs are that would tell me what's going on, nor do I get any error message that points me anywhere, so I've literally been shooting blind all day, double-checking everything and trying again, and this is taunting me now.
Any help, even if it's just pointing me towards where I can actually find the error message, would be tremendously appreciated. Thanks for your time.
This generally occurs for one of the following three reasons:
The CodeDeploy agent is not installed and running on the target instance (see the install sketch at the end of this answer).
The instance has no access to the CodeDeploy and S3 services. Ensure the instance is either:
Running in a public subnet with an internet gateway, or
Running in a private subnet with a NAT gateway/NAT instance.
The IAM permissions for the instance's IAM role are not sufficient; attach the AWSCodeDeployRole policy to grant sufficient permissions.
As you have said your IAM role permissions are fine, you are left with one of the other two scenarios.
Once these are working, you can generally see the logs under /var/log/aws/codedeploy-agent.
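For the first cause in the list above, installing and verifying the agent on Amazon Linux looks roughly like this (a sketch; the region in the bucket name is a placeholder and should match your instance's region):

sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status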

Kubernetes running on AWS with ECR pulls images from the wrong region in another account

I have k8s clusters on AWS working with ECR and pulling images from all regions. This works fine.
But when I try to pull images from a different account, I get "no such host". I followed these instructions to set IAM permissions (and the docs). I'm not getting permission denied; I'm getting this:
Failed to pull image "<acc id>.dkr.ecr.ap-outheast-2.amazonaws.com/image:tag":
rpc error: code = Unknown desc = Error response from daemon:
Get https://<acc id>.dkr.ecr.ap-outheast-2.amazonaws.com/v1/_ping:
dial tcp: lookup <acc id>.dkr.ecr.ap-outheast-2.amazonaws.com
on 10.71.0.2:53: no such host
My cluster is running in ap-southeast-1, and the address 10.71.0.2:53 is the default DNS AWS set for the VPC.
I'm trying to work around this by populating this region's ECR as well, but that seems pretty wrong.
Any idea how to allow ECR to pull from another region?
I think you made a simple typo in .dkr.ecr.ap-outheast-2.amazonaws.com/image:tag; that's why you receive no such host from the DNS server. Just try replacing ap-outheast-2 with ap-southeast-2.
Generally, if you set the ECR IAM permissions right, it should work, since ECR is accessible/routable as a public service on the internet, with limitations based on IAM.
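For reference, the ECR image URI pattern is <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>, so the corrected image line in the pod spec would look like this (account ID, repository, and tag are placeholders carried over from the question):

image: <acc id>.dkr.ecr.ap-southeast-2.amazonaws.com/image:tag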

Metabase deploy fails on AWS Beanstalk

I'm trying to deploy Metabase on AWS Beanstalk following the official documentation.
Unfortunately, I'm getting the following errors every time:
Stack named 'awseb-e-mbmm95mkdq-stack' aborted operation. Current state: 'CREATE_FAILED' Reason: The following resource(s) failed to create: [AWSEBRDSDBSecurityGroup].
Creating RDS database security group named: awseb-e-mbmm95mkdq-stack-awsebrdsdbsecuritygroup-lixrbjq6lh5x failed Reason: Either the resource does not exist, or you do not have the required permissions.
Any ideas how to fix it?
Here's how I was able to fix this issue: I created an RDS DB instance from the RDS console, then created a snapshot of that instance. Then, in the Elastic Beanstalk console, under Configuration, I modified Database and used the snapshot I had created. Remember also to add the environment properties under Configuration / Software.

Elastic beanstalk - eb create fails to create AWSEBRDSDBSecurityGroup

I currently want to deploy a simple Django app on AWS using Elastic Beanstalk and RDS, following this tutorial: http://www.1strategy.com/blog/2017/05/23/tutorial-django-elastic-beanstalk/. To create the Beanstalk app I use the command eb create --scale 1 -db -db.engine postgres -db.i db.t2.micro.
During the creation process, the tool fails to create the [AWSEBRDSDBSecurityGroup]. Here is the output:
2018-07-28 06:07:51 ERROR Stack named 'awseb-e-ygq5xuvccr-stack' aborted
operation. Current state: 'CREATE_FAILED' Reason: The following resource(s)
failed to create: [AWSEBRDSDBSecurityGroup].
2018-07-28 06:07:51 ERROR Creating RDS database security group named:
awseb-e-ygq5xuvccr-stack-awsebrdsdbsecuritygroup-oj71kkwnaaag failed Reason:
Either the resource does not exist, or you do not have the required permissions.
I am using an access key with full administrator rights.
How can I solve this issue?
Are you sure you want to use a DB security group and not a VPC security group? AFAIK, DB security groups should no longer be needed in new accounts; you should just be able to attach an existing VPC security group directly to your instance.
If you do need it, then it's most likely one of these:
A badly worded error for hitting the limit on the maximum number of DB security groups
You actually don't have the admin permissions you claimed to have.
Do try it out and let us know what you find.
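If you go the VPC security group route, attaching one to an existing RDS instance is a single CLI call; a sketch with placeholder identifiers:

aws rds modify-db-instance \
    --db-instance-identifier my-db \
    --vpc-security-group-ids sg-xxxxxxxx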

Unable to add an RDS instance to Elastic Beanstalk

Suddenly I can't add an RDS instance to my EB environment, and I'm not sure why. Here's the full error message:
Unable to retrieve RDS configuration options.
Configuration validation exception: Invalid option value: 'db.t1.micro' (Namespace: 'aws:rds:dbinstance', OptionName: 'DBInstanceClass'): DBInstanceClass db.t1.micro not supported for mysql db
I am not sure if this is due to the default AMI that I am using or something else.
Note that I didn't choose to launch a t1.micro RDS instance. It seems like EB is trying to request that class, but it has been eliminated from the available RDS instance classes.
Just found this announcement in the community forum: https://forums.aws.amazon.com/ann.jspa?annID=4840. It looks like Elastic Beanstalk had not updated its CloudFormation templates yet.
I think it's resolved now. But as a side note, AWS should not make things like this just a community announcement.
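Until the templates caught up, one workaround was to pin a supported instance class explicitly via .ebextensions; a minimal sketch, using the aws:rds:dbinstance namespace and DBInstanceClass option named in the error above:

option_settings:
  aws:rds:dbinstance:
    DBInstanceClass: db.t2.micro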