AWS Elastic Beanstalk deployment failed - amazon-web-services

When I try to deploy a Java web app to an Elastic Beanstalk Tomcat container, it fails with the following error:
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
Please note the following points:
Deployment is automated via Jenkins running on an EC2 server.
This error is not a consistent issue. Sometimes the deployment succeeds and sometimes it fails with the above error.

I had this exact problem. From what I could tell it was completely random, but it turned out to be linked to IAM roles. Everything worked perfectly until I added .ebextensions with a database migration script; after that I couldn't get my Bamboo builder to work again. However, I managed to figure it out (no thanks to Amazon's non-existent documentation on what permissions are needed for EB).
I based my IAM policy on this Gist: https://gist.github.com/magnetikonline/5034bdbb049181a96ac9
However, I had to make some modifications. This specific issue was caused by an overly restrictive policy on S3 Get operations, so I simply replaced the statement provided with:
{
  "Action": [
    "s3:Get*"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-*/*"
  ]
},
This allows users with the policy to perform any kind of Get operation on the Elastic Beanstalk buckets, since I couldn't be bothered to find out which specific one was required.

Uploading to Elastic Beanstalk involves sending a zipped artifact to S3 as well as modifying the CloudFormation templates (this part is hands-off).
Most likely the IAM role attached to the Jenkins runner (or the access credentials it uses) does not have access to the relevant S3 buckets. Verify this via IAM. See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.html
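If you want to confirm which identity the Jenkins builds actually run as, and whether it can read the Elastic Beanstalk staging bucket, a quick check from the build host might look like the following (the bucket name is a placeholder; Elastic Beanstalk buckets are normally named elasticbeanstalk-<region>-<account-id>):
# Show the role/user the builds are running as
aws sts get-caller-identity
# Check that this identity can read the bucket CloudFormation pulls templates from
aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/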

This is an edge case, but I wanted to capture it here for posterity. This error message can sometimes be returned as a generic error. I spent many weeks working through this error with AWS, only to find out that it was related to Security Token Service (STS) credentials expiring. When you generate STS credentials, the maximum duration of the session is 36 hours. If you generate a 36-hour key, some services used by Elastic Beanstalk don't respect this session length and consider the session expired. To work around this we no longer allow STS credentials with a session length longer than 2 hours.
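For reference, a sketch of how you might cap the session length when vending the credentials from the CLI (get-session-token accepts durations from 900 to 129600 seconds; the value here is illustrative):
# Request temporary credentials limited to 2 hours (7200 seconds)
aws sts get-session-token --duration-seconds 7200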

I have also struggled with this and, as in Rick's case, it turned out to be a permissions problem, but his solution didn't work for me.
I fixed the
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
error. Adding "s3:Get*" alone wasn't enough; I also needed "s3:List*".
The interesting thing is that I was getting this issue for just one EB environment out of three. It turned out that the other environments deployed to all nodes at once, while the problematic one had Rolling updates enabled (which, obviously, perform additional actions, such as adding new instances).
Here is the final IAM policy that works: gist: IAM policy to allow Continuous Integration user to deploy to AWS Elastic Beanstalk
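If you would rather script it than click through the console, a minimal sketch of attaching such a statement as an inline policy (the user name, policy name and bucket pattern are illustrative, not taken from the gist):
aws iam put-user-policy --user-name ci-deployer --policy-name eb-s3-read --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": ["arn:aws:s3:::elasticbeanstalk-*", "arn:aws:s3:::elasticbeanstalk-*/*"]
    }
  ]
}'
Note that the bucket ARN is listed both with and without /* because List operations are evaluated against the bucket itself rather than the objects in it.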

I had the same issue. Based on what I gathered from AWS support, an IAM user requires full access to S3 to perform some actions, such as deployment. This is because EB uses CloudFormation, which uses S3 to store templates. You need to attach the managed policy "AWSElasticBeanstalkFullAccess" to the IAM user performing the deployment, or create a policy like the following and attach it to the user.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Ideally Amazon would provide a way to restrict the Resource to specific buckets, but it doesn't look like that is doable right now!
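If you go the managed-policy route, attaching it from the CLI is a one-liner (the user name is a placeholder, and AWS has been replacing some of these managed policies over time, so check the current name in the IAM console):
aws iam attach-user-policy --user-name ci-deployer --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess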

Related

AccessDeniedException on sagemaker:CreateDomain in AWS SageMaker Studio, despite having SageMakerFullAccess

I am trying to use the AWS SageMaker Studio > Get Started > Quick Start, as an IAM user with the AmazonSageMakerFullAccess policy attached, but I am getting the following error:
User: arn:aws:iam::<user-id>:user/<username> is not authorized to perform: sagemaker:CreateDomain on resource: arn:aws:sagemaker:us-west-1:<user-id>:domain/d-<domain-id>
I looked up some documentation on the CreateDomain command, and it looks like it involves EFS storage and VPC configuration, so I have also added the FullAccess policies for these services to my IAM user, but am still getting the same error.
I also tried adding a custom policy as shown here: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-createdomain-perms which also seemed to have no effect.
What am I doing wrong here?
The AmazonSageMakerFullAccess policy gives the user access to perform actions such as starting training jobs and deploying endpoints, along with limited access to other services such as ECR, Glue, etc. It is generally attached to a SageMaker notebook instance or Studio.
The user creating the SageMaker domain needs the sagemaker:CreateDomain permission, i.e., add the following to your IAM user:
{
  "Sid": "AllowCreateDomain",
  "Effect": "Allow",
  "Action": "sagemaker:CreateDomain",
  "Resource": "*"
}
I work at AWS and my opinions are my own.
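If it helps, one way to attach that statement as an inline policy from the CLI (the user and policy names below are placeholders):
aws iam put-user-policy --user-name my-sagemaker-user --policy-name allow-create-domain --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreateDomain",
      "Effect": "Allow",
      "Action": "sagemaker:CreateDomain",
      "Resource": "*"
    }
  ]
}'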

Amazon Aurora 1.8 Load Data From S3 - Cannot Instantiate S3 Client

With the latest Aurora update (1.8), the command LOAD DATA FROM S3 was introduced. Has anyone gotten this to work? After upgrading to 1.8, I followed the setup guide Here to create the Role to allow access from RDS to S3.
After rebooting the server and trying to run the command
LOAD DATA FROM S3 PREFIX 's3://<bucket_name>/prefix' INTO TABLE table_name
in SQL Workbench/J, I get the errors:
Warnings:
S3 API returned error: Missing Credentials: Cannot instantiate S3 Client
S3 API returned error: Failed to instantiate S3 Client
Internal error: Unable to initialize S3Stream
Are there any additional steps required? Can I only run this from the SDK? I don't see that mentioned anywhere in the documentation.
I had the same issue. I tried adding AmazonS3FullAccess to the IAM role that my RDS instances were using...no joy.
After poking around, I went into the RDS console, to Clusters. I selected my Aurora cluster and clicked Manage IAM Roles. It gave me a drop-down; I selected the IAM role (the same one the individual instances were using).
Once I did that, all was well and data load was nice and fast.
So, there are (for us) 5 steps/components:
1) The S3 bucket and bucket policy to allow a user to upload the object
{
  "Version": "2012-10-17",
  "Id": "Policy1453918146601",
  "Statement": [
    {
      "Sid": "Stmt1453917898368",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account id>:<user/group/role>/<IAM User/Group/Role>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket name>/*"
    }
  ]
}
The "Principal" would be whatever IAM user, group or role will be uploading the data files to the bucket so that the RDS instance can import the data.
2) The IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1486490368000",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket name>",
        "arn:aws:s3:::<bucket name>/*"
      ]
    }
  ]
}
This is pretty simple with the Policy Generator.
3) Create the IAM Role:
This role should be assigned the IAM policy above. You can probably do an inline policy, too, if you're not going to use this policy for other roles down the line, but I like the idea of having a defined policy that I can reference later if I have a need.
4) Configure a parameter group that your cluster/instances will use, setting the aws_default_s3_role value to the ARN of the role from #3 above (a CLI sketch for this and step 5 follows at the end of this answer).
5) Configure the Aurora cluster by going to Clusters, selecting your cluster, selecting Manage IAM Roles and setting the IAM role for your DB cluster.
At least for me, these steps worked like a charm.
Hope that helps!
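For anyone scripting steps 4 and 5, a rough CLI equivalent might look like this (the parameter group, cluster and role names are placeholders; depending on the Aurora MySQL version the parameter is aws_default_s3_role or aurora_load_from_s3_role):
# Step 4: point the cluster parameter group at the role
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-aurora-cluster-params \
  --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AllowAuroraS3Role,ApplyMethod=immediate"
# Step 5: associate the role with the cluster (the CLI equivalent of Manage IAM Roles in the console)
aws rds add-role-to-db-cluster \
  --db-cluster-identifier my-aurora-cluster \
  --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role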
If the only error is Internal error: Unable to initialize S3Stream and it throws this error immediately, possible culprits are:
a typo in the bucket or object name
the bucket was created in a different region than the database
the bucket or object name is not specified according to the syntax for specifying a path to files stored in an Amazon S3 bucket: s3-region://bucket-name/file-name-or-prefix
The path includes the following values:
region (optional) – The AWS Region that contains the Amazon S3 bucket to load from. This value is optional. If you don't specify a region value, then Aurora loads your file from Amazon S3 in the same region as your DB cluster.
bucket-name – The name of the Amazon S3 bucket that contains the data to load. Object prefixes that identify a virtual folder path are supported.
file-name-or-prefix – The name of the Amazon S3 text file or XML file, or a prefix that identifies one or more text or XML files to load. You can also specify a manifest file that identifies one or more text files to load.
After all the suggestions above, as a final step, I had to add a VPC Endpoint to S3. After that, everything started working.
March 2019:
The RDS console no longer has the option to change the role. What worked for me was to add the role via the CLI and then reboot the writer instance.
aws rds add-role-to-db-cluster --db-cluster-identifier my-cluster --role-arn arn:aws:iam::123456789012:role/AllowAuroraS3Role
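Rebooting the writer from the CLI looks roughly like this (the instance identifier is a placeholder):
aws rds reboot-db-instance --db-instance-identifier my-aurora-writer-instance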
For me, I was missing the step of adding the created RDS role to my S3 bucket. Once I added it, it worked instantly.
You need to attach the AmazonS3ReadOnlyAccess or AmazonS3FullAccess policy to the role you set up in IAM. This step was not included in the setup guide.
Go to IAM -> Roles in the AWS console, select the role you are using, click 'attach policy', then scroll way down to the S3 policies and pick one.
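The CLI equivalent would be roughly (the role name is a placeholder):
aws iam attach-role-policy --role-name AllowAuroraS3Role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess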
I reached out to the Amazon Aurora team and they confirmed there are edge cases where some servers have this issue. They are rolling out a patch to fix the issue soon, but in the meantime they manually applied the patch to my cluster.
I have run into multiple situations where this error can occur.
The error was thrown after the LOAD SQL had been running for a while (around 220 s), which looked like a timeout. I eventually found that my RDS subnet group had only one outbound rule, and it did not cover S3. Adding the outbound rule fixed the issue.
The error was thrown immediately (0.2 s). I had been loading data from S3 successfully, but after a change to the S3 URL this error occurred again. I was using a wrong S3 URL, because I wanted to use an S3 prefix instead of a file. Check the LOAD syntax to get your SQL right.
It worked for me after following steps 2 to 5 and creating a VPC endpoint for S3 access.
I had the same error as I was trying to LOAD DATA FROM S3 using MySQL Workbench. I was already able to successfully CREATE DATABASE and CREATE TABLE and so I knew my connection was working.
I closely followed all of the AWS documentation instructions for Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket.
In my case, I had not correctly followed instruction steps 3 & 4 (see the list of instructions under the subheading "Giving Aurora access to Amazon S3" at the link above).
What fixed it for me:
From Amazon RDS, I selected "Parameter Groups" in the navigation pane on the left.
Then I clicked on my newly created custom DB cluster parameter group (step 3 from the link above).
From within my custom group, I searched for aurora_load_from_s3_role and then, in the "Values" entry box, I copy/pasted the ARN for the Role that I had just created in step 2 of the instructions and clicked Save (step 4 from the link above).
I went back to MySQL Workbench and reran my LOAD DATA FROM S3 command and it worked!

AWS Code Deploy - deployment failed

I am trying to set up code deployment using AWS, but when I try to perform a deployment, I am getting this error:
2016-06-08 23:57:11 ERROR [codedeploy-agent(1207)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::CodeDeployCommand::Errors::AccessDeniedException -
2016-06-08 23:58:41 INFO [codedeploy-agent(1207)]: Version file found in /opt/codedeploy-agent/.version.
2016-06-08 23:58:41 INFO [codedeploy-agent(1207)]: [Aws::CodeDeployCommand::Client 400 0.055741 0 retries] poll_host_command(host_identifier:"IAM-user-ARN") Aws::CodeDeployCommand::Errors::AccessDeniedException
I have two IAM roles - one for the EC2 instance, and one for the deployment app.
The S3 bucket has a permission set for the IAM role that is used for deployment:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "XXXXXXXX:role/TestRole"
      },
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::pmcdeploy/*"
    }
  ]
}
What is going on?
Is the error consistent? Looking at the agent code, it seems like the agent might be having trouble talking to EC2. If this is a persistent problem, you can share the EC2 instance profile.
Also, starting the agent with the verbose option enabled gives a lot more information about what's going on.
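For reference, on a standard Linux install the verbose flag lives in the agent's config file; a rough sketch of turning it on (paths can differ by distro and agent version):
# Enable verbose logging and restart the agent
sudo sed -i 's/:verbose: false/:verbose: true/' /etc/codedeploy-agent/conf/codedeployagent.yml
sudo service codedeploy-agent restart
# Then watch the agent log
sudo tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log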
Thanks
This is actually related to the order of credential loading. The host agent runs as the root user by default and also uses the instance profile.
The exception occurs when you've set up a root credential, which takes priority over the instance profile according to: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#config-settings-and-precedence
The AWS SDK used by the host agent will then use the credentials configured for the root user, instead of the instance profile, to sign the requests.
One workaround is to run the agent as a different user and not configure any credentials for that user.
We had what I think was the same issue.
Our systems had a /root/.aws/credentials file in place, which CodeDeploy absolutely uses, and I found no way of telling it not to.
Especially no documentation...
In the end, we rewrote everything on our end to ensure we'll no longer need a credentials file in place.
From that moment on, CodeDeploy used the instance profile and it was working fine.
I deleted /home/ubuntu/.aws, restarted the codedeploy-agent service, and it worked for me :-)
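In shell terms, roughly what was done here (back up the directory first if you are not sure whether anything else relies on it):
# Remove the stray credentials so the agent falls back to the instance profile
sudo rm -rf /home/ubuntu/.aws   # or /root/.aws/credentials, depending on which user has one
sudo service codedeploy-agent restart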

AWS Elasticsearch Service IAM Role based Access Policy

I have been struggling to figure out how to communicate with the Amazon ES service from my EC2 instances.
The documentation clearly states that the Amazon ES service supports IAM User & Role based access policies. http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies
However, when I have this access policy for my ES domain:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789:role/my-ec2-role"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-west-2:123456789:domain/myDomain/*"
    }
  ]
}
I can't log into an EC2 instance and run a curl command against my Elasticsearch cluster.
Trying to do a simple curl of the _search API:
curl "http://search-myDomain.es.amazonaws.com/_search"
Produces an authentication error response:
{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet on resource: arn:aws:es:us-west-2:123456789:domain/myDomain/_search"}
Just to be extra safe, I put the AmazonESFullAccess policy on my IAM role, and it still doesn't work.
I must be missing something, because being able to programmatically interact with Elasticsearch from ec2 instances that use an IAM Role is essential to getting anything accomplished with the Amazon ES Service.
I also see this contradictory statement in the docs.
IAM-based Policy Example: You create IAM-based access policies by using the AWS IAM console rather than the Amazon ES console. For information about creating IAM-based access policies, see the IAM documentation.
That link to the IAM documentation goes to the IAM home page and contains exactly zero information about how to do it. Anyone got a solution for me?
When using the IAM service with AWS, you must sign your requests. curl doesn't support signed requests (which consist of hashing the request and adding a signature parameter to the request headers). You can use one of the AWS SDKs that have the signing algorithm built in, and then submit the request.
See:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html#signing-requests
You can find the SDKs for popular languages here:
http://aws.amazon.com/tools/
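If you just want a quick signed request from a shell for testing rather than a full SDK integration, one option is the third-party awscurl tool, which performs the SigV4 signing for you; a rough sketch, assuming your credentials are available in the environment or a profile (the domain and region are placeholders):
pip install awscurl
awscurl --service es --region us-west-2 "https://search-myDomain.es.amazonaws.com/_search"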
First, you said you can't log into an EC2 instance to curl the ES instance? You can't log in? Or you can't curl it from EC2?
I have my Elasticsearch (Service) instance open to the world (with nothing on it) and am able to curl it just fine, without signing. I changed the access policy to test, but unfortunately it takes forever to come back up after changing it...
My policy looks like this:
{ "Version": "2012-10-17", "Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": "*",
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:843348267853:domain/myDomain/*"
},
{
"Sid": "",
"Effect": "Allow",
"Principal": "*",
"Action": "es:*",
"Resource": "arn:aws:es:us-east-1:843348267853:domain/myDomain"
}
]
}
I realize this isn't exactly what you want, but start off with this (open to the world), curl from outside AWS and test it. Then restrict it; that way you're able to isolate the issue.
Also, I think you have an issue with the "Principal" in your access policy. You have your EC2 Role. I understand why you're doing that, but I think the Principal requires a USER, not a role.
See below:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-createupdatedomains.html#es-createdomain-configure-access-policies
Principal
Specifies the AWS account or IAM user that is allowed or denied access to a resource. Specifying a wildcard (*) enables anonymous access to the domain, which is not recommended. If you do enable anonymous access, we strongly recommend that you add an IP-based condition to restrict which IP addresses can submit requests to the Amazon ES domain.
EDIT 1
To be clear, you added the AmazonESFullAccess policy to the my-ec2-role? If you're going to use IAM access policies, I don't think you can have a resource-based policy attached to it (which is what you're doing).
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html
For some AWS services, you can grant cross-account access to your resources. To do this, you attach a policy directly to the resource that you want to share, instead of using a role as a proxy. The resource that you want to share must support resource-based policies. Unlike a user-based policy, a resource-based policy specifies who (in the form of a list of AWS account ID numbers) can access that resource.
Possibly try removing the access policy altogether?
Why don't you create a proxy with an Elastic IP and allow your proxy to access your ES?
There are basically three ways you can limit access to your ES:
Allow everyone
An IP whitelist
Signing requests with the access key and secret key provided by AWS.
I'm using two of them: in my PHP apps I prefer to use a proxy in front of the connection to ES, and in my Node.js app I prefer to sign my requests using the http-aws-es node module.
It's useful to create a proxy environment because my users need to access the Kibana interface to see some reports, and that's possible because they have the proxy configured in their browsers =)
I must recommend that you close off access to your ES indexes, because it's pretty easy to delete them: anyone can run curl -XDELETE https://your_es_address/index. You might say, "How will other users get my ES address?", and I will answer: "Security through obscurity isn't real security."
My security access policy is basically something like this:
http://pastebin.com/EUKT1ekX
I encountered this issue recently and the root problem is that none of the Amazon SDKs yet support calling Elasticsearch operations like search, put, etc.
The only workaround at the moment is to execute requests directly against the endpoint using signed requests:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
The example there is for calling EC2, but it can be modified to call Elasticsearch instead. Just change the "service" value to "es". From there, you have to fill in values for:
the endpoint (the full URL of your cluster, including the operation, without request parameters)
the host (the part between https:// and your canonical URI, like /_status)
the canonical URI, which is the URI after the first / inclusive (like /_status) but without the query string
the request parameters (everything after ? inclusive)
Note that I've only managed to get this working so far using AWS credentials as the assumption is that you pass in an access key and secret key to the various signing calls (access_key and secret_key in the example). It should be doable using IAM roles but you'll have to call into the security token service first to get temporary credentials that can be used to sign the request. Until you do that, be sure to edit your access policy on the Elasticsearch cluster to allow user creds (user/
You need to sign your request, and unfortunately this is no longer supported by the official elasticsearch library. Check this GitHub issue (https://github.com/elastic/elasticsearch-js/issues/1182#issuecomment-630641702).
They want to push their own cloud solution.

How should I set up my bucket policy so I can deploy to S3?

I've been working on this a long time and I am getting nowhere.
I created a user and it gave me
AWSAccessKeyId
AWSSecretKey
I created a bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::abc9876/*"
    }
  ]
}
Now when I use a gulp program to upload to the bucket I see this:
[20:53:58] Starting 'deploy'...
[20:53:58] Finished 'deploy' after 25 ms
[20:53:58] [cache] app.js
Process terminated with code 0.
To me it looks like it should have worked but when I go to the console I cannot see anything in my bucket.
Can someone tell me if my bucket policy looks correct and give me some suggestions on what I could do to test the uploading? Could I, for example, test this from the command line?
There are multiple ways to manage access control on S3. These different mechanisms can be used simultaneously, and the authorization of a request will be the result of the interaction of all the rules in all these mechanisms. Things can get confusing!
Let's try to make things easier to understand. You have:
IAM policies - these are policies you define for specific Users or Groups (or Roles, but let's not get into that...).
S3 bucket policies - these are policies that you define at the bucket level.
S3 ACLs (access control lists) - these are rules that you define both at the bucket level and the object level. This is the permissions area mentioned in a comment on another answer.
Whenever you send a request to S3, e.g. downloading an object, the request will be processed by an authorization system. This system will calculate the union of all the policies/rules described above, and then will follow a process that can be simplified as follows:
If there is any rule explicitly denying the request, it's denied. Period.
Otherwise, if there is any rule explicitly allowing the request, it's allowed. Period.
Otherwise, the request is denied.
Let's say you have all the mechanisms in place. For the request to be accepted, you must not have any rules Denying that request, and need to have at least one rule allowing that request.
Making your policies easier to understand...
My suggestion to you is to simplify your policies. Choose one access control mechanism and stick to that one.
In your specific situation, from your very brief description, I feel that using IAM policies could be a good idea. You can use either an IAM User Policy (that you define and attach specifically to your IAM User) or an IAM Group Policy (that you define and attach to a group your IAM User belongs to). Let's forget about IAM Roles; that is a whole different story.
Then delete your ACLs and Bucket Policies. Your requests should be allowed then.
As an additional hint, make sure the software you are using to upload objects to S3 is actually using those 2 API calls: PutObject and PutObjectAcl. Keep in mind that S3 supports multi-part upload, through the use of a different set of API calls. If your tool is doing multi-part uploads under the hood, then be sure to allow those API calls as well (many tools will, including the AWS CLI, and many SDKs have a higher level S3 API that will do that as well)!
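As a sketch of what that might look like as an IAM user policy when your tool does multipart uploads (the user name is a placeholder and the bucket name is the one from the question; the multipart-related action names are the standard S3 ones, but double-check against what your tool actually calls):
aws iam put-user-policy --user-name deploy-user --policy-name s3-upload --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": ["arn:aws:s3:::abc9876", "arn:aws:s3:::abc9876/*"]
    }
  ]
}'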
For more information on this matter, I'd suggest the following post from the AWS Security Blog:
IAM policies and Bucket Policies and ACLs! Oh My! (Controlling Access to S3 Resources)
You don't need to define "Principal": "*", since you have already created an IAM user.
The bucket policy looks fine; if there were a problem with access, it would have given you an appropriate error.
Just make sure your "Keyname" (the key name that uniquely identifies the object in a bucket) is correct when calling the AWS APIs.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html