Permission denied while Elastic Beanstalk is retrieving S3 file - amazon-web-services

I have files stored on S3 and wrote an .ebextensions config to automatically copy them to new instances. I'm receiving this error in the Elastic Beanstalk console:
[Instance: INSTANCEID Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPreBuild] command failed with error code 1: Error occurred during build: Failed to retrieve https://s3-us-west-1.amazonaws.com/MyBucket/MyFolder/_MyFile.txt: HTTP Error 403 : AccessDenied
My .ebextension config file has this section:
files:
  "/target/file/path" :
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
In attempting to make this file copying work, I've also relaxed permissions by giving the Elastic Beanstalk IAM role the standard read-only access policy for all of S3. Its policy is this:
{
  "Effect": "Allow",
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": "*"
}
Yet the prebuild copying step still fails. Did I give the source URL in the correct format? Is there another security entity/policy involved? Help please :)

The documentation is very sketchy on the subject (probably an ideal candidate for StackExchange Docs!).
To do this correctly with .ebextensions, you need to allow the Beanstalk instance IAM role in the bucket policy, set up an AWS::CloudFormation::Authentication auth block, and attach that auth block to the remote sources. This is kind of a hybrid of all the other answers, but all of them failed in one way or another for me.
Assuming your IAM instance role is aws-elasticbeanstalk-ec2-role:
Set your S3 bucket to allow the Beanstalk IAM role. Edit the "bucket policy":
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<beanstalk_iam_role_arn>"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
where:
beanstalk_iam_role_arn = the fully qualified instance IAM role ARN. See the "IAM role" associated with a running instance if available, or see the environment configuration. Example: arn:aws:iam::12345689:role/aws-elasticbeanstalk-ec2-role
bucket_name = your bucket name
In your .ebextensions/myconfig.config, add an S3 authentication block that uses your IAM instance role:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["bucket_name"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
Set bucket_name appropriately
Define a remote file and attach the S3 Authentication block:
"/etc/myfile.txt" :
mode: "000400"
owner: root
group: root
authentication: "S3Auth" # Matches to auth block above.
source: https://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt
Set your source URL appropriately
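Putting the pieces together, a complete .ebextensions/myconfig.config would look roughly like this (a sketch only; mybucket and the eu-west-1 URL are the placeholder values from the snippets above, and the bucket policy from step 1 still has to allow the role):

Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["mybucket"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"

files:
  "/etc/myfile.txt":
    mode: "000400"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt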

Similar to chaseadamsio's answer, you can configure the role given to the EC2 instance with a policy to access S3 resources, then use the pre-installed AWS CLI utilities to move files around.
The way I approached this is to create a role dedicated to the given EB application, then attach a policy similar to:
"Statement": [
{
"Sid": "<sid>",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<your_bucket_path>/*"
]
}
]
This gives your instance access. Then, to get the files, add a commands block to your config such as:
commands:
  01-get-file:
    command: aws s3 cp s3://<your_bucket_path>/your-file.txt /home/ec2-user
  02-execute-actions:
    [unpack, run scripts, etc..]
Obviously you can use other AWS CLI utilities as needed. I found this solved a lot of problems I was having with S3 access, and it makes deployment a lot easier.
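For instance, here is a sketch of a fuller commands block that downloads and unpacks an archive before running a setup script (the archive name, target directory, and setup.sh are hypothetical):

commands:
  01-get-archive:
    command: aws s3 cp s3://<your_bucket_path>/app.tar.gz /tmp/app.tar.gz
  02-unpack-archive:
    command: mkdir -p /opt/myapp && tar -xzf /tmp/app.tar.gz -C /opt/myapp
  03-run-setup:
    command: bash /opt/myapp/setup.sh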

I found a solution to overcome this error. It turns out adding a Resources section to the .ebextensions config file makes it work. The entire file becomes:
files:
  "/target/file/path" :
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: _MyBucket
At this point, I don't know enough to grok why it has to be this way. Hopefully it can help someone who's stuck move forward and eventually gain a better understanding. I based my answer on this link: https://forums.aws.amazon.com/message.jspa?messageID=541634

An alternative to setting the .ebextensions config would be to set a policy on the aws-elasticbeanstalk-ec2-role within the IAM Manager (or to create a new role specifically for your Elastic Beanstalk environments to sandbox your autoscaled EC2 instances).
To do so, go to the IAM manager within the web console and click on "Roles" on the left side. You should see your instance role in the list of roles; clicking on it will take you to the administration page for that particular role. Attach a new role policy to the role under "Permissions" with a policy document matching what you want your EC2 to have permission to do (in this case, you'd give it a policy to access an S3 bucket called _MyBucket), and you should no longer need the Resources section in your .ebextensions config.
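For example, a minimal policy sketch for read access to that bucket (using the placeholder bucket name _MyBucket from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::_MyBucket",
        "arn:aws:s3:::_MyBucket/*"
      ]
    }
  ]
}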

If the IAM role for the machine is configured with access to the file, you can do the following in .ebextensions:
commands:
  01a_copy_file:
    command: aws s3 cp s3://bucket/path/file /destination/

Related

Does anyone know where this goes in the instances?

{
  "Sid": "ElasticBeanstalkHealthAccess",
  "Action": [
    "elasticbeanstalk:PutInstanceStatistics"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:elasticbeanstalk:*:*:application/*",
    "arn:aws:elasticbeanstalk:*:*:environment/*"
  ]
}
That's a part of the IAM profile for the elastic beanstalk instance.
If you use the AWSElasticBeanstalkWebTier or AWSElasticBeanstalkWorkerTier managed policy on your IAM instance profile, the ElasticBeanstalkHealthAccess permissions are already included.
See https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
There are two IAM roles associated with an Elastic Beanstalk Environment:
Service role: used to manage the environment
Instance role: role assumed by the running application. It is used to provide access to other AWS services.
You need to find your instance role in the IAM console and attach the permission that you see in the documentation. This will allow your application to send statistics.
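As a sketch, assuming the instance role is named aws-elasticbeanstalk-ec2-role and the statement above has been wrapped in a full policy document saved as health-access.json, you could attach it as an inline policy from the CLI:

aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name ElasticBeanstalkHealthAccess \
  --policy-document file://health-access.json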

Terraform unable to assume IAM Role

I am trying to get started with Terraform and am using GitLab CI/CD to interact with it. My Runner is unable to assume the IAM Role which has elevated privileges to create AWS resources. My Google-fu on this has failed me.
The error received is:
Error: error configuring Terraform AWS Provider: IAM Role
(my:arn) cannot be assumed. There are a number of possible causes of this - the most common are:
The credentials used in order to assume the role are invalid
The credentials do not have appropriate permission to assume the role
The role ARN is not valid
I have created an access/secret key in IAM and have attempted supplying these as GitLab CI/CD variables, as environment variables that I directly export in my before_script, and even via the not-recommended approach of hardcoding them into the provider stanza. No matter what, I still get this same error.
What is extra strange is that AWS shows that the key is being used. The "Last Used" column will always reflect a timestamp of the last attempt at running the pipeline. For better or worse, the key is part of my root AWS account - this is a sandbox project and I don't have any IAM Users, so, it's not clear to me how Terraform is unable to use these credentials to assume a Role when, according to AWS, it's able to access my account with them, and my account has root privileges.
Here is my provider.tf:
terraform {
  required_version = ">= 0.14"
  backend "s3" { }
}

provider "aws" {
  region     = "us-east-1"
  access_key = "redacted"
  secret_key = "redacted"

  assume_role {
    role_arn = "arn:aws:iam::redacted:role/gitlab-runner-role"
  }
}
Here is the relevant section of my .gitlab-ci.yml for this stage:
.terraform_init: &terraform_init |-
  terraform init -backend-config="bucket=my-terraform-state" -backend-config="region=us-east-1" -backend-config="key=terraform.tfstate"

tf-plan:
  image:
    name: hashicorp/terraform
    entrypoint: [""]
  stage: plan
  before_script:
    - *terraform_init
  script:
    - terraform plan -out=tfplan.plan
    - terraform show --json tfplan.plan | convert_report > tfplan.json
  needs:
    - job: tf-val
  tags:
    - my-runner
My main.tf only contains a basic aws_instance stanza and my terraform validate stage (omitted above) says it's in ship-shape. These are the only 3 files in my repo.
My gitlab-runner-role only contains one Policy, gitlab-runner-policy, whose JSON is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*/*",
        "arn:aws:s3:::my-terraform-state"
      ]
    }
  ]
}
TIA for any advisement... really banging my head up against the wall on this one.
Turns out that assume_role is only needed for cross-account work. I was doing all of the work within my own account, so removing it allowed Terraform to just use the keys to do the work without needing a different IAM role (or it's able to do what it needs via the role that is attached to the Runner as an instance profile). It's not clear to me why specifying assume_role would result in an error, since the access should be there, but removing it has fixed the issue. (One likely explanation: AWS does not allow root user credentials to call sts:AssumeRole, so the assume_role block fails even though the keys themselves are valid.)
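For reference, a sketch of the simplified provider.tf, assuming the keys are supplied through the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables rather than hardcoded:

terraform {
  required_version = ">= 0.14"
  backend "s3" { }
}

provider "aws" {
  region = "us-east-1"
  # Credentials come from the environment (AWS_ACCESS_KEY_ID /
  # AWS_SECRET_ACCESS_KEY) or an attached instance profile;
  # no assume_role block is needed for same-account access.
}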

Permissions to READ EKS cluster from EC2

I have an EKS cluster and an EC2 instance. I would like to create an instance profile and attach it to the EC2 instance - this profile should allow ONLY READ access to the EKS cluster.
Will the following policy be apt for this requirement?:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "eks:ListNodegroups",
        "eks:DescribeFargateProfile",
        "eks:ListTagsForResource",
        "eks:ListAddons",
        "eks:DescribeAddon",
        "eks:ListFargateProfiles",
        "eks:DescribeNodegroup",
        "eks:ListUpdates",
        "eks:DescribeUpdate",
        "eks:AccessKubernetesApi",
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:DescribeAddonVersions"
      ],
      "Resource": "*"
    }
  ]
}
Depends on what you mean by "read" access.
If you mean "read" from AWS perspective as in being able to use the AWS CLI to tell you about EKS, yes that would be sufficient to get you started. This will not include any kubectl commands.
But if you mean read as in being able to execute kubectl commands against the cluster, then you will not be able to achieve that with this.
To implement read access to the cluster itself using kubectl commands, your EC2 instance will need a minimum of the following IAM permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
With this, your EC2 will be able to execute eksctl utils write-kubeconfig --cluster=cluster-name to configure the kubeconfig. This also assumes you have the required components installed on your EC2 to run kubectl.
You also need to set up permissions within your cluster because the IAM permissions alone don't actually grant any visibility within the cluster itself.
The role you assign to your EC2 would need to be added to the aws-auth configmap in the kube-system namespace. See Managing users or IAM roles for your cluster from AWS docs for examples.
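As a sketch, the mapRoles entry in the aws-auth ConfigMap might look like this (the role ARN, username, and group name are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/my-ec2-readonly-role
      username: ec2-readonly
      groups:
        - eks-readonly-group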
Unfortunately I don't believe there's a simple out-of-the-box RBAC role you can use that gives you read-only access to the entire cluster. Kubernetes provides four user-facing roles, and of them, only cluster-admin (bound by default to the system:masters group) grants cluster-wide access.
Have a look at Kubernetes Using RBAC Authorization documentation - specifically on user-facing roles.
You will need to design a permission strategy to fit your needs, but you do have the default view role that you can start from. The default view user-facing role is a ClusterRole that is normally granted within a specific namespace through a RoleBinding, i.e. it was designed / intended to be used in a namespace-specific capacity.
Permissions and RBAC for Kubernetes is a very deep rabbit-hole.
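That said, if cluster-wide read-only access is acceptable, a minimal sketch binding the built-in view ClusterRole to the group used in the aws-auth example above would be (the group name is a placeholder):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-readonly-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: eks-readonly-group

Note that view intentionally excludes Secrets and does not cover most cluster-scoped resources, so treat it as a starting point rather than full read-only coverage.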

Curl command to copy a .sh file from AWS S3 location

What is the curl command (in a .config file) to copy a .sh file from an AWS S3 bucket?
I need to copy the fluentd installer file from an S3 location and install it on the EC2 instance. I need to replace the command below
curl -L https://toolbelt.treasuredata.com/sh/install-amazon1-td-agent3.sh
so that it copies from the S3 location instead.
I am assuming you mean a .config file that would be deployed with something like Elastic Beanstalk.
Firstly, I would highly recommend that you turn off the bucket's public availability if it is sensitive data.
Secondly, for this process, you should create a new IAM Policy (or a new Role), with access to the specific file in the Bucket you want to copy. Again, I am assuming Elastic Beanstalk for this answer.
Elastic Beanstalk environments spin up with a default role named "aws-elasticbeanstalk-ec2-role".
Here is an example Policy that you can create, and then attach to the Role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::your-bucket-name/your-file.txt"
      ]
    }
  ]
}
The .config file contents should then be as follows:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["your-bucket-name"]
          roleName: "aws-elasticbeanstalk-ec2-role"

files:
  "/path/to/your-file.txt" :
    mode: "000644"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3.REGION.amazonaws.com/your-bucket-name/your-file.txt
Replace:
REGION with your Bucket's region,
your-bucket-name with the Bucket's name,
your-file.txt with your file's name.
What this does
.config files are run before initialisation, and therefore allow you to do things before your code is run.
The code here creates a new file on the instance that it's deployed to, and copies the contents from the target in the S3 Bucket.
This file therefore exists before your own code is run, and you can safely use it at runtime.
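Since the original goal was to run an installer script, here is a sketch of a commands block that executes the copied file (the path matches the files entry above; for a shell script you would use a .sh name instead of .txt):

commands:
  01_run_installer:
    command: bash /path/to/your-file.txt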

Deploying MVC application on AWS (.NET CORE 3.1) - Error: Environment must have instance profile associated with it

The main problem is that I don't understand where I have to write these variables in the application.
I cannot deploy my MVC application on AWS.
After deploying I get the error: Environment must have instance profile associated with it.
I found out the answer here:
AWS Elastic Beanstalk - Environment must have instance profile associated with it
But I don't understand where I have to write these variables in the program.
OptionSettings.member.1.Namespace = aws:autoscaling:launchconfiguration
OptionSettings.member.1.OptionName = IamInstanceProfile
OptionSettings.member.1.Value = aws-elasticbeanstalk-ec2-role
I got the same error on my Elastic Beanstalk environment page. When I checked the Visual Studio output message, it said:
"Caught AmazonIdentityManagementServiceException whilst setting up role: User: arn:aws:iam::77485*****:user/vs_delpoy_agent is not authorized to perform: iam:GetInstanceProfile on resource: instance profile aws-elasticbeanstalk-ec2-role"
I solved this by creating my own policy on AWS's IAM page. That policy contains JSON like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetInstanceProfile"
      ],
      "Resource": "*"
    }
  ]
}
Then attach this newly created policy to your group.
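As a sketch, assuming the JSON above is saved as get-instance-profile.json and the deploy user belongs to a group named vs-deploy-group (a hypothetical name), the same can be done from the CLI:

aws iam create-policy \
  --policy-name AllowGetInstanceProfile \
  --policy-document file://get-instance-profile.json

aws iam attach-group-policy \
  --group-name vs-deploy-group \
  --policy-arn arn:aws:iam::<account-id>:policy/AllowGetInstanceProfile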