Pass a config file/URL from AWS S3 to Beanstalk securely - amazon-web-services

I have a Tomcat instance that runs in Beanstalk, and in the Beanstalk configuration I pass in a config file URL as a parameter, like so:
-Dconfig.url=https://s3.amazonaws.com/my-bucket/my-file.txt
This file is in S3, but I have to set its permissions to 'Everyone': 'Open', which I don't like doing because it is unsafe, yet I can't seem to find any other way of doing this. I've looked at the URL signing method, but that isn't a good solution because both the file and the Beanstalk app are updated frequently and I'd like to have all of this automated, i.e. if the app breaks and restarts it would not be able to read the file because the signing key would have expired.
I've looked at the docs regarding roles but cannot seem to get this working. I've added a custom policy to the aws-elasticbeanstalk-ec2-role (shown below), but it isn't doing anything: my app still cannot access files in the bucket. Could someone please tell me how, or whether, this can be fixed?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}
Is there another way I can allow the Beanstalk application to read files in an S3 bucket? Any help is appreciated.
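For reference, the end goal is for the app to read the object through the AWS SDK using the instance-profile credentials instead of a public URL. Below is a minimal boto3 (Python) sketch of that mechanism, purely for illustration; my actual app is Java/Tomcat, and the bucket and key are the ones from the question.

import boto3

# Credentials are resolved automatically from the EC2 instance profile
# (aws-elasticbeanstalk-ec2-role), so the object does not need to be public.
s3 = boto3.client("s3")
response = s3.get_object(Bucket="my-bucket", Key="my-file.txt")
config_text = response["Body"].read().decode("utf-8")
print(config_text)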

Related

Is it impossible to upload a file from a different region than the S3 bucket's region? (IllegalLocationConstraintException)

I deployed my Django project using AWS Elastic Beanstalk and S3,
and when I tried to upload a profile avatar it showed a Server Error (500).
My Sentry log shows:
"An error occurred (IllegalLocationConstraintException) when calling the PutObject operation: The eu-south-1 location constraint is incompatible for the region specific endpoint this request was sent to."
I think this error appeared
because my bucket is in eu-south-1, but I am trying to access it and create a new object from Seoul, Korea.
Also, the AWS documentation says IllegalLocationConstraintException indicates that you are trying to access a bucket from a different Region than where the bucket exists. To avoid this error, use the --region option. For example: aws s3 cp awsexample.txt s3://testbucket/ --region ap-east-1.
(https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html)
but this solution might only apply when uploading a file from the AWS CLI...
I tried changing my bucket policy by adding this, but it doesn't work.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::{BucketName}/*"
        }
    ]
}
I don't know what I should do, or why access from other regions isn't allowed.
How can I allow creating, updating and removing objects in my bucket from all around the world?
This is my first deployment, please help me 🥲
Is your Django Elastic Beanstalk instance in a different region from the S3 bucket? If so, you need to set the AWS_S3_REGION_NAME setting, as documented in the django-storages documentation.
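For illustration, and assuming the project uses django-storages' S3 backend (the question doesn't say which storage library is in use), the relevant settings would look something like this, with the bucket name as a placeholder:

# settings.py - sketch assuming django-storages with the S3Boto3Storage backend
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "your-bucket-name"  # placeholder
AWS_S3_REGION_NAME = "eu-south-1"  # must match the region the bucket was created in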

How to automate visualization of S3 objects by using Lambda?

The thing is that I have objects (emails) in my S3 bucket and I want to be able to view them, since they're not easy to open and right now you have to do it manually. I was told there was a way to use a Lambda function with S3 to create some kind of HTML view page to see the objects.
Hope someone can help me do that.
Based on what you have provided, here's what I understand:
You have email(s) in S3
You want to read/visualize them via a website
There are many ways to do it. The simplest of all is using S3's static website hosting capability. See this AWS doc for step-by-step details.
This example assumes your emails are in .txt format. If you have any other format (e.g. PDF, EML, etc.) you will need a corresponding parser library and logic to open and read them; in that case this example may not work, and you may want to look at other AWS options such as AWS Lightsail or AWS Amplify, depending on your requirements.
Based on the AWS doc, here's what you can do at a high level:
Create a basic index.html and upload it to your S3 bucket.
Create a basic error.html and upload it to your S3 bucket.
Under your bucket -> Properties, edit and enable 'Static website hosting'.
e.g. https://s3.console.aws.amazon.com/s3/buckets/yours3bucket?region=us-west-1&tab=properties
This will give you a website URL like the one below.
http://yours3bucket.s3-website-us-west-1.amazonaws.com
Under your bucket -> Permissions, uncheck "Block all public access".
[Caution: this will open your S3 bucket to the world. For better access control, consider using IAM.]
Add a bucket policy.
[This example shows a policy that enables access for anyone. You should consider restricting it to certain users using IAM.]
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": "arn:aws:s3:::yours3bucket/*"
        }
    ]
}
After that, simply launch your static website:
http://yours3bucket.s3-website-us-west-1.amazonaws.com/index.html
Each of your emails (assuming they're in text format) should be accessible as below:
http://yours3bucket.s3-website-us-west-1.amazonaws.com/youremailobject.txt
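If you'd rather script these steps than click through the console, a rough boto3 sketch of the same setup might look like the following. The bucket name is the placeholder used above, Block Public Access must already be disabled, and the same caution applies: this makes the bucket world-readable.

import json
import boto3

s3 = boto3.client("s3")
bucket = "yours3bucket"  # placeholder bucket name from the steps above

# Upload the basic index and error pages
s3.upload_file("index.html", bucket, "index.html", ExtraArgs={"ContentType": "text/html"})
s3.upload_file("error.html", bucket, "error.html", ExtraArgs={"ContentType": "text/html"})

# Enable static website hosting on the bucket
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Attach the public-read bucket policy shown above
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))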

AWS Amplify CLI creates inaccessible bucket for hosting

I have got a strange problem with the Amplify CLI. When adding hosting to my Angular app through
amplify hosting add
and subsequently calling
amplify publish
the link provided at the end of the process leads to a webpage that just shows an XML document telling me access was denied. What is happening here? It seems to me like the hosting bucket has a wrong policy attached, but why would the Amplify CLI create a private bucket?
Can someone shed some light here?
Here is the bucket policy created by the CLI:
{
    "Version": "2012-10-17",
    "Id": "MyPolicy",
    "Statement": [
        {
            "Sid": "APIReadForGetBucketObjects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xxx"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::xxx/*"
        }
    ]
}
When using the amplify init command, be very careful when passing the path to the project build.
For Angular projects the default value is dist/{project-name}, so when initializing Amplify, pass this path correctly, like this:
Distribution Directory Path: dist/{project-name}
If the path is not passed correctly, the access denied message will appear even though everything seems to be working, simply because Amplify cannot find the files.
Finally, if the error still continues to happen even then, here is a reference to other possible reasons for it.
I just encountered this as well. If I go into the CloudFront distribution and update the origin to include an Origin Path pointing to the main subdirectory in the S3 bucket (the folder has the same name as the Amplify project), the problem appears to be resolved.
I just came across this issue and it ended up being a safety setting in S3.
1. Go to S3 > Public access settings for this account > untick those settings
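If you want to make that change programmatically instead of in the console, here is a hedged sketch using boto3's S3 Control API; the account ID is a placeholder, and loosening these flags has the same security implications as unticking them in the console.

import boto3

# Account-level "Block public access" settings, i.e. the console's
# "Public access settings for this account".
s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId="123456789012",  # placeholder: your AWS account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)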

Is it possible to restrict access to S3 data from EMR (zeppelin) by IAM roles?

I have set up an EMR cluster with Zeppelin installed on it. I configured Zeppelin with Active Directory authentication and I have associated those AD users with IAM roles. I was hoping to restrict access to specific resources on S3 after logging into Zeppelin using the AD credentials. However, it doesn't seem to be respecting the permissions the IAM role has defined. The EMR role has S3 access, so I am wondering if that is overriding the permissions, or whether that is actually the only role it cares about in this scenario.
Does anyone have any idea?
I'm actually about to try to tackle this problem this week. I will try to post updates as I have some. I know that this is an old post, but I've found so many helpful things on this site that I figured it might help someone else even if it doesn't help the original poster.
The question was if anyone has any ideas, and I do have an idea. So even though I'm not sure if it will work yet, I'm still posting my idea as a response to the question.
So far, what I've found isn't ideal for large organizations because it requires some per user modifications on the master node, but I haven't run into any blockers yet for a cluster at the scale that I need it to be. At least nothing that can't be fixed with a few configuration management tool scripts.
The idea is to:
Create a vanilla Amazon EMR cluster
Configure SSL
Configure authentication via Active Directory
(this step is what I am currently on) Configure Zeppelin to use impersonation (i.e. run the actual notebook processes as the authenticated user), which so far seems to require creating a local OS (Linux) user (with a username matching the AD username) for each user that will be authenticating to the Zeppelin UI. Employing one of the impersonation configurations can then cause Zeppelin run the notebooks as that OS user (there are a couple of different impersonation configurations possible).
Once impersonation is working, manually configure my own OS account's ~/.aws/credentials and ~/.aws/config files.
Write a Notebook that will test various access combinations based on different policies that will be temporarily attached to my account.
The idea is to have the Zeppelin notebook processes kick off as the OS user that is named the same as the AD authenticated user, and then have an ~/.aws/credentials and ~/.aws/config file in each users' home directory, hoping that that might cause the connection to S3 to follow the rules that are attached to the AWS account that is associated with the keys in each user's credentials file.
I'm crossing my fingers that this will work, because if it doesn't, my idea for how to potentially accomplish this will become significantly more complex. I'm planning on continuing to work on this problem tomorrow afternoon. I'll try to post an update when I have made some more progress.
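For anyone trying the same approach, the per-user files mentioned above would look roughly like this; the profile name, keys and region are placeholders for whatever IAM user or role is mapped to each AD user:

# ~/.aws/credentials (one per OS user created for each AD user)
[default]
aws_access_key_id = AKIAXXXXEXAMPLE
aws_secret_access_key = xxxxEXAMPLESECRETxxxx

# ~/.aws/config
[default]
region = us-east-1
output = json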
One way to allow access to S3 by IAM user/role is to meet these 2 conditions:
Create an S3 bucket policy matching S3 resources with the IAM user/role. This should be done in S3 / your bucket / Permissions / Bucket Policy.
Example:
{
    "Version": "2012-10-17",
    "Id": "Policy...843",
    "Statement": [
        {
            "Sid": "Stmt...434",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account-id>:user/your-s3-user",
                    "arn:aws:iam::<account-id>:role/your-s3-role"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::target-bucket/*",
                "arn:aws:s3:::other-bucket/specific-resource"
            ]
        }
    ]
}
Allow S3 actions for your IAM user/role. This should be done in IAM/Users/your user/Permissions/Add inline policy. Example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:HeadBucket",
                "s3:ListBucket"
            ],
            "Resource": "*"
        }
    ]
}
Please note this might be not the only and/or best way, but it worked for me.

Elastic Beanstalk deployment stuck on updating config settings

I've been testing my continuous deployment setup, trying to get to a minimal set of IAM permissions that will allow my CI IAM group to deploy to my "staging" Elastic Beanstalk environment.
On my latest test, my deployment got stuck. The last event in the console is:
Updating environment staging's configuration settings.
Luckily, the deployment will time out after 30 minutes, so the environment can be deployed to again.
It seems to be a permissions issue, because if I grant s3:* on all resources, the deployment works. It seems that when calling UpdateEnvironment, Elastic Beanstalk does something to S3, but I can't figure out what.
I have tried the following policy to give EB full access to its resource bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/_runtime/_embedded_extensions/APP/*",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID",
                "arn:aws:s3:::elasticbeanstalk-REGION-ACCOUNT/resources/environments/ENV_ID/*"
            ]
        }
    ]
}
Where REGION, ACCOUNT, APP, and ENV_ID are my AWS region, account number, application name, and environment ID, respectively.
Does anyone have a clue which S3 action and resource EB is trying to access?
Shared this on your blog already, but this might have a broader audience, so here goes:
Following up on this, the Elastic Beanstalk team has provided me with the following answer regarding the S3 permissions:
"[...]Seeing the requirement below, would a slightly locked down version work? I've attached a policy to this case which will grant s3:GetObject on buckets starting with elasticbeanstalk. This is essentially to allow access to all elasticbeanstalk buckets, including the ones that we own. The only thing you'll need to do with our bucket is a GetObject, so this should be enough to do everything you need."
So it seems like Elastic Beanstalk accesses buckets outside of your own account (ones that AWS owns) in order to work properly, which is kind of bad, but that's just the way it is.
Given this, the following statements will be sufficient for getting things to work with S3:
{
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
        "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
    ],
    "Effect": "Allow"
},
{
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::elasticbeanstalk*",
    "Effect": "Allow"
}
Obviously, you need to wrap this into a proper policy statement that IAM understands. All your previous assumptions about IAM policies have proven right though so I'm guessing this shouldn't be an issue.
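For completeness, wrapped into a full policy document that IAM will accept, it would look something like this (region and account ID placeholders as above):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>",
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/",
                "arn:aws:s3:::elasticbeanstalk-<region>-<account_id>/*"
            ],
            "Effect": "Allow"
        },
        {
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::elasticbeanstalk*",
            "Effect": "Allow"
        }
    ]
}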