I have this policy rule in my S3 bucket called aws-coes:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::aws-coes/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpc": "vpc-foo"
        }
      }
    }
  ]
}
I was expecting that only the machines in my VPC "vpc-foo" would be able to get objects from my bucket, but now no machine can get anything.
Did I do something wrong here?
I also followed the steps in this post, but nothing worked: https://blog.adminfactory.net/allow-access-to-s3-bucket-only-from-ec2-instances.html
I once had a similar issue. The following points come to mind:
The policy looks good. The s3:GetObject action does not need to reference the bucket as a resource; a wildcard path pointing at the objects is sufficient. The policy examples [1] in the docs clearly state that fact.
You must use the VPC ID as the value for the aws:sourceVpc condition. I am just mentioning it to make sure that you are not accidentally using the VPC ARN. [2]
What is also interesting is that people most likely use the aws:sourceVpc condition to restrict access (i.e. in a deny policy), not to whitelist traffic. This is most likely not a functional issue, but I want to mention it nonetheless. From a security perspective it is probably safer to restrict access to the S3 bucket (as described in the AWS docs) and attach an EC2 instance role which grants access to the S3 bucket. This way, all EC2 instances within a particular VPC are able to access the S3 bucket, but other (possibly malicious) network entities are not. A minimal sketch of that deny variant follows below.
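Here is that sketch, modelled on the example in [2] and reusing the bucket name and VPC ID from the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideVpc",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aws-coes/*",
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-foo"
        }
      }
    }
  ]
}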
I would double-check whether the requests from your EC2 instance are really routed through the VPC endpoint. As mentioned in the docs [3], it is crucial for the traffic to originate from an AWS VPC endpoint. Inside the VPC this is accomplished by routing the traffic over a dedicated route inside the AWS network instead of through the Internet Gateway. Could you please double-check that you added the VPC endpoint to your route table correctly? One way to check this is to create a public S3 bucket and access it from the EC2 instance. Subsequently, attach a VPC endpoint policy which denies all S3 traffic (see the sketch below). Then try to access the S3 bucket from your EC2 instance again. If access is denied the second time, you know that the VPC endpoint is probably being used and traffic is routed correctly inside the VPC.
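A minimal sketch of such a deny-all VPC endpoint policy for this test (my assumption of what it could look like; attach it to the VPC endpoint, not to the bucket):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllS3ForTesting",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}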
Depending on the size of your organization there might be other IAM controls in place which deny the access. This is probably not the issue, but it might be worth checking whether your company uses AWS Organizations and has an SCP which denies access (a hypothetical example follows below). Also check that there is no explicit deny, e.g. on your EC2 instance role. Take a look at the IAM evaluation logic [4] for more information.
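For illustration, a hypothetical SCP like this would override any Allow in the bucket policy (note that SCPs do not use a Principal element):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyS3Read",
      "Effect": "Deny",
      "Action": "s3:GetObject",
      "Resource": "*"
    }
  ]
}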
You did not mention in your question whether you are using one single AWS account or a multi-account scenario, e.g. an S3 bucket in account A and a VPC in account B. If this is the case, please check out the docs in [5], since the policy evaluation logic changes when the context authority and the bucket owner differ. Using the aws:sourceVpc condition cross-account is probably not even possible. [6]
I hope some of these points are helpful to track the issue down.
References
[1] https://docs.aws.amazon.com/AmazonS3/latest/dev/example-policies-s3.html#iam-policy-ex0
[2] https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html#example-bucket-policies-restrict-access-vpc
[3] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcevpc
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html#policy-eval-denyallow
[5] https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-auth-workflow-object-operation.html
[6] https://stackoverflow.com/a/52517646/10473469
FYI, internally in the EC2-to-S3 networking stack there are certain edge cases where the SourceVPC header is not passed on requests to S3; in those cases you'll need to use a VPC endpoint condition (aws:sourceVpce) instead.
Source: I used to do a lot of S3 support at AWS.
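To illustrate the VPC endpoint condition, here is a sketch of the question's policy rewritten with aws:sourceVpce; the endpoint ID vpce-1a2b3c4d is a placeholder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aws-coes/*",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}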
Related
Question: If I add a VPC to the Lambda, does it lose access to AWS services like DynamoDB?
My Lambda needs to fetch from two HTTPS services (technically one is wss). As I understand Lambdas, they can't reach anything, even AWS services, unless given access. The Lambda was already able to access DynamoDB tables, but I wanted to give it access to the REST services as well. I read somewhere that a Lambda can't really connect almost anywhere without associating it with a VPC. To do that, I added an inline policy as described at AWS Lambda: The provided execution role does not have permissions to call DescribeNetworkInterfaces on EC2
The Lambda has a custom role which has AWS Policies:
AmazonS3FullAccess
AmazonAPIGatewayInvokeFullAccess
AmazonDynamoDBFullAccess
AWSLambdaBasicExecutionRole
plus an inline policy (literally from the SO link above)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeInstances",
        "ec2:AttachNetworkInterface"
      ],
      "Resource": "*"
    }
  ]
}
As long as you configure the Lambda to use a subnet in your VPC that has internet access, it will be able to reach DynamoDB just fine. I suggest you specify two subnets for high availability. If you use private subnets, you'll need to create NAT gateways so that they have internet access. Access to AWS services can get a bit more complex if you're using something like VPC endpoints, but if you're not using those in your VPC, it's not something you need to worry about.
Also, you really only need to use VPCs/subnets with your Lambda if it needs access to resources that reside within the VPC (such as an RDS cluster, or some API that is not publicly available). Otherwise, if you don't specify a VPC, your Lambda will have internet access by default.
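To make the VPC configuration above concrete, here is a minimal CloudFormation sketch; the subnet IDs, security group ID, role, runtime, and code location are placeholder assumptions of mine:
{
  "MyFunction": {
    "Type": "AWS::Lambda::Function",
    "Properties": {
      "Handler": "index.handler",
      "Runtime": "nodejs18.x",
      "Role": { "Fn::GetAtt": ["MyFunctionRole", "Arn"] },
      "Code": { "S3Bucket": "my-code-bucket", "S3Key": "function.zip" },
      "VpcConfig": {
        "SubnetIds": ["subnet-11111111", "subnet-22222222"],
        "SecurityGroupIds": ["sg-33333333"]
      }
    }
  }
}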
I am working on an AWS SAM project and I have a requirement to give access to my S3 bucket to multiple IAM users from unknown AWS accounts, but I can't make the bucket publicly accessible. I want to secure my bucket, and I also want any IAM user from any AWS account to be able to access the contents of my S3 bucket. Is this possible?
Below is the policy I tried, which worked perfectly.
{
  "Version": "2012-10-17",
  "Id": "Policy1616828964582",
  "Statement": [
    {
      "Sid": "Stmt1616828940658",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/STS_Role_demo"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*"
    }
  ]
}
The above policy is for one user, but I want any user from another AWS account to be able to access my contents without making the bucket and objects public. How can I achieve this?
This might be possible using a set of Conditions on the incoming requests.
I can think of two options:
You create an IAM role that your SAM application uses even when running in other accounts
You create S3 bucket policies that allow unknown users access
If you decide to look into S3 bucket policies, I suggest using an S3 Access Point to better manage access policies.
Access points are named network endpoints that are attached to buckets that you can use to perform S3 object operations, such as GetObject and PutObject. Each access point has distinct permissions and network controls that S3 applies for any request that is made through that access point. Each access point enforces a customized access point policy that works in conjunction with the bucket policy that is attached to the underlying bucket.
You can use a combination of S3 Conditions to restrict access. For example, your SAM application could include specific condition keys when making S3 requests, and the bucket policy would then allow access based on those conditions. You can also apply global IAM conditions to S3 policies; a hypothetical sketch follows below.
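For example (the user agent string my-sam-app/1.0 is an arbitrary placeholder that your SAM application would have to send on every request):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::new-demo-bkt/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "my-sam-app/1.0"
        }
      }
    }
  ]
}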
This isn't great security though; malicious actors might be able to figure out the headers and spoof requests to your bucket. As the docs note for condition keys such as aws:UserAgent:
This key should be used carefully. Since the aws:UserAgent value is provided by the caller in an HTTP header, unauthorized parties can use modified or custom browsers to provide any aws:UserAgent value that they choose. As a result, aws:UserAgent should not be used to prevent unauthorized parties from making direct AWS requests. You can use it to allow only specific client applications, and only after testing your policy.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
I would like to provide data that should be very simple for clients to download to their instances. Ideally, automatically via the post_install script option of AWS ParallelCluster.
However, it seems like this requires a lot of setup, as is described in this tutorial by AWS:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-instance-access-bucket/
This is not feasible for me. Clients should not have to create IAM roles.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
export AWS_ACCESS_KEY_ID=<key-id>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws s3 cp s3://<bucket> . --recursive
Unfortunately, this is also not ideal, as I would like to provide ready-to-use AWS ParallelCluster post_install scripts. These scripts should automatically download the required data on cluster startup.
Is there any way to allow all instances created by a specific AWS account access to an S3 bucket?
Yes. It's a two-step process. In summary:
1) On your side, the bucket must trust the account id of the other accounts that will access it, and you must decide which calls you will allow.
2) On the other accounts that will access the bucket, the instances must be authorised to run AWS API calls on your bucket using IAM policies.
In more detail:
Step 1: let's work through this and break it down.
On your bucket, you'll need to configure a bucket policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
You can find more examples of bucket policies in the AWS documentation.
WARNING 1: "arn:aws:iam::ACCOUNT_ID:root" will trust everything that has permissions to connect to your bucket on the other AWS account. This shouldn't be a problem for what you're trying to do, but it's best you completely understand how this policy works to prevent any accidents.
WARNING 2: Do not grant s3:* - you will need to scope down the permissions to actions such as s3:GetObject etc. AWS provides a policy generator tool to help you build these policies. s3:* includes delete permissions, which if used incorrectly could result in nasty surprises. A scoped-down sketch follows below.
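Here is that sketch, keeping the placeholders from above (this is my scoped-down variant, not the original policy; note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID_TO_TRUST:root"
      },
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME_HERE",
        "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
      ]
    }
  ]
}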
Now, once that's done, great work - that's things on your end covered.
Step 2: The other accounts that want to read the data will have to assign an instance role to the EC2 instances they launch, and that role will need a policy attached to it granting access to your bucket. Those instances can then run AWS CLI commands on your bucket, provided your bucket policy authorises the call on your side and the instance policy authorises the call on their side.
The policy that needs to be attached to the instance role should look something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME_HERE/*"
    }
  ]
}
Keep in mind that just because this policy grants s3:*, it doesn't mean they can do anything on your bucket - not unless you also have s3:* in your bucket policy. The actions this policy permits will be limited to whatever you've scoped the permissions to in your bucket policy.
This is not feasible for me. Clients should not have to create IAM roles.
If they have an AWS account, it's up to them how they choose to access the bucket; as long as you define a bucket policy that trusts their account, the rest is on them. They can create an EC2 instance role and grant it permissions to your bucket, or an IAM User and grant it access to your bucket. It doesn't matter.
The best I came up with at the moment is allowing S3 bucket access to a specific AWS account and then working with access keys:
If the code will run on an EC2 instance, it's bad practice to use access keys; it should instead use an EC2 instance role.
Ideally, automatically via CloudFormation on instance startup.
I think you mean via instance userdata, which you can define through CloudFormation.
You say "Clients should not have to create IAM roles". This is perfectly correct.
I presume that you are creating the instances for use by the clients. If so, then you should create an IAM Role that has access to the desired bucket.
Then, when you create an Amazon EC2 instance for your clients, associate the IAM Role to the instance. Your clients will then be able to use the AWS Command-Line Interface (CLI) to access the S3 bucket (list, upload, download, or whatever permissions you put into the IAM Role).
If you want the data to be automatically downloaded when you first create their instance, you can add a User Data script that will execute when the instance starts. This can download the files from S3 to the instance; a rough sketch follows below.
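For example, assuming CloudFormation in JSON (the AMI ID, instance type, instance profile, and bucket name are placeholders of mine; the instance profile must grant read access to the bucket):
{
  "ClientInstance": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": "ami-12345678",
      "InstanceType": "t3.micro",
      "IamInstanceProfile": "my-s3-read-profile",
      "UserData": {
        "Fn::Base64": "#!/bin/bash\naws s3 cp s3://my-data-bucket /home/ec2-user/data --recursive"
      }
    }
  }
}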
I want to give my developer access to set up EC2 and MongoDB for my app. Basically, I am moving everything from his server to mine on AWS.
I created a Group called "Developer" and set the Policy to "PowerUserAccess".
I have added a User to this group, meaning he now has "PowerUserAccess" access.
My question:
Should I now specify more granular permissions for that specific user, as it seems to me that he has more access than he actually needs? If yes, how do I do so?
The PowerUserAccess policy basically grants full access to all AWS services except the management of IAM settings.
This subject is very well documented, with use case scenarios and best practices, in the AWS documentation.
Provide access only to EC2 Instances using "AmazonEC2FullAccess".
Launch an EC2 instance and provide the public IP and the key (or create a user) to the developer, so that he can configure the instance.
If the server is an EC2 Instance, you can ask him to create an AMI and share the AMI with your account.
Basically, I am moving everything from his server to mine on AWS.
Try to limit the permissions to only what is necessary for the developer to perform their function and meet your business requirements. You can do that by creating a Developer group with a customer managed policy that meets these conditions:
Restrict the group permissions to the region or regions you'd like your app to run in.
Create a VPC and restrict the group to only create EC2 instances in this VPC, so that a breach cannot affect or communicate with other instances.
For cost considerations, decide what type of instances are necessary for your business requirements and restrict the permissions to that.
Here's how you can apply this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": [
        "arn:aws:ec2:[region]:[account id]:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": "m1.small"
        },
        "ArnEquals": {
          "ec2:Vpc": "arn:aws:ec2:[region]:[account id]:vpc/[vpc id]"
        }
      }
    }
  ]
}
It is possible that you don't need to specify both the VPC condition and the EC2 instance resource to restrict this to a specific region; specifying the VPC condition might be enough. You could then simply replace the resource value with a wildcard, e.g. "Resource": "*".
I suggest you create different groups with different granular permissions.
For instance,
You would want to give some developers read access only, some developers read and write access, and some read, write & delete access.
Thus, you can create three groups:
DeveloperWithReadAccess,
DeveloperWithReadAndWriteAccess,
DeveloperWithReadWriteDeleteAccess.
This way, generic access can be given using AWS managed policies and customer managed policies. If you have any specific case, you can use an inline policy for that specific user.
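For instance, a hypothetical customer managed policy for the DeveloperWithReadAccess group might look like this (scoped to read-only EC2 describe actions; adjust to whichever services your developers use):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}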
The official whitepaper also suggests the same:
IAM groups are a powerful tool for managing access to AWS resources. Even if you only have one user who requires access to a specific resource, as a best practice, you should identify or create a new AWS group for that access, and provision user access via group membership, as well as permissions and policies assigned at the group level.
You can read the same at
https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf
I have read the AWS documentation and it wasn't helpful... at least not for me. I have read about IAM and user policies for EC2.
I want users to have full access (or just some allowed actions) to only ONE EC2 instance.
The region I'm using is eu-west-1 (Ireland). I made this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "arn:aws:ec2:eu-west-1:ACCOUNT_ID:instance/INSTANCE_ID"
    }
  ]
}
and when I log in as the user, I see that I'm not authorized:
You are not authorized to describe Running Instances
You are not authorized to describe Elastic IPs
You are not authorized to describe Volumes
You are not authorized to describe Snapshots
You are not authorized to describe Key Pairs
You are not authorized to describe Load Balancers
You are not authorized to describe Placement Groups
You are not authorized to describe Security Groups
If I apply the following policy for the resource attribute:
"Resource": "arn:aws:ec2:*"
it works, but it's not what I need, because users then have access to all EC2 instances.
I want to know whether this is a bug in AWS, whether there are problems with the eu-west-1 region, or whether this policy just isn't supported yet? Or maybe I'm doing something wrong; if so, please help me figure out how to do this.
The recently introduced Resource-Level Permissions for EC2 and RDS Resources are not yet available for all API actions, but AWS is gradually adding more; see this note from Amazon Resource Names for Amazon EC2:
Important: Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
You will find that all ec2:Describe* actions are indeed still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing.
See also Granting IAM Users Required Permissions for Amazon EC2 Resources for a concise summary of the above and details on the ARNs and Amazon EC2 condition keys that you can use in an IAM policy statement to grant users permission to create or modify particular Amazon EC2 resources. This page also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014.
Possible Workaround/Alternative
Instead of, or in addition to, constraining access on the individual resource level, you might want to look into (also) using Conditions combined with Policy Variables, insofar as ec2:Region is one of the supported Condition Keys for Amazon EC2. You might combine your policy with one that specifically handles the Describe* actions, e.g. something like this (untested):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "eu-west-1"
        }
      }
    }
  ]
}
Please note that this would still allow the user to see all instances in eu-west-1, even though your original policy fragment would prevent all API actions that already support resource-level permissions (e.g. instance creation/termination etc.).
I've outlined yet another possible approach in section Partial Workaround within my related answer to How to hide instances in EC2 based on tag - using IAM?.
Good Luck!