I followed a tutorial to create a new domain in the Elasticsearch Service. I created a policy as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"es:ESHttpDelete","es:ESHttpGet","es:ESHttpHead",
"es:ESHttpPost","es:ESHttpPut"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Then I created a role for a Lambda function to access the Elasticsearch service; later I plan to call Elasticsearch from the Lambda. Here is my role ARN:
arn:aws:iam::566879691663:role/myRole
Then, for the Elasticsearch domain, I chose "public access" for the network configuration. For the access policy, I selected "custom access policy" and added my role above. The access policy JSON looks like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::566879691663:role/myRole*"
]
},
"Action": [
"es:*"
],
"Resource": "arn:aws:es:us-east-1:566879691663:domain/mydomain/*"
}
]
}
Once the domain is up and running, when I click the generated Kibana URL, I get the following JSON response in the browser. How can I access it via the browser?
{"message": "User: anonymous is not authorized to perform this action..."}
Also, to access/upload programmatically using AWS4Auth, which requires an AWS access key and secret key, how do I generate those? Do I need to create a user and assign the above policy to that user?
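For context, this is roughly how I plan to call the domain once I have credentials, using the requests_aws4auth + elasticsearch-py pattern from the AWS docs (a minimal sketch; the endpoint and keys below are placeholders):
# Sketch only: sign requests to the Amazon ES domain with SigV4 credentials.
# Access key ID / secret access key for an IAM user are generated under
# IAM -> Users -> Security credentials -> Create access key.
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

ACCESS_KEY = "AKIA..."   # placeholder access key ID
SECRET_KEY = "..."       # placeholder secret access key
REGION = "us-east-1"
HOST = "search-mydomain-xxxxxxxx.us-east-1.es.amazonaws.com"  # placeholder endpoint

awsauth = AWS4Auth(ACCESS_KEY, SECRET_KEY, REGION, "es")

es = Elasticsearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

print(es.info())  # a signed GET / to verify the credentials and policy work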
Related
I will be using Cloudflare as a proxy for my S3 website bucket to make sure users can't directly access the website with the bucket URL.
I have an S3 bucket set up for static website hosting with my custom domain: www.mydomain.com and have uploaded my index.html file.
I have a CNAME record with www.mydomain.com -> www.mydomain.com.s3-website-us-west-1.amazonaws.com and Cloudflare Proxy enabled.
Issue: I am trying to apply a bucket policy that denies access to my website bucket unless the request originates from a range of Cloudflare IP addresses. I am following the official AWS docs to do this, but every time I try to access my website, I get a 403 Forbidden AccessDenied error.
This is my bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "CloudflareGetObject",
"Effect": "Deny",
"NotPrincipal": {
"AWS": [
"arn:aws:iam::ACCOUNT_ID:user/Administrator",
"arn:aws:iam::ACCOUNT_ID:root"
]
},
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::www.mydomain.com/*",
"arn:aws:s3:::www.mydomain.com"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"2c0f:f248::/32",
"2a06:98c0::/29",
"2803:f800::/32",
"2606:4700::/32",
"2405:b500::/32",
"2405:8100::/32",
"2400:cb00::/32",
"198.41.128.0/17",
"197.234.240.0/22",
"190.93.240.0/20",
"188.114.96.0/20",
"173.245.48.0/20",
"172.64.0.0/13",
"162.158.0.0/15",
"141.101.64.0/18",
"131.0.72.0/22",
"108.162.192.0/18",
"104.16.0.0/12",
"103.31.4.0/22",
"103.22.200.0/22",
"103.21.244.0/22"
]
}
}
}
]
}
By default, AWS denies all requests. Source
Your policy itself does not grant access to the Administrator [or any other user], it only omits him from the list of principals that are explicitly denied. To allow him access to the resource, another policy statement must explicitly allow access using "Effect": "Allow". Source
With that approach we would have to create two policy statements: one with Allow and one with Deny. Instead, it is simpler to have just one policy that allows access only from specific IPs.
It is better not to complicate simple things by using Deny with NotPrincipal and NotIpAddress. Even AWS says:
Very few scenarios require the use of NotPrincipal, and we recommend that you explore other authorization options before you decide to use NotPrincipal. Source
Now, the question is: how do we whitelist the Cloudflare IPs?
Let's go with a simple approach. Below is the policy; replace your bucket name and your Cloudflare IPs. I have tested it and it works.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudFlareIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:getObject",
"Resource": [
"arn:aws:s3:::my-poc-bucket",
"arn:aws:s3:::my-poc-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"IP1/32",
"IP2/32"
]
}
}
}
]
}
I am the root user of my account. I created a new user and am trying to give them access to S3 via an S3 bucket policy.
Here are my policy details:
{
"Id": "Policy1542998309644",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1542998308012",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::aws-bucket-demo-1",
"Principal": {
"AWS": [
"arn:aws:iam::213171387512:user/Dave"
]
}
}
]
}
In IAM I have not given the new user any access; I want to give him access to S3 via the S3 bucket policy alone. Actually, I would like to achieve this: https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-access-certain-bucket/ but not from IAM; I want to use only the S3 bucket policy.
Based on the following AWS blog post (the blog shows an IAM policy, but it can be adapted to a bucket policy):
How can I grant a user Amazon S3 console access to only a certain bucket?
you can make the following bucket policy:
{
"Id": "Policy1589632516440",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1589632482887",
"Action": [
"s3:ListBucket"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::aws-bucket-demo-1",
"Principal": {
"AWS": [
"arn:aws:iam::213171387512:user/Dave"
]
}
},
{
"Sid": "Stmt1589632515136",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::aws-bucket-demo-1/*",
"Principal": {
"AWS": [
"arn:aws:iam::213171387512:user/Dave"
]
}
}
]
}
This will require the user to go directly to the bucket URL:
https://s3.console.aws.amazon.com/s3/buckets/aws-bucket-demo-1/
The reason is that the user does not have permission to list all available buckets, so he/she has to go directly to the one you specify.
Obviously, the IAM user needs to have AWS Management Console access enabled when you create him/her in the IAM service. With programmatic access only, IAM users can't use the console, and no bucket policy can change that.
You will need to use ListBuckets (the s3:ListAllMyBuckets permission) if you want the bucket to show up in the console's bucket list.
It seems like you want this user to only be able to see your bucket but not access anything in it.
I am stuck provisioning end-user access to a cross-account shared bucket, and need help figuring out whether there are specific policy requirements for accessing the bucket through GUI clients vs. the straight CLI.
IAM User Accounts are managed in our "Core" AWS Account.
S3 Bucket is provisioned in our "Dev" AWS Account.
S3 Bucket in Dev account is encrypted with KMS key in Dev Account.
We have configured our Bucket Policy to permit the user access.
We have configured user policies to permit access to the S3 bucket.
We have configured user policies to permit use of the KMS key.
When using the CLI, our user account can successfully access and use the S3 bucket. When attempting to connect with a GUI client (WinSCP, Cyberduck, ForkLift on macOS) we receive permission-denied errors.
BUCKET POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
"arn:aws:iam::[COREACCOUNT#]:user/end.user"
]
},
"Action": "s3:List*",
"Resource": [
"arn:aws:s3:::dev-mybucket",
"arn:aws:s3:::dev-mybucket/*"
]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
"arn:aws:iam::[COREACCOUNT#]:user/end.user"
]
},
"Action": [
"s3:GetObject",
"s3:Put*"
],
"Resource": "arn:aws:s3:::dev-mybucket/*"
}
]
}
User Policy - access KMS
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUseOfDevAPPSKey",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:Describe*"
],
"Resource": [
"arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
]
},
{
"Sid": "AllowAttachmentOfPersistentResources",
"Effect": "Allow",
"Action": [
"kms:CreateGrant",
"kms:List*",
"kms:RevokeGrant"
],
"Resource": [
"arn:aws:kms:ca-central-1:[DEVACCOUNT#]:key/[redacted-key-number]"
],
"Condition": {
"Bool": {
"kms:GrantIsForAWSResource": true
}
}
}
]
}
User policy - Access S3 Bucket
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccessToMyBucket",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::dev-mybucket/",
"arn:aws:s3:::dev-mybucket/*"
]
}
]
}
With the aws s3 commands we can ls content and cp content from local to remote and from remote to local.
When configuring access with the GUI clients, we always receive somewhat generic 'permission denied' or 'access denied' type errors.
The GUI client is probably making a call that is not List*, Put* or GetObject.
For example, it might be calling GetObjectVersion, GetObjectAcl or GetBucketAcl.
Try adding Get* permissions in addition to List*.
You might also be able to look at the events in your AWS CloudTrail trail to see what specific API calls were denied.
For details, see: Specifying Permissions in a Policy - Amazon Simple Storage Service
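For example, per that suggestion, the first bucket-policy statement above could be broadened roughly like this (a sketch; everything else stays the same):
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::[DEVACCOUNT#]:role/EC2-ROLE-FOR-APP-ACCESS",
"arn:aws:iam::[COREACCOUNT#]:user/end.user"
]
},
"Action": [
"s3:List*",
"s3:Get*"
],
"Resource": [
"arn:aws:s3:::dev-mybucket",
"arn:aws:s3:::dev-mybucket/*"
]
}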
Access to an S3 bucket via a GUI such as the AWS web console or SFTP clients with S3 functionality (FileZilla, Cyberduck, ForkLift, etc.) requires the s3:ListAllMyBuckets action in a policy attached to that IAM user. This is very unfortunate, as the user will now be able to see ALL of your bucket names in that account, even if they only have read, write, and/or list access to a single bucket in that account.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations.html
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html
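A minimal sketch of such a statement; note that it has to live in an IAM policy attached to the user, since s3:ListAllMyBuckets can only be granted on all buckets ("Resource": "*"), not through a single bucket's policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingAllBuckets",
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "*"
}
]
}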
One other option is to go to the bucket URL directly. The user/role will require access via that bucket's Bucket policy.
https://s3.console.aws.amazon.com/s3/buckets/dev-mybucket
I have an Elasticsearch Service instance on AWS and an Elastic Beanstalk one.
I want to give read-only access to Beanstalk; however, Beanstalk doesn't have a static IP address by default, and from a bit of googling it seems like too much trouble to add one.
I therefore gave access to the AWS account, but that doesn't seem to work. I am still getting the error:
"User: anonymous is not authorized to perform: es:ESHttpPost
When I set it to public access everything works, so I am certain I am doing something wrong here:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::xxx:root"
},
"Action": "es:*",
"Resource": "arn:aws:es:eu-central-1:xxx:domain/xxx-elastic-search/*"
}
]
}
Use an identity-based policy such as this one instead of IP whitelists.
{
"Version": "2012-10-17",
"Statement": [
{
"Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*",
"Action": ["es:*"],
"Effect": "Allow"
}
]
}
Then attach it to the Elastic Beanstalk role. Read more here
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
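Note that once the domain policy is locked to an IAM principal, requests from the Beanstalk instances also need to be SigV4-signed with that role's credentials, otherwise they still show up as anonymous. A minimal sketch, assuming Python with boto3 and requests_aws4auth (the endpoint is a placeholder):
# Sketch: sign Elasticsearch requests with the instance role's temporary credentials.
import boto3
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

REGION = "eu-central-1"
HOST = "search-xxx-elastic-search-xxxxxxxx.eu-central-1.es.amazonaws.com"  # placeholder

credentials = boto3.Session().get_credentials()  # resolves the Beanstalk/EC2 role
awsauth = AWS4Auth(
    credentials.access_key,
    credentials.secret_key,
    REGION,
    "es",
    session_token=credentials.token,  # needed for temporary role credentials
)

es = Elasticsearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)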
I have an IAM Role for my Federated Identity Pool in Cognito. I want to give this role access to my Elasticsearch domain.
I added an inline policy to give read access to my Elasticsearch domain name using the new visual editor. I've attached this policy below.
I'm confused how to configure the access policy now for the Elasticsearch domain to give access to my IAM Role.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "es:ListTags",
"Resource": "arn:aws:es:us-west-2:ACCOUNT_ID:domain/DOMAIN_NAME"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "es:ESHttpPost",
"Resource": "*"
}
]
}
EDIT: I was still never able to figure this out. We also tried locking things down with a VPN, but then we were not able to access services like Kibana.
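For reference, a resource-based access policy on the domain that names the Cognito authenticated role as the principal would look roughly like this (a sketch; the role name is a placeholder):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::ACCOUNT_ID:role/COGNITO_AUTHENTICATED_ROLE"
},
"Action": "es:ESHttp*",
"Resource": "arn:aws:es:us-west-2:ACCOUNT_ID:domain/DOMAIN_NAME/*"
}
]
}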