I have an internet-facing Elasticsearch endpoint. I want to allow access to it only from within my two VPCs, and to be specific, only from my EC2 instances. My EC2 instances are in private subnets and reach the internet through a NAT Gateway. Here is the access policy I am trying, using my VPC CIDR blocks, but I am unable to access the endpoint from my EC2 instances:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:1XXXXXXXXXXX:domain/my-elasticsearch/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/24",
            "xx.xx.xx.xx/24"
          ]
        }
      }
    }
  ]
}
I have also tried something like the following to allow access only from the IAM role assigned to my EC2 instances, but that didn't work either:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::XXXXXXXXXXX:role/MyEC2Role"
        ]
      },
      "Action": [
        "es:*"
      ],
      "Resource": "arn:aws:es:us-east-1:XXXXXXXXXXX:domain/my-elasticsearch/*"
    }
  ]
}
What am I doing wrong? Or is there a better way to restrict access?
Since your AWS Elasticsearch cluster has a public endpoint, allowing the private IPs of EC2 instances in private subnets won't work: requests leave your VPC through the NAT Gateway, so the source IP the cluster sees is the NAT Gateway's public IP, not the instances' private addresses.
Try adding the public (Elastic) IP of the NAT Gateway to the access policy of your AWS ES cluster and see if that works.
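For illustration, a minimal sketch of such a policy, assuming hypothetical NAT Gateway Elastic IPs 203.0.113.10 and 203.0.113.20 (one per VPC; replace them with your own):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:1XXXXXXXXXXX:domain/my-elasticsearch/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "203.0.113.10/32",
            "203.0.113.20/32"
          ]
        }
      }
    }
  ]
}
Using /32 entries keeps the whitelist to exactly the NAT Gateways' addresses rather than a whole CIDR block.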
Also, if you are using IAM-based access policies, make sure all requests to AWS ES are signed (SigV4), as described here: https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-an-easy-way-to-send-aws-sigv4-signed-requests/
I have hosted a web application in an Amazon S3 bucket and we are trying to restrict access to the application to users on our VPN. So we added the policy below to allow access only when connected to the VPN. We use Terraform and Jenkins to build and deploy the application to the Amazon S3 bucket.
The first time we deploy the application, it deploys successfully and access is correctly restricted to VPN-connected users. The problem I am facing now is that when I try to deploy the application a second time, the deployment fails because of the access restriction (Forbidden access error): our Jenkins server is not on the VPN, so the terraform refresh fails with a 403. The code I have used is below.
"Sid": "VPNAccessIP",
"Action": "s3:GetObject",
"Effect": "Deny",
"Resource": [
"arn:aws:s3::: demo-dev",
"arn:aws:s3::: demo-dev/*"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"130.110.0.0/22"
]
}
},
"Principal": "*"
Is there any other approach that restricts access to VPN connections only while still allowing the application to be deployed from Jenkins?
You need to use a VPC Endpoint for S3 of type Gateway.
With the VPC Endpoint and the bucket policy below, access is only allowed from your on-premises IP range and from the VPC endpoint.
Gateway-type VPC endpoints are not charged, so this is a good way to reach S3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-from-specific-VPCE-or-IP-only",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::demo-dev",
        "arn:aws:s3:::demo-dev/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-abcde12345"
        },
        "NotIpAddress": {
          "aws:SourceIp": "130.110.0.0/22"
        }
      }
    }
  ]
}
I have a general EC2 IAM role that I use to join every new Windows EC2 instance to a domain when it is spun up. One of those instances needs the ability to read from SQS, and only that instance! I created a VPC endpoint for SQS and am now trying to limit access with a condition on aws:SourceArn (set to the ARN of the EC2 instance) or on aws:SourceIp (set to the private IP of the instance; I tried the public IP too, and that didn't work).
Here is what my SQS access policy looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "sqspolicySailpointDevDocument",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123123123123:role/apigateway_sqs"
      },
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-west-2:123123123123:SailpointSqsDev"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123123123123:role/terraform-AWS-EC2-Domain-Join"
      },
      "Action": "sqs:*",
      "Resource": "arn:aws:sqs:us-west-2:123123123123:SailpointSqsDev",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:ec2:us-west-2:123123123123:instance/i-075b02dfsdfdf435"
        }
      }
    }
  ]
}
Here is a second example of a condition I tried:
"Condition":{
"IpAddress":{
"aws:SourceIp":"10.2.32.34"
}
}
Third example: this one doesn't even pass validation, even though the key appears in the global condition key list. The error is:
InvalidParameterValue: Value aws:VpcSourceIp for parameter Condition is invalid. Reason: Conditions must be from Global context key list https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html.
"Condition": {
"IpAddress": {
"aws:VpcSourceIp": "10.2.0.0/16"
}
}
I can't use ec2:SourceInstanceARN because the policy allows only global condition keys.
Worst case, the VPC endpoint has a security group and I could limit access there, but that is far from an ideal solution...
aws:VpcSourceIp is not available for SQS, so in order to achieve what I want, one needs a VPC endpoint (vpce) for SQS in the VPC and then has to limit access in the security group of that endpoint.
I have a simple website hosted on AWS S3, and currently it is accessible by anyone.
But I need to restrict the access.
I have an AWS Client VPN endpoint set up with the CIDR block 10.3.0.0/22.
Is it possible to give access ONLY to users who are connected to the VPN and to block everything else?
We can restrict S3 access to a certain IP range. Here is an example.
Requests to S3 are allowed only when the source IP falls within 10.3.0.0/22:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-static-assets-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.3.0.0/22"
          ]
        }
      }
    }
  ]
}
I have an Elasticsearch Service instance on AWS and an Elastic Beanstalk application.
I want to give read-only access to Beanstalk; however, Beanstalk doesn't have a static IP address by default, and from a bit of googling it is too much trouble to add one.
I therefore gave access to the AWS account, but that doesn't seem to work. I am still getting the error:
"User: anonymous is not authorized to perform: es:ESHttpPost"
When I set the domain to public access everything works, so I am certain I am doing something wrong here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxx:root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:xxx:domain/xxx-elastic-search/*"
    }
  ]
}
Use an identity-based policy such as the one below instead of IP whitelists.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*",
      "Action": ["es:*"],
      "Effect": "Allow"
    }
  ]
}
Then attach it to the Elastic Beanstalk role, and make sure the application signs its requests with SigV4 (unsigned requests are what show up as "User: anonymous"). Read more here:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
I want to be able to publish test reports to S3 and have them accessible at the URL sent at the end of the Drone build.
Is it possible to have the S3 static site not viewable by everyone, so it is only accessible to people who can already access resources in the VPC using a VPN?
I read that the content must have public read access, so I am checking whether that is avoidable.
Yes:
Set up the static website as normal,
Add a VPC endpoint for S3,
Use a bucket policy to deny all but traffic from your VPC (a sketch is below).
Here is a good article describing it in more detail: https://blog.monsterxx03.com/2017/08/19/build-private-staticwebsite-on-s3/
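For illustration, a minimal sketch of such a bucket policy using the aws:SourceVpc condition key, assuming a hypothetical VPC ID vpc-1a2b3c4d and bucket name demo-site (replace with your own; aws:sourceVpce with the endpoint ID works similarly):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptFromMyVpc",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::demo-site/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpc": "vpc-1a2b3c4d"
        }
      }
    }
  ]
}
Note that a blanket deny like this also applies to your own IAM users and pipelines outside the VPC, so console or CI access from outside will be blocked unless you add an exception for them.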
The other option is to write an S3 bucket policy like the one below, where x.x.x.x/x is the CIDR of the VPC:
{
  "Id": "Policy1564215115240",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1564215036691",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::<s3 bucket name>",
        "arn:aws:s3:::<s3 bucket name>/*"
      ],
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "x.x.x.x/x"
        }
      },
      "Principal": "*"
    }
  ]
}