Restricting Amazon S3 access to VPN connections only

I have hosted a web application in an Amazon S3 bucket, and we are trying to restrict access to the application to users on our VPN. So we added the policy below to allow access only from VPN-connected clients. We use Terraform and Jenkins to build and deploy the application to the S3 bucket.
On the first deployment the application deploys successfully, and access is correctly restricted to VPN-connected users. The problem I am now facing is that the second deployment fails with a Forbidden (403) access error: our Jenkins server is not on the VPN, so the terraform refresh step is rejected by the bucket policy. The code I have used is below.
"Sid": "VPNAccessIP",
"Action": "s3:GetObject",
"Effect": "Deny",
"Resource": [
"arn:aws:s3::: demo-dev",
"arn:aws:s3::: demo-dev/*"
],
"Condition": {
"NotIpAddress": {
"aws:SourceIp": [
"130.110.0.0/22"
]
}
},
"Principal": "*"
Is there another approach that restricts access to VPN connections while still allowing Jenkins to deploy the application?

You need to use a VPC endpoint for S3 of type Gateway.
With the bucket policy below, access is allowed only from your on-premises IP range and from the VPC endpoint.
Gateway-type VPC endpoints incur no charge, so this is a cost-effective way to reach S3.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Access-from-specific-VPCE-or-IP-only",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::demo-dev",
        "arn:aws:s3:::demo-dev/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-abcde12345"
        },
        "NotIpAddress": {
          "aws:SourceIp": "130.110.0.0/22"
        }
      }
    }
  ]
}
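If the Jenkins server sits outside both the VPN range and the VPC, a further variant (a sketch, not part of the answer above; the role ARN is a hypothetical placeholder) is to also exempt the deployment role from the Deny via the aws:PrincipalArn global condition key:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Deny-unless-VPCE-IP-or-deploy-role",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::demo-dev",
        "arn:aws:s3:::demo-dev/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-abcde12345"
        },
        "NotIpAddress": {
          "aws:SourceIp": "130.110.0.0/22"
        },
        "ArnNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/jenkins-deploy"
        }
      }
    }
  ]
}
All condition blocks within a single statement are ANDed, so the Deny fires only when a request comes from outside the VPC endpoint, from outside the IP range, and from a principal other than the deployment role; a terraform refresh run under the Jenkins role would then no longer be blocked.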

Related

How to allow only a specific OpenID Connect provider in AWS with an AWS SCP?

I'm trying to limit the ability to add new identity providers to an AWS account. I deploy my app via Bitbucket Pipelines and use OpenID Connect to secure the deployments.
I have created an SCP to deny creation/deletion of IAM users and the addition/deletion of providers. In this SCP I want to make an exception: if the IdP URL matches a specific one, creating or deleting that provider should be allowed in all accounts.
The thing is, I don't understand why my condition is not working. Any hints?
Thanks!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "iam:CreateGroup",
        "iam:CreateLoginProfile",
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateSAMLProvider",
        "iam:CreateUser",
        "iam:DeleteAccountPasswordPolicy",
        "iam:DeleteSAMLProvider",
        "iam:UpdateSAMLProvider"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotEquals": {
          "iam:OpenIDConnectProviderUrl": [
            "https://api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
          ]
        }
      }
    }
  ]
}
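One workaround to consider (a sketch, not a confirmed fix, and it assumes SCP Deny statements accept NotResource the way IAM policies do): instead of a provider-URL condition key, pin the exception to the provider's resource ARN, which for OIDC providers normally takes the form arn:aws:iam::<account>:oidc-provider/<url-without-scheme>, and split the SCP so the blanket deny on user/group actions stays intact:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUserAndGroupChanges",
      "Effect": "Deny",
      "Action": [
        "iam:CreateGroup",
        "iam:CreateLoginProfile",
        "iam:CreateUser",
        "iam:DeleteAccountPasswordPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyProvidersExceptBitbucket",
      "Effect": "Deny",
      "Action": [
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateSAMLProvider",
        "iam:DeleteSAMLProvider",
        "iam:UpdateSAMLProvider"
      ],
      "NotResource": "arn:aws:iam::*:oidc-provider/api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
    }
  ]
}
Since SAML provider ARNs use the saml-provider/ prefix, they never match the NotResource entry and so remain denied; only the named Bitbucket OIDC provider is exempt.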

How to restrict access to an S3-hosted website using a VPN

I have a simple website hosted on AWS S3, and currently it is accessible by anyone.
But I need to restrict access.
I have an AWS Client VPN endpoint set up with the CIDR block 10.3.0.0/22.
Is it possible to grant access ONLY to clients connected to the VPN and deny everything else?
We can restrict S3 access to a certain IP range. Here is an example:
requests to S3 are allowed only when the source IP falls within 10.3.0.0/22.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-static-assets-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "10.3.0.0/22"
          ]
        }
      }
    }
  ]
}
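An Allow like the above only helps if nothing else grants wider access (for example another statement or the bucket's public-access settings). A stricter variant, sketched here along the same lines as the first question above, is an explicit Deny for every request outside the VPN CIDR:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideVpnRange",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-static-assets-bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "10.3.0.0/22"
        }
      }
    }
  ]
}
An explicit Deny overrides any Allow. Note that aws:SourceIp matches the address S3 actually sees, so if the VPN traffic is NAT'd to a public IP before it reaches S3, the condition should reference that public range instead.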

Is the aws:SourceVpc condition key present in the request context when interacting with S3 via the web console?

I have a bucket policy (listed below) that is supposed to prevent access to an S3 bucket from anywhere other than a specific VPC. I launched an EC2 instance in the VPC, tested, and confirmed that S3 access works fine. Now, when I access the same S3 bucket via the web console, I get an 'Error - Access Denied' message.
Does this mean that the aws:SourceVpc condition key is present in the request context when interacting with S3 via the web console as well?
My assumption is that it is present in the request context, since otherwise the statement would have failed to match and its "Effect" would not apply, given that there is no "IfExists" suffix on StringNotEquals. I'm asking here because I could not find this information in the AWS documentation. Even after changing StringNotEquals to StringNotEqualsIfExists, the results are the same. Can someone confirm?
{
  "Version": "2012-10-17",
  "Id": "Policy1589385141624",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::abhxy12bst3",
        "arn:aws:s3:::abhxy12bst3/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpc": "vpc-0xy915sdfedb5667"
        }
      }
    }
  ]
}
Yes, you are right. I tested the following bucket policy, and operations from the AWS S3 console are denied.
{
  "Version": "2012-10-17",
  "Id": "Policy1589385141624",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPC-only",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::abhxy12bst3",
        "arn:aws:s3:::abhxy12bst3/*"
      ],
      "Condition": {
        "StringLike": {
          "aws:sourceVpc": "vpc-30*"
        }
      }
    }
  ]
}
This means there is definitely some VPC ID present in the request. It might be the same for every account, or it could differ.
The condition will apply to all requests interacting with S3; the console just provides a GUI on top of the AWS API.

S3 bucket policy is not allowing Athena to perform query execution

I am performing Amazon Athena queries on an S3 bucket; let's call it athena-bucket. Today I got a requirement to restrict access to this bucket to VPC endpoints, so I have tried this S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCe and SourceIP",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::**********:user/user_admin",
          "arn:aws:iam::**********:root"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::athena-bucket",
        "arn:aws:s3:::athena-bucket/abc/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": [
            "vpce-XXXXxxxxe",
            "vpce-xxxxxxxxxx",
            "vpce-XXXXXXXXXXXXXX"
          ]
        },
        "NotIpAddress": {
          "aws:SourceIp": [
            "publicip/32",
            "publicip2/32"
          ]
        }
      }
    }
  ]
}
Please note that Athena has full permission to access the above bucket. I want the S3 bucket policy to restrict access to only certain IP addresses and VPC endpoints.
However, I am getting an access denied error even though the request is routed through a VPC endpoint mentioned in the policy.
Amazon Athena is an Internet-based service. It accesses Amazon S3 directly and does not connect via an Amazon VPC.
If you restrict the bucket to only be accessible via a VPC Endpoint, Amazon Athena will not be able to access it.
There is actually a way to get what you are asking for. The following policy condition allows actions from all of your VPC endpoints as well as from Athena:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VPCe and SourceIP",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::**********:user/user_admin",
          "arn:aws:iam::**********:root"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::athena-bucket",
        "arn:aws:s3:::athena-bucket/abc/*"
      ],
      "Condition": {
        "ForAllValues:StringNotEquals": {
          "aws:sourceVpce": [
            "vpce-XXXXxxxxe",
            "vpce-xxxxxxxxxx",
            "vpce-XXXXXXXXXXXXXX"
          ],
          "aws:CalledVia": [ "athena.amazonaws.com" ]
        }
      }
    }
  ]
}
The "ForAllValues" portion of the condition is what turns this AND condition into an OR.
Not sure how your IP restrictions would play with this, since you cannot tell which IPs Athena would be coming from.

Permissions to access ElasticSearch from Lambda?

I'm trying to use Elasticsearch as data storage for a Lambda function connected to the Alexa Skills Kit. The Lambda works fine without Elasticsearch, but ES provides much-needed fuzzy matching.
The only way I've been able to access it from Lambda is by enabling global access to Elasticsearch, but that's a really bad idea. I've also been able to access it from my computer via an open access policy or an IP-address policy. Is there a way to allow read-only access from Lambda and read-write access by IP?
In IAM I granted my Lambda role AmazonESReadOnlyAccess. On the ES side I tried this, but it only worked for the IP address:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::NUMBER:root",
          "arn:aws:iam::NUMBER:role/lambda_basic_execution"
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    },
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "MY IP"
        }
      }
    }
  ]
}
This forum post asks the same question but went unanswered.
The only way I know of to do this is to use a resource-based policy or an IAM-based policy on your ES domain. This would restrict access to a particular IAM user or role. However, to make this work you also need to sign your requests to ES using SigV4.
There are libraries that will do this signing for you, for example this one extends the popular Python requests library to sign ElasticSearch requests via SigV4. I believe similar libraries exist for other languages.
Now it's possible from your code with elasticsearch.js. Before you try it, you must install the http-aws-es module.
const AWS = require('aws-sdk');
const httpAwsEs = require('http-aws-es');
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'YOUR_ES_HOST',
  connectionClass: httpAwsEs,
  amazonES: {
    region: 'YOUR_ES_REGION',
    credentials: new AWS.EnvironmentCredentials('AWS')
  }
});

// client.search({...})
Of course, before using it, configure access to the Elasticsearch domain.
For external (outside AWS) access to your Elasticsearch cluster, you want to create the cluster with an IP-based access policy, something like the below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "<<IP/CIDR>>"
          ]
        }
      },
      "Resource": "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
    }
  ]
}
For your Lambda function, create the role that the Lambda function will assume with the below policy snippet.
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
},
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:ESHttpGet"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_all/_settings",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_cluster/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_mapping/<<TYPE>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_nodes/*/stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/_stats",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/<<INDEX>>*/_stats"
  ]
}
I think you could more easily condense the above two policy statements into the following:
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "es:DescribeElasticsearchDomain",
    "es:DescribeElasticsearchDomains",
    "es:DescribeElasticsearchDomainConfig",
    "es:ESHttpPost",
    "es:ESHttpGet",
    "es:ESHttpPut"
  ],
  "Resource": [
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>",
    "arn:aws:es:<<REGION>>:<<ACCOUNTID>>:domain/<<DOMAIN_NAME>>/*"
  ]
}
I managed to piece the above together from the following sources:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
How to access Kibana from Amazon elasticsearch service?
https://forums.aws.amazon.com/thread.jspa?threadID=217149
You need to go to the access policy of the Elasticsearch domain and provide the ARN of the Lambda role to allow it to connect:
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html#es-aws-integrations-s3-lambda-es-authorizations
AWS Lambda runs on public EC2 instances, so simply adding a whitelist of IP addresses to the Elasticsearch access policy will not work. One way to do this is to give the Lambda execution role appropriate permissions on the Elasticsearch domain: make sure the Lambda execution role has permissions to the ES domain, and that the ES domain access policy has a statement that allows this Lambda role's ARN to perform the appropriate actions. Once this is done, all you have to do is sign your requests via SigV4 when accessing the ES endpoint. Hope that helps!
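As a sketch of that last point, reusing the NUMBER and NAME placeholders from the question above, the ES domain access policy could contain a statement along these lines:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::NUMBER:role/lambda_basic_execution"
      },
      "Action": [
        "es:ESHttpGet",
        "es:ESHttpPost"
      ],
      "Resource": "arn:aws:es:us-east-1:NUMBER:domain/NAME/*"
    }
  ]
}
With this in place, the Lambda's SigV4-signed requests are authorized by the domain policy, while unsigned traffic is still rejected.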