Is it possible to use AWS Athena through a VPC endpoint? - amazon-web-services

I would like to know if it is possible to create a VPC endpoint for AWS Athena and restrict it so that only certain users (which must be in my account) can use the VPC endpoint. I currently use the following VPC endpoint policy for an S3 endpoint, and I would need something similar to use with AWS Athena.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user1",
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user2",
          ...
        ]
      },
      "Action": "*",
      "Resource": "*"
    }
  ]
}
The problem I'm trying to solve is to block developers in my company, who are logged in to an RDP session inside the company VPN, from offloading data to a personal AWS account. So I need a solution that blocks access to the public internet; I think a VPC endpoint is the only option, but I'm open to other ideas.

Yes you can, check out this doc:
https://docs.aws.amazon.com/athena/latest/ug/interface-vpc-endpoint.html
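Once the interface endpoint exists, you can attach an endpoint policy to it that mirrors your S3 one. A minimal sketch, reusing the placeholder user ARNs from your S3 policy (adjust the actions and resources to your needs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user1",
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user2"
        ]
      },
      "Action": "athena:*",
      "Resource": "*"
    }
  ]
}
Requests arriving at the endpoint from principals not listed in the Principal element are then rejected at the endpoint itself.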
Also, keep in mind to adopt encryption at rest and in transit when querying data via Athena: by default, query results are written unencrypted, even if the source data is stored in an encrypted S3 bucket.

Related

AWS Glue IAM role can't connect to AWS OpenSearch

I have a Glue job that pushes data into AWS OpenSearch. Everything works perfectly when I have an "open" access policy on OpenSearch, for example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:<region>:<accountId>:domain/<domain>/*"
    }
  ]
}
This works without issue. The problem is that I want to lock my OpenSearch domain down to only the role running the Glue job.
I attempted to do that, starting with a basic policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
        ]
      },
      "Action": [
        "*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
This disables all other access to OpenSearch, which I want, but it also blocks Glue, even though the job is running with the AWSGluePowerUser role set.
An error occurred while calling o805.pyWriteDynamicFrame. Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
I assume this is because the Glue job can no longer see the OpenSearch cluster. Keep in mind that everything works when using the "default" access policy for OpenSearch.
I have my Glue job configured to use the IAM role AWSGluePowerUser, which also has the AmazonOpenSearchServiceFullAccess policy attached.
I'm not sure where I've gone wrong here.
Edit: Here is where/how I've set the role for the Glue job; I assume this is all I needed to do?
(screenshot: Glue job details showing the IAM role)
I believe this is not possible, because the AWS Glue Elasticsearch connector is based on an open-source Elasticsearch Spark library that does not sign requests using AWS Signature Version 4, which is required for enforcing domain access policies.
If you take a look at the key concepts for fine-grained access control in OpenSearch, you'll see:
If you choose IAM for your master user, all requests to the cluster must be signed using AWS Signature Version 4.
If you visit the Elasticsearch Connector for AWS Glue AWS Marketplace page, you'll notice that the connector itself is based on an open-source implementation:
For more details about this open-source Elasticsearch spark connector, please refer to this open-source connector online reference
Under the hood, AWS Glue is using this library to index data from Spark dataframes to the Elasticsearch endpoint. Since this open-source library (maintained by the Elasticsearch community) does not support signing requests using AWS Signature Version 4, it will only work with the "open permission" you've referenced. This is hinted at in the big picture on fine-grained access control:
In general, if you enable fine-grained access control, we recommend using a domain access policy that doesn't require signed requests.
Note that you can always fall back to using a master user based on username/password:
1. Create a master user (username/password) for the OpenSearch domain's fine-grained access control configuration.
2. Store the username/password in an AWS Secrets Manager secret as described here (a sketch of the secret's shape follows below).
3. Attach the secret to the AWS Glue connector as described here.
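As a rough illustration of step 2, the secret would hold the basic-auth credentials the connector passes through to the cluster. The key names below follow the open-source connector's es.net.http.auth.* settings and are an assumption on my part; check the connector's documentation for the exact names it expects:
{
  "es.net.http.auth.user": "my-master-user",
  "es.net.http.auth.pass": "my-master-password"
}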
Hope this helps!
I usually take a "deny everyone except" approach in these situations:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "es:*",
      "Resource": [
        "*"
      ],
      "Condition": {
        "ArnNotLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::<accountId>:role/AWSGluePowerUser"
          ]
        }
      }
    }
  ]
}
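A useful property of this pattern: for assumed-role sessions, the aws:PrincipalArn key resolves to the underlying role's ARN, so the Glue job's session still matches the exception while every other principal is denied.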

AWS Elastic Search With Kibana - Authentication through IP-based policies or resource-based policies DOES NOT WORK AT ALL

In my serverless.yaml file I create my Elasticsearch domain and restrict access to it and to Kibana. However, through AWS resource-based policies or AWS IP-based policies I am not able to access Kibana.
The restriction was done following the AWS tutorial source.
However, it did not work, and I got this error when I tried to access Kibana: {"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"}
So it seems that Kibana now requires a user. Is the only way now to use AWS Cognito?
Thank you very much in advance!
Cheers,
Marcelo
You need to modify your access policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:ap-south-1:****:domain/es-cert/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "88.8.888.8"
          ]
        }
      }
    }
  ]
}
In the "aws:SourceIp" section, you need to add your public IP address.
I would question how you created your Elasticsearch domain. Did you create it with VPC access or public (HTTP) access? I did just this the other day, and when trying to implement my IP access policy, nothing worked. I finally found the article below, recreated my domain with public access, applied an access policy similar to what you have above, and it worked.
“If you enabled VPC access, you can't use IP-based policies. Instead, you can use security groups to control which IP addresses can access the domain.”
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/createupdatedomains.html

How can I deny public access to an AWS API gateway?

Similar to this question, I would like to deny public access to an AWS API Gateway and only allow access when the API is invoked via a specific user account. I have applied the following resource policy to the gateway:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::123456789012:root",
          "arn:aws:iam::123456789012:user/apitestuser"
        ]
      },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcd123456/*"
    }
  ]
}
But when I run
curl -X GET https://abcd123456.execute-api.us-east-1.amazonaws.com/dev/products
I still receive a success response with data:
[{"id":1,"name":"Product 1"},{"id":2,"name":"Product 2"}]
I am expecting to receive a 4XX response instead.
How can I change the policy to deny public access to the gateway? Or, is it not possible to deny public access without using a VPC? Ideally I wish to avoid using a VPC as using a NAT gateway in multiple regions will be costly. I also want to avoid building in any authentication mechanism as authentication and authorization take place in other API gateways which proxy to this gateway.
Based on the comments, the issue was that the stage was not re-deployed after adding/changing the policy.
So the solution was to re-deploy the stage for the policy to take effect.
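For reference, a stage can be re-deployed from the console or the CLI; a quick sketch using the API ID and stage name from the question:
aws apigateway create-deployment --rest-api-id abcd123456 --stage-name dev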

How should I write an IAM policy so that only a certain VPC can send mail via SES?

I use Redash on an EC2 instance, and I have to send invitation mails via Amazon SES.
I'd like to add a setting that restricts mail sending to inside the VPC where the Redash instance is located.
Here's my IAM policy for SES:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ses:SendRawEmail",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-******"
        },
        "ForAnyValue:StringLike": {
          "ses:Recipients": "*@mycompany.com"
        }
      }
    }
  ]
}
But I can't send any mail. I think it's because I use a VPC endpoint condition in the policy above, and VPC endpoints are not available for SES yet.
Is there any other way to specify a certain VPC?
This is an interesting challenge!
The Amazon SES endpoint is on the internet, so theoretically anything can access it. Normally, policy restrictions use permissions on the IAM user or role that calls SES to determine whether calls are allowed, rather than where the call is made from.
I assume you are doing this because you only want your Production system to send emails, rather than Dev/Test systems.
However, it is not possible to restrict by VPC, because Amazon SES has no visibility into the concept of a VPC. It simply receives API calls via the internet.
If your application is in a private subnet and sends requests via NAT Gateway, then you could add a policy to restrict based upon the IP address of the NAT Gateway.
You could do this either by putting the restriction on the Allow statement, or by adding a Deny statement.
See: AWS: Denies Access to AWS Based on the Source IP - AWS Identity and Access Management
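For example, the Deny variant could look like this; a minimal sketch in which 203.0.113.10 is a placeholder for your NAT Gateway's Elastic IP:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ses:SendRawEmail",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "203.0.113.10/32"
        }
      }
    }
  ]
}
Attached alongside your Allow statement, this denies any SendRawEmail call that does not originate from that address.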

Auth0 and S3 Buckets

I'm using Auth0 and the AWS SDK to access some buckets on S3. Is there any way to restrict access to S3 buckets without the use of bucket policies? Maybe using metadata provided by Auth0.
Thanks to all.
Maybe you're looking for something like this to restrict users' access to their own resources: https://github.com/auth0/auth0-s3-sample
IAM Policy for user buckets (restricted to these users only)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEverythingOnSpecificUserPath",
      "Effect": "Allow",
      "Action": [
        "*"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET/dropboxclone/${saml:sub}",
        "arn:aws:s3:::YOUR_BUCKET/dropboxclone/${saml:sub}/*"
      ]
    },
    {
      "Sid": "AllowListBucketIfSpecificPrefixIsIncludedInRequest",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET"
      ],
      "Condition": {
        "StringEquals": {
          "s3:prefix": [
            "dropboxclone/${saml:sub}"
          ]
        }
      }
    }
  ]
}
But if the users have shareable group folders, it may become trickier; I'm looking into that myself at the moment.
Check out this PDF: https://www.pingidentity.com/content/dam/pic/downloads/resources/ebooks/en/amazon-web-services-ebook.pdf?id=b6322a80-f285-11e3-ac10-0800200c9a66 (page 11, LEVERAGE OPENID CONNECT FOR AWS APIS); the use case is similar.
So, an option could be to do the following:
[USER] -> [Auth0] -> [AWS (Federation/SAML)] -> [exchange temporary AWS credentials] -> [use temp. credentials to access S3]
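For the federation step, the IAM role your Auth0 users assume needs a trust policy that allows sts:AssumeRoleWithSAML. A minimal sketch, assuming a SAML identity provider registered in your account under the placeholder name auth0-provider:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<accountId>:saml-provider/auth0-provider"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}
The ${saml:sub} variables in the IAM policy above are then resolved from the SAML assertion when the temporary credentials are issued.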
Hope this helps! Even though you asked this question quite a long time ago, other users may benefit. If you've found a better solution, please share it.
To restrict access to S3 buckets you must use Amazon IAM.
When using Auth0, you're basically exchanging your Auth0 token for an Amazon token. Then, with that Amazon token, you're calling S3. That means that in order to restrict access to certain parts of S3, you're going to have to change the permissions on Amazon's token, which means you'll need to play with IAM.
Makes sense?
Cheers!