We have a Lambda function on AWS that is exposed via API Gateway.
On that API, we have a resource policy that restricts traffic so that only IP addresses within our firm can access the endpoint.
For this, we use the standard IP range blacklist template provided by AWS on the API Gateway resource policy page, modified to use NotIpAddress instead of IpAddress. For example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/stage/*/getInfo",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["192.168.1.1", "192.168.1.2"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/stage/*/getInfo"
    }
  ]
}
We now have a requirement to develop another Lambda that makes an HTTP call to this API Gateway endpoint to gather some information before performing further logic. We want to reuse the existing Lambda because it performs some complex logic.
However, when the new Lambda issues an HTTP GET against the existing Lambda's API Gateway endpoint, the request is denied by the Deny rule in the resource policy.
Is it possible to have an IP address restriction and still allow invocations from all Lambdas in our AWS account?
If the Lambda runs inside your VPC in a private subnet, its outbound traffic will leave through a NAT Gateway or NAT instance, so its source IP address(es) will be the NAT's Elastic IP, which you can add to your allowed list.
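As a sketch, assuming the NAT Gateway's Elastic IP were 203.0.113.10 (a placeholder value), the Deny condition would simply include it alongside the firm's addresses:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/stage/*/getInfo",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["192.168.1.1", "192.168.1.2", "203.0.113.10/32"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/stage/*/getInfo"
    }
  ]
}

Any Lambda routed through that NAT then matches the allowed list, while the rest of the internet is still denied.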
Is it possible to allow pulling from, but not pushing to, the Docker API VPC endpoint (com.amazonaws.<region>.ecr.dkr) in its attached policy?
I can't find a reference for any supported actions other than "*". Is there a way to specify pull-only, or something via a condition?
Yes, you can achieve this with a VPC endpoint policy.
Here's an example from the documentation. This policy enables a specific IAM role to pull images from Amazon ECR:
{
  "Statement": [
    {
      "Sid": "AllowPull",
      "Principal": {
        "AWS": "arn:aws:iam::1234567890:role/role_name"
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetAuthorizationToken"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Also, in the AWS Console, add the security groups that your instances use (possibly all relevant security groups) to the VPC endpoint.
Is there any way an application can post messages to an SQS queue by having its machine's IP address whitelisted?
I took a look at https://aws.amazon.com/premiumsupport/knowledge-center/iam-restrict-calls-ip-addresses/ but this is for a role, and I'd still need an AWS user to do this.
Is there any way to publish to an SQS queue from just an IP address, without needing an AWS user at all?
There isn't a direct way - you still need AWS credentials.
But another approach is to put an AWS API Gateway in front of a Lambda that sends the SQS message. The API Gateway can be restricted to a small set of IPs (or a single one). This link goes into the details, but the key is having a resource policy on your API Gateway:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*/*/*"
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "execute-api:/*/*/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": ["1.2.3.4"]
        }
      }
    }
  ]
}
where 1.2.3.4 is the IP address you want to allow in. The API Gateway call then invokes your Lambda. Internally you're still using IAM roles, but external to AWS you're not.
If your application runs in a VPC, you can use AWS PrivateLink to connect to your SQS queue privately so that traffic does not traverse the internet; this normally removes the need to whitelist an IP address for internal private traffic. See SQS and AWS PrivateLink.
I found out that for SQS (not SNS) I can do this by whitelisting the IP of the sender. Then I can just do a POST and it will work.
I have an S3 bucket that acts as a static website and I am using API Gateway to distribute traffic to it. I understand CloudFront is a better option here, but please do not suggest it. It is not an option, due to reasons I won't go into.
I am accomplishing my solution by configuring a {proxy+} resource. Image below:
I would like to only allow access to the S3 website from the API Gateway proxy resource. Is there a way I can provide an execution role to the proxy resource, similar to how you can provide an execution role to a resource that runs a Lambda function? Lambda execution role example below:
The integration request portion of the proxy resource doesn't seem to have an execution role:
Or is there a way I can assign a role to the entire API Gateway to provide it the right to access the S3 bucket?
Other things I have tried:
Editing the bucket policy to only allow traffic from the API gateway service:
{
  "Version": "2012-10-17",
  "Id": "apiGatewayOnly",
  "Statement": [
    {
      "Sid": "apiGW",
      "Effect": "Allow",
      "Principal": {
        "Service": ["api-gateway-amazonaws.com"]
      },
      "Action": "s3:GetObject",
      "Resource": "http://test-proxy-bucket-01.s3-website.us-east-2.amazonaws.com/*"
    }
  ]
}
Editing the bucket policy to only allow traffic from API Gateway's URL:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "",
      "Action": "s3:GetObject",
      "Resource": "http://test-proxy-bucket-01.s3-website.us-east-2.amazonaws.com/",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://xxxxxxx.execute-api.us-east-2.amazonaws.com/prod/",
            "http://xxxxxxxx.execute-api.us-east-2.amazonaws.com/prod"
          ]
        }
      }
    }
  ]
}
1. Create a private S3 bucket.
2. Create an IAM role that can access the bucket. Set the trusted entity/principal who can assume this role to apigateway.amazonaws.com.
3. Use the AWS service integration type and select S3. Set the execution role to the role created in step 2.
Refer to the docs for more details.
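A minimal sketch of the trust policy for step 2, which lets API Gateway assume the role (the role's permission policy would then grant s3:GetObject on the bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

With this in place, the integration request on the proxy resource uses the role's ARN as its execution role, so the bucket itself can stay private.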
I have a private API in Amazon API Gateway that I want to consume from another account, via a Lambda with VPC support. I modified the API's resource policy to allow private API traffic based on the source VPC, as specified here in the last example. This is what my resource policy looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:my-region:my-account:api-id/*",
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "my-vpce"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:my-region:my-account:api-id/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "my-vpc-from-another-account"
        }
      }
    }
  ]
}
Now, when I try to consume the API using the https://my-api-id.execute-api.us-west-2.amazonaws.com/my-stage/ endpoint, I get a getaddrinfo ENOTFOUND error. Is this the appropriate way to expose a private API so it is accessible from a VPC in another account?
I asked the folks at AWS, and the answer was that you can specify the source VPC, but only if it's in the same account.
aws:SourceVpc and aws:VpcSourceIp correspond to the VPC in which the VPC Endpoint resides, not, as "source" would suggest, the VPC from which the request originates.
At least, I can confirm that's true when the traffic is routed over Transit Gateway, I haven't tested this with VPC Peering.
When your VPC Endpoint resides in a different VPC than the VPC the request is coming from, you cannot use aws:SourceVpc or aws:VpcSourceIp to restrict access based on the request origin.
If you have a requirement to restrict access to only allow requests that originate from a particular VPC, there's really only one solid option, and that's to create a VPC Endpoint in the request origin VPC, and use aws:SourceVpc in the resource policy.
I have confirmed this with AWS Support, and have passed on feedback that the documentation is in need of some improvement on this point.
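As a sketch of that one solid option: once a VPC endpoint exists in the request origin VPC, the resource policy can key on that VPC directly (vpc-0origin123 below is a placeholder for the origin VPC's ID):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:my-region:my-account:api-id/*",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpc": "vpc-0origin123"
        }
      }
    }
  ]
}

The condition matches because the endpoint the request enters through now resides in the origin VPC, which is exactly what aws:SourceVpc evaluates.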
I'm trying to secure access to an internal static website.
Everyone in the company uses a VPN to access our Amazon VPC, so I would like to limit access to that site to people on the VPN.
I found this documentation on AWS about using a VPC endpoint, which seems to be what I'm looking for.
So I created a VPC endpoint with the following policy:
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
On my S3 bucket, I verified that I could access index.html both from the regular Web and from the VPN.
Then I added the following bucket policy to restrict access to only the VPC endpoint:
{
  "Id": "Policy1435893687892",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1435893641285",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789:user/op"
        ]
      }
    },
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": ["arn:aws:s3:::mybucket/*"],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1234567"
        }
      },
      "Principal": "*"
    }
  ]
}
Now the regular Web gets a 403, but I also get a 403 when I'm behind the company VPN.
Am I missing something?
@Michael - sqlbot is right.
It seems what you are doing is restricting access to the S3 bucket where you store that static web content to requests coming from a particular AWS VPC, using a VPC endpoint.
VPC endpoints establish associations between AWS services to allow requests coming from INSIDE the VPC.
You can't get what you want with VPC and S3 ACL configuration, but you can get it with ACLs and some VPN configuration.
Let's assume connecting to your company's VPN doesn't mean that all traffic, including internet traffic between the VPN clients and AWS S3, is routed through that VPN connection, because that's how sane VPN configurations usually work. If that's not the case, omit the following step:
Add a static route to your S3 bucket in your VPN server configuration, so every client reaches the bucket through the VPN instead of establishing a direct internet connection to it. For example, on OpenVPN, edit server.conf and add the following line:
push "route yourS3bucketPublicIP 255.255.255.255"
After that, you will see that when a client connects to the VPN, it gets an extra entry in its routing table, corresponding to the static route that tells it to reach the bucket through the VPN.
Use the S3 bucket policy's "IpAddress" field to set the configuration you want. It should look something like this:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
        "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
      }
    }
  ]
}
You use the IpAddress field to allow an IP or range of IPs in CIDR notation, and the NotIpAddress field the same way to exclude an IP or range of IPs (you can omit that one). The IP (or range of IPs) specified in IpAddress should be the public address(es) of the gateway interface(s) that route your company's VPN internet traffic (the IP address(es) S3 sees when somebody on your VPN tries to connect to it).
More info:
http://www.bucketexplorer.com/documentation/amazon-s3--access-control-list-acl-overview.html
http://aws.amazon.com/articles/5050/
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-3
https://openvpn.net/index.php/open-source/documentation/howto.html
Actually, @Michael - sqlbot was right until May 15, 2015. What you are doing is correct. You found the documentation (again, correctly) that allows you to set up an S3 bucket within a VPC (probably with no access from the outside world), the same way you set up your EC2 machines. Therefore,
On my S3 bucket, I verified that I could access index.html both from the regular Web and from the VPN.
is a problem. If you didn't make any mistakes, you shouldn't be able to access the bucket from the regular Web. Everything you did afterwards is irrelevant, because you didn't create the S3 bucket inside your VPN-connected VPC.
You don't give much detail as to what you did in your very first step; the easiest fix is probably to delete this bucket and start from the beginning. With the need to set up route tables and whatnot, it is easy to make a mistake. This is a simpler set of instructions, but it doesn't cover as much ground as the document you followed.
But any links that predate this capability (that is, any links from before May 2015) are irrelevant.