Response error from API Gateway HTTP endpoint to AWS Elasticsearch domain

I have a POST endpoint in API Gateway with the HTTP request integration. It's not a proxy endpoint. The URL it points at is the search URL for my AWS Elasticsearch domain. I'm taking the body of the POST request and mapping it through a template to build the query sent to the ES domain.
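For reference, a mapping template for this kind of integration might look roughly like the sketch below (the searchTerm input field and message target field are hypothetical placeholders for whatever the real template maps):
{
  "query": {
    "match": {
      "message": $input.json('$.searchTerm')
    }
  }
}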
Right now I'm getting this response back from ES: {"Message":"User: anonymous is not authorized to perform: es:ESHttpPost"}.
I don't have fine-grained access control enabled on the ES domain, and it's not in a VPC. My JSON access policy looks like this:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:ESHttp*",
"Resource": "arn:aws:es:xx-xxxx-x:999999999999:domain/XXXXX/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"X.X.X.X/X",
"X.X.X.X/X"
]
}
}
}
]
}
With this policy, I'm able to query the domain directly and navigate to the Kibana dashboard while on the IPs I have in the allow list. However, I'm assuming the query request coming from my gateway endpoint has a different source IP address, as that's where I get the 403 response mentioned above.
I know I can find the origin IP address in the x-forwarded-for header of the request, but I don't think I'm able to add that as a condition to the ES access policy. I've tried a few other policy conditions as well, such as a StringLike on the principal ARN and matching on the method request ARN, but these don't work.
Ideally, I would just like to be able to override the source IP with the actual origin rather than the gateway's IP address, but I don't know if this is possible. I also know I could just use a Lambda function here, but I would like to avoid one so I don't have more code / another service to maintain.
Basically, I'm hoping to still allow anonymous access when requests come from certain IPs, while abstracting certain frequently used queries behind a few gateway endpoints. Is there a way I can add the API endpoint to the allow list, or do I have to enable fine-grained access control?
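For reference, the "anonymous" in the error above reflects the fact that the domain only evaluates an IAM principal when the request is SigV4-signed; an unsigned request, which is what the plain HTTP integration sends, is treated as anonymous and can only be allowed by IP conditions. An identity-based statement would look roughly like the sketch below (the role ARN is a hypothetical placeholder), but it only takes effect for signed requests:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::999999999999:role/my-apigw-es-role"
  },
  "Action": "es:ESHttpPost",
  "Resource": "arn:aws:es:xx-xxxx-x:999999999999:domain/XXXXX/*"
}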

Related

Private S3 static website accessed only by API Gateway

I want to set up a static S3 website that is only accessible via API Gateway, so here is what I've done.
S3 side
1. Enabled static website hosting on the S3 bucket.
2. Blocked all public access.
3. Put in a bucket policy that only allows my VPC Endpoint to access it.
{
  "Version": "2012-10-17",
  "Id": "VPCe",
  "Statement": [
    {
      "Sid": "VPCe",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket.com/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-my-vpce"
        }
      }
    }
  ]
}
API Gateway side
1. Mapped that same VPCE to the API
2. Created a proxy resource
3. In the integration request, I made it HTTP and put my S3 website URL as the endpoint URL, with content handling set to passthrough.
4. But when I test this through APIGW, I get access denied.
Is there something I'm missing, or am I wrong to expect this to work?
I get a 403, Access Denied on this.
I want to set up a static S3 website that is only accessible via API Gateway
You can't do this; it's not possible. S3 static websites are only accessible through their public URL, so you have to reach them from the internet.
They are not meant to be accessed from a VPC using private IP addresses or VPC endpoints.
If you want a private static website, you have to host it yourself on a private EC2 instance or an ECS container.
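For reference, the S3 website endpoint that the HTTP integration points at only exists as a public, internet-facing URL of roughly this shape (the region here is an assumption):
http://my-bucket.com.s3-website-us-east-1.amazonaws.com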

How can I deny public access to an AWS API gateway?

Similar to this question, I would like to deny public access to an AWS API Gateway and only allow access when the API is invoked via a specific user account. I have applied the following resource policy to the gateway:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::123456789012:root",
          "arn:aws:iam::123456789012:user/apitestuser"
        ]
      },
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:abcd123456/*"
    }
  ]
}
But when I run
curl -X GET https://abcd123456.execute-api.us-east-1.amazonaws.com/dev/products
I still receive a success response with data:
[{"id":1,"name":"Product 1"},{"id":2,"name":"Product 2"}]
I am expecting to receive a 4XX response instead.
How can I change the policy to deny public access to the gateway? Or, is it not possible to deny public access without using a VPC? Ideally I wish to avoid using a VPC as using a NAT gateway in multiple regions will be costly. I also want to avoid building in any authentication mechanism as authentication and authorization take place in other API gateways which proxy to this gateway.
Based on the comments, the issue was that the stage was not re-deployed after adding/changing the policy.
So the solution was to re-deploy the stage for the policy to take effect.
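For example, redeploying the stage from the CLI looks like this (using the API ID and stage name from the question):
aws apigateway create-deployment --rest-api-id abcd123456 --stage-name dev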

Securely Connect to AWS Elasticsearch from Next.js Serverless Functions

I am trying to securely connect to my ES service from the serverless functions in Next.js. The serverless functions have no fixed IP address or IP address range, so I can't secure it that way. I have tried creating an IAM user and using the access key ID and secret access key to create the Basic authorization header like this.
const auth = btoa(`${accessKey}:${accessSecret}`);
Then I used that as the Authorization header like this.
`Authorization: Basic ${auth}`
The request comes back with the following error.
{
  "message": "Authorization header requires 'Credential' parameter. Authorization header requires 'Signature' parameter. Authorization header requires 'SignedHeaders' parameter. Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. Authorization=Basic XXXXXXXXXXXXXXXXXXXXXXXXXX"
}
I have given the IAM user permission through the access policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXX:user/elasticsearch"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-2:XXXXXXXX:domain/team-up/*"
    }
  ]
}
Is this the right way to secure the connection to the database? I have tried setting up a password on the database using the POST /_security/user/_password endpoint, but that fails too.
Amazon claims that basic authorization is supported.
https://aws.amazon.com/elasticsearch-service/faqs/
I didn't create a master user during the wizard setup of the AWS Elasticsearch domain, and once the domain is created there seems to be no way to add one. I deleted the Elasticsearch domain and recreated it with a master user.
I followed this guide.
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/fgac.html#fgac-walkthrough-basic
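As an alternative to recreating the domain with a master user, the original IAM-user approach can also work if the request is SigV4-signed instead of sent with Basic auth, which is what the error message above is asking for. A rough sketch from a Node serverless function, assuming the aws4 npm package and a hypothetical domain endpoint and index:
const aws4 = require('aws4');
const https = require('https');

// Hypothetical domain endpoint and index; replace with the real values.
const opts = {
  host: 'search-team-up-xxxxxxxx.us-east-2.es.amazonaws.com',
  path: '/my-index/_search',
  service: 'es',
  region: 'us-east-2',
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: { match_all: {} } }),
};

// aws4 adds the SigV4 Authorization and X-Amz-Date headers to opts in place.
aws4.sign(opts, {
  accessKeyId: process.env.ES_ACCESS_KEY_ID,
  secretAccessKey: process.env.ES_SECRET_ACCESS_KEY,
});

const req = https.request(opts, (res) => {
  let data = '';
  res.on('data', (chunk) => (data += chunk));
  res.on('end', () => console.log(data));
});
req.end(opts.body);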

Authorization using API Gateway and Lambda

Hi, I am trying to implement custom authorization using API Gateway and Lambda. My current understanding is as follows. I have created a simple GET method and deployed it to the Dev environment, and created a Lambda authorizer to return the IAM policy. I used the Python blueprint api-gateway-authorizer-python. Below is the format of the response we should get.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "execute-api:Invoke",
      "Effect": "Deny",
      "Resource": "arn:aws:execute-api:us-east-1:{ACCOUNTID}:{APIID}/ESTestInvoke-stage/GET/"
    }
  ]
}
In the above IAM policy, Resource is the ARN of my API's Dev stage. What does Action indicate? Also, to test this now, how can I get a token? I want to test it from Postman. I am just confused here. I have my AWS account, so is the authorization simply that my current account has access to this Dev stage? How does it work internally? Do we need to maintain another DB to store all the permissions? Can someone help me understand this? Any help would be appreciated. Thanks.
To get a token you need an identity provider. Amazon Cognito is one of those (Google and Facebook work as well). To understand that policy you have to understand the chain of calls.
Suppose a client calls an API endpoint (GET /orders). This triggers a service Lambda so the token can be verified. If the verification is successful, another Lambda (GetOrder, a business Lambda this time) is invoked by API Gateway.
If your service Lambda (the Lambda authorizer) returns a policy like this:
{
  "principalId": "apigateway.amazonaws.com",
  "policyDocument": {
    "Version": "2012-10-17",
    "Statement": [{
      "Action": "execute-api:Invoke",
      "Effect": "Allow",
      "Resource": "arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{API_ID}/Prod/GET/"
    }]
  }
}
then the API Gateway service (i.e. the principalId equal to apigateway.amazonaws.com) is allowed (Effect equal to Allow) to invoke (Action equal to execute-api:Invoke) the given API resource (Resource equal to arn:aws:execute-api:{REGION}:{ACCOUNT_ID}:{API_ID}/Prod/GET/).
In your case the ARN that you return refers to the API Gateway test-invoke stage, but it should point to your real stage and method.
This article may help.
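To make that concrete, a minimal TOKEN authorizer returning such a policy could look like the sketch below (written in Node.js rather than the Python blueprint from the question; the token check is a placeholder, not real verification logic):
exports.handler = async (event) => {
  // event.authorizationToken carries the value of the configured identity header.
  const token = event.authorizationToken;
  // Placeholder check; a real authorizer would validate a JWT or call an identity provider.
  const effect = token === 'allow' ? 'Allow' : 'Deny';
  return {
    principalId: 'example-user',
    policyDocument: {
      Version: '2012-10-17',
      Statement: [
        {
          Action: 'execute-api:Invoke',
          Effect: effect,
          // event.methodArn is the ARN of the method actually being invoked.
          Resource: event.methodArn,
        },
      ],
    },
  };
};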
OK, so what are custom authorizers for API Gateway? Custom authorizers let you define your own authentication and authorization logic.
How do you get the token? That's part of your authentication and authorization logic. If you are deploying your services on AWS, you can use Amazon Cognito; API Gateway also supports Cognito authorization.
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-integrate-with-cognito.html
Using Postman to test? It's handy to use Postman; I use it.
How do the permissions work internally? You can use a token to authenticate a user (if you are using a JWT, you can also verify the user's claims).
Do you need an internal DB? This depends entirely on your use case. If your use case is as simple as all users being treated equally, you might not need a DB. If some users can access some additional features, you may still not need a DB (you can use claims). But if your application becomes complicated and you have to manage different access permissions, users, groups, etc., you will likely need a DB.

How to limit AWS API Gateway access to specific CloudFront distribution or Route53 subdomain

I have an API Gateway API set up that I want to limit access to. I have a subdomain set up in AWS Route 53 that points to a CloudFront distribution where my app lives. This app makes a POST request to the API.
I have looked into adding a resource policy for my API based on the 'AWS API Whitelist' example, but I can't seem to get the syntax correct and I constantly get errors.
I also tried creating an IAM user and locking down the API with AWS_IAM auth, but then I need to create a signed request, which seems like a lot of work for something that should be much easier via resource policies.
This is an example of the resource policy I tried to attach to my API:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {{CloudFrontID}}"
      },
      "Action": "execute-api:Invoke",
      "Resource": [
        "execute-api:/*/*/*"
      ]
    }
  ]
}
This returns the following error:
Invalid policy document. Please check the policy syntax and ensure that Principals are valid.
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {{CloudFrontID}}"
The problem with this concept is that this is a public HTTP request. Unless it's a signed request, AWS does not know about any IAM principals or ARN resources; it just sees a standard HTTP request. If you make the request with curl -v you will see the request headers look something like this:
GET /test/mappedcokerheaders HTTP/2
Host: APIID.execute-api.REGION.amazonaws.com
User-Agent: curl/7.61.1
Accept: */*
It's possible you could filter on the User-Agent, as I do see that condition defined here.
I would compare all of the values coming in on the request from CloudFront vs. the request from your curl directly to the API, by trapping the API Gateway request ID in the response headers and looking for it in your API Gateway access logs. You'll have to enable access logs, though, and define which parameters you want logged, which you can see how to do here.
The problem is that an OAI cannot be used with a custom origin. If you are not forwarding User-Agent to the API Gateway custom origin, then the simplest approach for you is to add a resource policy in API Gateway which only allows aws:UserAgent: "Amazon CloudFront".
Be careful: User-Agent can very easily be spoofed. This approach is designed only to prevent "normal access", like a random bot on the web trying to scrape your site.
The User-Agent header is guaranteed to be Amazon CloudFront. See the quote from Request and Response Behavior for Custom Origins.
Behavior If You Don't Configure CloudFront to Cache Based on Header Values: CloudFront replaces the value of this header field with Amazon CloudFront. If you want CloudFront to cache your content based on the device the user is using, see Configuring Caching Based on the Device Type.
Here is what the full resource policy looks like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-west-2:123456789012:abcdefghij/*/*/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "Amazon CloudFront"
        }
      }
    }
  ]
}
Here is how to configure it in serverless.yml:
provider:
  resourcePolicy:
    - Effect: Allow
      Principal: "*"
      Action: execute-api:Invoke
      Resource:
        - execute-api:/*/*/*
      Condition:
        StringEquals:
          aws:UserAgent: "Amazon CloudFront"
I have a subdomain set up in AWS Route 53 that points to a CloudFront distribution where my app lives. This app makes a POST request to the API.
What I understand is that you have a public service that can be called from the web browser (https://your-service.com).
You want the service to respond only when the client's browser is at https://your-site.com. The service will not respond when the browser is, for example, on https://another-site.com.
If that is the case, you will need to read more about CORS.
This will not prevent a random person or web client from calling your service directly at https://your-service.com, however. To protect the service from that, you need a proper authentication system.
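Concretely, CORS means the API's responses (including the preflight OPTIONS response) carry headers scoped to your site, along these lines (the origin, methods, and headers below are placeholders for your setup):
Access-Control-Allow-Origin: https://your-site.com
Access-Control-Allow-Methods: POST, OPTIONS
Access-Control-Allow-Headers: Content-Type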