How to protect non-production API Gateway endpoints?

In our previous instance-based backends, we protected the dev version of the API endpoints with basic auth for a good portion of the development process.
Is there a pattern for efficiently hiding our serverless endpoints (API Gateway) from the public while keeping them available for the frontend team to develop against?

Create an API key for your non-production API Gateway, different from any key you use in production. Give that key to your dev team; they will be able to call your non-production API Gateway endpoints, while the API remains inaccessible to anyone without the key.
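If you manage the gateway with boto3, a minimal sketch of that setup could look like the following (the key name, API ID and stage are placeholders, and "API Key Required" still has to be enabled on the methods):

    import boto3

    # Sketch: create a dev-only API key and attach it to a usage plan
    # covering the non-production stage. IDs and names are placeholders.
    apigw = boto3.client("apigateway")

    key = apigw.create_api_key(name="dev-frontend-key", enabled=True)

    plan = apigw.create_usage_plan(
        name="dev-usage-plan",
        apiStages=[{"apiId": "abc123defg", "stage": "dev"}],
    )

    apigw.create_usage_plan_key(
        usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
    )

    # Hand key["value"] to the dev team; they send it as the x-api-key header.
    print(key["value"])

Requests that arrive without the x-api-key header, or with the wrong key, are rejected with a 403.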

Related

How to give access to an API behind AWS API Gateway from our SDK

We use AWS API Gateway to protect our APIs.
We want to create an SDK for developers. The idea is that they add in-app purchases to their apps and call our APIs to start a payment.
We are trying to find a way to connect machine to machine: the first machine will be an SDK integrated into Android games, and the second will be our backend protected by API Gateway.
The SDK should have the authorization to consume the API.
Today we don't know how to do it, even after discussing it with many architects.
The SDK is not going to have a username and password, because it is not a user, so there is no way to generate the classic access token.
How could we achieve this use case?
Thanks
The SDK should have the authorization to consume the API. Today we don't know how to do it, even after discussing it with many architects.
There are three common ways to protect an API Gateway: AWS Cognito, AWS Lambda authorizers, and API Keys. The third option, API Keys, works a little differently from the previous two.
With Cognito, you create user pools and API Gateway automatically handles authorization when a request is received.
With Lambda, you create your own custom authorization logic.
I generally prefer the Lambda approach due to its ease of use, but it depends on the use case.
With the API Key option, you create API keys for specific routes and API Gateway checks them through the request headers. I think any use case that requires this can also be handled with the Lambda option.
What you are describing doesn't sound difficult to achieve.
You will generate an API key and give it to the developers who will use your API.
In your SDK, accept the API key as an initialization parameter and include it in the requests to API Gateway.
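A minimal sketch of what that could look like, using Python for illustration (the same idea applies in the Android SDK); the class name, endpoint and payload fields are hypothetical:

    import requests  # any HTTP client works; requests is just an example

    class PaymentsClient:
        """Hypothetical SDK client: the API key issued to the developer is
        passed in at initialization and sent with every request."""

        def __init__(self, api_key, base_url="https://api.example.com/v1"):
            self.api_key = api_key
            self.base_url = base_url

        def start_payment(self, product_id, amount_cents):
            resp = requests.post(
                self.base_url + "/payments",
                headers={"x-api-key": self.api_key},
                json={"productId": product_id, "amountCents": amount_cents},
            )
            resp.raise_for_status()
            return resp.json()

    # The game developer supplies the key you issued to them:
    client = PaymentsClient(api_key="issued-api-key")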

Implementing a backend-less auth mechanism

I would like to implement an SPA which bounces the user to a login page if they are not already logged in. It would then be able to make a call to an API (not necessarily an API Gateway) hosted within an AWS VPC.
As I currently understand it, this would involve a front-end framework library authenticating the user via OAuth 2.0. It would then need to retrieve a token (allowed because of the auth validation) to call an API Gateway which provides access to the API hosted within the VPC.
Given this concept, is this architecture possible without the use of a Lambda?
If you are willing to use API Gateway in front of your API:
Yes, this architecture is possible without using a Lambda. API Gateway has a built-in integration with AWS Cognito User Pools for authorizing requests, and the AWS docs describe how to set this up.
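As an illustration, a request authorized by a Cognito User Pool authorizer just needs a valid token from the pool in the Authorization header. A rough sketch (client ID, credentials and invoke URL are placeholders, and a real SPA would normally obtain the token through the hosted UI / OAuth flow rather than USER_PASSWORD_AUTH):

    import boto3
    import requests

    cognito = boto3.client("cognito-idp")

    # Obtain tokens from the user pool (placeholder client ID and credentials).
    auth = cognito.initiate_auth(
        ClientId="your-app-client-id",
        AuthFlow="USER_PASSWORD_AUTH",
        AuthParameters={"USERNAME": "dev-user", "PASSWORD": "dev-password"},
    )
    id_token = auth["AuthenticationResult"]["IdToken"]

    # Call a method protected by the Cognito User Pool authorizer.
    resp = requests.get(
        "https://abc123defg.execute-api.us-east-1.amazonaws.com/dev/items",
        headers={"Authorization": id_token},
    )
    print(resp.status_code)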
If you don't have an AWS API gateway in front of your API:
In that case you will have to implement one of the authentication and authorization flows provided by the OAuth 2.0 standard. Whether you want to use a Lambda for that is up to you and your back-end architecture.

AWS HTTP API Gateway as a proxy to private S3 bucket

I have a private S3 bucket with lots of small files. I'd like to expose the contents of the bucket (read-only access) using AWS API Gateway as a proxy. Both the S3 bucket and the API Gateway belong to the same AWS account and are in the same VPC and Availability Zone.
AWS API Gateway comes in two types: HTTP API and REST API. The REST API's configuration options are more advanced, and it supports many more AWS service integrations than the HTTP API; in fact, the use case I described above is fully covered in one of the documentation tabs for the REST API. However, the REST API has one huge disadvantage: it's about 70% more expensive than the HTTP API. The price buys more configuration options, but for now I need only one of them (integration with the S3 service), which is why I believe the REST API is not well suited for my use case. I started searching for a way to integrate the HTTP API with S3, and so far I haven't found one.
I tried creating/editing the service-linked roles associated with the HTTP API Gateway instance, but those roles can't be edited (read-only access). At this point I have no idea where to search next, or whether my goal is even achievable with the HTTP API.
I am a fan of AWS's HTTP APIs.
I work daily with an API that serves a very similar purpose. The way I have done it is by using AWS Lambda functions integrated with the API's paths.
What works for me is this:
Define your API paths, and integrate them with AWS Lambda functions.
Have your integrated Lambda function return a signed URL for any object you want to provide access to through API calls.
There are several different ways to pass the name of the object(s) you want to the Lambda function servicing the API call.
This is the short answer. I plan to give a longer answer at a later time. But this has worked for me.
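To make the idea concrete, here is a rough sketch of such a Lambda handler, assuming a hypothetical HTTP API route like GET /objects/{key} and a bucket name supplied through an environment variable:

    import json
    import os
    import boto3

    s3 = boto3.client("s3")
    BUCKET = os.environ.get("BUCKET_NAME", "my-private-bucket")  # placeholder

    def handler(event, context):
        # Object key taken from the route's path parameter
        # (assumed route: GET /objects/{key}).
        key = event["pathParameters"]["key"]

        # Short-lived presigned URL granting read access to this one object.
        url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": BUCKET, "Key": key},
            ExpiresIn=300,
        )

        # Return the URL (you could also respond with a 302 redirect to it).
        return {"statusCode": 200, "body": json.dumps({"url": url})}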

Auth between AWS API Gateway and Elastic Cloud hosted Elasticsearch

We're configuring an AWS API Gateway proxy in front of Elasticsearch deployed on Elastic Cloud (for throttling, usage plans, and various other reasons). In order to authenticate between the Gateway and ES, one idea is to configure an integration request on the API Gateway resource to add an Authorization header with creds created in ES. Is this the best strategy? It seems inferior to IAM roles, but that option isn't available as they're not accessible for the ES instance (Elastic Cloud hosts our deployment on AWS, but it's not a resource under our control). The API Gateway itself will require an API key.
I am not an expert at Elasticsearch, but it sounds like you want to securely forward a request from API Gateway to another REST web service. Because Elasticsearch is a REST web service external to AWS, you will not have access to IAM roles. I had a similar integration with another cloud REST service (not Elasticsearch) and will do my best to review the AWS tools available to complete the request.
One idea is to configure an integration request on the API Gateway resource to add an Authorization header with creds created in ES. Is this the best strategy?
This is the most straightforward strategy. In API Gateway, you can map custom headers in the Integration Request; this is where you would map your Authorization header for Elasticsearch.
Similarly, you can source the Authorization header from a "stage variable", which makes it easier to maintain if the header changes across different Elasticsearch environments.
In both strategies, you are storing your Authorization Header in API Gateway. Since the request to Elasticsearch should be HTTPS, the data will be secure in transit. This thread has more information about storing credentials in API Gateway.
From MikeD@AWS: There are currently no known issues with using stage variables to manage credentials; however, stage variables were not explicitly designed to be a secure mechanism for credentials management. Like all API Gateway configuration information, stage variables are protected using standard AWS permissions and policies and they are encrypted when transmitted over the wire. Internally, stage variables are treated as confidential customer information.
I think this applies to your question. You can store the Authorization header in the API Gateway proxy; however, you have to acknowledge that API Gateway configuration information was not explicitly designed for sensitive data. That being said, there are no known issues with doing so. This approach is the most straightforward to configure if you are willing to accept that risk.
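For reference, a sketch of setting that header on the integration request with boto3 (API and resource IDs, the Elasticsearch URI and the credential value are all placeholders; the commented line shows the stage-variable variant instead of a static value):

    import boto3

    apigw = boto3.client("apigateway")

    apigw.put_integration(
        restApiId="abc123defg",
        resourceId="res456",
        httpMethod="GET",
        type="HTTP_PROXY",
        integrationHttpMethod="GET",
        uri="https://your-cluster.es.example.com/my-index/_search",
        requestParameters={
            # Static value (note the inner single quotes) ...
            "integration.request.header.Authorization": "'Basic bXl1c2VyOm15cGFzcw=='",
            # ... or map it from a stage variable instead:
            # "integration.request.header.Authorization": "stageVariables.esAuthHeader",
        },
    )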
What is a more "AWS" Approach?
An "AWS" approach would be to use the services designed for the function. For example, using the Key Management Service to store your Elasticsearch Authorization Header.
Similarly to the tutorial referenced in the comments, you will want to forward your request from API Gateway to Lambda. You will be responsible for creating the HTTPS request to Elasticsearch in the language of your choice. There are several tutorials on this but this is the official AWS documentation. AWS provides blueprints as a template to start a Lambda Function. The Blueprint https-request will work.
Once the request is being forwarded from API Gateway to Lambda, configure the authorization header for the Lambda request as an environment variable and enable environment variable encryption. This is a secure, recommended way to store sensitive data such as the Elasticsearch authorization header.
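A sketch of what that Lambda could look like, assuming the encrypted header value and the Elasticsearch endpoint are supplied as environment variables (names are placeholders, and depending on how the variable was encrypted, kms.decrypt may also require an EncryptionContext):

    import base64
    import os
    import urllib.request
    import boto3

    kms = boto3.client("kms")

    # Decrypt the KMS-encrypted environment variable once, at cold start.
    AUTH_HEADER = kms.decrypt(
        CiphertextBlob=base64.b64decode(os.environ["ES_AUTH_HEADER"])
    )["Plaintext"].decode()

    ES_ENDPOINT = os.environ.get("ES_ENDPOINT", "https://your-cluster.es.example.com")

    def handler(event, context):
        # Forward the request to Elasticsearch over HTTPS with the decrypted header.
        req = urllib.request.Request(
            ES_ENDPOINT + "/my-index/_search",
            headers={"Authorization": AUTH_HEADER},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode()
        return {"statusCode": 200, "body": body}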
This approach will require more configuration but uses AWS services for intended purposes.
My opinion: I initially used the first approach (Authorization headers in API Gateway) to authenticate with a dev instance because it was quick and easy, but as I learned more I decided the second approach was more aligned with the AWS Well-Architected Framework.

AWS API Gateway Lambda as a proxy for microservices

As my project is going to be deployed on AWS, we started thinking about AWS API Gateway as a way to have one main entry point for all of our microservices (frankly speaking, we would also like to use it for other reasons, such as security). I was playing with the API Gateway REST API and had the feeling that it is a bit inconvenient to have to register every REST service we have there.
I found a very good option: using AWS API Gateway with a Lambda function as a proxy. It is described here:
https://medium.com/wolox-driving-innovation/https-medium-com-wolox-driving-innovation-building-microservices-api-aws-e9a455cc3456
https://aws.amazon.com/blogs/compute/using-api-gateway-with-vpc-endpoints-via-aws-lambda
I would like to know your opinion about this approach. Maybe you could also share some other approaches that can simplify API Gateway configuration for REST APIs?
There are a few considerations when you proxy your existing services through API Gateway.
If your backend is not publicly accessible, you need to set up a VPC and a site-to-site VPN connection from the VPC to your backend network, and use Lambdas to proxy your services.
If you need to do any data transformations or aggregations, you need to use Lambdas (running them inside the VPC is optional unless the VPN connection is needed).
If you have complex integrations behind the API Gateway for your services, you can look into running an ESB or messaging middleware on-premises or in AWS and proxying it through API Gateway.
You can move data model schema validations to API Gateway.
You can move service authentication to API Gateway by writing a custom authorizer Lambda (see the sketch after this list).
If you happen to move your user pool and identity service to AWS, you can migrate to the managed AWS Cognito service and use the Cognito authorizer in API Gateway for authentication.
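For the custom authorizer item above, here is a minimal sketch of a TOKEN authorizer Lambda; the token check is a placeholder, and in practice you would validate a JWT or look the token up:

    def handler(event, context):
        # API Gateway passes the caller's token and the method ARN to a
        # TOKEN authorizer. The check below is a placeholder.
        token = event.get("authorizationToken", "")
        effect = "Allow" if token == "expected-token" else "Deny"

        # Return an IAM policy that API Gateway caches and enforces.
        return {
            "principalId": "service-client",
            "policyDocument": {
                "Version": "2012-10-17",
                "Statement": [
                    {
                        "Action": "execute-api:Invoke",
                        "Effect": effect,
                        "Resource": event["methodArn"],
                    }
                ],
            },
        }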
For use cases where you adopt dumb pipes (as described on martinfowler.com), AWS API Gateway is a reasonable option.
For AWS API Gateway, I'd suggest describing/designing your API first with RAML or OpenAPI/Swagger and then importing it into AWS with the API Gateway importer.
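For example, an OpenAPI (JSON or YAML) definition can be imported as a new REST API with a call like the following (the file name is a placeholder; a RAML description would need converting to OpenAPI first, e.g. with APIMATIC):

    import boto3

    apigw = boto3.client("apigateway")

    # Import an OpenAPI/Swagger definition as a new REST API.
    with open("api-definition.yaml", "rb") as f:
        api = apigw.import_rest_api(failOnWarnings=True, body=f.read())

    print("Created REST API:", api["id"])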
As soon as you plan to move logic in there, such as dynamic routing, detailed monitoring, alerting, etc., I'd suggest considering other approaches, such as:
Apigee
Mulesoft
WSO2
You can also host them on an EC2 instance within your VPC or opt in for the hosted version (which can carry a significant price tag in some cases).
For describing APIs you can use RAML (for Mulesoft) or OpenAPI (formerly Swagger, for Apigee and WSO2). You can also convert between them using APIMATIC, which lets you migrate your specification across various API gateways (even AWS).