Is there a way to configure the HTTP proxy for all methods of a resource at once?
First:
GET, POST, PUT, DELETE of /v1/resource should all be forwarded to http://api.com/resource/{id}
If that is possible, can I still map
GET /v1/resource/schema to an S3 bucket?
Or do I have to define them one by one?
The fallback way to save some time is probably to generate Swagger API documentation and create the API Gateway setup from it.
API Gateway doesn't support setting up multiple methods for one resource at once. As you already mentioned, you could create a Swagger template and use the API Gateway Importer tool (https://github.com/awslabs/aws-apigateway-importer).
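If you go the Swagger route, a single document can declare every method on the resource and be imported in one call. Below is a minimal sketch using the AWS SDK's importRestApi; the resource path, backend URI, and region are placeholders, and the shared operation builder is just one way to avoid repeating yourself:

```typescript
// Sketch only: one Swagger 2.0 document declares GET/POST/PUT/DELETE on the same
// resource, each pointing to the same HTTP proxy backend, and the whole API is
// created with a single importRestApi call. Path, URI and region are placeholders.
import { APIGateway } from 'aws-sdk';

const apigateway = new APIGateway({ region: 'eu-west-1' });

// Build one operation object per method so the integration isn't repeated by hand.
const proxyOperation = (httpMethod: string) => ({
  parameters: [{ name: 'id', in: 'path', required: true, type: 'string' }],
  responses: { '200': { description: 'proxied response' } },
  'x-amazon-apigateway-integration': {
    type: 'http_proxy',
    httpMethod,
    uri: 'http://api.com/resource/{id}',
    requestParameters: { 'integration.request.path.id': 'method.request.path.id' },
    responses: { default: { statusCode: '200' } },
  },
});

const swagger = {
  swagger: '2.0',
  info: { title: 'v1-proxy', version: '1.0' },
  paths: {
    '/v1/resource/{id}': {
      get: proxyOperation('GET'),
      post: proxyOperation('POST'),
      put: proxyOperation('PUT'),
      delete: proxyOperation('DELETE'),
    },
    // '/v1/resource/schema' could get its own, different integration (e.g. S3) here.
  },
};

async function createApi(): Promise<void> {
  const result = await apigateway
    .importRestApi({ body: JSON.stringify(swagger), failOnWarnings: true })
    .promise();
  console.log('Created REST API', result.id);
}

createApi().catch(console.error);
```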
Best,
Jurgen
I'm looking for a solution where AWS API Gateway changes the method endpoint URL dynamically.
I am familiar with stage variables, and in the Integration Request I can change the endpoint per method, e.g. (https://${stageVariables.Url}/api/DoSomething).
What I need is for the endpoint information to be parsed from the request itself, something like:
https://${RequestData.Url}/api/DoSomething
I have the same API in different locations, and to implement centralized API keys and logging services I am trying to forward all traffic through this one API Gateway.
After the first request the client gets its endpoint information, but I don't know how to make the client's subsequent requests to the Gateway forward to the endpoint the client received earlier.
I got an answer from AWS support. They said that I have to make a Lambda function to process all requests, or just use stage variables.
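For what it's worth, here is a rough sketch of the Lambda option they described: a single proxy function behind the Gateway that forwards each call to whatever backend the client was assigned earlier. The X-Target-Host header and the pass-through of method, path and body are my own assumptions, not anything AWS prescribes:

```typescript
// Sketch of a catch-all proxy Lambda: the client echoes its assigned endpoint
// back in a header and the function forwards the request there.
import * as https from 'https';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // The endpoint the client received on its first request (assumed header name).
  const targetHost = event.headers['X-Target-Host'];
  if (!targetHost) {
    return { statusCode: 400, body: 'Missing X-Target-Host header' };
  }

  // Forward the original method, path and body to that backend.
  const responseBody = await new Promise<string>((resolve, reject) => {
    const req = https.request(
      {
        hostname: targetHost,
        path: event.path,
        method: event.httpMethod,
        headers: { 'Content-Type': 'application/json' },
      },
      (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve(data));
      }
    );
    req.on('error', reject);
    if (event.body) req.write(event.body);
    req.end();
  });

  return { statusCode: 200, body: responseBody };
};
```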
I have a private S3 bucket with lots of small files. I'd like to expose the contents of the bucket (read-only access only) using AWS API Gateway as a proxy. Both the S3 bucket and the API Gateway belong to the same AWS account and are in the same VPC and Availability Zone.
AWS API Gateway comes in two types: HTTP API and REST API. The configuration options of the REST API are more advanced, and the REST API supports many more AWS service integrations than the HTTP API. In fact, the use case I described above is fully covered in one of the documentation tabs for the REST API.
However, the REST API has one huge disadvantage: it's about 70% more expensive than the HTTP API. The price comes with more configuration options, but for now I need only one, integration with the S3 service, which is why I believe this type of service is not well suited for my use case. I started searching for whether the HTTP API can be integrated with S3, and so far I haven't found any way to achieve it.
I tried creating/editing the service-linked roles associated with the HTTP API Gateway instance, but those roles can't be edited (they are read-only). For now, I don't have any idea where I should search next, or whether my goal is even achievable using the HTTP API.
I am a fan of AWS's HTTP APIs.
I work daily with an API that serves a very similar purpose. The way I have done it is by using AWS Lambda functions integrated with the API's paths.
What works for me is this:
Define your API paths, and integrate them with AWS Lambda functions.
Have your integrated Lambda function return a signed URL for any objects you want to provide access to through API calls.
There are several different ways to pass the name of the object(s) you want to the Lambda function servicing the API call.
This is the short answer. I plan to give a longer answer at a later time. But this has worked for me.
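As a minimal sketch of that pattern (the "key" path parameter and the BUCKET_NAME environment variable are names I chose, not requirements):

```typescript
// Sketch: Lambda behind an HTTP API route such as GET /objects/{key} that
// answers with a short-lived pre-signed S3 URL instead of streaming the object.
import { S3 } from 'aws-sdk';
import { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from 'aws-lambda';

const s3 = new S3();

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  const key = event.pathParameters?.key;
  if (!key) {
    return { statusCode: 400, body: 'Missing object key' };
  }

  // Pre-signed GET URL valid for 5 minutes; the caller downloads straight from S3.
  const url = await s3.getSignedUrlPromise('getObject', {
    Bucket: process.env.BUCKET_NAME,
    Key: key,
    Expires: 300,
  });

  return { statusCode: 302, headers: { Location: url } };
};
```

The function's execution role only needs s3:GetObject on the bucket, and the redirect keeps the object bytes out of the API path entirely.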
I deployed a few microservices to Fargate. Each microservice has around 30 API endpoints.
I have an AWS ALB which does the path-based routing to Fargate.
I created API Gateway APIs to expose the APIs externally. The API Gateway integration method is HTTP, and it points to the ALB endpoint.
Is this the proper way to set up the microservices? If not, please suggest a better approach.
Also, I want to automatically import the Swagger definition into API Gateway whenever the Swagger definition changes. The Swagger definition is exposed under /apidocs of each microservice. How do I automate the import of the Swagger definition into API Gateway? Is there a commonly used approach?
Keep in mind you'll need to update both the API Gateway and ALB paths. To import a new Swagger definition you'll need some kind of triggered event to:
Upload the new Swagger definition to S3.
Create an API Gateway deployment.
Perform the path edits in the ALB target group(s). This one could be done by triggering a CLI or API call (or the SDK).
If you use CloudFormation, you could trigger an UPDATE containing all these new changes too. If so, also keep an eye on resource parameters that may or may not require resource replacement.
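For illustration, here is a minimal sketch of the import-and-deploy part, assuming the Swagger document is pulled straight from a service's /apidocs endpoint rather than from S3; the REST API id, stage name, and service URL are placeholders, and the ALB path edits would still be a separate ElasticLoadBalancingV2 call:

```typescript
// Sketch of a triggered function: fetch the current Swagger document, overwrite
// the REST API definition, then create a deployment so the stage picks it up.
import { APIGateway } from 'aws-sdk';
import * as https from 'https';

const apigateway = new APIGateway();
const REST_API_ID = 'abc123def4';                                    // placeholder
const STAGE = 'prod';                                                // placeholder
const SWAGGER_URL = 'https://service.internal.example.com/apidocs';  // placeholder

const fetchSwagger = (): Promise<string> =>
  new Promise((resolve, reject) => {
    https
      .get(SWAGGER_URL, (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve(data));
      })
      .on('error', reject);
  });

export const handler = async (): Promise<void> => {
  const swagger = await fetchSwagger();

  // Replace the API definition with the new Swagger document.
  await apigateway
    .putRestApi({ restApiId: REST_API_ID, mode: 'overwrite', body: swagger })
    .promise();

  // A deployment is still needed before the change is visible on the stage.
  await apigateway
    .createDeployment({ restApiId: REST_API_ID, stageName: STAGE })
    .promise();
};
```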
Hope it helps.
I am trying to create a service root endpoint which will respond with a list of all existing path templates. I can create the response manually. Is there any way to get the list other than this manual approach?
If you are using Serverless with AWS + API Gateway, you can probably use the getResources method of the API Gateway client in the AWS SDK.
Example in the JS SDK:
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/APIGateway.html#getResources-property
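For example, a small sketch of that call, paginating through the resources to collect every path template (the REST API id and region are placeholders):

```typescript
// Sketch: list every path template defined on a REST API via getResources.
import { APIGateway } from 'aws-sdk';

const apigateway = new APIGateway({ region: 'us-east-1' });

async function listPaths(restApiId: string): Promise<string[]> {
  const paths: string[] = [];
  let position: string | undefined;

  // getResources is paginated, so follow "position" until it is empty.
  do {
    const page = await apigateway
      .getResources({ restApiId, limit: 500, position })
      .promise();
    page.items?.forEach((item) => item.path && paths.push(item.path));
    position = page.position;
  } while (position);

  return paths;
}

listPaths('abc123def4').then((paths) => console.log(paths.join('\n')));
```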
We're configuring an AWS API Gateway proxy in front of Elasticsearch deployed on Elastic Cloud (for throttling, usage plans, and various other reasons). In order to authenticate between the Gateway and ES, one idea is to configure an integration request on the API Gateway resource to add an Authorization header with creds created in ES. Is this the best strategy? It seems inferior to IAM roles, but that option isn't available because they're not usable with the ES instance (Elastic Cloud hosts our deployment on AWS, but it's not a resource under our control). The API Gateway itself will require an API key.
I am not an expert at Elasticsearch, but it sounds like you want to securely forward a request from API Gateway to another REST web service. Because Elasticsearch is a REST web service external to AWS, you will not have access to IAM roles. I had a similar integration with another cloud REST service (not Elasticsearch) and will do my best to review the AWS tools that are available to complete the request.
One idea is to configure an integration request on the API Gateway resource to add an Authorization header with creds created in ES. Is this the best strategy?
This is the most straightforward strategy. In API Gateway, you can map custom headers in the Integration Request. This is where you will map your Authorization header for Elasticsearch.
Similarly, you can map your Authorization header from a stage variable, which will make it easier to maintain if the Authorization header changes across different Elasticsearch environments.
In both strategies, you are storing your Authorization Header in API Gateway. Since the request to Elasticsearch should be HTTPS, the data will be secure in transit. This thread has more information about storing credentials in API Gateway.
From MikeD@AWS: There are currently no known issues with using stage variables to manage credentials; however, stage variables were not explicitly designed to be a secure mechanism for credentials management. Like all API Gateway configuration information, stage variables are protected using standard AWS permissions and policies and they are encrypted when transmitted over the wire. Internally, stage variables are treated as confidential customer information.
I think this applies to your question. You can store the Authorization header in the API Gateway proxy; however, you have to acknowledge that API Gateway configuration information was not explicitly designed for sensitive information. That being said, there are no known issues with doing so. This approach is the most straightforward to configure if you are willing to assume that risk.
What is a more "AWS" Approach?
An "AWS" approach would be to use the services designed for the function. For example, using the Key Management Service to store your Elasticsearch Authorization Header.
Similarly to the tutorial referenced in the comments, you will want to forward your request from API Gateway to Lambda. You will be responsible for creating the HTTPS request to Elasticsearch in the language of your choice. There are several tutorials on this, but this is the official AWS documentation. AWS provides blueprints as templates to start a Lambda function. The blueprint https-request will work.
Once the request is being forwarded from API Gateway to Lambda, configure the authorization header for the Lambda request as an environment variable and implement environment variable encryption. This is a secure, recommended way to store sensitive data such as the Elasticsearch authorization header.
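As an illustration only, here is a sketch of that Lambda, assuming the Authorization header sits in a KMS-encrypted environment variable; the variable names, the ES host, and the index path are placeholders:

```typescript
// Sketch: decrypt the Elasticsearch Authorization header from an encrypted
// environment variable, then forward the request body to Elasticsearch over HTTPS.
import { KMS } from 'aws-sdk';
import * as https from 'https';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const kms = new KMS();
let cachedAuthHeader: string | undefined;

// Decrypt once per container and cache the plaintext across invocations.
async function getAuthHeader(): Promise<string> {
  if (cachedAuthHeader) return cachedAuthHeader;
  const decrypted = await kms
    .decrypt({ CiphertextBlob: Buffer.from(process.env.ES_AUTH_ENCRYPTED!, 'base64') })
    .promise();
  cachedAuthHeader = decrypted.Plaintext!.toString();
  return cachedAuthHeader;
}

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const authHeader = await getAuthHeader();

  // Forward the search request with the decrypted Authorization header.
  const responseBody = await new Promise<string>((resolve, reject) => {
    const req = https.request(
      {
        hostname: process.env.ES_HOST,   // e.g. the Elastic Cloud endpoint (placeholder)
        path: '/my-index/_search',       // placeholder
        method: 'POST',
        headers: { Authorization: authHeader, 'Content-Type': 'application/json' },
      },
      (res) => {
        let data = '';
        res.on('data', (chunk) => (data += chunk));
        res.on('end', () => resolve(data));
      }
    );
    req.on('error', reject);
    if (event.body) req.write(event.body);
    req.end();
  });

  return { statusCode: 200, body: responseBody };
};
```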
This approach will require more configuration but uses AWS services for their intended purposes.
My Opinion: I initially used the first approach (Authorization headers in API Gateway) to authenticate with a dev instance because it was quick and easy, but as I learned more I decided the second approach was more aligned with the AWS Well-Architected Framework.