I created a resource in Amazon's ApiGateway. It is pointing to a Lambda function. This is being hit by a native mobile application (android and ios) which is already in the wild.
I now want to modify the Lambda function, but I see no way to change my ApiGateway resource to point to an alias of the lambda. This is my first time playing with any of these technologies and I see no easy mechanism to manage this in the aws console.
How can I modify my ApiGateway resource to point to my lambda alias so I can edit trunk without affecting existing clients?
Under Integration Type -> Lambda Function you need to add a reference to the stage variable, e.g. MyLambdaFunctionName:${stageVariables.lambdaAlias}, and then for each stage set lambdaAlias in the Stage Variables tab accordingly (lambdaAlias=dev, lambdaAlias=prod, etc.)
There is an example with screenshots here: https://aws.amazon.com/blogs/compute/using-api-gateway-stage-variables-to-manage-lambda-functions/
It's kind of hidden towards the very bottom of the page, starting with "Alternatively, you can mix and match static names"
For future Googlers: be careful to add permissions WITH the correct alias, like yourfunc:prod, not only yourfunc. That means if you're planning to use three aliases to invoke the Lambda function, you have to add three permissions.
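To make the per-alias permission concrete, here is a minimal sketch that builds the kwargs for `add_permission` with the alias qualifier; the function name, statement id scheme, and source ARN are hypothetical, and the actual boto3 call is shown commented out since it needs AWS credentials:

```python
def alias_permission(function_name, alias, source_arn):
    # Kwargs for lambda_client.add_permission() -- note the alias qualifier
    # in FunctionName ("yourfunc:prod"), not just the bare function name.
    return {
        "FunctionName": f"{function_name}:{alias}",
        "StatementId": f"apigateway-invoke-{alias}",
        "Action": "lambda:InvokeFunction",
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": source_arn,
    }

# One add_permission call per alias you plan to invoke:
# import boto3
# for alias in ("dev", "test", "prod"):
#     boto3.client("lambda").add_permission(
#         **alias_permission("yourfunc", alias, "arn:aws:execute-api:<region>:<account>:<api-id>/*")
#     )
```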
On the Api Gateway console, use ARN instead of a lambda function name.
In my case, I added the ARN of the Lambda function directly in place of the Lambda name. This worked for me without having to add the ${} notation.
I have an architectural question about the design and organisation of AWS Serverless resources using CloudFormation.
Currently I have multiple stacks organised by domain-specific purpose and this works well. Most of the stacks that contain Lambdas have to be transformed using the Serverless transform (using SAM for all of them). The async communication is facilitated using a combination of EventBridge and S3 + events and works well. The issue I have is with synchronous communication.
I don't want to reference Lambdas from other stacks using their exported names and invoke them directly, as this causes issues with updates and versions (if output exports are referenced in other stacks, I cannot change the resource until the reference is removed first; not ideal for CI/CD and keeping the concerns separate).
I have been using API Gateway as an abstraction, but that feels rather heavy-handed. It is nice to have that separation, but needing a domain and DNS resolution, plus having the API GW exposed externally, doesn't feel right. Maybe there is a better way to configure API GW to be internal only; if you have had success with this, could you please point me in the right direction?
Is there a better way to abstract invocation of Lambda functions from different stacks in a synchronous way? (Common template patterns for CF or something along those lines?)
I see two questions:
Alternatives for invoking Lambda functions synchronously with API Gateway.
API Gateway is one easy way, with IAM authentication to make it secure. An HTTP API is a much simpler and cheaper option compared to REST APIs. We can choose a Private API rather than a Regional/Edge one, which is not exposed outside the VPC, to make it even more secure.
We can have a private ALB with the Lambda functions as targets, for a simple use case that doesn't need any API Gateway features (this will cost some amount every month).
We can always call lambdas directly with AWS SDK invoke.
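For the direct-SDK option, a minimal sketch of a synchronous invocation; the function ARN and payload are hypothetical, and the boto3 call itself is commented out since it requires credentials:

```python
import json

def invoke_kwargs(function_arn, payload):
    # Kwargs for a synchronous (RequestResponse) lambda_client.invoke() call.
    return {
        "FunctionName": function_arn,
        "InvocationType": "RequestResponse",
        "Payload": json.dumps(payload).encode(),
    }

# import boto3
# resp = boto3.client("lambda").invoke(
#     **invoke_kwargs("arn:aws:lambda:eu-west-1:123456789012:function:orders", {"id": 42})
# )
# result = json.loads(resp["Payload"].read())
```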
Alternatives to share resources between templates.
Exporting and importing will be a bit of a problem if we need to delete and recreate the resource; it shouldn't be a problem if we are just updating it, though.
We can always store the ARN of the Lambda function in an SSM parameter in the source template and resolve the value of the ARN from the SSM parameter in the destination template. This is completely decoupled, and better than simply hard-coding the ARN.
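A sketch of that pattern in CloudFormation; the resource and parameter names here are made up for illustration:

```yaml
# Source template: publish the function ARN to SSM.
OrdersFunctionArnParam:
  Type: AWS::SSM::Parameter
  Properties:
    Name: /shared/orders-function-arn
    Type: String
    Value: !GetAtt OrdersFunction.Arn

# Destination template: resolve it with a dynamic reference, e.g.
#   FunctionArn: '{{resolve:ssm:/shared/orders-function-arn}}'
```

Because the consumer reads the parameter at deploy time rather than importing a stack export, the source stack stays free to update or replace the function.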
I am making an HTTP API in AWS. Most of my logic is handled in lambda functions. I use terraform to describe and deploy my infrastructure. So far this is going well.
I have a single project that has Lambdas for CRUD operations:
GET /journal
POST /journal
GET /journal/{id}
PUT /journal/{id}
DELETE /journal/{id}
The code and infrastructure are all currently inside a monorepo.
It's now time to add endpoints for nested resources,
for example...
/journals/{id}/sentence/{id}
and
/journals/{id}/sentence/{id}/correction
I really want the code and terraform for these sub-resources to be in a separate project because it's going to get very, very big.
I am really struggling to figure out how to make sub-resources for API gateway that exist in multiple separately deployed projects.
I want to avoid exporting the ARNs for the api-gateway resources and using them in other projects as I don't think it's a great idea to create dependencies between separate deployments.
I would really appreciate hearing any advice on how this could be managed as I am sure many people have faced this issue.
Is it possible to use route53 to route all API calls on a domain to lots of separate API gateways? If this is possible then this makes life much easier. I am really struggling to find documentation or literature that explains anything beyond creating an API with single-resource endpoints.
Edit: I had one other idea: maybe all my lambda projects could be completely ignorant of the api-gateway and just have their ARNs exported as outputs. Then I could have a completely separate project that defines the whole api-gateway and creates lambda integrations which simply use the function ARNs exported by the other projects. This would prevent each lambda project from needing to reference the api_gateway_resource of the corresponding parent resource.
I feel like this may not be a good idea though.
I want to avoid exporting the ARNs for the api-gateway resources and using them in other projects as I don't think it's a great idea to create dependencies between separate deployments.
I am not sure about that statement. Whichever way you go, you will have dependencies between the project that creates the API GW and the lambda sub-projects anyway.
As far as I understand, the question is in which direction you should create this dependency:
export the apigw and reuse it in the lambda project
export the lambda and reuse it in the apigw project
I think it makes sense for the lambda to be considered an independent piece of infrastructure in your terraform project, without creating any kind of apigw-related resource in it. First, because doing so creates a strong implicit dependency and constraint on the usage of your lambda. Second, think of it this way: what if your project grows and you need to add more and more lambdas to your API? In that scenario you probably don't want to create new methods/resources each time; it's more convenient to use something like for_each in terraform, write your code once, and automatically create new integrations whenever you add a lambda.
Hence I would avoid the first choice and go for the second option, which is way cleaner from an architectural standpoint. Everything that deals with API GW stays in the same project. Your edit points in the right direction. You could have one repo for the "Infrastructure" (call it whatever you want) and one for "Lambda". You could output the lambda ARNs as a list (or a custom map with other key parameters) and use remote state from the apigw project to loop through your lambdas and create the needed resources in one single place with for_each.
For example, use the remote state from apigw project:
data "terraform_remote_state" "lambda" {
  backend = "s3"
  config = {
    bucket = "my-bucket"
    key    = "my/key/state/for/lambda"
    region = "region"
  }
}
And use the outputs like this:
resource "aws_api_gateway_integration" "lambda" {
  for_each = data.terraform_remote_state.lambda.outputs.lambdas
  ...
  uri = each.value
}
Is it possible to use route53 to route all API calls on a domain to lots of separate API gateways? If this is possible then this makes life much easier.
Sorry, I am not sure I understand that point, but I will still try to detail the topic a bit. The "link" API Gateway provides you to call your API is just a DNS name. When you create an API, behind the scenes AWS creates a CloudFront distribution for you in the us-east-1 region. You don't have access to it through the console because it's managed by AWS. When you map an API to a domain name, you actually map the domain to the CloudFront distribution of your API. When you add methods or resources to your API (this is what you do with your lambdas), you don't actually create new APIs each time.
Given a REST API, outside of my AWS environment, which can be queried for json data:
https://someExternalApi.com/?date=20190814
How can I setup a serverless job in AWS to hit the external endpoint on a periodic basis and store the results in S3?
I know that I can instantiate an EC2 instance and just setup a cron. But I am looking for a serverless solution, which seems to be more idiomatic.
Thank you in advance for your consideration and response.
Yes, you absolutely can do this, and probably in several different ways!
The pieces I would use would be:
CloudWatch Event using a cron-like schedule, which then triggers...
A lambda function (with the right IAM permissions) that calls the API using eg python requests or equivalent http library and then uses the AWS SDK to write the results to an S3 bucket of your choice:
An S3 bucket ready to receive!
This should be all you need to achieve what you want.
I'm going to skip the implementation details, as they are largely outside the scope of your question. As such, I'm going to assume your function is already written and targets nodeJS.
AWS can do this on its own, but to make it simpler, I'd recommend using Serverless. We're going to assume you're using this.
Assuming you're entirely new to serverless, the first thing you'll need to do is to create a handler:
serverless create --template "aws-nodejs" --path my-service
This creates a service based on the aws-nodejs template on the provided path. In there, you will find serverless.yml (the configuration for your function) and handler.js (the code itself).
Assuming your function is exported as crawlSomeExternalApi on the handler export (module.exports.crawlSomeExternalApi = () => {...}), the functions entry on your serverless file would look like this if you wanted to invoke it every 3 hours:
functions:
  crawl:
    handler: handler.crawlSomeExternalApi
    events:
      - schedule: rate(3 hours)
That's it! All you need now is to deploy it through serverless deploy -v
Under the hood, what this does is create a CloudWatch schedule entry for your function. An example of it can be found in the documentation.
The first thing you need is a Lambda function. Implement your logic of hitting the API and writing data to S3 (or whatever) inside the Lambda function. Next, you need a schedule to periodically trigger your Lambda function. A schedule expression can be used to trigger an event periodically, using either a cron expression or a rate expression. The Lambda function you created earlier should be configured as the target of this CloudWatch rule.
The resulting flow will be: CloudWatch invokes the Lambda function whenever there's a trigger (depending on your CloudWatch rule), and Lambda then performs your logic.
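Putting the pieces above together, here is a sketch of such a handler in Python; the bucket name and key layout are hypothetical (the external API URL comes from the question), and the S3 write requires the Lambda execution role to allow it:

```python
import urllib.request
from datetime import datetime, timezone

API_URL = "https://someExternalApi.com/"   # external API from the question
BUCKET = "my-results-bucket"               # hypothetical bucket name

def object_key(now):
    # One object per run, partitioned by date so each day gets its own prefix.
    return f"results/{now:%Y%m%d}/{now:%H%M%S}.json"

def handler(event, context):
    now = datetime.now(timezone.utc)
    # Hit the external endpoint with the date query parameter.
    with urllib.request.urlopen(f"{API_URL}?date={now:%Y%m%d}") as resp:
        body = resp.read()
    import boto3  # bundled with the Lambda runtime
    boto3.client("s3").put_object(Bucket=BUCKET, Key=object_key(now), Body=body)
```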
I want to make the alias of an AWS lambda function point to another version.
Since I couldn't find how to update an alias using the AWS Management Console, I deleted and recreated the alias.
But then I found that all the cloudwatch rules that trigger the lambda function failed to work:
Is it possible to recreate the alias of a lambda function without breaking cloudwatch rules?
Where can I find the log for FailedInvocations of CloudWatch rules? I'd like to dig deeper to find out the reason for the failure.
Doesn't the AWS Management Console have an update-alias button?
From the AWS Management Console you can change the version number tagged to the alias.
To change the version number of your existing alias, go to the Lambda function and select the alias from the Switch versions/aliases dropdown. From the Aliases section you can change the version number, and you can also divert traffic between two versions based on weights (%).
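The same change can also be made without the console via the UpdateAlias API; a minimal sketch of the kwargs for boto3's `update_alias` (the function, alias, and version here are examples), with the actual call commented out since it needs credentials:

```python
def update_alias_kwargs(function_name, alias_name, version):
    # Kwargs for lambda_client.update_alias(): repoint an existing alias
    # at another published version instead of deleting and recreating it.
    return {
        "FunctionName": function_name,
        "Name": alias_name,
        "FunctionVersion": str(version),
    }

# import boto3
# boto3.client("lambda").update_alias(**update_alias_kwargs("myfunc", "prod", 7))
```

Because the alias is updated in place, anything referencing its ARN (such as CloudWatch rules) keeps working.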
After creating the new Lambda function alias, you can just re-select the alias in the existing CloudWatch rule without breaking it.
I am working in AWS with the API Gateway together with a Lambda function. I read about how to pass parameters over to the Lambda function; that is fine. But I want to pass the whole path over to the Lambda. Does someone know how that would be done? In particular, I want to pass the stage of the API Gateway: the Lambda function should connect to either the test or the prod server based on the stage. In the following example it would be test:
https://skjdfsdj.execute-api.us-east-1.amazonaws.com/test/name/name2
In next example it would be prod:
https://skjdfsdj.execute-api.us-east-1.amazonaws.com/prod/name/name2
Any information how that would work?
Thanks,
Benni
We can configure/deploy the API Gateway with respect to the stages and the HTTP methods that are required (see the docs).
There may be two cases:
You may have two different AWS Lambda functions implemented; in this scenario it's pretty simple, as you can just create another stage and map the Lambda function and the respective methods accordingly.
If you have to access the same Lambda function and take action corresponding to the stage, you can add, remove, and edit stage variables and their values, and use them in your API configuration to parametrize the integration of a request. Stage variables are also available in the mapping templates (via $stageVariables, with the stage name itself in $context.stage); once you have mapped the particular stage variable in the incoming request, you can use it to configure which server to call accordingly. Do check out the API Gateway context/stage variables documentation.
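For the second case, a sketch of a Lambda handler that picks a backend from the stage; with Lambda proxy integration the deployed stage arrives in event["requestContext"]["stage"], and the server URLs below are hypothetical:

```python
SERVERS = {
    "test": "https://test.example.com",  # hypothetical backend URLs
    "prod": "https://prod.example.com",
}

def server_for_stage(stage):
    # Unknown stages fall back to the test backend.
    return SERVERS.get(stage, SERVERS["test"])

def handler(event, context):
    # With Lambda proxy integration, the stage name is in requestContext.
    stage = event["requestContext"]["stage"]
    return {"statusCode": 200, "body": server_for_stage(stage)}
```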