Is there a way to force AWS to execute a Lambda request coming from an API Gateway resource in a certain execution environment? Our use case is one codebase with various models of 100-300 MB each, so each model is small enough on its own to fit in the ephemeral storage, but they are too big to play well together.
Currently, a second invocation with a different model will reuse the existing (warmed-up) execution environment and run out of storage.
I'm hoping to attach something like a parameter to the request that forces Lambda to create parallel versions of the same function for each of the models, so that we don't run over the 512 MB limit and can optimize cold-boot times, ideally without duplicating the function and having to maintain it in multiple places.
I've tried to investigate Step Functions, but I'm not sure whether there's an option for parameter-based conditionality there. AWS suggests using EFS to circumvent the ephemeral storage limits, but from what I can find, reading from EFS will be a lot slower than reading from the ephemeral /tmp/ directory.
To my knowledge: no. You cannot control the execution environments. The only thing you can do is limit the number of concurrent executions.
So you never know whether a single execution environment is serving all your events triggered from API Gateway or several are running in parallel. You also have no control over which execution environment serves the next request.
If your issue is the /tmp directory limit for AWS Lambda, why not try EFS?
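If you do try EFS, the function code barely changes: you read from the mount path instead of /tmp, and you can still memoize models per execution environment. A minimal sketch, assuming an EFS access point mounted at /mnt/models (the path, file names, and event shape here are illustrative, not from your setup):

```python
import os

MODEL_DIR = "/mnt/models"  # EFS access point mount path configured on the function

_models = {}  # reused across warm invocations of this execution environment


def load_model(name):
    # Memoize so a warm environment doesn't re-read the file from EFS.
    if name not in _models:
        with open(os.path.join(MODEL_DIR, name), "rb") as f:
            _models[name] = f.read()  # stand-in for your real deserializer
    return _models[name]


def handler(event, context):
    model = load_model(event.get("model", "default.bin"))
    return {"model_bytes": len(model)}
```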
I need to access some credentials in Lambda functions, and one way to do that is to put them in environment variables.
However, the values of these credentials are then visible in the Lambda console. I am thinking of putting them in Secrets Manager or Parameter Store, putting only the key name in a Lambda environment variable, and loading the value in the Lambda at runtime.
This approach is secure, but it adds some latency, and I am not sure how much. Is there a better solution?
Lambda environment variables are actually encrypted already; the console shows the decrypted values, but a user who does not have permission for the KMS key will not be able to see them.
As you suggested, you have these solutions:
Secrets Manager
Systems Manager Parameter Store (use a SecureString).
The benefit of either of these solutions is that you can change the values outside of the function version you're using, as well as share them across a number of Lambda functions simultaneously.
You will also incur increased latency from reaching these service endpoints. You can reduce this latency by placing your Lambda within a VPC and using VPC endpoints to the service (which allow direct private communication instead of going over the public internet), but it will still be slower than environment variables.
Ultimately the choice is yours: if you need to reuse the values and can put up with the slight latency, use Secrets Manager or Systems Manager Parameter Store. Otherwise, manage the KMS permissions so that not every user is able to decrypt the values.
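A common way to keep that latency down is to fetch the secret once per execution environment and cache it outside the handler, so only cold starts pay the round trip. A minimal sketch with boto3 (the DB_SECRET_ID environment variable name is hypothetical):

```python
import os

import boto3

secrets = boto3.client("secretsmanager")
_cache = {}  # survives across warm invocations of the same environment


def get_secret(secret_id):
    # Only the first (cold) invocation pays the Secrets Manager round trip.
    if secret_id not in _cache:
        resp = secrets.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = resp["SecretString"]
    return _cache[secret_id]


def handler(event, context):
    db_password = get_secret(os.environ["DB_SECRET_ID"])  # hypothetical env var
    # ... use the credential ...
    return {"ok": True}
```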
Does AWS provide any service for storing all our configs, where we can get a config value just by making a call? Ideally the config would be version controlled and available with low latency.
E.g. I want to use some configs from a Lambda function and be able to change them easily without changing the Lambda function itself.
You can use AWS Systems Manager Parameter Store. It provides a centralized store to manage configuration data such as database connection strings, secrets, or credentials.
https://aws.amazon.com/systems-manager/features/#Parameter_Store
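Reading a parameter at runtime is a single API call. A minimal boto3 sketch (the parameter name is illustrative):

```python
import boto3

ssm = boto3.client("ssm")


def get_config(name):
    # WithDecryption only matters for SecureString parameters;
    # it is ignored for plain String values.
    resp = ssm.get_parameter(Name=name, WithDecryption=True)
    return resp["Parameter"]["Value"]


db_url = get_config("/myapp/prod/db-url")  # hypothetical parameter name
```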
DynamoDB is typically used for that purpose. The latency for a single GetItem request is typically around 5 ms, and you can cache the results client-side to reduce the latency even further and avoid a read I/O every time.
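A minimal sketch of that client-side caching pattern with boto3 (the table name, key attribute, and TTL are hypothetical):

```python
import time

import boto3

table = boto3.resource("dynamodb").Table("app-config")  # hypothetical table
_cache = {}
TTL_SECONDS = 60  # how long a cached value is considered fresh


def get_config(key):
    hit = _cache.get(key)
    if hit is not None and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]  # served from memory, no read I/O
    item = table.get_item(Key={"config_key": key}).get("Item")
    value = item["value"] if item else None
    _cache[key] = (time.time(), value)
    return value
```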
I am new to serverless architecture using AWS Lambda and still trying to figure out how some of the pieces fit together. I have converted my website from EC2 (React client and Node API) to a serverless architecture. The React client now uses S3 static web hosting, and the API has been converted over to AWS Lambda and API Gateway.
In my previous implementation I was using Redis as a cache for responses from other third-party APIs.
API Gateway has the option to enable a cache, but I have also looked into ElastiCache as an option. They are comparable in price, with the API Gateway cache being slightly costlier.
The one issue I have run into when trying to use ElastiCache is that it needs to run in a VPC, and from inside the VPC I can no longer call out to my third-party APIs.
I am wondering if there is any benefit to using one over the other? Right now the main purpose of my cache is to reduce requests to the API, but that may change over time. Would it make sense to have a Lambda dedicated to checking ElastiCache first to see if there is a value stored, and if not, triggering another Lambda to retrieve the information from the API? Is this even possible? Or for my use case would the API Gateway cache be the better option?
Or possibly a completely different solution altogether. It's a bit of a shame that nearly everything else will qualify for the free tier, but having some sort of cache will add around $15 a month.
I am still very new to this kind of setup, so any kind of help or direction would be greatly appreciated. Thank you!
I am wondering if there is any benefit to using one over the other?
API Gateway internally uses ElastiCache to support caching, so functionally they both behave the same way. The advantage of API Gateway caching is that API Gateway checks the cache before invoking the backend Lambda, so you save the cost of a Lambda invocation for responses that are served from the cache.
Another difference is that with the API Gateway cache, the cache lookup time does not count toward the 29-second integration timeout limit in cache-miss cases.
Right now the main purpose of my cache is to reduce requests to the API but that may change over time.
I suggest making your decision about the cache based on your current use case. You can always use a completely new cache or a different solution for future caching requirements.
Would it make sense to have a Lambda dedicated to checking Elasticache first to see if there is a value stored and if not triggering another Lambda to retrieve the information from the API or is this even possible. Or for my use case would API Gateway cache be the better option?
In general, I would not suggest having an additional Lambda just for checking the cache value (to avoid latency and to avoid aggravating Lambda's cold-start problem). Either way, as mentioned above, you would end up paying for a Lambda invocation even for requests that are served by the cache; if you use the API Gateway cache, cached requests never reach Lambda at all. If you do query ElastiCache yourself, do it from the same Lambda that calls the third-party API, as sketched below.
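For completeness, a minimal sketch of doing both steps in one Lambda, assuming a Redis ElastiCache cluster reachable from the function's VPC, the redis-py client bundled with the deployment, and a NAT gateway so the function can still reach the outside API (the REDIS_HOST variable and API URL are hypothetical):

```python
import json
import os
import urllib.request

import redis  # redis-py, bundled in the deployment package or a layer

r = redis.Redis(host=os.environ["REDIS_HOST"], port=6379, decode_responses=True)


def handler(event, context):
    key = "third-party:prices"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no third-party call at all
    # Cache miss: call the upstream API, then store the response for 5 minutes.
    with urllib.request.urlopen("https://api.example.com/prices") as resp:
        data = json.load(resp)
    r.setex(key, 300, json.dumps(data))
    return data
```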
Very interested in getting hands-on with serverless in 2018. I'm already looking to use AWS Lambda in several decentralized app projects. However, I don't yet understand how you can prevent abuse of your endpoint by a third-party app (perhaps even a competitor) from driving up your usage costs.
I'm not talking about a DDoS, or about all the traffic coming from a single IP, which can happen on any network, but specifically about a third-party app's customers directly making the REST calls, which cause your usage costs to rise because their app is piggy-backing on your "open" endpoints.
For example:
I wish to create an endpoint on AWS Lambda to give me the current price of Ethereum ETH/USD. What would prevent another (or every) dapp developer from using MY lambda endpoint and causing excessive billing charges to my account?
When you deploy an endpoint that is open to the world, you're opening it up to be used, but also to be abused.
AWS provides services to mitigate common abuse methods, such as AWS Shield against DDoS, but, as your question points out, AWS cannot know what is or is not abuse of your particular Lambda function.
If your Lambda function is meant to be private, then you should use one of the API Gateway security mechanisms to prevent abuse:
IAM security
API key security
Custom security authorization (a Lambda authorizer; see the sketch below)
With one of these in place, your Lambda function can only be called by authorized users. Without one of these in place, there is no way to prevent the type of abuse you're concerned about.
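As an illustration of the third option, here is a minimal sketch of a TOKEN-type Lambda authorizer for a REST API. The token check is a placeholder; in practice you would verify a real JWT or look the token up:

```python
def is_valid(token):
    # Placeholder check -- verify a JWT signature or look up an API token instead.
    return token == "expected-token"


def authorizer(event, context):
    # TOKEN authorizers receive the caller's token and the ARN of the
    # invoked method; they return an IAM policy allowing or denying it.
    effect = "Allow" if is_valid(event.get("authorizationToken", "")) else "Deny"
    return {
        "principalId": "caller",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": event["methodArn"],
                }
            ],
        },
    }
```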
Unlimited access to your public Lambda functions, whether by bad actors or by bad software developed by legitimate 3rd parties, can result in unwanted usage of billable corporate resources and can degrade application performance. It is important that you consider ways of limiting and restricting access to your Lambda clients as part of your systems security design, to prevent runaway function invocations and uncontrolled costs.
Consider the following approach to preventing execution "abuse" of your Lambda endpoint by 3rd-party apps:
One factor you want to control is concurrency: the number of concurrent requests that are supported per account and per function. You are billed per request plus total memory allocation per request, so this is the unit you want to control. To prevent runaway costs, you prevent runaway executions, whether by bad actors or by bad software shipped by legitimate 3rd parties.
From Managing Concurrency:
The unit of scale for AWS Lambda is a concurrent execution (see Understanding Scaling Behavior for more details). However, scaling indefinitely is not desirable in all scenarios. For example, you may want to control your concurrency for cost reasons, or to regulate how long it takes you to process a batch of events, or to simply match it with a downstream resource. To assist with this, Lambda provides a concurrent execution limit control at both the account level and the function level.
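Setting the function-level limit takes one call. A minimal boto3 sketch (the function name and limit are illustrative):

```python
import boto3

lam = boto3.client("lambda")

# Reserve at most 10 concurrent executions for this function; requests
# beyond that are throttled instead of spawning (and billing) more instances.
lam.put_function_concurrency(
    FunctionName="my-function",  # hypothetical function name
    ReservedConcurrentExecutions=10,
)
```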
In addition to per-account and per-function concurrency limits, you can also control Lambda exposure by putting your Lambda behind AWS API Gateway and creating usage plans (see Create and Use API Gateway Usage Plans):
After you create, test, and deploy your APIs, you can use API Gateway usage plans to extend them as product offerings for your customers. You can provide usage plans to allow specified customers to access selected APIs at agreed-upon request rates and quotas that can meet their business requirements and budget constraints.
What Is a Usage Plan? A usage plan prescribes who can access one or more deployed API stages, and also how much and how fast the caller can access the APIs. The plan uses an API key to identify an API client and meters access to an API stage with the configurable throttling and quota limits that are enforced on individual client API keys.
The throttling prescribes the request rate limits that are applied to each API key. The quotas are the maximum number of requests with a given API key submitted within a specified time interval. You can configure individual API methods to require API key authorization based on usage plan configuration. An API stage is identified by an API identifier and a stage name.
Using API Gateway limits to create usage plans per customer, you can control API and Lambda access and prevent uncontrolled account billing.
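As an illustration, creating a plan and attaching a customer key via boto3 (all IDs, names, and limits here are hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")

# Create a plan with throttling (requests per second) and a monthly quota.
plan = apigw.create_usage_plan(
    name="bronze",
    apiStages=[{"apiId": "abc123", "stage": "prod"}],  # hypothetical IDs
    throttle={"rateLimit": 10.0, "burstLimit": 20},
    quota={"limit": 10000, "period": "MONTH"},
)

# Issue an API key for one customer and attach it to the plan.
key = apigw.create_api_key(name="customer-1", enabled=True)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
)
```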
@Matt's answer is correct, yet incomplete.
Adding a security layer is a necessary step towards security, but it doesn't protect you from authenticated callers, as @Rodrigo's answer states.
I actually just encountered, and solved, this issue on one of my Lambdas, thanks to this article: https://itnext.io/the-everything-guide-to-lambda-throttling-reserved-concurrency-and-execution-limits-d64f144129e5
Basically, I added a single line to my serverless.yml file, in the function that gets called by the said authorized 3rd party:
reservedConcurrency: 1
And here goes the whole function:
```yaml
refresh-cache:
  handler: src/functions/refresh-cache.refreshCache
  # XXX Ensures the lambda always has one slot available, and never uses more than one lambda instance at once.
  # Avoids GraphCMS webhooks abusing our lambda (GCMS will trigger the webhook once per create/update/delete operation)
  # This makes sure only one instance of that lambda can run at once, to avoid refreshing the cache with parallel runs
  # Avoids spawning tons of API calls (most of them would time out anyway, around 80%)
  # See https://itnext.io/the-everything-guide-to-lambda-throttling-reserved-concurrency-and-execution-limits-d64f144129e5
  reservedConcurrency: 1
  events:
    - http:
        method: POST
        path: /refresh-cache
        cors: true
```
The refresh-cache Lambda was invoked by a webhook triggered by a third-party service whenever any data changed. When importing a dataset, it would, for instance, trigger as many as 100 calls to refresh-cache. This behaviour was completely spamming my API, which in turn was running requests to other services in order to perform cache invalidation.
Adding this single line improved the situation a lot, because only one instance of the Lambda runs at once (no concurrent runs). The number of calls was divided by ~10: instead of 50 calls to refresh-cache, it only triggered 3-4, and all those calls worked (200 instead of 500 due to timeout issues).
Overall, pretty good. Not yet perfect for my workflow, but a step forward.
Not related, but I used https://epsagon.com/ which tremendously helped me figure out what was happening on AWS Lambda. Here is what I got:
Before applying the reservedConcurrency limit to the Lambda:
You can see that most calls fail with a timeout (30000 ms); only the first few succeed because the Lambda isn't overloaded yet.
After applying the reservedConcurrency limit to the Lambda:
You can see that all calls succeed, and they are much faster. No timeouts.
It saves both money and time.
Using reservedConcurrency is not the only way to deal with this issue; there are many others, as @Rodrigo stated in his answer. But it's a working one that may fit your workflow. It's applied at the Lambda level, not on API Gateway (if I understand the docs correctly).