Very interested in getting hands-on with Serverless in 2018. I'm already looking to use AWS Lambda in several decentralized app (dapp) projects. However, I don't yet understand how you can prevent a 3rd-party app (perhaps even a competitor) from abusing your endpoint and driving up your usage costs.
I'm not talking about a DDoS, or about all the traffic coming from a single IP, which can happen on any network, but specifically about a 3rd-party app's customers directly making the REST calls that cause your usage costs to rise, because their app is piggy-backing on your "open" endpoints.
For example:
I wish to create an endpoint on AWS Lambda to give me the current price of Ethereum ETH/USD. What would prevent another (or every) dapp developer from using MY lambda endpoint and causing excessive billing charges to my account?
When you deploy an endpoint that is open to the world, you're opening it to be used, but also to be abused.
AWS provides services to mitigate common abuse patterns, such as AWS Shield for DDoS, but those services cannot know what is or is not abuse of your particular Lambda function, which is what you are asking about.
If your Lambda function is meant to be private (called only by your own clients), then you should use one of the API Gateway security mechanisms to prevent abuse:
IAM security
API key security
Custom security authorization
With one of these in place, your Lambda function can only be called by authorized users. Without one of these in place, there is no way to prevent the type of abuse you're concerned about.
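As a rough illustration of the custom authorization option, here is a minimal sketch of a Lambda TOKEN authorizer in Node.js. The shared-secret check and environment variable are assumptions for illustration only; in practice you would validate a JWT, an HMAC signature, or similar.

// Hypothetical Lambda TOKEN authorizer attached to the API Gateway methods
// that front your "price" Lambda. API Gateway invokes it before your function.
exports.handler = async (event) => {
  // event.authorizationToken carries the value of the configured identity header
  const token = event.authorizationToken;

  // Placeholder check: compare against a shared secret from the environment.
  if (token === process.env.EXPECTED_TOKEN) {
    return buildPolicy('caller', 'Allow', event.methodArn);
  }
  // A Deny policy rejects the request before your billable Lambda ever runs.
  return buildPolicy('caller', 'Deny', event.methodArn);
};

function buildPolicy(principalId, effect, resource) {
  return {
    principalId,
    policyDocument: {
      Version: '2012-10-17',
      Statement: [{ Action: 'execute-api:Invoke', Effect: effect, Resource: resource }],
    },
  };
}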
Unlimited access to your public Lambda functions - whether by bad actors or by bad software developed by legitimate 3rd parties - can result in unwanted usage of billable corporate resources and can degrade application performance. It is important that you consider ways of limiting and restricting client access to your Lambda functions as part of your system's security design, to prevent runaway function invocations and uncontrolled costs.
Consider using the following approach to preventing execution "abuse" of your Lambda endpoint by 3rd party apps:
One factor you want to control is concurrency, the number of concurrent requests that are supported per account and per function. You are billed per request plus the compute time each request consumes (duration times allocated memory), so this is the unit you want to control. To prevent runaway costs, you prevent runaway executions - whether by bad actors, or by bad software shipped by legitimate 3rd parties.
From Managing Concurrency
The unit of scale for AWS Lambda is a concurrent execution (see Understanding Scaling Behavior for more details). However, scaling indefinitely is not desirable in all scenarios. For example, you may want to control your concurrency for cost reasons, or to regulate how long it takes you to process a batch of events, or to simply match it with a downstream resource. To assist with this, Lambda provides a concurrent execution limit control at both the account level and the function level.
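As a rough sketch of the function-level control, setting a reserved concurrency limit with the AWS SDK for JavaScript could look like the following; the function name and limit are placeholders, not from the question.

// Cap a single function's concurrency so runaway invocations cannot
// consume the whole account-level concurrency pool (or your budget).
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function capConcurrency() {
  await lambda.putFunctionConcurrency({
    FunctionName: 'eth-usd-price',        // placeholder name
    ReservedConcurrentExecutions: 10,     // at most 10 concurrent executions
  }).promise();
}

capConcurrency().catch(console.error);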
In addition to per account and per Lambda invocation limits, you can also control Lambda exposure by wrapping Lambda calls in an AWS API Gateway, and Create and Use API Gateway Usage Plans:
After you create, test, and deploy your APIs, you can use API Gateway usage plans to extend them as product offerings for your customers. You can provide usage plans to allow specified customers to access selected APIs at agreed-upon request rates and quotas that can meet their business requirements and budget constraints.
What Is a Usage Plan? A usage plan prescribes who can access one or more deployed API stages - and also how much and how fast the caller can access the APIs. The plan uses an API key to identify an API client and meters access to an API stage with the configurable throttling and quota limits that are enforced on individual client API keys.
The throttling prescribes the request rate limits that are applied to each API key. The quotas are the maximum number of requests with a given API key submitted within a specified time interval. You can configure individual API methods to require API key authorization based on usage plan configuration. An API stage is identified by an API identifier and a stage name.
Using API Gateway limits to create usage plans per customer, you can control API and Lambda access and prevent uncontrolled account billing.
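For illustration, a hedged sketch of creating an API key and a throttled usage plan with the AWS SDK for JavaScript; the rate, quota, API id, and stage name are all placeholder assumptions.

// Create an API key for one customer and attach it to a usage plan
// that throttles and caps that customer's calls to your deployed stage.
const AWS = require('aws-sdk');
const apigateway = new AWS.APIGateway();

async function createCustomerPlan(apiId, stageName) {
  const key = await apigateway.createApiKey({ name: 'customer-a', enabled: true }).promise();

  const plan = await apigateway.createUsagePlan({
    name: 'customer-a-plan',
    apiStages: [{ apiId, stage: stageName }],
    throttle: { rateLimit: 5, burstLimit: 10 },   // 5 req/s steady, bursts of 10
    quota: { limit: 100000, period: 'MONTH' },    // hard monthly cap
  }).promise();

  await apigateway.createUsagePlanKey({
    usagePlanId: plan.id,
    keyId: key.id,
    keyType: 'API_KEY',
  }).promise();
}

createCustomerPlan('a1b2c3d4e5', 'prod').catch(console.error);  // placeholder ids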
@Matt's answer is correct, yet incomplete.
Adding a security layer is a necessary step, but it doesn't protect you from authenticated callers, as @Rodrigo's answer states.
I actually just encountered - and solved - this issue on one of my lambdas, thanks to this article: https://itnext.io/the-everything-guide-to-lambda-throttling-reserved-concurrency-and-execution-limits-d64f144129e5
Basically, I added a single line to my serverless.yml file, in the function that gets called by the authorized 3rd party:
reservedConcurrency: 1
And here goes the whole function:
refresh-cache:
  handler: src/functions/refresh-cache.refreshCache
  # XXX Ensures the lambda always has one slot available, and never uses more than one lambda instance at once.
  # Prevents GraphCMS webhooks from abusing our lambda (GCMS will trigger the webhook once per create/update/delete operation)
  # This makes sure only one instance of that lambda can run at once, to avoid refreshing the cache with parallel runs
  # Avoids spawning tons of API calls (most of them would time out anyway, around 80%)
  # See https://itnext.io/the-everything-guide-to-lambda-throttling-reserved-concurrency-and-execution-limits-d64f144129e5
  reservedConcurrency: 1
  events:
    - http:
        method: POST
        path: /refresh-cache
        cors: true
The refresh-cache lambda was invoked by a webhook triggered by a third-party service whenever any data changed. When importing a dataset, it could trigger as many as 100 calls to refresh-cache. This behaviour was completely spamming my API, which in turn was running requests to other services in order to perform a cache invalidation.
Adding this single line improved the situation a lot: only one instance of the lambda runs at a time (no concurrent runs), the number of calls was divided by roughly 10 (instead of 50 calls to refresh-cache, it only triggered 3-4), and all of those calls succeeded (200 responses instead of 500s caused by timeouts).
Overall, pretty good. Not yet perfect for my workflow, but a step forward.
Not related, but I used https://epsagon.com/ which tremendously helped me figure out what was happening on AWS Lambda. Here is what I got:
Before applying reservedConcurrency limit to the lambda:
You can see that most calls fail with a timeout (30000ms); only the first few succeed because the lambda isn't overloaded yet.
After applying reservedConcurrency limit to the lambda:
You can see that all calls succeed, and they are much faster. No timeout.
Saves both money and time.
Using reservedConcurrency is not the only way to deal with this issue; there are many others, as @Rodrigo stated in his answer. But it's a working one that may fit your workflow. It's applied at the Lambda level, not at the API Gateway level (if I understand the docs correctly).
Related
I am new to serverless architecture using AWS Lambda and am still trying to figure out how some of the pieces fit together. I have converted my website from EC2 (React client and Node API) to a serverless architecture. The React client now uses S3 static web hosting, and the API has been converted over to AWS Lambda and API Gateway.
In my previous implementation I was using Redis as a cache for responses from third-party APIs.
API Gateway has the option to enable a cache, but I have also looked into ElastiCache as an option. They are comparable in price, with the API Gateway cache being slightly costlier.
The one issue I have run into when trying to use ElastiCache is that it needs to be running in a VPC, and I can then no longer call out to my third-party APIs.
I am wondering if there is any benefit to using one over the other? Right now the main purpose of my cache is to reduce requests to the API, but that may change over time. Would it make sense to have a Lambda dedicated to checking ElastiCache first to see if there is a value stored, and if not, triggering another Lambda to retrieve the information from the API - is this even possible? Or, for my use case, would the API Gateway cache be the better option?
Or possibly a completely different solution altogether. It's a bit of a shame that nearly everything else will qualify for the free tier, but having some sort of cache will add around $15 a month.
I am still very new to this kind of setup so any kind of help or direction would be greatly appreciated. Thank you!
I am wondering if there is any benefit to using one over the other?
API Gateway internally uses ElastiCache to support caching, so functionally they both behave in the same way. The advantage of using API Gateway caching is that API Gateway checks the cache before invoking the backend Lambda, so you save the cost of Lambda invocations for responses that are served from the cache.
Another difference is that when you use the API Gateway cache, cache lookup time will not be counted towards the 29-second integration timeout limit in cache-miss cases.
Right now the main purpose of my cache is to reduce requests to the API but that may change over time.
I would suggest making your decision about the cache based on your current use case. You might use a completely new cache or a different solution for other caching requirements later.
Would it make sense to have a Lambda dedicated to checking ElastiCache first to see if there is a value stored, and if not, triggering another Lambda to retrieve the information from the API - is this even possible? Or, for my use case, would the API Gateway cache be the better option?
In general, I would not suggest having an additional Lambda just for checking the cache value (to avoid extra latency and aggravating Lambda's cold-start problem). Either way, as mentioned above, you would end up paying for a Lambda invocation even for requests that are served from the cache. If you use the API Gateway cache, cached requests never reach Lambda at all.
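If you do go the ElastiCache route, the usual pattern is a cache-aside lookup inside the same Lambda rather than a separate one. A minimal sketch, assuming an ioredis client, a REDIS_URL environment variable, an API Gateway proxy event, and a hypothetical fetchFromThirdPartyApi helper:

// Cache-aside inside the Lambda handler: check Redis first, fall back to
// the third-party API on a miss, then populate the cache with a TTL.
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL); // ElastiCache endpoint, reused across warm invocations

exports.handler = async (event) => {
  const cacheKey = `product:${event.pathParameters.id}`;

  const cached = await redis.get(cacheKey);
  if (cached) {
    return { statusCode: 200, body: cached };
  }

  // fetchFromThirdPartyApi is a placeholder for your existing upstream call
  const fresh = await fetchFromThirdPartyApi(event.pathParameters.id);
  await redis.set(cacheKey, JSON.stringify(fresh), 'EX', 300); // cache for 5 minutes

  return { statusCode: 200, body: JSON.stringify(fresh) };
};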
I'm learning about AWS Lambda and I'm worried about synchronized real-time requests.
The fact that Lambda has a "cold start" doesn't sound good for handling GET requests.
Imagine a user is using the application and makes a GET HTTP request to fetch a product or a list of products; if the Lambda is cold, it might take 10 seconds to respond, which I don't see as an acceptable response time.
Is it good or bad practice to use AWS Lambda for classic (sync responses) API Rest?
Like most things, I think you should measure before deciding. A lot of AWS customers use Lambda as the back-end for their webapps quite successfully.
There's a lot of discussion out there on Lambda latency, for example:
2017-04 comparing Lambda performance using Node.js, Java, C# or Python
2018-03 Lambda call latency
2019-09 improved VPC networking for AWS Lambda
2019-10 you're thinking about cold starts all wrong
In December 2019, AWS Lambda introduced Provisioned Concurrency, which improves things. See:
2019-12 AWS Lambda announces Provisioned Concurrency
2020-09 AWS Lambda Cold Starts: Solving the Problem
You should measure latency for an environment that's representative of your app and its use.
A few things that are important factors related to request latency:
cold starts => higher latency
request patterns are important factors in cold starts
if you need to deploy in VPC (attachment of ENI => higher cold start latency)
using CloudFront --> API Gateway --> Lambda (more layers => higher latency)
choice of programming language (Java likely highest cold-start latency, Go lowest)
size of Lambda environment (more RAM => more CPU => faster)
Lambda account and concurrency limits
pre-warming strategy
Update 2019-12: see Predictable start-up times with Provisioned Concurrency.
Update 2021-08: see Increasing performance of Java AWS Lambda functions using tiered compilation.
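As a hedged illustration of the Provisioned Concurrency option mentioned above, configuring it with the AWS SDK for JavaScript could look like this; the function name, alias, and count are placeholders, not taken from the linked posts.

// Keep a fixed number of execution environments initialized for an alias,
// so requests routed to that alias avoid cold starts.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function provision() {
  await lambda.putProvisionedConcurrencyConfig({
    FunctionName: 'get-products',          // placeholder
    Qualifier: 'live',                     // alias or version to keep warm
    ProvisionedConcurrentExecutions: 5,    // pre-initialized environments
  }).promise();
}

provision().catch(console.error);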
As an AWS Lambda + API Gateway user (with Serverless Framework) I had to deal with this too.
The problem I faced:
Few requests per day per lambda (not enough to keep lambdas warm)
Time critical application (the user is on the phone, waiting for text-to-speech to answer)
How I worked around that:
The idea was to find a way to call the critical lambdas often enough that they don't get cold.
If you use the Serverless Framework, you can use the serverless-plugin-warmup plugin that does exactly that.
If not, you can copy its behavior by creating a worker that will invoke the lambdas every few minutes to keep them warm. To do this, create a lambda that will invoke your other lambdas and schedule a CloudWatch Events rule to trigger it every 5 minutes or so. Make sure to call your to-keep-warm lambdas with a custom event.source so you can exit them early, without running any actual business code, by putting the following code at the very beginning of the function:
if (event.source === 'just-keeping-warm') {
  console.log('WarmUP - Lambda is warm!');
  return callback(null, 'Lambda is warm!');
}
Depending on the number of lambdas you have to keep warm, this can be a lot of "warming" calls. AWS offers 1,000,000 free Lambda invocations every month, though.
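A minimal sketch of such a scheduled warmer follows; the function names are placeholders, and it assumes each target lambda contains the early-exit check shown above.

// Scheduled (e.g. every 5 minutes via a CloudWatch Events rule) lambda that
// fires asynchronous "keep warm" invocations at the latency-critical functions.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

const FUNCTIONS_TO_WARM = ['get-products', 'text-to-speech']; // placeholders

exports.handler = async () => {
  await Promise.all(FUNCTIONS_TO_WARM.map((name) =>
    lambda.invoke({
      FunctionName: name,
      InvocationType: 'Event', // async, we don't need the response
      Payload: JSON.stringify({ source: 'just-keeping-warm' }),
    }).promise()
  ));
  return 'warmed';
};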
We have used AWS Lambda quite successfully with reasonable and acceptable response times. (REST/JSON based API + AWS Lambda + Dynamo DB Access).
In the latency that we measured, the least amount of time was always spent invoking the functions, and the largest amount in our application logic.
There are warm-up techniques, as mentioned in the other answers.
I am developing a "micro-services" application using AWS API Gateway with either Lambda or ECS for compute. The issue now is that communication between services is via API calls through the API Gateway. This feels inefficient and less secure than it could be. Is there a way to make my microservices talk to each other in a more performant and secure manner? Like somehow talking directly within the private network?
One way I thought of is multiple levels of API gateway.
1 public API gateway
1 private API gateway per microservice. And each microservice can call another microservice "directly" inside the private network
But in this way, I need to "duplicate" my routes in 2 levels of API... this does not seem ideal. I was thinking maybe use {proxy+}, so anything under /payment/{proxy+} goes to the payment API gateway and so on - there are still 2 levels of API gateway... but this seems to be the best I can do?
Maybe there is a better way?
There are going to be many ways to build micro-services. I would start by familiarizing yourself with the whitepaper AWS published: Microservices on AWS, Whitepaper - PDF version.
In your question you stated: "The issue now is that communication between services is via API calls through the API gateway. This feels inefficient and less secure than it could be. Is there a way to make my microservices talk to each other in a more performant and secure manner?"
Yes - in fact, the AWS whitepaper and the API Gateway FAQ describe API Gateway as a "front door" to your application. The intent of API Gateway is to be used by external clients communicating with your AWS services, not by AWS services communicating with each other.
There are several ways AWS resources can communicate with each other to call micro-services. A few are outlined in the whitepaper, and this is another resource I have used: Better Together: Amazon ECS and AWS Lambda. The services you use will be based on the requirements you have.
By breaking monolithic applications into small microservices, the communication overhead increases because microservices have to talk to each other. In many implementations, REST over HTTP is used as a communication protocol. It is a light-weight protocol, but high volumes can cause issues. In some cases, it might make sense to think about consolidating services that send a lot of messages back and forth. If you find yourself in a situation where you consolidate more and more of your services just to reduce chattiness, you should review your problem domains and your domain model.
To my understanding, the root of your problem is routing of requests to micro-services. To maintain the "Characteristics of Microservices" you should choose a single solution to manage routing.
API Gateway
You mentioned using API Gateway as a routing solution. API Gateway can be used for routing... however, if you choose to use API Gateway for routing, you should define your routes explicitly in one level. Why?
Using {proxy+} increases attack surface because it requires routing to be properly handled in another micro-service.
One of the advantages of defining routes in API Gateway is that your API is self-documenting. If you have multiple API gateways, it will become convoluted.
The downside of this is that it will take time, and you may have to change existing API's that have already been defined. But, you may already be making changes to existing code base to follow micro-services best practices.
Lambda or other compute resource
Despite the reasons listed above to use API Gateway for routing, if configured properly another resource can properly handle routing. You can have API Gateway proxy to a Lambda function that has all micro-service routes defined or another resource within your VPC with routes defined.
Result
What you do depends on your requirements and time. If you already have an API defined somewhere and simply want API Gateway to be used to throttle, monitor, secure, and log requests, then you will have API Gateway as a proxy. If you want to fully benefit from API Gateway, explicitly define each route within it. Both approaches can follow micro-service best practices, however, it is my opinion that defining each public API in API Gateway is the best way to align with micro-service architecture. The other answers also do a great job explaining the trade-offs with each approach.
I'm going to assume Lambdas for the solution, but they could just as well be ECS instances or ELBs.
Current problem
One important concept to understand about lambdas before jumping into the solution is the decoupling of your application code and an event_source.
An event source is a way to invoke your application code. You mentioned API Gateway; that is only one method of invoking your lambda (an HTTP request). Other interesting event sources relevant to your solution are:
API Gateway (as noted, not effective for inter-service communication)
Direct invocation (via AWS Sdk, can be sync or async)
SNS (pub/sub, eventbus)
There are 20+ different ways of invoking a lambda (see the documentation).
Use case #1 Sync
So, if your HTTP response depends on one lambda calling another and on that 2nd lambda's result, a direct invoke might be a good enough solution; this way you can invoke the lambda synchronously. It also means that lambda should be subscribed to API Gateway as an event source and have code to normalize the 2 different types of events. (This is why lambda documentation usually has event as one of the parameters.)
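A hedged sketch of the synchronous direct invocation; the function name and payload shape are placeholder assumptions.

// Service A invokes Service B's lambda directly and waits for its result,
// instead of going back out through API Gateway.
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

async function callServiceB(payload) {
  const result = await lambda.invoke({
    FunctionName: 'service-b',            // placeholder
    InvocationType: 'RequestResponse',    // synchronous
    Payload: JSON.stringify(payload),
  }).promise();

  return JSON.parse(result.Payload);      // Service B's response
}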
Use case #2 Async
If your HTTP response doesn't depend on the other microservices' (lambdas') execution, I would highly recommend SNS for this use case: your original lambda publishes a single event, and you can have more than one lambda subscribed to that event executing in parallel.
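For example, a minimal sketch of publishing such an event; the topic ARN and message shape are assumptions.

// The originating lambda publishes one event; any number of subscribed
// lambdas receive it and run in parallel.
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

async function publishOrderCreated(order) {
  await sns.publish({
    TopicArn: process.env.ORDER_EVENTS_TOPIC_ARN,   // placeholder topic
    Message: JSON.stringify({ type: 'ORDER_CREATED', order }),
  }).promise();
}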
More complicated use cases
For more complicated use cases:
Batch processing, fan-out pattern: example #1, example #2
Chained execution (one lambda calls the next, which calls the next, etc.): AWS Step Functions
There are multiple ways and approaches to doing this; which is best depends on your current setup and infrastructure, and on how much flexibility you have to implement or modify the existing code base.
Communication between services behind the API Gateway needs to be carefully implemented to avoid loops, exposing your data, or even worse, blocking yourself. See the "generic" image to get a better understanding:
While using HTTP for communicating between the services, it is common to see traffic going out of the current infrastructure and then coming back in through the same API Gateway, something that could be avoided by simply going directly to the other service instead.
In the previous image, for example, when service B needs to communicate with service A, it is advisable to do so via the internal (ELB) endpoint instead of going out and back through the API Gateway.
Another approach is to use HTTP "only" at the API Gateway and use other protocols to communicate between your services, for example gRPC. (This is not the best alternative in some cases, since it depends on your architecture and on your flexibility to modify/adapt existing code.)
There are cases where your infrastructure is more complex and you cannot communicate on demand between your containers, or the endpoints are simply unreachable. In these cases, you could try to implement an event-driven architecture (SQS and AWS Lambda).
I like going asynchronous by using events/queues when possible; from my perspective it "scales" better, and most of the services become just consumers/workers, with no need to listen for incoming requests (no HTTP needed). Here is an article explaining how to use RabbitMQ for this purpose: communicating microservices within docker
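As a rough sketch of the SQS + Lambda variant mentioned above (the queue URL and message shape are assumptions): one service drops a message onto the queue, and a Lambda subscribed to the queue processes it.

// Producer side: a service enqueues work instead of calling another service over HTTP.
const AWS = require('aws-sdk');
const sqs = new AWS.SQS();

async function enqueueTask(task) {
  await sqs.sendMessage({
    QueueUrl: process.env.TASKS_QUEUE_URL,   // placeholder queue
    MessageBody: JSON.stringify(task),
  }).promise();
}

// Consumer side: a Lambda with the queue configured as its event source.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const task = JSON.parse(record.body);
    // ... do the work for this task ...
    console.log('processed task', task);
  }
};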
These are just some ideas that I hope can help you find your own "best" way, since this varies a lot and every scenario is unique.
I don't think your question is strictly related to AWS but more like a general way of communication between the services.
API Gateway is used as an edge service, i.e. a service at your backend boundary that is accessible to external parties. For communication behind the API Gateway, between your microservices, you don't necessarily have to go through the API Gateway again.
There are 2 ways of communication which I'd mention for your case:
HTTP
Messaging
HTTP is the simplest way of communicating, as it's naturally easier to understand and there are tons of libraries that make it easy to use.
Despite the advantages, there are a couple of things to look out for:
Failure handling
Circuit breaking in case a service is unavailable to respond
Consistency
Retries
Using service discovery (e.g. Eureka) to make the system more flexible when calling another service
On the messaging side, you have to deal with asynchronous processing and with infrastructure concerns such as setting up and maintaining the message broker. It's not as easy to use as plain HTTP, but it lets you solve consistency problems by being eventually consistent.
Overall, there are tons of things which you have to consider and everything is about trade-offs. If you are just starting with microservices, I think it's best to start with using HTTP for communication and then slowly going to the messaging alternative.
For example, in the Java + Spring Cloud Netflix world, you can have Eureka with Feign, and with that it's really easy to use logical addresses for the services, which Eureka translates to actual IPs and ports. Also, if you want to use Swagger for your REST APIs, you can even generate Feign client stubs from it.
I've had the same question on my mind for a while now and still cannot find a good generic solutions... For what it's worth...
If the communication is one way and the "caller" does not need to wait for a result, I find Kinesis streams very powerful - just post a "task" onto the stream and have the stream trigger a lambda to process it. But obviously, this works in very limited cases...
For the request-reply world, I call the API Gateway endpoints just like an end user would (with the added overhead of marshalling and unmarshalling data to "fit" into the HTTP world, and unnecessary multiple authentications).
In rare cases, I may have a single backend lambda function which gets invoked by both the Gateway API lambda and other microservices directly. This adds an extra "hop" for "end users" (instead of [UI -> Gateway API -> GatewayAPI lambda], now I have [UI -> Gateway API -> GatewayAPI lambda -> Backend lambda]), but makes microservice originated calls faster (since the call and all associated data no longer need to be "tunneled" through an HTTP request). Plus, this makes the architecture more complicated (I no longer have a single official API, but now have a "back channel" direct dependencies).
If I understood the whole concept correctly, the "serverless" architecture assumes that instead of using your own servers or containers, you should use a bunch of AWS services. Usually such an architecture includes Amazon API Gateway, a bunch of Lambda functions, and DynamoDB (or an alternative) for storing data and state, since Lambda can't keep state. A service like EC2 doesn't participate in all this, because it is a virtual server and would diminish all the benefits of a serverless architecture.
All this looks really cool, but I feel like I'm missing something important, because right now this doesn't seem applicable to use cases such as real-time applications.
Say I have 2 users online. One of them performs an action in the app, which triggers changes in the database, which in turn should trigger changes in the second user's app.
The conventional way to send data or commands from the server to the client is a websocket connection. But with a serverless architecture there seems to be no way to establish and maintain a websocket connection. So... where did I misunderstand the concept? Or, if I understood everything correctly, how do I implement the interaction between the 2 users described above?
Where did I misunderstand the concept?
Your observation is correct. It doesn't work out of the box using API Gateway and Lambda.
An applicable solution, as described here, is to use AWS IoT - yes, another AWS service.
Serverless isn't just a matter of Lambda, API Gateway and DynamoDB; it's much bigger than that. One of the big advantages of serverless is the operational burden it takes off your plate: no more patching, no more capacity planning, no more config management. Those may seem trivial, but doing them well across a significant fleet of instances is complex, expensive and time consuming. Another benefit is the economics. Public cloud uses utility billing, meaning you pay for what you provision whether or not you actually use it; with AWS, most services bill by the hour, but Lambda bills per 100ms of execution. The cheapest EC2 instance running for a full month is about $10/month (double that for redundancy), while $20 in Lambda pricing gets you millions of invocations, so for most cases serverless is significantly cheaper.
Serverless isn't for everything, though; it has its limitations. For example, it's not meant for running arbitrary binaries. You can't run nginx in Lambda (for example); it's only meant to be a runtime environment for the programming languages it supports. It's also specifically meant for event-based workloads, which is perfect for microservice-based architectures: small, independent, discrete pieces of compute doing work and, when done, sending an event to one or more others to do something else, returning a response if needed.
To address your concerns about real-time processing: depending on what your code is doing, your Lambda function could complete in anywhere from under 100ms up to 5 minutes. There are strategies to optimize its duration, but in general it's intended for short-lived work, which is conducive to real-time scenarios.
In your example of the 2 users interacting with the web app and the db, that could very easily be built using serverless technologies with one or two functions and a DynamoDB table. The total round-trip time could be as low as milliseconds, or at most a few seconds; it really all depends on your code and what it's doing. These would all be HTTP calls, so no websockets needed. Think of a number of APIs calling each other, with your Lambda code as the orchestrator.
You might want to look at SNS (Simple Notification Service). In your example, if app user 2 is a subscriber to an SNS topic, then when app user 1 makes a change that triggers an SNS message, it will be pushed to the subscriber (app user 2). The message can be pushed over several supported protocols (Amazon, Apple, Google, MS, Baidu push services) in addition to SMTP or SMS. The SNS message can be triggered by a Lambda function, or directly from a DynamoDB stream after an update (a database trigger). It's up to the app developer to select a message protocol and format; the app only has to receive messages through its native channels. This may not exactly be millisecond-latency "real-time", but it's fast enough for all but the most latency-sensitive applications.
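A rough sketch of the DynamoDB-stream-triggered variant, assuming the stream is configured with NEW_IMAGE and using a placeholder topic ARN: a Lambda subscribed to the table's stream pushes each change to an SNS topic.

// Lambda subscribed to a DynamoDB stream: every change made by user 1
// is published to an SNS topic that user 2's endpoint subscribes to.
const AWS = require('aws-sdk');
const sns = new AWS.SNS();

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName !== 'INSERT' && record.eventName !== 'MODIFY') continue;

    // Convert the DynamoDB attribute-value format into a plain JS object
    const item = AWS.DynamoDB.Converter.unmarshall(record.dynamodb.NewImage);

    await sns.publish({
      TopicArn: process.env.CHANGES_TOPIC_ARN,   // placeholder topic
      Message: JSON.stringify(item),
    }).promise();
  }
};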
I've been working on an AWS serverless application for several months now, and am amazed at the variety of services available. The rate of improvement and new features being added is enough to leave you out-of-breath.
I am facing strange behavior in an application that uses AWS IAM to automatically create users and roles.
My sequence of operation is:
Send an action CreateUser;
Send an action CreateAccessKey for this created user;
Send an action GetUser for this created user to get the account id. I need to do this because I only have the root key and secret;
Send an action CreateRole, with an AssumeRolePolicyDocument where the Principal is this created user.
When I execute step 4, I receive a MalformedPolicyDocument (Invalid principal in policy: "AWS":"arn:aws:iam::123412341234:user/newuser").
But, if before the step 4 I put a 15 seconds delay, it runs without any problem.
Is there any workflow where I don't need to rely on a fixed delay, such as polling some IAM web service to check whether the user is ready to be used?
As outlined in my answer to Deterministically creating and tagging EC2 instances, the AWS APIs need to be generally treated as eventually consistent only.
Specifically, I mention that it is reasonable to assume that each and every single API action is operated entirely independently by AWS, i.e. is a micro service on its own. This explains why, even within a service like Amazon EC2 or, in your case, AWS Identity and Access Management (IAM), a call to one API action that results in a resource state change isn't necessarily visible to (all) other API actions within that service right away - that's precisely what you are experiencing: even though the created user is already visible to one IAM API action (GetUser), it isn't yet visible to a different IAM API action (CreateRole).
The correct workflow to work around this inherent characteristic is to repeat the desired API call with an exponential backoff strategy until it succeeds (or a configured timeout is reached), which is good practice anyway in asynchronous communication scenarios. Several AWS SDKs meanwhile offer integrated support for retries with exponential backoff, which is usually applied transparently but can be tailored to specific scenarios if need be, e.g. to extend the default timeout for very high latency scenarios.
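A hedged sketch of such a retry loop around the failing CreateRole call; the role name, policy document, and retry parameters are placeholders.

// Retry CreateRole with exponential backoff until IAM's eventually
// consistent view of the newly created user catches up.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function createRoleWithBackoff(roleName, assumeRolePolicyDocument, maxAttempts = 6) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await iam.createRole({
        RoleName: roleName,
        AssumeRolePolicyDocument: assumeRolePolicyDocument,
      }).promise();
    } catch (err) {
      // The "Invalid principal" error surfaces as MalformedPolicyDocument
      // while the new user is not yet visible to CreateRole.
      if (err.code !== 'MalformedPolicyDocument' || attempt === maxAttempts - 1) throw err;
      await sleep(500 * 2 ** attempt);   // 0.5s, 1s, 2s, 4s, ...
    }
  }
}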