How to build complex apps with AWS Lambda and SOA? - amazon-web-services

We currently run a Java backend which we're hoping to move away from and switch to Node running on AWS Lambda & Serverless.
Ideally, during this process we want to build out a fully service-oriented architecture.
My question: if our frontend Angular app requests the current user's ordered items, it would need to hit three services: the user service, the order service, and the item service.
Does this mean we would need to make three GET requests to these services? At the moment we have a single endpoint built for that specific request, which can take advantage of DB joins for optimal performance.
I understand the benefits of SOA, but how do we scale when performing more complex requests such as this? Are there any good resources I can look at?

Looking at your question, I would advise you to align your priorities first: why do you want to move away from the Java backend that you're running on now? Which problems do you want to overcome?
You're combining the microservices architecture and the concept of serverless infrastructure in your question. Both can be used in conjunction, but they don't have to be. A lot of companies are using microservices, even bigger enterprises like Uber (on NodeJS), but serverless infrastructures like Lambda are really just getting started. I would advise you to read up on microservices in particular, e.g. here are some nice articles. You'll also find answers to your question about performance and joins.
When considering an architecture based on Lambda, do consider that no state whatsoever is possible in a Lambda function. This goes a step further than the stateless services we usually talk about, which generally just avoid keeping 'client state' between requests. A Lambda function cannot rely on any state at all, so e.g. a persistent DB connection pool is not possible. For all the downsides, there's also a lot of stuff you don't have to deal with, which can be very beneficial, especially in terms of scalability.
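A common way to work within that constraint is to create the database client outside the handler, so warm invocations may reuse it while you never depend on that reuse. A minimal sketch for the Node.js runtime, assuming a Postgres backend and the pg client (both purely illustrative):

```typescript
// Hypothetical user-service handler for the Node.js runtime.
// Anything created outside the handler *may* survive across warm invocations,
// but Lambda gives no guarantee, so it must never be treated as durable state.
import { Client } from 'pg';

let client: Client | undefined;

export const handler = async (event: { userId: string }) => {
  if (!client) {
    // Cold start: open a single connection; there is no long-lived pool to manage.
    client = new Client({ connectionString: process.env.DATABASE_URL });
    await client.connect();
  }
  const result = await client.query('SELECT * FROM users WHERE id = $1', [event.userId]);
  return result.rows[0];
};
```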

Related

Lambda AWS function for each endpoint

I have an application with 3 modules and 25 endpoints (between modules). Modules: Users, CRM, PQR.
I want to optimize AWS costs and generally follow architecture best practices.
Should I build a lambda function for each endpoint?
Does using many functions cost more than using only one?
The link in Gustavo's answer provides a decent starting point. I'll elaborate on that based on the criteria you mentioned in the comments.
You mentioned that you want to optimize for cost and architecture best practices, so let's start with the cost component.
Lambda pricing is fairly straightforward and you can check it out on the pricing page. Basically, you pay for how long your code runs, in 1 ms increments. How much each millisecond costs depends on how many resources you provision for your Lambda function. Lambda is typically not the most expensive item on your bill, so I'd only start optimizing it once it becomes a problem.
From a pricing perspective it doesn't really matter whether you have fewer or more Lambda functions; you pay for the aggregate execution time and memory either way.
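To put rough numbers on that, here's a back-of-the-envelope estimate; the rates are assumptions based on the public x86 pricing at the time of writing, so check the pricing page for your region:

```typescript
// Rough monthly cost estimate for one function: 1M invocations/month,
// 128 MB memory, 100 ms average duration. Rates below are assumptions.
const invocationsPerMonth = 1_000_000;
const avgDurationSeconds = 0.1;          // 100 ms per invocation
const memoryGb = 0.125;                  // 128 MB
const pricePerGbSecond = 0.0000166667;   // assumed compute rate
const pricePerMillionRequests = 0.2;     // assumed request rate

const computeCost = invocationsPerMonth * avgDurationSeconds * memoryGb * pricePerGbSecond;
const requestCost = (invocationsPerMonth / 1_000_000) * pricePerMillionRequests;

console.log(`~$${(computeCost + requestCost).toFixed(2)} per month`); // roughly $0.41
```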
In terms of architecture best practices, there is no single one-size-fits-all reference architecture, but the post Gustavo mentioned is a good starting point: Best practices for organizing larger serverless applications. How you structure your application can depend on many factors:
Development team size
Development team maturity/experience (in terms of AWS technologies)
Load patterns in the application
Development process
[...]
You mention three main components/modules with 25 endpoints in total:
Users
CRM
PQR
Since you didn't tell us much about the technology stack, I'm going to assume you're trying to build a REST API that serves as the backend for some frontend application.
In that case you could think of the three modules as three microservices, which implement specific functionality for the application. Each of them implements a few endpoints (combination of HTTP-Method and path). If you start with an API Gateway as the entry point for your architecture, you can use that as an abstraction of the internal architecture for your clients.
The API Gateway can route requests to different Lambda functions based on the HTTP method and path. You can now choose how to implement the backend. I'd probably start off with a common codebase from which multiple Lambdas are built and use the API gateway to map each endpoint to a Lambda function. You can also start with larger multi-purpose Lambdas and refactor them in time to extract specific endpoints and then use the API Gateway to route to the more specialized Lambdas.
You might have noticed that this is a bit vague, and that's on purpose. I think you're going to end up with roughly as many Lambdas as you have endpoints, but that doesn't mean you have to start that way. If you're just getting started with AWS, managing a bunch of Lambdas and their interactions can seem daunting. Start with more familiar architectures and then refactor them to be more cloud native over time.
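If it helps to see the routing idea in code, here is a rough AWS CDK sketch in TypeScript; the handler names, module split, and asset path are placeholders, not a prescription:

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';

const app = new App();
const stack = new Stack(app, 'ApiStack');

// One function per module, built from a shared codebase in ./dist (placeholder path).
const usersFn = new lambda.Function(stack, 'UsersFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'users.handler',
  code: lambda.Code.fromAsset('dist'),
});
const crmFn = new lambda.Function(stack, 'CrmFn', {
  runtime: lambda.Runtime.NODEJS_18_X,
  handler: 'crm.handler',
  code: lambda.Code.fromAsset('dist'),
});

// API Gateway routes requests by path to the specialized functions.
const api = new apigateway.RestApi(stack, 'BackendApi');
api.root.addResource('users').addMethod('GET', new apigateway.LambdaIntegration(usersFn));
api.root.addResource('crm').addMethod('GET', new apigateway.LambdaIntegration(crmFn));

app.synth();
```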
It depends on your architecture and how decoupled you want it to be. Here is a good starting point for you to take a look into best practices:
https://aws.amazon.com/blogs/compute/best-practices-for-organizing-larger-serverless-applications/

Can AWS Lambda Replace an entire Rest Api layer in an enterprise web application

I am new to AWS and have been reading about AWS Lambda. It's very useful, but you still have to write individual Lambda functions rather than the application as a whole. I am wondering whether, practically speaking, AWS Lambda can replace an entire REST API layer in an enterprise web application.
Of course, everything is possible in the computing world, but the question you need to answer is: is Lambda/serverless the best way for me?
For example, you need a small business flow per Lambda (Lambda has hardware limits and needs short compute and startup times to keep costs down), which means you must split up your flow; whether that succeeds depends on your business area and implementation. Does your domain fit this model? Lambda can handle almost everything in combination with other AWS services (to be honest, in some cases Lambda is a bit harder than the current system and community support is smaller than for traditional systems, but it also has lots of advantages, as you know). You can check this repo, a fully serverless booking app, and this serverless e-commerce repo.
To sum up, if your team is ready for it, you can start by converting some part of your application and checking that everything is OK. The answer really depends on your team and business, because nothing is impossible and that's engineering.
That's just my opinion, since your question is really more of a discussion question.

Microservice granularity: Per domain model or not?

When building a microservice-oriented application, I wonder what the appropriate microservice granularity would be.
Let's imagine an application consisting of:
A set of various resource types, where each resource maps to a given business model (e.g. in a todo app, resources could be User, TodoList and TodoItem...).
Each of those resources is saved in a NoSQL database that could be replicated.
Each of those resources is exposed through a REST API.
The application manages an internal chat room.
An API gateway bringing together the chat room and REST API interactions.
The application front end: an SPA connected to the API gateway.
The first (and naive) approach when thinking about how microservices could match the needs of this application would be:
One monolithic service managing EVERY resource and all the business logic:
By managing, I mean providing the REST API for all of those resources and handling their persistence in the database.
One service for each database replica.
One service providing the internal chat room, using WebSockets or whatever.
One service for authentication.
One service for the API gateway.
One service serving the static assets for the SPA front end.
Another approach could be to split service 1 into as many services as there are business models in the system (let's call those services resource services).
I wonder what the benefits of this second approach are.
In fact, I see a lot of downsides to this approach:
The need to set up an inter-service communication process.
When requesting a service representing resource X that has a relation to resource Y, a lot more work is needed (i.e. inter-service requests).
More DevOps work.
More difficulty sharing common code between resource services.
Where to put the business logic?
When starting a fresh project, this second approach seems to me a bit over-engineered.
I feel like starting with the first approach and THEN splitting the monolithic resource service into several specific services, depending on the observed needs, would minimize the complexity and risks.
What are your opinions on that?
Are there any best practices?
Thanks a lot!
The first approach is not the microservice way, by definition.
And yes, the idea is to split: one service per Bounded Context, e.g. one for Users, one for Inventory, one for Todo things, etc.
The idea of microservices, put very simply, assumes:
You are willing to pay extra DevOps work for modularity and for removing, as far as possible, the dependencies between different bounded contexts (and between dev/product/PM teams).
The idea is built around ownership and modularity, allowing separate teams to develop their own piece of code without needing to know the rest of the system. As long as there is a Ubiquitous Language (a common set of conventions, communication protocols, terminology and documentation), they can work in a completely isolated, autonomous fashion.
Maintaining, managing, testing and developing become much faster, at the cost of an initial DevOps and architecture engineering investment.
Code sharing should be minimal and, if required, should be limited to representing the Ubiquitous Language (a common communication interface / set of conventions). Sharing well-documented code that acts as an integration/infrastructure mini-framework, with a dedicated dev/DevOps team attached to it, can work well, as long as it is, as I said, well documented and treated as a separate architecture-related sub-project.
A properly engineered microservice architecture can reduce maintenance and development times by a huge margin, but it requires a serious reason to use it (there are lots of reasons, and lots of articles on that, which I won't go into here) and a serious engineering investment at the start.
It brings modularity, a concept of ownership, and decoupling of the different contexts of your app.
My personal advice: check whether you really need a microservice architecture. If you cannot invest engineering thought and DevOps effort at the start, and do not have proper reasons for such a system, why bother?
If you do need microservices, I would really advise against the first method. You will develop the wrong things, miss the true challenges of microservices, and could end up with a huge refactor that takes more work than engineering the microservice system properly from the start. It's like building something square and then trying to make it fit into a round bucket later.
Now to answer your question's title: granularity (your question body is a bit different from your title).
Attach it to the Domain Model / Bounded Context. You can make meatier services at the start in order to avoid complex distributed transactions.
First, just answer the question of whether you need distributed transactions in your design/architecture at all.
If not, you probably have a good design.
Passing reference IDs between models from different microservices should suffice; if not, try to rethink whether more of the complex transactions could be avoided.
If your system has an unavoidable number of distributed transactions, perhaps look towards using or building a small CQRS mini-framework as your "shared code infrastructure component" / communication protocol.
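As an illustration of passing reference IDs instead of embedding foreign models, here is a small TypeScript sketch (the service URLs and record shapes are made up):

```typescript
// The order service stores only identifiers from other bounded contexts.
interface Order {
  id: string;
  userId: string;    // reference into the user service's context
  itemIds: string[]; // references into the item service's context
}

// When a caller actually needs user details, an API-composition layer (or the
// order service itself) resolves the reference with a separate request.
async function getOrderWithUser(orderId: string) {
  const order: Order = await fetch(`https://orders.example.internal/orders/${orderId}`)
    .then(r => r.json());
  const user = await fetch(`https://users.example.internal/users/${order.userId}`)
    .then(r => r.json());
  return { ...order, user }; // composed view, no distributed transaction needed
}
```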
This is the key problem of microservices, or any other SOA approach. It is where theory meets reality. In general, you should not force the microservices architecture for its own sake; it should come naturally from functional decomposition (top-down) and from operational, technological, and DevOps needs (bottom-up). The first approach is closer to what you would need to do; however, as a first step do not focus so much on the technology aspect. Ask yourself why you would need to implement a separate service for a particular business function. Treat it as a micro-application with all its technical resources. Ask yourself if there is a reason to implement a particular function as a full-stack app.
Some of the functionalities you mentioned in scenario 1) are naturally OK, such as the 'authentication' service; this is probably a good candidate.
For decomposing business functions into separate services, focus on the 'dependencies' problem: if there are too many dependencies and you see that you have to implement a bigger chunk of the data model, then naturally it is not a microservice any more.
Try applying a litmus test: if you can 'turn off' a particular piece of functionality and the system still makes sense, it is a candidate for a service or for further decomposition.

I need feedback on this partly serverless architecture design

I want to host a scalable blog or an application of this sort in Node.js on AWS, making use of AWS technologies. The idea here is to have a small EC2 server that is not responsible for serving the website, but only for running the CMS/admin panel. While these operations could be serverless as well, I think having a dedicated small EC2 VM instance could be more efficient and works better with existing frameworks, etc.
In my diagram above, you can see there are two types of users: audiences and admins/writers. Admin CRUD operations also cause Lambda to run. Lambda generates the static site after admin changes, and the result is delivered to S3. Users are directed to the static site hosted in S3. Only admins/writers have access to the server-connected part of the site.
I think this is a good design for an extremely scalable and relatively cheap site, as long as the user-facing side is all static. An alternative to this is a CDN, but then I have to deal with cache invalidation issues, a site that updates slower, and a larger server.
This seems like a win-win to me. Feedback?
This ought to be a comment rather than an answer, but as I don't have enough points...
There are a couple of other considerations for this architecture. Lambda functions are great for scaling out microservices horizontally, with each small function being executed in parallel tens or hundreds of times. Generating a static site, however, is typically a single-threaded operation, so you may not see the gains you expect. You'll also need to watch the timeout period (currently a maximum of 300 seconds) and make sure that you can generate the site in that time. Of course, if you are not running Lambda code, you are not getting charged.
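If you do generate the site inside Lambda, one practical safeguard is to check the remaining execution time on the context object and stop cleanly before the timeout. A rough sketch, with the page-generation step left as a placeholder:

```typescript
import { Context } from 'aws-lambda';

// Regenerate pages until execution time runs low, then stop and report what's
// left so a follow-up invocation can resume (pages/generatePage are hypothetical).
export const handler = async (event: { pages: string[] }, context: Context) => {
  const remaining: string[] = [];
  for (const page of event.pages) {
    if (context.getRemainingTimeInMillis() < 10_000) {
      remaining.push(page); // not enough time left; defer to a later invocation
      continue;
    }
    await generatePage(page); // hypothetical: render one page and upload it to S3
  }
  return { regenerated: event.pages.length - remaining.length, remaining };
};

async function generatePage(page: string): Promise<void> {
  // placeholder for the real static-site generation step
}
```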
For your admin frontend I would suggest Elastic Beanstalk; even if you peg it at a single instance, it gives you lots of great features like rolling updates.
Good luck with the project.

Api Gateway, multiple lambda in the same JAR

I'm trying to deploy an API suite using API Gateway and implementing the code in Java using Lambda. Is it OK to have many (related, of course) Lambdas in a single jar (which is what I'm planning to do), or is it better to create a single jar for each Lambda I want to deploy? (This could become a mess very easily.)
This is really a matter of taste but there are a few things you have to consider.
First of all, there are limitations on how big a single Lambda upload can be (50 MB at the time of writing).
Second, there is also a limit on the total size of all the code that you upload (currently 1.5 GB).
These limitations may not be a problem for your use case but are good to be aware of.
The next thing you have to consider is where you want your overhead.
Let's say you deploy a CRUD interface to a single Lambda and you pass an "action" parameter from API Gateway so that you know which operation you want to perform when you execute the Lambda function.
This adds a slight overhead to your execution, as you have to route the action to the appropriate operation. The routing is likely very fast, but it nevertheless adds CPU cycles to your function execution.
On the other hand, deploying the same jar over several Lambda functions will quickly get you closer to the limits I mentioned earlier, and it also adds administrative overhead in managing your Lambda functions as that number grows. They can of course be managed via CloudFormation or CLI scripts, but it will still add administrative overhead.
I wouldn't say there is a right and a wrong way to do this. Look at what you are trying to do, think about what you would need to manage the deployment and take it from there. If you get it wrong you can always start over with another approach.
Personally, I like very small service Lambdas that do internal routing and handle more than just a single operation, but are still small and focused on a specific type of task, be it CRUD for a database table or managing a select few very closely related operations.
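To illustrate that internal-routing style (sketched in Node/TypeScript for brevity, though the same structure applies to a Java handler packaged in one jar), here is a hypothetical single-table CRUD Lambda switching on the HTTP method forwarded by API Gateway:

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// One small Lambda that owns all CRUD operations for a single table and
// routes internally on the HTTP method (the "action" in this case).
export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  switch (event.httpMethod) {
    case 'GET':
      return { statusCode: 200, body: JSON.stringify(await listItems()) };
    case 'POST':
      return { statusCode: 201, body: JSON.stringify(await createItem(JSON.parse(event.body ?? '{}'))) };
    case 'DELETE':
      return { statusCode: 204, body: '' };
    default:
      return { statusCode: 405, body: 'Method not allowed' };
  }
};

// Placeholder implementations; a real function would talk to its data store.
async function listItems() { return []; }
async function createItem(item: unknown) { return item; }
```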
There's some nice advice on serverless.com
As polythene says, the answer is "it depends". But they've listed the pros and cons of four ways of going about it:
Microservices Pattern
Services Pattern
Monolithic Pattern
Graph Pattern
https://serverless.com/blog/serverless-architecture-code-patterns/