What is the AWS recommendation for hosting multiple microservices? Can we host all of them on the same API Gateway/Lambda? With this approach, I see that when we want to update the API of one service, we are redeploying the APIs of all services, which means our regression testing has to cover all services. On the other hand, creating a separate API Gateway/Lambda per service means we end up with multiple resources (and multiple accounts) to manage, which can become an operational burden later on.
A microservice is autonomously developed and should be built, tested, deployed, and scaled independently. There are many ways to do this and to split your product into multiple services. One example pattern is to have an API Gateway as the front door to all the services behind it, so the gateway itself would be its own service.
A Lambda function usually performs a single task, and a service can be composed of multiple Lambda functions. I can't really see how multiple services could be executed from the same Lambda function.
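To make that concrete, here is a minimal Serverless Framework sketch (service name and handler paths are hypothetical): one service owns its own API Gateway and its own set of Lambda functions, so deploying it touches nothing else.

    # serverless.yml for one independently deployable service
    # (service name and handler paths are made up for illustration)
    service: payment-service

    provider:
      name: aws
      runtime: nodejs18.x
      region: eu-west-1

    functions:
      createPayment:
        handler: src/create.handler
        events:
          - http:              # this service gets its own API Gateway API
              path: payments
              method: post
      getPayment:
        handler: src/get.handler
        events:
          - http:
              path: payments/{id}
              method: get

With one such stack per service, a change to the payment API is deployed and regression-tested in isolation, which addresses the concern in the question.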
It can be a burden, especially if there is no proper tooling and there are no processes to manage all systems in an automated and scalable way. There are pros and cons to any architecture, but the complexity is definitely reduced for serverless applications, since all compute is managed by AWS.
AWS actually has a whitepaper that talks about microservices on AWS.
Related
In my scenario, I want to separate the production environment from our development environments.
We'd like to have only our production systems on one AWS account and all other systems and services on another.
I'd like to split/separate for billing purposes. If I add more monitoring services, many of them charge by the number of running instances, and I have considerably more running instances than I need to monitor, so I'd like the separation. I believe this would also make managing permissions a lot easier in the future (e.g. Security Hub scores wouldn't be affected by LMS instances).
I'd like to split out all public-facing assets into a separate AWS account: RDS, all EC2 instances related to prod-webserver (instances, target group, AMI, scaling, VPC, etc.), the S3 cloudfront.abc.com bucket, Jenkins, OpenVPN, and all Seoul assets.
Perhaps I could achieve the goal with Organizations or Control Tower as well. Could anyone please advise what would be best in my scenario? Is there a better alternative?
The fact that you want to split for billing purposes means you should use separate AWS accounts. While you could split some billing by tags within a single account, it's much easier to use multiple accounts to split the billing.
The typical split is Production / Testing / Development.
You can join the accounts together by using AWS Organizations, which gives some overall security controls.
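As a minimal sketch (assuming you manage the organization itself with CloudFormation; the AWS::Organizations resource types only work from the management account, and the root ID and e-mail below are placeholders), the typical split could look like this:

    Resources:
      ProdOU:
        Type: AWS::Organizations::OrganizationalUnit
        Properties:
          Name: Production
          ParentId: r-examplerootid      # your organization's root ID
      TestOU:
        Type: AWS::Organizations::OrganizationalUnit
        Properties:
          Name: Testing
          ParentId: r-examplerootid
      DevOU:
        Type: AWS::Organizations::OrganizationalUnit
        Properties:
          Name: Development
          ParentId: r-examplerootid
      ProdAccount:
        Type: AWS::Organizations::Account
        Properties:
          AccountName: my-company-prod
          Email: aws-prod@example.com    # each account gets its own bill line
          ParentIds:
            - !Ref ProdOU

Because each account is its own billing boundary, the cost split falls out of this structure with no tagging discipline required.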
Separating workloads and environments is considered a best practice in AWS, according to the AWS Well-Architected Framework. Nowadays, Control Tower (which builds upon AWS Organizations) is the standard for building multi-account setups in AWS.
Regarding multi-account setups, I recommend reading the whitepaper Organizing Your AWS Environment Using Multiple Accounts.
Also have a look at the open-source AWS Quickstart superwerker which sets up a well-architected AWS landing zone using AWS Control Tower, Security Hub, GuardDuty, and more.
AWS provides a lot of information about this topic, e.g. a very detailed whitepaper, Organizing Your AWS Environment, in which they say:
Using multiple AWS accounts to help isolate and manage your business applications and data can help you optimize across most of the AWS Well-Architected Framework pillars, including operational excellence, security, reliability, and cost optimization.
With separate accounts, you logically separate all resources (unless you explicitly allow otherwise) and therefore ensure independence between, for example, the development environment and the production environment.
You should also take a look at organizational units (OUs):
The following benefits of using OUs helped shape the Recommended OUs and accounts and Patterns for organizing your AWS accounts.
Group similar accounts based on function
Apply common policies
Share common resources
Provision and manage common resources
Control Tower is a tool which allows you to manage all your AWS accounts in one place. You can apply policies to every account or OU, or prohibit entire regions. You can use the Account Factory to create new accounts based on blueprints.
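For example, prohibiting regions can be done with a service control policy attached to an OU. A sketch (the OU ID and region list are placeholders; production policies usually also exempt global services such as IAM via NotAction):

    Resources:
      RegionDenyPolicy:
        Type: AWS::Organizations::Policy
        Properties:
          Name: deny-unapproved-regions
          Type: SERVICE_CONTROL_POLICY
          TargetIds:
            - ou-examp-12345678          # OU the policy is attached to
          Content:
            Version: "2012-10-17"
            Statement:
              - Sid: DenyOutsideAllowedRegions
                Effect: Deny
                Action: "*"
                Resource: "*"
                Condition:
                  StringNotEquals:
                    aws:RequestedRegion:
                      - eu-central-1
                      - eu-west-1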
But you still need to build up a lot of knowledge about these tools and best practices, because that's all they are: best practices and recommendations you can use to get started and build a good foundation. They are nothing you can rely on blindly, because you may have individual factors.
So understanding these factors and their consequences is very important.
I'm working on a SaaS application which at the moment is cloud-only. It's a traditional Java web application which we deploy to AWS. We rely on AWS services like RDS, S3, ELB, and Auto Scaling, and for infrastructure provisioning we use AMIs, CloudFormation, Ansible, and CodeDeploy.
There is now more and more demand for on-premise deployments by potential clients.
Are there any common approaches to packaging B2B applications for on-premise deployments?
My first thought would be to containerize the app infrastructure (web server, database, etc.) and assume a client would be able to run the images. What are you guys doing, and how do you tackle the HA and DR aspects that come with cloud infrastructure like AWS?
I'm tackling a similar problem at the moment, and there really is no one-size-fits-all answer. Designing software for cloud-nativity comes with a lot of architectural decisions to use the technologies on offer from the platform (as you have with S3, RDS, etc.), which ultimately do not cross over to the majority of on-premise deployments.
Containerising your application estate is great for cross-cloud and some hybrid-cloud portability, but there is no guarantee that a client is running containerised workloads in their on-premise data centre, which leaves the paradigm still a way off the target of supporting both seamlessly.
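For clients who do run containers, the on-premise deliverable can be as small as a compose file. A minimal sketch with hypothetical image names, where the managed cloud pieces (RDS, the load balancer) become containers the client operates themselves:

    # docker-compose.yml
    version: "3.8"
    services:
      web:
        image: registry.example.com/myapp/web:1.4.2   # hypothetical image
        ports:
          - "8080:8080"
        environment:
          DB_HOST: db          # replaces the RDS endpoint
          DB_NAME: myapp
        depends_on:
          - db
      db:
        image: postgres:15
        environment:
          POSTGRES_DB: myapp
          POSTGRES_PASSWORD: change-me   # inject via secrets in real use
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

Note what's missing: the HA and DR that RDS and the load balancer gave you for free are now the client's problem, which is exactly the gap described above.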
I find another issue is that the design principles behind cloud-hosted software are vastly different from those on-premise: static resource requirements, often a lack of ability to scale, and so on (ironically, some of the main reasons you would move a software solution to a cloud environment in the first place). Trying to design for both is a struggle, and I'm guessing we will end up with a sub-optimal solution unless we decide to favour one and treat the other as a secondary concern.
I'm thinking maybe the best cross-breed solution is to concentrate on containerisation for cloud hosts, taking into account the products and services on offer (and on the roadmap), and then still make the same software available to clients who wish to use on-premise datacentres. Perhaps they could be offered VM images with the software solution packaged in, made available on a client portal with instructions on installation/configuration.
... I wish everyone would just use Kubernetes already! :)
Based on the Amazon docs:
An Amazon ECS cluster is a logical grouping of tasks or services. If you are running tasks or services that use the EC2 launch type, a cluster is also a grouping of container instances. If you are using capacity providers, a cluster is also a logical grouping of capacity providers. When you first use Amazon ECS, a default cluster is created for you, but you can create multiple clusters in an account to keep your resources separate.
In our use case, we are not using the EC2 launch type; we are mainly using Fargate.
What is the usual basis/strategy for grouping services? Is it a purely subjective thing?
Let's say I have a Payment Service, an Invoice/Receipt Service, a User Service, and an Authentication Service. Do I put some of them in one ECS cluster, or is it best practice to have them in separate ECS clusters?
A service is a functioning application; for example, you might have an authentication service or a payment service.
Whilst services can speak to one another, a service by itself should contain all the parts that make it work; these parts are the containers.
Your service may be as simple as one container, or it may contain many containers to provide its functionality, such as caching or background jobs.
The services concept generally comes from the ideas of service-oriented design and microservice architecture.
Ultimately the decision comes down to you; you could put everything under one service, but this could lead to problems further down the line.
One key point to note is that scaling of containers is done at the service level, so all containers that are part of your task definition scale together. You generally want to scale to meet the demands of a specific piece of functionality.
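To illustrate the scaling point with a CloudFormation sketch (cluster and service names are hypothetical): scaling acts on the service's DesiredCount, so every container in the task definition scales together.

    Resources:
      ScalableTarget:
        Type: AWS::ApplicationAutoScaling::ScalableTarget
        Properties:
          ServiceNamespace: ecs
          ResourceId: service/payments-cluster/payment-service
          ScalableDimension: ecs:service:DesiredCount
          MinCapacity: 2
          MaxCapacity: 10
      CpuScalingPolicy:
        Type: AWS::ApplicationAutoScaling::ScalingPolicy
        Properties:
          PolicyName: cpu-target-tracking
          PolicyType: TargetTrackingScaling
          ScalingTargetId: !Ref ScalableTarget
          TargetTrackingScalingPolicyConfiguration:
            PredefinedMetricSpecification:
              PredefinedMetricType: ECSServiceAverageCPUUtilization
            TargetValue: 60    # keep average CPU around 60%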
An ECS cluster may contain one service, or a number of services that together produce a deliverable. For example, S3 within AWS is made up of more than 200 microservices; these would form one cluster. However, you would not expect every AWS service to be part of the same cluster.
In your scenario you define several services; personally, I would separate these into different clusters, as they deliver completely different business functions.
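A sketch of that separation (names are illustrative), with one Fargate cluster per business function:

    Resources:
      PaymentsCluster:
        Type: AWS::ECS::Cluster
        Properties:
          ClusterName: payments        # Payment + Invoice/Receipt services
          CapacityProviders:
            - FARGATE
            - FARGATE_SPOT
      IdentityCluster:
        Type: AWS::ECS::Cluster
        Properties:
          ClusterName: identity        # User + Authentication services
          CapacityProviders:
            - FARGATE
            - FARGATE_SPOT

Since a Fargate cluster holds no EC2 instances, an extra cluster costs nothing by itself; it is purely a management and isolation boundary.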
It's more of an open question and I'm just hoping for opinions and suggestions. I have AWS in mind, but it probably applies to other cloud providers as well.
I'd like to provision an IaC solution that will be easily maintainable and cover all the requirements of a modern serverless architecture. Terraform is a great tool for defining infrastructure; it has many official resources and stable support from the community. I really like its syntax and the whole concept of modules. However, it's quite bad for working with Lambdas. It also raises another question: should a code change be deployed using the same flow as an infrastructure change? Where do you draw the line between code and infrastructure?
On the other hand, the Serverless Framework allows for super easy development and deployment of Lambdas. It's strongly opinionated when it comes to the usage of resources, but it comes with so many out-of-the-box features that it's worth it. It shouldn't really be used for defining the whole infrastructure, though.
My current approach is to define any shared resources using Terraform and any domain-related resources using Serverless. Here I run into another issue, related to my previous questions: deployment dependencies. A simple scenario: Lambda 1 adds users to Cognito (a shared resource), which has Lambda 2 as a trigger. I have to create a custom solution for managing the deployment order (Lambda 2 has to be deployed first, etc.). It's possible to hook the Serverless Framework deployment into Terraform, but then again: should code deployment be mixed with infrastructure deployment?
It is totally possible to mix the two, and I have had to do so a few times. In practice it ends up being simpler than it seems.
First off, if you think about whatever you do with the Serverless Framework as developing microservices (without the associated infrastructure-management burden), that takes it one step in the right direction. Then, you can decide that everything required to make a microservice work internally is defined within that microservice, as part of its configuration in serverless.yml, whether that be DynamoDB tables, Auth0 integrations, Kinesis streams, SQS, SNS, IAM permissions allocated to functions, etc. Keep all of that defined as part of the microservice. Terraform not required.
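A minimal sketch of that idea (service, table, and handler names are hypothetical): the service defines its own DynamoDB table and the IAM permissions to use it, all inside serverless.yml.

    service: user-service

    provider:
      name: aws
      runtime: nodejs18.x
      iam:
        role:
          statements:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
              Resource:
                Fn::GetAtt: [UsersTable, Arn]

    functions:
      createUser:
        handler: src/createUser.handler

    resources:
      Resources:
        UsersTable:
          Type: AWS::DynamoDB::Table
          Properties:
            BillingMode: PAY_PER_REQUEST
            AttributeDefinitions:
              - AttributeName: id
                AttributeType: S
            KeySchema:
              - AttributeName: id
                KeyType: HASH

Everything the service needs internally lives and deploys with the service; removing the stack removes the table and permissions along with the functions.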
Now think about what that and other microservices might need to interact with more broadly. These things aren't critical for a service's internal operation, but they are critical for integration into the rest of the organisation's infrastructure. This includes things like the deployment IAM roles used by Serverless Framework services to deploy into CloudFormation, relational databases that have to be shared amongst multiple services, networking elements (VPCs, security groups, etc.), and monolithic clusters like Elasticsearch and Redis. All of these elements are great candidates for definition outside of the Serverless Framework, and they work really well with Terraform.
Any service would then be able to connect to these Terraform-defined resources as needed, in contrast to hard associations such as Lambda functions triggered off an API Gateway endpoint.
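One loose-coupling pattern for this (my assumption, not something prescribed here): have Terraform publish the shared identifiers to SSM Parameter Store, and let each serverless.yml resolve them at deploy time via the built-in ${ssm:...} variable source. The /shared/* parameter names below are made up:

    service: payment-service

    provider:
      name: aws
      runtime: nodejs18.x
      vpc:                                  # Terraform-managed networking
        securityGroupIds:
          - ${ssm:/shared/network/lambda-sg-id}
        subnetIds:
          - ${ssm:/shared/network/private-subnet-a}
          - ${ssm:/shared/network/private-subnet-b}
      environment:
        DB_HOST: ${ssm:/shared/rds/endpoint}   # shared relational database

    functions:
      createPayment:
        handler: src/create.handler

The dependency on the Terraform-managed layer is then explicit: the deploy fails fast if a parameter doesn't exist yet, instead of producing a half-wired service.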
Hope that helps
I have to deploy a RESTful Web API 2 project to Azure and am expecting a lot of traffic. I am not sure which Azure service to select for the best performance.
Web API services run on top of the complete IIS stack for HTTP handling, whereas a worker role needs its own HTTP handling implemented via OWIN. Any experiences?
I would highly recommend you use the Azure App Service (either Web App or API App) in lieu of Azure Cloud Services. The benefits are bountiful, the drawbacks are scarce.
A few notable benefits the App Service brings are auto-scaling, WebJobs (think lightweight worker roles), a simpler and faster deployment mechanism, and seamless integration with Application Insights.
About the only thing Cloud Services does better is scale (both vertical and horizontal). But for most web/Web API scenarios these advantages are very much diminished by the new pricing tiers available for the App Service.
There is also the App Service Environment (a new feature of the App Service), where you can scale up to a practically unlimited number of instances (the default limit is 50, but you can call Microsoft to increase it) and use beefier (yes, that is a technical term) instance sizes.
Before you go the route of the App Service Environment, I would recommend you evaluate the geo-distribution of your user population. Each App Service Plan can scale up to 10 or 25 instances for the Standard and Premium pricing tiers, respectively. You could plop an App Service Plan in a couple of different data centers (US-West, US-East, US-Central, or overseas, depending on the scenario) and front them with Traffic Manager; now you have three App Service Plans, each with a max of 10 or 25 instances depending on the pricing tier. That can add up to a lot of metal, and it has the dual benefit of improving the end-user experience and increasing your system's availability and disaster recovery.
These days I would only recommend Cloud Services for really intense batch processing, or where architectural limitations of your existing application require greater control over the underlying OS of the instance (Cloud Services support startup tasks that let you do all kinds of crazy things when a new instance is spawned that you just can't do with the App Service).
I would recommend that you use Azure API Apps (https://azure.microsoft.com/en-us/documentation/articles/app-service-api-apps-why-best-platform/), as that is a service intended to host Web API 2 services. You get load balancing, auto-scaling, monitoring, etc. when you use API Apps, so you can focus on building something that fulfils the business requirements.
You should always avoid having to do any plumbing on your own, as that can always come back and bite you later. API Apps is the right choice in this case!