Cloud Services/Architecture of a Multi-tenant Spring Boot Project Deployment - amazon-web-services

I am currently working on our company product, built with Spring Boot, Angular, and PostgreSQL, where the Angular front end communicates with 138 back-end REST API endpoints. These 138 endpoints come from 35 different Spring Boot projects, and all of them need to be deployed separately for 5 different tenants. The endpoints behave the same for every tenant, but each tenant has its own database. We have decided to go with the AWS cloud, and we are looking for a cost-effective deployment method on AWS.
Our Current Development/Test Strategy - We are currently developing the application (final stage of development) and testing it on our on-premise servers: 5 Ubuntu machines running a Kubernetes cluster with 2 master nodes and 3 worker nodes. From our SVN repository and Jenkins server, we have implemented a CI/CD pipeline that deploys to these 5 machines.
Proposed Cloud Solution - We are now considering either an EKS deployment or a CodeDeploy/CodePipeline approach to implement this large project.
Considering cost and control over infrastructure management, which solution is better for my product? I am not very experienced as a solution architect and am still on the cloud learning curve, so can anyone suggest or guide me on how to think this through properly?
Company considerations
Control over infrastructure
Cost effective
Easy management of aws services for multi-tenant deployment
Data security (installing database on EC2 / RDS)
Management of load balancers

Control over infrastructure
It would be better to manage your code on GitHub, GitLab, AWS CodeBuild, or Cloud Build.
Indeed, AWS CodeBuild and its companion repo service are great tools, but consider the user limitation: the free tier allows only 5 users, so if your team is big you might end up paying more compared to managing projects on GitHub or GitLab.
Cost effective
EKS would be a good option compared to ECS or others, which have limitations such as not being able to run DaemonSets or privileged pods.
If you want to run everything in pods with auto-scaling, can live with a little less flexibility, and don't want to manage much, ECS is also a good idea, but you will have to work out your capacity and compare ECS vs EKS pricing.
Note: EKS also charges $0.10 per hour for each cluster's control plane, on top of the worker nodes; it's not just worker nodes like we run on-prem.
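As a back-of-envelope sketch (the $0.10/hour rate is from the note above and the 5-tenant count from the question; verify against current AWS pricing), this fixed control-plane cost matters when deciding between one shared cluster and one cluster per tenant:

```python
# Back-of-envelope EKS control-plane cost; the $0.10/hour rate is from the
# note above (verify against current AWS pricing before deciding).
EKS_CLUSTER_PER_HOUR = 0.10  # USD per cluster control plane
HOURS_PER_MONTH = 24 * 30

one_shared_cluster = 1 * EKS_CLUSTER_PER_HOUR * HOURS_PER_MONTH
per_tenant_clusters = 5 * EKS_CLUSTER_PER_HOUR * HOURS_PER_MONTH

print(f"Shared cluster (tenants split by namespace): ${one_shared_cluster:.2f}/month")   # $72.00
print(f"One cluster per tenant (5 tenants):          ${per_tenant_clusters:.2f}/month")  # $360.00
```

A single cluster with per-tenant namespaces keeps this fixed cost flat, while per-tenant clusters multiply it; the worker-node cost is the same either way for the same total capacity.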
Data security (installing database on EC2 / RDS)
RDS would be better, as it is a managed service, compared to managing EC2 instances yourself along with database performance, encryption, etc.
It would be better to use RDS together with EKS so the K8s services can connect to RDS easily over a private network.
RDS would be a cost-effective option once you consider the overhead of managing a DB on EC2.
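A minimal sketch of that pattern, assuming the per-tenant credentials are injected as environment variables from a Kubernetes Secret; the TENANT_DB_* names and the psycopg2 driver are illustrative choices (a Spring Boot service would read the equivalent values into its datasource properties):

```python
import os

import psycopg2  # PostgreSQL driver (pip install psycopg2-binary)

def get_tenant_connection():
    """Connect to this tenant's RDS PostgreSQL instance.

    The TENANT_DB_* names are hypothetical; in EKS they would typically be
    injected from a per-tenant Kubernetes Secret, so the same container
    image can serve every tenant.
    """
    return psycopg2.connect(
        host=os.environ["TENANT_DB_HOST"],  # RDS endpoint in a private subnet
        port=int(os.environ.get("TENANT_DB_PORT", "5432")),
        dbname=os.environ["TENANT_DB_NAME"],
        user=os.environ["TENANT_DB_USER"],
        password=os.environ["TENANT_DB_PASSWORD"],
        sslmode="require",  # encrypt traffic between the pod and RDS
    )
```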
Management of load balancers
An NLB or ALB will take care of that; you can use either of them with EKS, depending on your requirements.
CloudFront could also be a great option, combined with cloud storage (S3) to serve static assets, which will reduce calls to the back end, improve performance, and be cost-effective as well.
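For the static-asset part, a hedged sketch of what the upload step might look like with boto3; the bucket name and file paths are hypothetical, and CloudFront would then use the bucket as its origin:

```python
import boto3

BUCKET = "my-angular-assets"  # hypothetical bucket; CloudFront uses it as the origin

s3 = boto3.client("s3")
# Fingerprinted build artifacts can be cached aggressively, since a new
# build produces new file names.
s3.upload_file(
    "dist/app/main.abc123.js",
    BUCKET,
    "main.abc123.js",
    ExtraArgs={
        "ContentType": "application/javascript",
        "CacheControl": "public, max-age=31536000, immutable",
    },
)
```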

Related

Can we run an application that is configured to run on multi-node AWS EC2 K8s cluster using kops into local kubernetes cluster (using kubeadm)?

Can we run an application that is configured to run on multi-node AWS EC2 K8s cluster using kops (project link) into local Kubernetes cluster (setup using kubeadm)?
My thinking is that if the application runs in a k8s cluster based on AWS EC2 instances, it should also run in a local k8s cluster. I am trying it locally for testing purposes.
Here's what I have tried so far, but it is not working.
First, I set up my local 2-node cluster using kubeadm.
Then I modified the installation script of the project (link given above) by removing all references to EC2 (as I am using local machines) and to the kops state (particularly in their create_cluster.py script).
I modified their application YAML files (app requirements) to match my local setup (2 nodes).
Unfortunately, although most of the application pods are created and in a running state, some other application pods fail to be created, and therefore I am unable to run the whole application on my local cluster.
I appreciate your help.
This is the beauty of Docker and Kubernetes: they help keep your development environment matching production. For simple applications written without custom resources, you can deploy the same workload to any cluster running on any cloud provider.
However, the ability to deploy the same workload to different clusters depends on several factors, such as:
How do you manage authorization and authentication in your cluster? For example, IAM, IRSA.
Are you using any cloud-native custom resources? For example, AWS ALBs used as LoadBalancer Services.
Are you using any cloud-native storage? For example, pods that rely on EFS/EBS volumes.
Is your application cloud-agnostic? For example, does it use cloud-native technologies like Neptune?
Can you mock cloud technologies locally? For example, using LocalStack to mock Kinesis and DynamoDB (see the sketch after this list).
How do you resolve DNS routes? For example, say you are using RDS on AWS: you access it via a Route 53 entry, while locally you might run a MySQL instance and need a DNS mechanism to discover it.
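On the LocalStack point above, a minimal sketch of how the same client code can target either real AWS or a local mock, assuming LocalStack's default edge endpoint on port 4566:

```python
import os

import boto3

# LocalStack's default edge endpoint; adjust if you run it elsewhere.
LOCALSTACK_URL = os.environ.get("AWS_ENDPOINT_URL", "http://localhost:4566")

# Pointing endpoint_url at LocalStack makes the same code work locally
# and against real AWS (where endpoint_url is simply omitted).
dynamodb = boto3.client(
    "dynamodb",
    endpoint_url=LOCALSTACK_URL,
    region_name="us-east-1",
    aws_access_key_id="test",       # LocalStack accepts dummy credentials
    aws_secret_access_key="test",
)
print(dynamodb.list_tables()["TableNames"])
```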
I did a Google search and looked at the kOps documentation. I could not find any info about deploying locally; it only supports public cloud providers.
IMO, you need to figure out a way to set up a local cluster that mirrors your EKS cluster, and wherever cloud-native technologies are used, you need to figure out an alternative way of providing the same thing locally.
The true answer, as Rajan Panneer Selvam said in his response, is that it depends, but I'd like to expand somewhat on that by saying that your application should run on any K8S cluster, given that the cluster provides the services the application consumes. What you're doing is considered good practice to ensure that your application is portable, which is always a factor in non-trivial applications, where simply upgrading a downstream service could be considered a change of environment/platform requiring portability (platform independence).
To help you achieve this, you should be developing a 12-Factor Application (12-FA) or one of its more up-to-date derivatives (12-FA is getting a little dated now and many variations have been suggested, but mostly they're all good).
For example, if your application uses a database then it should use DB-independent SQL or NoSQL so that you can switch it out. In production, you may run on Oracle, but in your local environment you may use MySQL: your application should not care. The credentials and connection string should be passed to the application via the usual K8S techniques of Secrets and ConfigMaps to help you achieve this. And all logging should be sent to stdout (and stderr) so that you can use a log-shipping agent to send the logs somewhere more useful than a local filesystem.
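A minimal Python sketch of those two habits, assuming a single DATABASE_URL variable (a hypothetical name) populated from a Secret, and all logging going to stdout:

```python
import logging
import os
import sys

# Config comes from the environment (populated in K8s from Secrets/ConfigMaps);
# DATABASE_URL is a hypothetical variable name.
DATABASE_URL = os.environ["DATABASE_URL"]

# Logs go to stdout so a log-shipping agent can collect them.
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("app")
log.info("Database host: %s", DATABASE_URL.rsplit("@", 1)[-1])  # never log credentials
```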
If you run your app locally then you have to provide a surrogate for every 'platform' service that is provided in production, and this may mean switching out major components of what you consider to be your application; but this is OK, it is meant to happen. You provide a platform that provides services to your application layer. Switching from EC2 to local may mean reconfiguring the ingress controller to work without the ELB, or configuring Kubernetes secrets to use local storage for dev creds rather than AWS KMS. It may mean reconfiguring your persistent volume classes to use local storage rather than EBS. All of this is expected and right.
What you should not have to do is start editing microservices to work in the new environment. If you find yourself doing that then the application has made a factoring and layering error. Platform services should be provided to a set of microservices that use them, the microservices should not be aware of the implementation details of these services.
Of course, it is possible that you have some non-portable code in your system; for example, you may be using some Oracle-specific PL/SQL that can't be run elsewhere. This code should be extracted into config files and equivalents provided for each database you wish to run on. This isn't always possible, in which case you should abstract as much as possible into isolated services, and you'll have to reimplement only those services on each new platform. That could still be time-consuming, but ultimately worth the effort for most non-trivial systems.
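One hedged way to do that extraction, assuming one directory of SQL files per dialect (the layout, names, and environment variables are hypothetical):

```python
import os

# Hypothetical layout: one SQL file per statement, per supported dialect,
# selected at runtime, so dialect-specific SQL lives in config rather than
# in service code.
SQL_DIR = os.environ.get("SQL_DIR", "./sql")
DIALECT = os.environ.get("DB_DIALECT", "postgres")  # e.g. "oracle" in production

def load_statement(name: str) -> str:
    """Load a named SQL statement for the configured dialect."""
    path = os.path.join(SQL_DIR, DIALECT, f"{name}.sql")
    with open(path, encoding="utf-8") as f:
        return f.read()

# usage: cursor.execute(load_statement("top_customers"))
```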

Amazon Fargate vs EC2 container website hosting

I recently got a project in which I have to build a React / NextJS application that will serve occasional high traffic but will mostly sit idle. We are currently looking for the cheapest option in all categories, but we also want to build a scalable and manageable app with a quick and easy CI/CD pipeline. For the development server, we chose Heroku's free plan and pipeline, as I think it's perfectly ideal for the job. For production, we decided to use Docker, as it's the best way to set up a CD pipeline, and with 2000 minutes of free GitHub Actions per month, the whole production/development pipeline will be essentially free of cost for us. We were also thinking of using AWS because of its features, and we want to keep the number of bills to manage to a minimum. For the DB we're thinking of DynamoDB because of the free 25 GB lifetime storage, which will be enough, as the only dynamic data on the site will be user data and blogs. And for object storage, the choice is S3.
Here we're torn between the two AWS offerings for container hosting: ECS on EC2 and ECS on Fargate. Fargate definitely feels like the better choice given that the application will sit idle most of the time, but we're really unsure about resource provisioning for containers in Fargate. The app runs on NextJS, so it'll be server-side rendered.
So my question is: will a combo of 0.5 GB RAM and 0.25 vCPU be enough for a server-side-rendered NextJS application? Or should I go for a dedicated EC2 instance? Or maybe another cloud provider?
NextJS is a framework that runs on top of Node.js. No specific resource requirement is mentioned in the documentation (only a Node.js version requirement), so you can size it the way you would size any Node.js application.
Node.js with V8 suitable for limited memory device?
So my question is: will a combo of 0.5 GB RAM and 0.25 vCPU be enough for a server-side-rendered NextJS application? Or should I go for a dedicated EC2 instance? Or maybe another cloud provider?
I would not suggest the EC2 launch type for the ECS service; you can go for Fargate with minimal memory and CPU and set up auto-scaling of the ECS service whenever required.
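To put rough numbers on the smallest Fargate size (a sketch; the per-vCPU-hour and per-GB-hour rates are assumptions based on published us-east-1 pricing and should be re-checked):

```python
# Rough monthly cost of the smallest Fargate task size. The rates below are
# assumptions (us-east-1 at time of writing); check current AWS pricing.
VCPU_PER_HOUR = 0.04048  # USD per vCPU-hour
GB_PER_HOUR = 0.004445   # USD per GB-hour

vcpu, memory_gb = 0.25, 0.5
hours = 24 * 30  # running continuously for a month

monthly = hours * (vcpu * VCPU_PER_HOUR + memory_gb * GB_PER_HOUR)
print(f"~${monthly:.2f}/month for one always-on 0.25 vCPU / 0.5 GB task")  # ~$8.89
```

Since the app mostly sits idle, scaling to zero or near-zero (or going serverless, as below) would undercut even that.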
But I think there is a better option than Fargate: serverless-nextjs.
Serverless deployment dramatically improves reliability and scalability by splitting your application into smaller parts (also called lambdas). In the case of Next.js, each page in the pages directory becomes a serverless lambda.
There are a number of benefits to serverless. The referenced link talks about some of them in the context of Express, but the principles apply universally: serverless allows for distributed points of failure, infinite scalability, and is incredibly affordable with a "pay for what you use" model.
Serverless Nextjs

Jenkins On-prem vs Cloud (AWS)

We currently have Jenkins set up on-prem and are planning to migrate it to AWS. What are the advantages and disadvantages of running it on AWS vs on-prem?
By having Jenkins in AWS you gain these benefits:
Adjustable resourcing (changing the instance type as you desire)
Pay-as-you-go: if it's only needed during certain hours, run it only then (see the sketch after this list).
Scalable worker nodes (more jobs means more scale)
More secure integration with AWS services (use IAM roles and VPC endpoints to reach services)
Easily replaceable
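As a sketch of the pay-as-you-go point above (the instance ID is hypothetical; in practice you might call these functions from a scheduled Lambda or EventBridge rule to run Jenkins only during working hours):

```python
import boto3

# Hypothetical instance ID of the Jenkins controller.
JENKINS_INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2")

def stop_jenkins():
    """Stop the Jenkins instance outside working hours to stop paying for compute."""
    ec2.stop_instances(InstanceIds=[JENKINS_INSTANCE_ID])

def start_jenkins():
    """Start the Jenkins instance again at the beginning of the working day."""
    ec2.start_instances(InstanceIds=[JENKINS_INSTANCE_ID])
```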
By having Jenkins on-premise you gain these benefits:
No traversing the internet to access it
If the hardware is already owned, you won't be paying anything extra for it.
Personally I'd recommend cloud just because of the benefits that you gain from cloud compute.

Multi-cloud solution for data platforms on hybrid and multi-cloud using Anthos

Google Cloud Platform has made hybrid and multi-cloud computing a reality through Anthos, which is an open application modernization platform. How does Anthos work for distributed data platforms?
For example, I have my data in Teradata on-premise, AWS Redshift, and Snowflake on Azure. Can Anthos join all these datasets and allow users to query or perform reporting with low latency? What is the equivalent of GCP Anthos in AWS and Azure?
Your question is broad. Anthos is designed for managing and distributing containers across several K8s clusters.
For a simpler view, imagine this: you have the Anthos master, and its direct nodes are K8s masters. If you ask the Anthos master to deploy a pod on AWS, for example, it forwards the request to the K8s master deployed on EKS, and your pod is deployed on AWS.
Now, rethink your question: what about the data? Nothing magic: if your data is spread across several clusters, you have to federate it with a system designed for that. It's quite similar to having only one cluster with data on different nodes.
Anyway, you are pointing at the real next challenge of multi-cloud/hybrid deployment. Solutions will emerge to fill this empty space.
Finally, your last point: the Azure and AWS equivalents. There aren't any.
The new Azure Arc seems lightweight: it only allows you to manage VMs outside the Azure platform with an agent on them. Nothing as manageable as Anthos. For example: you have 3 VMs on GCP and you manage them with Azure Arc; you deploy NGINX on each and you want to set up a load balancer in front of your 3 VMs. I don't see how you can do this with Azure Arc. With Anthos, it's simply a K8s service exposition: the load balancer will be deployed according to the cloud platform's implementation.
As for AWS, Outposts is a hardware solution: you have to buy AWS-specific hardware and plug it into your on-prem infrastructure. More on-prem investment in your move-to-cloud strategy? Hard to sell. And it is not compatible with other cloud providers. But re:Invent is coming next month; maybe a dark horse will emerge?

Choosing the right AWS Services and software tools

I'm developing a prototype IoT application which does the following
Receive/Store data from sensors.
Web application with a web-based IDE for users to deploy simple JavaScript/Python scripts, which get executed in Docker containers.
Data from the sensors gets streamed to these containers.
User programs can use this data to do analytics, monitoring etc.
The logs of these programs are shown to the user in the web app.
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
Stack is Node.js, RabbitMQ, Express, MySQL, MongoDB and Docker
I'm not interested in using AWS IoT services like AWS IoT and Greengrass
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a Beta release to a set of 50 users
(hopefully someone else will help/work on a production release)
As far as possible, I don't want to spend a lot of time migrating between services since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-medium traffic? Use one large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth it to use Swarm for container management? What if I have to use multiple instances?
I also have small scripts that hold status information about the sensors, which is needed by the web app and other services. If I move to multiple instances, how can I make these scripts available to multiple machines?
The above question also holds for servers, message buses, databases, etc.
My goal is certainly not a production release. I want to complete the product, show that I have users who are interested, and of course show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers in AWS with the least hassle, you can use the Amazon ECS service to deploy your containers, or else go with Beanstalk. Also, you don't need to use Swarm in AWS; ECS will do the container management for you.
It's always better to scale out rather than scale up, using small to medium EC2 instances. However, the challenge you will face there is managing and scaling the underlying EC2 instances as well as your Docker containers. This pushes you toward large EC2 instances, to set EC2 scaling aside and focus on Docker scaling (which will add additional cost for you).
Another alternative for the web application part is an AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
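For a feel of how small that stack is, a minimal sketch of a Python Lambda handler behind an API Gateway proxy integration (the route and payload are illustrative, not part of your app):

```python
import json

# Minimal AWS Lambda handler for an API Gateway proxy integration; the
# "sensorId" path parameter and the response body are illustrative only.
def handler(event, context):
    sensor_id = (event.get("pathParameters") or {}).get("sensorId", "unknown")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"sensorId": sensor_id, "status": "ok"}),
    }
```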
You may keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not affect performance. You can ease the inconvenience of "sitting on two chairs" by writing a Terraform script that provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use the Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
ps: I'm a Dockhero maintainer