IaaS vs PaaS in the context of AWS - amazon-web-services

This is not a duplicate question. I am just confused between IaaS and SaaS with respect to AWS services like DynamoDB, RDS, Redshift, Kinesis, etc. They help users create databases, so should we categorize them as IaaS or SaaS?
Thanks

To help you understand, SaaS is Software as a Service. It's more like an on-demand application where you don't have to worry about configuration, access, whitelisting, etc. For instance, Google Maps (or Google Apps).
IaaS, or Infrastructure as a Service, gives you more flexibility: you spawn nodes and clusters yourself, deal with security at the IP and port level, manage access control and authentication, and so on. On AWS, you can specify which private or public IPs will have access to your system, whether you prefer dense-storage or dense-compute nodes for your warehouse, how to rotate your log files, etc.
A page on Amazon RDS reads -
When you buy a server, you get CPU, memory, storage, and IOPS, all bundled together. With Amazon RDS, these are split apart so that you can scale them independently.
So, in short: offerings like those from AWS and Azure are now mostly either IaaS or PaaS; managed database services such as RDS, DynamoDB, Redshift, and Kinesis sit closer to the PaaS end than to SaaS.
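As a rough illustration of that split, here is a minimal boto3 sketch (the instance identifier and target sizes are made-up placeholders) showing storage and compute of an existing RDS instance being scaled independently:

```python
# Sketch: scaling an RDS instance's storage and compute independently.
# "my-app-db" and the target sizes are placeholders, not recommendations.
import boto3

rds = boto3.client("rds")

# Grow only the storage, leaving CPU/memory untouched.
rds.modify_db_instance(
    DBInstanceIdentifier="my-app-db",
    AllocatedStorage=500,            # GiB
    ApplyImmediately=True,
)

# Or change only the instance class (CPU/memory), leaving storage untouched.
rds.modify_db_instance(
    DBInstanceIdentifier="my-app-db",
    DBInstanceClass="db.r5.large",
    ApplyImmediately=True,
)
```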

Related

Cloud Services/Architecture of a Multi-tenant Spring Boot Project Deployment

I am working on our company product, built with Spring Boot, Angular, and PostgreSQL, where the Angular front end communicates with 138 back-end REST API endpoints. These 138 endpoints come from 35 different Spring Boot projects, and all of them need to be deployed separately for 5 different tenants. The endpoint behavior is the same, but each tenant has its own database. We have decided to go with AWS, and we are looking for a cost-effective deployment method.
Our current development/test strategy - We are currently developing the application (final stage of development) and testing it on our on-premise servers: 5 Ubuntu machines running a Kubernetes cluster with 2 master nodes and 3 worker nodes. From our SVN repository and Jenkins server we have implemented a CI/CD pipeline that deploys to these 5 machines.
Proposed cloud solution - We are now considering either an EKS-based deployment or a CodeDeploy/CodePipeline-based approach for this large project.
Considering cost and control over infrastructure management, which solution is better for my product? I am not very experienced as a solution architect and am still on the cloud learning curve, so can anyone guide me toward the right way to think about this?
Company considerations:
Control over infrastructure
Cost effective
Easy management of AWS services for multi-tenant deployment
Data security (installing the database on EC2 vs. RDS)
Management of load balancers
Control over infrastructure
It would be better to manage source control and builds on GitHub, GitLab, AWS CodeBuild, or Cloud Build.
AWS CodeBuild and CodeCommit are indeed great tools, but consider the user limitation: only 5 users are included for free, so if your team is large you may end up paying more compared to managing projects on GitHub or GitLab.
Cost effective
EKS would be a good option compared to ECS or other alternatives, which have limitations such as not being able to run DaemonSets or privileged pods.
If you want to run everything in pods with auto-scaling, can live with a little less flexibility, and don't want much to manage, ECS is also a good idea; but again, you have to work out your capacity and compare the pricing of ECS vs. EKS.
Note: EKS also charges $0.10 per hour for each cluster control plane, on top of the worker nodes; it is not just worker nodes as in the clusters we run on-prem.
Data security (installing the database on EC2 vs. RDS)
RDS would be better, as it is a managed service; compare that with managing an EC2 instance yourself along with database performance, encryption, etc.
It would be better to use RDS together with EKS so the Kubernetes services can connect to RDS easily over a private network.
RDS is also a cost-effective option once you factor in the effort of managing a database on EC2 yourself.
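As a rough sketch of the "one private database per tenant" idea with boto3 (tenant names, subnet group, security group, and credentials are placeholders; in practice credentials would come from a secrets store):

```python
# Sketch: one private RDS PostgreSQL instance per tenant, reachable only
# inside the VPC that also hosts the EKS worker nodes.
import boto3

rds = boto3.client("rds")
tenants = ["tenant-a", "tenant-b", "tenant-c", "tenant-d", "tenant-e"]  # placeholder names

for tenant in tenants:
    rds.create_db_instance(
        DBInstanceIdentifier=f"app-{tenant}",
        Engine="postgres",
        DBInstanceClass="db.t3.medium",
        AllocatedStorage=50,
        DBSubnetGroupName="private-db-subnets",        # assumed to exist already
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # allows traffic from EKS nodes only
        PubliclyAccessible=False,                      # no public endpoint
        MasterUsername="appuser",
        MasterUserPassword="change-me",                # use Secrets Manager in practice
    )
```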
Management of load balancers
An NLB or ALB will take care of that; you can use either with EKS, depending on your requirements.
CloudFront in front of cloud storage could also be a great option for serving static assets, which will reduce calls to your back end, improve performance, and keep costs down.

Jenkins On-prem vs Cloud (AWS)

We currently have Jenkins set up on-prem and are planning to migrate it onto AWS. What are the advantages and disadvantages of running it on AWS vs. on-prem?
By having Jenkins in AWS you gain these benefits:
Adjustable resourcing (changing the instance type as you desire)
Pay-as-you-go: if it is only needed during certain hours, run it only then (see the sketch after this list)
Scalable worker nodes (more jobs means more scale)
More secure integration with AWS services (use IAM roles and VPC endpoints to reach services)
Easily replaceable
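A minimal sketch of the pay-as-you-go point above, assuming Jenkins runs on a single EC2 instance (the instance ID is a placeholder); the two functions could be triggered by scheduled Lambda functions or cron jobs:

```python
# Sketch: start Jenkins at the beginning of the work day and stop it after hours
# so you only pay for the hours it is actually needed.
import boto3

ec2 = boto3.client("ec2")
JENKINS_INSTANCE = "i-0123456789abcdef0"  # placeholder for the Jenkins EC2 instance

def start_jenkins():
    ec2.start_instances(InstanceIds=[JENKINS_INSTANCE])

def stop_jenkins():
    ec2.stop_instances(InstanceIds=[JENKINS_INSTANCE])
```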
By having Jenkins on-premise you gain these benefits:
No traversing the internet to access it
If the hardware is already owned, you won't pay anything extra for it.
Personally I'd recommend cloud just because of the benefits that you gain from cloud compute.

Multi-cloud solution for data platforms on hybrid and multi-cloud using Anthos

Google Cloud Platform has made hybrid and multi-cloud computing a reality through Anthos, an open application modernization platform. How does Anthos work for distributed data platforms?
For example, I have my data in Teradata on-premise, AWS Redshift, and Snowflake on Azure. Can Anthos join all the datasets and allow users to query or run reports with low latency? What is the equivalent of GCP Anthos in AWS and Azure?
Your question is broad. Anthos is designed for managing and distributing containers across several Kubernetes clusters.
For a simpler view, imagine this: you have the Anthos master, and its direct nodes are Kubernetes masters. If you ask the Anthos master to deploy a pod on AWS, for example, it forwards the request to the Kubernetes master running on EKS, and your pod is deployed on AWS.
Now, back to your question: what about the data? There is nothing magic here: if your data is spread across several clusters, you have to federate it with a system designed for that. It's quite similar to a single cluster with data spread across different nodes.
Anyway, you are pointing at the real next challenge of multi-cloud and hybrid deployments. Solutions will emerge to fill this empty space.
Finally, your last point: an Azure or AWS equivalent. There isn't one.
The newest Azure Arc seems lightweight: it only lets you manage VMs outside the Azure platform via an agent installed on them. Nothing as manageable as Anthos. For example: you have 3 VMs on GCP and manage them with Azure Arc. You deploy NGINX on each and want to set up a load balancer in front of your 3 VMs. I don't see how you can do this with Azure Arc. With Anthos, it's simply a Kubernetes service exposure: the load balancer is deployed according to the cloud platform's implementation.
As for AWS, Outposts is a hardware solution: you have to buy AWS-specific hardware and plug it into your on-prem infrastructure. More on-prem investment as part of your move-to-cloud strategy? Hard to sell. And it's not compatible with other cloud providers. But re:Invent is coming next month; maybe an outsider will appear?

What are strategies for bridging Google Cloud with AWS?

Let's say a company has an application with a database hosted on AWS and also has a read replica on AWS. Then that same company wants to build out a data analytics infrastructure in Google Cloud -- to take advantage of data analysis and ML services in Google Cloud.
Is it necessary to create an additional read replica within the Google Cloud context? If not, is there an alternative strategy that is frequently used in this context to bridge the two cloud services?
While Amazon Relational Database Service (RDS) provides read-replica capabilities, replication is only supported between managed database instances on AWS.
If you are replicating a database between providers, then you are probably running the database yourself on virtual machines rather than using a managed service. This means the databases appear just like any resource on the Internet, so you can connect them exactly the way you would connect two resources across the internet. However, you would be responsible for managing, monitoring, deploying, etc. This takes away from much of the benefit of using cloud services.
Replicating between storage services like Amazon S3 would be easier since it is just raw data rather than a running database. Also, Big Data is normally stored in raw format rather than being loaded into a database.
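As one possible illustration of moving raw objects rather than replicating a live database, here is a hedged sketch that copies objects from an S3 bucket into a Google Cloud Storage bucket; the bucket names and prefix are placeholders, and both sets of credentials are assumed to be configured in the environment:

```python
# Sketch: copy raw data objects from S3 to Google Cloud Storage.
# Bucket names and prefix are placeholders.
import boto3
from google.cloud import storage

s3 = boto3.client("s3")
gcs_bucket = storage.Client().bucket("analytics-landing-zone")  # hypothetical GCS bucket

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="app-data-exports", Prefix="daily/"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="app-data-exports", Key=obj["Key"])["Body"].read()
        gcs_bucket.blob(obj["Key"]).upload_from_string(body)
```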
If the existing infrastructure is on a cloud provider, then try to perform the remaining activities on the same cloud provider.

Amazon EC2 setup best practices for a Rails app with MySQL or Postgres

I have to set up EC2 for a medium Rails app running on Apache2, MySQL, Capistrano, and a few background services. I would like to know the best practices developers usually follow when setting up a Rails app. I am looking for a setup that is easy to scale and covers at least:
auto deployment
security
regular data backup and an easy and quick way to restore the data
server recovery
fault tolerance
I am also interested in how to monitor server status and performance; any other kind of best practice would also be helpful.
PS: also take into account that my app's database will grow fast.
I think a good look into the AWS docs and in particular the architecture center would be the best place to start. However, let me address as many of your questions as I can.
Database
The easiest way to get a scalable, fault-tolerant database on AWS is to use the Relational Database Service. You should read the docs and best practices to ensure you get the most out of it - e.g. using Multi-AZ deployments.
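A minimal boto3 sketch of a Multi-AZ RDS instance (identifier, engine, sizes, and credentials are placeholders, not recommendations):

```python
# Sketch: provisioning a Multi-AZ MySQL instance on RDS.
import boto3

rds = boto3.client("rds")
rds.create_db_instance(
    DBInstanceIdentifier="railsapp-db",       # placeholder name
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                     # GiB
    MultiAZ=True,                             # synchronous standby in a second AZ
    MasterUsername="appuser",
    MasterUserPassword="change-me",           # use Secrets Manager in practice
    BackupRetentionPeriod=7,                  # automated backups kept for 7 days
    StorageEncrypted=True,
)
```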
EC2 Servers
The most recommended way to structure your servers is to decouple them into web servers (serving HTML to users) and app servers (application logic, usually returning JSON or XML). See this architecture example.
However, the key is to put them in an Auto Scaling group behind an Elastic Load Balancer.
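As a sketch of that pattern (the launch template, subnets, and target group ARN are placeholders assumed to exist already):

```python
# Sketch: an Auto Scaling group spanning two subnets, registered with a
# load balancer target group and using ELB health checks.
import boto3

autoscaling = boto3.client("autoscaling")
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="rails-web-asg",
    LaunchTemplate={"LaunchTemplateName": "rails-web", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two AZs for fault tolerance
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/rails/abc123"],
    HealthCheckType="ELB",          # replace instances that fail load balancer health checks
    HealthCheckGracePeriod=300,
)
```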
Automation
If you want to use Capistrano, just install it on your servers. You could create a pre-configured AMI with it installed along with whatever else you want. Alternatively, you could install it in a deployment script. However, the most recommended method for this kind of thing is to use the AWS OpsWorks service, which is essentially Chef in the cloud.
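For the pre-configured AMI route, a small sketch (the source instance ID and image name are placeholders):

```python
# Sketch: baking an AMI from an instance that already has Capistrano and the
# rest of the stack installed, so new Auto Scaling instances start pre-configured.
import boto3

ec2 = boto3.client("ec2")
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: the configured instance
    Name="rails-base-2024-01",
    Description="Rails stack with Apache2, MySQL client, Capistrano",
    NoReboot=True,                      # skip the reboot; filesystem may be less consistent
)
print("New AMI:", response["ImageId"])
```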
Server Recovery & Fault Tolerance
If you use EC2 Auto Scaling and a server becomes unavailable (e.g. the hardware fails or it stops replying to EC2 health checks), Auto Scaling will automatically terminate it and launch a replacement.
With the addition of the ELB and ELB health checks, instances that stop responding to web requests can be brought out of service by the ELB.
You need to read the docs for more info on this.
Backup and Recovery
For backing up data on EBS volumes attached to EC2 instances, use EBS snapshots. However, the best architectures keep EC2 instances stateless: they store nothing except application code, so if one died it wouldn't matter. In that situation all data, including user files, can be stored on S3. On S3 you have a number of backup options, such as Cross-Region Replication and/or archiving data to Glacier.
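A small sketch of snapshotting every EBS volume attached to one instance (the instance ID is a placeholder); this is the kind of script you might run nightly from cron or a scheduled Lambda:

```python
# Sketch: snapshot all EBS volumes attached to a given EC2 instance.
import boto3

ec2 = boto3.client("ec2")
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": ["i-0123456789abcdef0"]}]
)["Volumes"]

for volume in volumes:
    snap = ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description="nightly backup",
    )
    print("Started snapshot", snap["SnapshotId"], "for", volume["VolumeId"])
```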
Monitoring
AWS provides CloudWatch, which gives you hypervisor-visible metrics such as network in/out, CPU utilization, and more. If you want more data, you can publish custom metrics and push things like memory usage. In addition to CloudWatch, you could use a server-level monitoring tool.
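A minimal sketch of pushing one custom metric (the namespace, dimension, and measured value are placeholders; in practice the value would come from the OS or the CloudWatch agent):

```python
# Sketch: publish a custom memory-utilization metric to CloudWatch.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_data(
    Namespace="RailsApp",                   # placeholder custom namespace
    MetricData=[
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": 72.5,                  # percent used, measured elsewhere
            "Unit": "Percent",
        }
    ],
)
```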
Deployment
I recommend AWS CodeDeploy.
Security
Use security groups to open only the ports you want users to be able to connect on. Also use security groups to lock down important ports, e.g. 22, to a specific set of IPs only. You can also use network ACLs to block undesired traffic. AWS provides more information and suggestions here.
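A sketch of that idea: one security group that serves HTTPS to everyone but limits SSH (port 22) to a single office range; the VPC ID and CIDR are placeholders:

```python
# Sketch: security group with public HTTPS and SSH restricted to one IP range.
import boto3

ec2 = boto3.client("ec2")
sg = ec2.create_security_group(
    GroupName="rails-web-sg",
    Description="Web servers: public HTTPS, restricted SSH",
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}]},
    ],
)
```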
I also recommend you read this Whitepaper.