Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
We are getting ready to deploy a new app in the Amazon cloud, using EC2, RDS, and elastic load balancers. RDS would be sharded. Given the difficulty of managing and monitoring anything beyond a few servers, one can see how quickly the task could get out of hand. Amazon's interfaces allow you to do all this, but we would have to script it all ourselves.
I was wondering what others have done. There is RightScale, for managed solutions. Has anyone found any other companies, or open source frameworks, that do this kind of thing? We are looking at:
Monitoring EC2, load balancers, RDS.
Spinning up new instances of the above automatically on predefined load levels.
Sending alerts and taking resources offline automatically when thresholds are crossed.
Promoting new software/upgrades in PHP and MySQL.
Taking a number of servers offline for maintenance/troubleshooting.
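The alerting step in the list above can be scripted directly against the AWS APIs. A minimal sketch using boto3 (the AWS SDK for Python) is below; the instance ID, SNS topic ARN, and threshold are placeholder values, not anything from the question, and the actual API call is left commented so the snippet stands alone.

```python
# Sketch: build the arguments for a CloudWatch alarm that notifies an SNS
# topic when an EC2 instance's CPU stays high. All identifiers are placeholders.
def cpu_alarm_params(instance_id, topic_arn, threshold=80.0):
    """Build keyword arguments for CloudWatch's put_metric_alarm call."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                # 5-minute samples
        "EvaluationPeriods": 2,       # two consecutive breaches before alarming
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS topic that delivers the alert
    }

# To actually create the alarm (requires boto3 and AWS credentials):
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(
#     **cpu_alarm_params("i-0123456789abcdef0",
#                        "arn:aws:sns:us-east-1:123456789012:ops-alerts"))
```

The same pattern (build parameters, make one API call) extends to Auto Scaling policies for the spin-up/spin-down items in the list.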
Any thoughts would be much appreciated.
The type of service you are looking for - automated provisioning, scaling in/out, and monitoring - is generally referred to as PaaS (Platform as a Service). The idea is that you submit your application to the PaaS system and it manages the complete life cycle of your application.
There are several PaaS providers available that might fit your needs. There's a comparison available here: Looking for PaaS providers recommendations
You should consider your requirements carefully and see which provider is right for you in terms of:
Cloud Support: Do you need just EC2 or maybe additional clouds?
Language support: Some providers target specific coding frameworks and languages
Support
Pricing
Open/Closed source
Disclaimer: I work for GigaSpaces, developer of the Cloudify open-source PaaS Stack.
You could have a look at Scalr. They offer these services on their own platform, but you can also download the software they use and set it up on your own.
After Amazon EC2 they expanded into other cloud services as well, so you can run your Scalr-managed instances on virtually all of the major cloud providers.
It is very feature-rich, but so far I haven't tested it myself.
You could try Xervmon. They offer an integrated cloud management suite of tools to deploy, manage, and monitor Amazon AWS along with several other providers. They offer managed services as well.
We are building a small micro service architecture which we would like to deploy to AWS.
The number of services is growing, so we need a solution that allows horizontal scaling.
What's the best way to build this on AWS? We don't have much experience with Docker; we used EC2-based setups in the past.
I'm thinking about something like:
Use ECR to create a private Docker repository. We push release images there.
Use ECS to automatically deploy those images.
Is this correct? Or should we go for Kubernetes instead? Which one is better?
Our needs:
Automated deployments based on Docker images
Deploy to test and prod environments
The prod cluster should be able to run multiple instances of certain services behind load balancing.
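The ECR-to-ECS flow described above boils down to pushing an image, registering a task definition revision that references it, and pointing the service at the new revision. A hedged sketch of that last step with boto3 follows; the cluster, service, and task definition names are placeholders, and the real API call is commented out.

```python
# Sketch of the deploy step: point an ECS service at a new task definition
# revision (which references the freshly pushed ECR image) and let ECS roll
# it out behind the load balancer. All names below are placeholders.
def deploy_params(cluster, service, task_definition):
    """Build keyword arguments for ECS's update_service call."""
    return {
        "cluster": cluster,
        "service": service,
        "taskDefinition": task_definition,
        "forceNewDeployment": True,  # replace tasks even if the revision is unchanged
    }

# To run the deployment for real (requires boto3 and AWS credentials):
# import boto3
# boto3.client("ecs").update_service(**deploy_params("prod", "api", "api-task:42"))
```

ECS then drains old tasks from the load balancer and registers the new ones, which covers the "multiple instances with load balancing" requirement.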
Thanks in advance for any advice!
AWS container service team member here. I agree with others that answers may be heavily skewed by personal opinion. If you come in with good AWS knowledge but no container knowledge, I would suggest ECS/Fargate. Note that deploying on ECS requires a bit of CloudFormation mechanics, because you need to deploy a number of resources (load balancers, IAM roles, etc.) in addition to the ECS tasks that embed your containers. It can be daunting if not abstracted.
We have created a few tools that allow you to offload some of that boilerplate. In order of what I would suggest for your use case:
Copilot, a CLI tool that can prepare environments and deploy your app according to specific patterns. Have a look here.
The Docker Compose integration with ECS. This is a new integration we built with Docker that allows you to start from a simple Docker Compose file and deploy to ECS/Fargate. See here.
CDK, a software development framework for defining your AWS infrastructure as code. See here. There are also specific CDK ECS patterns if you want to go down that route.
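To give a feel for the Compose route, a file of roughly this shape is what the integration consumes; the ECR image name and port are placeholder values, not anything from the question.

```yaml
# Minimal Compose file of the kind the ECS integration can deploy.
# The image reference is a placeholder ECR image.
services:
  web:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
    ports:
      - "80:80"
```

With a Docker ECS context configured, `docker compose up` translates a file like this into the corresponding ECS/Fargate resources.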
According to the Google Cloud documentation, Cloud Dataflow is serverless while Cloud Firestore is fully managed. If serverless means that the infrastructure and resources are managed by the cloud provider, then what's the difference between these two paradigms?
There is no strict definition of these two terms. Serverless and fully managed are very close and share the same core idea: don't worry about the infrastructure, focus on your business value.
For me, and for most Google products, serverless means "pay as you use": with no traffic you pay nothing; with a lot of traffic, scaling is automatic and you pay according to the traffic.
Cloud Run, Cloud Functions, App Engine standard, Firestore, Datastore, Dataproc, Dataflow, and AI Platform are examples of serverless products.
Other services are managed but not serverless, like Cloud SQL, Bigtable, or Spanner. You always have a minimal number of VMs/nodes up and you pay for them, traffic or not. However, you have nothing to worry about: patching, updates, networking, backups, HA, redundancy (...) are managed for you. App Engine flexible belongs to this category.
Finally, there are hybrid products, like Cloud Storage or BigQuery: you pay as you go for the processing (BigQuery) or the traffic (Cloud Storage), but the storage is billed even when there is no traffic.
This is for GCP. If you look at other cloud providers, the definitions are not the same. For example, on AWS, Lambda and Fargate are both serverless products, but with Lambda no traffic means a $0 bill, while Fargate keeps at least one VM up and charges you for it (it doesn't scale to zero).
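The zero-traffic difference can be made concrete with a toy calculation. The hourly rate and per-request price below are invented numbers for illustration only, not real AWS or GCP pricing.

```python
# Toy comparison: a scale-to-zero service vs. one that keeps a minimum
# instance running. All prices are invented for illustration.
HOURS_PER_MONTH = 730

def scale_to_zero_bill(requests, price_per_million=0.20):
    """Lambda-style billing: no traffic means no charge at all."""
    return requests / 1_000_000 * price_per_million

def min_instance_bill(requests, hourly_rate=0.04, price_per_million=0.20):
    """Fargate-style billing: one instance is always up and always billed."""
    return HOURS_PER_MONTH * hourly_rate + requests / 1_000_000 * price_per_million

# With zero traffic the scale-to-zero service costs nothing, while the
# always-on one still bills the idle instance for the whole month.
```

The fixed term `HOURS_PER_MONTH * hourly_rate` is exactly the "at least one VM up" cost the answer describes.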
Be careful: serverless has become a trendy marketing word. Be aware of what it means for you and your use cases!
As per the documentation, Google Cloud’s serverless platform lets you write code your way without worrying about the underlying infrastructure, which is fully-managed by Google.
I would say that there is a small difference in the meaning of the two concepts; however, their meanings do overlap in many ways.
When I think of serverless, I picture the code that makes a service or application run: your favorite programming language, runtimes, frameworks, and libraries. You can even choose to deploy as functions, apps, source code, or containers, since the service is fully managed.
On the other hand, I believe that fully-managed refers to the architecture that a service uses, namely, what is really happening behind the scenes. Google handles the configuring, provisioning, load balancing, sharding, scaling, and infrastructure management, so you can focus on building great serverless applications.
Notice the paradox in my explanation. I hope this helps a bit.
In my experience, Google tends to use the term "fully managed" for database and caching-layer technologies where you don't have to write any code.
On the other hand, "serverless" tends to be used where you have to deploy some sort of code. On most cloud platforms, if you write the code, you own it and you manage it. So Google won't claim Dataflow is a fully managed platform.
That's my explanation anyway, happy to be corrected. :)
It will be interesting to see what happens if GCP comes up with a serverless database.
Let's take the example of AWS. AWS offers many kinds of computing services, but the most popular are EC2 and Lambda. With EC2, AWS essentially offers you an empty VM and you install everything you want on it, but that means you also have to manage upgrading the servers, maintaining them, and many other server-related things. Meanwhile, with Lambda you just upload your code and AWS takes care of the rest.
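The "just upload your code" model can be illustrated with a minimal Lambda-style handler. The event shape here is an assumption made up for the example, not a fixed AWS format.

```python
# Minimal AWS Lambda-style handler: the platform invokes this function once
# per event; there is no server for you to provision or patch. The "name"
# field in the event is a made-up example key.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

On EC2 you would instead run and maintain a full web server process (and its OS) just to serve the same response.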
I have been going into the details of all the major AWS services, like EC2, S3, VPC, Volumes, EIP, Subnets, Gateways, Route 53, Auto Scaling, ELB, RDS, DynamoDB, Redshift, Kinesis, the video transcoder, etc.
But going into detail on each topic from the official AWS documentation is tough, and hard to remember.
What are the patterns, guidelines, and practice tests I can follow to cover the topics thoroughly?
Also, do I need to purchase the question sets provided by many websites, or can I just go with the freely available ones?
There is no replacement for experience. Sit at the AWS console and spend time practicing with all the major AWS services (the ones you listed). Then take the time to read the FAQs and white papers. For added knowledge, watch the AWS videos on YouTube. Amazon has good training videos on their training website. I also recommend A Cloud Guru for training videos. Another good resource is Qwiklabs.
It is possible to memorize enough to pass the exam by taking practice tests, but then your certification will be worthless to an employer. If your goal is a job, then take the time to really understand how the AWS cloud works and the services offered.
I have seven AWS certifications, so I have a solid understanding of AWS and their exams. I also have more than 10 years of actually working with AWS as part of my job.
[Update]
Time and again I see people put emphasis on taking practice tests. Don't do it. Take a practice test only when you know you are ready for the exam. I recommend taking all the Qwiklabs AWS labs. Do all of them (sometimes several times), follow the above suggestions, and you won't need a practice test. You will fly through the exam.
RTFM: The AWS Documentation
The majority of AWS services revolve around Amazon EC2 and Amazon S3. Therefore, you would gain a lot by actually reading the manuals for these services:
Amazon S3 Developer Guide
Amazon EC2 User Guide for Linux Instances or Amazon EC2 User Guide for Windows Instances
Yes, they are big. You don't have to read every word. Instead, download the PDF and look through the entire guide. Read the headings. Look at the pictures. Read the sections that grab your attention.
You'll actually learn quite a lot!
The other services are also important, but if you don't have the time to learn it all, at least read the FAQs, e.g.:
Amazon VPC FAQs
Amazon RDS FAQs
Amazon Redshift FAQs
Most important, however, is having actual experience using the services. The exams want to test your actual knowledge of AWS services, not how much you've crammed for the exam.
The best way to prepare for AWS is to go through the FAQs for each topic mentioned in the official documentation.
Apart from the official documentation, you should opt for some practice questions. You can try Whizlabs, Udemy, A Cloud Guru, etc.
And for your final revision, to run through the topics in brief, you can go to http://jayendrapatil.com, where all the topics are covered with detailed explanations.
Preparing for the AWS Solutions Architect exam requires you to plan the process before you start.
First and foremost, get the details of each topic, like EC2, VPC, S3, and ELB, from the official documentation, as well as from the videos available for them. The details are covered in the AWS whitepapers for the CSAA exam.
After getting the details of each topic, go through practice tests from the many sources available.
Please purchase the question sets rather than relying on the free ones, as the free versions contain conflicting answers.
When you are done with the above process, you can purchase the official sample paper from AWS.
Note: please also go through the best practices for each service, as applied in practical situations.
You will get more clarity on the overall AWS services in terms of cost, availability, and latency, and on making your infrastructure and applications durable and available.
Pivotal gives you the option to deploy your application with the help of Cloud Foundry inside the AWS cloud. I am a little confused about how PCF and AWS differ. I know that PCF provides a solution with which a host (client) can build their own cloud on-premises.
AWS does not provide anything like that, and it has a lot of other services for elasticity, agility, and scalability.
But these two are huge in terms of offerings. Please help in differentiating the two.
PCF is a commercial cloud platform (product) built by Pivotal on top of open source Cloud Foundry. PCF can be deployed on AWS, GCP, OpenStack, VMware vSphere, and some other IaaS platforms.
You should consider using PCF if you want to run your own cloud platform and you don't want to start from scratch.
When using PCF, you can deploy, configure and operate other products provided by Pivotal and their partners, or build your own ones based on your needs.
A typical use case for PCF is when companies want to deploy their applications on-premises for any reason (cost efficiency, flexibility, legal regulations, control over infrastructure, etc.). In this case they use PCF as leverage to build and operate their own (private) cloud offering. Another use case is when companies don't want to depend on the underlying IaaS infrastructure. In this scenario, they rely on the fact that PCF is IaaS-agnostic to give them the ability to migrate if they need to.
These can help you in finding the real difference between PCF and AWS.
https://aws.amazon.com/types-of-cloud-computing/
https://cloudacademy.com/blog/cloud-foundry-benefits/
In two lines:
PCF - can be used as a PaaS (Platform as a Service)
AWS - can be used as an IaaS (Infrastructure as a Service)
The most distinct difference between IaaS and PaaS is that IaaS offers administrators more direct control over operating systems, but PaaS offers users greater flexibility and ease of operation.
SaaS examples: BigCommerce, Google Apps, Salesforce, Dropbox, MailChimp, ZenDesk, DocuSign, Slack, Hubspot.
PaaS examples: AWS Elastic Beanstalk, Heroku, Windows Azure (mostly used as PaaS), Force.com, OpenShift, Apache Stratos, Magento Commerce Cloud.
IaaS examples: AWS EC2, Rackspace, Google Compute Engine (GCE), Digital Ocean, Magento 1 Enterprise Edition*.
Reference: BigCommerce
So, technically, Pivotal Cloud Foundry is a cloud abstraction framework. Its intention is to wrap preexisting commercial cloud offerings so that adopters are protected (to a degree) from vendor lock-in: the PCF cloud API is a mapping and abstraction layer over other cloud delivery systems. Its core advantage is that you can always choose the cheapest provider without needing to rebuild your delivery/deployment infrastructure.
It's basically a re-imagining of the HAL concept (if you're familiar with that), but instead of enabling a choice of hardware under a single software solution, it enables a choice of cloud.
The main reason for using PCF is to let you benefit from competition. Cloud providers deliberately try to couple you to their particular flavor of system so that it takes a lot of effort to move away from them; they can then raise prices because customers are sufficiently dependent on their particular service and switching is costly.
Pivotal may offer a cloud of their own, but the idea of open source Cloud Foundry is not to force that choice on the business or consumer.
I've been delving into the world of AWS and, with very little server-management experience under my belt, I'm quickly getting lost!
I'm looking at creating a system that uses Route 53, Elastic Load Balancing, EC2, RDS, S3 (possibly with CloudFront as well) so I can host a user generated content website that also streams video.
So I've been looking at the following books:
Host Your Web Site On The Cloud: Amazon Web Services Made Easy
Programming Amazon Web Services: S3, EC2, SQS, FPS, and SimpleDB
Programming Amazon EC2: Run Applications on Amazon's Infrastructure with EC2, S3, SQS, SimpleDB, and Other Services
If I had to go for one of these what would you recommend?
Most importantly, are there any resources you can recommend for a newbie like myself to quickly learn and understand the nuances of AWS?
TIA
Although all of those resources are good, in my experience the best way to dive into AWS is CloudFormation. With CloudFormation you can script most, if not all, of your AWS resources in a single JSON template. By writing your CloudFormation scripts and looking through the documentation and sample scripts, you will start to get acquainted with how all of the AWS toolsets work.
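To give a feel for what such a template looks like, here is a minimal one built as a Python dict and dumped to JSON; the AMI ID and instance type are placeholder values, not real recommendations.

```python
import json

# Minimal CloudFormation template describing a single EC2 instance.
# "ami-12345678" and "t1.micro" are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",
                "InstanceType": "t1.micro",
            },
        }
    },
}

# Print the JSON you would upload to CloudFormation as a stack template.
print(json.dumps(template, indent=2))
```

Everything in the question's stack (ELB, RDS, S3 buckets, Route 53 records) is declared as more entries under `Resources` in the same way, which is what makes CloudFormation a good tour of the services.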
Most importantly, are there any resources you can recommend for a newbie like myself to quickly learn and understand the nuances of AWS?
As mentioned above: CloudFormation.
However to make sure I answer your question:
If I had to go for one of these what would you recommend?
I have read all three resources listed, and I found Programming Amazon EC2 to be the most useful in understanding the AWS toolset.