Can someone give an overview of the various choices on GCloud? - google-cloud-platform

I'm a bit confused about the various offerings that Google Cloud has.
Is it basically like this:
Google App Engine is fully managed servers: you push the code and it runs.
They have servers that you manage yourself (Compute Engine): you choose the sizes, spin them up, and push code manually.
Servers that run Docker containers for you (Kubernetes Engine).
Is that a high-level picture of Google Cloud's application-server offerings (excluding their managed services for DB, caching, etc.)?

Have a look at this:
https://cloud.google.com/docs/choosing-a-compute-option
There are also Cloud Functions, which belong to the compute group of GCP:
https://cloud.google.com/functions/docs/

Related

Spring boot/cloud microservices on AWS

I have created a Spring Cloud microservices-based application with Netflix OSS components (Eureka, Config, Zuul, etc.). Can someone explain how to deploy it on AWS? I am very new to AWS, and I have to deploy a development instance of my application.
Do I need to integrate Docker before that, or can I go ahead without Docker as well?
As long as your application is self-contained and you have externalised your configuration, you should not have any issues.
Go through this link, which discusses what it takes to deploy an app to the cloud: Beyond the Twelve-Factor App.
Use AWS Elastic Beanstalk to deploy and manage your application. Dockerizing your app is not a prerequisite for deploying it to AWS.
If you use an EC2 instance, then its configuration is no different from what you do on your local machine/server. It's just a virtual machine; no need to dockerize or anything like that. And since you're new to AWS, I'd suggest doing just that. Once you get your head around it, you can explore other options.
For example, AWS Elastic Beanstalk is a popular option. It provides a secure and reliable configuration out of the box with no effort on your part. And yes, it does use Docker under the hood, but you won't need to deal with it directly unless you choose to, at least in most common cases. It supports a few different ways of deployment, which Amazon calls "Application Environments". See here for details. Just choose the one you like and follow the instructions. I'd like to warn you, though, that whilst Beanstalk is usually easier than EC2 to set up and use for a typical web application, your mileage may vary depending on your application's actual needs.
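To make the Beanstalk flow concrete, here is a hedged sketch using boto3; the application, environment, and bucket names are placeholders, the zipped bundle is assumed to already be in S3, and in practice many people use the eb CLI for the same steps:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register a new application version from a zipped bundle already in S3.
eb.create_application_version(
    ApplicationName="my-spring-app",                # placeholder name
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket",   # placeholder bucket
                  "S3Key": "app-v1.zip"},
)

# Point an existing environment at that version; Beanstalk rolls it out.
eb.update_environment(
    EnvironmentName="my-spring-app-dev",            # placeholder environment
    VersionLabel="v1",
)
```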
Amazon Elastic Container Service (ECS) / Elastic Kubernetes Service (EKS) are also good options to look into.
These services run Docker images of your application; auto scaling, availability, and cross-region replication are taken care of by the cloud provider.
Hope this helps.

Choosing the right AWS Services and software tools

I'm developing a prototype IoT application which does the following:
Receives/stores data from sensors.
Provides a web application with a web-based IDE for users to deploy simple JavaScript/Python scripts, which get executed in Docker containers (see the sketch after this list).
Streams data from the sensors to these containers.
Lets user programs use this data for analytics, monitoring, etc.
Shows the logs of these programs to the user in the web app.
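A minimal sketch of the script-execution piece, using the docker Python SDK; the base image, memory limit, and the user_code variable are assumptions for illustration:

```python
import docker

client = docker.from_env()

# A user-submitted script, as it might arrive from the web IDE.
user_code = "print('hello from a sandboxed user script')"

# Run the script in an isolated container with a memory cap.
container = client.containers.run(
    "python:3.11-slim",                   # assumed base image
    command=["python", "-c", user_code],
    detach=True,
    mem_limit="128m",                     # assumed resource limit
)

# Stream the container's logs back, e.g. to display in the web app.
for line in container.logs(stream=True, follow=True):
    print(line.decode().rstrip())
```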
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
The stack is Node.js, RabbitMQ, Express, MySQL, MongoDB, and Docker.
I'm not interested in using AWS IoT services like AWS IoT and Greengrass.
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a Beta release to a set of 50 users
(hopefully someone else will help/work on a production release)
As far as possible, I don't want to spend a lot of time migrating between services, since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-to-medium traffic? One large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth it to use Swarm for container management? What if I have to use multiple instances?
I also have small scripts which hold status information about the sensors, which is needed by the web app and other services. If I move to multiple instances, how can I make these scripts available to multiple machines?
The same question also applies to the servers, message buses, databases, etc.
My goal is certainly not a production release. I want to complete the product, show that I have users who are interested, and, of course, show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers with the least hassle in AWS, you can use the Amazon ECS service to deploy your containers, or else go with Beanstalk. You also don't need to use Swarm in AWS; ECS will work for you.
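As a rough sketch of what ECS asks of you, using boto3; the image URI, names, ports, and counts below are placeholders, and a cluster with registered container instances is assumed to exist:

```python
import boto3

ecs = boto3.client("ecs")

# Describe how to run one container of the app (image URI is a placeholder).
ecs.register_task_definition(
    family="iot-webapp",
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/iot-webapp:latest",
        "memory": 512,
        "portMappings": [{"containerPort": 3000}],
    }],
)

# Run it as a long-lived service; ECS restarts containers that die.
ecs.create_service(
    cluster="default",
    serviceName="iot-webapp",
    taskDefinition="iot-webapp",
    desiredCount=2,
)
```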
It's always better to scale out rather than up, using small to medium-sized EC2 instances. However, the challenge you will face here is managing and scaling the underlying EC2 instances as well as your Docker containers. That pushes you towards large EC2 instances, to set EC2 scaling aside and focus on Docker scaling (which will add additional cost for you).
Another alternative you can use for the web application part is the AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
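For a sense of how small that operational surface is, a minimal Python handler behind API Gateway (assuming the common proxy integration) is just:

```python
import json

def handler(event, context):
    # 'event' is the API Gateway proxy payload; 'body' holds the request body.
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"ok": True, "received": body}),
    }
```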
You may keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not affect performance. You can ease the inconvenience of "sitting on two chairs" by writing a Terraform script which provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use the Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
PS: I'm a Dockhero maintainer.

Website with Google cloud compute

Total noob question. I want to set up a website on the Google Cloud compute platform with:
a static IP/IP range (an external API requirement)
a simple front-end
average-to-low traffic, with a maximum of a few thousand requests a day
a separate database instance.
I went through the documentation of the services offered by Google and Amazon. I'm not fully sure of the best way to go about it, and I understand that there is no single right answer.
A viable solution is:
Spin up an n1-standard instance on GCP (I prefer to use Debian).
Get a static IP, which is free as long as you don't leave it dangling (reserved but unattached).
Depending on your data, choose Cloud SQL for structured data or Cloud Datastore for unstructured data (see the sketch below).
Nginx is a viable option for the web server. Get started here.
The rest is up to you. What kind of stack are you using to build your app? How are you going to deploy your code to the instance? You might later want to use Docker and Kubernetes (k8s) to get flexibility between cloud providers and for scaling needs.
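If you go the Cloud SQL for MySQL route, application code usually reaches it through the Cloud SQL Proxy listening on localhost. A minimal sketch with PyMySQL, where the credentials and database name are placeholders:

```python
import pymysql

# Assumes the Cloud SQL Proxy is running locally, forwarding
# 127.0.0.1:3306 to your Cloud SQL instance.
conn = pymysql.connect(
    host="127.0.0.1",
    port=3306,
    user="webapp",        # placeholder user
    password="secret",    # placeholder password
    database="site",      # placeholder database
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```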
The easiest way of creating the website you want would be Google App Engine with Datastore as the DB. However, App Engine doesn't support static IPs; this is a deliberate design choice. Is the static IP absolutely mandatory? From the documentation:
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.

How can one seamlessly transfer services hosted on AWS to Google Cloud Platform and vice versa?

Last month there was an outage in AWS, and some sites had to be taken down because of it. I was wondering: if a company uses both AWS and Google Cloud Platform for hosting, how easy would it be for them to transfer their services from the Amazon platform to the Google platform, or vice versa (in case Google Cloud has an outage)? First of all, is it possible at all? And if it is, what would the cost of performing such an activity be, and how much time would it take to get the services running again?
I also did some digging, and what I found was that each of the providers (Google and Amazon) has tools of its own for transferring stored data from other platforms to its platform:
https://cloud.google.com/storage/docs/migrating?hl=en
https://aws.amazon.com/importexport/
Are these the only options available, or is there anything else as well? I hope some AWS/Google Cloud expert will be able to answer my question.
You would need to run your application in both environments, keep the deployments in sync, keep the databases in sync, etc. That can get complicated and expensive.
Then, to automatically fail over from one environment to the other, you could use a DNS service such as DynDNS Active Failover, which monitors the health of your application and starts sending traffic to the other environment if your primary environment becomes unhealthy.
How you manage deployments, how you continually ship data across environments, how much all that will cost, all those questions are extremely specific to the technologies (programming languages, operating systems, database servers) you are currently using. There's no way to give details on how you would accomplish those tasks without having all the details of your system.
Further, if you are using proprietary technologies on a specific platform, such as Amazon Redshift or DynamoDB, you might not find a service on the other platform that provides the same functionality.
I've seen this subject come up a lot since the last AWS outage, but I think maintaining two environments on two different platforms is overkill for all but the most extremely critical applications. Instead, I would look into maintaining a copy of your application in a different AWS region and using Route 53 health checks to fail over.
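To make the Route 53 approach concrete, here is a hedged boto3 sketch of DNS failover between two regions; the domain names, hosted zone ID, and health-check settings are all placeholders:

```python
import boto3

r53 = boto3.client("route53")

# Health-check the primary region's endpoint (placeholder settings).
hc = r53.create_health_check(
    CallerReference="primary-hc-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY/SECONDARY failover records for the same public name.
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder zone ID
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "primary", "Failover": "PRIMARY",
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "primary.example.com"}]}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "CNAME", "TTL": 60,
            "SetIdentifier": "secondary", "Failover": "SECONDARY",
            "ResourceRecords": [{"Value": "secondary.example.com"}]}},
    ]},
)
```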

Are there any cloud libraries that takes EC2 inputs?

This might seem like a somewhat strange question, but are there any cloud libraries (jclouds, Apache Libcloud, etc.) that can take EC2 commands and port them to other clouds?
The idea is basically to enable a company with a native EC2 integration to move to a different cloud provider without having to rewrite the provisioning code.
Not exactly what you're looking for, but you can use a service such as Ravello, which supports multiple public clouds for deployment.
The user interacts with the Ravello API/UI, and Ravello handles the interaction with the various cloud APIs.
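Since the question mentions jclouds and Apache Libcloud: Libcloud does roughly this at the library level, in that the same provisioning code runs against different providers once the driver is swapped. A hedged sketch, where the credentials, region, and the choice of size/image are placeholders:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Pick a provider driver; swapping Provider.EC2 for Provider.GCE (plus the
# matching credentials) is the only provider-specific part.
cls = get_driver(Provider.EC2)
driver = cls("ACCESS_KEY_ID", "SECRET_KEY", region="us-east-1")  # placeholders

# From here on, the code is provider-agnostic.
sizes = driver.list_sizes()
images = driver.list_images()
node = driver.create_node(name="web-1", size=sizes[0], image=images[0])
print(node.id, node.state)
```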