Increase resources on a Compute Engine VM without shutting services down - google-cloud-platform

To automatically manage Cloud resources so that they meet the needs of my infrastructure, I need to increase a VM's resources. However, this is only possible while the machine's status is TERMINATED.
The problem is that the VMs host applications that must not stop running. Do you have any suggestions on how I could proceed, i.e. increase a machine's resources without interrupting its services (database, web, etc.)?
The purpose is to automate my whole infrastructure, so that its quality of service is maintained even when I'm not monitoring it myself.
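For reference, the vertical-resize path I'm referring to looks roughly like this (instance and zone names are placeholders), and the stop step is exactly the interruption I want to avoid:
# Changing the machine type requires the instance to be stopped first
gcloud compute instances stop my-vm --zone europe-west1-b
gcloud compute instances set-machine-type my-vm --zone europe-west1-b --machine-type n1-standard-4
gcloud compute instances start my-vm --zone europe-west1-b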

I suggest taking a look at Managed Instance Groups (MIGs). They offer some of the characteristics you need: high availability, scalability, and the ability to add the instance group to a load balancer.
According to the Google official documentation about MIGs:
Make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.
Regarding the automation you are after, I generally suggest using fully managed services. You can check the summary of GCP services and verify whether they fit your requirements.
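As a hedged sketch (names, sizes, and thresholds below are placeholders, not a recommended configuration), creating a MIG that autoscales from an instance template looks roughly like this:
# Template describing the VM that the group replicates
gcloud compute instance-templates create web-template --machine-type n1-standard-2
# Managed instance group starting with 2 identical VMs
gcloud compute instance-groups managed create web-mig --template web-template --size 2 --zone europe-west1-b
# Scale between 2 and 10 VMs based on average CPU utilization
gcloud compute instance-groups managed set-autoscaling web-mig --zone europe-west1-b --min-num-replicas 2 --max-num-replicas 10 --target-cpu-utilization 0.65 --cool-down-period 90
The load balancer then spreads traffic across however many instances the autoscaler keeps in the group, so an individual VM can be replaced or resized without taking the service down.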

Related

How to get a budget estimate for Copilot?

When I select "scheduled job" while initiating resources, how is the process handled internally?
Can I verify the container in ECS? I guess it will use batch jobs for this option.
# copilot init
Note: It's best to run this command in the root of your Git repository.
Welcome to the Copilot CLI! We're going to walk you through some questions
to help you get set up with an application on ECS. An application is a collection of
containerized services that operate together.
Which workload type best represents your architecture? [Use arrows to move, type to filter, ? for more help]
> Load Balanced Web Service
Backend Service
Scheduled Job
What will be the charges if I select backend service or scheduled job?
Copilot uses Fargate containers under the hood; therefore, your charges for a backend service are based on the number of containers you have running and the CPU/memory size of those containers. The minimum container size is 0.25 vCPU and 512 MB of reserved memory.
For other service types, your pricing depends on a few more things.
Load Balanced Web Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container)
- Application Load Balancer (about $20/month, depending on traffic)
Backend Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container)
Scheduled Job
- Fargate containers, based on size, number, and invocation frequency and duration (i.e. you only pay for the minutes you use)
- State Machine transitions: the first 4,000 transitions in a month are free, which corresponds to an invocation frequency of about once every 21 minutes, assuming there are no retry transitions. Transitions beyond that limit are billed at a low rate.
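As a rough sanity check of that ~$9 figure (illustrative us-east-1 Fargate rates at the time of writing; actual prices vary by region and change over time):
# Smallest Fargate task: 0.25 vCPU + 0.5 GB, running 24/7 (~730 hours/month),
# at roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour
echo "0.25*0.04048*730 + 0.5*0.004445*730" | bc -l   # ≈ 9.01 USD/month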
Other notes
All Copilot-deployed resources are grouped with a number of resource tags. You can use those tags to understand billing activity, and even add your own tags via the --resource-tags flag in copilot svc deploy or copilot job deploy.
The tags we use to logically group resources are the following:
Tag Name             Value
copilot-application  name of the application this resource belongs to
copilot-environment  name of the environment this resource belongs to
copilot-service      name of the service or job this resource belongs to
The copilot-service tag is used for both jobs and services for legacy reasons.
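For example (service name, environment, and tag values below are placeholders), you can attach your own tags at deploy time and later query resources by the Copilot tags:
# Add custom tags when deploying a service
copilot svc deploy --name api --env test --resource-tags team=payments,cost-center=1234
# List every AWS resource carrying the Copilot application tag
aws resourcegroupstaggingapi get-resources --tag-filters Key=copilot-application,Values=my-app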
Copilot refers to these entities as common cloud architectures [1].
I could not find an official document that outlines how these architectures are composed in detail. I guess it is an implementation detail from the creators' perspective; see point one of the AWS Copilot CLI charter [2]:
Users think in terms of architecture, not of infrastructure. Developers creating a new microservice shouldn't have to specify VPCs, load balancer settings, or complex pipeline configuration. They may not know anything about other AWS services. They should be able to specify what "kind" of application it is and how it fits into their overall architecture; the infrastructure should be generated from that.
I have to agree that more sophisticated users always ask themselves what the costs of a specific architecture will look like, and I completely endorse the idea of a dedicated Copilot command such as copilot estimate costs service-xy that could be executed before creating the service.
There is some high-level documentation on the architecture types Load Balanced Web Service and Backend Service. [3]
It mentions the command copilot svc show in conjunction with the --resources flag [4]:
You can also provide an optional --resources flag to see all AWS resources associated with your service.
I think this gives you the ability to estimate costs right after bringing the services up and running.
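For instance (the service name is a placeholder):
# Show the service, including every AWS resource provisioned for it
copilot svc show --name api --resources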
A somewhat more involved approach, which I frequently apply to understand complex constructs in the AWS CDK, is to look at the source code. For example, you could open the corresponding Go file for the Load Balanced Web Service architecture: [5]. Digging into the code, you'll notice that they make it pretty clear that they are using Fargate containers instead of EC2 instances.
That is also what they tell us in the high-level service docs [4]:
You can select a Load Balanced Web Service and Copilot will provision an application load balancer, security groups, an ECS Service and run your service on Fargate.
More on Fargate: [6]
Btw, there is a really interesting comment in the issue section which outlines why they decided against supporting EC2 in the first place [7]:
What features have the team currently explicitly decided against?
EC2 comes to mind. I think we could have built a really nice experience ontop of EC2 instances - but I think there's a difference between building an "abstraction" around ECS / Fargate and building an "illusion" around ECS / EC2. What I mean by that is that if we created the illusion of a fully hands off EC2 experience, customers might be surprised that they are expected to be in charge of patching and security maintenance of those instances. This isn't something that Copilot, a CLI can really automate for people (realistically). We're still trying to figure out a good way to expose EC2 to folks - but we definitely see Fargate as the future.

When to use AWS Lambda and when to use Kubernetes (EKS)?

We are trying to evaluate the best ways to scale our J2EE web application and use hosting services with AWS. Are there reasons why we would use the Lambda service over Kubernetes (EKS)? Although it seems that Lambda can scale functional units, I'm not clear why anyone would use that as a substitute for Kubernetes, given Kubernetes can replicate containers based on performance metrics.
They serve different purposes. If you want horizontal scalability at the "EC2/pod/container" level and want to handle availability yourself (through k8s, of course), go for Kubernetes.
If you have a straightforward function doing one particular thing and you don't want to bother with the operating cost of managing a cluster or packaging it, then you can let Lambda administer it for you (at the time of writing, you would pay 20 US cents per million calls). It is just another layer of abstraction on top of a system that is probably similar to Kubernetes, scaling your function as needed.
The goal of these technologies is to remove as much overhead as possible between you and the code; managing infrastructure can be painful. To summarize, serverless is to Kubernetes what Kubernetes is to containers.
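To illustrate the difference in operational surface, deploying a single function is roughly the following (function name, handler, role ARN, and zip file are placeholders); there is no cluster, node pool, or Deployment object to manage:
# Package and create one function; scaling, patching and placement are handled for you
zip function.zip handler.py
aws lambda create-function --function-name my-fn \
  --runtime python3.9 --handler handler.lambda_handler \
  --role arn:aws:iam::123456789012:role/my-lambda-role \
  --zip-file fileb://function.zip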
To make a conscious decision, take the following into account:
- Does your budget cover the operation and maintenance of infrastructure?
- Do you have expertise in Kubernetes?
- How much would it cost to redesign your J2EE app into serverless-ready code?
- Your timeline (of course...)
- Based on the AWS resources you will use, how much do you save (or not) by implementing a k8s cluster (database service?, EBS, EC2s, etc.)?

How to distribute surplus user traffic from a Google Compute Engine VM to Google App Engine? (running Django with Apache)

I am running Django on a Google Compute Engine VM instance using Apache and mod_wsgi. However, I am unsure of the number of concurrent requests my app will receive from users, and I would like to know whether I can automatically transfer the surplus load from the VM to App Engine to prevent the server from crashing.
I have been unable to find any solution except running a Kubernetes cluster or Docker containers to manage the load effectively, but I need to be free of this hassle and send the excess load off to GAE.
If you want to analyze the traffic, latency, and load of your resources and applications, I would recommend starting with Stackdriver Trace.
As per documentation, Stackdriver Trace is a distributed tracing system that collects latency data from your applications and displays it in the Google Cloud Platform Console. You can track how requests propagate through your application and receive detailed near real-time performance insights. Stackdriver Trace automatically analyzes all of your application's traces to generate in-depth latency reports to surface performance degradations, and can capture traces from all of your VMs, containers, or Google App Engine projects.
Once you have determined the user traffic, or at least have a better idea of it, you can try using "Instance Groups".
GCE offers two kinds of VM instance groups:
Managed instance groups (MIGs) allow you to operate applications on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multi-zone) deployment, and auto-updating.
Unmanaged instance groups allow you to load balance across a fleet of VMs that you manage yourself.
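If you go the managed instance group route, a hedged sketch of putting it behind HTTP(S) load balancing (names and zone below are placeholders; the URL map and forwarding rule are omitted) would be roughly:
# Health check and backend service for the HTTP(S) load balancer
gcloud compute health-checks create http django-hc --port 80
gcloud compute backend-services create django-backend --protocol HTTP --health-checks django-hc --global
# Add the managed instance group as the backend that receives the traffic
gcloud compute backend-services add-backend django-backend --instance-group django-mig --instance-group-zone us-central1-a --global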

Architecture Questions to Autoscale Moodle on Google Cloud Platform

We're setting up a Moodle for our LMS and we're designing it to autoscale.
Here are the current stack specifications:
- Moodle Application (App + Data) baked into an image and launched into a Managed Instance Group
- Cloud SQL for database (MySQL 5.7 connected through Cloud SQL Proxy)
- Cloud Load Balancer - HTTPS load balancing with the managed instance group as backend + session affinity turned on
Questions:
Do I still need Redis/Memcached for my session? Or is the load balancer session affinity enough?
I'm thinking of using Cloud Filestore for the Data folder. Is this recommendable vs another Compute Engine?
I'm more concerned about the session cache and content cache as users increase in the future. What would you recommend adding to the mix? Any advice on the CI/CD would also be helpful.
So, I can't properly answer these questions without more information about your use case. Anyway, here's my best :)
How bad do you consider it to be if some users are forced to re-login when a machine is taken out of the managed instance group? Related to this, how spiky do you foresee your traffic being? How many users can a machine serve before the autoscaler kicks in and machines are added to or removed from the pool (i.e., how dynamic do you think your app will need to be)? By answering these questions you should get an idea. Also, why not use Datastore/Firestore for user sessions? The few tens of milliseconds of latency shouldn't compromise the snappy feeling of your app.
Cloud Filestore uses NFS, and you might hit some of the NFS idiosyncrasies. Are you OK with hitting and dealing with those? Also, what is an acceptable latency? How big are the blobs of data you will be saving? If they are small enough, you are very latency sensitive, and you want atomicity in the read/write operations, you can go for Cloud Bigtable. If latency is not that critical, Google Cloud Storage can do it for you, but you also lose atomicity.
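If you do end up adding a shared session cache and an NFS share for moodledata, a hedged provisioning sketch (names, region, tier, and sizes below are placeholders, not a sizing recommendation) could look like this:
# Memorystore (Redis) instance for Moodle sessions / application cache
gcloud redis instances create moodle-cache --size 1 --region us-central1 --tier basic
# Filestore share for the moodledata folder, mounted over NFS from the MIG instances
gcloud filestore instances create moodle-data --zone us-central1-a --tier STANDARD --file-share name=moodledata,capacity=1TB --network name=default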
Google Cloud CDN seems what you want, granted that you can set up headers correctly. It is a managed service so it has all the goodies without you lifting a finger and it's cheap compared to serving stuff from your application/Google Cloud Storage/...
For CI/CD, Cloud Build seems the easy option, unless you need more advanced features that it does not yet support.
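For example (project and image names below are placeholders), building and pushing the Moodle image with Cloud Build is a single command:
# Build the image with Cloud Build and push it to the project's Container Registry
gcloud builds submit --tag gcr.io/my-project/moodle:latest .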
Please provide more details so I can edit and focus my answer.
There is a study on this setup; for autoscaling, using Redis Memorystore showed larger network bandwidth from the cache server than a Compute Engine instance with Redis installed on it:
moodle autoscaling on google cloud platform
Regarding the Moodle data folder, it shows that a Compute Engine instance serving NFS should give enough performance compared to Filestore, which is much more expensive, as the speed also depends on the disk size.
I used this topology for the implementation:
Autoscale Topology Moodle on GCP

Microservices and cloud resource limitations

I am at the beginning of a large migration from a single monolithic web service to a collection of microservices using Spring Cloud/Spring Cloud Netflix. Through my research of microservices I understand that the lines of demarcation between services should mirror the separations of concerns between them. An additional factor affecting separation is which services are required to scale individually.
As a concrete example, depending on the level of granularity desired, a microservice environment could end up like this:
Accounts (containing Signup, Login, Profiles, etc.)
Store (containing Products, Payments, Reporting, Inventories, etc.)
Chat/Social (containing chat rooms, user statuses, etc.)
...
Or it could end up with each of the areas of concern in brackets represented by their own microservice, e.g:
Accounts
Signup
Login
...
I believe there is a preference in the microservices community for the second approach, and I tend to agree. However, the issue I have is one of hosting and resource limitations.
In the migration I would like to streamline the provisioning of resources and the installation of updated services. Since we use the AWS stack, Elastic Beanstalk seemed like the perfect choice. While researching Elastic Beanstalk though I was rather disheartened to discover that there was a limit of 25 applications per account. Not only that, but EC2 has a limit of 20 instances per region per account. It seems like a microservice architecture will hit that limit very quickly, especially when you add multiple environments (staging and production) for each service into the mix, let alone websites and internal tooling.
With all of the amazing content that I've seen around the web regarding microservices, I'm surprised and somewhat disappointed at the lack of information regarding the actual hosting of microservices beyond the development of them. Have I missed something? Is there any information about deploying more than a couple of microservices on AWS?
It is my understanding that Netflix use AWS for their own microservice hosting, beyond requesting additional resources from Amazon and throwing money at it, are there other solutions? Would their Asgard tool help with this issue (possibly by handling the sharing of instances between services) or would it result in the same outcome?
As mentioned in the comments above, AWS will raise your limits if you have a legitimate use case - why wouldn't they? They are in the business of selling you services.
But since you have asked for suggestions other than increasing those limits, and since you are in the early stages of designing your solution, you should consider basing part of your microservices architecture on Docker or another container/container-like service (my own preference would be AWS's container service). Depending on the nature of your solution, even within the limit of 20 EC2 instances per region, if you had large enough instances running you could fit dozens (or even hundreds) of lightweight Docker images on each of those 20 allocated instances - so potentially hundreds or thousands of walled-off microservices running on those 20 EC2 instances.
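As a quick illustration of that density argument (image names and limits below are placeholders), each service can be pinned to a small CPU/memory slice so that many of them share a single host:
# Run several lightweight services side by side on one EC2 host,
# each capped at a fraction of a vCPU and a small memory slice
docker run -d --name accounts --cpus 0.25 --memory 256m myorg/accounts:latest
docker run -d --name store --cpus 0.25 --memory 256m myorg/store:latest
docker run -d --name chat --cpus 0.25 --memory 256m myorg/chat:latest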
Using an entire EC2 instance for each of the many microservices you may have could end up being a lot more expensive than it needs to be.
You should also consider using AWS Lambda for at least portions of your microservice architecture - it's the 'ultra-micro service' tool also offered by AWS.