Is "Zone" different among projects? - google-cloud-platform

According to the documentation, a "zone" could be mapped to a different cluster for different projects, but is it true that a zone may map to different clusters across projects?
I've never seen a zone mapping differ across projects. Also, since each zone provides different machine types, I'm not even sure how a zone could be mapped to different clusters across projects.
If it does, is there a way to find out which cluster my zone is mapped to, like there is in AWS?
Thanks!

A cluster, as defined here, is simply a set of physical servers, networking, disks, and cooling; in short, a datacenter. It's impossible to know which one you are on; it's Google's internal management.
A zone sits on top of one or several clusters. If the initial cluster (i.e. datacenter) is too small, Google may choose to expand it or, if that's not possible, to add another one. But from the user's point of view, it's invisible!
Google tries to locate all the projects of the same organization in the same cluster, especially for security and performance reasons in case of VPC peering or Shared VPC. However, this is not guaranteed, and because you don't know the placement, you can't check it.
For example, if 2 projects are on 2 different clusters in the same region, there isn't an issue, but if you create a VPC peering between them, it's not optimized. To solve this, Google can migrate Compute Engine instances from one cluster to another, even without stopping the VM (this is called "live migration"); you aren't able to see anything of this VM placement.
Generally the cluster is consistent for a project. In case of huge resource usage (HPC, for example, or a requirement of 10k+ CPUs), it could be different, but Googlers will have more detail in this case if you are a big CPU consumer.
I tried to create a GKE regional cluster in europe-west3 with the N2 machine type, which is only available in 2 of the 3 zones, and I got an error.
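If you want to check that kind of zonal availability yourself, here is a minimal sketch assuming the google-cloud-compute Python client library; the project ID and machine type are placeholders.

```python
# Minimal sketch (assumption: the google-cloud-compute Python client library).
# "my-project" and "n2-standard-4" are placeholders; the output shows which
# zones of a region actually offer the machine type.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "europe-west3"
MACHINE_TYPE = "n2-standard-4"

zones_client = compute_v1.ZonesClient()
mt_client = compute_v1.MachineTypesClient()

for zone in zones_client.list(project=PROJECT):
    if not zone.name.startswith(REGION):
        continue
    offered = {mt.name for mt in mt_client.list(project=PROJECT, zone=zone.name)}
    status = "offers" if MACHINE_TYPE in offered else "does NOT offer"
    print(f"{zone.name} {status} {MACHINE_TYPE}")
```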

Related

Is there a distributed multi-cloud service mesh solution that's available? Something that cuts across GCP, AWS, Azure and even on-premise setup?

Is there a distributed multi-cloud service mesh solution that is available? A distributed service mesh that cuts across GCP, AWS, Azure and even on-premises setups?
Nathan Aw (Singapore)
Yes, it is possible with the Istio multi-cluster, single-mesh model.
According to istio documentation:
Multiple clusters
You can configure a single mesh to include multiple clusters. Using a multicluster deployment within a single mesh affords the following capabilities beyond that of a single cluster deployment:
Fault isolation and fail over: cluster-1 goes down, fail over to cluster-2.
Location-aware routing and fail over: Send requests to the nearest service.
Various control plane models: Support different levels of availability.
Team or project isolation: Each team runs its own set of clusters.
A service mesh with multiple clusters
Multicluster deployments give you a greater degree of isolation and availability but increase complexity. If your systems have high availability requirements, you likely need clusters across multiple zones and regions. You can canary configuration changes or new binary releases in a single cluster, where the configuration changes only affect a small amount of user traffic. Additionally, if a cluster has a problem, you can temporarily route traffic to nearby clusters until you address the issue.
You can configure inter-cluster communication based on the network and the options supported by your cloud provider. For example, if two clusters reside on the same underlying network, you can enable cross-cluster communication by simply configuring firewall rules.
Single mesh
The simplest Istio deployment is a single mesh. Within a mesh, service names are unique. For example, only one service can have the name mysvc in the foo namespace. Additionally, workload instances share a common identity since service account names are unique within a namespace, just like service names.
A single mesh can span one or more clusters and one or more networks. Within a mesh, namespaces are used for tenancy.
Hope it helps.
An alternative could be using this tool with a Kubernetes cluster that spans all of your selected cloud providers at the same time, without the hassle of managing them all separately.

GCP Compute Engine Resource Not Available

I have 4 VM instances in the asia-south1 region, in two of its three zones (asia-south1-a & asia-south1-c). I can't boot up all four instances due to the error below about resources not being available in the current zone or region.
I can't move the instances to another region because the public IPs would change, and in the asia-south1 region all zones show the same error.
I even tried to create a new instance in a different zone such as asia-south1-b, but got the same error.
The zone 'projects/some-projectname/zones/asia-south1-c' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
EDIT: Tried all three zones (a, b, c) of asia-south1.
What can I do?
Yes, I have faced the same issue. I ended up switching to asia-south1-c. This is a recent error; I first encountered it 1.5 weeks ago. If you must use asia-south1-a, then you can raise a support ticket. Anyway, the issue should be fixed soon, since many users are facing the same problem.
In case you have snapshots of the instances' disks, you can use a snapshot as an existing disk to create new instances in any region/zone. If you do not have a snapshot, you cannot create one now, since the instance has to be running to do so.
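As an illustration of the snapshot approach, here is a minimal sketch assuming the google-cloud-compute Python client library; the project, snapshot name, zone, and machine type are placeholders, and exact field names or waiting semantics may vary by library version.

```python
# Minimal sketch (assumption: the google-cloud-compute Python client library).
# Project, snapshot, zone, and machine type below are placeholders.
from google.cloud import compute_v1

PROJECT = "some-projectname"
NEW_ZONE = "asia-south1-b"               # a zone that still has capacity
SNAPSHOT = "my-boot-disk-snapshot"       # snapshot of the original boot disk

disks = compute_v1.DisksClient()
instances = compute_v1.InstancesClient()

# 1. Create a disk in the new zone from the snapshot.
disk = compute_v1.Disk(
    name="restored-boot-disk",
    source_snapshot=f"projects/{PROJECT}/global/snapshots/{SNAPSHOT}",
)
disks.insert(project=PROJECT, zone=NEW_ZONE, disk_resource=disk).result()

# 2. Boot a new instance from that restored disk.
instance = compute_v1.Instance(
    name="restored-instance",
    machine_type=f"zones/{NEW_ZONE}/machineTypes/e2-medium",
    disks=[compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        source=f"projects/{PROJECT}/zones/{NEW_ZONE}/disks/restored-boot-disk",
    )],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)
instances.insert(project=PROJECT, zone=NEW_ZONE, instance_resource=instance).result()
```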
Edit:
We are continuously adding more and more resources to avoid situations like this. If your workload is predictable long term, you may want to purchase a commitment and reserve the resources you will use at a discounted price.

Do we have anything similar to Azure "Availability Set" in GCP and AWS

Context:
We are prototyping a multi cloud deployment of our application (based on micro services).
To balance high availability and co-location, we used the "Availability Sets" feature in Azure, which roughly ensures that Azure platform/service upgrades don't happen in two distinct sets simultaneously.
Availability sets Azure
Scenario:
I couldn't find anything similar in Google Cloud Platform or AWS, so in this case we would have to go with separate "Zones" for high availability.
One argument in favor of Availability Sets (theoretically) is that they are closer together than Zones, since the former are inside a single datacenter.
Do we have anything close to "Availability Sets" in GCP and AWS? Please share your thoughts.
Regarding GCP, there are several solutions for high availability. In general, it is recommended to follow the guidance on Designing Robust Systems and Building Scalable and Resilient Applications.
By designing robust systems you are ensuring that your VMs remain available in case of a single-instance failure, an instance reboot, or an issue with the zone.
What looks most similar to Availability Sets is Managed Instance Groups (MIGs).
The managed instance group auto-updater allows you to deploy new versions of software to instances in your MIG, supporting different rollout scenarios (rolling updates, canary updates). You can control the speed and scope of deployment as well as the level of disruption to your service.
Also, you can use Regional Persistent Disks, which replicate data across zones (datacenters).
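As an illustration of the MIG suggestion, a regional MIG spreads its instances across the zones of a region. Here is a minimal sketch assuming the google-cloud-compute Python client library and a pre-existing instance template; the project, region, names, and size are placeholders.

```python
# Minimal sketch (assumption: the google-cloud-compute Python client library and
# a pre-existing instance template named "web-template"). Project, region,
# names, and size are placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"
REGION = "europe-west3"

mig_client = compute_v1.RegionInstanceGroupManagersClient()
mig = compute_v1.InstanceGroupManager(
    name="web-mig",
    base_instance_name="web",
    instance_template=f"projects/{PROJECT}/global/instanceTemplates/web-template",
    target_size=3,  # a regional MIG spreads these instances across the region's zones
)
mig_client.insert(project=PROJECT, region=REGION,
                  instance_group_manager_resource=mig).result()
```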
It sounds like Placement Groups may be an equivalent feature in AWS. There are a few different configurations where you can ask AWS to cluster your instances close together to maximize network I/O performance, or spread your instances across hardware to reduce correlated failures (a minimal sketch follows the list below).
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
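For example, a "spread" placement group can be created and used at launch time roughly like this, assuming boto3; the region, AMI ID, and instance type are placeholders.

```python
# Minimal sketch (assumption: boto3). Region, AMI ID, and instance type are
# placeholders; the "spread" strategy places instances on distinct hardware.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the placement group once.
ec2.create_placement_group(GroupName="my-spread-group", Strategy="spread")

# Launch instances into it.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.large",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "my-spread-group"},
)
```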
I can't speak for Google Cloud, as I am not aware of a similar feature, but I am also not nearly as familiar with their offerings.
Hope that helps.

AWS VPC vs Subnet for Application Wrapping

I'm trying to get a better understanding of AWS organization patterns.
Suppose I define the term "application stack" as a set of interconnected AWS resources (e.g. a Java microservice behind an ELB + DynamoDB for persistence); then I need some way of isolating independent stacks. Each application would get a separate DynamoDB table or Kinesis stream, so there is no need for cross-stack resource sharing, but the microservices do need to communicate with each other.
A priori, I could see either of two organizational methods being used:
Create a VPC for each independent stack (1 VPC per 1 Application)
Create a single "production" VPC and each stack resides within a separate private subnet.
There could be up to hundreds of these independent "stacks" within the organization, so there's the potential for resource exhaustion if there is a hard limit on VPC count. But other than resource scarcity, what are the decision criteria around creating a new VPC or using a pre-existing VPC for each stack? Are there strong positive or negative consequences to either approach?
Thank you in advance for consideration and response.
Subnets and IP addresses are a limited commodity within your VPC: the number of IP addresses cannot be increased within a VPC once you hit that limit. Also, by default, all subnets can talk to other subnets, so there may be security concerns. The limit on the number of VPCs is a soft limit and can be increased by AWS support.
For these reasons, separate distinct projects at the VPC level. Never mix projects within a VPC. That's just asking for trouble.
Also, if your production projects are going to include non-VPC-applicable resources, such as IAM users, DynamoDB tables, SQS queues, etc., then I also recommend isolating those projects within their own AWS account (at the production level).
This way, you're not looking at a list of DynamoDB tables that includes tables from different projects.
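To make the "one VPC per stack" recommendation concrete, here is a minimal sketch assuming boto3; the region, stack name, and CIDR range are placeholders.

```python
# Minimal sketch (assumption: boto3). Region, stack name, and CIDR range are
# placeholders; each call creates an isolated VPC plus one subnet for a stack.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def create_stack_network(stack_name: str, cidr: str) -> dict:
    """Create a dedicated VPC and a single subnet for one application stack."""
    vpc = ec2.create_vpc(CidrBlock=cidr)["Vpc"]
    ec2.create_tags(Resources=[vpc["VpcId"]],
                    Tags=[{"Key": "Name", "Value": f"{stack_name}-vpc"}])
    subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock=cidr)["Subnet"]
    return {"vpc_id": vpc["VpcId"], "subnet_id": subnet["SubnetId"]}

print(create_stack_network("orders-service", "10.42.0.0/24"))
```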

Alternative for built-in autoscaling groups for spot instances on AWS

I am currently using spot instances managed with auto-scaling groups. However, ASG has a number of shortcomings for use with spot instances. For example, it cannot launch instances of a different instance type if the current type is experiencing a price spike across all availability zones. It can't even re-distribute the number of running instances across zones (if one zone has a price spike, you're down 30% in the number of running instances).
Are there any software solutions that I could run which would replace built-in AWS Auto-Scaling Groups? I've heard of SpotInst and Batchly, but I do not trust them. Basically, I think their business plan involves being bought out and killed by Amazon, like what happened to ClusterK. The evidence for this is the bizarre pricing policies and other red flags. I need something that I can self-host and depend on.
AWS recently released Auto Scaling for Spot Fleets, which seems to fit your use case pretty well. You can define the cluster capacity in terms of the vCPUs you need, choose the instance types you'd like to use and their weights, and let AWS manage the rest.
They will provision spot instances at their current market price up to a limit you can define per instance type (as before), but integrating Auto Scaling capabilities.
You can find more information here: https://aws.amazon.com/blogs/aws/new-auto-scaling-for-ec2-spot-fleets/
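As an illustration of the Spot Fleet approach described above, here is a minimal sketch assuming boto3; the IAM fleet role, AMI, subnets, weights, and target capacity are placeholders.

```python
# Minimal sketch (assumption: boto3). The IAM fleet role, AMI, subnets, weights,
# and target capacity are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.request_spot_fleet(SpotFleetRequestConfig={
    "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
    "TargetCapacity": 16,                 # capacity in weighted units
    "AllocationStrategy": "diversified",  # spread across pools to ride out price spikes
    "LaunchSpecifications": [
        {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.xlarge",
         "WeightedCapacity": 4, "SubnetId": "subnet-aaaa1111"},
        {"ImageId": "ami-0123456789abcdef0", "InstanceType": "c5.xlarge",
         "WeightedCapacity": 4, "SubnetId": "subnet-bbbb2222"},
    ],
})
```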
It's unlikely that you're going to find something that takes into account everything you want. But because everything in Amazon is an API, you can write that yourself. There are lots of ways to do that.
For example, you could write a small script (bash, Ruby, Python, etc.) that shells out to the AWS CLI to get the price, then shells out again to launch boxes. For bonus points, use the native AWS SDK library instead of shelling out (that makes it slightly easier to handle errors, etc.). For even more bonus points, open source it and hope that other people improve on it!
This script can run on your home computer, or on a t1.micro for $5/month. Or you could write it in Node.js and run it on Lambda for pennies per month.
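As a starting point for the small script suggested above, here is a minimal sketch assuming boto3; the instance type, AMI, and price ceiling are placeholders.

```python
# Minimal sketch (assumption: boto3). Instance type, AMI, and the price ceiling
# are placeholders; it checks the current Spot price and only bids if affordable.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
INSTANCE_TYPE = "m5.large"
MAX_PRICE = 0.05  # USD/hour budget

history = ec2.describe_spot_price_history(
    InstanceTypes=[INSTANCE_TYPE],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=1,
)
current = float(history["SpotPriceHistory"][0]["SpotPrice"])
print(f"Current spot price for {INSTANCE_TYPE}: ${current:.4f}/hr")

if current <= MAX_PRICE:
    ec2.request_spot_instances(
        SpotPrice=str(MAX_PRICE),
        InstanceCount=1,
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
            "InstanceType": INSTANCE_TYPE,
        },
    )
```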
Here at Spotinst, these are exactly the problems we built Elastigroup to solve.
Elastigroup enables running simultaneously on as many instance types and availability zones (within a region) as you’d like. This is coupled with several things to maintain production availability:
Our algorithm makes live choices for the best Spot markets in terms of price and availability.
When an interruption happens, we predict it about 15 minutes in advance and take all the necessary steps to ensure (and insure) the capacity of your group.
In the extreme case that none of the markets have Spot availability, we simply fall back to an on-demand instance.
We have a great relationship with AWS and work closely with both their technical and business teams to provide our joint customers with the best experience possible. As we manage resources inside your own AWS account, I wouldn't consider the relationship between us a concern to begin with.