I'm looking for a brief explanation or an example of how to migrate a Kubernetes application from AWS to GCP.
Which services are involved, e.g. EKS or EC2 on the AWS side and GKE or Compute Engine on the GCP side?
I'm very new to migrations; I don't know much about AWS, and I only recently started using GCP.
Thanks in advance.
It depends.
AWS -> GCP resource mapping
First of all, you'll want to know how AWS resources map to GCP resources.
There are several articles:
Cloud Services Mapping For AWS, Azure, GCP, OCI, IBM and Alibaba provider – Technology Geek
Cloud Terminology Glossary for AWS, Azure, and GCP | Lucidchart
Cloud Services Terminology Guide: Comparing AWS vs Azure vs Google | CloudHealth by VMware
Migrate AWS EKS to GCP GKE: the hard way
If your cluster is deployed with a managed Kubernetes service:
from Elastic Kubernetes Service (EKS)
to Google Kubernetes Engine (GKE)
then it will be harder to migrate, simply due to the complexity of the Kubernetes architecture and the differences in how clusters are managed in AWS vs GCP.
Migrating VMs and clusters deployed with your own k8s manifests
If your Kubernetes cluster is deployed on cloud virtual machines with your own k8s or Helm manifests, then the migration is easier.
And there are two ways:
Either migrate the VMs using the GCP Migrate Connector (as #vicente-ayala said in his answer)
Or import your infrastructure into a Terraform manifest, change the resource definitions step by step, and then apply the updated manifest to GCP
Migrating with Migrate Connector
You can find the latest manual on migrating VMs here:
Prerequisites
As per the GCP manual,
Before you can migrate a source VM to Google Cloud, you must configure the migration environment on your on-premises data center and on Google Cloud. See:
Enabling Migrate for Compute Engine services
Installing the Migrate Connector
Migrating
How-to Guides | Migrate for Compute Engine | Google Cloud
Migrating individual VMs
Migrating VM groups
Migrating using Terraform and Terraformer
There is a great tool for reverse Terraform: GoogleCloudPlatform/terraformer (Infrastructure to Code):
A CLI tool that generates tf/json and tfstate files based on existing infrastructure (reverse Terraform).
You can import your existing infrastructure into a Terraform manifest:
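# Generate tf/tfstate files for the VPCs and subnets of the prod AWS profile in eu-west-1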
terraformer import aws --resources=vpc,subnet --connect=true --regions=eu-west-1 --profile=prod
You'll get a Terraform manifest declared with the AWS provider.
Then you can try to replace every AWS resource with the appropriate GCP resource, for example rewriting an aws_vpc as a google_compute_network or an aws_instance as a google_compute_instance. There is an official Terraform GCP provider: hashicorp/google. Unfortunately, there is no ready-made mapping between the Terraform resources of the two cloud providers, but, again, you can use some of these mapping lists:
Cloud Services Mapping For AWS, Azure, GCP, OCI, IBM and Alibaba provider – Technology Geek
Cloud Terminology Glossary for AWS, Azure, and GCP | Lucidchart
Cloud Services Terminology Guide: Comparing AWS vs Azure vs Google | CloudHealth by VMware
And then apply the new GCP manifest:
terraform init
terraform plan
terraform apply
Additional resources on AWS <-> GCP
GCP to AWS Migration: Why and How to Make the Move
GCP | Google Cloud Migrate for Compute Engine | AWS to GCP Migration using Velostrata - YouTube
Managing a Large and Complex GCP Migration (Cloud Next '19) - YouTube
Lessons Learned Migrating from GCP to AWS | Leverege
How to approach a GCP-to-AWS migration
How Rapyder helped ride-sharing app migrate from GCP to AWS | Rapyder
Cloud Migration Use Case: Moving From AWS to GCP (PDF)
Here is a detailed guide to the steps you need to perform to migrate your k8s cluster from AWS to GCP:
https://cloud.google.com/migrate/compute-engine/docs/4.8/how-to/migrate-aws-to-gcp/overview
https://cloud.google.com/migrate/compute-engine/docs/4.8/how-to/migrate-aws-to-gcp/aws-prerequisites
https://cloud.google.com/migrate/compute-engine/docs/4.8/how-to/migrate-aws-to-gcp/configure-aws-as-a-source
Related
I am trying to implement a solution on an EKS cluster where users/developers are expected to submit jobs through the Kubeflow central dashboard. To include Spark as a service for users on the platform, I tried a standalone Spark installation on the EKS cluster, where every other config has to be managed by the admin. So the managed service EMR could possibly be used here as an independent service that is triggered only when a job is submitted.
I am trying to make EMR on EC2 or EMR on EKS available as an endpoint that can be used in Kubeflow notebooks or pipelines. I have tried various things but could not come up with a robust solution.
So if anybody has any sort of experience with this, please feel free to drop in your suggestions.
Does AWS Elastic Kubernetes Service have the same concept as Google Kubernetes Engine apps via a marketplace? I'm looking to deploy RabbitMQ and previously accomplished this on GKE.
Otherwise, it looks like there's a Helm chart, or I can do this manually via the container on Docker Hub.
No, you need to deploy it on your own.
AWS does have a marketplace where you can use various AMIs or deploy a certified Bitnami RabbitMQ image. You can see that for yourself here: https://aws.amazon.com/marketplace
The downside is that this isn't available for AWS EKS, and as a result you will have to install and maintain it yourself. That could look something like using the stable/rabbitmq-ha chart with anti-affinity across AZs, quorum queues, and EBS-backed persistence.
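For example, a minimal sketch of installing that chart with Helm 3 (the release name and values below are illustrative; double-check the value names against the chart's values.yaml):
# The old "stable" repo is archived but still served from this URL
helm repo add stable https://charts.helm.sh/stable
helm repo update
# 3 replicas spread across nodes/AZs, with persistent (EBS-backed) volumes
helm install my-rabbitmq stable/rabbitmq-ha \
  --set replicaCount=3 \
  --set podAntiAffinity=hard \
  --set persistentVolume.enabled=true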
Learn more about helm here:
https://helm.sh/docs/intro/using_helm/
Learn more about the rabbitmq helm chart here: https://hub.helm.sh/charts/stable/rabbitmq-ha
I have used Google Cloud Migrate for migrating VMs from AWS to GCP, Azure to GCP, and an on-prem datacenter (VMware) to GCP. Please share your views on migrating Hyper-V based VMs to Google Cloud.
As near as I can tell, there is no dedicated tool for on-premises Hyper-V to GCP migration. The only helpful documentation I can find talks about uploading the VHD to Cloud Storage and building an image from it.
https://cloud.google.com/compute/docs/import/importing-virtual-disks
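That approach looks roughly like the following sketch (the bucket, file, and image names are placeholders, and the --os value must match your guest OS; see the import docs for the supported values):
# Upload the exported VHD to Cloud Storage
gsutil cp my-vm.vhd gs://my-bucket/
# Import it as a Compute Engine image
gcloud compute images import my-hyperv-image \
  --source-file=gs://my-bucket/my-vm.vhd \
  --os=windows-2019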
I am uploading a VHD now and will edit my answer with the result.
I have two questions to ask:
My company has two instances of Airflow running, one on a GCP-provisioned cluster and another on an AWS-provisioned cluster. Since GCP has Composer, which helps you manage Airflow, is there a way to have the Airflow DAGs on the AWS cluster managed by GCP as well?
For batch ETL/streaming jobs (in Python), GCP has Dataflow (Apache Beam). What's the AWS equivalent of that?
Thanks!
No, you can't do that. For now you have to stay on AWS and provision and manage Airflow yourself. There are some options you can choose from: EC2, ECS + Fargate, or EKS.
Dataflow is roughly equivalent to Amazon Elastic MapReduce (EMR) or AWS Batch. Moreover, if you want to run your current Apache Beam jobs, you can provision Apache Beam on EMR and everything should work the same.
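To make the "same Beam code, different service" point concrete, here is a minimal sketch of submitting one Python pipeline to each side (the file name, project, bucket, and cluster address are placeholders; the Spark runner flags are illustrative, and the portable Spark runner needs extra setup in practice):
# GCP: submit the Beam pipeline to Dataflow
python my_pipeline.py \
  --runner=DataflowRunner \
  --project=my-project \
  --region=us-central1 \
  --temp_location=gs://my-bucket/tmp
# AWS: run the same pipeline against a Spark cluster (e.g. on EMR) via Beam's Spark runner
python my_pipeline.py \
  --runner=SparkRunner \
  --spark_master_url=spark://<emr-master-node>:7077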
I am currently working on Google Cloud Platform to run Spark Jobs in the cloud. To do so, I am planning to use Google Cloud Dataproc.
Here's the workflow I am automating:
Upload a CSV file to Google Cloud Storage, which will be the input of my Spark job
On upload, trigger a Google Cloud Function which should create the cluster, submit a job, and shut down the cluster through the HTTP API available for Dataproc
I am able to create a cluster from my Google Cloud Function using the Google APIs Node.js client (http://google.github.io/google-api-nodejs-client/latest/dataproc.html). But the problem is that I cannot see this cluster in the Dataproc cluster viewer, or even by using the gcloud SDK: gcloud dataproc clusters list.
However, I am able to see my newly created cluster on Google Api explorer : https://developers.google.com/apis-explorer/#p/dataproc/v1/dataproc.projects.regions.clusters.list.
Note that I am creating my cluster in the current project.
What could I be doing wrong that prevents me from seeing that cluster when listing with the gcloud SDK?
Thank you in advance for your help.
Regards.
I bet it has to do with the "region" field. Out of the box, the Cloud SDK defaults to the "global" region [1]. Try using the Dataproc Cloud SDK commands with the --region flag (e.g., gcloud dataproc clusters list --region).
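For example (the region name here is just an example; use whichever region you created the cluster in):
# List clusters in the region where the cluster was actually created
gcloud dataproc clusters list --region=us-central1
# Or set a default Dataproc region once so you don't need the flag every time
gcloud config set dataproc/region us-central1
gcloud dataproc clusters list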
[1] https://cloud.google.com/dataproc/docs/concepts/regional-endpoints