Can GCP transfer resources between projects? - google-cloud-platform

I want to transfer some resources (e.g. Compute Engine, Kubernetes Engine, Dataproc, ...)
between my projects rather than recreate them.
Is it possible? I can't find any option.

According to this answer: How to move an instance between two projects in Google Cloud Platform?
You can't move resources between projects; the only option is to recreate them.

Related

Moving Google Cloud Platform resources between projects

I have seen some old answers to this question, but I was wondering if Google has added any feature to move resources between projects of the same organization. I would like to move Compute Engine and Cloud SQL resources and maintain the external IP address.
You cannot move resources between Google Cloud Projects.
Most resources in Google Cloud are tied to the project's internal infrastructure. The primary issue is network addressing, which is private to the Google Cloud project. Google does not provide automated methods to move resources between projects.
You will have to back up and recreate/restore the resources in the new project. You will also have to assign new public Internet addresses.
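As a rough sketch of what that recreation looks like for Cloud SQL (the instance names, bucket, database, and project IDs below are placeholders, not from the original question), you can export a database from the old project to Cloud Storage, import it into an instance in the new project, and reserve a fresh external address there:

# Hypothetical names: adjust instance, bucket, database, and project IDs.
# Export the database from the source project to a Cloud Storage bucket.
gcloud sql export sql old-instance gs://my-transfer-bucket/dump.sql.gz \
    --database=mydb --project=source-project

# Import it into an instance that already exists in the destination project.
gcloud sql import sql new-instance gs://my-transfer-bucket/dump.sql.gz \
    --database=mydb --project=dest-project

# Reserve a new external address in the destination project; the old address
# cannot be carried over from the source project.
gcloud compute addresses create new-frontend-ip --region=us-central1 \
    --project=dest-project

Note that the Cloud SQL service account needs read/write access to the bucket for the export and import steps.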

Native GCP solution for automatically deploying IAM resources to projects?

I'm trying to come up with a way in GCP to automatically deploy defined IAM roles, policies and policy bindings to selected GCP projects or all GCP projects.
I am aware that GCP organizations exist and that they can be used to define IAM resources in one place to have them inherited to child projects. However, organizations are not mandatory in GCP and some customers will be using the old structure where projects exist side by side without inheritance and not wanting to migrate to an organization.
One solution would be to create scripts which iterate over projects and create everything. However, a GCP-native solution would be preferable. Is there a GCP-native way of deploying defined IAM resources like this - and possibly other project-level configurations - to specific GCP projects or all projects, which works regardless of whether the customer uses organizations, and without iterating over projects?
I'm trying to come up with a way in GCP to automatically deploy
defined IAM roles, policies and policy bindings to selected GCP
projects or all GCP projects.
Deployment tools use concise descriptions of resources called configuration files. These tools manage resource state, meaning you declare what you want and they make it so. They are not dynamic: you do not say "sometimes do X and sometimes do Y." You declare the state you want, and if the actual state differs, the tool changes it to match.
Deployment tools are IaC - Infrastructure as Code. The configuration files are the blueprint for your "desired state". You write the configuration files and the tools know how to build the resources that match the desired state.
If your goal is dynamic configuration based upon inputs, conditionals, and/or external factors, IaC-based tools will not meet your goal.
For IaC-based tools, you have two well-supported options.
Google Deployment Manager. This is an official Google product. This product is vendor-specific.
Terraform Google Provider. Terraform is a HashiCorp product. The Google Provider is developed by Google.
I recommend choosing Terraform and the Google Provider. Terraform is cross-platform and widely supported. Terraform is very easy to use, and there are numerous training resources, example configurations, Internet guides, getting-started articles, and YouTube videos. I have written a few articles on Terraform with Google Cloud.
In your question, you mention writing scripts. That is possible, but I do not recommend it. For one-off configurations, using the Google Cloud CLI in a script is workable and sometimes necessary. The benefits of a deployment language, once mastered, are tremendous.
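Purely for illustration, the kind of per-project IAM work that such a script (or an IaC tool) would perform looks roughly like this; the role ID, permissions, member, and project ID are placeholder examples, not part of the question:

# Hypothetical example: create a custom role in one project...
gcloud iam roles create customAuditor --project=my-project \
    --title="Custom Auditor" \
    --permissions=logging.logEntries.list,storage.buckets.get

# ...and bind a service account to that role at the project level.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:auditor@my-project.iam.gserviceaccount.com" \
    --role="projects/my-project/roles/customAuditor"

An IaC tool declares the same role and binding in a configuration file and reconciles the project toward that state instead of running the commands imperatively.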
without iterating over projects?
Unless you implement organizations, Google Cloud projects are separate, independent resources. Deployment tools are project-specific, meaning that if you want to manage resources in more than one project, you must declare that in the deployment configuration. They do not iterate over projects; you declare the projects.

How to move an instance between two projects in Google Cloud Platform?

In Compute Engine, how can I move an instance from project A to project B?
I have two projects, and both have the same owner.
I looked at all the interfaces inside the console, but I could not find a way.
This can be done fairly easily now, with the caveat that when you create the VM in the other project, it cannot be done through the UI but rather must be done using the gcloud tool. Google even has a documentation page describing how.
First, you need to either create an image or a snapshot of the disk used in the VM. You can do this through the Console UI or the gcloud utility. Google's documentation does a good job of explaining how to do it, but the TL;DR is:
stop VM if possible, or reduce number of writes by shutting down services if not
go to Compute Engine -> Images
select create
choose the disk as source
set any other properties you need
press create
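If you prefer the CLI over the console, the equivalent image-creation step is roughly the following (the disk, zone, and project names are placeholders):

# Hypothetical names: create an image from the VM's boot disk in the source project.
gcloud compute images create my-vm-image \
    --source-disk=my-vm-disk --source-disk-zone=us-central1-a \
    --project=source-project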
Once that has been completed, use the gcloud tool with the other project to create your new VM. To find out/verify the name of the disk image:
gcloud compute images list --project [IMAGE_PROJECT]
then create the VM (add any additional options you need):
gcloud compute instances create [INSTANCE_NAME] --image [IMAGE_NAME] --image-project [IMAGE_PROJECT]
There isn't any tool in GCP that migrates a Compute Engine instance from one project to another.
However, it is still possible to recreate an instance in another project by creating a snapshot of the disk, creating a custom image from it, and creating a new VM from that image in the second project.
This article gives a nice step-by-step guide on how to do it.
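A rough sketch of those steps with gcloud (all names below are placeholders) might look like:

# Snapshot the source disk in project A.
gcloud compute disks snapshot my-vm-disk --zone=us-central1-a \
    --snapshot-names=my-vm-snap --project=project-a

# Build a custom image from that snapshot (still in project A).
gcloud compute images create my-vm-image --source-snapshot=my-vm-snap \
    --project=project-a

# Create the new VM in project B from the image stored in project A.
# The account creating the VM needs access to the image in project A
# (e.g. the Compute Image User role).
gcloud compute instances create my-new-vm --zone=us-central1-a \
    --image=my-vm-image --image-project=project-a --project=project-b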
There is a newer documentation page describing how to copy a VM between projects.

How to share resources (Compute Engine) among projects in Google Cloud Platform

I am trying to create a prototype where I can share resources among projects to run a job within Google Cloud Platform.
Motivation: Let's say there are two projects: Project A and Project B.
I want to use the Dataproc cluster created in Project A to run a job in Project B.
The projects are within the same organisation on GCP.
How do I do that?
There are a few ways to manage resources across projects. Probably the most straightforward way to do this is to:
Create a service account with appropriate permissions across your project(s).
Set up an Airflow connection with the service account you have created.
You can create workflows that use that connection and then specify the project when you create a Cloud Dataproc cluster.
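A hedged sketch of the first step with gcloud (the service account name, project IDs, and role are illustrative assumptions, not from the question):

# Create a service account in project B that Airflow will use.
gcloud iam service-accounts create airflow-runner --project=project-b

# Grant that service account permission to use Dataproc in project A.
gcloud projects add-iam-policy-binding project-a \
    --member="serviceAccount:airflow-runner@project-b.iam.gserviceaccount.com" \
    --role="roles/dataproc.editor"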
Alternate ways you could do this that come to mind:
Use something like the BashOperator or PythonOperator to execute Cloud SDK commands.
Use an HTTP operator to ping the REST endpoints of the services you want to use.
Having said that, the first approach using the operators is likely the easiest by far and would be the recommended way to do what you want.
With respect to Dataproc, when you create a job, it will only bind to clusters within a specific project. It's not possible to create jobs in one project against clusters in another. This is because things like logging, auditing, and other job-related semantics are messy when clusters live in another project.
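In practice this means the job submission always targets the cluster's own project, for example (names are placeholders):

# The job must be submitted in the same project that owns the cluster (project A here),
# even if the caller's identity lives in project B.
gcloud dataproc jobs submit pyspark gs://my-bucket/job.py \
    --cluster=my-cluster --region=us-central1 --project=project-a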

GCE Third Party for automation (snapshots/images etc...)

I am new to Google Compute Engine and I want to do automatic images/snapshot backups every X hours.
Previously, I used Amazon's cloud (EC2 instances) and did the automatic backups with a third-party tool called Skeddly (a UI where, by setting some fields, you can automate these backups).
Now, I would like to find a third-party tool that does something similar for GCE instances.
I know that it is possible with gcloud commands or PowerShell, but I would like to do it with a UI (third-party tool), if one exists.
What could you recommend me?
Thanks in advance.
You can do it directly with the SDK by running gcloud commands with cron, but if you want to use external tools, I recommend Terraform or Ansible.
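As a minimal sketch of the cron-plus-gcloud approach (the disk name, zone, and schedule below are placeholder assumptions), you can either snapshot the disk yourself on a schedule or attach a native snapshot schedule to it:

# Snapshot a disk on demand (hypothetical names); a cron job can run this every X hours.
gcloud compute disks snapshot my-disk --zone=us-central1-a \
    --snapshot-names=my-disk-backup-$(date +%Y%m%d%H%M)

# Alternatively, attach a snapshot schedule so Compute Engine handles the timing itself.
gcloud compute resource-policies create snapshot-schedule nightly-backup \
    --region=us-central1 --daily-schedule --start-time=04:00 \
    --max-retention-days=14

gcloud compute disks add-resource-policies my-disk \
    --resource-policies=nightly-backup --zone=us-central1-a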
Google Cloud now provides dedicated service for backup and disaster recovery.
Backup and DR allows you to protect virtual copies of your data in its native format, manage these copies throughout their lifecycle, and use these copies for disaster recovery, business continuity, and development and test activity.
Here is the official quickstart tutorial, which shows how to protect and recover a Compute Engine instance.