Terraform Google Cloud: Executing a Remote Script on a VM - google-cloud-platform

I'm trying to execute a Script on a Google VM through Terraform.
First I tried it via Google startup scripts. But since startup scripts are stored as instance metadata, they are visible in the Google Console, which means anybody with read access can see the script; that is not acceptable.
So I tried to fetch the script from a Cloud Storage bucket. But for that I need to attach a service account to the VM so the VM has the rights to access the bucket. Now anyone with access to the VM also has access to my script for as long as the service account is attached. In order to "detach" the service account I would have to stop the VM, and if I don't want to keep the service account attached permanently, I would have to detach it via another script, which requires yet another stop and start of the VM. This is probably not possible and is also really ugly.
I don't understand how the remote-exec provisioner works on GCP VMs, because I have to specify a user and a password to connect to the VM and then execute the script. But the Windows password needs to be set manually via the Google Console, so I can't specify those things at this point in time.
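(As far as I know the password can also be generated outside the console with something like the following gcloud command, where the instance name, zone and user are placeholders:
gcloud compute reset-windows-password my-windows-vm --zone=europe-west1-b --user=deploy-user
but that only works once the VM already exists, so I still can't feed it into the provisioner at apply time.)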
So does anybody know how I can execute a script via Terraform without everybody being able to read it?
Greetings :) and Thanks in advance

I ended up just running a gcloud script that removed the metadata from the VM after the Terraform apply was finished. In my GitLab pipeline I simply called the script in the after_script section. Unfortunately the credentials are still visible for approximately 3 minutes.
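For anyone interested, a minimal sketch of that cleanup step, assuming a hypothetical instance name and zone and the default startup-script metadata key:
# called from the after_script section of .gitlab-ci.yml
gcloud compute instances remove-metadata my-vm \
    --zone=europe-west1-b \
    --keys=startup-script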

Related

Cannot connect via SSH to GCP Instance

Friends, good evening.
I have a server on Google Compute Engine that I cannot access via SSH, and the old administrator did not leave any credentials for it.
Is there any way to access this server through the SDK, the GCP Console, etc.?
Thank you very much in advance.
If you or your team have an IAM account on the project with sufficient roles/permissions (Owner, Compute Admin), you can try the following:
Check this troubleshooting documentation in order to identify and solve your issue.
Try to access the VM through the serial port.
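For example, the serial console can be enabled and reached with gcloud roughly like this (instance name and zone are placeholders):
# allow interactive access to the serial console
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata=serial-port-enable=TRUE
# connect to the serial console
gcloud compute connect-to-serial-port my-instance --zone=us-central1-a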
I had mistakenly locked myself out via the files /etc/hosts.allow and /etc/hosts.deny. It took me a day to get access to the server back, and I hope the following helps someone who is locked out of a GCP VM. The approach simply creates a script that runs while your VM is booting, so all the commands needed to fix your issue run without direct access to the server. Below is how you can, for example, reset the root password.
I am assuming that you have access to the GCP Console via a browser; do the following:
Shut down the server.
Click Edit and scroll down to Custom metadata. Add a new item with the key startup-script and the value below. Replace yournewpassword with the password you want to set for the root user:
#!/bin/sh
echo "root:yournewpassword" | chpasswd
Reboot your server and use the new password set above to SSH into your VM.
Remove the metadata entry and save your VM. You can reboot again.
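The same change can also be made without the console; a rough gcloud equivalent, assuming a hypothetical instance name and the script above saved as reset-root.sh, would be:
# stop the instance, attach the startup script, boot, then clean up afterwards
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=reset-root.sh
gcloud compute instances start my-instance --zone=us-central1-a
# after you have regained access, remove the startup script again
gcloud compute instances remove-metadata my-instance \
    --zone=us-central1-a --keys=startup-script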

Is it possible to run a command from one EC2 instance that executes on another EC2 instance?

Right now I am testing to see if I am able to run
touch test.txt
on another EC2 instance.
I have looked into both SSH and SSM, but I do not understand where to begin. Any ideas on how to send commands remotely?
If you want to send a command remotely you can make use of the Run Command functionality of SSM.
To do this you will need to ensure that the remote instance is both running the SSM Agent and has a valid IAM role set up. The getting started section of the SSM documentation should help with that.
Finally you can call the remote instance using the send-command function. Either create your own document or use the existing 'AWS-RunShellScript' document.
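A minimal sketch of such a call with the AWS CLI, assuming a hypothetical instance ID and using the touch example from the question:
# run a single shell command on the remote instance via SSM Run Command
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-0123456789abcdef0" \
    --parameters 'commands=["touch /tmp/test.txt"]'
# inspect the result (the command ID comes from the output of send-command)
aws ssm list-command-invocations --command-id "<command-id>" --details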

Where to keep the Dataflow and Cloud Composer Python code?

It is probably a silly question. In my project we'll be using Dataflow and Cloud Composer. For that I had asked for permission to create a VM instance in the GCP project to keep both the Dataflow and Cloud Composer Python programs. But the client asked me the reason for creating a VM instance and told me that you can execute Dataflow without a VM instance.
Is that possible? If yes, how can it be achieved? Can anyone please explain? It would be really helpful to me.
You can run Dataflow pipelines or manage Composer environments from your own computer once your credentials are authenticated and you have both the Google Cloud SDK and the Apache Beam Python SDK (for Dataflow) installed. However, this depends on how you want to manage your resources. I prefer to use a VM instance so that all the resources I use are in the cloud, where it is easier to set up VPC networks including different services. Also, saving data from a VM instance into GCS buckets is usually faster than from an on-premise computer/server.
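As a rough illustration, a Dataflow pipeline can be submitted from a local machine along these lines (the project, bucket and pipeline file names are placeholders):
# authenticate and install the Beam SDK with the GCP extras
gcloud auth application-default login
pip install "apache-beam[gcp]"
# submit the job to the Dataflow service; the workers run in GCP, not on your machine
python my_pipeline.py \
    --runner=DataflowRunner \
    --project=my-gcp-project \
    --region=us-central1 \
    --temp_location=gs://my-bucket/temp/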

How to give permission for an IAM service account to run a docker container within a GCP VM?

I am trying to run a docker image on startup of a Google Cloud VM. I have selected a fresh service account that I created as the Service Account under VM Instance Details through the console. For some reason the docker run command within the startup script is not working. I suspect this is because the service account is not authorized to run the "docker" command within the VM - which was installed via a yum install. Can anyone tell me how this can be done i.e. to give this service account the permission to run docker command?
Edit.
Inside the startup script I am running the docker login command to log in to Google Container Registry, followed by docker run to run an image.
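For context, the startup script looks roughly like this (the image name is a placeholder):
#!/bin/sh
# log in to Google Container Registry with the VM's service account token
gcloud auth print-access-token | \
    docker login -u oauth2accesstoken --password-stdin https://gcr.io
# run the image
docker run -d gcr.io/my-project/my-image:latest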
I have found a solution and want to share it here so it helps someone else looking to do the same thing. The user running the docker command (without sudo) needs to be in the docker group. So I added the service account as a user, put it in the docker group, and that's it: docker login to GCR worked and so did docker run. The problem is solved, but it raises a couple of additional questions.
First, is this the correct way to do it? If it is not, then what is? If this is indeed the correct way, then perhaps a service account selected while creating a VM should be added as a user when the VM is created. I understand this leads to some complications, such as what happens when the service account is changed: does the old service account user get deleted, or should it be retained? But I think at least an option could be given to add the service account user to the VM, something like a checkbox in the console, so the end user can decide. I hope someone from GCP reads this.
As stated in this article, the steps you have taken are the correct way to do it. Adding users to the "docker" group will allow them to run docker commands as non-root. If you create a new service account and would like to have that service account run docker commands within a VM instance, then you will have to add that service account to the docker group as well.
If you change the service account on a VM instance, then the old service account should still be able to run docker commands as long as it has not been removed from the docker group and has not been deleted from Cloud IAM; however, you will still need to add the new service account to the docker group to allow it to run docker commands as non-root.
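A minimal sketch of that step on the VM, where the local username of the service account is a placeholder:
# add the service account's local user to the docker group so it can run docker without sudo
sudo usermod -aG docker sa_1234567890
# the group change takes effect on that user's next login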
Update: automating the creation of a service account at VM instance creation time would be tedious. Within your startup script, you would first have to create the service account using gcloud commands and then add the appropriate IAM roles. Once that is done, you would still have to add the service account to the docker group.
It would be much easier to create the service account from the Console when the VM instance is being created. Once the VM instance is created, you can add the service account to the docker group.
If you would like to request for a new feature within GCE, you can submit a Public Issue Tracker by visiting this site.

"Failed to mount the azure file share. Your clouddrive won't be avaible"

Whenever I start the Azure Cloud Shell, I get this error:
Failed to mount the azure file share. Your clouddrive won't be available
Your Cloud Shell session will be ephemeral so no files or system changes will persist beyond your current session.
Can someone help me or explain why this is happening?
By chance did you delete the storage resource that was created for you when first launching Cloud Shell?
Please follow these steps:
1. Run "clouddrive unmount".
2. Restart Cloud Shell via the restart icon, or exit and relaunch it.
3. You should be prompted with the storage creation dialog again.
Here is a similar case to yours; please refer to it.
Alternatively, you can delete the Cloud Shell storage (in its resource group) and then re-create it.
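If the prompt does not come back, the existing file share can also be remounted by hand from within Cloud Shell, roughly like this (subscription, resource group, storage account and file share names are placeholders):
# detach the broken clouddrive, then mount the file share explicitly
clouddrive unmount
clouddrive mount -s my-subscription-id -g cloud-shell-storage-westeurope \
    -n mystorageaccount -f myfileshare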
In my situation I was prompted to create the clouddrive after launching Cloud Shell for the first time. The resource group and storage account were created, but the file share was not. I manually created it and restarted Cloud Shell with no issues.
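For reference, a missing file share can be created with the Azure CLI along these lines (the storage account name, share name and key are placeholders):
# create the file share that Cloud Shell expects to mount
az storage share create \
    --account-name mystorageaccount \
    --name cs-myshare \
    --account-key "<storage-account-key>"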
I had a similar case, and the solution was rather trivial.
Cloud Shell couldn't access my file share due to misconfigured security settings: I had tweaked the new Azure Storage firewall settings without realizing the impact on Cloud Shell.
In my case, closing and reopening the Azure CLI worked fine.