While using the promotional credits (the $300 new-account promotion), I used to connect to my VM over SSH with gcloud compute ssh instance_name, and everything was fine.
When the promotional credits ended, I linked the project to my billing account. Now, running the same SSH command returns this error:
ERROR: (gcloud.compute.start-iap-tunnel) Error while connecting [4033: 'not authorized'].
kex_exchange_identification: Connection closed by remote host
Connection closed by UNKNOWN port 65535
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
I don't get it... the VPC is fine, and the firewall has all ports open.
Do I really have to set up an IAP tunnel? Why is the normal SSH connection no longer working?
Thanks if someone can confirm (before I change what was working well).
According to your comment, the Free Trial has ended, so I assume your issue is related to the deletion process that starts when the Free Trial ends. Please have a look at the documentation, End of the Free Trial, for more details:
The Free Trial ends when you use all of your credit, or after 90 days, whichever happens first. At that time, the following conditions apply:
- To continue using Google Cloud, you must upgrade to a paid Cloud Billing account.
- All resources you created during the trial are stopped.
- Any data you stored in Compute Engine is marked for deletion and might be lost. Learn more about data deletion on Google Cloud.
- Your Cloud Billing account enters a 30-day grace period, during which you can recover resources and data you stored in any Google Cloud services during the trial period.
- You might receive a message stating that your Cloud Billing account has been canceled, which indicates that your account has been suspended to prevent charges.
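If the project is still within the grace period, a quick way to confirm the billing link and bring stopped instances back is the gcloud CLI. A minimal sketch, assuming a recent gcloud release (older releases need gcloud beta billing); PROJECT_ID, the billing account ID, and the instance/zone are placeholders:

# Check whether the project is linked to an active billing account
gcloud billing projects describe PROJECT_ID

# Link the project to a paid billing account if it is not
gcloud billing projects link PROJECT_ID --billing-account=0X0X0X-0X0X0X-0X0X0X

# Trial resources are stopped, not necessarily deleted, so list them first
gcloud compute instances list --project=PROJECT_ID

# Start a stopped instance again
gcloud compute instances start INSTANCE_NAME --zone=ZONE --project=PROJECT_ID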
Related
I had a VM in my account, and out of nowhere, the VM just disappeared. Is there any way to review what was done and why?
It seems that if you are using the free trial, you need to explicitly enable billing during the trial; otherwise, your instances will be shut down when the trial runs out. It is not possible to retrieve an instance once it has been deleted. If it has only been stopped, it can be brought back by simply starting it again.
However, during the creation of the instance you can configure deletion rules to keep the boot disk when the instance is deleted. This is set in the “Management, security, disks, networking, sole tenancy” submenu, in the Disks section.
Refer to this SO for more information.
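For reference, the same deletion rule can be set from the gcloud CLI; a sketch with placeholder instance, disk, and zone names:

# Create an instance whose boot disk survives instance deletion
gcloud compute instances create my-instance --zone=us-central1-a --no-boot-disk-auto-delete

# Or flip the rule on an existing instance's attached disk
gcloud compute instances set-disk-auto-delete my-instance --zone=us-central1-a --disk=my-instance --no-auto-delete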
You can review what has been done by using Audit Logs on GCP. Audit logs help you answer "who did what, where, and when?" within your Google Cloud resources with the same level of transparency as in on-premises environments. This can help you determine what happened to your VM.
To view Audit Logs for Compute Engine, please refer to this doc. To read more about the Compute Engine Audit Logs, you can review this doc.
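As an illustration, the Compute Engine Admin Activity audit logs can also be queried from the CLI; a sketch, assuming the Cloud Logging API is enabled on the project:

# Show recent Admin Activity audit log entries for Compute Engine instances
gcloud logging read 'logName:"cloudaudit.googleapis.com%2Factivity" AND resource.type="gce_instance"' --limit=20

# Narrow it down to delete operations
gcloud logging read 'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"instances.delete"' --limit=20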
Bit panicky here because I can't troubleshoot the error on a production site and it appears to be completely down.
GCP - Compute Engine VM - n1-standard in the us-west3-c zone, running a Bitnami WordPress Multisite deployment.
About two hours ago my VM stopped responding (as far as I could tell with monitoring tools) and I was unable to SSH into it or connect in any way. I've experienced this occasionally in the past, so my process was to grab a snapshot and restart the VM. I did manage to get the snapshot; however, the VM got stopped on its own, and I'm now stuck because I can't restart it.
The error I'm getting is:
Failed to start name-of-vm: A n1-standard-1 VM instance is currently unavailable in the us-west3-c zone. Alternatively, you can try your request again with a different VM hardware configuration or at a later time. For more information, see the troubleshooting documentation.
I tried changing my configuration (it used to be a custom VM) but that didn't do anything.
Searching for similar errors, I've found threads about certain zones running out of resources, but as far as I can tell this error doesn't specifically say 'ran out of resources', and the status of the us-west3-c zone is fine. I can't imagine it would run out of capacity in a way where it can't even start a measly N1 VM.
Unfortunately, due to some mismanagement, this project isn't under the umbrella of our Google Workspace/Organization, so I can't request technical support for it.
Any assistance or help pointing to some resources would be greatly appreciated.
"Currently unavailable" in a specific zone can also mean that the zone has run out of resources for that particular machine type.
You can try restoring the snapshot you created to a new instance with a different machine type, such as an e2-standard or n2-standard configuration.
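A rough outline of that restore with the gcloud CLI; the disk, snapshot, and instance names are placeholders:

# Create a new boot disk from the snapshot you took
gcloud compute disks create restored-disk --source-snapshot=my-snapshot --zone=us-west3-c

# Boot a new instance from it on a machine type the zone has capacity for
gcloud compute instances create restored-vm --machine-type=e2-standard-2 --zone=us-west3-c --disk=name=restored-disk,boot=yes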
I lost my VM instance and a huge amount of data with it. I even paid the pending amount to Google, but when I went to the VM instances page, it shows only the Create Instance menu. How can I recover my old VM instance? Please help.
As John Hanley mentioned in the comments: "When a Google account is closed, GCP may impose an internal recovery period of up to 30 days, depending on past account activity. Once that grace period expires, it is marked for deletion."
On a VM running OpenVPN we are having connection problems. When pinging the IPs that do manage to connect, the latency varies from, for example, 100 ms to 6000 ms. When there are no problems, the ping is normal.
This problem occurred on 04/13/2021 at approximately 15:40h (Spain time) and lasted about 15-20 minutes. This same problem also occurred on 1/4/2021 in the morning and lasted several hours.
Has anyone else had this same problem or a similar problem? Is it normal that Google does not give information about these incidents?
You can check the status of Google Cloud services with the Google Cloud Status Dashboard.
To check your current latency to GCP regions, use this tool - link.
From what I can see, there were no disruptions on the 13th.
I would recommend setting up monitoring or using tools like traceroute to locate the issue.
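For example, mtr combines ping and traceroute and makes it easier to see at which hop the latency spikes appear. A sketch; 10.8.0.1 is a placeholder for the peer IP you are pinging over the VPN:

# Run 100 probes and print a per-hop latency/loss report
mtr --report --report-cycles 100 10.8.0.1

# Plain traceroute as a fallback if mtr is not installed
traceroute 10.8.0.1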
I am currently running a process on an EC2 server that needs to run consistently in the background. I tried to log in to the server and I keep getting a Network Error: Connection timed out prompt. When I check the instance, I get the following message:
Instance reachability check failed at February 22, 2020 at 11:15:00 PM UTC-5 (1 days, 13 hours and 34 minutes ago)
To troubleshoot, I have tried rebooting the server but that did not correct the problem. How do I correct this and also prevent it from happening again?
An instance status check failure indicates a problem with the instance, such as:
Failure to boot the operating system
Failure to mount volumes correctly
File system issues
Incompatible drivers
Kernel panic
Severe memory pressures
You can check the following for troubleshooting:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesStopping.html
For future reporting and auto-recovery, you can create a CloudWatch Alarm.
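A minimal sketch of such an alarm with the AWS CLI; the instance ID is a placeholder, and the arn:aws:automate:REGION:ec2:recover action (here us-east-1) recovers the instance when the system status check fails:

# Create an alarm that auto-recovers the instance on a failed system status check
aws cloudwatch put-metric-alarm \
  --alarm-name ec2-auto-recover \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover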
For the second part: there is nothing you can do to stop it from occurring, but for uptime and availability, yes, you can create another EC2 instance and add an ALB on top of both instances. The ALB checks the health of each instance, so your users/customers/service can stay available during recovery time (served by the second instance). You can add as many instances as you want for high availability (obviously it involves cost).
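For instance, the health check that decides whether an instance keeps receiving traffic lives on the ALB's target group. A sketch with the AWS CLI; the VPC ID, instance IDs, and target group ARN are placeholders:

# Target group with an HTTP health check on /
aws elbv2 create-target-group \
  --name web-tg \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --health-check-path / \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3

# Register both instances behind it
aws elbv2 register-targets \
  --target-group-arn TARGET_GROUP_ARN \
  --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb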
I've gone through the same problem, and once I looked at the EC2 dashboard I could see that something wasn't right with it. For me, rebooting and waiting 2-3 minutes solved it, and then I was able to SSH into the instance just fine. If it becomes a recurrent problem, I'll follow through with Jeremy Thompson's advice:
... put the EC2s in an Auto Scaling Group. The ALB does a health check, and if it fails, it will no longer route traffic to that EC2 instance; the ASG will then run a status check and take the unresponsive server out of rotation.
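A sketch of that setup with the AWS CLI; the launch template, subnets, and target group ARN are placeholders. --health-check-type ELB makes the ASG replace instances that fail the ALB health check, not just the EC2 status checks:

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 2 \
  --max-size 4 \
  --vpc-zone-identifier "subnet-0aaaaaaa,subnet-0bbbbbbb" \
  --target-group-arns TARGET_GROUP_ARN \
  --health-check-type ELB \
  --health-check-grace-period 300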