Google Compute Engine goes to sleep after some time - google-cloud-platform

I'm trying to run my application on a GCE VM. It uses Node.js for the frontend and Java for the backend. I use this server to communicate with my local computer over MQTT. This works, but after some time (about an hour and a half) the server seems to go to sleep (or the ports close?).
Both the MQTT connection and the SSH terminal session are lost.
When I reconnect, the application is no longer running; it looks as if the VM restarted.
Does anyone have an idea how to keep my server alive? I can give further details.

Answering my own question, as John Hanley explained the solution in the comments:
"By what method are you running your frontend and backend code. VMs do not go to sleep. If you are starting your software via an SSH session, that will be an issue. Your code needs to run as a service. Edit your question with more details."
I was indeed running my application from the SSH terminal, which caused the problem. The solution for me was to access the VM remotely via vncserver and launch the application from the VM's own terminal.
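For anyone who prefers to follow the run-as-a-service advice from that comment directly, here is a minimal sketch of a systemd unit, assuming a Linux VM; the unit name, jar path, and user below are placeholders:

# /etc/systemd/system/myapp.service
[Unit]
Description=My application backend
After=network.target

[Service]
ExecStart=/usr/bin/java -jar /opt/myapp/backend.jar
Restart=always
User=myuser

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl daemon-reload followed by sudo systemctl enable --now myapp; the process then survives SSH disconnects and starts again on reboot.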

Related

Automate an RDP connection right after Windows instance turns on in GCP

I am performing some UI Automation on GCP using a Windows Server.
The process is as follows:
=> Machine Switches on at a defined time
=> RDP Connection to Machine
=> UI Interaction Script Runs on Startup
=> Process Ends
=> Machine Switches off at a defined time
All the components have been fulfilled except for automating the RDP connection in some way or another. I referred to this link but didn't find much insight or documentation.
Does anyone know a way to automate an RDP connection right after the instance turns on in GCP?
There is a Windows application called IAP Desktop that you can use to manage multiple Remote Desktop connections to Windows VMs. While connecting to the VM you can save the credentials, which lets you access the Windows VM over RDP right after it boots.
Also, to automate the Windows password generation, see the related documentation; it describes both the automated and the manual option.
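As a rough sketch of the automated route (the instance name, user, and zone below are placeholders), a fresh password can be generated from the CLI and then fed to the RDP client by a script:
gcloud compute reset-windows-password my-windows-vm --user=automation-user --zone=us-central1-a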
How are you deploying your startup script?
Depending on how it is declared, a startup script can run before, during, or after the boot process. By declaring Windows-specific metadata keys, you can run startup scripts right after the instance turns on (sketched below).
If that doesn't work, there is a paid Cloud Automation service that sounds like it will meet your requirements.
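A minimal sketch of that metadata-key approach, assuming placeholder instance and file names; the windows-startup-script-ps1 key makes the PowerShell script run at each boot:
gcloud compute instances add-metadata my-windows-vm --metadata-from-file windows-startup-script-ps1=startup.ps1
The startup.ps1 file would then contain whatever kicks off the UI-interaction script (or the existing .bat file).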
I tried using startup scripts but had no luck, and IAP Desktop didn't work because of the scheduling either. I finally managed to solve it using the Windows 10 auto-login settings. This skips the login screen, and the best part is that, out of all the users, it lets you log in as the user of your choice. After logging in to the system, I added a startup .bat file via shell:startup and it worked great.

Can't keep SSH connection to VM using gcloud-sdk

I have a Google Cloud Deep Learning VM image for PyTorch that uses an SSH connection to reach the Jupyter Notebook running on it. How can I change what I am currently doing so that the Jupyter Notebook stays alive even when I close my laptop or temporarily disconnect from the internet?
Currently, after turning on my VM and opening a tmux window, I start the Jupyter Notebook and its SSH connection with this command:
gcloud compute ssh <my-server-name> -- -L 8080:localhost:8080
This code is taken from the official docs for the deep learning images here: https://cloud.google.com/deep-learning-vm/docs/jupyter
I can then connect at localhost:8080 and do what I need to. However, if I start training a model for a long time and need to close my laptop, when I re-open it my ssh connection breaks, the Jupyter Notebook is turned off, and my model that is training is interrupted.
How can I keep this Jupyter Notebook alive and be able to reconnect to it later?
NB: I used to use the Google Cloud browser SSH option and, once on the server, start a tmux window and run the Jupyter notebook inside it. That worked great and meant the notebook was always alive. However, with the Google Cloud images that have CUDA and Jupyter preinstalled this doesn't work, and the only way I have been able to connect is through the command above.
I have faced this problem on GCP before and found a simple way to resolve it. Once you have SSH'd into the Compute Engine instance, run the Linux screen command and you will find yourself in a virtual terminal (you may open many terminals in parallel); this is where you want to run your long-running job.
Once you have started the job, detach from the screen using the keys Ctrl+a and then d. Once detached, you can exit the VM, reconnect later, and run screen -r, and you will find that your job is still running.
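A minimal sketch of that workflow (the session name and the Jupyter flags are just examples):
screen -S jupyter                            # start a named virtual terminal
jupyter notebook --no-browser --port=8080    # the long-running job
# press Ctrl+a then d to detach, log out, reconnect over SSH later
screen -r jupyter                            # reattach and find the job still running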
Of course, you can do a lot of cool stuff with the screen command, and I would encourage you to read some of the tutorials found here.
NOTE: Please ensure that your Compute Engine instance is not a preemptible machine!
Let me know if this helps!
I think it's better to install Jupyter as a server, so your job can keep running even when you disconnect.
There is something else you might want to know:
This is not the multi-user server you are looking for. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the set-up's limitations and security implications. If you allow multiple users to access a notebook server as it is described in this document, their commands may collide, clobber and overwrite each other.
If you want a multi-user server, the official solution is JupyterHub. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional security concerns.
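As a rough sketch of the single-user server setup those docs describe (the port and paths are examples; adjust them and your firewall rules accordingly):
jupyter notebook --generate-config     # writes ~/.jupyter/jupyter_notebook_config.py
jupyter notebook password              # stores a hashed login password
# in jupyter_notebook_config.py:
#   c.NotebookApp.ip = '0.0.0.0'
#   c.NotebookApp.open_browser = False
#   c.NotebookApp.port = 8888
jupyter notebook                       # now runs as a server you can reach remotely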

Port mapping in Windows Server 2016 - Docker

I have been trying to set up Docker on Windows Server 2016 in an AWS instance to run an IIS program.
From this question,
Cannot access an IIS container from browser - Docker, IIS has been set up inside a container and it is accessible from the host without port mapping.
However, if I want to allow other users from the Internet/intranet to access the website, then after Googling it I guess we do need port mapping...
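For reference, publishing the container's port on the default NAT network usually looks roughly like this (the image name and host port are placeholders):
docker run -d --name iis-app -p 8080:80 my-iis-image
# other machines then reach the site at http://<host-ip>:8080, provided the
# Windows firewall and the AWS security group allow inbound traffic on port 8080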
The error I encountered with port mapping is given in the question above, so... I guess using nat is not the correct option. Therefore, my team and I tried to create another network (custom/bridge) following the instructions from
https://docs.docker.com/v17.09/engine/userguide/networking/#user-defined-networks
However, we cannot create such a network. The closest answer we found by Googling is:
https://github.com/docker/for-win/issues/1960
My team guessed it may be because AWS blocks that option; if anyone can confirm this, please do.
Another thing that I noticed when creating an ECS instance in AWS: is only the default NAT network mode accepted on Windows Server?
Our objective: expose the container-hosted IIS application to the Internet/intranet on Windows Server 2016...
If anyone has any suggestion or advice, please share it; many thanks.

Google Cloud Platform Jupyter notebook still running after turning off local PC

I'm new to GCP and I'm trying to keep my process running in a Jupyter Notebook after shutting down my local PC. Does anyone know how I can do that? Nowadays I open a terminal on my VM, run jupyter notebook, and then, after starting the process in Jupyter, I'd like to turn my machine off.
Currently I keep following the process on my cellphone and shut it down from there. Does anyone know how to shut this down automatically when the process stops?
Sorry for asking two questions at once, but I think one is related to the other. If it isn't, I can edit and ask it separately.
This is a technical limitation of Jupyter Notebooks, unfortunately. The browser window contains the code which updates the notebook itself, so if you close the browser window there is no process left running to update the notebook.
However, there is one workaround which you may find useful.
There is a library called Fairing that you can use with GCP's new AI Platform Notebooks. It allows you to pack up your notebook and run it remotely, and it saves the results of that execution in a GCP Storage bucket. No active internet connection is required (once you kick off the notebook run).
You can learn how to use it by creating a new GCP AI Platform Notebook and looking at the tutorials folder inside it. You can also find additional tutorials for Fairing here.
Typically, to keep your remote sessions up in the event of network connectivity loss (which also covers shutting down the local computer), you'd use a terminal multiplexer application. From Known issues:
Intermittent disconnects: At this time, we do not offer a specific SLA for connection lifetimes. Use terminal multiplexers like tmux or screen if you plan to keep the terminal window open for an extended period of time.
But these multiplexers are terminal/text-mode apps, so you'd have to launch the notebook with the --no-browser flag and then connect your local browser to its port.
You can find a recipe based on tmux and a local browser connection to the notebook using an SSH tunnel at Using Jupyter notebooks securely on remote linux machines.
As for shutting down the session: you'd just have to instruct the multiplexer application to end the session (or terminate the multiplexer app itself), which you could do automatically via a wrapper script that first invokes your process and, immediately after the process ends, invokes the commands to shut down the session.
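A minimal sketch of such a wrapper, assuming a tmux session named jupyter; the script name and port below are placeholders, and the notebook call blocks until the notebook process exits:
#!/bin/bash
# launched as: tmux new -s jupyter ./run_notebook.sh
jupyter notebook --no-browser --port=8080   # blocks until the notebook is stopped
tmux kill-session -t jupyter                # then end the tmux session
# to also power off the VM when the job is done, you could append: sudo shutdown -h now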

How to ssh port forward into a server to access a mysql host server for local work on Django web app and Jupyter notebook?

I'm unfamiliar with this terrain, so if anyone can guide me in a step-by-step manner it would really help. My MySQL database sits on an AWS host X, "ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com". It is blocked from access by individual local machines and is usually accessed from another working server Y, "ec2-yy-yyy-yyy-yy.compute-1.amazonaws.com", through port 3306. It is especially inconvenient to access this via an SSH terminal every time, and while scripts are running it's hard to prototype or build an elaborate app. I'd like to set up an SSH tunnel from my local machine to server Y so that I can access MySQL host X locally, to run queries from my locally deployed Jupyter notebook as well as a local work-in-progress Django web app.
The reason I ask for something more step-by-step is that I also have to port forward to another server hosting a Redis database, which again is accessible only through a specific server, so I'll be able to carry the solution from here to there too. I'm willing to go into chat as well if needed, but I need to resolve this rather quickly. Thanks!
PS: I've tried many guides off the internet, but nothing has worked; it's become clear to me that I'm missing some foundational understanding or pathway. That's why I'm here, trying to start from the ground up.
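For reference, the usual local port-forwarding pattern through the intermediate host Y looks roughly like this (the key path, user name, and local port 3307 are assumptions):
ssh -i ~/.ssh/my-key.pem -N -L 3307:ec2-xxx-xxx-xxx-xx.compute-1.amazonaws.com:3306 ubuntu@ec2-yy-yyy-yyy-yy.compute-1.amazonaws.com
# while this runs, MySQL is reachable locally at 127.0.0.1:3307, so the Django
# DATABASES host/port and the notebook's connection string can point there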