I set up a Google Cloud Datalab VM following the instructions on the official page. I also created the Datalab VM using the Google Cloud SDK on my local computer, and the problem was the same both ways.
Either way, I could open the notebook, run my code, and everything looked fine at first. However, if I close the browser window, the computer goes to sleep, or the connection drops, I cannot reconnect properly. My code runs for a couple of hours producing output, but when I reconnect I only see the output from the last autosave; no new output appears and the notebook does not respond.
From the Google Cloud console I can see that the Datalab VM instance is running at full CPU, so it seems like the kernel is still running but the notebook is not reconnecting properly. I'm using Windows 10 with Chrome.
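For reference, I created the VM and reconnect to it with the datalab CLI roughly like this (the instance name is a placeholder):

# create the instance once, then reconnect to it after a disconnect
datalab create my-datalab-vm
datalab connect my-datalab-vm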
I'm trying to run my application on a GCE VM. It uses Node.js for the frontend and Java for the backend. I use this server to communicate with my local computer over MQTT. This works, but after some time (about an hour and a half) the server seems to go to sleep (or do the ports close?).
Both the MQTT and SSH terminal connections are lost.
When I reconnect, the application is no longer running; it seems like the VM restarted.
Do you have any idea how to keep my server alive? I can give further details.
Answering my own question, as John Hanley explained the solution in the comments:
"By what method are you running your frontend and backend code. VMs do not go to sleep. If you are starting your software via an SSH session, that will be an issue. Your code needs to run as a service. Edit your question with more details."
I was indeed running my application from the SSH terminal, which caused the problem. The solution for me was to access the VM remotely via vncserver and launch the application from the VM's own terminal.
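Following the advice to run the code as a service, a systemd unit would be an even more robust option. Here is a minimal sketch, assuming a hypothetical start script at /opt/myapp/start.sh and a service user named appuser (both placeholders):

# /etc/systemd/system/myapp.service (hypothetical unit file)
[Unit]
Description=My Node.js + Java application
After=network.target

[Service]
User=appuser
ExecStart=/opt/myapp/start.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target

# enable and start it so it survives SSH disconnects and reboots
sudo systemctl enable --now myapp.service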
I have tried installing the Red5 Pro streaming service on a VM with a trial account on Google Cloud Platform. I have used the red5pro-installer repo from GitHub for the "automatic" installation and have also tried installing everything manually.
When I use the installer, the web page works when I browse to my VM. I am able to see my webcam's image on the "sample" stream page from the VM's "broadcast" page. I then enter a stream name and hit the "start broadcast" button; it starts to spin up and then throws an error indicating "Publisher.Connection.Closed" (see attached image).
In looking at the logs on the VM, these two errors stand out:
2021-08-05 22:19:35,142 [https-jsse-nio2-0.0.0.0-443-exec-7] WARN c.r.webrtc.stream.RTCSourceStream - ICE sockets not starting for: stream1
2021-08-05 22:19:35,141 [https-jsse-nio2-0.0.0.0-443-exec-7] INFO com.red5pro.override.ProStream - Connection was null so cleanup is skipped
I have verified that all of the necessary ports are open for the correct traffic type on the GCP firewall to the VM.
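For reference, the firewall rule I created looks roughly like this (the rule name and target tag are placeholders, and the port list is my assumption based on the Red5 Pro networking documentation):

# hypothetical rule name and tag; adjust the ports to your Red5 Pro setup
gcloud compute firewall-rules create red5pro-ports \
    --target-tags=red5pro \
    --allow=tcp:443,tcp:1935,tcp:5080,udp:40000-65535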
I have also tried browsing to the VM on port 5080, and when I attempt to start a broadcast there, my webcam image does not show and I see an error in the output window that says "no suitable publisher found".
Not sure where to turn next.
TIA
Is there any way to run BlueStacks on virtualized instances in the cloud?
I tried both AWS and GCE, and both failed.
AWS fails with the message "Failed due to enabled Hyper-V", even though it's disabled in the settings.
And GCE just cannot start BlueStacks; it simply loads and finally says "couldn't start Bluestacks".
What's interesting is that these guys managed to do it: https://aws.amazon.com/marketplace/pp/Cognosys-Inc-1-Click-Ready-Secured-BlueStacks-App-/B01N0E0YYZ
But how? I cannot use their image as it's old.
BlueStacks doesn't work for me either. I saw Cognosys, but they are running on Windows Server 2012, which is really old.
Try MEmu Play with 2 GB of RAM minimum (try the 4 GB VPS), on a Windows Server 2019 Base image with DirectX installed. It works for me.
I have a Google Cloud Deep Learning VM image for PyTorch that uses an SSH connection to reach the Jupyter Notebook running on it. How can I change my current workflow so that the Jupyter Notebook stays alive even when I close my laptop or temporarily lose my internet connection?
Currently, after starting my VM and opening a tmux window, I start up the Jupyter Notebook and its SSH connection with this command:
gcloud compute ssh <my-server-name> -- -L 8080:localhost:8080
This command is taken from the official docs for the Deep Learning VM images here: https://cloud.google.com/deep-learning-vm/docs/jupyter
I can then connect at localhost:8080 and do what I need to. However, if I start training a model for a long time and need to close my laptop, when I re-open it the SSH connection has broken, the Jupyter Notebook is shut down, and the model that was training is interrupted.
How can I keep this Jupyter Notebook alive and be able to reconnect to it later?
NB: I used to use the Google Cloud browser SSH option and, once on the server, start a tmux window and the Jupyter notebook within it. This worked great and meant the notebook was always alive. However, with the Google Cloud images that have CUDA and Jupyter preinstalled, this doesn't work, and the only way I have been able to connect is through the command above.
I have faced this problem on GCP before and found a simple way to resolve it. Once you have SSH'd into the Compute Engine instance, run the Linux screen command and you will find yourself in a virtual terminal (you may open many terminals in parallel); this is where you will want to run your long-running job.
Once you have started the job, detach from the screen using the keys Ctrl+a and then d. Once detached, you can log out of the VM; later, reconnect to the VM, run screen -r, and you will find that your job is still running.
Of course, you can do a lot of cool stuff with the screen command, and I would encourage you to read some of the tutorials found here.
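As a minimal sketch of that workflow (the session name, port, and notebook command are illustrative, not specific to the Deep Learning VM image):

# on the VM, start a named screen session and launch the long-running job inside it
screen -S training
jupyter notebook --no-browser --port=8080

# detach with Ctrl+a then d; the job keeps running on the VM
# later, after reconnecting over SSH, resume the session:
screen -r training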
NOTE: Please ensure that your Compute Engine instance is not a Pre-emptible machine!
Let me know if this helps!
I think it's better to install Jupyter as a server, so your job can keep running even after you disconnect.
There is something else you might want to know:
This is not the multi-user server you are looking for. This document describes how you can run a public server with a single user. This should only be done by someone who wants remote access to their personal machine. Even so, doing this requires a thorough understanding of the set-up's limitations and security implications. If you allow multiple users to access a notebook server as it is described in this document, their commands may collide, clobber and overwrite each other.
If you want a multi-user server, the official solution is JupyterHub. To use JupyterHub, you need a Unix server (typically Linux) running somewhere that is accessible to your users on a network. This may run over the public internet, but doing so introduces additional security concerns.
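Concretely, a rough sketch of running the classic notebook as a single-user server (the port is just an example) would be:

# generate a config file and set a login password
jupyter notebook --generate-config
jupyter notebook password

# in ~/.jupyter/jupyter_notebook_config.py, listen on all interfaces
# and don't open a local browser:
# c.NotebookApp.ip = '0.0.0.0'
# c.NotebookApp.open_browser = False
# c.NotebookApp.port = 8888

# then start the server (ideally inside tmux/screen or as a service)
jupyter notebook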
I'm new to GCP and I'm trying to keep my process running in a Jupyter Notebook after shutting down my local PC. Does anyone know how I can do it? Currently I open a terminal on my VM, run jupyter notebook, and then, after starting the process in Jupyter, I'd like to turn my machine off.
For now I keep following the process on my cellphone and shut it down from there. Does anyone know how to shut it down automatically when the process finishes?
Sorry to ask two questions at once, but I think one is related to the other. If it isn't, I can edit and ask another one.
Unfortunately, this is a technical limitation of Jupyter Notebooks. The browser window contains the code that updates the notebook itself, so if you close the browser window there is no process running to update the notebook.
However, there is one workaround which you may find useful.
There is a library called Fairing that you can use with GCP's new AI Platform Notebooks. It allows you to package up your notebook and run it remotely, and it will save the results of that execution in a GCP Storage bucket. No active internet connection is required (once you kick off the notebook run).
You can learn how to use it by creating a new GCP AI Platform Notebook and looking at the tutorials folder inside it. You can also find additional tutorials for Fairing here.
Typically, to keep your remote sessions up in the event of network connectivity loss (which also covers shutting down the local computer), you'd use a terminal multiplexer application. From Known issues:
Intermittent disconnects: At this time, we do not offer a specific SLA for connection lifetimes. Use terminal multiplexers like tmux or screen if you plan to keep the terminal window open for an extended period of time.
But these multiplexers are terminal/text-mode apps, so you'd have to launch the notebook with the --no-browser option and then connect your local browser to its port.
You can find a recipe based on tmux and a local browser connection to the notebook using an SSH tunnel at Using Jupyter notebooks securely on remote linux machines.
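A minimal sketch of that recipe (the port, session name, and instance name are placeholders):

# on the VM, inside a tmux session, start the notebook without a browser
tmux new -s jupyter
jupyter notebook --no-browser --port=8889
# detach with Ctrl+b then d; the notebook survives the SSH disconnect

# on the local machine, forward the port over an SSH tunnel and
# browse to http://localhost:8889
gcloud compute ssh my-instance -- -N -L 8889:localhost:8889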
As for shutting down the session: you'd just have to instruct the multiplexer application to end the session (or terminate the multiplexer app itself), which you could do automatically via a wrapper script that first invokes your process and, immediately after the process ends, invokes the commands to shut down the session.
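For example, a hypothetical wrapper along these lines (the notebook file and session name are placeholders) would end the tmux session as soon as the job finishes:

#!/bin/bash
# run the long job non-interactively, then tear the tmux session down
jupyter nbconvert --to notebook --execute my_notebook.ipynb
tmux kill-session -t jupyter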