Jupyter internal API is not active - Vertex AI JupyterLab error 524 - google-cloud-platform

I cannot access JupyterLab through the web interface (error 524). It still works over SSH. I've followed the support documentation, but nothing works.
My best guess is that the main issue is with the ports opened by Docker.
The key problem is probably the following:
curl http://127.0.0.1:8080/api/kernelspecs
curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
And the following command simply restarts the service without error (but the web interface is still inaccessible):
sudo service jupyter restart
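For anyone hitting the same wall, a few extra checks may narrow it down (a sketch, assuming the systemd-based Deep Learning VM image; the service name may differ):
# check whether the Jupyter service is actually running
sudo service jupyter status
# inspect recent service logs for startup errors
sudo journalctl -u jupyter --no-pager -n 50
# confirm whether anything is listening on the expected port
sudo ss -tlnp | grep 8080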
Thanks!
EDIT: to clarify, none of the steps from this article, which is specifically supposed to fix error 524, work at all.
The diagnostic tool gives this result, and --repair doesn't work:
And "Verify that the Jupyter internal API is active" is completely useless, as it doesn't explain how to fix the error!
So I know there is a problem with the Jupyter internal API, but I have no idea how to fix it.
EDIT 2:
On the web console, here is a screenshot:

I have gone through the same error. After upgrading the VM, the problem got solved and all the Jupyter APIs are healthy, so try upgrading the VM. Before that, take a snapshot of the disk (upgrading might erase your VM).
How to upgrade the VM
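For reference, the snapshot can be taken from the command line before upgrading (a sketch; DISK_NAME and ZONE are placeholders):
# snapshot the boot disk first, since upgrading might erase the VM
gcloud compute disks snapshot DISK_NAME --zone=ZONE --snapshot-names=pre-upgrade-backup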

As I mentioned in the comment, a workaround is to create a new instance while keeping the old data. For this you can follow the steps below; a combined command sketch follows the list.
Step 1: Create a new storage bucket and a new notebook.
Step 2: Copy the data to the newly created bucket by running the following command in the old notebook terminal.
"gsutil cp -R /home/jupyter/* gs://NEW_STORAGE_BUCKET_PATH"
Step 3: From the new managed notebook's terminal, run the command below to copy the data to this new notebook.
gsutil cp -R gs://NEW_STORAGE_BUCKET_PATH/* /home/jupyter/

Related

Neo4J online backup error on AWS - Failed to run a backup using the available strategies

I'm testing Neo4j Enterprise 3.3.3 on AWS and trying to run an online backup of a db which is located on a different server.
I run on my AWS instance:
neo4j-admin backup --backup-dir=~/backup --name=graph.db-backup --from=0.0.0.0:4444
where I replace 0.0.0.0 with the public IP of the external Neo4j db and 4444 with my port.
But then I get this error:
Failed to load private key: /var/lib/neo4j/certificates/neo4j.key
UPDATE
I fixed that by running the command with sudo (on Amazon AWS).
However, now I'm getting another error:
Failed to run a backup using the available strategies.
The documentation on backups says that you only need to uncomment some settings in neo4j.conf, which is what I've done, both on the server being backed up and on the one actually running the backup.
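For reference, the backup-related lines in neo4j.conf for the 3.x series look something like this (quoted from memory of the 3.x docs; verify against your version):
# enable the online backup service
dbms.backup.enabled=true
# listen on all interfaces so a remote neo4j-admin can connect (6362 is the default port)
dbms.backup.address=0.0.0.0:6362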
Could it be that the issue is because on AWS you have to run commands with
systemctl
And if so, how do I run neo4j-admin with it?
It doesn't work if I use
systemctl neo4j-admin ...
Somebody from Neo4j, can you please help? Backup is one of the main reasons to get the Enterprise version, but there is not enough documentation on how to use it.
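On the systemctl question: neo4j-admin is a standalone command-line tool, not a unit that systemctl manages (systemctl controls the neo4j server process itself). A hedged alternative to running the whole command as root is to run it as the neo4j user, which owns the certificate files (the paths and port below are assumptions):
# run the backup as the neo4j user instead of root
sudo -u neo4j neo4j-admin backup --backup-dir=/var/backups/neo4j --name=graph.db-backup --from=REMOTE_IP:6362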

Google VM instance transferring files using the gcloud command-line tool

I have a website and I want to send a backup file to my Google VM server on Google Cloud.
At first I thought of using a PHP and curl script, but just before I started working on it I saw on Google's site that it is possible to send a file to a Google VM with gcloud. That seemed like a good and simple solution, but when I tried to follow the directions I ran into a strange problem for which I found no answer.
When I try to run the following code:
gcloud compute scp /root/mivzakim.php jtube-get-youtube-1:~
I get this error:
ERROR: (gcloud.compute.start-iap-tunnel) Python version 2.7.6 does not support SSL/TLS SNI needed for certificate verification on WebSocket connection.
ssh_exchange_identification: Connection closed by remote host
lost connection
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I couldn't understand the connection to Python, and I couldn't quite understand the meaning of this error.
If anyone could help me, he would be blessed :) Thank you!
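The error itself points at the Python that gcloud is using: 2.7.6 predates SNI support in the ssl module. The CLOUDSDK_PYTHON workaround mentioned further down this page may apply here as well (a sketch; the path is a placeholder for any Python 2.7.9+ build, and newer SDKs also accept Python 3):
# point gcloud at a Python build that supports SSL/TLS SNI
export CLOUDSDK_PYTHON=/usr/bin/python2.7
gcloud compute scp /root/mivzakim.php jtube-get-youtube-1:~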

Setting up the VM server on Google Cloud to run Jupyter notebook

I am following a tutorial on how to run a Jupyter notebook on Google Cloud Platform (https://towardsdatascience.com/running-jupyter-notebook-in-google-cloud-platform-in-15-min-61e16da34d52). I am stuck at "Step 8: Set up the VM server". I created the Jupyter configuration file by typing
jupyter notebook --generate-config
in the SSH session. After checking whether it was created with
ls ~/.jupyter/jupyter_notebook_config.py
I get the message No such file or directory. I really don't understand what is going on. I have never created a VM before and I am a biologist (trying to become a data scientist, lost in IT terminology); all I want to do is merge my dataframes in the cloud, as I am lacking memory on my laptop. Can you please help me?
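One way to debug this is to ask Jupyter where it actually keeps its configuration, since the directory can differ from ~/.jupyter (a sketch; both commands print their paths):
# print the directory Jupyter uses for configuration files
jupyter --config-dir
# regenerate the config; the command prints the full path it writes to
jupyter notebook --generate-config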

Is it possible to SSH into AWS instances using an IDE such as PyCharm?

I am stuck on a technical issue in a project and I think the forum could help me out.
I have an EC2 instance (type p2.xlarge) running on AWS. I cloned a repository onto this instance which requires PyTorch and CUDA dependencies (this point has been taken care of).
Now, the issue is that I want to work on and run this codebase (which is on the AWS instance now) somehow from my local PyCharm IDE. In short, I don't have proper resources on my laptop to run the repository, so I have to run it on an AWS instance, but for debugging purposes a local IDE would be a great option.
Is it possible to do that? In other words, I can SSH into the AWS instance and run code, but everything is done through the command line. Could I instead SSH through PyCharm, see the AWS code on my local machine within PyCharm, and change, debug, or run it as if it were local, while it actually executes on the instance?
Please suggest a solution to it.
Thanks in advance.
EDIT-1:
After following @Cromulent's suggestion, I have arrived here:
Setting the remote:
Upload happening between the local & remote repo.
I still don't understand the requirement of syncing the local and remote folders, when I only want to open the remote folder in my PyCharm IDE and work on it.
I think after this setup, I have to change the code in the local copy and PyCharm will sync it to the remote copy. In this scenario, how will I run the remote code in PyCharm (using the resources, i.e. the GPUs, of the remote instance, not my local machine)? I am just syncing it; to run it again I have to SSH through the command line and run the script (which does not serve the purpose).
EDIT-2:
After @Cromulent's suggestions.
Actually, it did work, but I am still not able to run the remote code locally.
I am getting the error below when running any remote script. If I run the same script over SSH in the terminal, it runs normally. I tried to fix the problem using this post on Stack Overflow, but it didn't work either.
ssh://ubuntu@ec2-52-41-247-169.us-west-2.compute.amazonaws.com:22/home/ubuntu/anaconda3/bin/python -u <08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py
/home/ubuntu/anaconda3/bin/python: can't open file '<08ad9807-3477-4916-96ce-ba6155e3ff4c>/home/ubuntu/InsightProject/scripts/download_flownet2.py': [Errno 2] No such file or directory
Below is a screenshot of the problem:
PyCharm Professional supports remote Python interpreters (either the globally installed Python interpreter or a virtualenv). It works by creating an SSH connection to the server and then running the code on the remote host. The results are then displayed locally in PyCharm Professional. You can do remote debugging as well.
You MUST be using the professional version of PyCharm though. The free community version does not support this feature.
You can find the documentation here:
https://www.jetbrains.com/help/pycharm/configuring-remote-interpreters-via-ssh.html
One more solution is to deploy a Jupyter Notebook on your remote server. Then you will be able to use it from PyCharm Professional Edition.
Don't forget to add rules for the Jupyter port (e.g. allow 8888) in your AWS console and on your instance.
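A minimal sketch of that setup, assuming an Ubuntu instance and a key file (the host name and key are placeholders):
# on the EC2 instance: start a notebook server on port 8888
jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
# alternatively, skip the AWS firewall rule and tunnel the port from the local machine
ssh -i mykey.pem -L 8888:localhost:8888 ubuntu@EC2_PUBLIC_DNS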
To configure a remote interpreter for your notebook do this (source):
Open the Jupyter Notebook page of the Settings/Preferences dialog.
On this page, select or clear the Markdown cells rendering enabled option, and specify the username and password. Note that for single-user notebooks these fields are optional; leave them blank.
Fill in the username (for JupyterHub) and password.
Click the link Configure remote interpreter. You'll find yourself at the Project Interpreter page.
Configure the remote interpreter, as described in the section Configuring Python Interpreter.
You will want to configure a remote interpreter.
I tried the above approach but it didn't work for me. I edited my post to get additional input from the community, but I didn't get any after the first answer was posted.
My friend actually figured out another way to fix the issue: he uses NoMachine on the local machine to open a connection to the remote desktop. Then you can install PyCharm directly on the remote machine and work there. I hope this will help others.
The solution is in his blog post. (Thanks to Shaobo Guan)
Another solution would be to use VNC instead of NoMachine.

gcloud crashed (SSLHandshakeError) in gcloud app deploy

I am unable to deploy my app, as I started getting the error below this morning.
I have tried gcloud info --run-diagnostics and gcloud components reinstall without much help.
I tried to deploy it using the old Google App Engine Launcher for Windows but faced the same error.
It worked until last night (IST) using gcloud. Please help!
I am on the latest gcloud SDK and have updated all its components. I am on Win10. I tried rebooting my laptop as well.
C:\gaurav\coding\python\myapp\myapp\dist>gcloud app deploy --project=myproject --version 1 --verbosity=info ./app.yaml
INFO: Refreshing access_token
ERROR: gcloud crashed (SSLHandshakeError): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
C:\gaurav\coding\python\myapp\myapp\dist>
Diagnostics Output.
C:\gaurav\coding\python\myapp\myapp\dist> gcloud info --run-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
ERROR: Reachability Check failed.
Cannot reach https://accounts.google.com (SSLHandshakeError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects (SSLHandshakeError)
Cannot reach https://www.googleapis.com/auth/cloud-platform (SSLHandshakeError)
Network connection problems may be due to proxy or firewall settings.
Do you have a network proxy you would like to set in gcloud (Y/n)? n
ERROR: Network diagnostic (0/1 checks) failed.
C:\gaurav\coding\python\myapp\myapp\dist>
Although gcloud info --run-diagnostics complains that the three URLs are not reachable, I am able to open them from a web browser.
I found that when using Fiddler (for viewing network traffic) with HTTPS decryption enabled, I received the SSLHandshakeError.
Stopping the tool (or choosing not to decrypt HTTPS traffic) and then running gcloud resulted in success.
According to the comments, this is also a problem with other web debugging proxies such as Charles.
A problem in recent GAE and gcloud SDK versions is the presence of invalid SSL certificates; see, for example, Google App Engine SSL Certificate Error and issue 38338974.
You could try my suggested solution in the above-mentioned post and replace your SDK's certificate file with a valid one (you will have to locate a good one for the gcloud SDK; my answer was for the GAE SDK).
You might also be able to use the gcloud config command to set the core custom_ca_certs_file configurable property to point to a file with up-to-date certificates, if you have one. I didn't try it, YMMV.
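For the record, setting that property looks like this (a sketch; the bundle path is a placeholder for a file with up-to-date certificates):
# point gcloud at a custom CA certificate bundle
gcloud config set core/custom_ca_certs_file /path/to/ca_certs.pem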
Upgrading to Python 2.7.9 on MacOS High Sierra solved the issue for me.
I had this issue upon installing the Google Cloud SDK on MacOS Mojave. I am not behind a corporate proxy, and all the answers on the web seemed to indicate that this was the issue. I noticed in the install.sh script that it takes an environment variable, CLOUDSDK_PYTHON, for the Python executable. So I fixed this by exporting the path to my Python 3 executable.
In my case:
export CLOUDSDK_PYTHON=/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6
The install worked as expected after this.
On MacOS Catalina, the solution was to completely uninstall gcloud and reinstall it.
I had the same issue. I downloaded the root/intermediate certificate from one of the Google URLs you get when you run gcloud info --run-diagnostics and appended it to the cacerts.txt file that is being used. In my case it was the following one: google-cloud-sdk/lib/third_party/httplib2/python2/httplib2/cacerts.txt
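One way to fetch that certificate chain without a browser, assuming openssl is installed (the host is one of the URLs the diagnostics complain about):
# dump the server's certificate chain; copy the PEM blocks into cacerts.txt
openssl s_client -showcerts -connect accounts.google.com:443 </dev/null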
For me it was a conflict between Python versions; gcloud was calling a different version. The solution was to set CLOUDSDK_PYTHON to point to the correct Python (python2 in this case).
I tried to deploy with gcloud and was getting this error. Here is how I fixed it:
Access Google Cloud using Firefox
Download the certificate .pem file and chain file
Append them to cert.pem, located at:
C:\Program Files\google-cloud-sdk\lib\third_party\certifi\cert.pem
(using this you are creating a custom CA cert file based on your proxy)
Update the cert file setting as mentioned here: https://cloud.google.com/sdk/gcloud/reference/config/set
(Absolute path to a custom CA cert file.)
Make sure the path ends with cert.pem.
I fixed the problem by installing gcloud from apt-get. The guide is at this link
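For Debian/Ubuntu, the apt route looks roughly like this (paraphrased from memory of Google's install guide; newer releases ship the package as google-cloud-cli, so check the current instructions):
# add Google Cloud's apt repository and signing key, then install the SDK
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install google-cloud-sdk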