Error in launching Jupyter notebook on Google Cloud Platform

I followed this blog post on creating a new VM and launching a Jupyter notebook on GCP:
https://medium.com/#kn.maragatham09/installing-jupyter-notebook-on-google-cloud-11979e40cd10
I am getting this error message:

Did you try the obvious checks?
Does the cert exist? (Maybe it was removed along the way.)
Does the cert have the correct ownership?
Does the user executing Jupyter have the rights to access the cert?
Maybe you could try running Jupyter in verbose mode and post the output here (see the command sketch below).
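For reference, one way to run those checks from the VM's shell. The certificate path and the "jupyter" user below are placeholders; use whatever certfile/keyfile and Linux user your jupyter_notebook_config.py actually points at:

# Placeholder path; substitute the certfile from your Jupyter config.
CERT=/home/jupyter/.jupyter/mycert.pem

ls -l "$CERT"                            # does the file exist, and who owns it?
sudo -u jupyter test -r "$CERT" && echo "readable by the jupyter user"
openssl x509 -in "$CERT" -noout -dates   # is it a valid, unexpired certificate?

# Run the notebook server with debug logging to get the full traceback.
jupyter notebook --debug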

FYI, GCP now offers an easy-to-use, pre-configured Jupyter notebook environment called AI Platform Notebooks. Have you tried using that instead? You won't need to worry about setting up any certs :)

Related

Is it possible to access sagemaker jupyter notebook from intellij IDE?

I have deployed a model via a Jupyter notebook on a SageMaker instance.
Now I am wondering whether there is any way to access the SageMaker Jupyter notebook from the IntelliJ IDE.
I am looking for a way to set up an environment to work with peers so that I can get code reviews.
I can see that I can control AWS Lambda functions via the terminal, but I am not sure about the Jupyter notebook on the SageMaker instance.

GCP AI notebook instance permission

I have a GCP AI Notebooks instance. Anyone with admin access to notebooks in my project can open the notebook and read, modify, or delete any folder/file created by me or any other user on my team. Is there a way to create a private directory like /home/user, as we could have done if we had used JupyterHub installed on a VM?
Implementing your requirement is not feasible with AI Notebooks. AI Notebooks is intended as a rapid prototyping and development environment that can be easily managed, and advanced multi-user scenarios fall outside its intended purpose.
The Python kernel in AI Notebooks always runs as the Linux user "jupyter", regardless of which GCP user accesses the notebook. Anyone who has editor permissions on your Google Cloud project can see everyone else's work through the Jupyter UI.
To isolate each user's work, the recommended option is to set up an individual notebook instance per user; see the 'Single User' option.
It is not feasible to combine multiple instances into a master instance in AI Notebooks, so the recommended approach is to give each user a notebook instance and share source code via Git or another repository system. See the 'Save a notebook to GitHub' doc for more information.
You probably created the notebook using service account mode. You can restrict access to a single user via single-user mode.
Example:
proxy-mode=mail,proxy-user-mail=user@domain.com
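For illustration, a sketch of creating such a single-user instance with gcloud. The instance name, zone, and VM image below are placeholders; only the proxy-mode/proxy-user-mail metadata keys come from the example above:

# All names below are placeholders; adjust the zone and image to your setup.
gcloud notebooks instances create my-single-user-notebook \
    --location=us-central1-a \
    --vm-image-project=deeplearning-platform-release \
    --vm-image-family=common-cpu \
    --metadata=proxy-mode=mail,proxy-user-mail=user@domain.com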

Where to keep the Dataflow and Cloud composer python code?

This is probably a silly question. In my project we'll be using Dataflow and Cloud Composer. For that, I asked for permission to create a VM instance in the GCP project to hold both the Dataflow and Cloud Composer Python programs. But the client asked me the reason for creating a VM instance and told me that I can run Dataflow without a VM instance.
Is that possible? If yes, how do I achieve it? Can anyone please explain? It would be really helpful to me.
You can run Dataflow pipelines or manage Composer environments from your own computer once your credentials are authenticated and you have both the Google Cloud SDK and the Dataflow (Apache Beam) Python library installed. However, this depends on how you want to manage your resources. I prefer to use a VM instance so that all the resources I use are in the cloud, where it is easier to set up VPC networks involving different services. Also, copying data from a VM instance into GCS buckets is usually faster than from an on-premise computer/server.
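As a rough sketch of submitting a Dataflow job from a local machine (the project ID, bucket, and pipeline file names are placeholders; any Beam pipeline can be launched the same way):

# One-time setup on your own machine (placeholder project name).
pip install "apache-beam[gcp]"
gcloud auth application-default login
gcloud config set project my-gcp-project

# Submit the pipeline to the Dataflow service; the workers run in GCP,
# only the job submission happens on your laptop.
python my_pipeline.py \
    --runner DataflowRunner \
    --project my-gcp-project \
    --region us-central1 \
    --temp_location gs://my-bucket/tmp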

"Failed to mount the azure file share. Your clouddrive won't be avaible"

Whenever I start the Azure Cloud Shell, I get this error:
Failed to mount the Azure file share. Your clouddrive won't be available.
Your Cloud Shell session will be ephemeral so no files or system changes will persist beyond your current session.
Can someone help me or explain why this is happening?
By chance did you delete the storage resource that was created for you when first launching Cloud Shell?
Please follow these steps (a command sketch follows below):
1. Run "clouddrive unmount"
2. Restart Cloud Shell via the restart icon, or exit and relaunch
3. You should be prompted with the storage creation dialog again.
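For reference, the same flow from a Bash Cloud Shell session; the resource names in the mount command are placeholders, in case you want to point Cloud Shell at an existing file share instead of going through the creation dialog:

clouddrive unmount        # detach the broken clouddrive, then exit and relaunch Cloud Shell

# Optional: attach an existing storage account/file share explicitly
# (all four values below are placeholders for your own resources).
clouddrive mount -s <subscription-id> -g <resource-group> \
    -n <storage-account> -f <file-share>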
Here is a similar case to yours; please refer to it.
Also, we can delete this Cloud Shell storage (in the resource group) and then re-create it.
In my situation I was prompted to create the clouddrive after launching the cloud shell for the first time. The resource group and storage account were created but the file share was not. I manually created it and restarted cloud shell with no issues.
I had a similar case, and the solution was rather trivial.
Cloud Shell couldn't access my file share due to misconfigured security settings: I had tweaked the new Azure Storage firewall settings without realizing the impact on Cloud Shell.
In my case, closing and reopening the Azure CLI worked fine.

Creating an iso of a RHEL instance

I have an Amazon EC2 instance with RHEL 7.3 on it. I would like to convert this into an ISO so that I can migrate it wherever I want. What are the best tools to create an ISO of a virtual machine? Or how do I clone/back up this VM so that I can restore it anywhere I want?
You can work with VMs and AWS programmatically via AWS CLI commands.
You want to get familiar with the import-task and export-task commands.
The best place to start is by reading an official AWS guides for:
Exporting an Instance as a VM Using VM Import/Export
Importing a VM as an Image Using VM Import/Export
The key information you need to pick up from the guide is this quote:
"You can't export an instance from Amazon EC2 unless you previously
imported it into Amazon EC2 from another virtualization environment."
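As an illustration of the export side (only applicable if the instance was originally imported via VM Import/Export, per the quote above), a sketch with the AWS CLI; the instance ID, target environment, bucket, and formats below are placeholders:

# Placeholder instance ID and bucket; the bucket needs the ACL grants for
# the VM Import/Export service described in the AWS guide linked above.
aws ec2 create-instance-export-task \
    --instance-id i-0123456789abcdef0 \
    --target-environment vmware \
    --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=exports/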
Yes, there are solutions, one of them linked by @Nicholas Smith. That being said, if you go the unofficial route you might end up in a dark alley where help may not be available. I highly recommend not proceeding with trying to clone the EC2 instance into a VM at this point. You will spend a lot of time on it, with a huge risk factor for the future.
To achieve what you want, you need to create a RHEL VM using any VM software and load this VM into AWS; then you will be able to work with the VM in AWS, make any necessary changes, and export it again for local or transport needs.
As you are running a widely used Linux distribution (RHEL), you can attempt to recreate your EC2 environment manually by launching a VM that runs the same kernel version along with the same package versions. From there, you can tarball whatever files you need from your production instance and copy them over to your on-premise site using SCP/SFTP.
Just get your RHEL environment into a VM locally, import it to AWS, and you're set (a sketch of the import command follows below).
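A sketch of that import step with the AWS CLI, assuming the locally built image has already been uploaded to S3 and the vmimport service role from the AWS guide is in place; the bucket and file names are placeholders:

# Placeholder bucket/key; Format may be ova, vmdk, vhd or raw.
aws ec2 import-image \
    --description "RHEL 7.3 golden image" \
    --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=rhel-7.3.ova}"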
Clonezilla provides functionality to create images. Generated images can be converted to ISO files.
It doesn't seem to be something that Amazon promotes as a service; however, the EC2 AMI tools include an ec2-unbundle command for extracting the image from an AMI bundle. There's a guide here on how to download and run an EC2 AMI locally by using it.
The caveat is that the ec2-unbundle command currently appears to work only on Linux, not OS X or Windows.
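For orientation, a rough sketch of that flow with the EC2 AMI tools; this only applies to S3-backed (instance-store) AMIs, and the bucket, key file, and manifest names below are placeholders, so check the exact options against the guide linked above:

# Download the bundle parts and manifest from S3 (placeholder values).
ec2-download-bundle -b my-ami-bucket/rhel-bundle \
    -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY" \
    -k pk-XXXX.pem -d ./bundle

# Reassemble and decrypt the raw disk image from the downloaded parts.
ec2-unbundle -m ./bundle/image.manifest.xml \
    -k pk-XXXX.pem -s ./bundle -d ./image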