I have installed xeus, xeus-cling, and the Jupyter extension. I changed the kernel to one of the C++ versions and the cell language to C++, but when I click run, the cell never produces any output. Can someone please help me solve this?
Running xeus-cling under VS Code
Xeus-cling works in the VS Code environment. You have to activate your conda environment and invoke VS Code from it (I use the Insiders edition). On Linux this looks like
conda activate xeus-cling # my env for xeus-cling, where I compiled cling
then invoke code (Insiders) in your project directory:
code-insiders . & # or code . & if you are using the stable version
If you still have problems, try the following:
Start a Jupyter notebook from the command line (in the conda environment described above):
jupyter notebook --no-browser
Copy or remember the line with the token, which looks like http://127.0.0.1:8888/?token=8daf8f57bef55918defb467defc55f0305803caa27dd01d2
Next, go to code-insiders and click Jupyter Server: Remote in the bottom bar.
A list will pop up at the top of the window; select Existing, or paste the URL with the token into it.
A message should now appear asking to reload the kernel; click the button to do so.
In the bottom bar, select the kernel, e.g. C++14.
Create a new blank Jupyter notebook and don't forget to change the cell language to C++!
Here's a solution that doesn't require activating the conda environment. The following commands are what worked on ubuntu:focal.
Update conda:
conda update conda --yes
Create the environment to install xeus-cling kernel:
conda create -n xeus-cling --yes
Install xeus-cling kernel in the xeus-cling environment created earlier:
conda install xeus-cling -c conda-forge -n xeus-cling --yes
Find where your conda environments are installed by looking for envs directories in the output of the following command:
conda info
My conda environments were located in /etc/miniconda/envs, so there is a subdirectory for each environment holding all of its installed packages. The kernels are located in xeus-cling/share/jupyter/kernels; the path starts with xeus-cling because that's what we named the conda environment earlier.
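For example, on my machine the kernel directory can be inspected like this (substitute the envs path reported by conda info for the /etc/miniconda prefix):
ls /etc/miniconda/envs/xeus-cling/share/jupyter/kernels
# xcpp11  xcpp14  xcpp17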
Inside the kernels/ directory you will find a few C++ kernels. To install the conda xeus-cling kernels directly into Jupyter, do the following:
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp11 --sys-prefix
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp14 --sys-prefix
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp17 --sys-prefix
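To double-check that Jupyter (and therefore VS Code) can now see the new kernels, list the registered kernelspecs; the exact paths in the output depend on your installation:
jupyter kernelspec list
# Available kernels:
#   xcpp11    .../share/jupyter/kernels/xcpp11
#   xcpp14    .../share/jupyter/kernels/xcpp14
#   xcpp17    .../share/jupyter/kernels/xcpp17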
Open VS Code as you normally would. No need to activate the conda environment. Create a new Jupyter Notebook. Finally, make sure you select the C++ Kernel in the upper right corner of the screen.
I used @abu_bua's answer above and these docs from the xeus-cling project to figure this out. I hope this helps.
Happy coding!
I am trying to use a post-startup script to create a Vertex AI User Managed Notebook whose Jupyter Lab has a dedicated virtual environment and corresponding computing kernel when first launched. I have had success creating the instance and then, as a second manual step from within the Jupyter Lab > Terminal, running a bash script like so:
#!/bin/bash
cd /home/jupyter
mkdir -p env
cd env
python3 -m venv envName --system-site-packages
source envName/bin/activate
envName/bin/python3 -m pip install --upgrade pip
python -m ipykernel install --user --name=envName
pip3 install geemap --user
pip3 install earthengine-api --user
pip3 install ipyleaflet --user
pip3 install folium --user
pip3 install voila --user
pip3 install jupyterlab_widgets
deactivate
jupyter labextension install --no-build @jupyter-widgets/jupyterlab-manager jupyter-leaflet
jupyter lab build --dev-build=False --minimize=False
jupyter labextension enable @jupyter-widgets/jupyterlab-manager
However, I have not had luck using this code as a post-startup script (being supplied through the console creation tools, as opposed to command line, thus far). When I open Jupyter Lab and look at the relevant structures, I find that there is no environment or kernel. Could someone please provide a working example that accomplishes my aim, or otherwise describe the order of build steps that one would follow?
Post startup scripts run as root.
When you run:
python -m ipykernel install --user --name=envName
the notebook instance executes it as the current user, which is root, whereas when you use the Terminal it runs as the jupyter user.
Option 1) Have 2 scripts:
Script A. Contents specified in original post. Example: gs://newsml-us-central1/so73649262.sh
Script B. Downloads Script A and executes it as the jupyter user. Example: gs://newsml-us-central1/so1.sh; use this one as the post-startup script.
#!/bin/bash
set -x
gsutil cp gs://newsml-us-central1/so73649262.sh /home/jupyter
chown jupyter /home/jupyter/so73649262.sh
chmod a+x /home/jupyter/so73649262.sh
su -c '/home/jupyter/so73649262.sh' jupyter
Option 2) Create the file from a single bash script using a heredoc (EOF): write the contents into a file and then execute it as the jupyter user, as shown above. See the sketch below.
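A minimal sketch of Option 2 (file names are placeholders and the setup commands are abbreviated; use the full script from the original post):
#!/bin/bash
set -x
# Write the setup commands to a single file via a quoted heredoc
# (quoting EOF prevents variable expansion at write time).
cat << 'EOF' > /home/jupyter/setup-env.sh
#!/bin/bash
cd /home/jupyter
mkdir -p env && cd env
python3 -m venv envName --system-site-packages
source envName/bin/activate
python -m ipykernel install --user --name=envName
deactivate
EOF
# Hand ownership to the jupyter user and run it as that user, as in Option 1.
chown jupyter /home/jupyter/setup-env.sh
chmod a+x /home/jupyter/setup-env.sh
su -c '/home/jupyter/setup-env.sh' jupyter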
This is being posted as supporting context for the accepted solution from @gogasca.
@gogasca's suggestion (I'm using Option 1) works great if you are patient. Through many attempts, I discovered that the inconsistent behavior was based on the timing of access. Using Option 1, the User Managed Notebook appears available for use in Vertex AI Workbench (green check and clickable "OPEN JUPYTERLAB" link) before the installation script(s) have finished.
If you open the Notebook too soon, you will find two things: (1) you will be prompted for a recommended Jupyter Lab build, for instance:
Build Recommended
JupyterLab build is suggested:
@jupyter-widgets/jupyterlab-manager changed from file:../extensions/jupyter-widgets-jupyterlab-manager-3.1.1.tgz to file:../extensions/jupyter-widgets-jupyterlab-manager-5.0.3.tgz
and (2) while the custom environment/kernel is present and accessible, if you try to use ipyleaflet or ipywidgets tools you will see one of several JavaScript errors, depending on how quickly you try to use the kernel relative to the build that is (apparently) still taking place in the background: Error displaying widget: model not found, and/or a broken-page icon with a JavaScript error that, if clicked, shows you something like:
[Open Browser Console for more detailed log - Double click to close this message]
Failed to load model class 'LeafletMapModel' from module 'jupyter-leaflet'
Error: No version of module jupyter-leaflet is registered
at f.loadClass (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.bcbea9feb6e7c4da7530.js?v=bcbea9feb6e7c4da7530:1:74856)
at f.loadModelClass (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:10729)
at f._make_model (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:7517)
at f.new_model (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:5137)
at https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:6385
at Array.map ()
at f._loadFromKernel (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/150.3e1e5adfd821b9b96340.js?v=3e1e5adfd821b9b96340:1:6278)
at async f.restoreWidgets (https://someURL.notebooks.googleusercontent.com/lab/extensions/@jupyter-widgets/jupyterlab-manager/static/134.bcbea9feb6e7c4da7530.js?v=bcbea9feb6e7c4da7530:1:77764)
The solution here is to keep waiting. In my demo script, I transfer a file at the end of the build process. If I wait long enough for this file to actually appear in the Instance directories, the recommendation for a rebuild is absent and the extensions work properly.
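If you want that wait to be explicit rather than guesswork, one sketch (assuming you control the setup script; the marker file path here is arbitrary) is to touch a marker file as the script's very last step and poll for it before opening JupyterLab:
# Last line of the setup script (run as the jupyter user):
touch /home/jupyter/.setup-complete

# Then, before opening JupyterLab, wait until the marker exists:
until [ -f /home/jupyter/.setup-complete ]; do
  echo "still building..."
  sleep 30
done
echo "setup finished; safe to open the notebook"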
When I launch a notebook in Jupyter Lab running on a GCP AI Platform Notebook, it is not recognizing a package despite having already installed it.
I have installed the package (RDKit) using conda and when I run
import rdkit
in a terminal there isn't an issue. However when I open my notebook and try the same line of code I get an error telling me that it can't find the module.
If you install a new package from the terminal, you want to ensure two things:
You installed it with the version of Python that matches your notebook kernel (pip vs pip3; see the quick check below)
You restarted your notebook kernel after installing the new package (Notebook -> Kernel -> Restart Kernel)
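A quick way to check which Python each pip is bound to (a generic sanity check, not specific to AI Platform Notebooks):
# Each command reports the Python it installs into; use the one that
# matches your notebook kernel's Python version.
pip --version    # e.g. "pip 19.x from ... (python 2.7)"
pip3 --version   # e.g. "pip 20.x from ... (python 3.7)"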
But there's a better option:
You can make things easier on yourself by installing the package directly from the notebook cell using:
%pip install <package_name>
With this method there is no need to worry about pip vs pip3 (it is automatically taken care of) and no need to restart the kernel either
I have been using Jupyter with Python 3 as the root environment; I installed it with Anaconda. Now I want to create a new kernel for Python 2, but after creating it, my old Python 3 packages (like matplotlib) didn't get transferred to the Python 2 environment, and when I try to install them manually it says they are already present. Could anyone help me out?
Can you share the actual command-line code you're using?
You should be able to do something like:
conda create -n Py27 python=2.7 anaconda matplotlib
and then have access to the libraries there.
You could then do
source activate Py27
jupyter notebook
and that should show you the Py27 kernel.
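If the new environment does not show up as a kernel on its own, a common extra step (shown here as a sketch; it assumes ipykernel is installed in the environment, which the anaconda metapackage provides) is to register it explicitly with ipykernel:
source activate Py27
python -m ipykernel install --user --name Py27 --display-name "Python 2.7"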
I built Cling on my laptop with Ubuntu 15.04 following the instructions given at https://github.com/root-mirror/cling#jupyter because I wanted to use the Cling kernel for Jupyter. I installed Jupyter and checked that Cling is in my PATH, but when I type the command
jupyter kernelspec install cling
I get the following
OSError: [Errno 2] No such file or directory: 'cling'
Does anyone know what's happening?
According to the source code, the jupyter kernelspec install command expects the path to the directory containing the kernel spec file (kernel.json) as an argument. So if you cloned the cling repository in, say, ~/cling/src, this should work:
jupyter kernelspec install ~/cling/src/tools/cling/tools/Jupyter/kernel/cling
That's probably because your folder defines three versions of the Cling kernel (C++11, C++14 and C++17).
So instead of trying to add cling itself, add one of those versions, or all three if you want to.
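For example, assuming the repository layout from the previous answer, installing just the C++14 variant might look like this (the exact directory name can differ between cling versions, so this path is an assumption):
jupyter kernelspec install --user ~/cling/src/tools/cling/tools/Jupyter/kernel/cling-cpp14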
I had the same problem just one minute ago, but I was able to solve it. I executed:
$ jupyter kernelspec install --user cling-cpp11
directly from /home/ubuntu_user/cling_ubuntu/share/cling/Jupyter/kernel.
The installation was successful; I moved to my working directory and started a Jupyter notebook. It opened OK, but the kernel immediately died.
I thought the problem was that I had to install the kernel from the directory where I was going to launch the Jupyter notebook, so I did the following:
After uninstalling the kernel (also from /home/ubuntu_user/cling_ubuntu/share/cling/Jupyter/kernel) with:
jupyter kernelspec uninstall cling-cpp11
I repeated the whole installation process:
Let's assume that you usually launch jupyter from /home/ubuntu_user, and that you have your cling build here:
/home/ubuntu_user/cling_ubuntu.
Then:
Go there: $ cd /home/ubuntu_user
$ source activate my_env (I work with Anaconda, so I activated my environment)
$ export PATH=/home/ubuntu_user/cling_ubuntu/bin:$PATH
$ cd cling_ubuntu/share/cling/Jupyter/kernel/cling-cpp11
$ pip install -e .
Here you have to move to your future working directory.
$ cd /home/ubuntu_user, then type:
$ jupyter kernelspec install --user cling_ubuntu/share/cling/Jupyter/kernel/cling-cpp11
... and now the kernel stays alive and works OK.
I followed these instructions to set up GraphLab on my Ubuntu machine. At the end, I opened Python 2.7.6 and ran the first of the test lines import graphlab as gl. This gave me
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named graphlab
How can I begin to diagnose this?
Details:
I ran python -V from a terminal, and it returned me Python 2.7.6.
In /usr/bin I find the following pyth* entries ... I wonder if something somewhere pointed at the wrong version:
python python2.7-config python3.4 python-config
python2 python2-config python3.4m pythontex
python2.7 python3 python3m pythontex3
The Dato Graphlab Create installer did not actually install graphlab on my Mac (El Capitan). I did the following (Anaconda is installed) in a terminal window:
% pip install graphlab-create
That subsequently installed Graphlab Create. You can then easily verify:
% python
Python 2.7.10 |Continuum Analytics, Inc.| (default, Sep 15 2015, 14:29:08)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import graphlab
>>>
I've noticed that occasionally, Python will forget that Graphlab Create is installed. A repeat of the above 'pip' command will cause it to remember.
Another option is to use virtualenvwrapper for the easy creation and application of virtual environments. For example, following this documentation, start with installation:
sudo pip install virtualenvwrapper
Open your .bashrc settings file, for example run gedit .bashrc and append the following lines to the bottom of it:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
Restart your terminal window, and then you can make your virtual environment, say call it "test":
mkvirtualenv test
Now test is a virtual environment, and you are in it (i.e., test is currently "activated"). To put GraphLab in test, run:
pip install graphlab-create
Similarly, you can install other Python toolkits in test using pip, and any Python program you run from within test will only see the toolkits installed there.
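To come back to the environment later, virtualenvwrapper provides workon and deactivate; a brief usage sketch:
workon test                  # re-activate the "test" environment
python -c "import graphlab"  # should now import without error
deactivate                   # leave the environment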
Maybe you should install graphlab in a virtualenv.
1. Ensure your system has virtualenv installed. To verify, execute pip freeze; to install, execute sudo pip install virtualenv in your terminal before proceeding.
2. Copy and execute the following commands in your terminal. This will create a virtual environment called 'graphlab' and install GraphLab Create version 0.9.1:
virtualenv graphlab
. graphlab/bin/activate
pip install graphlab-create==0.9.1
You may need to activate the conda env by running
source activate dato-env
inside the terminal
Check your system path
import sys
print sys.path
It should contain graphlab-0.9.1. If not, then something went wrong with your installation. I recommend using a virtual environment in Python.
I had the same problem on ubuntu 16 desktop. The solution for me was pretty simple. After you start the notebook using
(gl-env) davis@smeagol:~/progs/ml-foundations$ jupyter notebook
Click the file navigator to locate and open the notebook where import graphlab causes the error. When the notebook starts, I imagine you see |Python [Root] in the top right. To fix this, use the menu bar: Kernel -> Change kernel -> gl-env. The top-right label should now say |Python [gl-env]. Afterwards, when you run the notebook, import graphlab will work.
There is a Conda tab on the initial landing page of the Jupyter UI. In it you can see two envs named root and gl-env. I've tried to delete the root one, but even though it's not the default, all my notebooks start up with that environment, and deleting it causes an internal error.
GraphLab is not supported on Python 3. Install Python 2.7 as described in
https://conda.io/docs/user-guide/tasks/manage-python.html
If you don't see graphlab, the environment is simply not set to "dato-env" (it may be set to "root" instead).
If you use the "Launcher" application, set "Environment" to "dato-env" at the top left.
Well, I guess the thread is dead.
After tinkering with uninstalls and reinstalls a couple of times, the only way I can get "import graphlab" to work reliably is to manually activate dato-env.
Open your terminal and type the command below:
source activate dato-env
Prior to this, close all Jupyter notebooks. I can tell that dato-env is in effect when my bash prompt changes to: (dato-env) pydev@smruti:~$
Now, in your Jupyter notebook, try import graphlab; it will execute without showing an import error.
Hope this helps!!
I had the same problems, but then I found that the files that come with the Machine Learning specialization (https://www.coursera.org/learn/ml-foundations/notebook/lGQH5/open-your-notebook-workspace-to-follow-along) contain some additional code, after which you don't get any errors:
import graphlab
Set product key on this computer. After running this cell, you will not need to re-enter your product key.
graphlab.product_key.set_product_key('your product key here')
Limit number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing.
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
Output active product key.
graphlab.product_key.get_product_key()
I had the same problem. I followed these steps:
1. Install the Anaconda 2.7 version. Then I created a virtual environment and selected the Python 2.7 version.
2. After creating the virtual environment, open a terminal in it and run pip install notebook.
3. Then I registered at https://turi.com/ because GraphLab Create requires an (academic) license to use. Run the following command, which is given to you after registration, in the terminal:
pip install --upgrade --no-cache-dir https://get.graphlab.com/GraphLab-Create/2.1/your registered email address here/your product key here/GraphLab-Create-License.tar.gz
4. Run jupyter notebook.
5. import graphlab
6. I got an error, so I ran the graphlab.get_dependencies() command and then restarted the kernel.
7. After all the above steps, I typed import graphlab again.
8. It executed without errors.