Cling Kernel for Jupyter on Ubuntu (C++)

I built Cling on my laptop with Ubuntu 15.04, following the instructions at https://github.com/root-mirror/cling#jupyter, because I wanted to use the Cling kernel for Jupyter. I installed Jupyter and checked that Cling is in my PATH, but when I type the command
jupyter kernelspec install cling
I get the following
OSError: [Errno 2] No such file or directory: 'cling'
Does anyone know what's happening?

According to the source code,
the jupyter kernelspec install command expects the path to the directory containing the kernel spec file (kernel.json) as an argument. So if
you cloned the cling repository into, say, ~/cling/src, this should work:
jupyter kernelspec install ~/cling/src/tools/cling/tools/Jupyter/kernel/cling
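For reference, jupyter kernelspec install only needs that directory to contain a kernel.json file. A Cling kernel spec looks roughly like the following (the exact values here are illustrative; check the file that ships in the repository):
{
  "display_name": "C++11",
  "argv": ["jupyter-cling-kernel", "-f", "{connection_file}", "--std=c++11"],
  "language": "C++"
}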

That's probably because three versions of the Cling kernel are defined in that folder (C++11, C++14 and C++17).
So instead of trying to add cling itself, add one of those versions, or all three if you want to.
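For example, assuming the same checkout location as in the previous answer and that the version-specific spec directories are named cling-cpp11, cling-cpp14 and cling-cpp17:
jupyter kernelspec install ~/cling/src/tools/cling/tools/Jupyter/kernel/cling-cpp11
jupyter kernelspec install ~/cling/src/tools/cling/tools/Jupyter/kernel/cling-cpp14
jupyter kernelspec install ~/cling/src/tools/cling/tools/Jupyter/kernel/cling-cpp17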

I had the same problem just one minute ago, but I was able to solve it. I executed:
$ jupyter kernelspec install --user cling-cpp11
directly from /home/ubuntu_user/cling_ubuntu/share/cling/Jupyter/kernel.
The installation was successful; I moved to my working directory and launched a Jupyter notebook. It opened fine, but the kernel immediately died.
I thought the problem was that I had to install the kernel from the directory where I was going to launch the Jupyter notebook, so that is what I did.
After uninstalling the kernel (also from /home/ubuntu_user/cling_ubuntu/share/cling/Jupyter/kernel) with:
jupyter kernelspec uninstall cling-cpp11
I repeated the whole installation process:
Let's assume that you usually launch Jupyter from /home/ubuntu_user, and that you have your cling repository here:
/home/ubuntu_user/cling_ubuntu.
Then:
Go there: $ cd /home/ubuntu_user
$ source activate my_env (I work with Anaconda, so I activated my environment)
$ export PATH=/home/ubuntu_user/cling_ubuntu/bin:$PATH
$ cd cling_ubuntu/share/cling/Jupyter/kernel/cling-cpp11
$ pip install -e .
Now move to your future working directory:
$ cd /home/ubuntu_user
and type:
$ jupyter kernelspec install --user cling_ubuntu/share/cling/Jupyter/kernel/cling-cpp11
... and now the kernel stays alive and works fine.
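You can double-check that the spec was registered with:
$ jupyter kernelspec list
which should now list cling-cpp11 next to the default python kernel.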

Related

Running C++ Jupyter Notebook in VSCode Insiders

I have installed xeus, xeus-cling and the Jupyter extension. I changed the kernel to one of the C++ versions and the cell language to C++, but when I click Run, the cell never produces any output. Can someone please help me solve this?
Running xeus-cling under VS Code
Xeus works in the VS Code environment. You have to activate your conda environment and invoke VS Code from it (I use the Code Insiders edition). On Linux this looks like
conda activate xeus-cling # my env for xeus-cling; where I compiled cling
then invoke code (insiders) in your project directory
code-insiders .& # or code .& if you are using the stable version
If you still have problems, try the following:
Start a Jupyter notebook from the command line (in the conda environment described above):
jupyter notebook --no-browser
Copy or remember the line with the token, which looks like http://127.0.0.1:8888/?token=8daf8f57bef55918defb467defc55f0305803caa27dd01d2
Next, go to code-insiders and click Jupyter Server: Remote in the bottom bar.
A list will pop up at the top of the window.
Select Existing, or copy the URL with the token into it.
A message should now appear asking you to reload the kernel; click the button to do so.
In the bottom bar, set the kernel to e.g. C++14.
Create a new blank Jupyter notebook and don't forget to change the cell language to C++!
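For reference, the command-line part of these steps collected in one place (environment name as above):
conda activate xeus-cling
jupyter notebook --no-browser   # note the http://127.0.0.1:8888/?token=... line
code-insiders .                 # or: code .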
Here's a solution that does not require activating the conda environment. The following commands are what worked on ubuntu:focal.
Update conda:
conda update conda --yes
Create the environment to install xeus-cling kernel:
conda create -n xeus-cling --yes
Install xeus-cling kernel in the xeus-cling environment created earlier:
conda install xeus-cling -c conda-forge -n xeus-cling --yes
Find where your conda environments are installed by looking for envs directories in the output of the following command:
conda info
My conda environments were located in /etc/miniconda/envs, so there is a subdirectory for each environment, which holds all of its installed packages. The kernels are located in xeus-cling/share/jupyter/kernels; the path starts with xeus-cling because that's what we named the conda environment earlier.
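If you would rather not dig through the conda info output, conda env list prints the location of each environment directly (the path below is the same example location as above):
$ conda env list
$ ls /etc/miniconda/envs/xeus-cling/share/jupyter/kernels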
Inside the kernels/ directory you will find a few C++ kernels. To install the conda xeus-cling kernels directly into Jupyter, do the following:
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp11 --sys-prefix
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp14 --sys-prefix
jupyter kernelspec install /etc/miniconda/envs/xeus-cling/share/jupyter/kernels/xcpp17 --sys-prefix
Open VS Code as you normally would. No need to activate the conda environment. Create a new Jupyter Notebook. Finally, make sure you select the C++ Kernel in the upper right corner of the screen.
I used @abu_bua's answer above and these docs from the xeus-cling project to figure this out. I hope this helps.
Happy coding!

ModuleNotFoundError: No module named 'fuse'

I have set up a GPU Jupyter Notebook VM using the AI platform on Google Cloud. The server runs Debian stretch.
I want to mount a bucket I've created called example onto a folder called /home/jupyter/transfer. I've been following the instructions outlined here but when I run gsfuse example /home/jupyter/transfer I get the error:
ModuleNotFoundError: No module named 'fuse'
I've installed fuse with:
sudo apt-get install fuse
which succeeds, but the gsfuse command still doesn't run. I then installed the pip package with:
pip install fuse-python
And it still wouldn't work.
Any ideas?
After a lot of trial and error I managed to figure this out. The problem was the python package and where I was installing it.
If you do:
sudo apt-get install fuse
pip install -U fusepy --user
gsfuse example /home/jupyter/transfer --background
It'll work (--background runs the mount in the background).
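As a quick sanity check that the bucket is really mounted (mount point as in the question):
ls /home/jupyter/transfer
mount | grep transfer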

C:\Windows\system32>pip install pandas - Fatal error in launcher: Unable to create process using '"'

I have recently installed python-2.7.14 (32-bit) on Windows 10, but when I try to install any package using the command pip install XXX, it gives me the above error.
I tried all the solutions to this existing problem, but none of them worked for me.
My Python is installed in C:\Python27.
python -m pip install XXX
worked for me, but when I tried to open Jupyter Notebook from the Windows command prompt by typing
jupyter notebook
it gave me the same fatal error.
Actually, it was my organisation's McAfee antivirus that was blocking the exe from running.
To work around this, I installed Python in a folder on D:\ instead.
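If you cannot reinstall Python elsewhere, a workaround sketch that avoids the launcher .exe files entirely is to start everything through the interpreter itself (this assumes pip and notebook are installed for that interpreter):
python -m pip install pandas
python -m notebook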

Install libraries for py27 with pre-installed py3 Jupyter

I have been using Jupyter with py3 as the root environment; I installed it with Anaconda. Now I wanted to create a new kernel for py2, but after creating it my old py3 packages (like matplotlib) didn't get transferred to py2, and when I try to install them manually it says they are already present. Could anyone help me out?
Can you share the actual command-line code you're using?
You should be able to do something like:
conda create -n Py27 python=2.7 anaconda matplotlib
and then have access to the libraries there.
You could then do
source activate Py27
jupyter notebook
and that should show you the py27 kernel
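If the py27 kernel still does not show up in Jupyter's kernel list, you can register it explicitly with ipykernel (a sketch, assuming the environment is named Py27 as above):
source activate Py27
python -m pip install ipykernel
python -m ipykernel install --user --name Py27 --display-name "Python 2.7"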

GraphLab Create "ImportError: No module named graphlab"

I followed these instructions to set up GraphLab on my Ubuntu machine. At the end, I opened Python 2.7.6 and ran the first of the test lines import graphlab as gl. This gave me
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named graphlab
How can I begin to diagnose this?
Details:
I ran python -V from a terminal, and it returned Python 2.7.6.
In /usr/bin I find the following pyth* entries ... I wonder if something somewhere pointed at the wrong version:
python python2.7-config python3.4 python-config
python2 python2-config python3.4m pythontex
python2.7 python3 python3m pythontex3
The Dato Graphlab Create installer did not actually install graphlab on my Mac (El Capitan). I did the following (Anaconda is installed) in a terminal window:
% pip install graphlab-create
That subsequently installed Graphlab Create. You can then easily verify:
% python
Python 2.7.10 |Continuum Analytics, Inc.| (default, Sep 15 2015, 14:29:08)
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
>>> import graphlab
>>>
I've noticed that occasionally, Python will forget that Graphlab Create is installed. A repeat of the above 'pip' command will cause it to remember.
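A quick way to check which interpreter and package set you are actually using (a small diagnostic sketch):
% pip show graphlab-create
% python -c "import graphlab; print(graphlab.__file__)"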
Another option is to use virtualenvwrapper for the easy creation and application of virtual environments. For example, following this documentation, start with installation:
sudo pip install virtualenvwrapper
Open your .bashrc settings file, for example run gedit .bashrc and append the following lines to the bottom of it:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
Restart your terminal window, and then you can make your virtual environment, say call it "test":
mkvirtualenv test
Now test is a virtual environment, and you are in it (i.e., test is currently "activated"). To put GraphLab in test:
pip install graphlab-create
Similarly, you can install other Python toolkits in test using pip, and any Python program you run from within test will see only the toolkits installed there.
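To use the test environment from Jupyter as well, you can register it as a kernel (a sketch, assuming you also install ipykernel inside test):
pip install ipykernel
python -m ipykernel install --user --name test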
Maybe you should install graphlab in a virtualenv.
1. Ensure your system has virtualenv installed. To verify, execute pip freeze. To install, execute sudo pip install virtualenv in your terminal before proceeding.
2. Copy and execute the following commands in your terminal. This will create a virtual environment called 'graphlab' and install GraphLab Create version 0.9.1:
virtualenv graphlab
. graphlab/bin/activate
pip install graphlab-create==0.9.1
You may need to activate the conda env by running
source activate dato-env
inside the terminal
Check your system path
import sys
print sys.path
It should contain graphlab-0.9.1. If not, then something went wrong with your installation. I recommend using a virtual environment in Python.
I had the same problem on Ubuntu 16 desktop. The solution for me was pretty simple. After you start the notebook using
(gl-env) davis@smeagol:~/progs/ml-foundations$ jupyter notebook
use the file navigator to locate the notebook where import graphlab causes the error. When the notebook starts, I imagine you see Python [Root] in the top right. To fix this, click Kernel -> Change kernel -> gl-env in the menu bar. Now the top-right label should say Python [gl-env]. Afterwards, when you run the notebook, import graphlab will work.
There is a Conda tab on the initial landing page of the Jupyter UI. In it you can see two environments named root and gl-env. I tried to delete the root one, but even though it's not the default, all my notebooks start up with that environment, and deleting it causes an internal error.
GraphLab is not supported on Python 3. Install Python 2.7 as described in
https://conda.io/docs/user-guide/tasks/manage-python.html
If you don't see graphlab, the environment is simply not set to "dato-env" (it may be set to "root" instead).
If you use the "Launcher" application, set "Environment" to "dato-env" at the top left.
Well, I guess the thread is dead.
After tinkering with uninstalling and reinstalling a couple of times, the only way I can get import graphlab to work reliably is to manually activate dato-env.
Open your terminal and type the command below:
source activate dato-env
Prior to this, close all Jupyter notebooks. I can tell that dato-env is in effect when my bash prompt changes to: (dato-env) pydev@smruti:~$
Now, in your Jupyter notebook, try import graphlab; it will execute without an import error.
Hope this helps!
I had the same problem, but then I found that the files that come with the Machine Learning specialization (https://www.coursera.org/learn/ml-foundations/notebook/lGQH5/open-your-notebook-workspace-to-follow-along) contain some additional code, after which you don't get any errors:
import graphlab
# Set the product key on this computer. After running this cell, you will not need to re-enter your product key.
graphlab.product_key.set_product_key('your product key here')
# Limit the number of worker processes. This preserves system memory, which prevents hosted notebooks from crashing.
graphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)
# Output the active product key.
graphlab.product_key.get_product_key()
I had the same problem. I followed these steps:
1. Install the Anaconda 2.7 version. Then I created a virtual environment and selected Python 2.7.
2. After creating the virtual environment, open a terminal and run pip install notebook.
3. Then I registered at https://turi.com/ because GraphLab Create requires an academic license. Run the following command, which you are given after registration, in the terminal:
pip install --upgrade --no-cache-dir https://get.graphlab.com/GraphLab-Create/2.1/your registered email address here/your product key here/GraphLab-Create-License.tar.gz
4. Run jupyter notebook.
5. import graphlab
6. Then I got an error, so I ran the graphlab.get_dependencies() command and restarted the kernel afterwards.
7. After all the above steps, I typed import graphlab again.
8. It executed without errors.