Launching anaconda spyder gui in cygwin - python-2.7

I am connecting my Windows 7 computer to a Linux-based cluster using Cygwin. Within a specific node in the cluster I want to launch the Anaconda Spyder GUI.
To launch Spyder, you simply type spyder into Cygwin,
but that returns:
QXcbConnection: Could not connect to display
Aborted (core dumped)
I also tried:
QT_QPA_PLATFORM=offscreen spyder
but that returns:
QFontDatabase: Cannot find font directory /home/spotter/anaconda2/lib/fonts - is Qt installed correctly?
I installed qt4 dev-tools, but it didn't change anything.
EDIT:
I installed xinit and xorg, and now I try this:
Before logging in with ssh, I run:
export DISPLAY=localhost:0.0
Then I log in using ssh:
ssh -Y -X username@machine
and now when I try to use spyder I get:
connect localhost port 6000: Connection refused
QXcbConnection: Could not connect to display localhost:11.0

So it sounds like you are running Cygwin on your local Windows machine, logging into a remote server with ssh, and running Spyder from that machine with the intent of having it show up on your local screen. Now that you have startx working, you are close to a solution.
Between steps 5 and 6, you need to run the export DISPLAY command on the remote machine and set it to the name of your local computer. You will need to know your hostname for this. The steps will look like this:
startx
ssh -Y -X username@machine
export DISPLAY=win-machine-name:0.0
spyder
The last two commands are executed on the remote machine. I just made up win-machine-name; in its place, put the IP address or machine name of your Windows machine. That is how you set the DISPLAY environment variable on the remote machine, so X clients know where to send the graphics commands.
Hope this helps!
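One caveat worth adding (not part of the original answer): when DISPLAY points directly at your Windows X server like this, rather than going through ssh's own forwarding, the X server has to accept the remote connection. A minimal sketch, assuming your Cygwin/X server is started so that it listens on TCP, with remote-host standing in for the cluster node's name:
startx -- -listen tcp
xhost +remote-host
Both commands run on the Windows side; xhost +remote-host tells the local X server to accept clients from that host on port 6000.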

For me what I did was:
Install packages associated with startx
Change the sshd_config file to allow X11 forwarding
export DISPLAY=localhost:0.0
startx
Log in with ssh -Y -X username@machine
spyder
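Before launching Spyder, it can help to confirm that X forwarding is actually working on the remote side. A quick check, assuming a simple X client such as xclock is installed on the remote machine (neither command is in the original steps):
echo $DISPLAY
xclock
With ssh -X/-Y, DISPLAY is normally set by ssh itself to something like localhost:10.0, and xclock should pop up a small clock window on your Windows desktop.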

Related

GCP VM not installing nVidia driver properly

I created the VM using the GCP Console in a browser.
While creating the VM, I selected the image "c2-deeplearning-pytorch-1-8-cu110-v20210619-debian-10" and a T4 GPU.
The VM gets created and started, and it shows a green icon in the browser.
Then I try to connect with gcloud compute ssh, and it asks whether I want to install the NVIDIA driver. I answer Y, but it gives an error (dpkg was interrupted) and the driver is not installed:
This VM requires Nvidia drivers to function correctly. Installation
takes ~1 minute. Would you like to install the Nvidia driver? [y/n] y
Installing Nvidia driver. install linux headers:
linux-headers-4.19.0-16-cloud-amd64 E: dpkg was interrupted, you must
manually run 'sudo dpkg --configure -a' to correct the problem.
Nvidia driver installed.
I try to verify whether the driver is installed by running the following Python code:
import torch
torch.cuda.is_available()  # returns False
Has anybody else faced this issue?
This is the correct way to install the NVIDIA driver on a GCP instance:
cd /
sudo apt purge nvidia-*
Reboot
cd /
sudo wget https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
sudo sh cuda_11.2.2_460.32.03_linux.run
Adjust your configuration according to the options the installer presents in the terminal
Reboot
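After the reboot, a quick way to confirm the driver is loaded and visible to PyTorch (these checks are not part of the answer above, just standard diagnostics):
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
nvidia-smi should list the T4, and the Python one-liner should now print True.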
The solution to my problem was:
Run manually: sudo dpkg --configure -a
Disconnect from the machine.
Connect again using SSH. Select Y again when asked to install the NVIDIA driver.
It works then.
Make sure you are running as root. I know this sounds silly, but if you use their notebook instances the default user is not root, and if you ssh into the instance and run something like gpustat or your own code, you might get errors such as "NVIDIA drivers are not loaded".
If you make sure your user (which is called jupyter in the default case) is in the sudoers file, then all will work fine.
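A minimal sketch of granting that, assuming the default jupyter user on a Debian-based image (the group name is an assumption; some GCP images use a google-sudoers group instead):
sudo usermod -aG sudo jupyter
Log out and back in afterwards so the new group membership takes effect.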
It is often very complicated to install or reinstall GPU drivers on GCP instances. Make sure you actually need to reinstall before you attempt other solutions.

Google Material Design Components on Ubuntu Server on Google Cloud

I cannot get Material Design Components to run on my virtual server. I have tried following their "quick start" page and their Material basics (Web 101) course to no avail. I am able to execute most of the steps in either tutorial, but I cannot see the JavaScript apply to the page. What am I doing wrong? I will detail my process below so that someone can hopefully spot my mistake.
First I create a VM instance on the Google Cloud Platform. It is a Ubuntu 18.04 LTS image with 1 CPU, 3.75 GB memory, and HTTP/HTTPS traffic allowed on the firewall.
Then I install Node.js and NPM on the machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
Then I clone the codelab from GitHub. (following Web 101 in this example)
git clone https://github.com/material-components/material-components-web-codelabs
...and navigate to the pertinent directory.
cd material-components-web-codelabs/mdc-101/starter
In that directory, I install the project's dependencies with NPM.
npm install
The install works just fine, save for one optional dependency called "/chokidar/fsevents", which is apparently only for Mac OS X anyway.
From the same directory, I run the start script.
npm start
At this point, the tutorial says I should be able to reach the site. It says to navigate to http://localhost:8080/. Since I am installing this on a remote, cloud server, I replace "localhost" with the server's external IP. I invariably get a timeout error from the browser.
Ensure that port 8080 is open and listening inside the VM instance by running the telnet, nmap, or netstat commands.
$ telnet localhost 8080
$ nmap <external-ip-vm-instance>
$ netstat -plant
If it is not listening, then this means that the application was not installed correctly.
Look at the firewall rules in GCP to make sure that the VM instance allows ingress traffic to port 8080.
Since you are running Ubuntu, make sure that the default Ubuntu firewall did not block port 8080. If it did, you have to allow access to port 8080 by running the following command:
$ sudo ufw allow 8080/tcp
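One more thing worth checking, which the answer above does not mention: the codelab's dev server may be listening only on 127.0.0.1, in which case it is unreachable from outside even with every firewall open. The listening address is visible in the netstat/ss output, for example:
$ sudo ss -tlnp | grep 8080
If this shows 127.0.0.1:8080 rather than 0.0.0.0:8080, the server is only accepting local connections and needs to be started so that it binds to all interfaces.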

Local DC/OS is unreachable

I have a DC/OS local installation using Vagrant. I restarted my Mac and powered up the virtual machines on which DC/OS is installed, but I am still not able to open the GUI. When I run the dcos service command on the CLI, I get the following error:
URL [http://m1.dcos/mesos/master/state.json] is unreachable.
The response I got from mesosphere:
It would be good if you can log in to your master node with:
vagrant ssh m1
and run the following command:
sudo systemctl status dcos-adminrouter.service
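If adminrouter turns out not to be running, its recent log output (not mentioned in the response, but a standard systemd check) can be viewed with:
sudo journalctl -u dcos-adminrouter.service -n 50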
I have cleaned up the Vagrant environment and set it up again; I have not run into the problem since.

Remote access to virtual machines

Hi, I have a desktop with VMware Workstation Player on it. I've installed a few virtual machines for school (Ubuntu, Windows 8.1, ...). Is there a way to access them remotely, without installing TeamViewer or VNC on every machine? I can install it on the host, but I don't want to give remote access to my whole computer.
You can access Windows 8.1 with RDP, which is enabled by default. For Ubuntu, you can install XRDP to get access the same way. However, you need to make sure that the source host can reach the internal VMs by means of IP connectivity (routing).
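A minimal sketch of the XRDP side on Ubuntu (the package comes from the standard repositories; the answer does not spell out the commands):
sudo apt-get install xrdp
sudo systemctl enable --now xrdp
Then connect from the Windows Remote Desktop client to the VM's IP address on port 3389.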
Yes, you can - you have to use SSH.
http://www.openssh.com/
Not to state the obvious, but why not use SSH? Just command-line access, not a full UI.
I haven't used this myself but I hear this is a good option for windows http://www.freesshd.com/
As for Ubuntu:
sudo apt-get update
sudo apt-get install openssh-server
sudo ufw allow 22
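Once the server is running, you can connect from the host or another machine on the network (username and vm-ip-address are placeholders here):
ssh username@vm-ip-address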

Installing PostGIS on Windows

I've installed PostgreSQL and PostGIS, and now I'm trying to follow these instructions:
http://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template
But I keep getting the following error, both in the command prompt and in Cygwin:
C:\Users\Home>createdb -E UTF8 template_postgis
createdb: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
And I know PostgreSQL is running, because I'm using it right now!
Installing open source applications can sometimes be so frustrating...
I'll be very grateful for your help!
Are you by any chance using Cygwin here? Particularly, is the system picking up createdb from a Cygwin binary?
If your server is the Cygwin build, try removing it and replacing it with the Windows version.
If your server is the Windows version, but you have createdb from a Cygwin install in the PATH, try removing Cygwin from your PATH to make sure you pick up the Windows version of createdb.
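Two quick checks, not from the answer above but standard diagnostics here: see which createdb is first in the PATH, and force a TCP connection so the Windows server (which does not create the /tmp Unix socket the error complains about) is reached directly. The postgres user name below is an assumption; use whatever superuser your installer set up.
which createdb
createdb -h localhost -U postgres -E UTF8 template_postgis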