Remote access to virtual machines - VMware

Hi, I have a desktop with VMware Workstation Player on it. I've installed a few virtual machines for school (Ubuntu, Windows 8.1, ...). Is there a way to access them remotely without installing TeamViewer or VNC on every machine? I could install something on the host, but I don't want to expose my whole computer remotely.

You can access Windows 8.1 with RDP once Remote Desktop is enabled in the system settings. For Ubuntu you can install xrdp and connect the same way. However, you need to make sure the host you are connecting from can reach the VMs' internal IPs (i.e., there is routing to them).
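For the Ubuntu guest, a minimal sketch (assuming a recent Ubuntu release with ufw; package and service names may differ by version):

```shell
# On the Ubuntu guest: install and enable the xrdp service
sudo apt-get update
sudo apt-get install -y xrdp
sudo systemctl enable --now xrdp
# Open RDP's default port if ufw is active
sudo ufw allow 3389/tcp
```

You can then connect from the Windows host's Remote Desktop client (mstsc) to the VM's IP address.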

Yes, you can: use SSH.
http://www.openssh.com/

Not to state the obvious, but why not use SSH? That gives you command-line access rather than a full UI.
I haven't used this myself, but I hear this is a good option for Windows: http://www.freesshd.com/
As for Ubuntu:
sudo apt-get update
sudo apt-get install openssh-server
sudo ufw allow 22
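Once the server is installed, you can check that it is actually listening before trying to connect (the IP address and username below are examples):

```shell
# On the guest: confirm sshd is running and bound to port 22
sudo systemctl status ssh --no-pager
sudo ss -tlnp | grep ':22'

# From another machine (replace user and IP with your own):
ssh student@192.168.122.10
```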

Related

GCP VM not installing nVidia driver properly

I have created the VM using GCP Console in browser.
While creating VM, I selected the VM Image as "c2-deeplearning-pytorch-1-8-cu110-v20210619-debian-10". Also, I selected GPU as T4.
VM gets created and started and it shows green icon in browser.
Then I try to connect with "gcloud compute ssh ", and it asks if I want to install the NVIDIA driver. I answer Y, but it reports a dpkg lock-file error and the driver is not installed:
This VM requires Nvidia drivers to function correctly. Installation
takes ~1 minute. Would you like to install the Nvidia driver? [y/n] y
Installing Nvidia driver. install linux headers:
linux-headers-4.19.0-16-cloud-amd64 E: dpkg was interrupted, you must
manually run 'sudo dpkg --configure -a' to correct the problem.
Nvidia driver installed.
I try to verify whether the driver is installed by running this Python code:
import torch
torch.cuda.is_available()  # returns False
Anybody else faced this issue?
This is the correct way to install NVIDIA driver on a GCP instance:
cd /
sudo apt purge nvidia-*
Reboot
cd /
sudo wget https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
sudo sh cuda_11.2.2_460.32.03_linux.run
Adjust your configuration according to the options the installer presents in the terminal
Reboot
The solution to my problem was:
Run manually: sudo dpkg --configure -a
Disconnect from the machine.
Connect again using SSH and select Y again when asked to install the NVIDIA driver.
It works then.
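After the driver install finally succeeds, you can verify it from the shell and from Python; the torch import assumes the deep-learning image from the question:

```shell
# Should list the T4 GPU and the driver version
nvidia-smi
# Should print True on a correctly configured image
python3 -c "import torch; print(torch.cuda.is_available())"
```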
Make sure you are running as root. I know this sounds silly, but on their notebook instances the default user is not root, and if you SSH into the instance and run something like gpustat or custom code, you may get errors such as "NVIDIA drivers are not loaded".
If you make sure your user (called jupyter in the default case) is in the sudoers, everything will work fine.
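A sketch of granting that, assuming the Debian convention where members of the sudo group get sudo rights:

```shell
# As root: add the jupyter user to the sudo group
usermod -aG sudo jupyter
# Verify membership; the new group takes effect on the next login
id jupyter
```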
It is often very complicated to install or reinstall GPU drivers on GCP instances. Make sure you actually need to reinstall before you attempt other solutions.

Google Material Design Components on Ubuntu Server on Google Cloud

I cannot get Material Design Components to run on my virtual server. I have tried following their "quick start" page and their Material basics (Web 101) course to no avail. I am able to execute most of the steps in either tutorial, but I cannot see the JavaScript apply to the page. What am I doing wrong? I will detail my process below so that someone can hopefully spot my mistake.
First I create a VM instance on the Google Cloud Platform. It is an Ubuntu 18.04 LTS image with 1 CPU, 3.75 GB of memory, and HTTP/HTTPS traffic allowed through the firewall.
Then I install Node.js and NPM on the machine.
sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
Then I clone the codelab from GitHub. (following Web 101 in this example)
git clone https://github.com/material-components/material-components-web-codelabs
...and navigate to the pertinent directory.
cd material-components-web-codelabs/mdc-101/starter
In that directory, I install the dependencies with NPM.
npm install
The install works just fine, save for one optional dependency, "/chokidar/fsevents", which is apparently only for Mac OS X anyway.
From the same directory, I start the dev server.
npm start
At this point, the tutorial says I should be able to reach the site. It says to navigate to http://localhost:8080/. Since I am installing this on a remote, cloud server, I replace "localhost" with the server's external IP. I invariably get a timeout error from the browser.
Ensure that port 8080 is open and listening inside the VM instance by running telnet, nmap, or netstat:
$ telnet localhost 8080
$ nmap <external-ip-vm-instance>
$ netstat -plant
If it is not listening, the application is not running or was not installed correctly.
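A related cause worth checking: many dev servers bind only to 127.0.0.1 by default, so the port is listening but unreachable from outside. Assuming the codelab uses webpack-dev-server under the hood (an assumption about the tooling), you can ask it to listen on all interfaces:

```shell
# Forward the --host flag through npm to the underlying dev server
npm start -- --host 0.0.0.0
```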
Look at the firewall rules in GCP to make sure the VM instance allows ingress traffic to port 8080.
Since you are running Ubuntu, also make sure the default Ubuntu firewall (ufw) is not blocking port 8080. If it is, allow access by running:
$ sudo ufw allow 8080/tcp
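The GCP firewall can also be checked and fixed from the command line; the rule name below is an example:

```shell
# List existing rules, then create an ingress rule for TCP 8080
gcloud compute firewall-rules list
gcloud compute firewall-rules create allow-8080 \
  --allow tcp:8080 \
  --source-ranges 0.0.0.0/0
```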

Launching anaconda spyder gui in cygwin

I am connecting my Windows 7 computer to a Linux-based cluster using Cygwin. Within a specific node in the cluster I want to launch the Anaconda Spyder GUI.
To launch Spyder you simply type spyder into Cygwin,
but that returns:
QXcbConnection: Could not connect to display
Aborted (core dumped)
I also tried:
QT_QPA_PLATFORM=offscreen spyder
but that returns:
QFontDatabase: Cannot find font directory /home/spotter/anaconda2/lib/fonts - is Qt installed correctly?
I installed qt4 dev-tools, but it didn't change anything.
EDIT:
I installed xinit and xorg, and now I try this:
Before logging in with ssh I run:
export DISPLAY=localhost:0.0
then I login using ssh:
ssh -Y -X username@machine
and now when I try to use spyder I get:
connect localhost port 6000: Connection refused
QXcbConnection: Could not connect to display localhost:11.0
So it sounds like you are running Cygwin on your local Windows machine, logging into a remote server with ssh, and running spyder there with the intent of having it display on your local screen. Now that you have startx working, you are close to a solution.
After logging in with ssh, you need to run the export DISPLAY command on the remote machine and set it to the name of your local computer. You will need to know your hostname for this. The steps will look like this:
startx
ssh -Y -X username@machine
export DISPLAY=win-machine-name:0.0
spyder
The last two commands are executed on the remote machine. I just made up win-machine-name; in its place, put the IP address or machine name of your Windows machine. That is how you set the DISPLAY environment variable on the remote machine, so X clients know where to send their graphics commands.
Hope this helps!
For me what I did was:
Install packages associated with startx
Change the sshd_config file to allow X11 forwarding
export DISPLAY=localhost:0.0
startx
login with ssh -Y -X username@machine
spyder
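The steps above can be sketched end-to-end; hostnames are placeholders, and the sshd_config change happens on the remote machine:

```shell
# On the remote machine, /etc/ssh/sshd_config needs:
#   X11Forwarding yes
# then restart sshd:
#   sudo systemctl restart sshd

# On the local Cygwin side: point clients at the local X server and start it
export DISPLAY=localhost:0.0
startx &

# Log in with X11 forwarding enabled and launch the GUI
ssh -Y -X username@machine
spyder
```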

How can I downgrade CentOS 6.5 to CentOS 6.3 on VirtualBox 4.3.10

I ran a yum update on CentOS 6.3, which upgraded it to CentOS 6.5 with a new kernel. After restarting VirtualBox I could no longer boot either the old or the new OS. Do you know a way to downgrade to CentOS 6.3? And how can I get into the machine with SSH without knowing its IP address?
As for your second question: check the settings of your DHCP server, which list the hostnames of the machines it has leased addresses to, including your virtual machine. In a typical setup the device usually called a "modem router" acts as both DHCP and DNS server, which is enough if you only need the SSH connection temporarily.
Now for the first question. To my knowledge it is difficult to "downgrade" all updated packages on Linux; if it is only the kernel, that is a different matter. I suggest:
https://www.centos.org/docs/2/rhl-cg-en-7.2/buildkernel-bootloader.html

Vagrant, VirtualBox built-in or no?

Trying to get set up with Vagrant but getting the error:
The "VBoxManage" command or one of its dependencies could not be found.
Please verify VirtualBox is properly installed. You can verify everything
is okay by running "VBoxManage --version" and verifying that the VirtualBox
version is outputted.
Just confused because the Vagrant documentation states:
"The getting started guide will use Vagrant with VirtualBox, since it is free, available on every major platform, and built-in to Vagrant."
I don't want to install VirtualBox separately if it's supposed to be included when I installed Vagrant. I'm running OS X 10.8 if it's relevant; I'm guessing I just need to install VirtualBox? If that's the case, what do they mean in the documentation when they say it's "built-in"?
Installing VirtualBox is required if you plan on using VirtualBox with Vagrant. I'm guessing they meant that the VirtualBox integration is built-in?
Recently, they've abstracted out the VirtualBox-specific code and are working on allowing multiple providers. I believe VMware is now supported in addition to VirtualBox.
I had this message but my problem was different. I use VMware Fusion as the provider, and Vagrant was not able to detect which provider I was using; it assumed VirtualBox. I fixed the issue by passing the provider flag to vagrant up. Here is the full command:
vagrant up --provider vmware_fusion