I recently created a GPU instance on Google Cloud, installed Anaconda along with all the required dependencies, and then stopped the instance. Now that I have started it again, Anaconda is no longer installed, which seems very strange. Please let me know if you know any details about this. I also went through the Google documentation on stopping and starting instances and could not find anything suggesting the instance should behave like this.
https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance
No, this should not happen if the programs were installed properly on the persistent/boot disk file system.
If the programs were instead installed on tmpfs or another memory-backed file system, then rebooting the instance would lose the memory contents, and with them the data and any links to it.
However, this is normally not the case, since VM instance packages are installed on the persistent disk.
I suspect your installation failed for some reason. Check whether the packages are still installed. If you are using a Red Hat Linux variant, you can run 'yum list installed' to see all installed packages, or 'yum list installed | grep -i <package-to-search-for>' to filter for a particular package.
If the package shows up, then the issue could be a misconfiguration or some other problem. Use dmesg and/or cat /var/log/messages to view the logs and look for anything related to Anaconda or the GPU software.
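For example, something along these lines might surface relevant messages (the grep patterns are only illustrative):
dmesg | grep -i -E 'nvidia|error'
sudo grep -i conda /var/log/messages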
I just encountered the same problem. I know this question is old, but this might help a complete beginner like myself. In my case I needed to SSH into the instance instead of just being in the project-level virtual environment.
gcloud beta compute ssh --zone "europe-west2-c" "myinstancename" --project "fired-brimstone-234534"
I'm looking to learn about Cloud Foundry and I'm trying to get a development instance of it set up on my local Windows 10 PC, but I'm not having any luck.
I'm finding a lot of information about PCF Dev, which was deprecated a while ago. I also looked at its replacement, CF Dev (https://github.com/cloudfoundry-attic/cfdev), but its GitHub page mentions that the repository is no longer receiving updates. I still went ahead and tried installing it using the instructions in the README:
cf install-plugin -r CF-Community cfdev
But the link it uses to download the plugin is broken:
Starting download of plugin binary from repository CF-Community...
Get "https://d3p1cc0zb2wjno.cloudfront.net/cfdev/cfdev-v0.0.18-rc.36-windows.exe": dial tcp: lookup d3p1cc0zb2wjno.cloudfront.net: no such host
Can anyone recommend a way to get a development instance of Cloud Foundry set up on my local machine so I can play around with it?
Thanks
Yes, steer clear of pcf-dev and cf-dev. They may still work, but they are definitely not getting updates, so they will be way out of date by now.
My understanding, although I haven't tried this process in a while, is that the way to run Cloud Foundry locally is with VirtualBox, using bosh-deployment and cf-deployment.
For instructions on installing BOSH into VirtualBox using bosh-deployment, see the Install section of the bosh-deployment docs.
With BOSH installed, follow the cf-deployment guide to get CF installed. You can skip to step 4, since you're installing into VirtualBox. Be sure to read the entire document before you begin, and pay particular attention to the section with specific instructions for running locally.
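For orientation, standing up the local director looks roughly like this; it is a sketch from the bosh-deployment VirtualBox instructions, and the exact ops files, variables and IPs may have changed, so follow the current docs rather than copying it verbatim:
git clone https://github.com/cloudfoundry/bosh-deployment.git
bosh create-env bosh-deployment/bosh.yml \
  --state state.json \
  --vars-store creds.yml \
  -o bosh-deployment/virtualbox/cpi.yml \
  -o bosh-deployment/virtualbox/outbound-network.yml \
  -o bosh-deployment/bosh-lite.yml \
  -o bosh-deployment/jumpbox-user.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork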
I'm using AWS Lambda, which involves creating an archive of my Node.js script, including the node_modules folder, and uploading that to their infrastructure to run.
This works fine, except when it comes to node modules with native bindings (built with node-gyp). Because the binding was compiled and the project archived on my local computer (OS X), it is not compatible with AWS's (Amazon Linux) servers.
How can I cross-compile/install a node module (specifically, node-sqlite3) so when I upload it to another server arch it runs?
While not really a solution to your problem, a very easy workaround could be to simply compile the native addons on a Linux machine.
For your particular situation, I would use Vagrant. Vagrant can create virtual machines and configure them within seconds.
Find an OS image that resembles Amazon's Linux distro (Fedora, CentOS, or another distribution that uses yum as its package manager - see the Wiki)
Use a simple provisioning script that, when run by Vagrant on machine startup, runs npm install (optionally removing the node_modules folder first to ensure a clean installation)
For extra convenience, the script can also create the zip file for deployment
Once the installation finishes, the script shuts down the VM to avoid unnecessary consumption of system resources
Deploy!
It might require some tuning if the linked libraries are not in the same place on the target machine, but generally this seems to me like the best and quickest solution. A sketch of such a provisioning script is below.
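Here is what that script might look like (say provision.sh, referenced from the Vagrantfile's shell provisioner); the package names and the zip step are assumptions to adapt to your project:
#!/bin/bash
# provision.sh - rebuild native addons for Linux and package the function
yum install -y epel-release                  # repo/package names may vary per distro
yum install -y nodejs npm gcc-c++ make zip   # Node.js plus the toolchain node-gyp needs
cd /vagrant                                  # Vagrant's default synced project folder
rm -rf node_modules                          # clean install so node-sqlite3 is rebuilt for Linux
npm install --production
zip -r lambda-deploy.zip . -x "*.git*"       # optional: build the deployment archive in the same step
shutdown -h now                              # free resources once the build is done
Run vagrant up to build, then copy the zip out of the project folder.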
While installing the app using Vagrant might be sufficient in some cases, I have found it necessary to build the app on a Linux machine that is as close to Lambda's Amazon Linux AMI as possible.
You can read the original answer here: https://stackoverflow.com/a/34019739/303184
Steps to make it work:
Spawn a new EC2 instance. Make sure it is based on exactly the same image as your AWS Lambda runtime. You can review the Lambda environment details here: http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html. In our case, it was the Amazon Linux AMI called amzn-ami-hvm-2015.03.0.x86_64-gp2.
Install nvm and use it to install the same version of Node.js as the one on AWS Lambda. At the time of writing, it was v0.10.36. You can refer to http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html again to find out.
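Once nvm itself is installed per its README (the install one-liner changes between releases), matching the Lambda runtime is just:
nvm install 0.10.36          # same Node.js version as the Lambda runtime at the time
nvm alias default 0.10.36    # make it the default in new shells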
You will probably need to install git and the g++ compiler on the EC2 instance. You can do this by running:
sudo yum install git gcc-c++
Finally, clone your app onto the new EC2 instance and install its dependencies:
nvm use 0.10.36
npm install --production
You can then easily download the node_modules folder using scp or similar.
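For example, from your local machine (the key path, user and host are placeholders):
scp -i ~/.ssh/your-key.pem -r ec2-user@your-ec2-host:/home/ec2-user/yourapp/node_modules ./node_modules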
Along the same lines as Robert's answer: when I have to work in a different OS on my Mac, I use a VM, like Oracle's free virtualizer VirtualBox, to get Linux on my Mac at no cost. Or sign up for a new AWS account; you get a micro instance free for a year. Use that to get your Linux box and do whatever you need there.
AWS has a page describing how to deal with native NPM modules: https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/
I've just set up an Ubuntu Deep Learning AMI EC2 instance. I'm a total beginner on AWS/package handling stuff.
My aim is to use the instance to execute a Python deep learning script. This script uses a variety of packages.
When installing some of these packages with conda, I got an error reporting environment inconsistencies for 100+ packages. After many attempts to solve this, I thought removing Anaconda and reinstalling it might do the trick. After doing this, I've realised I may have messed up my instance even more: I can no longer use the preset deep learning environments the AMI was configured with, as these were accessed using conda commands, and I seem to have removed conda itself.
I've tried repeating the commands, but I get an error stating these environments no longer exist. A tutorial using these commands is here:
https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-conda.html
source activate tensorflow_p36
I expected the above to put me into the tensorflow_p36 environment, as in:
(tensorflow_p36) ubuntu#ip-XXX-XX-XX-XX:~/scripts
However, it gives an error message:
could not find environment: tensorflow_p36
I realise uninstalling conda was a major rookie error which seems to have totally broken my instance. If anyone has any ideas for salvaging it, that would be much appreciated!
Thanks very much
Not exactly your question, but if anybody else is thinking about uninstalling conda from the Deep Learning AMI because it seems insane, this might help.
The AWS Deep Learning AMI is configured in a way that makes it refuse to install conda environments that work reliably on other machines. This seems to fix the problem for me:
conda config --set channel_priority false
(This is maybe obvious to conda-heads, but confounded me for a while, so hopefully this helps somebody else.)
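You can check that the setting took effect with:
conda config --show channel_priority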
I have installed some Python NLTK libraries (via pip3 install) and C++ libraries (via apt-get install package_name_xxxx) on two different VM instances.
The Python packages for NLTK disappear and require a reinstall after a reboot or a change to the VM instance (e.g., adding memory or CPU cores).
The C++ libraries disappeared without a reboot or any change to the machine. I don't find anything in the system log, and a reinstall with apt-get works fine, but I am trying to figure out why this happens.
Is your GCE instance a preemptible instance? That option restarts the instance once every 24 hours and could be the reason why you are missing some packages.
After about an hour of inactivity, modifications outside the $HOME directory are lost. This includes installed packages.
See Custom installed software packages and persistence and usage limits.
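If this is Cloud Shell, one way to keep the Python side across sessions is to install packages into $HOME; a sketch (the export line belongs in ~/.bashrc so it persists too):
pip3 install --user nltk                 # installs under ~/.local, which is inside $HOME
export PATH="$HOME/.local/bin:$PATH"     # so user-installed console scripts are found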
How can I compile and run libvirt-snmp on VMware vSphere ESXi? Can somebody guide me with a step-by-step procedure?
I tried to follow the steps mentioned on the libvirt website, but I guess they are meant for a Linux distribution, because I could not execute the ./configure command.
After searching on Google I found a similar question which says that I need to create a VIB and then install that VIB. I have no idea about creating a VIB. Can somebody please guide me on this?
Can somebody guide me with a step-by-step procedure?
As a workaround:
1. Set up a Linux VM and create an NFS share.
2. Install and configure the required tool [in your case, libvirt] in the Linux VM, inside the NFS share. Note the export path and variables.
3. Mount the NFS share as a NAS volume in ESXi.
4. Create a soft link from the mounted NAS volume to /usr/bin in ESXi.
5. Create the corresponding directory tree under /usr/local/lib as required by the tool and link those paths to the NFS share too.
And you are good to run the tool. A rough sketch of the ESXi-side commands for steps 3-5 follows.
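The server IP, export path, datastore name and tool paths below are placeholders:
esxcli storage nfs add --host=192.168.1.10 --share=/export/tools --volume-name=toolsNFS   # step 3: mount the export as a datastore
ln -s /vmfs/volumes/toolsNFS/bin/libvirt-snmp /usr/bin/libvirt-snmp                       # step 4: put the binary on the ESXi PATH
ln -s /vmfs/volumes/toolsNFS/lib /usr/local/lib/libvirt-snmp                              # step 5: link the library tree the tool expects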
I have no idea about creating a VIB
Simply put, a VIB is a vSphere Installation Bundle, which is the supported way to push programs into ESXi. You can use the ar command to create a VIB from an RPM, and use the VIB Author tool to push the module into ESXi.
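For reference, a VIB is itself an ar archive, and installing one on the host is done with esxcli; the file names here are placeholders:
ar tv libvirt-snmp.vib                                  # unpacks to descriptor.xml, sig.pkcs7 and the payload
esxcli software vib install -v /tmp/libvirt-snmp.vib    # unsigned VIBs may need --no-sig-check and a lower acceptance level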
Hope it helps