I have a Google Cloud VM with Ubuntu installed, along with various services and libraries. I need to make a similar bootable VM with the same OS and all the data, libraries, etc. as on the already configured VM. How do I clone the VM with these requirements?
I tried to create an image from the existing VM, but I could not SSH into the instance created from it.
So I retraced my installation steps one by one, trying to figure out which step is breaking the image.
I created an Ubuntu 18.04 VM and used it to create an image. The instance I created from that image did allow me to SSH in.
Next, I installed the Ubuntu desktop and the dummy Xorg driver, and created an image after that. Using that image, I created a new VM and tried to SSH into it.
But unfortunately, the SSH connection could not be established. So I think it is these installations that are causing the error, unless it is some sort of system error.
Below are the exact commands I ran to install these after creating the Ubuntu 18.04 VM:
sudo passwd username
sudo su -
passwd
apt update && apt upgrade -y
adduser username root
adduser username admin
adduser username sudo
apt-get install ubuntu-desktop -y
apt-get install xserver-xorg-video-dummy
nano /etc/X11/xorg.conf
and pasted the following into the .conf file:
Section "Device"
Identifier "Configured Video Device"
Driver "dummy"
EndSection
Section "Monitor"
Identifier "Configured Monitor"
HorizSync 31.5-48.5
VertRefresh 50-70
EndSection
Section "Screen"
Identifier "Default Screen"
Monitor "Configured Monitor"
Device "Configured Video Device"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1600x900"
EndSubSection
EndSection
After reaching this state, I created the image from which I could not instantiate a VM that I could SSH into.
Since you have your VM ready and running, back up your image as described in this GCP document. Before you begin the process, follow the guidelines mentioned in the document, such as updating the Google Cloud CLI, setting the default region and zone, and reviewing the general image guidelines.
A few networking features may require specific guest operating system features to be set on the image. You can also check how to export a custom image to Cloud Storage.
You can also consider the snapshot approach.
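A minimal sketch of that route, with placeholder disk, snapshot, and zone names (adjust to your project):

# Snapshot the source VM's boot disk (all names below are placeholders)
gcloud compute disks snapshot my-vm --snapshot-names=my-vm-snap --zone=us-central1-a
# Create a new disk from the snapshot
gcloud compute disks create my-clone-disk --source-snapshot=my-vm-snap --zone=us-central1-a
# Boot a new instance from that disk
gcloud compute instances create my-clone-vm --disk=name=my-clone-disk,boot=yes --zone=us-central1-a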
Follow this process to create an image identical to the one you have already set up and know is working correctly. As you may already know, this is a custom image, so it is available only within your Cloud project. You can also create a custom image from boot disks and other images if you would like. Then, use the custom image to create an instance.
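For example, a hedged sketch of that process (image, VM, and zone names are placeholders):

# Create a custom image from the configured VM's boot disk
# (stop the VM first, or pass --force to image a disk attached to a running instance)
gcloud compute images create my-custom-image --source-disk=my-vm --source-disk-zone=us-central1-a
# Create a new instance from the custom image
gcloud compute instances create my-clone-vm --image=my-custom-image --zone=us-central1-a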
I would also suggest taking a look at this document, which gives a deeper understanding of the task.
Regards,
If you need an exact copy, just spin up a new VM from a disk snapshot. And if you cannot SSH, you may have no SSH public key provisioned, no external IP assigned, or port 22 closed.
gcloud compute ssh always works. One can also provision project-wide SSH keys, which all VMs in the project then inherit. The documentation below, About VM metadata, explains all of this in detail.
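For example (instance name, zone, and key file below are placeholders):

# SSH via gcloud; it can generate and provision a key in metadata if one is missing
gcloud compute ssh my-vm --zone=us-central1-a
# Provision project-wide SSH keys from a file of "username:ssh-rsa AAAA... user" lines
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=my_ssh_keys.txt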
My personal favorites, though, are startup scripts, which describe the configuration instead of copying it.
And it's not so difficult to get started with these: cat ~/.bash_history > rocky8_startup.sh. In a software-defined data center, it makes sense to use software-defined configurations (one simply cannot vary the installation per VM instance when starting from a disk snapshot).
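A sketch of how such a script is attached (the instance name is a placeholder, and a history-derived script will need manual cleanup first):

# The startup script runs as root on every boot of the new instance
gcloud compute instances create my-vm --metadata-from-file startup-script=rocky8_startup.sh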
xserver-xorg-video-dummy is questionable, because one can enable a virtual display device instead; but unless you are recording the screen, this driver might still suffice, e.g. for VNC sessions.
Related
I am working in JupyterLab within a Managed Notebook instance, accessed through the Vertex AI Workbench, as part of a Google Cloud project. When the instance is created, there are a number of JupyterLab extensions that are installed by default. In the web GUI, one can click the puzzle-piece icon and enable/disable all extensions with a single button click. I currently run a post-startup bash script to manage environments and module installations, and I would like to add to this script whatever commands would turn on the existing extensions. My understanding is that I can do this with
# Status of extensions
jupyter labextension list
# Enable/disable some extension
jupyter labextension enable extensionIdentifierHere
However, when I test the enable/disable command in an instance Terminal window, I receive, for example
[Errno 13] Permission denied: '/opt/conda/etc/jupyter/labconfig/page_config.json'
If I try to run this with sudo, I am asked for a password, but I have no idea what that would be, given that I just built the environment and didn't set any password.
Any insights on how to set this up, what the command(s) may be, or how else to approach this, would be appreciated.
Potentially relevant:
Not able to install Jupyterlab extensions on GCP AI Platform Notebooks
Unable to sudo to Deep Learning Image
https://jupyterlab.readthedocs.io/en/stable/user/extensions.html#enabling-and-disabling-extensions
Edit 1:
Adding more detail in response to answers and comments (@gogasca, @kiranmathew). My goal is to use ipyleaflet-based mapping, through the geemap and earthengine-api Python modules, within the notebook. If I create a Managed Notebook instance (service account, Networks shared with me, Enable terminal, all other defaults), launch JupyterLab, open the Terminal from the Launcher, and then run a bash script that creates a venv virtual environment, exposes a custom kernel, and performs the installations, I can use geemap and ipywidgets to visualize and modify (e.g., widget sliders that change map properties) Google Earth Engine assets in a Notebook. If I try to replicate this using a Docker image, it seems to break the connection with ipyleaflet, such that when I start the instance and use a Notebook, I have access to the modules (they can be imported) but can't use ipyleaflet to do the visualization. I thought the issue was that I was not properly enabling the extensions, per the "Error displaying widget: model not found" error, addressed in this, this, this, this, etc., hence the title of my post. I tried using and modifying @TylerErickson's Dockerfile, which modifies a Google deep learning container and should handle all of this (here), but both the original and the modifications break the ipyleaflet connection when booting the Managed Notebook instance from the Docker image.
Google Managed Notebooks do not support third-party JupyterLab extensions. Most of these extensions require a rebuild of the JupyterLab static assets bundle, which needs root access, and our Managed Notebooks do not support that.
Untangling this limitation would require a significant change to the permission and security model that Managed Notebooks provides. It would also have implications for the supportability of the product itself since a user could effectively break their Managed Notebook by installing something rogue.
I would suggest using User-Managed Notebooks instead.
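On a User-Managed Notebook the default user typically has passwordless sudo, so a hedged sketch of the original goal (the extension identifier is the placeholder from the question) would be:

# List extensions, then enable one; sudo is needed to write under /opt/conda
sudo jupyter labextension list
sudo jupyter labextension enable extensionIdentifierHere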
I'm having difficulty setting this up correctly, and I'm burning through AWS server time while I try to make it work. I have segmentation code that is heavily memory-intensive, and I'd like to temporarily spin up an AWS server with 192 GB of RAM. I understand that this is possible using Docker, but the PyCharm instructions are non-existent with respect to the Docker configuration necessary to tie it together (they reference existing code as opposed to showing how to assemble it from scratch). What would the docker run command on the server look like to enable a connection on port 2375?
EDIT: I am using PyCharm Professional.
UPD: Checking the PyCharm options, I found that there is an option to use Docker Machine. This seems to be exactly what you need. With Docker Machine you can have Docker spin up an EC2 instance for you, with proper security out of the box. Read the official documentation on how to get started here, and the AWS driver options to learn how to set the EC2 instance type, AMI, and other options here.
Original post:
To enable this feature you have to run the Docker daemon with the '-H' option:
sudo dockerd -H tcp://0.0.0.0:2375
You may read more on that in the Docker docs: https://docs.docker.com/engine/reference/commandline/dockerd/ .
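Once the daemon is listening, a quick sanity check from your workstation might look like this (the hostname is a placeholder):

# Point the Docker client at the remote daemon over TCP
docker -H tcp://ec2-203-0-113-10.compute-1.amazonaws.com:2375 info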
Beware though: for EC2 you may also need to open that port using a security group https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html .
I also want to add that what you want to achieve isn't good from a security perspective. Exposing the Docker socket like that is an invitation for bad guys to throw a party on your EC2 instance. But since you mentioned that this is temporary...
How can the guest_additions_mode "attach" be used in packer to install virtualbox guest additions?
The Packer documentation tersely states that
If the mode is "attach" the guest additions ISO will be attached as a
CD device to the virtual machine.
But it does not specify the default mount point. I'm looking for an example provisioning script to install the guest additions based on this mount...
Thank you in advance for your consideration and response.
First, why use attach when upload is much simpler?
The Packer documentation doesn't say which mount point to use, since Packer is agnostic of the guest OS.
The simplest way to find this out is to run Packer with "headless": false and a provisioner that does something like sleep 3600. Then log in to the VM and check where the DVD is accessible, e.g. by reading the output of dmesg.
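Once you know the device, the provisioning script itself is short. A hedged sketch, assuming the ISO appears as /dev/cdrom and that build tools and kernel headers are already installed in the guest:

# Mount the attached guest additions ISO and run its installer
sudo mkdir -p /mnt/cdrom
sudo mount -o ro /dev/cdrom /mnt/cdrom
sudo /mnt/cdrom/VBoxLinuxAdditions.run
sudo umount /mnt/cdrom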
This is giving me a headache.
Here's what I've done so far
Created an EC2 virtual server instance, and it's running
Installed the AWS CLI
Installed Docker on my EC2 Virtual Server after I SSH'd into it
So, looking at the docs, they tell you how to build an image. Now comes my confusion.
Question 1: So am I right in assuming that one basically has the option to either a) build an image off one's host, or b) pull an image created by others from Docker Hub?
Question 2: If I'm right about Question #1, then what am I building an image off of if I am not pulling one from Docker Hub, as with the AWS docs here?
Question 3: Then I see a whole different route I can take using Docker Compose; would I use that instead of all of the above? This is so confusing.
EC2 Container Registry – Now Generally Available
So again, here, it tells you to install Docker on the host, then immediately jumps into "create an image". Create an image off of what, that host's OS? I don't get it; I guess that's what it means, OR I can pull an image from Docker Hub and not go this route?
Same here: it's talking about creating a Docker image. Off of what, the host?
Or maybe I'm not understanding what "image" means, but I assume that going this route, instead of pulling a Docker image from Docker Hub, I'm creating an image off my EC2 instance?
A1: No. You can't build an image off your host.
You can create a new image according to your requirements, such as the operating system (Ubuntu, Fedora), the stack (LAMP, LEMP), and many other things.
Or you can pull an image that comes pre-configured with all the packages, like a WordPress stack image, a Magento stack image, or a Bitnami image, from Docker Hub.
A2: As I mentioned earlier, you can build an image of any operating system you want (Ubuntu, Fedora, Debian), but not off the host.
You just need to pull an image from Docker Hub, e.g. docker pull ubuntu will pull a minimal image of Ubuntu 14.04. And if you need a specific version of Ubuntu, like Ubuntu 12.04, then docker pull ubuntu:12.04 will pull a minimal image of Ubuntu 12.04.
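To make the "off of what" concrete: when you do build your own image, you start FROM a base image pulled from a registry, not from the host's OS. A minimal sketch (the image name is made up):

# Write a minimal Dockerfile; the base comes from Docker Hub, not the host
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx
EOF
# Build and run it
docker build -t my-nginx-image .
docker run -d -p 80:80 my-nginx-image nginx -g 'daemon off;'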
A3: Docker Compose is a tool for defining and running multi-container Docker applications. docker-compose reads a compose file in which you configure your application's services.
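A minimal sketch of such a compose file, with two hypothetical services:

# docker-compose.yml
version: "2"
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example

Running docker-compose up -d in the same directory then starts both containers together.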
And finally, Amazon EC2 Container Registry is a slightly different thing. The idea is the same as Docker's, but Amazon provides it as part of the EC2 Container Service, with many other features that Docker doesn't have right now.
Hope it helps :-)
I have a question about Asterisk. I know that I can install Asterisk on EC2, but my question is:
Is it possible to install AsteriskNOW on Amazon EC2? If not, why not? And what would be the best server or solution for installing it?
Thanks
AsteriskNow is a complete distribution based on CentOS, available as an ISO file. There doesn't appear to be an EC2 AMI available for it, so you would have to build an image yourself.
Here's an overview of the process for Oracle Linux, which boils down to:
Install AsteriskNow onto a VirtualBox or VMware instance locally.
Configure all the EC2 specifics (this is the fiddly bit).
Export that virtual machine as a VMDK.
Copy the VMDK to S3.
Import the VMDK to an EBS volume and launch on Amazon EC2.
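For steps 4 and 5, a hedged sketch using today's AWS CLI VM Import/Export flow (bucket and key names are placeholders, and the vmimport IAM role must already be configured):

# Upload the exported disk and start an import task
aws s3 cp asterisknow.vmdk s3://my-import-bucket/asterisknow.vmdk
aws ec2 import-image --description "AsteriskNow" --disk-containers "Format=VMDK,UserBucket={S3Bucket=my-import-bucket,S3Key=asterisknow.vmdk}"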
Before you export, you will have to make sure AsteriskNow has a kernel that supports EC2. In CentOS this would be the Xen kernel, but I don't know if Asterisk supplies one, which might mean compiling your own. The PV-GRUB documentation also covers a lot of what can and can't be used on EC2. If it doesn't work out of the box, it will take some Linux smarts to figure it all out.
It will probably take a number of export/import attempts to get it running. Once you have it up on EC2, you can turn that instance into an AMI to quickly create clones in the future without going through the whole export/import process.
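Creating that AMI is a one-liner (the instance ID and name below are placeholders):

# Turn the running, configured instance into a reusable AMI
aws ec2 create-image --instance-id i-0123456789abcdef0 --name asterisknow-base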
Can you not just download the ISO directly?
ubuntu@ip-172-31-14-19:~/iso$ wget -v https://downloads.asterisk.org/pub/telephony/asterisk-now/AsteriskNow-1013-current-64.iso
--2017-11-17 05:52:53-- https://downloads.asterisk.org/pub/telephony/asterisk-now/AsteriskNow-1013-current-64.iso
Resolving downloads.asterisk.org (downloads.asterisk.org)... 76.164.171.238, 2001:470:e0d4::ee
Connecting to downloads.asterisk.org (downloads.asterisk.org)|76.164.171.238|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1343909888 (1.3G) [application/x-iso9660-image]
Saving to: ‘AsteriskNow-1013-current-64.iso’
AsteriskNow-1013-curr 100%[======================>] 1.25G 1.79MB/s in 9m 54s
2017-11-17 06:02:48 (2.16 MB/s) - ‘AsteriskNow-1013-current-64.iso’ saved [1343909888/1343909888]
ubuntu@ip-172-31-14-19:~/iso$
https://downloads.asterisk.org/pub/telephony/asterisk-now/