Deploying Containers on Compute Engine VMs - google-cloud-platform

I'm a little confused. GCP has a new feature, Deploying Containers on VMs and Managed Instance Groups, which is currently marked as an Alpha release of Containers on Compute Engine, and you actually need to request to be whitelisted for it.
What I'm struggling to understand is how it differs from simply choosing Container-Optimized OS from the list of OS images when creating a new Compute Engine instance and then running your Docker container on that instance. What are the benefits of the new approach?

Container-Optimized OS images have a number of benefits if all you want to do is run containers on your Compute Engine instance.
There is less configuration involved, as they come with Docker pre-installed and configured, and it is already running as a service when the machine starts.
There is a tick box in the Console when creating a new Container-Optimized OS instance labelled "Deploy a container image to this VM instance". Checking this lets you deploy containers/add images via the Console/GUI and configure the command issued to the container, the restart policy, environment variables, host mounts and other mount paths. This essentially allows you to bring up a container at the same time you create your VM, as sketched below.
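If you prefer the command line, the same flow is exposed through gcloud. A minimal sketch, assuming you have already pushed an image (all names and values below are placeholders):

    gcloud compute instances create-with-container my-instance \
        --container-image=gcr.io/my-project/my-app:latest \
        --container-restart-policy=always \
        --container-env=APP_ENV=production \
        --container-mount-host-path=mount-path=/data,host-path=/var/data,mode=rw

This creates the VM on a Container-Optimized OS image and starts the container as soon as the instance boots.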
In general it's more secure, as the smaller OS footprint gives it a smaller attack surface than a standard VM. It also includes a locked-down firewall and other hardened security settings.
Because the OS is based on the Chromium OS project rather than a full Linux distribution, it benefits from automatic updates and comes configured to download updates weekly (a reboot is necessary to install them).
So if you want to run containers with minimal setup on a simple operating system with high security, Container-Optimized OS may be suitable.
It should also be said that there are some use cases where these images are not suitable: for example, if you require the flexibility of a full Linux OS (Container-Optimized OS doesn't include a package manager), or if your containers depend on Linux kernel modules that may not be available in Container-Optimized OS. It would also not be suitable if you need your image and the applications on it to be supported outside of Google Cloud Platform. In these scenarios you would be better off considering public images other than Container-Optimized OS.

Related

Using GPU with containers and Container Optimized OS in Google Cloud VM

I would like to run a custom Docker image with GPU on Google Compute Engine.
I have built and pushed the image to the Google Container Registry.
It seems logical to use Container-Optimized OS for the host machine on Google Compute Engine, since I don't need any extra software on the host except Docker, the NVIDIA GPU drivers and nvidia-container-runtime.
I managed to install the NVIDIA drivers with this solution.
But I can't run my Docker image with GPU support (using the --gpus all option) without nvidia-container-runtime. This step is described in the official Docker documentation.
Is there a way to install nvidia-container-runtime on Container-Optimized OS in Google Cloud VM?
You don't have to set --gpus all, because this is the default for nvidia-container-runtime. The assumption that you don't need anything else is wrong, because the runtime also requires libnvidia-container.
To answer the question precisely: no, because libnvidia-container needs to be installed on the OS while nvidia-container-runtime needs to be installed within the container. One exposes an interface and the other connects to it, so each is useless without the other.
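That said, a commonly documented workaround on Container-Optimized OS is to skip nvidia-container-runtime entirely and hand the driver libraries and device nodes to the container yourself. A rough sketch, assuming the drivers ended up under /var/lib/nvidia (the layout the usual COS GPU driver installer produces); the image name is a placeholder:

    docker run --rm \
        --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
        --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
        --device /dev/nvidia0:/dev/nvidia0 \
        --device /dev/nvidia-uvm:/dev/nvidia-uvm \
        --device /dev/nvidiactl:/dev/nvidiactl \
        gcr.io/my-project/my-gpu-image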

Creating a duplicate of a VM

I'm preparing to get in to the world of cloud computing.
My first question is:
Is it possible to programmatically create a new, or duplicate an existing VM from my server?
Project Background
I provide a file processing service, and as it's been growing I need to offer a better service.
Project Requirement
Machine specs:
HDD: Min 16 GB
CPU: Min 1 core
RAM: Min 2 GB
GPU: Min CUDA 10.1 compatible
What I'm thinking is the following steps:
User uploads a file
A dedicated VM is created for that specific file inside Google Compute Engine
The file is sent to the VM
File is processed using an Anaconda environment
Results are downloaded to local server
Dedicated VM is removed
Results are served to user
How is this accomplished?
PS: I'm looking for resources and advice. Not code.
Your question is a perfect formulation of the concept of Google Cloud Run. At the highest level, you create a Docker image (think of it like a VM) and then register that Docker image with GCP Cloud Run. When a trigger occurs, GCP will spin up an instance of that Docker container and pass in information about the cause of the trigger (a file created in GCS, a REST request, and so on). What you do in your container is up to you. You have the full power of the Linux environment (under Docker) to do as you like. When your request ends, the container is spun down. You are only billed for the compute resources you use; if your container (VM) isn't being used, you pay nothing until the next trigger.
An alternative to Cloud Run is Cloud Functions. This is a higher-level abstraction where, instead of providing a Docker container, you provide the body of a function (JavaScript, Java, Python or others) and the request is passed to that function when a trigger occurs. Which you use is mostly personal choice (you didn't elaborate on "File is processed").
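As a rough sketch of the Cloud Run flow (project, service and region names are placeholders; your processing logic lives inside the image you build):

    # Build the processing container and push it to the registry
    gcloud builds submit --tag gcr.io/my-project/file-processor

    # Deploy it; Cloud Run scales to zero between requests
    gcloud run deploy file-processor \
        --image gcr.io/my-project/file-processor \
        --region us-central1 \
        --memory 2Gi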
References:
Cloud Run
Cloud Functions

How to avoid installing the same software on google cloud instance?

I am using the compute engine of the google cloud platform to do computations.
I am using Ubuntu as the OS, and every time I create a new instance I have to install the software I need from scratch, including build-essential.
I am pretty sure there is a way to specify the software I would like to have in my VM but couldn't figure out a straightforward way to do it.
You should use GCE custom images to create VM images with the software you need pre-installed.
Alternatively, you can use startup scripts, which install the software during VM startup, as sketched below. In contrast to custom images this increases VM startup time, because the startup script runs on every boot.
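A minimal sketch of both approaches (all names are placeholders):

    # Option 1: bake a custom image from an already-configured VM's disk
    gcloud compute images create my-build-image \
        --source-disk=my-configured-vm --source-disk-zone=us-central1-a

    # Option 2: install on every boot via a startup script
    gcloud compute instances create my-vm \
        --image-family=ubuntu-2204-lts --image-project=ubuntu-os-cloud \
        --metadata=startup-script='#! /bin/bash
    apt-get update && apt-get install -y build-essential'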

Using Vagrant to manage AWS instances

For some time I have been managing EC2 (Windows boxes), RDS and S3 on AWS.
I know the manual steps that must be taken to set up, let's say, a normal box (DB, storage and server). I have heard about Vagrant, but everywhere I looked it mainly talks about Linux boxes on AWS.
My main question is: is Vagrant a tool that will save me time deploying Windows boxes, or should I not use it at all in a Windows scenario?
Vagrant plays nicely with AWS (via vagrant-aws plugin).
Vagrant seems to play nicely with Windows as well since version 1.6 and the introduction of WinRM support (ssh alternative for Windows).
However, the AWS plugin doesn't support the WinRM communicator yet, so you'll need to pre-bake your Windows AMIs with an SSH service pre-installed if you want Vagrant to provision them.
Update (29/03/2016): Thanks to Rafael Goodman for pointing to vagrant-aws-winrm plugin as a possible workaround.
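If you go that route, a minimal sketch of the moving parts (your Vagrantfile must also set config.vm.communicator = "winrm" and fill in the aws provider block with your AMI and credentials):

    vagrant plugin install vagrant-aws
    vagrant plugin install vagrant-aws-winrm
    vagrant up --provider=aws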

Amazon EC2 usable as a VMware testing platform?

We need to perform tests on localized platforms that put some burden on our hardware resources, because for just a few weeks we might need plenty of servers and clients (Windows 2003 and Windows 2008, Vista, XP, Red Hat, etc.) in multiple languages.
We have typically relied on blades with Windows 2003 and VMware, but these are sometimes outgrown by short-term needs, and the acquisition and deployment process is quite slow when the environment has to grow.
Is Amazon EC2/S3 usable in the following scenario?
Install VMware (the desktop edition, because we need the ability to take snapshots) on an Amazon AMI.
Load existing VMware images from S3 and run them on EC2 instances (perhaps 3 or 4 server or client OSes on each EC2 instance).
We are more interested in the ability to very easily start or stop VMware snapshots for relatively short tests. This is just for testing configurations, not a production environment that actually serves a user workload; the only real user is the tester. These configurations might be required for just a few weeks and then turned off for a few months until the next release requires them again.
Is EC2/S3 a viable alternative for this type of testing purpose?
Do you actually need VMware, or are you testing software that runs in the VMware VMs? You might actually need VMware if you are testing, for example, VMware deployment policy, or running code that exercises the VMware APIs. An example of the latter would be testing an application server stack that you currently validate on many platforms using VMware.
If you actually need VMWare, I do not believe that you can install VMWare in EC2. Someone will correct & enlighten me if this is not the case.
If you don't actually need VMware, you have more options. If you can use one of the zillion public AMIs as a baseline, clone the appropriate AMIs and customize them to suit your needs (save the customized versions as private AMIs for your team). Then you can use as many of them as you like. Perhaps you already have a bunch of VMware images that you need to use in your testing; in that case, you can migrate your VMware images to EC2 AMIs as described in various places findable via Google, for example:
http://thewebfellas.com/blog/2008/9/1/creating-an-new-ec2-ami-from-within-vmware-or-from-vmdk-files
(Apologies to the SO censors for not pasting the entire article here. It's pretty long.) But that's a shortcut; you can always use the documented AMI creation process to convert any machine (VMWare or not) to an AMI. Perform that process for each VMWare VM you have, and you'll be all set. Just keep in mind that when you create an AMI, you have to upload it to S3, and that will take a lot of time for large VMs.
This is a bit of a shameless plug, but we have a new startup that may deal with exactly your problem. Amazon EC2 is excellent for on-demand computing, but is really targeted at just a single user launching production servers. We've extended EC2 to make it a Virtual Lab Management environment, with self-service, policies and VM sharing. You can check it out at http://LabSlice.com and see if it meets your needs.
Amazon now provides a solution themselves: http://aws.typepad.com/aws/2010/12/amazon-vm-import-bring-your-vmware-images-to-the-cloud.html
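These days that capability is exposed through VM Import/Export in the AWS CLI. A hedged sketch, assuming the VMDK has already been uploaded to S3 and the vmimport service role is set up (bucket and key names are placeholders):

    aws ec2 import-image \
        --description "Windows 2008 test VM" \
        --disk-containers "Format=vmdk,UserBucket={S3Bucket=my-bucket,S3Key=win2008.vmdk}"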