Using the vSphere SDK to run a script on a host - vmware

I've written applications using the vCloud SDK in the past, and it provided the ability to supply a guest customization script that would be run on the VM when it was provisioned. This let me automate VM provisioning from code along with a few per-host customizations, which was great. Since these VMs were running on ESXi, this told me that passing in a script to be run on the VM was a capability of ESXi and VMware Tools.
Now I'm working with the vSphere SDK, and I can't find a similar capability anywhere. I want to be able to provide a script to my hosts that joins them to my domain, but I can't figure out a way to pass such a script from the SDK. Is this possible? Or is this capability somehow unique to vCloud?
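In the vSphere SDK the closest analogue is guest customization: a `CustomizationSpec` applied to the VM, which for Windows guests can include a domain join via its Sysprep identification. A minimal sketch with pyvmomi (the Python vSphere SDK) is below; the domain name, credentials, and the suggestion of applying the spec with `CustomizeVM_Task` are assumptions about your environment, not a tested recipe.

```python
def domain_join_settings(domain, admin_user, admin_password):
    """Collect domain-join parameters for a Windows guest customization.

    All values passed in are placeholders for your environment.
    """
    return {
        "joinDomain": domain,
        "domainAdmin": admin_user,
        "domainAdminPassword": admin_password,
    }


def build_customization_spec(settings):
    """Build a vim.vm.customization.Specification that joins the guest to a domain.

    Requires pyVmomi (pip install pyvmomi). The result can be applied to an
    existing VM with vm.CustomizeVM_Task(spec=spec), or attached to a CloneSpec
    when deploying from a template.
    """
    from pyVmomi import vim  # imported lazily so the helper above stays testable

    ident = vim.vm.customization.Sysprep(
        userData=vim.vm.customization.UserData(
            computerName=vim.vm.customization.VirtualMachineNameGenerator(),
            fullName="Admin",
            orgName="Example Org",
        ),
        guiUnattended=vim.vm.customization.GuiUnattended(
            autoLogon=False, autoLogonCount=0
        ),
        identification=vim.vm.customization.Identification(
            joinDomain=settings["joinDomain"],
            domainAdmin=settings["domainAdmin"],
            domainAdminPassword=vim.vm.customization.Password(
                value=settings["domainAdminPassword"], plainText=True
            ),
        ),
    )
    return vim.vm.customization.Specification(
        identity=ident,
        globalIPSettings=vim.vm.customization.GlobalIPSettings(),
        nicSettingMap=[
            vim.vm.customization.AdapterMapping(
                adapter=vim.vm.customization.IPSettings(
                    ip=vim.vm.customization.DhcpIpGenerator()
                )
            )
        ],
    )
```

For arbitrary scripts rather than a domain join, the GuestProcessManager route described further down this page is the more general mechanism.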

Related

How to migrate an EC2 instance from AWS to Azure (Ubuntu 16.04 instance)

I have AWS EC2 instances with Ubuntu 16.04. How do I migrate them to Microsoft Azure?
I have their Amazon Machine Images (AMIs) on Amazon Web Services. Is there a way I could migrate the images to Azure, or the instance configuration? I would prefer to copy the image I created in Amazon Web Services (with an Ubuntu 16.04 base) to Azure.
I have seen this documentation: https://learn.microsoft.com/en-us/azure/site-recovery/migrate-tutorial-aws-azure but it does not mention Ubuntu support, and it copies the instance. Can I copy the image instead? And can it be performed with Ubuntu 16.04?
As you can see, all the supported OS versions are listed there. So, unfortunately, it does not support migrating Ubuntu from AWS to Azure. For Linux, it only supports a subset of Red Hat and CentOS versions.
For the image, it's possible to export the VM to a VHD file and upload it to Azure, but the documentation only covers Windows VMs. You can get the whole procedure from Move a Windows VM from Amazon Web Services (AWS) to an Azure virtual machine. You can try it for Linux, but I'm not sure about it.
If you have any more questions, please let me know. Or if you think it's OK you can accept it :-)
I suggest you strongly consider implementing the base instance configuration as a userdata or init script. This startup script would install all required software and configuration settings on the instance.
This way you can simply run the script on the Azure instance, and it will work exactly as it would on the AWS instance.
This approach is best practice for managing a baseline configuration of any instance. You can also consider configuration management tools like Ansible to do the same.
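A minimal cloud-init user-data sketch of that baseline approach is below; the package names and commands are illustrative only. The same file works as EC2 user data and as Azure custom data on Ubuntu images that ship cloud-init.

```yaml
#cloud-config
# Baseline configuration applied on first boot; packages are placeholders.
package_update: true
packages:
  - nginx
  - python3-pip
runcmd:
  - systemctl enable --now nginx
  - pip3 install ansible
```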

accessing a new vm's terminal without using the console in UI

I'm new to VMware and I'm trying to do some automation when creating a VM from an OVA file. Essentially, I have an OVA that I need to get into the console of and run a script so I can reach it via the internet (the script applies a netplan config and some iptables commands). I just don't know how to execute the commands without having to get into the UI via the console.
I'm just trying to figure out how I can run this without having to access the console via the VMware ESXi UI.
I'm using packet.com's environment to provision a server with VMware ESXi on it via Terraform, and then use Ansible to deploy a few OVAs on it.
The problem, though, is that I can't access the newly deployed VMs unless I go into the console of the VM via the UI. I'm trying to see how I can do that either via an API or some other fashion so I can do some further automation after the VMs come up.
Assuming the VM has VMware Tools running, that would give you access to run a process/command/script in the Guest OS using the GuestProcessManager object in the vSphere Web Services API. More specifically, using the StartProgramInGuest method: http://pubs.vmware.com/vsphere-6-5/topic/com.vmware.wssdk.apiref.doc/vim.vm.guest.ProcessManager.html#startProgram
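The StartProgramInGuest workflow described above can be sketched with pyvmomi (the Python vSphere SDK); the service-instance handle, VM reference, and guest credentials are placeholders for your environment.

```python
def guest_command(script_lines):
    """Join shell commands into a single argument for `/bin/sh -c`."""
    return " && ".join(script_lines)


def run_in_guest(si, vm, guest_user, guest_password, command):
    """Run a command inside the guest OS via VMware Tools.

    `si` is a ServiceInstance from pyVim.connect.SmartConnect and `vm` a
    vim.VirtualMachine found via the inventory; requires pyVmomi
    (pip install pyvmomi) and VMware Tools running in the guest.
    Returns the PID of the started program.
    """
    from pyVmomi import vim  # imported lazily so guest_command stays testable

    pm = si.RetrieveContent().guestOperationsManager.processManager
    creds = vim.vm.guest.NamePasswordAuthentication(
        username=guest_user, password=guest_password
    )
    spec = vim.vm.guest.ProcessManager.ProgramSpec(
        programPath="/bin/sh",
        arguments="-c '{}'".format(command),
    )
    return pm.StartProgramInGuest(vm=vm, auth=creds, spec=spec)
```

For the netplan/iptables case above, something like `run_in_guest(si, vm, "root", "...", guest_command(["netplan apply", "iptables -F INPUT"]))` would be the shape of the call.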

Installing packages in running instance

Is it possible to install packages in a running instance, without restarting the instance, in GCP using the REST API? I tried a startup script, but it does the job only after a system restart.
You may rerun a startup-script without having to restart the VM instance by following these instructions in the GCP documentation. However, you would have to connect to the VM instance through SSH.
Regarding the REST API, there is no GCE REST API method to install packages inside the VM; however, feel free to open a feature request for this on the Google issue tracker.
Package installations are done through generic Linux commands.
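The re-run-over-SSH approach can be sketched as follows; `google_metadata_script_runner` is the helper Google documents for re-executing startup scripts on the VM, while the host and user below are placeholders.

```python
import subprocess  # only needed if you actually execute the command


def rerun_startup_cmd(host, user="ubuntu"):
    """Build the SSH command that re-runs a GCE instance's startup script.

    The startup script (which does the package installs) is re-executed
    in place, without rebooting the instance.
    """
    return ["ssh", f"{user}@{host}", "sudo", "google_metadata_script_runner", "startup"]


# To execute for real:
# subprocess.run(rerun_startup_cmd("203.0.113.7"), check=True)
```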

Creating an iso of a RHEL instance

I have an Amazon EC2 instance with RHEL 7.3 on it. I would like to convert this into an ISO so that I can migrate it wherever I want. What are the best tools to create an ISO of a virtual machine? Or how do I clone/back up this VM so that I can restore it anywhere I want?
You can work with VMs and AWS programmatically via AWS CLI commands.
You want to get familiar with import-task and export-task commands.
The best place to start is by reading the official AWS guides for:
Exporting an Instance as a VM Using VM Import/Export
Importing a VM as an Image Using VM Import/Export
The key information you need to pick up from the guide is this quote:
"You can't export an instance from Amazon EC2 unless you previously
imported it into Amazon EC2 from another virtualization environment."
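The export-task side of that workflow can be sketched with boto3 (the AWS SDK for Python, equivalent to `aws ec2 create-instance-export-task`); the instance ID and bucket name are placeholders, and the quoted restriction still applies — the instance must originally have been imported.

```python
def export_request(instance_id, bucket, prefix="exports/"):
    """Parameters for ec2.create_instance_export_task; values are placeholders."""
    return {
        "InstanceId": instance_id,
        "TargetEnvironment": "vmware",
        "ExportToS3Task": {
            "DiskImageFormat": "VMDK",
            "S3Bucket": bucket,
            "S3Prefix": prefix,
        },
    }


def start_export(instance_id, bucket):
    """Kick off a VM export with boto3 (pip install boto3)."""
    import boto3  # imported lazily so export_request stays testable

    ec2 = boto3.client("ec2")
    return ec2.create_instance_export_task(**export_request(instance_id, bucket))
```

The resulting VMDK lands in the given S3 bucket, from where it can be downloaded and attached to a local VM.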
Yes, there are solutions, one of them linked by @Nicholas Smith. That being said, if you go the unofficial route you might end up in a dark alley where help might not be available. I strongly recommend against trying to clone the EC2 instance into a VM at this point; you will spend a lot of time, with a huge risk factor going forward.
For you to be able to achieve what you want, you need to create a RHEL VM using any VM software, load this VM into AWS, and then you will be able to work with the VM in AWS, making any necessary changes and exporting it again for local or transportation needs.
As you are running a widely-used Linux distribution - RHEL, you can attempt to recreate your EC2 environment manually by launching a VM that runs the same kernel version along with the same package versions. From there, you can tarball what files you need from your production instance and copy them over to your on-premise site by using SCP/SFTP.
Just get your RHEL environment into a VM locally and import it to AWS, and you're set.
Clonezilla provides functionality to create images. Generated images can be converted to ISO files.
It doesn't seem to be something that Amazon promotes as a service; however, the EC2 AMI tools include an ec2-unbundle command for extracting from an AMI. There's a guide here on how to download and run an EC2 AMI locally by using it.
The caveat is that the ec2-unbundle command currently appears to work only on Linux, not OS X or Windows.

Rare scenario in DevOps - using Jenkins

I am new to AWS and Jenkins. I have a scenario as below.
We have an AWS AMI which has Jenkins installed in it. The AMI is a Linux platform. We already have a few jobs set up in the AMI for code bases (PHP and Python) for the Development and QA environments.
Now we have a new framework in .NET, which is again a part of the same project done in PHP. These are Windows services written in .NET.
Right now the deployments are performed manually. We pull the code and build it on the same machine, so we take care of stopping/starting the services manually during this process on the Windows AMI dedicated to this testing. We would like to create a job (build and deploy) as we do for Python and PHP.
The challenge is that we want to build the code on the Windows AMI while Jenkins is running on the Linux AMI.
Is there a way to establish a connection between AMIs running different operating systems in AWS?
Should we install PowerShell on Windows to have SSH access? In that case we can establish a connection from the Linux AMI to the Windows AMI and then execute a .bat file to do the rest of the activities.
** We are specifically asked not to install another Jenkins in the Windows system, since we want to maintain all the jobs in a single place on a single server.
It's not actually a very rare scenario. It's not uncommon to have Jenkins running on Linux and also have the need to build and deploy Windows applications with it.
Luckily for you, Jenkins handles this quite easily using the concept of a master/slave architecture, where in your case the master node will be your primary Jenkins install running on Linux, and you will set up one or more 'slave' instances running Windows and the Jenkins agent that allows the two to coordinate.
It's all explained here:
https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds
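Once a Windows agent is connected (say, with the label `windows`), a job can be pinned to it from the same Jenkins master. A declarative pipeline sketch follows; the label, solution file, service name, and paths are all assumptions for illustration.

```groovy
pipeline {
    agent { label 'windows' }   // run this job on the Windows agent node
    stages {
        stage('Build') {
            steps {
                // solution name and msbuild options are placeholders
                bat 'msbuild MyService.sln /p:Configuration=Release'
            }
        }
        stage('Deploy') {
            steps {
                // stop the Windows service, copy the new binaries, restart
                bat 'net stop MyService && copy /Y bin\\Release\\* C:\\services\\ && net start MyService'
            }
        }
    }
}
```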