I'm building automated infrastructure provisioning with Terraform and Ansible.
I use Terraform with the VMware vSphere provider. Before creating the infrastructure, I build a VM template with Packer and then use it as a base. But I'm unable to destroy the template from Terraform or from the vSphere API as documented here:
https://vmware.github.io/vsphere-automation-sdk-rest/vsphere/index.html#PKG_com.vmware.vcenter
operations > vcenter > vm_template
Has anyone found a way to delete a VM template with the vSphere API?
Currently I have to go through the VMware vSphere Web Client to delete a VM template.
No, as of today there is no way to delete a template (or even convert a template back to a VM) through the vSphere Automation REST API. The Web Services (SOAP) API is the only way.
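A minimal sketch of doing this through the Web Services API with pyVmomi (the Python binding for that API); the vCenter hostname, credentials, and template name in the usage comment are placeholders, not values from the question:

```python
def find_template(vms, name):
    """Pick the template with the given name from a list of VM objects.

    Works on any objects exposing .name and .config.template, as
    pyVmomi vim.VirtualMachine objects do: a template is just a
    VirtualMachine whose config.template flag is True."""
    return next(vm for vm in vms
                if getattr(vm.config, "template", False) and vm.name == name)

def delete_template(si, name):
    """Destroy the named template via a connected ServiceInstance."""
    # Third-party imports kept local so find_template stays importable
    # without pyVmomi installed.
    from pyVmomi import vim
    from pyVim.task import WaitForTask

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        template = find_template(view.view, name)
        # Destroy_Task works on templates directly; alternatively,
        # template.MarkAsVirtualMachine(pool=...) converts it back to a
        # VM (e.g. so Terraform can manage it).
        WaitForTask(template.Destroy_Task())
    finally:
        view.DestroyView()

# Usage (requires a reachable vCenter; all values are placeholders):
#   import ssl
#   from pyVim.connect import SmartConnect, Disconnect
#   ctx = ssl._create_unverified_context()  # lab only; verify certs in prod
#   si = SmartConnect(host="vcenter.example.com",
#                     user="administrator@vsphere.local",
#                     pwd="secret", sslContext=ctx)
#   delete_template(si, "packer-built-template")
#   Disconnect(si)
```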
My current approach:
I'm currently using Cloud Build to build and store a .war artifact in a GCS bucket. To deploy it on my custom VM, I run a Java program on the GCE VM which detects changes to the bucket via Pub/Sub notifications, then downloads and deploys the fresh .war on the VM.
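The change-detection step in that approach can be sketched as follows (shown in Python for brevity, though the question's listener is a Java program). The `eventType` attribute and the JSON body fields follow GCS's Pub/Sub notification format:

```python
import json

def parse_gcs_notification(attributes: dict, data: bytes):
    """Return (bucket, object_name) for a newly finalized .war, else None.

    GCS Pub/Sub notifications carry the event kind in the eventType
    message attribute; the message body is the object resource as JSON."""
    if attributes.get("eventType") != "OBJECT_FINALIZE":
        return None  # skip deletes, metadata updates, archive events
    obj = json.loads(data.decode("utf-8"))
    if not obj["name"].endswith(".war"):
        return None  # only react to fresh .war artifacts
    return obj["bucket"], obj["name"]
```

A subscriber would call this for each message and, on a non-None result, download and redeploy the artifact.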
My objective:
Download a ~50 MB Spring Boot 2.x + Java 11 .war from GCS using a Cloud Function written in Java 11
Upload it to the VM (Ubuntu 18.04.x LTS) using the Cloud Function (generation not relevant)
Deploy it on the VM from the Cloud Function (the .war has an embedded Tomcat container, so I only have to java -jar it)
My issue:
Connecting to the VM "externally" is my main issue. The only solution I can think of is running a Spring web service endpoint on the VM which receives a .war via POST. The Cloud Function would POST the downloaded .war to this endpoint, which would then deploy it on the VM.
However, this approach seems like a Rube Goldberg machine from my perspective so I'm wondering if there is better idea than what I've come up with.
P.S. We're aware that pulling from the VM is the sounder approach, but this Cloud Function deployment is a client requirement, so sadly we must abide.
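The download-and-POST approach proposed in the question could be sketched like this (in Python for brevity; the question targets a Java 11 function). The deploy endpoint URL, bucket, and blob names are hypothetical placeholders:

```python
def deploy_war(bucket_name: str, blob_name: str, deploy_url: str) -> int:
    """Download a .war from GCS and POST it to the VM's deploy endpoint.

    Assumes the hypothetical receiver endpoint described in the
    question; all names here are placeholders."""
    # Third-party imports kept local so the module stays importable
    # without the GCP SDK installed.
    from google.cloud import storage
    import requests

    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    war_bytes = blob.download_as_bytes()  # ~50 MB fits in function memory
    resp = requests.post(
        deploy_url,
        data=war_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.status_code
```

Note the endpoint should be protected (e.g. only reachable from the function's VPC connector, or authenticated), since it accepts and executes arbitrary artifacts.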
I currently have a hosted (GCP) microservice environment that is under development. When working on a service, I run the environment locally, including all the services that the service I'm working on needs to communicate with.
This provides a bad developer experience because:
I have to spin up every service; there can be a lot of them
Running so many services can use a lot of my system resources
If any of those services needs a DB, I have to set that up too
I'm looking for a solution to this. Ideally, I would run just the single service locally and connect to the rest of the services in the hosted environment.
Do any of the popular service meshes offer this as an option? I'm looking at Istio and Kuma primarily. Are there any alternative solutions that come to mind?
For remote development/debugging, I would suggest having a look at Telepresence:
https://www.telepresence.io/
It is even recommended by Kubernetes docs:
Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.
https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/
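In practice that workflow boils down to a couple of commands; a hedged sketch (Telepresence 2.x syntax, with a placeholder service name; 1.x used `--swap-deployment` instead):

```shell
telepresence connect                     # bridge your machine into the cluster network
telepresence intercept my-service --port 8080
# Traffic for my-service in the cluster now reaches the process on localhost:8080,
# and your local service can resolve the other in-cluster services by their usual DNS names.
```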
Istio, on the other hand, enables shadow deployments and canary or blue/green deployments. You can, for example, run a new version of a service and send certain users (based on a header) to it. You can mirror traffic to a service, or shift traffic from 0 to 100% step by step. I'd say it's more for testing a new service under load or gradually releasing a new version.
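The traffic shifting described above is configured with an Istio VirtualService; a hedged example sending 10% of traffic to a v2 subset (service and subset names are placeholders, and the subsets would be defined in a matching DestinationRule):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
    - my-service
  http:
    - route:
        - destination:
            host: my-service
            subset: v1
          weight: 90          # most traffic stays on the stable version
        - destination:
            host: my-service
            subset: v2
          weight: 10          # canary share, raised step by step
```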
We want to deploy commercial software, which the provider sent us in ESXi format (a virtual appliance), in Google Cloud.
To avoid voiding the warranty, we can't modify this VM.
Please, could someone help me? I am new to GCP.
Thanks in advance.
Juanma.
The first thing you want to know when deploying ESXi to a cloud environment is whether your cloud provider supports nested virtualization. If you're deploying ESXi to a virtual machine, that's a must-have.
As of today, Google and AWS don't support nested virtualization.
Your options are: go to Azure, which has some instance types that support nested virtualization (the Dv3 and Ev3 series), or go bare metal (e.g. IBM SoftLayer).
I have an application in ASP.NET MVC 4 that I need to set up on an Amazon EC2 server.
But I am new to this and not familiar with it. Currently my application is hosted on Azure, and now I want to move it to Amazon EC2.
I went through this video:
http://www.youtube.com/watch?v=JPFoDnjR8e8
I signed up and went to launch an instance, but I didn't have credit card details at the time, so I used dummy CC details (taken from Google).
But I guess it doesn't accept dummy CC details.
Can anyone help me out with this?
The basic process is that you need to set up an account with a valid credit card, create an instance through the AWS console, and then use the generated credentials to RDP (Remote Desktop) into the server.
The process of setting up an MVC app (or any program, for that matter) is going to be 100% identical to doing it on your own machine; there is no difference once you are able to remote into the instance.
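The "create an instance" step can also be scripted instead of clicked through in the console; a hedged boto3 sketch, where the AMI id, key pair, region, and instance type are all placeholders:

```python
def launch_windows_instance(ami_id: str, key_name: str) -> str:
    """Launch a Windows EC2 instance and return its instance id.

    All parameter values are placeholders; this mirrors the console
    steps described above, it is not the only way to do them."""
    import boto3  # third-party; local import keeps the module importable

    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.run_instances(
        ImageId=ami_id,          # e.g. a Windows Server AMI with IIS
        InstanceType="t3.medium",
        KeyName=key_name,        # used to decrypt the RDP Administrator password
        MinCount=1,
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]
```

Either way you still need a valid payment method on the account before any instance will launch.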
I've written applications using the vCloud SDK in the past, and it provided the ability to supply a guest customization script that would be run on the VM when it was provisioned. This let me automate VM provisioning from code along with a few per-host customizations, which was great. Since these VMs were running on ESXi, this told me that passing in a script to be run on the VM was a capability of ESXi and the VMware Tools.
Now I'm working with the vSphere SDK, and I can't find a similar capability anywhere. I want to provide a script to my hosts that joins them to my domain, but I can't figure out a way to pass such a script from the SDK. Is this possible? Or is this capability somehow unique to vCloud?