Deploy VM from template and set VM and DNS name - vmware

I’m using pyVmomi to deploy a VM from a template on vSphere.
This works fine: the new VM gets the name I pass as a parameter, but I also want the DNS name / hostname to be the same as the VM name.
Is there a way to set the hostname when doing the actual clone?
If not, how can I do that after the new VM has been created?
Here is part of the code I'm using:
# RelocateSpec
relospec = vim.vm.RelocateSpec()
relospec.datastore = datastore
relospec.pool = resource_pool
# ConfigSpec
configSpec = vim.vm.ConfigSpec()
configSpec.annotation = "This is the annotation for this VM"
# CloneSpec
clonespec = vim.vm.CloneSpec()
clonespec.location = relospec
clonespec.powerOn = power_on
clonespec.config = configSpec
print ("cloning VM...")
task = template.Clone(folder=destfolder, name=vm_name, spec=clonespec)
wait_for_task(task)
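(wait_for_task isn't shown here; it's presumably a helper along the lines of the pyVmomi sample code. A minimal sketch, reusing the vim import from above:)
import time

def wait_for_task(task):
    # Poll the vSphere task until it leaves the queued/running states,
    # then surface any error the task reported.
    while task.info.state in (vim.TaskInfo.State.queued,
                              vim.TaskInfo.State.running):
        time.sleep(1)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error
    return task.info.result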

I think you need to set clonespec.customization (a vim.vm.customization.Specification). You should be able to specify the hostname there, roughly as in the sketch below.
Oh, and as far as I know VMware Tools must be installed in the guest for OS customization to work.
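A minimal sketch for a Linux guest, assuming DHCP networking and a made-up domain ("local"); for a Windows template you would use vim.vm.customization.Sysprep instead:
# Guest customization: make the guest hostname match the VM name
ident = vim.vm.customization.LinuxPrep()
ident.hostName = vim.vm.customization.FixedName(name=vm_name)
ident.domain = "local"  # assumed domain, change to suit your environment

# One AdapterMapping per vNIC; plain DHCP is assumed here
adapter = vim.vm.customization.AdapterMapping()
adapter.adapter = vim.vm.customization.IPSettings(
    ip=vim.vm.customization.DhcpIpGenerator())

customspec = vim.vm.customization.Specification()
customspec.identity = ident
customspec.globalIPSettings = vim.vm.customization.GlobalIPSettings()
customspec.nicSettingMap = [adapter]

# Attach it to the CloneSpec from the question
clonespec.customization = customspec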
Hope that helps.

Related

Packer: Receiving ID not implemented for builder when using build.ID

When trying to pass build.ID through to a shell-local post-processor, the string it evaluates to is ERR_ID_NOT_IMPLEMENTED_BY_BUILDER. I am using vsphere-iso.
The docs mention
Here is the list of available build variables:
ID: Represents the VM being provisioned. For example, in Amazon it is the instance ID; in DigitalOcean, it is the Droplet ID; in VMware, it is the VM name.
So I assumed it was supported with vsphere-iso?
Basically I am trying to pass the evaluated VM/template name through to a PowerShell shell-local post-processor.
Here is the post processor config:
post-processor "shell-local" {
environment_vars = [
"VCENTER_USER=${var.vsphere_username}",
"VCENTER_PASSWORD=${var.vsphere_password}",
"VCENTER_SERVER=${var.vsphere_endpoint}",
"TEMPLATE_NAME=${build.ID}",
"TEMPLATE_UUID=${local.build_uuid}",
]
env_var_format = "$env:%s=\"%s\"; "
execute_command = ["${var.common_post_processor_cli}.exe", "{{.Vars}} {{.Script}}"]
script = "scripts/windows/cleanup.ps1"
}
Here is the post processor script
param(
  [string]
  $TemplateName = $env:TEMPLATE_NAME
)
Write-Host $TemplateName
Here is the result logged to the console
==> vsphere-iso.windows-server-standard-dexp (shell-local): Running local shell script: scripts/windows/cleanup.ps1
vsphere-iso.windows-server-standard-dexp (shell-local): ERR_ID_NOT_IMPLEMENTED_BY_BUILDER
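One possible workaround (just a sketch, not something the Packer docs promise): since whatever expression feeds the builder's vm_name setting already holds the name, that same variable or local could be passed to the post-processor directly instead of build.ID, e.g. "TEMPLATE_NAME=${local.vm_name}", where local.vm_name is a hypothetical local holding the name used in the source block.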

Django Terraform DigitalOcean: re-create environment in a new droplet

I have a SaaS-based Django app. When a customer asks to use my software, I want to auto-provision a new droplet and auto-deploy the app there, and the info should be saved in my database (IP, customer name, database info, etc.).
This is my Terraform script, and it is working very well because the database is now running:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = "dop_v1_60f33a1<MyToken>a363d033"
}

resource "digitalocean_droplet" "web" {
  image    = "ubuntu-18-04-x64"
  name     = "web-1"
  region   = "nyc3"
  size     = "s-1vcpu-1gb"
  ssh_keys = ["93:<The SSH finger print>::01"]

  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file("/home/py/.ssh/id_rsa") # it works
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # install docker-compose
      # install docker
      # clone my github repo
      "docker-compose up --build -d"
    ]
  }
}
I want that when I run the commands, a new droplet and a new database instance are created, and the database is connected to my Django .env file.
Everything should be created automatically. Can anyone please help me figure out how to do it?
Or is my approach wrong? What would be the best solution in this situation?
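A minimal sketch of one way to drive this from Django, assuming the Terraform files above live in a terraform/ directory, that they expose an output named ipv4_address and a variable named droplet_name, and that a Customer model stores the result (all of these names are assumptions, not from the question):
import json
import subprocess

def provision_droplet(customer_name: str, tf_dir: str = "terraform") -> str:
    """Run terraform apply for one customer and return the droplet IP."""
    subprocess.run(
        ["terraform", "apply", "-auto-approve",
         "-var", f"droplet_name={customer_name}"],  # hypothetical variable
        cwd=tf_dir, check=True,
    )
    # Read the outputs back as JSON and pick out the IP
    result = subprocess.run(
        ["terraform", "output", "-json"],
        cwd=tf_dir, check=True, capture_output=True, text=True,
    )
    outputs = json.loads(result.stdout)
    return outputs["ipv4_address"]["value"]  # hypothetical output name

# e.g. from a Django view or management command (Customer is hypothetical):
# ip = provision_droplet(customer.name)
# Customer.objects.filter(pk=customer.pk).update(droplet_ip=ip)
In practice each customer also needs its own Terraform state (for example a separate workspace or state file per customer); otherwise a second apply would simply replace the first customer's droplet.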

Why doesn't my UserData script run when I create an Amazon Lightsail instance?

I create an Amazon Lightsail instance with the client below and set UserData, but the script does not seem to run:
var shuju = new CreateInstancesRequest()
{
    BlueprintId = "centos_7_1901_01",
    BundleId = "micro_2_0",
    AvailabilityZone = "ap-northeast-1d",
    InstanceNames = new System.Collections.Generic.List<string>() { "test" },
    UserData = "echo root:test123456- |sudo chpasswd root\r\nsudo sed -i 's/^#\\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config;\r\nsudo sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication yes/g' /etc/ssh/sshd_config;\r\nsudo reboot\r\n"
};
If you wish to run a User Data script on a Linux instance, then the first line must begin with #!.
It uses the same technique as an Amazon EC2 instance, so see: Running Commands on Your Linux Instance at Launch - Amazon Elastic Compute Cloud
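So, presumably, the fix in the snippet above is to prefix the existing UserData string with a shebang line such as "#!/bin/bash\n", keeping the rest of the script unchanged.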

nomad: Pull docker image from ECR with AWS Access and Secret keys

My problem
I have successfully deployed a nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Dockerhub.
I've slightly altered the default config file created by nomad init to change the number of running containers, and everything works as expected.
The problem is that the actual image I would like to run is in ECR, which requires AWS permissions (access and secret key), and I don't know how to send these.
Code
job "example" {
datacenters = ["dc1"]
type = "service"
update {
max_parallel = 1
min_healthy_time = "10s"
healthy_deadline = "3m"
auto_revert = false
canary = 0
}
group "cache" {
count = 30
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
ephemeral_disk {
size = 300
}
task "redis" {
driver = "docker"
config {
# My problem here
image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"
port_map {
db = 6379
}
}
resources {
network {
mbits = 10
port "db" {}
}
}
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
check {
name = "alive"
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
What have I tried
Extensive Google Search
Reading the manual
Placing the AWS credentials on the machine that runs the Nomad job file (using aws configure)
My question
How can nomad be configured to pull Docker containers from AWS ECR using the AWS credentials?
Pretty late for you, but aws ecr does not handle authentication in the way that Docker expects. You need to run sudo $(aws ecr get-login --no-include-email --region ${your region}); running the returned command actually authenticates in a Docker-compliant way.
Note that the region is optional if the AWS CLI is configured. Personally, I attach an IAM role to the box (allowing ECR pull/list/etc.) so that I don't have to deal with credentials manually.
I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my registry, and it works. Assuming that holds, it should work fine for you as well:
config {
  image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"
  auth {
    username = "MYMAGICUSER"
    password = "MYMAGICPASSWORD"
  }
}
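If you do need explicit credentials for ECR rather than an IAM role, one way to obtain the username/password pair that such an auth block expects is the ECR authorization token. A minimal sketch using boto3 (assuming the AWS SDK is installed and configured, and us-east-1 as the region):
import base64
import boto3

# Ask ECR for a temporary authorization token and decode it into the
# username/password pair a Docker-style auth block expects.
ecr = boto3.client("ecr", region_name="us-east-1")
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
registry = auth["proxyEndpoint"]
print(registry, username, password)
The username is literally the string AWS and the token expires after about 12 hours, so an IAM role (as mentioned above) is usually the cleaner long-term option.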

Kerberos kinit: Resource temporarily unavailable while getting initial credentials

I am in the process of setting up Kerberos on CentOS 7 (more specifically: the Hortonworks HDP 2.3 sandbox) running in a VirtualBox VM. My problem is that kinit seems unable to reach my KDC: the answer is "Resource temporarily unavailable while getting initial credentials" if I add an address to my /etc/hosts file, and if I leave that file as is I get the message "could not contact any host for realm mycompany while getting initial credentials".
The KDC is running (I can see it with ps, and the service starts with an "okay" message); the same goes for kadmin.
As a guide for setting up Kerberos I followed these two guides:
CentOS guide
Guide 2
My config files:
krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
[libdefaults]
default_realm = MYCOMPANY.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
MYCOMPANY.COM = {
kdc = kerberos.mycompany.com
admin_server = kerberos.mycompany.com
}
[domain_realm]
.mycompany.com = MYCOMPANY.COM
mycompany.com = MYCOMPANY.COM
kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88,750
[realms]
MYCOMPANY.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
kadm5.acl
*/admin@MYCOMPANY.COM *
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
192.168.1.3 mycompany.com kerberos.mycompany.com
I get the "Resource..." error if I have any address in the third line of the hosts file, if that line is missing I get the "could not contact..." error.
I could trace the kinit command with something along the lines of krb5_trace or something (unfortunately I can't find the link I got it from any more nor remember the exact command) to the address specified in the host file so kinit seems to contact the fitting address, its just that the KDC does not listen there.
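(For reference, MIT Kerberos exposes such tracing through the KRB5_TRACE environment variable, e.g. KRB5_TRACE=/dev/stdout kinit user/admin, which is presumably the command meant here.)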
Netstat shows that the KDC is listening on the ports specified in the kdc.conf
Any help would be appreciated
Okay so it does work now. Things I did to fix it:
/etc/resolv.conf
mycompany.com 127.0.0.1
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
127.0.0.1 mycompany.com kerberos.mycompany.com
And, most embarrassing: I used kinit mycompany/admin for the principal user/admin@MYCOMPANY.COM, which is of course wrong.
The right call is of course kinit user/admin.