Packer vmware-iso builder failing with "unexpected EOF" - vmware

I am new to Packer and have been trying to use the vmware-iso builder. I have attempted to build both an Ubuntu 18.04 and a CentOS 7 template, and both fail at the same point. I am unsure what I'm missing, and I'm not even sure where to begin troubleshooting.
When I run packer build it proceeds through the following steps:
Retrieving ISO
Leaving retrieve loop for ISO
Creating floppy disk... (I am using a floppy disk for the .cfg file because I'm building directly on the esx host)
Uploading Floppy to remote machine...
Uploading ISO to remote machine...
Creating required virtual machine disks
Building and writing VMX file
Build 'ubuntu-1604' errored: unexpected EOF
Some builds didn't complete successfully and had errors:
ubuntu-1604: unexpected EOF
Builds finished but no artifacts were created
This doesn't seem accurate, as artifacts are present on the ESX host:
a packer_cache folder with the ISO
a virtual machine folder with a .vmdk file, but no .vmx file is present.
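The log output that follows was captured with Packer's verbose logging enabled, roughly like this (the template filename here is just an example, not my actual file name):
PACKER_LOG=1 packer build ubuntu-1804.json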
Here is some of the log output:
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 Writing VMX to: /tmp/vmw-iso942565597/ubuntu-1804-base.vmx
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 Cleaning up remote path: /vmfs/volumes/datastore1/packer_cache/packer670624036
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 Removing remote cache path /vmfs/volumes/datastore1/packer_cache/packer670624036 (local /vmfs/volumes/datastore1/packer_cache/packer670624036)
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 [DEBUG] Opening new ssh session
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 [DEBUG] starting remote command: rm -f "/vmfs/volumes/datastore1/packer_cache/packer670624036"
2019/04/19 12:03:30 packer: 2019/04/19 12:03:30 Deleting floppy disk: /tmp/packer670624036
2019/04/19 12:03:30 packer: panic: runtime error: invalid memory address or nil pointer dereference
2019/04/19 12:03:30 packer: [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1df735c]
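For reference, a minimal remote vmware-iso builder of the kind I'm describing looks roughly like this (the host, credentials, checksum, and ISO location are placeholders, not my real values):
{
  "builders": [
    {
      "type": "vmware-iso",
      "remote_type": "esx5",
      "remote_host": "<esxi-host>",
      "remote_datastore": "datastore1",
      "remote_username": "root",
      "remote_password": "<password>",
      "iso_url": "<path or URL to the Ubuntu 18.04 server ISO>",
      "iso_checksum_type": "sha256",
      "iso_checksum": "<sha256>",
      "floppy_files": ["preseed.cfg"],
      "vm_name": "ubuntu-1804-base",
      "ssh_username": "packer",
      "ssh_password": "<password>",
      "shutdown_command": "sudo shutdown -P now"
    }
  ]
}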

Related

Percona AWS RDS Database Backup

I am trying to take a backup of a single database from AWS RDS using Percona XtraBackup, but it is failing to identify the rdsdata directory. I am on a Mac.
The command I used:
xtrabackup --host='awsserv-staging.fbyvwz.eu-east-2.rds.amazonaws.com' --port=3306 --user='myusername' --password='myPassword' --backup --databases=db-purchases --target-dir=/Users/myuserdir/percona-xtrabackup --no-lock --datadir=innodb_data_home_dir=/rdsdbdata/db/innodb;
The error I am getting is:
xtrabackup: recognized server arguments: --datadir=innodb_data_home_dir=/rdsdbdata/db/innodb
xtrabackup: recognized client arguments: --host=awsserv-staging.fbyvwz.eu-east-2.rds.amazonaws.com --port=3306 --user=myusername --password=* --backup=1 --databases=db-purchases --target-dir=/Users/myuserdir/percona-xtrabackup --no-lock=1
/opt/homebrew/Cellar/percona-xtrabackup/8.0.27-19/libexec/bin/xtrabackup version 8.0.27-19 based on MySQL server 8.0.27 macos12.2 (arm64) (revision id: 50dbc8dadda)
220510 15:50:47 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;host=awsserv-staging.fbyvwz.eu-east-2.rds.amazonaws.com;port=3306' as 'myusername' (using password: YES).
220510 15:50:48 version_check Connected to MySQL server
220510 15:50:48 version_check Executing a version check against the server...
220510 15:50:48 version_check Done.
220510 15:50:48 Connecting to MySQL server host: awsserv-staging.fbyvwz.eu-east-2.rds.amazonaws.com, user: myusername, password: set, port: 3306, socket: not set
Using server version 8.0.23
Warning: option 'datadir' points to nonexistent directory 'innodb_data_home_dir=/rdsdbdata/db/innodb'
Warning: MySQL variable 'datadir' points to nonexistent directory '/rdsdbdata/db/'
Warning: option 'datadir' has different values:
'innodb_data_home_dir=/rdsdbdata/db/innodb' in defaults file
'/rdsdbdata/db/' in SHOW VARIABLES
xtrabackup: Can't change dir to 'innodb_data_home_dir=/rdsdbdata/db/innodb' (OS errno 2 - No such file or directory)
xtrabackup: cannot my_setwd innodb_data_home_dir=/rdsdbdata/db/innodb
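For reference, the two settings written as separate options would look roughly like this (paths taken from the warnings above; a sketch, not a command I have verified):
xtrabackup --host='awsserv-staging.fbyvwz.eu-east-2.rds.amazonaws.com' --port=3306 \
  --user='myusername' --password='myPassword' --backup --databases=db-purchases \
  --target-dir=/Users/myuserdir/percona-xtrabackup --no-lock \
  --datadir=/rdsdbdata/db/ --innodb-data-home-dir=/rdsdbdata/db/innodb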
I also changed the data home directory to /rdsdbdata/db, but still no luck. Could someone please help?

AWS Replication agent problem when launching

I am trying to install the AWS Replication Agent on a CentOS 8.3 machine, and it always returns an error during the agent installation (python3 aws-replication-installer-init.py ...).
The output of the process shows:
The installation of the AWS Replication Agent has started.
Identifying volumes for replication.
Identified volume for replication: /dev/sdb of size 7 GiB
Identified volume for replication: /dev/sda of size 11 GiB
All volumes for replication were successfully identified.
Downloading the AWS Replication Agent onto the source server... Finished.
Installing the AWS Replication Agent onto the source server...
Error: Failed Installing the AWS Replication Agent
Installation failed.
If I check aws_replication_agent_installer.log, I can see messages like:
make -C /lib/modules/4.18.0-348.2.1.el8_5.x86_64/build M=/tmp/tmp8mdbz3st/AgentDriver modules
.....................
retcode: 0
Build essentials returned with code None
--- Building software
running: 'which zypper'
retcode: 256
running: 'make'
retcode: 0
running: 'chmod 0770 ./aws-replication-driver-commander'
retcode: 0
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 0.
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
running: '/sbin/insmod ./aws-replication-driver.ko'
retcode: 256
running: '/sbin/rmmod aws-replication-driver'
retcode: 256
Cannot insert module. Try 1.
............
Cannot insert module. Try 9.
Installation returned with code 2
Installation failed due to unspecified error:
stderr: sh: /var/lib/aws-replication-agent/stopAgent.sh: No such file or directory
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no apt-get in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
which: no zypper in (/sbin:/bin:/usr/sbin:/usr/bin:/sbin:/usr/sbin)
rmmod: ERROR: Module aws_replication_driver is not currently loaded
insmod: ERROR: could not insert module ./aws-replication-driver.ko: Required key not available
rmmod: ERROR: Module aws_replication_driver is not currently loaded
Any idea what is causing this error?
Running the command:
mokutil --disable-validation
will allow unsigned kernel modules to be loaded. (The next boot will ask you to confirm the change by entering the password you set when running mokutil.)
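As a sketch of the full sequence (assuming Secure Boot is what is blocking the unsigned aws-replication-driver module, which is what "Required key not available" usually means):
mokutil --sb-state                 # check whether Secure Boot is enabled
sudo mokutil --disable-validation  # you will be prompted to set a temporary password
sudo reboot                        # confirm the change in the MOK manager screen with that password
# then re-run: python3 aws-replication-installer-init.py ...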

How to deal with Conan server connection refused?

I have a Docker container that runs a Conan server. I am trying to upload some Conan packages from another Docker container to the Conan server. After entering my username and password I get the following error:
Error uploading file: conanmanifest.txt,
'HTTPConnectionPool(host='localhost', port=9300): Max retries exceeded with url: /v1/files/hello/0.1/demo/testing/0/export/conanmanifest.txt?signature=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZXNvdXJjZV9wYXRoIjoiaGVsbG8vMC4xL2RlbW8vdGVzdGluZy8wL2V4cG9ydC9jb25hbm1hbmlmZXN0LnR4dCIsInVzZXJuYW1lIjoiZGVtbyIsImZpbGVzaXplIjo1OCwiZXhwIjoxNTg1NDQyMjYyfQ.7sHncjZ7J8gV5HENMqCIwLe7b483QfrGJ2PVyolvjC4
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f99c84d2e50>:
Failed to establish a new connection: [Errno 111] Connection refused'))'
ERROR: hello/0.1#demo/testing: Upload recipe to 'my_server' failed: Execute upload again to retry upload the failed files: conanfile.py, conanmanifest.txt. [Remote: my_server]
ERROR: Errors uploading some packages
I ran the following command:
docker run -t -p 9300:9999 --name mycont my_conan
and then edited the server.conf file accordingly:
port:9300
public_port:9999
host_name: containerIP
After that I expect to be able to reach it with curl http://localhost:9999, but I get:
Failed to connect to localhost port 9999: Connection refused
PS: From my host to the server, it works perfectly fine. But the error appears when I want to upload from container to container.
I solved it by copying a custom config file into the Docker container; otherwise I could not override the default config file.
1. Write a custom config file.
2. COPY or ADD it to the image (in the Dockerfile).
3. mv custom.conf /path/to/directory/ (from the entrypoint).
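A minimal sketch of what that looks like (the file names, the Conan server home path /root/.conan_server, and the conan_server launch command are assumptions; adjust them to your image):
# Dockerfile (sketch)
COPY custom.conf /tmp/custom.conf
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# entrypoint.sh (sketch)
#!/bin/sh
# replace the default server.conf before the server starts
mkdir -p /root/.conan_server
mv /tmp/custom.conf /root/.conan_server/server.conf
exec conan_server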

AWS Elastic Beanstalk hook failed: unable to copy file to C:/Windows/Fonts

I have configured an Elastic Beanstalk hook to download a file from an S3 bucket to a Windows Elastic Beanstalk instance.
The file downloads successfully to the Administrator user's Desktop on the instance, but I am unable to copy it to the C:/Windows/Fonts directory.
Below is the .config file:
sources:
  "C:/Users/Administrator/Desktop": https://test.s3-ap-southeast-1.amazonaws.com/font/ARIALUNI.zip
commands:
  copyfile:
    command: copy C:/Users/Administrator/Desktop/ARIALUNI.TTF C:/Windows/Fonts
It gives the following error in Elastic Beanstalk:
Error occurred during build: Command copyfile failed
[Infra-WriteRuntimeConfig, Infra-EmbeddedPreBuild, Hook-PreAppDeploy,
Infra-EmbeddedPostBuild, Hook-EnactAppDeploy, Hook-PostAppDeploy]
Command failed on instance. Return code: 1 Output: null.
I have also tried hooking the file like this, but that doesn't work either:
sources:
  "c:/myproject/myapp": https://test.s3-ap-southeast-1.amazonaws.com/font/ARIALUNI.zip
That attempt gives the following error:
Error occurred during build: [Errno 22] invalid mode ('wb') or
filename: u'c:\Windows\Fonts\ARIALUNI.TTF'
I updated the config file with the code below and it works.
Reference URL: https://richardspowershellblog.wordpress.com/2008/03/20/special-folders/
sources:
  "c:/windows/temp/fonts": https://test.s3-ap-southeast-1.amazonaws.com/font/ARIALUNI.zip
files:
  "C:\\scripts\\install_font.ps1":
    content: |
      # Destination: the Fonts special folder (shell namespace 0x14)
      $Destination = (New-Object -ComObject Shell.Application).Namespace(0x14)
      # Font location
      $Font = "C:\Windows\Temp\Fonts\ARIALUNI.TTF"
      # Install the font
      $Destination.CopyHere($Font,0x10)
commands:
  install_font:
    command: powershell.exe -ExecutionPolicy Bypass -Command "C:\\scripts\\install_font.ps1"
    ignoreErrors: false
    waitAfterCompletion: 5

Vagrant Provision is not working (tried several times already)

I'm having issues running the provisioners on the LAMP box.
Vagrant version: 1.3.0
I created a VM and ran vagrant up. I get a timeout error, but the VM seems to be up and running (vagrant status shows the VM's status).
When I tried running vagrant provision I received the following error:
$ vagrant provision
[default] Running provisioner: shell...
DL is deprecated, please use Fiddle
[default] Running: C:/Users/vjay/AppData/Local/Temp/vagrant-shell20140127-4496-wgoaz8
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
cat: /vagrant/shell/self-promotion.txt: No such file or directory
Created directory /.puphpet-stuff
[default] Running provisioner: shell...
[default] Running: C:/Users/vjay/AppData/Local/Temp/vagrant-shell20140127-4496-1mc5ktv
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
[default] Running provisioner: shell...
[default] Running: C:/Users/vjay/AppData/Local/Temp/vagrant-shell20140127-4496-1beurlg
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
Installing git
Finished installing git
Copied Puppetfile
cp: cannot stat `/vagrant/puppet/Puppetfile': No such file or directory
Installing librarian-puppet
Finished installing librarian-puppet
Running initial librarian-puppet
/usr/lib/ruby/1.8/pathname.rb:770:in `read': No such file or directory - /etc/puppet/Puppetfile (Errno::ENOENT)
from /usr/lib/ruby/1.8/pathname.rb:770:in `read'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/vendor/librarian/lib/librarian/specfile.rb:14:in `read'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/vendor/librarian/lib/librarian/action/resolve.rb:12:in `run'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/vendor/librarian/lib/librarian/cli.rb:161:in `resolve!'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/lib/librarian/puppet/cli.rb:63:in `install'
from /usr/lib/ruby/gems/1.8/gems/thor-0.18.1/lib/thor/command.rb:27:in `__send__'
from /usr/lib/ruby/gems/1.8/gems/thor-0.18.1/lib/thor/command.rb:27:in `run'
from /usr/lib/ruby/gems/1.8/gems/thor-0.18.1/lib/thor/invocation.rb:120:in `invoke_command'
from /usr/lib/ruby/gems/1.8/gems/thor-0.18.1/lib/thor.rb:363:in `dispatch'
from /usr/lib/ruby/gems/1.8/gems/thor-0.18.1/lib/thor/base.rb:439:in `start'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/vendor/librarian/lib/librarian/cli.rb:29:in `bin!'
from /usr/lib/ruby/gems/1.8/gems/librarian-puppet-0.9.10/bin/librarian-puppet:9
from /usr/bin/librarian-puppet:19:in `load'
from /usr/bin/librarian-puppet:19
Finished running initial librarian-puppet
[default] Running provisioner: puppet...
Shared folders that Puppet requires are missing on the virtual machine.
This is usually due to configuration changing after already booting the
machine. The fix is to run a `vagrant reload` so that the proper shared
folders will be prepared and mounted on the VM.
Just to add on to that, I dug through several blog posts, and one of them suggested trying the command-line git utility (because apparently there were issues around CRLF vs. LF line endings). I deleted everything and tried command-line git, but to no avail.
Can someone please help resolve this issue?
Update: Vagrantfile attached:
VAGRANTFILE_API_VERSION = "2"
BOX_NAME = "dct-lamp-local"
RELATIVE = '../..'
ROOT = '/vagrant'
load '../common.include'
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.synced_folder "#{RELATIVE}/www/", "/var/www/html", id: "apache", :nfs => false, :mount_options => ["uid=510,gid=510"]
  config.vm.provider :virtualbox do |virtualbox|
    virtualbox.customize ["modifyvm", :id, "--name", BOX_NAME]
    virtualbox.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    virtualbox.customize ["modifyvm", :id, "--memory", "1024"]
    virtualbox.customize ["setextradata", :id, "--VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"]
  end
  config.vm.box = BOX_NAME
  config.vm.hostname = "#{BOX_NAME}.local"
end
It looks like you should change the ROOT line in the Vagrantfile to point to the actual location of the Vagrantfile you got from PuPHPet.
The first error indicates that it is looking for a script in the shell directory under /vagrant, and ROOT is set to /vagrant in your Vagrantfile:
/bin/bash: /vagrant/shell/os-detect.sh: No such file or directory
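If common.include uses ROOT as the host-side source of the /vagrant synced folder, the change would be along these lines (a sketch only; '../../puphpet' is a guess at where the PuPHPet-generated directory containing shell/ and puppet/ lives):
RELATIVE = '../..'
ROOT = "#{RELATIVE}/puphpet"  # point at the PuPHPet folder instead of '/vagrant'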