I'm trying to use Packer to build images from iso on a remote VMware cluster, and there are security concerns with allowing direct access to the host. What are the minimal permissions required for an account on the esxi host to successfully complete the build?
The user needs to be able to run the following commands:
vmkfstools
vim-cmd
test
sh
ls
rm
esxcli
stat
mkdir
shaXsum
md5sum
For vim-cmd it must be allowed to run:
vmsvc/power.getstate
vmsvc/reload
vmsvc/power.on
vmsvc/power.off
solo/registervm
vmsvc/unregister
vmsvc/destroy
vmsvc/tools.install
And for esxcli:
network ip connection list
network vm list
network vm port list
system version get
system settings advanced list -o /Net/GuestIPHack
If security is a high concern, I would recommend looking into running a dedicated ESXi host for Packer builds, or using nested virtualization to run ESXi on top of vSphere purely as a build host.
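With such a restricted account in place, it would be referenced in the vmware-iso builder's remote_* settings. A sketch, where the host, username, datastore, and iso fields are placeholders you would replace with your own:

```json
{
  "builders": [
    {
      "type": "vmware-iso",
      "remote_type": "esxi",
      "remote_host": "esxi01.example.com",
      "remote_username": "packer-build",
      "remote_password": "{{user `esxi_password`}}",
      "remote_datastore": "datastore1",
      "iso_url": "./install.iso",
      "iso_checksum": "sha256:..."
    }
  ]
}
```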
I want to launch an AI Platform Notebook instance with a custom container so I can add some dependencies (e.g., Python 3.7). However, when I run a custom container, I am unable to mount my data from Cloud Filestore.
Per the docs, I created a custom container using this deep learning image:
gcr.io/deeplearning-platform-release/tf-cpu
I didn't make my own container yet and have added zero customizations; I just plugged gcr.io/deeplearning-platform-release/tf-cpu into the console for my custom instance.
When I try to mount my Cloud Filestore, I get the following errors:
root@c7a60444b0fc:/# mount <IP_ADDRESS>:/streams cfs
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
root@c7a60444b0fc:/# mount -o nolock <IP_ADDRESS>:/streams cfs
mount.nfs: Operation not permitted
Now, when I launch a TensorFlow 1.15 notebook from the console (no customizations), the mount works fine and the environment is different from what I get with the deeplearning image. In particular, the deeplearning image launches as the root user whereas the TF 1.15 instance launches as the jupyter user.
So what image is the GCP AI Notebook actually using? What additional customization does the deeplearning image need to be able to mount a Cloud Filestore?
AI Platform Notebooks environments are provided by container images that you select when creating the instance. This page lists the available container image types.
The problem that you are facing is a known issue and it happens when rpc-statd.service is not initialized. I have solved it following these steps:
Created a Filestore instance
Created an AI Platform Notebooks instance with a Python environment in the same VPC as the Filestore instance
Entered root mode, installed nfs-common and enabled rpc-statd.service:
sudo bash
sudo apt install nfs-common
systemctl enable rpc-statd.service
Using the above commands and following these steps, I could properly mount and access the Filestore NFS share.
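With rpc-statd enabled, the share mounts normally; to survive reboots you could also persist it in /etc/fstab. A sketch, where the /mnt/filestore mount point is an assumption:

```
# one-off mount:
#   sudo mkdir -p /mnt/filestore
#   sudo mount <IP_ADDRESS>:/streams /mnt/filestore
# /etc/fstab entry to remount at boot:
<IP_ADDRESS>:/streams  /mnt/filestore  nfs  defaults  0  0
```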
In case that you want to further customize a custom container, you should take into account that in order to get access to Cloud Filestore you will have to use a VPN to connect to the same network as your Filestore instance as suggested in this SO post, otherwise you will find a ‘Connection timed out’ error when calling Filestore’s instance internal IP. You could use Cloud VPN to do so.
My application uses the WebSocket protocol, and I want to deploy it to AWS using AWS Elastic Beanstalk. But the pre-built Windows Server configuration does not include this protocol by default.
Manually, I can enable it by checking the corresponding item in Server Manager via the Add Roles and Features Wizard (Web Server (IIS) -> Web Server -> Application Development -> WebSocket Protocol).
So if I want my app to work, I need to connect via RDP and manually enable this option. But that is a terrible approach.
I think this task can be accomplished through deployment settings (.ebextensions)? But how can I do it?
I would be very grateful for the answer!
Add .ebextensions to your EB environment and customize your server software
You may want to customize and configure the software that your application depends on. This could include dependencies required by the application, for example additional packages or services that need to be run.
For your needs use commands option:
Use the commands key to execute commands on the EC2 instance. The
commands are processed in alphabetical order by name, and they run
before the application and web server are set up and the application
version file is extracted.
The specified commands run as the Administrator user.
For example this command will install WebSocket protocol feature:
%SystemRoot%\system32\dism.exe /online /enable-feature /featurename:IIS-WebSockets
in .ebextensions config it may look like:
commands:
  01_install_websockets_feature:
    command: "%SystemRoot%\\System32\\dism.exe /online /enable-feature /featurename:IIS-WebSockets"
    ignoreErrors: true
I'm trying to set up a web development environment on Amazon Workspaces running Amazon Linux AMI, but I didn't find a way to install Vagrant on the machine. I would like to have a virtual webdev machine for various practical reasons, but it seems that I can't run vagrant as AWS is already virtualised.
Is that correct, or is there a way to install and run vagrant/virtualbox containers on AWS Workspace?
AWS WorkSpaces only offers a limited number of packages within its repo manager, so you won't find Vagrant there. But you can manually install the package using the CentOS download from HashiCorp's website. For example, this worked for me inside my Linux AMI WorkSpace:
wget https://releases.hashicorp.com/vagrant/2.1.2/vagrant_2.1.2_x86_64.rpm
yum install vagrant_2.1.2_x86_64.rpm
Now a WorkSpace is essentially a virtualized environment, so it's unlikely you will be able to run a VM inside it - see this.
However, Vagrant offers a number of providers other than the default, including aws, which will allow you to spin up a Vagrant box on an EC2 instance rather than locally. You can install the plugin as follows:
vagrant plugin install vagrant-aws
And follow the configuration steps here
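For reference, a minimal Vagrantfile for the aws provider could look like the sketch below; the AMI ID, key pair name, security group, and SSH user are placeholders you would replace with your own:

```ruby
Vagrant.configure("2") do |config|
  # the aws provider uses a dummy box; the real settings live in aws.*
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.ami               = "ami-xxxxxxxx"       # placeholder AMI
    aws.keypair_name      = "my-keypair"         # placeholder key pair
    aws.security_groups   = ["vagrant-ssh"]      # must allow SSH from your IP

    override.ssh.username         = "ec2-user"   # depends on the AMI
    override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
  end
end
```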
I'm trying to use Vagrant to deploy to AWS using the vagrant-aws plugin.
This means I need a box, then I need to add a versioned jar (e.g. myApp-1.2.3-SNAPSHOT.jar) and some statically named files. This also needs to work on both Windows and Linux machines.
I can use config.vm.synced_folder locally with a setup.sh to move the files I need using wildcards (e.g. cp myApp-*.jar) but the plugin only supports rsync, so only Linux.
TLDR; Is there a way to copy files using wildcards in Vagrant
This means I need a box
Yes and no. Vagrant heavily relies on the box concept, but in the context of the AWS provider the box is a dummy box; the plugin looks at the aws.* variables to connect to your account.
Vagrant will spin up an EC2 instance and connect to it; you need to make sure the instance is linked with a security group that allows the connection and opens the port to your IP (at minimum).
If you are running a provisioner, please note the script is run from the EC2 instance, not from your local machine.
What I suggest is the following:
- copy the jar files that are necessary to S3, or somewhere else the EC2 instance can easily access them
- run the provisioner to fetch the files from this source (S3)
- let it go.
If you have a quick turnaround of files in development mode, you can push to a git repo from which the EC2 instance can pull the files and deploy the jar directly.
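As a sketch of the S3-based flow, a shell provisioner could fetch the jars on the instance. Note that `aws s3 cp` has no native globbing, so `--recursive` with include/exclude filters emulates the wildcard; the bucket name and paths below are assumptions:

```ruby
config.vm.provision "shell", inline: <<-SCRIPT
  # --exclude "*" --include "myApp-*.jar" limits the recursive copy
  # to the versioned jars, emulating `cp myApp-*.jar`
  aws s3 cp s3://my-build-bucket/builds/ /opt/app/ \
    --recursive --exclude "*" --include "myApp-*.jar"
SCRIPT
```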
I have configured my .ebextensions folder to download and install a windows service on the leader ec2 instance.
The problem is that every time I want to update to a new version of the web application (not the Windows service), those commands execute again and try to re-install the service.
On the other side, every time I want to update only the Windows service, I have to do the work manually through SSH or RDP, or re-deploy the whole application, which triggers the .ebextensions commands.
Is there a more elegant workflow for this that I am missing?
You are encountering Elastic Beanstalk's weakest link: you are hosting two different services on the same EB instance, which is unsupported by EB (which is lame, I agree).
I resolved the "setup only once" need by adding a test to the setup extension config file. In my case it's a Linux box, but you can do something similar:
commands:
  10_setup_win_service:
    test: test ! -f /opt/elasticbeanstalk/.post-provisioning-complete
    command: <...>
Now to complete this hack I have a file called .ebextensions/99_finalize_setup.config:
commands:
  99_write_post_provisioning_complete_file:
    command: touch /opt/elasticbeanstalk/.post-provisioning-complete
This approach ensures the Windows service is installed only once.
Now for the maintenance of the Windows service itself, you cannot use the EB toolset for that. Your understanding of the options is correct - either use SSH to automate the work, or do it manually by logging into the server.
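Stripped of the EB machinery, the test/command pair above is just a guarded run-once pattern, which can be sketched in plain shell (marker path moved to /tmp for illustration):

```shell
MARKER=/tmp/.post-provisioning-complete  # stand-in for the /opt/elasticbeanstalk marker

if test ! -f "$MARKER"; then             # the ebextensions `test:` key
  echo "running one-time setup"          # the guarded `command:` would run here
  touch "$MARKER"                        # what 99_finalize_setup does
else
  echo "setup already complete, skipping"
fi
```

EB skips a command whenever its `test:` exits non-zero, which is exactly what the `if` branch models here.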