vagrant, puppet, aws but without vagrant on aws - amazon-web-services

So I have been googling for a while now, and either I have completed the internet or I cannot articulate my search query well enough to find the answer, so I thought I would come here.
My team and I want to use Vagrant on our local machines, which is fine. We want to use Puppet for our configs. However, we don't want Vagrant inside our AWS/DigitalOcean/whatever provider's instances. How do I get the Puppet config to automatically build the instance for us?
I am a little stuck. I think I need a puppet master, but how does the AWS instance, for example, get built based on the Puppet config, and how does Vagrant use the same config?
Thanks

That's the default behavior if you install Vagrant on your local workstation and configure an instance for AWS. Vagrant will connect to the instance over SSH and install the client software (in this case Puppet) to configure the instance.
In short: Vagrant will not install itself on any AWS instance.
Here's a link to the Vagrant-AWS Plugin:
Vagrant-AWS
Further information:
Vagrant uses providers to create VMs. The normal workflow is to use, for example, the VirtualBox provider (which is built into Vagrant) to create local VMs. You can set attributes for the specific provider in the Vagrantfile. In this case you need the Vagrant AWS provider, which is a plugin (installed with the vagrant plugin install <pluginname> command). With it you can create VMs remotely. Just as with the VirtualBox provider, Vagrant will not install itself on the created VM (remote or not, it doesn't matter).
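For illustration, a minimal Vagrantfile using the vagrant-aws plugin might look like the following sketch. The AMI ID, key pair name, and key path are placeholders, not values from the question:

```ruby
# Sketch of a Vagrantfile targeting the vagrant-aws provider.
# All concrete values below are placeholders; adjust for your account/region.
Vagrant.configure("2") do |config|
  # vagrant-aws uses a placeholder "dummy" box, since the real
  # machine image comes from the AMI configured below
  config.vm.box = "dummy"

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.ami               = "ami-12345678"        # placeholder AMI ID
    aws.keypair_name      = "my-keypair"          # placeholder key pair

    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
  end

  # The same Puppet provisioner block works for the local VirtualBox
  # provider and for AWS, so one config serves both environments
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file  = "site.pp"
  end
end
```

Running vagrant up --provider=aws then creates the instance remotely and provisions it over SSH; nothing Vagrant-related is installed on the instance itself.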

Vagrant uses masterless provisioning (puppet apply): the provisioning script runs inside your Vagrant box.
To provision machines in the cloud you need a puppet master server and puppet clients.
To bootstrap clients automatically, you can add a shell script to your server's 'user-data' (supported by DigitalOcean and AWS EC2).
This script is responsible for installing Puppet and connecting to the master server.
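As a sketch, a user-data bootstrap script along those lines might look like this. The master hostname is a placeholder, and the package commands assume a Debian/Ubuntu image:

```shell
#!/bin/bash
# Hypothetical EC2/DigitalOcean user-data script: install the puppet
# agent and point it at a master server (placeholder hostname below)
apt-get update
apt-get install -y puppet

# Tell the agent where the puppet master lives
cat >> /etc/puppet/puppet.conf <<'EOF'
[agent]
server = puppet.example.com
EOF

# Run the agent once; it will request a certificate from the master,
# which must be signed there before the node is fully managed
puppet agent --test
```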

Related

how to access self managed docker registry hosted on AWS EC2 from windows machine?

I want to set up a self-managed private Docker registry on an EC2 instance without using the AWS ECR/ECS services, i.e. using the docker registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with "Docker for Windows" installed.
Please note:
The EC2 instance is hosted in a private subnet.
I have already created an AWS ALB with an OpenSSL self-signed certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed docker registry using below command:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think pre-routing of 443 to 8080 is in place, because when I hit https:///v2/_catalog in the browser I get output in JSON format.
Currently the catalog is empty because no image has been pushed to the registry.
I expect this docker-registry hosted on AWS-EC2 instance to be accessible remotely i.e. from windows remote machine as well.
Any references/suggestions/steps to achieve my task would be really helpful.
Hoping for a quick resolution.
Thanks and Regards,
Rohan Shetty
I have resolved the issue by following the steps below:
added the --insecure-registry parameter in the docker.service file
created a new directory "certs.d/my-domain-name" at the path /etc/docker
(Please note: here the domain name is the one at which the docker registry is to be accessed)
placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory
restarted Docker
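For reference, those steps roughly correspond to the following commands on each Docker host (registry.example.com and domain.crt are placeholders; Docker looks in /etc/docker/certs.d/<registry-host>/ for a CA certificate to trust for that registry):

```shell
# Trust the self-signed certificate for this one registry
# (registry.example.com and domain.crt are placeholders)
sudo mkdir -p /etc/docker/certs.d/registry.example.com
sudo cp domain.crt /etc/docker/certs.d/registry.example.com/ca.crt

# Alternatively (less secure), list the registry in /etc/docker/daemon.json
# instead of passing --insecure-registry in docker.service:
#   { "insecure-registries": ["registry.example.com"] }

# Restart the daemon so it picks up the change
sudo systemctl restart docker
```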

How can I create Docker certificates when running on EC2 instances?

I'm using Terraform to spin up an EC2 instance. Then, I run an Ansible playbook to configure the instance and run docker-compose.
I noticed that Ansible's docker_service requires key_path and cert_path to be set when running against a remote Docker daemon. I'd like to use a remote connection, as the Docker images required have already been built locally as part of my CI/CD process.
How can I generate the cert and key for an EC2 instance?
I found this question: How do I deploy my docker-compose project using Terraform?, but it uses remote-exec from Terraform and assumes the project source will exist on the remote machine.
There are a few questions which provide reasonable workflows, but they all involve using docker-machine which is not quite what I want.
I believe I found it:
https://docs.docker.com/engine/security/https/
This document describes how to create the CA, server key-cert, and client key-cert.
And, the following document describes how to make a cert using Ansible:
https://docs.ansible.com/ansible/2.4/openssl_certificate_module.html
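The openssl commands from that Docker guide can be condensed roughly as follows. Hostnames, IPs, and validity periods are placeholders; the Docker docs additionally recommend protecting ca-key.pem with a passphrase via -aes256, omitted here for brevity:

```shell
# CA key and self-signed CA certificate
openssl genrsa -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 \
  -subj "/CN=my-ca" -out ca.pem

# Server key and CSR; the CN must match the host clients connect to
openssl genrsa -out server-key.pem 4096
openssl req -new -sha256 -key server-key.pem \
  -subj "/CN=ec2-host.example.com" -out server.csr

# Sign the server cert, allowing connections by DNS name and IP
echo "subjectAltName = DNS:ec2-host.example.com,IP:10.0.0.5" > extfile.cnf
echo "extendedKeyUsage = serverAuth" >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem \
  -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf

# Client key and cert, which is what Ansible's key_path/cert_path point at
openssl genrsa -out key.pem 4096
openssl req -new -key key.pem -subj "/CN=client" -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem \
  -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
```

The Docker daemon on the EC2 instance then gets ca.pem, server-cert.pem, and server-key.pem, while key.pem and cert.pem (plus ca.pem) stay with the machine running Ansible.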

How to run Vagrant/Virtualbox on EC2

I wrote a unit test that verifies the setup of my dev environment by using Vagrant to create a VirtualBox VM and then run through all the setup steps.
I'm now trying to run this unit test as part of my normal build process on a QA server running as an EC2 instance, and it's failing because EC2 is based on Xen and VirtualBox doesn't support Xen. Trying to install the latest Oracle VirtualBox with sudo apt-get install virtualbox-5.1 fails with the error:
vboxdrv.sh: failed: Running VirtualBox in a Xen environment is not supported.
Oddly, installing the vanilla VirtualBox package from Ubuntu's standard repo succeeds, although it doesn't provide the VBoxManage tool needed by Vagrant.
What's the easiest way to get Vagrant to be able to spin up a VM from inside an EC2 instance? Presumably, I could use an EC2 provider, but spinning up an EC2 instance over the network is much much slower and more complicated than creating a local instance.

What does vagrant-aws provide and how to use it in production version?

I have used Vagrant with the default VirtualBox provider. It uses VirtualBox to simulate an OS, such as Ubuntu 14.04, on my local computer.
Vagrant has AWS provider as well. I have read the Official document, but I am confused about the following things:
Where does this provider run: on my local computer or on the AWS instance?
If it runs on my local computer and only drives the AWS instance, why does the SSH key need to be set?
If it runs on the AWS instance, is the intended AMI, such as Ubuntu 14.04, set up directly on the instance, or through simulation tools used by AWS to simulate an OS on the instance, like what VirtualBox does on the local computer?
Since Vagrant adds overhead to the system by using a simulating provider, is it a good idea to use Vagrant in production? If not, how do I put the Vagrant box onto the production server, such as an AWS instance?

Create VM without OS with Vagrant

For testing of automated OS deployment on a hardware cluster, I need Vagrant to create a few VMs without an OS installed, with just network boot enabled.
I successfully created a base box image and configured the boot order with Vagrant.
The problem is that Vagrant dies after waiting for the VM to boot (which it doesn't, because it has nothing to boot from), trying to set up SSH forwarding, shared folders, etc.
Is there any way I can tell Vagrant to just power on the machine and not try to configure or boot it?
Vagrant's idea is to manage already-installed boxes. It has some requirements for them; SSH or another form of login access is a fundamental one.
If you just want to spin up a VirtualBox VM, you can call VBoxManage etc. directly.
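Along those lines, a minimal sketch of creating a bare, network-booting VM directly with VBoxManage might look like this (the VM name, memory size, and OS type are placeholders):

```shell
# Create and register an empty VM with no disk or OS attached
VBoxManage createvm --name pxe-node --ostype Linux_64 --register

# Give it memory and a NIC, and make network (PXE) the first boot device
VBoxManage modifyvm pxe-node --memory 1024 --nic1 nat --boot1 net

# Power it on without any of Vagrant's SSH/shared-folder handshaking
VBoxManage startvm pxe-node --type headless
```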